
slice2

Monthly Archives: December 2013

HOWTO Secure iSCSI Luns Between Oracle Solaris 10 and NetApp Storage Using Bidirectional CHAP

Friday, 27 Dec 2013

Posted by Slice2 in iSCSI, NetApp, Oracle, Security, Solaris


Tags

iSCSI, NetApp, Oracle, Security, Solaris

This post demonstrates how to secure iSCSI luns between Oracle Solaris 10 and NetApp storage. Solaris calls it Bidirectional CHAP rather than Mutual CHAP. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple. Research the relationship between Solaris EFI, Solaris VTOC and lun size as well as UFS vs ZFS to make sure you choose the proper type for your environment. This was done with Solaris 10 x86. All steps except the fdisk step near the end are the same for SPARC systems.

1) You need to be running at least the Solaris 10 1/06 release. To verify, check your release file.
> cat /etc/release
Oracle Solaris 10 8/11 s10x_u10wos_17b X86

2) Check for the iSCSI packages.
> pkginfo | grep iSCSI
system    SUNWiscsir    Sun iSCSI Device Driver (root)
system    SUNWiscsiu    Sun iSCSI Management Utilities (usr)

a) For reference the iSCSI target packages are listed below. You don’t need them for this HOWTO.
SUNWiscsitgtr    Sun iSCSI Target (Root)
SUNWiscsitgtu    Sun iSCSI Target (Usr)

3) If not installed, mount the Solaris 10 DVD and install the packages. Note the SPARC path will be different: sol_10_811_sparc
If the DVD doesn’t mount automatically:
> mount -F hsfs /dev/rdsk/c0t2d0s2 /mnt
> cd /mnt/sol_10_811_x86/Solaris_10/Product
If it does:
> cd /cdrom/sol_10_811_x86/Solaris_10/Product
> /usr/sbin/pkgadd -d . SUNWiscsir
> /usr/sbin/pkgadd -d . SUNWiscsiu
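Steps 2 and 3 can be folded into a small guard script that installs each package only if it is missing. A sketch, assuming you have already cd'd into the Product directory of the mounted media:

```shell
# Sketch: install the Solaris iSCSI initiator packages only if absent.
# Assumes the current directory is .../Solaris_10/Product on the DVD.
for pkg in SUNWiscsir SUNWiscsiu; do
  if pkginfo -q "$pkg"; then
    echo "$pkg already installed"
  else
    /usr/sbin/pkgadd -d . "$pkg"
  fi
done
```

pkginfo -q returns success silently when the package is present, so the loop is safe to re-run.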

4) Make sure the iSCSI service is running on your Solaris host.
> svcs | grep iscsi
online  6:41:58 svc:/network/iscsi/initiator:default

If not, start it.
> svcadm enable svc:/network/iscsi/initiator:default

5) Get your local iSCSI Initiator Node Name or iqn name on the Solaris host.
> iscsiadm list initiator-node | grep iqn
Initiator node name: iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9

6) Make sure the iscsi service is running on the NetApp.
netapp> iscsi status
If not, start it (You need a license for iscsi. Check with the license command.)
netapp> iscsi start

7) Create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

8) Create a lun on the volume.
netapp> lun create -s 5g -t solaris_efi /vol/MCHAPVOL/SOL10_iSCSI_MCHAP_01

9) Create an igroup and add the Solaris iscsi node name or iqn from step 5 above to it.
netapp> igroup create -i -t solaris ISCSI_MCHAP_SOL10
netapp> igroup add ISCSI_MCHAP_SOL10 iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9
netapp> igroup show

ISCSI_MCHAP_SOL10 (iSCSI) (ostype: solaris):
iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 (not logged in)

10) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/SOL10_iSCSI_MCHAP_01 ISCSI_MCHAP_SOL10 01

Note: Solaris EFI is for larger than 2 TB luns and Solaris VTOC for smaller disks. This lun is small just to demonstrate the configuration.

11) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939

12) On the Solaris host, configure the target (NetApp controller) to be statically discovered.
> iscsiadm modify discovery --static enable
> iscsiadm modify discovery --sendtargets enable
> iscsiadm add discovery-address 10.10.10.11:3260
> iscsiadm add static-config iqn.1992-08.com.netapp:sn.84167939,10.10.10.11:3260
> iscsiadm list static-config
Static Configuration Target: iqn.1992-08.com.netapp:sn.84167939,10.10.10.11:3260
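The discovery commands in step 12 can be pulled together with the target details in variables, so the same snippet is reusable for another controller. A sketch; the IQN and portal below are the examples from this post, so substitute your own:

```shell
# Sketch: static discovery setup with the target details parameterized.
# TARGET_IQN and PORTAL are the example values from this post.
TARGET_IQN="iqn.1992-08.com.netapp:sn.84167939"
PORTAL="10.10.10.11:3260"

iscsiadm modify discovery --static enable
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address "$PORTAL"
iscsiadm add static-config "${TARGET_IQN},${PORTAL}"
iscsiadm list static-config
```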

13) Check your discovery methods. Make sure Static and Send Targets are enabled.
> iscsiadm list discovery
Discovery:
Static: enabled
Send Targets: enabled
iSNS: disabled

14) Enable Bidirectional CHAP on the Solaris host for the target NetApp controller.
> iscsiadm modify target-param --authentication CHAP iqn.1992-08.com.netapp:sn.84167939
> iscsiadm modify target-param -B enable iqn.1992-08.com.netapp:sn.84167939

15) Set the target device secret key that identifies the target NetApp controller. Note that Solaris supports CHAP secrets of a minimum of 12 and a maximum of 16 characters. You can make up your own secrets.
> iscsiadm modify target-param --CHAP-secret iqn.1992-08.com.netapp:sn.84167939
Enter secret: NETAPPBICHAP
Re-enter secret: NETAPPBICHAP

16) Set the Solaris host initiator name and CHAP secret. You can make up your own secrets.
> iscsiadm modify initiator-node --authentication CHAP
> iscsiadm modify initiator-node --CHAP-name iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9
> iscsiadm modify initiator-node --CHAP-secret
Enter secret: BIDIRCHAPSOL10
Re-enter secret: BIDIRCHAPSOL10

17) Verify your target parameters. Make sure Bidirectional Authentication is enabled and Authentication type is CHAP.
> iscsiadm list target-param -v iqn.1992-08.com.netapp:sn.84167939
Target: iqn.1992-08.com.netapp:sn.84167939
Alias: –
Bi-directional Authentication: enabled
Authentication Type: CHAP
CHAP Name: iqn.1992-08.com.netapp:sn.84167939
Login Parameters (Default/Configured):
Data Sequence In Order: yes/-
Data PDU In Order: yes/-
Default Time To Retain: 20/-
Default Time To Wait: 2/-
Error Recovery Level: 0/-
First Burst Length: 65536/-
Immediate Data: yes/-
Initial Ready To Transfer (R2T): yes/-
Max Burst Length: 262144/-
Max Outstanding R2T: 1/-
Max Receive Data Segment Length: 8192/-
Max Connections: 1/-
Header Digest: NONE/-
Data Digest: NONE/-
Tunable Parameters (Default/Configured):
Session Login Response Time: 60/-
Maximum Connection Retry Time: 180/-
Login Retry Time Interval: 60/-
Configured Sessions: 1
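Before moving on, it may help to pick out just the two fields that matter from that long listing. A quick grep, assuming the same target IQN as above:

```shell
# Quick check: show only the authentication fields from the target parameters.
# Expect "Bi-directional Authentication: enabled" and "Authentication Type: CHAP".
iscsiadm list target-param -v iqn.1992-08.com.netapp:sn.84167939 \
  | egrep 'Bi-directional Authentication|Authentication Type'
```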

18) Set the Bidirectional CHAP secrets on the NetApp controller.
netapp> iscsi security add -i iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 -s chap -p BIDIRCHAPSOL10 -n iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 -o NETAPPBICHAP -m iqn.1992-08.com.netapp:sn.84167939

a) View the iSCSI security configuration.
netapp> iscsi security show
init: iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 auth: CHAP Inbound password: **** Inbound username: iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

19) On the Solaris host, reconfigure the /dev namespace to recognize the iSCSI disk (lun) you just connected.
> devfsadm -i iscsi
(To also clean up stale device links, use devfsadm -Cv -i iscsi instead.)

20) Verify CHAP configuration on the server. Restart the server and you should see the iSCSI session on the NetApp console.
> reboot

a) As the server boots, on the NetApp console you should see the following message:
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 at IP addr 10.10.10.188

21) Log in to the server and format the disk. Note: the fdisk command below can be skipped on SPARC systems.
> format
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c2t2d0 <DEFAULT cyl 2557 alt 2 hd 128 sec 32>
/iscsi/disk@0000iqn.1992-08.com.netapp%3Asn.8416793903E8,1
Specify disk (enter its number): 1
selecting c2t2d0
[disk formatted]

FORMAT MENU:
disk       - select a disk
type       - select (define) a disk type
partition  - select (define) a partition table
current    - describe the current disk
format     - format and analyze the disk
fdisk      - run the fdisk program
repair     - repair a defective sector
label      - write label to the disk
analyze    - surface analysis
defect     - defect list management
backup     - search for backup labels
verify     - read and display labels
save       - save new disk/partition definitions
inquiry    - show vendor, product and revision
volname    - set 8-character volume name
!<cmd>     - execute <cmd>, then return
quit

format> fdisk   (Note: this command is only necessary on x86 systems. If you are on SPARC, skip to the next step.)
No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y

22) Partition the disk:

format> p

PARTITION MENU:
0      - change `0' partition
1      - change `1' partition
2      - change `2' partition
3      - change `3' partition
4      - change `4' partition
5      - change `5' partition
6      - change `6' partition
7      - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p

Current partition table (original):
Total disk cylinders available: 2556 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0               (0/0/0)           0
  1 unassigned    wm       0               0               (0/0/0)           0
  2     backup    wu       0 - 2555        4.99GB     (2556/0/0)     10469376
  3 unassigned    wm       0               0               (0/0/0)           0
  4 unassigned    wm       0               0               (0/0/0)           0
  5 unassigned    wm       0               0               (0/0/0)           0
  6 unassigned    wm       0               0               (0/0/0)           0
  7 unassigned    wm       0               0               (0/0/0)           0
  8       boot    wu       0 -    0        2.00MB     (1/0/0)            4096
  9 unassigned    wm       0               0               (0/0/0)           0

partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]: <press enter>
Enter partition permission flags[wm]: <press enter>
Enter new starting cyl[0]: <press enter>
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 4.99gb

partition> l     (This is a lowercase "L", not the numeral 1. This step labels the disk.)
Ready to label disk, continue? y

partition> q

format> q

23) Create the file system. You can choose either UFS or ZFS. Both options are shown below.

a) If you will use UFS:
> newfs -Tv /dev/rdsk/c2t2d0s0
newfs: construct a new file system /dev/rdsk/c2t2d0s0: (y/n)? y
pfexec mkfs -F ufs /dev/rdsk/c2t2d0s0 10465280 32 128 8192 8192 -1 1 250 1048576 t 0 -1 8 128 y
/dev/rdsk/c2t2d0s0: 10465280 sectors in 2555 cylinders of 128 tracks, 32 sectors
5110.0MB in 18 cyl groups (149 c/g, 298.00MB/g, 320 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 610368, 1220704, 1831040, 2441376, 3051712, 3662048, 4272384, 4882720,
5493056, 6103392, 6713728, 7324064, 7934400, 8544736, 9155072, 9765408, 10375744

> fsck /dev/rdsk/c2t2d0s0
> mkdir /old_ufs_filesystem
> mount /dev/dsk/c2t2d0s0 /old_ufs_filesystem
> vi /etc/vfstab
Add the line below to the bottom of the file so the filesystem mounts at boot, then save and exit (:wq):
/dev/dsk/c2t2d0s0 /dev/rdsk/c2t2d0s0 /old_ufs_filesystem  ufs  2 yes -

b) Check the new mount.
> df -h | grep old_ufs_filesystem
/dev/dsk/c2t2d0s0  4.9G 5.0M 4.9G 1% /old_ufs_filesystem

24) If you will use ZFS:
a) Create a pool.
> zpool create -f netappluns c2t2d0

b) Create the filesystem.
> zfs create netappluns/fs

c) List the new filesystem.
> zfs list -r netappluns
NAME            USED  AVAIL  REFER  MOUNTPOINT
netappluns      131K  4.89G    31K  /netappluns
netappluns/fs    31K  4.89G    31K  /netappluns/fs

Use the legacy display method.
> df -h | grep netappluns
netappluns             4.9G    32K   4.9G     1%    /netappluns
netappluns/fs          4.9G    31K   4.9G     1%    /netappluns/fs
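By default the filesystem mounts under /netappluns; if you prefer a different location, ZFS can move it with a single property change. A sketch; the path /iscsi/data is only an example:

```shell
# Optional: relocate the ZFS filesystem. /iscsi/data is an example path.
zfs set mountpoint=/iscsi/data netappluns/fs
zfs get mountpoint netappluns/fs   # confirm the new mountpoint
```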

25) You are done. Hope this helps.

HOWTO Change the Default Browser in OnCommand System Manager v3

Thursday, 26 Dec 2013

Posted by Slice2 in NetApp


Tags

NetApp

You may not realize it, but you can change the browser used by System Manager v3. It defaults to using Internet Explorer. Many companies lock IE down so hard with Group Policy that it breaks System Manager. To change it, perform the following steps.

1) Launch System Manager. Once up, do not login to any of your controllers.
2) In the upper left, click Tools > Options.

[screenshot: ocsm-01]
3) In the Browser Path section, enter the path to the browser you prefer and click Save and Close. Launch System Manager again and it will use the new browser.

[screenshot: ocsm-02]
4) The browsers I’ve tested are as follows on Windows 7 x64.

Opera 18
C:\Program Files (x86)\Opera\launcher.exe
http://www.opera.com/computer

Firefox 26 (OCSM 3.1RC1 is buggy with this version of Firefox)
C:\Program Files (x86)\Mozilla Firefox\firefox.exe (for x64)
C:\Program Files\Mozilla Firefox\firefox.exe (for x32)
http://www.mozilla.org/en-US/

Chrome 31
C:\Users\cdm\AppData\Local\Google\Chrome\Application\chrome.exe
https://www.google.com/intl/en/chrome/browser/


Using HFS Standalone Web Server to Upgrade NetApp Data ONTAP and SP Firmware

Monday, 23 Dec 2013

Posted by Slice2 in Linux, NetApp, Windows


Tags

Linux, NetApp, Windows

For a while I have been using XAMPP as my go-to quick and easy web server to temporarily serve files like ONTAP or SP firmware upgrades. It's easy to use and always works. Then there was Z-WAMP, which was great because it was zero-install. Again, easy to use and always works. The problem was they also carried the extra baggage of PHP, MySQL, etc. All I needed was a simple HTTP instance. And then I found HFS. It stands for HTTP File Server. It's simple, incredibly small, very portable, very easy to use and is a standalone executable. No installation. Just double-click hfs.exe and you are ready to go.

HFS also works perfectly on Linux using Wine 1.4 and later. On Linux, when you run > wine hfs.exe you will be prompted to install the Wine Gecko package; just click Cancel to continue.

From a NetApp perspective, it's perfect for updating Data ONTAP and SP firmware over the network. Especially for shops that don't run CIFS or NFS, or where your Security overlords won't allow you to NFS export and mount the root volume. I run HFS from my OnCommand Unified Manager server where I have all of my NetApp tools and utilities installed.

Download HFS here:
http://www.rejetto.com/hfs/?f=dl

1) To start, double-click hfs.exe.
a) Select No to add it to your right-click menu (unless you really want to).
b) If you need to change the default port 80 perform this step. If not, skip it. In the upper left, click the Port: 80 button and change it to something like 8082. Click OK.
Notes:
- Depending on how your NetApp applications are deployed, port 80 will probably be taken. A simple port change avoids conflicts. Don't forget to create a firewall rule if you use a non-standard port.
- If you are running this from a laptop or server without other apps using port 80, then it's probably safe to leave on port 80.
- If you want to click the "You Are in Easy Mode" button to change it to "Expert Mode," you get additional transfer details. It's up to you.
c) Copy the downloaded version of Data ONTAP you will be upgrading to onto the server where you are running HFS.
d) In the HFS window on the upper left, under the home (/) icon, right-click and select Add files.

[screenshot: hfs01]
e) Browse to the Data ONTAP file and select Open. It will now be listed under the home root /. Note that you can also drag and drop the file into this window.

[screenshot: hfs02]
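Before moving to the controller, you can confirm from any other machine that HFS is actually serving the image. A quick check with curl, using the IP, port, and filename from this example:

```shell
# Verify the ONTAP image is reachable over HTTP.
# Adjust the IP, port, and filename to match your HFS instance.
curl -sI http://10.10.10.81:8082/814_q_image.tgz | head -1
```

An HTTP/1.1 200 OK response means the controller will be able to pull the file.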

2) On the NetApp controller, if not already done, create the software directory and then verify your version and backup kernel.
netapp> software
netapp> software list
netapp> version
netapp> version -b

3) Download and install the Data ONTAP image from your HFS instance. Note the :8082 port definition in the URL below. If you changed it to something other than the default port 80, you must change it on the command line as well. If not, the default port 80 is correct.
netapp> software update http://10.10.10.81:8082/814_q_image.tgz

software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
software: copying to 814_q_image.tgz
software: 5% file read from location.

<And that’s it. Output of the update truncated to shorten the post>

New NetApp Releases: ConfigAdvisor, ONTAP Powershell Toolkit, VASA Provider, NFS Plug-in for VAAI, Storage Replicator for VMware SRM, Linux Host Validator, OSM for OSX, SnapManager for SharePoint

Sunday, 22 Dec 2013

Posted by Slice2 in NetApp


Tags

Linux, NetApp, VMware, Windows

ConfigAdvisor v3.4
Config Advisor is a configuration validation and health check tool for NetApp systems. It can be deployed at both secure sites and non-secure sites for data collection and analysis. Config Advisor can be used to check a NetApp system or FlexPod for the correctness of hardware installation and conformance to NetApp recommended settings. It collects data and runs a series of commands on the hardware, then checks for cabling, configuration, availability and best practice issues.
http://support.netapp.com/NOW/download/tools/config_advisor/download.shtml

Data ONTAP Powershell ToolKit v3.0.1
The Data ONTAP PowerShell Toolkit is a PowerShell module containing over 1300 cmdlets enabling the storage administration of NetApp controllers via ZAPI.  Full cmdlet sets are available for both 7-mode and clustered Data ONTAP.  The Toolkit also contains several cmdlets aimed at storage administration on the Windows host, including:  creating virtual disks, resizing virtual disks, reclaiming space in virtual disks, copying files, deleting files, reclaiming space on host volumes, and much more.
http://support.netapp.com/NOW/download/tools/powershell_toolkit/download.shtml

NetApp FAS/V-Series VASA Provider v1.0.1
NetApp FAS/V-Series VASA Provider for Data ONTAP operating in 7-Mode is a software component that supports the VMware VASA (vStorage APIs for Storage Awareness) framework, first introduced in vSphere 5. It acts as an information pipeline between NetApp storage systems and vCenter Server, enabling you to monitor relevant storage system status by collecting data such as the following:
1) Storage system topology
2) LUN and volume attributes
3) Events and alarms
http://support.netapp.com/NOW/download/software/vasa_win/1.0.1/download.shtml

NetApp NFS Plug-in v1.0.20 for VMware VAAI
The plug-in runs on the ESXi host and takes advantage of enhanced storage features offered by VMware vSphere. On the NetApp storage system, the NFS vStorage feature must be enabled for the ESXi host to take advantage of VMware VAAI. For details about enabling VMware vStorage over NFS, see the Data ONTAP 8.1 File Access and Protocols Management Guide For 7-Mode and the Clustered Data ONTAP File Access and Protocols Management Guide. The plug-in performs NFS-like remote procedure calls (RPCs) to the server, using the same credentials as that of an ESXi NFS client. This means that the plug-in requires no additional credentials and has the same access rights as the ESXi NFS client.
http://support.netapp.com/NOW/download/software/nfs_plugin_vaai/1.0.20/download.shtml

NetApp FAS/V-Series Storage Replication Adapter 2.1 for VMware vCenter Site Recovery Manager
NetApp FAS/V-Series Storage Replication Adapter for Data ONTAP operating in 7-Mode is a storage vendor specific plug-in to VMware vCenter Site Recovery Manager that enables interaction between Site Recovery Manager and the storage controller. The adapter interacts with the storage controller on behalf of Site Recovery Manager to discover storage arrays and their associated datastores and RDM devices, which are connected to vSphere. The adapter manages failover and test-failover of the virtual machines associated with these storage objects.
http://support.netapp.com/NOW/download/software/sra_7mode/2.1/download.shtml

Linux Host Validator and Configurator v1.0
The Config Validator tool can be used to validate the Linux host settings in a NetApp SAN environment and change/configure them if necessary. The tool validates the settings related to the storage stack such as DM-Multipath, iSCSI settings, HBA parameters, etc. on hosts connected to NetApp storage controllers running 7-Mode Data ONTAP or clustered Data ONTAP. Unfortunately it's Red Hat-only at this time.
http://support.netapp.com/NOW/download/tools/config_validator/download.shtml

OnCommand System Manager v3.1RC1 for Mac OSX
System Manager is a graphical management interface that enables you to manage storage systems and storage objects (such as disks, volumes, and aggregates) from a web browser.
http://support.netapp.com/NOW/download/tools/ocsm/download.shtml

SnapManager for SharePoint v8.01
SnapManager for Microsoft SharePoint is an enterprise-strength backup, recovery, and data management solution for Microsoft SharePoint 2013 and SharePoint 2010. SnapManager 8.0 for Microsoft SharePoint includes the following highlighted new features:
1) Clustered Data ONTAP 8.2 support
2) SnapVault integration using SnapManager 7.0 for SQL Server in clustered Data ONTAP
3) SnapManager for Microsoft SharePoint Manager role based access control (RBAC) support
4) SharePoint content database cloning
5) Complete backup and restore support for all SharePoint 2013 objects
http://support.netapp.com/NOW/download/software/snapmanager_sharepoint/8.0.1/

HOWTO Secure iSCSI Luns Between Fedora 20 and NetApp Storage with Mutual CHAP

Saturday, 21 Dec 2013

Posted by Slice2 in iSCSI, Linux, Security


Tags

iSCSI, Linux, Security

This post demonstrates how to enable Bidirectional or mutual CHAP on iSCSI luns between Fedora 20 and NetApp storage. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple.

1) If not already installed, install the iSCSI initiator on your system.
> yum install iscsi-initiator*
> reboot (don’t argue with me, just do it!)

2) Display your server’s new iscsi initiator or iqn nodename.
> cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:4622a8d25677

3) On the NetApp filer, create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

4) Create the lun in the volume.
netapp> lun create -s 5g -t linux /vol/MCHAPVOL/FED20_iSCSI_MCHAP_01

5) Create an igroup and add the Linux iscsi nodename or iqn from step 2 above to it.
netapp> igroup create -i -t linux ISCSI_MCHAP_FED20
netapp> igroup add ISCSI_MCHAP_FED20 iqn.1994-05.com.redhat:4622a8d25677
netapp> igroup show ISCSI_MCHAP_FED20

ISCSI_MCHAP_FED20 (iSCSI) (ostype: linux):
InitiatorName=iqn.1994-05.com.redhat:4622a8d25677 (not logged in)

6) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/FED20_iSCSI_MCHAP_01 ISCSI_MCHAP_FED20 01

7) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939

8) Set the CHAP secret on the NetApp controller.
netapp> iscsi security add -i iqn.1994-05.com.redhat:4622a8d25677 -s chap -p FED20 -n iqn.1994-05.com.redhat:4622a8d25677 -o NETAPPMCHAP -m iqn.1992-08.com.netapp:sn.84167939

netapp> iscsi security show
init: iqn.1994-05.com.redhat:4622a8d25677 auth: CHAP Inbound password: **** Inbound username: iqn.1994-05.com.redhat:4622a8d25677 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

9) On the server, edit your /etc/iscsi/iscsid.conf file and set the parameters below.
> vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:4622a8d25677
node.session.auth.password = FED20
node.session.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
node.session.auth.password_in = NETAPPMCHAP
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1994-05.com.redhat:4622a8d25677
discovery.sendtargets.auth.password = FED20
discovery.sendtargets.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
discovery.sendtargets.auth.password_in = NETAPPMCHAP
Save and exit vi (:wq).
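If you configure many hosts, the same edits can be applied non-interactively with sed. A sketch, demonstrated here on a sample file so it is safe to try; point CONF at /etc/iscsi/iscsid.conf on the real host (and back it up first). The IQNs and secrets are the examples from this post.

```shell
# Sketch: apply iscsid.conf CHAP settings non-interactively.
# Demonstrated on a sample file; use CONF=/etc/iscsi/iscsid.conf for real.
CONF=./iscsid.conf.sample
cat > "$CONF" <<'EOF'
#node.startup = manual
#node.session.auth.authmethod = None
EOF

set_opt() {
  # Replace the (possibly commented-out) setting, or append it if absent.
  if grep -q "^#\{0,1\}$1 *=" "$CONF"; then
    sed -i "s|^#\{0,1\}$1 *=.*|$1 = $2|" "$CONF"
  else
    printf '%s = %s\n' "$1" "$2" >> "$CONF"
  fi
}

set_opt node.startup automatic
set_opt node.session.auth.authmethod CHAP
set_opt node.session.auth.username iqn.1994-05.com.redhat:4622a8d25677
set_opt node.session.auth.password FED20
cat "$CONF"
```

The same set_opt calls can be repeated for the username_in/password_in and discovery.sendtargets.* keys from step 9.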

10) On the server, restart the service and discover your iSCSI target (your storage system).
> service iscsi restart
Redirecting to /bin/systemctl restart  iscsi.service

a) You should see an entry on the NetApp console:
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1994-05.com.redhat:8ef4c68cfb5 at IP addr 10.10.10.195

b) Verify the target.
> iscsiadm -m discovery -t st -p 10.10.10.11
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

> iscsiadm -m node  (this should display the same as above)
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

11) On the server, manually log in to the iSCSI target (your storage array).
> iscsiadm -m node --targetname "iqn.1992-08.com.netapp:sn.84167939" --login

Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.

On the NetApp storage console you should see the iSCSI sessions:
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1994-05.com.redhat:4622a8d25677 at IP addr 10.10.10.184

a) Verify the iSCSI session on the filer:
netapp> iscsi session show
Session 25
Initiator Information
Initiator Name: iqn.1994-05.com.redhat:4622a8d25677
ISID: 00:02:3d:01:00:00
Initiator Alias: fed20
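The counterpart to the login in step 11, for when you need to disconnect cleanly (for example before unmapping the lun on the filer), is the --logout flag:

```shell
# Cleanly close the iSCSI session when finished with the lun.
# Unmount any filesystems on the lun before logging out.
iscsiadm -m node --targetname "iqn.1992-08.com.netapp:sn.84167939" --logout
```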

12) Stop and start the iscsi service on the server.
> service iscsi stop
Pause for 10 seconds and then run the next command.
> service iscsi start

13) From the server, check your session.
> iscsiadm -m session -P 1
Current Portal: 10.10.10.11:3260,1000
Persistent Portal: 10.10.10.11:3260,1000
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:4622a8d25677
Iface IPaddress: 10.10.10.184
Iface HWaddress: <empty>
Iface Netdev: <empty>
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

14) From the server, check the NetApp iSCSI details.
> iscsiadm --mode node --targetname "iqn.1992-08.com.netapp:sn.84167939" --portal 10.10.10.11:3260

15) From the server, find and format the new lun (new disk).
> cat /var/log/messages | grep "unknown partition table"
fed20 kernel: [ 2769.356768]  sdb: unknown partition table

> fdisk /dev/sdb

Welcome to fdisk (util-linux 2.24).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.

Created a new DOS disklabel with disk identifier 0xc6cb1cf2.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

> fdisk /dev/sdb
Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-10485759, default 2048): <press enter>
Last sector, +sectors or +size{K,M,G,T,P} (2048-10485759, default 10485759): <press enter>

Created a new partition 1 of type 'Linux' and of size 5 GiB.

Command (m for help): p
Disk /dev/sdb: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x702e7603

Device    Boot Start       End  Blocks  Id System
/dev/sdb1       2048  10485759 5241856  83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

16) On the server, create the Linux file system on the new partition.
> mkfs -t ext4 /dev/sdb1
mke2fs 1.42.8 (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310464 blocks
65523 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

17) Verify the partition.
> blkid /dev/sdb1
/dev/sdb1: UUID="c1466d95-2551-4e0a-9dcb-fd430be03fe7" TYPE="ext4" PARTUUID="702e7603-01"

18) Create the mount point and manually mount the directory.
> mkdir /newiscsilun
> mount /dev/sdb1 /newiscsilun
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  4.8G  10M  4.6G   1% /newiscsilun

19) Add the new mount point to /etc/fstab.
> vi /etc/fstab
Add the line below, then save and exit (:wq):
/dev/sdb1 /newiscsilun ext4 _netdev 0 0

Note: the _netdev option is important so that it doesn’t try mounting the target before the network is available.

20) Test that it survives a reboot by rebooting the server. With _netdev set, iscsi starts and your CHAP logins take place before it attempts to mount. After the reboot, log in and verify it's mounted.

> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  5.0G  139M  4.6G   3% /newiscsilun

21) On the server you can check session stats.
> iscsiadm -m session -s
Stats for session [sid: 1, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260]
iSCSI SNMP:
txdata_octets: 22136
rxdata_octets: 377532
noptx_pdus: 0
scsicmd_pdus: 60
tmfcmd_pdus: 0
login_pdus: 0
text_pdus: 0
dataout_pdus: 0
logout_pdus: 0
snack_pdus: 0
noprx_pdus: 0
scsirsp_pdus: 60
tmfrsp_pdus: 0
textrsp_pdus: 0
datain_pdus: 56
logoutrsp_pdus: 0
r2t_pdus: 0
async_pdus: 0
rjt_pdus: 0
digest_err: 0
timeout_err: 0
iSCSI Extended:
tx_sendpage_failures: 0
rx_discontiguous_hdr: 0
eh_abort_cnt: 0

22) As root, change permissions on /etc/iscsi/iscsid.conf. I'm not sure why they haven't fixed this clear-text-CHAP-password-in-a-file issue, so just make sure only root can read/write the file.
> chmod 600 /etc/iscsi/iscsid.conf

23) On the NetApp storage you can verify the Lun and the server’s session.
netapp> lun show -v /vol/MCHAPVOL/FED20_iSCSI_MCHAP_01
/vol/MCHAPVOL/FED20_iSCSI_MCHAP_01 5g (5368709120) (r/w, online, mapped)
Serial#: hoagPJvUgR5s
Share: none
Space Reservation: enabled (not honored by containing Aggregate)
Multiprotocol Type: linux
Maps: ISCSI_MCHAP_FED20=1

netapp> iscsi session show -v
Session 28
Initiator Information
Initiator Name: iqn.1994-05.com.redhat:4622a8d25677
ISID: 00:02:3d:01:00:00
Initiator Alias: fed20

Session Parameters
SessionType=Normal
TargetPortalGroupTag=1000
MaxConnections=1
ErrorRecoveryLevel=0
AuthMethod=CHAP
HeaderDigest=None
DataDigest=None
ImmediateData=Yes
InitialR2T=No
FirstBurstLength=65536
MaxBurstLength=65536
Initiator MaxRecvDataSegmentLength=65536
Target MaxRecvDataSegmentLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=0
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
Command Window Size: 32

Connection Information
Connection 0
Remote Endpoint: 10.10.10.184:50977
Local Endpoint: 10.10.10.11:3260
Local Interface: e0a
TCP recv window size: 131400

Command Information
No commands active

HOWTO Secure iSCSI Luns Between Oracle Enterprise Linux 6.5 and NetApp Storage with Mutual CHAP

14 Saturday Dec 2013

Posted by Slice2 in Linux, NetApp, Oracle

≈ Leave a comment

Tags

Linux, NetApp, Oracle, Security

This post demonstrates how to enable bidirectional or mutual CHAP on iSCSI luns between Oracle Enterprise Linux 6 update 5 and NetApp storage. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple.

1) Install open-iscsi on your server.
> yum install iscsi-initiator*
> reboot (don’t argue with me, just do it!)

2) Display your server’s new iscsi initiator or iqn nodename.
> cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:523325af23

3) On the NetApp filer, create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

4) Create the lun in the volume.
netapp> lun create -s 5g -t linux /vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01

5) Create an igroup and add the Oracle Enterprise Linux iscsi nodename or iqn from step 2 above to it.
netapp> igroup create -i -t linux ISCSI_MCHAP_OEL6u5
netapp> igroup add ISCSI_MCHAP_OEL6u5 iqn.1988-12.com.oracle:523325af23
netapp> igroup show ISCSI_MCHAP_OEL6u5
ISCSI_MCHAP_OEL6u5 (iSCSI) (ostype: linux):
iqn.1988-12.com.oracle:523325af23 (not logged in)

6) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01 ISCSI_MCHAP_OEL6u5 01

7) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939

8) Set the CHAP secret on the NetApp controller.
netapp> iscsi security add -i iqn.1988-12.com.oracle:523325af23 -s chap -p MCHAPOEL6u5 -n iqn.1988-12.com.oracle:523325af23 -o NETAPPMCHAP -m iqn.1992-08.com.netapp:sn.84167939

netapp> iscsi security show
init: iqn.1988-12.com.oracle:523325af23 auth: CHAP Inbound password: **** Inbound username: iqn.1988-12.com.oracle:523325af23 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

9) On the server, edit your /etc/iscsi/iscsid.conf file and set the parameters below.
> vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1988-12.com.oracle:523325af23
node.session.auth.password = MCHAPOEL6u5
node.session.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
node.session.auth.password_in = NETAPPMCHAP
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1988-12.com.oracle:523325af23
discovery.sendtargets.auth.password = MCHAPOEL6u5
discovery.sendtargets.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
discovery.sendtargets.auth.password_in = NETAPPMCHAP
> wq!
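A quick way to confirm the auth directives took effect is to grep for them. This is a sketch using a helper name of my own (chap_settings), not an open-iscsi tool:

```shell
# Print the active (uncommented) CHAP auth directives from an iscsid.conf-style
# file. Commented-out lines (leading '#') are intentionally excluded.
chap_settings() {
  grep -E '^(node\.session|discovery\.sendtargets)\.auth\.' "$1"
}
```

Running `chap_settings /etc/iscsi/iscsid.conf` should print all ten auth lines from this step; a missing line usually means the directive is still commented out.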

10) On the server, restart the service and discover your iSCSI target (your storage system).
> service iscsi restart
> iscsiadm -m discovery -t st -p 10.10.10.11
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

> iscsiadm -m node  (this should display the same as above)
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

11) On the server, manually log in to the iSCSI target (your storage array). Note there are two dashes in front of --login; depending on the font it can look like one.
> iscsiadm -m node -T "iqn.1992-08.com.netapp:sn.84167939" --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.

Verify the iSCSI session on the filer:
netapp> iscsi session show
Session 10
Initiator Information
Initiator Name: iqn.1988-12.com.oracle:523325af23
ISID: 00:02:3d:01:00:00
Initiator Alias: oel6u5

12) Stop and start the iscsi service on the server.
> service iscsi stop
Pause for 10 seconds and then run the next command.
> service iscsi start

13) From the server, check your session.
> iscsiadm -m session -P 1
Target: iqn.1992-08.com.netapp:sn.84167939
Current Portal: 10.10.10.11:3260,1000
Persistent Portal: 10.10.10.11:3260,1000
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1988-12.com.oracle:523325af23
Iface IPaddress: 10.10.10.93
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

14) From the server, check the NetApp iSCSI details. Note there are two dashes in front of --mode, --targetname and --portal; depending on the font they can look like one.
> iscsiadm --mode node --targetname "iqn.1992-08.com.netapp:sn.84167939" --portal 10.10.10.11:3260
# BEGIN RECORD 6.2.0-873.10.el6
node.name = iqn.1992-08.com.netapp:sn.84167939
node.tpgt = 1000
node.startup = automatic
node.leading_login = No
iface.hwaddress = <empty>
iface.ipaddress = <empty>
iface.iscsi_ifacename = default
<output truncated to keep the post short>

15) From the server, find and format the new lun (new disk). At the fdisk prompts, enter the responses shown below.
> cat /var/log/messages | grep "unknown partition table"
Dec 14 08:55:02 oel6u5 kernel: sdb: unknown partition table
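If you are not sure which device node the new lun landed on, /proc/partitions lists every block device the kernel knows about. A small sketch (disk_names is a hypothetical helper that just parses the table):

```shell
# Print the device-name column of /proc/partitions-style input, skipping the
# two header lines ("major minor #blocks name" and the blank line after it).
disk_names() {
  awk 'NR > 2 && NF >= 4 { print $4 }'
}

# Usage on the server:  disk_names < /proc/partitions
# The new 5 GB lun should show up as a disk (e.g. sdb) with no partitions yet.
```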

> fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x54ac8aa4.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

> fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): c
DOS Compatibility flag is not set

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): n
Command action
e   extended
p   primary partition (1-4) <press the P key>
p
Partition number (1-4): 1
First sector (2048-10485759, default 2048): <press enter>
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): <press enter>
Using default value 10485759

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes
166 heads, 62 sectors/track, 1018 cylinders, total 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x54ac8aa4

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

16) On the server, create the Linux file system on the new partition.
> mkfs -t ext4 /dev/sdb1
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310464 blocks
65523 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

17) Verify the partition.
> blkid /dev/sdb1
/dev/sdb1: UUID="1a6e2a56-924f-4e3b-b281-ded3a3141ab4" TYPE="ext4"

18) Create the mount point and manually mount the directory.
> mkdir /newiscsilun
> mount /dev/sdb1 /newiscsilun
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  4.8G  10M  4.6G   1% /newiscsilun

19) Add the new mount point to /etc/fstab.
> vi /etc/fstab
/dev/sdb1 /newiscsilun ext4 _netdev 0 0
> wq!

Note: the _netdev option is important so that it doesn’t try mounting the target before the network is available.
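Since /dev/sdX names can shuffle between boots as more luns or disks are added, a more robust variant is to mount by the filesystem UUID that blkid reported in step 17 (a config sketch; substitute your own UUID):

```
# /etc/fstab entry keyed on the UUID instead of the device name
UUID=1a6e2a56-924f-4e3b-b281-ded3a3141ab4 /newiscsilun ext4 _netdev 0 0
```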

20) Test that it survives a reboot by rebooting the server. With _netdev set, iscsi starts and your CHAP logins take place before the mount is attempted.
> reboot

When done rebooting, log in and verify the lun is mounted.
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  4.8G  10M  4.6G   1% /newiscsilun

21) On the server you can check session stats.
> iscsiadm -m session -s
Stats for session [sid: 1, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260]
iSCSI SNMP:
txdata_octets: 31204
rxdata_octets: 917992
noptx_pdus: 0
scsicmd_pdus: 270
tmfcmd_pdus: 0
login_pdus: 0
text_pdus: 0
dataout_pdus: 0
logout_pdus: 0
snack_pdus: 0
noprx_pdus: 0
scsirsp_pdus: 270
tmfrsp_pdus: 0
textrsp_pdus: 0
datain_pdus: 242
logoutrsp_pdus: 0
r2t_pdus: 0
async_pdus: 0
rjt_pdus: 0
digest_err: 0
timeout_err: 0
iSCSI Extended:
tx_sendpage_failures: 0
rx_discontiguous_hdr: 0
eh_abort_cnt: 0

22) As root, change permissions on /etc/iscsi/iscsid.conf. The CHAP passwords sit in this file in clear text (I'm not sure why that hasn't been fixed), so make sure only root can read and write it.
> chmod 600 /etc/iscsi/iscsid.conf
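If you script your builds, you can assert the permissions rather than just set them once. check_perms below is a name of my own, a sketch assuming GNU coreutils stat:

```shell
# check_perms: exit 0 only if the file's numeric mode ends in 00, i.e. no
# group or other permissions at all (600, 400, 700 pass; 640 and 644 fail).
check_perms() {
  mode=$(stat -c '%a' "$1") || return 1
  case "$mode" in
    *00) return 0 ;;
    *)   return 1 ;;
  esac
}
```

Usage: `check_perms /etc/iscsi/iscsid.conf || chmod 600 /etc/iscsi/iscsid.conf`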

23) On the NetApp storage you can verify the Lun and the server’s session.
>  lun show -v /vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01
/vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01      5g (5368709120)    (r/w, online, mapped)
Serial#: hoagPJvLcRy6
Share: none
Space Reservation: enabled (not honored by containing Aggregate)
Multiprotocol Type: linux
Maps: ISCSI_MCHAP_OEL6u5=1

>  iscsi session show -v
Session 12
Initiator Information
Initiator Name: iqn.1988-12.com.oracle:523325af23
ISID: 00:02:3d:01:00:00
Initiator Alias: oel6u5

Session Parameters
SessionType=Normal
TargetPortalGroupTag=1000
MaxConnections=1
ErrorRecoveryLevel=0
AuthMethod=CHAP
HeaderDigest=None
DataDigest=None
ImmediateData=Yes
InitialR2T=No
FirstBurstLength=65536
MaxBurstLength=65536
Initiator MaxRecvDataSegmentLength=65536
Target MaxRecvDataSegmentLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=0
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
Command Window Size: 32

Connection Information
Connection 0
Remote Endpoint: 10.10.10.93:33454
Local Endpoint: 10.10.10.11:3260
Local Interface: e0a
TCP recv window size: 131400

Command Information
No commands active

Oracle Enterprise Linux 6.5 Hangs after Starting Certmonger

14 Saturday Dec 2013

Posted by Slice2 in Linux, Oracle

≈ 1 Comment

Tags

Linux, Oracle

So, you are installing Oracle Enterprise Linux 6 update 5 and you select the Desktop group of packages. When the system finishes installing and finally boots, it hangs at certmonger. The certmonger daemon monitors certificates for impending expiration and can optionally refresh soon-to-expire certificates with the help of a CA.

Why this kills the Desktop if X isn’t installed is beyond me. For some reason the dependent packages don’t get selected by yum. To fix it, perform the following steps.

1) Reboot and press the spacebar key to enter the boot menu during system start-up.
a) When the Grub menu appears, press the ‘e’ key.
b) Scroll down to the line with kernel and press the ‘e’ again.
c) At the end of the line, the last word should be 'quiet'. Use the right arrow key to move to the end of the line, press spacebar once to add a space after 'quiet', and press the 3 key (this boots the system to runlevel 3, text mode).
d) Then press the Enter key and then the letter ‘b’ to boot the system.
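For reference, the '3' appended above is a one-time boot to runlevel 3 (multi-user, text mode). If you ever want a box to stay in text mode, EL6 still uses SysV init, so the default runlevel lives in /etc/inittab (a config sketch; not required for this fix since X is installed next):

```
# /etc/inittab -- make runlevel 3 (no X) the default
id:3:initdefault:
```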

2) The system will boot into text mode. Now add the X Window System RPMs.
> yum update
> yum groupinstall “X Window System”

3) Reboot the system and you should have a working desktop.
> shutdown -r now

HOWTO Boot or Power On vSphere 5.x VMs from the Command Line

11 Wednesday Dec 2013

Posted by Slice2 in ESXi, VMware

≈ Leave a comment

Tags

ESXi, VMware

Sometimes you have to boot your VMs from the command line. This can happen after a power failure, or as part of detailed start-up/shutdown procedures. Whatever the reason, getting familiar with this option and documenting it is a good best practice. In fact, creating a few dummy VMs to learn the process is even better.

Perform the following at either the ESXi console or via SSH Putty/xterm session.

1) To power on a VM from the command line, find the inventory ID of the VM. The first column is the VMID.
> vim-cmd vmsvc/getallvms |grep <name of your vm>

Or, if you just want to list them all, use getallvms.
> vim-cmd vmsvc/getallvms

Vmid  Name        File                             Guest          Version
5     vcentersql  [vms] vcentersql/vcentersql.vmx  winSrv64Guest  vmx-09
8     labdc01     [vms] labdc01/labdc01.vmx        winSrv64Guest  vmx-09
10    splunk      [vms] splunk/splunk.vmx          winSrv64Guest  vmx-09
11    Kali105     [vms] Kali105/Kali105.vmx        deb664Guest    vmx-08
12    nessus      [vms] nessus/nessus.vmx          deb664Guest    vmx-08

2) Check the power state of the virtual machine you need to boot. In this case I need to boot the Domain Controller first, so I choose Vmid 8 from the list above.
> vim-cmd vmsvc/power.getstate 8
Retrieved runtime info
Powered off

3) Now that you have the Vmid and know the current state, boot the VM.
> vim-cmd vmsvc/power.on 8
Powering on VM

4) Check the process. Note that this command will only list your started and running VMs.
> esxcli vm process list
labdc01
World ID: 52680
Process ID: 0
VMX Cartel ID: 52669
UUID: 42 27 d6 f3 45 20 22 52-cb d2 b7 e1 c4 13 1a f4
Display Name: labdc01
Config File: /vmfs/volumes/9faff676-f7876623/labdc01/labdc01.vmx

5) Verify that the VM is up. Also verify on the VM console or, if Windows, RDP into the VM. Note that if your vCenter is not up, use the vSphere Client to log in directly to the ESXi host to check the VM console.
> vim-cmd vmsvc/power.getstate 8
Retrieved runtime info
Powered on

For reference, if you need to power off, reset or reboot a VM from the command line:

1) Get the Vmid from the output of getallvms.
> vim-cmd vmsvc/getallvms

2) Using the Vmid from the output above for the VM you want to control, choose the command below based on the action you want to take (reboot, reset or shutdown).
> vim-cmd vmsvc/power.reboot <vmid>
> vim-cmd vmsvc/power.reset <vmid>
> vim-cmd vmsvc/power.shutdown <vmid>
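The per-VM commands above are easy to script. A sketch, assuming the getallvms header row starts with "Vmid" as shown earlier (get_vmids is my own helper name, not a vim-cmd feature):

```shell
# get_vmids: drop the header row of `vim-cmd vmsvc/getallvms` output and
# print the first column (the Vmid) of each remaining line.
get_vmids() {
  awk 'NR > 1 { print $1 }'
}

# Example, run on the ESXi host: power on every registered VM.
#   vim-cmd vmsvc/getallvms | get_vmids | while read -r id; do
#     vim-cmd vmsvc/power.on "$id"
#   done
```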

HOWTO Install VMware Tools in Nested ESXi on ESXi

10 Tuesday Dec 2013

Posted by Slice2 in VMware

≈ Leave a comment

Tags

ESXi, VMware

A new VMware Fling was released a few weeks ago and I missed it. You can now install VMware Tools in your nested ESXi hosts. It works with nested ESXi running 5.0, 5.1 or 5.5. I'm running 5.1u1 (1312873) with nested 5.1u1 for this post and it works great. I suppose this also works with VMware Workstation 10 but I haven't tried it.

The Fling, or tools can be downloaded here.
http://labs.vmware.com/flings/vmware-tools-for-nested-esxi

Steps:

1) Login to the ESXi console and enable SSH or the ESXi Shell.
a) In vCenter, open a console on the nested ESXi VM.
b) Press F2 and login as root.
c) Scroll down to Troubleshooting Options and press Enter.
d) Select either Enable SSH (preferred) or Enable ESXi Shell and press Enter to enable it.
e) If SSH, launch Putty or an Xterm and login as root. If ESXi Shell, press ALT+F1 and login as root at the console. Press ALT+F2 to get back to the ESXi DCUI.

2) Whether you logged into the Host with SSH or at the console, place the ESXi Host in maintenance mode.
> esxcli system maintenanceMode set -e true

Now, verify Maintenance Mode is enabled.
> esxcli system maintenanceMode get
Enabled

3) Launch vSphere Client and connect directly to the nested ESXi host as root.
a) On the Configuration tab, select Storage. Right-click the local datastore and select Browse datastore.
b) Click the Upload A File icon and select the esx-tools-for-esxi-9.7.0-0.0.00000.i386.vib file and click Open.
c) When done, close out of the vSphere client session.

4) Back in the ESXi Host, change to the volume (datastore) where you placed the VIB.
> cd /vmfs/volumes/<your local datastore name>
> ls -l esx-tools*  (to verify that the file is there)
> esxcli software vib install -v /vmfs/volumes/<your local datastore name>/esx-tools-for-esxi-9.7.0-0.0.00000.i386.vib -f

Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMware_bootbank_esx-tools-for-esxi_9.7.0-0.0.00000
VIBs Removed:
VIBs Skipped:

5) Reboot the Host and set the reason for the reboot action.
> esxcli system shutdown reboot -r "Hey, I just installed VMware Tools on nested ESXi"

6) In vCenter (Web or vSphere Client), check the Summary page for the nested ESXi host. It should now show VMware Tools as installed and current. Reference images are shown below.

a) Before VMware tools (showing both Web and vSphere Client images):
[Screenshots: WebClient-nesxi-before, vSphereClient-nesxi-before]

b) And after VMware tools (showing both Web and vSphere Client images):
[Screenshots: vSphereClient-nesxi-after, WebClient-nesxi-after]

Using Wireshark and Splunk to find iSCSI CHAP Negotiation Failures on VMware ESXi

02 Monday Dec 2013

Posted by Slice2 in iSCSI, NetApp, Security, VMware, Wireshark

≈ Leave a comment

Tags

iSCSI, NetApp, Security, VMware, Wireshark

This is a companion post to the HOWTO on sniffing packets in ESXi that I posted earlier.

Say you need to isolate traffic to troubleshoot iSCSI CHAP session negotiation failures between ESXi and NetApp storage.

Using Wireshark:

1) Dump the traffic to a pcap file and open it with Wireshark.  Before you start the capture, change directories so you can easily recover the pcap file from the datastore in vCenter.

> cd /vmfs/volumes/datastore1
> tcpdump-uw -i vmk1 -s 1514 -w esxihost01.pcap
> CTRL+C
a) When done, in vCenter select the ESXi host you were sniffing packets on, then click the Configuration tab > Storage.
b) Right-click datastore1 (or the datastore were your pcap file is) and select Browse datastore.
c) Click download a file > select the location and click OK.
d) Double-click the file and it will open in Wireshark.
e) In Wireshark, in the upper left, enter iscsi.login.T in the Filter: field and click Apply. This only shows the iSCSI login packets. You can clearly see on the right in the Info column, packet 856 is an Authentication Failure packet.

[Screenshot: wiresharkISCSIlogin]

Using Splunk:

Another way to see the authentication failure is with Splunk. Assuming your NetApp storage (or any vendor) is configured to send syslog to Splunk, you can easily find the event. Splunk is an excellent Syslog server. You can download and use it for free up to 500 Megs a day indexed. I won’t go into the Splunk configuration in this post. I’ll post that soon.

Download it from here: http://www.splunk.com/download?r=header

1) Login to the Splunk UI, click Search to launch the Search app, enter the string below and the results will be displayed.

> index="*" host="10.10.10.11" "iSCSI" "failed"

Note: replace the IP address with your storage controller's hostname or IP.

[Screenshot: SplunkiSCSIlogin]
