slice2


Tag Archives: NetApp

New Releases: NetApp 7-Mode Transition Tool 1.3, SnapDrive For Windows v7.0.3, SnapManager for Microsoft SQL Server v7.1

01 Tuesday Jul 2014

Posted by Slice2 in NetApp


Tags

NetApp

7-Mode Transition Tool v1.3
7-Mode Transition Tool 1.3 includes the following new features and enhancements:

1) Managing concurrent SnapMirror transfers: The 7-Mode Transition Tool supports creating data copy schedules that specify the number of concurrent SnapMirror transfers and a throttle limit, allowing better control of the transition data copy process.
2) Testing the transitioned volumes in clustered Data ONTAP before performing the cutover operation: The 7-Mode Transition Tool provides a new precutover phase that lets you test the transitioned volumes and achieve a more predictable cutover duration. Its capabilities include transferring configurations before disconnecting clients, an option to enable test cutover, precutover in test mode (read/write) for testing configurations in clustered Data ONTAP before cutover, precutover in read-only mode, and initiating on-demand SnapMirror updates.
3) Transitioning Kerberos configuration: The 7-Mode Transition Tool supports transitioning of Kerberos configuration from the web interface.
4) Removing volumes from subproject at any stage during the transition: The 7-Mode Transition Tool provides the ability to exclude a volume from a subproject at any stage during the transition.
5) Excluding CIFS local users and groups, UNIX users and groups, or netgroup configurations from transition. Note: This capability is available only from the CLI.
6) Support for 160 volumes or SnapMirror relationships per subproject: Across subprojects, the 7-Mode Transition Tool supports transition of a maximum of 240 volumes (standalone volumes or volume SnapMirror relationships). Note: This limit applies only to the web interface; the CLI imposes no limits.
7) Support for transition to clustered Data ONTAP 8.2, 8.2.1, or 8.2.2: For the list of Data ONTAP 7-Mode versions supported for transition by the 7-Mode Transition Tool, see the NetApp Interoperability Matrix.
8) The CLI wizard is deprecated

http://mysupport.netapp.com/NOW/download/software/ntap_7mtt/1.3/

SnapDrive For Windows v7.0.3
SnapDrive 7.0.3 for Windows is a maintenance release that includes bug fixes.

http://mysupport.netapp.com/NOW/download/software/snapdrive_win/7.0.3/

SnapManager for Microsoft SQL Server v7.1
SnapManager 7.1 for Microsoft SQL Server includes several new features and bug fixes. The new features include the following:

a) Support for Microsoft SQL Server 2014
b) Unrestricted database layout for LUNs and VMDKs
c) Recent Snapshot copies for each management group

http://mysupport.netapp.com/NOW/download/software/snapmanager_sql2k/7.1/

New NetApp Releases: OnCommand System Manager 3.1, Virtual Storage Console 5.0 for vSphere, VASA Provider 5.0 for Clustered Data ONTAP, 7-Mode Transition Tool 1.2, SnapManager 3.3.1 for Oracle

12 Saturday Apr 2014

Posted by Slice2 in NetApp


Tags

NetApp

OnCommand System Manager 3.1 for Windows and Linux
New Features, Enhancements, and Changes in System Manager 3.1:
1) Support for storage Quality of Service (QoS): For Data ONTAP 8.2 and later, you can manage storage QoS for FlexVol volumes and LUNs. You can create QoS policy groups and assign FlexVol volumes or LUNs to new or existing policy groups. The maximum throughput specified for the policy group enables you to manage the workload (input/output operations) of storage objects.
2) Support for managing HA pairs: For Data ONTAP 8.2.1, you can manage HA pairs in a cluster by manually initiating a takeover or giveback operation. You can also enable or disable automatic giveback for a node.
3) Support for SVMs with Infinite Volume: For Data ONTAP 8.2 and later, you can use System Manager to manage SVMs with Infinite Volume in a cluster. System Manager enables you to create, resize, mount, unmount, protect, and edit Infinite Volumes. Infinite Volumes and FlexVol volumes can coexist in the same cluster.
4) User interface enhancements:
a) Array LUNs: For Data ONTAP 8.2.1, you can install the V_StorageAttach license to add array LUNs to non-root aggregates. Note: This enhancement is also available for storage systems running Data ONTAP operating in 7-Mode.
b) Network interfaces: For Data ONTAP 8.2.1, you can use the Network Interfaces window to migrate a data LIF to a different port on the same node or a different node within the cluster.
c) Shares: For Data ONTAP 8.2.1, you can use the Edit Shares window to enable or disable access-based enumeration for a share.
d) Terminology changes: Starting with clustered Data ONTAP 8.2.1, Storage Virtual Machine (SVM) is the new descriptive name for Vserver, and the documentation uses the term SVM to refer to Vserver. The Data ONTAP command-line interface (CLI) continues to use the term Vserver in its output, and Vserver as a command or parameter name has not changed.
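The storage QoS capability above can also be driven from the clustered Data ONTAP command line rather than System Manager. A minimal sketch from memory of the 8.2-era CLI (the policy-group, SVM, and volume names pg_gold, svm1, and vol_db are made up; verify the syntax against your release):

```
netapp::> qos policy-group create -policy-group pg_gold -vserver svm1 -max-throughput 1000iops
netapp::> volume modify -vserver svm1 -volume vol_db -qos-policy-group pg_gold
netapp::> qos statistics performance show
```

Note that the throughput limit belongs to the policy group, so all volumes or LUNs assigned to the same group share it collectively.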

http://support.netapp.com/NOW/download/software/systemmgr_win/3.1/

Virtual Storage Console 5.0 for VMware vSphere
VSC 5.0 is a major change that includes a new look and seamless integration with the VMware vSphere Web Client. New features in this release include support for the following:
1) The VMware vSphere Web Client
2) VASA Provider for clustered Data ONTAP
3) SnapVault integration as a backup job option for clustered Data ONTAP
4) Adding a virtual machine or datastore to an existing backup job
5) Numerous bug fixes

VSC 5.0 discontinues support for the following:
1) vCenter 5.1 and earlier
2) VMware Desktop client
3) 32-bit Windows installations
4) mbralign
5) Single File Restore
6) Datastore Remote Replication
7) Flash Accel

http://support.netapp.com/NOW/download/software/vsc_win/5.0/

VASA Provider 5.0 for Clustered Data ONTAP
VASA Provider for clustered Data ONTAP is a virtual appliance that supports the VMware VASA (vStorage APIs for Storage Awareness) framework. It uses Virtual Storage Console for VMware vSphere as its management console. VASA Provider acts as an information pipeline that provides information to the vCenter Server about NetApp storage systems associated with VSC. Sharing this information with vCenter Server enables you to make more intelligent virtual machine provisioning decisions and be notified when certain storage conditions might affect your VMware environment.

http://support.netapp.com/NOW/download/software/vasa_cdot/5.0/

7-Mode Transition Tool
The 7-Mode Transition Tool enables copy-based transitions of Data ONTAP® 7G and 7-Mode FlexVol® volumes and configurations to new hardware that is running either clustered Data ONTAP 8.2 or 8.2.1, with minimal client disruption and retention of storage efficiency options.
Attention: You can transition only network-attached storage (NAS) environments to clustered Data ONTAP by using the 7-Mode Transition Tool.
New Features
1) Transition qtree-level NFS exports
2) Transition CIFS local users and groups
3) Bundle log files that provide details of the transition operations that have occurred on your system
4) Transition volumes that have only NFS configuration (volumes with UNIX security style and no CIFS configuration) without requiring you to configure CIFS on the Storage Virtual Machine (SVM, formerly known as Vserver)

http://support.netapp.com/NOW/download/software/ntap_7mtt/1.2/

SnapManager 3.3.1 for Oracle
New and enhanced features:
1) Support for Oracle Database 12c (non-CDB). Note: SnapManager 3.3.1 for Oracle does not support the container databases (CDBs) and pluggable databases (PDBs) available in Oracle Database 12c.
2) Support for Solaris on clustered Data ONTAP with SnapDrive 5.2.1 for UNIX
3) Support for vaulting in clustered Data ONTAP by using post-backup scripts
4) Access to the SnapManager GUI from the browser when Java Runtime Environment (JRE) 1.7 is installed
5) Support for Automatic Storage Management (ASM) on Linux without using ASMLib
UNIX:
http://support.netapp.com/NOW/download/software/snapmanager_oracle_unix/3.3.1/
Windows:
http://support.netapp.com/NOW/download/software/snapmanager_oracle_win/3.3.1/

New NetApp Releases: SnapManager for Hyper-V, SnapDrive for Linux, Solaris x86 and SPARC, NFS Plug-in for VMware VAAI, SnapManager for Microsoft Exchange, VSC for Red Hat Enterprise Virtualization

01 Saturday Mar 2014

Posted by Slice2 in NetApp


Tags

NetApp

New NetApp Releases:

SnapManager for Hyper-V v2.0.2
SnapManager for Hyper-V provides a solution for data protection and recovery for Microsoft Hyper-V virtual machines (VMs) running on Data ONTAP. You can perform application-consistent and crash-consistent dataset backups according to protection policies set by your backup administrator. You can also restore VMs from these backups. Reporting features enable you to monitor the status of and get detailed information about your backup and restore jobs.
SnapManager 2.0.2 for Hyper-V includes the following new features:
1) Support for Windows Server 2012 R2
http://support.netapp.com/NOW/download/software/snapmanager_hyperv_win/2.0.2/

SnapDrive for Linux, Solaris x86 and SPARC v5.2.1
SnapDrive for UNIX enables you to manage Snapshot copies and to automate storage provisioning tasks. It also helps you recover data that is accidentally deleted or modified.
SnapDrive 5.2.1 for Linux supports the following new features:
1) Support for paravirtual SCSI (PVSCSI) controlled devices on Linux guest operating systems
2) The ability to override the SnapMirror or SnapVault existence check, one of the mandatory checks performed during volume-based SnapRestore (VBSR), by using configuration variables in Data ONTAP operating in 7-Mode
http://support.netapp.com/NOW/download/software/snapdrive_redhatlinux/5.2.1/

For Solaris:
1) Support for Storage Area Network (SAN) and Network File System (NFS) in clustered Data ONTAP 8.2 or later
2) The ability to override the SnapMirror or SnapVault existence check, one of the mandatory checks performed during volume-based SnapRestore (VBSR), by using configuration variables in Data ONTAP operating in 7-Mode
Solaris x86:
http://support.netapp.com/NOW/download/software/snapdrive_solx86/5.2.1/
Solaris SPARC:
http://support.netapp.com/NOW/download/software/snapdrive_sol/5.2.1/

NFS Plug-in for VMware VAAI v1.0.21
The plug-in runs on the ESXi host and takes advantage of enhanced storage features offered by VMware vSphere. On the NetApp storage system, the NFS vStorage feature must be enabled for the ESXi host to take advantage of VMware VAAI. The plug-in performs NFS-like remote procedure calls (RPCs) to the server, using the same credentials as an ESXi NFS client. This means that the plug-in requires no additional credentials and has the same access rights as the ESXi NFS client.
New in this release:
1) IPv6 support in clustered Data ONTAP 8.2.1 or later, and in Data ONTAP 8.1.1 or later operating in 7-Mode
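As the description above notes, the NFS vStorage feature must be enabled on the storage side before the plug-in can do anything. A sketch of the commands I'd expect to use (the SVM name vs1 is made up; check the option and command names against your Data ONTAP release):

```
netapp> options nfs.vstorage.enable on                          (Data ONTAP operating in 7-Mode)
netapp::> vserver nfs modify -vserver vs1 -vstorage enabled     (clustered Data ONTAP)
```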
http://support.netapp.com/NOW/download/software/nfs_plugin_vaai/1.0.21/

SnapManager for Microsoft Exchange v6.1
SnapManager 6.1 for Microsoft Exchange includes several new features and enhancements:
1) Support for clustered Data ONTAP 8.2
2) Support for Microsoft Exchange Server 2007 and Exchange Server 2010 only
3) Support for Windows Server 2008 R2 SP1 and Windows Server 2012 only
4) Gapless backup support on DAG systems has been improved from three nodes to nine nodes
5) UTM retention can be triggered independently from backup retention
6) When specifying backup options, the backup retention setting for remote backups now applies to the selected management group
http://support.netapp.com/NOW/download/software/snapmanager_e2k/6.1/

VSC for Red Hat Enterprise Virtualization v1.0
Virtual Storage Console (VSC) for Red Hat Enterprise Virtualization (RHEV) software is a single plug-in that provides storage controller configuration, Network File System (NFS)-based storage domain management (provisioning, deduplication, resizing, and destruction), and virtual machine (VM) cloning for RHEV environments with storage domains backed by NetApp storage systems.
http://support.netapp.com/NOW/download/software/vsc_rhev/1.0/

NetApp releases new SnapDrive and MPIO versions with support for Windows 2012 R2

08 Saturday Feb 2014

Posted by Slice2 in NetApp, Windows


Tags

NetApp, Windows

New versions of SnapDrive and MPIO that officially support Windows 2012 R2 have been released. See the URLs below.

Data ONTAP DSM 4.1 for Windows MPIO
http://support.netapp.com/NOW/download/software/mpio_win/4.1/

SnapDrive 7.0.2 for Windows
http://support.netapp.com/NOW/download/software/snapdrive_win/7.0.2/

HOWTO Secure iSCSI Luns Between Red Hat Enterprise Linux 7 (Beta) and NetApp Storage with Mutual CHAP

01 Saturday Feb 2014

Posted by Slice2 in iSCSI, Linux, NetApp, Security


Tags

iSCSI, Linux, NetApp, Security

This post demonstrates how to enable Bidirectional or Mutual CHAP on iSCSI luns between Red Hat Enterprise Linux 7 (Beta) and NetApp storage. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple.

1) If not already installed, install the iSCSI initiator on your system.
> yum install iscsi-initiator*

2) Display your server's iSCSI initiator node name (IQN).
> cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:ece5618996a9

3) On the NetApp filer, create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

4) Create the lun in the volume.
netapp> lun create -s 5g -t linux /vol/MCHAPVOL/RHEL7_iSCSI_MCHAP_01

5) Create an igroup and add the Linux iscsi nodename or iqn from step 2 above to the new igroup.
netapp> igroup create -i -t linux ISCSI_MCHAP_RHEL7
netapp> igroup add ISCSI_MCHAP_RHEL7 iqn.1994-05.com.redhat:ece5618996a9
netapp> igroup show ISCSI_MCHAP_RHEL7

ISCSI_MCHAP_RHEL7 (iSCSI) (ostype: linux):
iqn.1994-05.com.redhat:ece5618996a9 (not logged in)

6) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/RHEL7_iSCSI_MCHAP_01 ISCSI_MCHAP_RHEL7 01

7) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939

8) Set the CHAP secret on the NetApp controller.
netapp> iscsi security add -i iqn.1994-05.com.redhat:ece5618996a9 -s chap -p RHEL7 -n iqn.1994-05.com.redhat:ece5618996a9 -o NETAPPMCHAP -m iqn.1992-08.com.netapp:sn.84167939

netapp> iscsi security show
init: iqn.1994-05.com.redhat:ece5618996a9 auth: CHAP Inbound password: **** Inbound username: iqn.1994-05.com.redhat:ece5618996a9 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

9) On the server, edit your /etc/iscsi/iscsid.conf file and set the parameters below.
> vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:ece5618996a9
node.session.auth.password = RHEL7
node.session.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
node.session.auth.password_in = NETAPPMCHAP
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1994-05.com.redhat:ece5618996a9
discovery.sendtargets.auth.password = RHEL7
discovery.sendtargets.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
discovery.sendtargets.auth.password_in = NETAPPMCHAP
> wq!
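Before restarting the service, it can be worth sanity-checking that both the inbound and the outbound (_in) credentials made it into the file; a missing username_in leaves you with one-way CHAP (or a failed login if the target insists on outbound authentication). A small sketch that checks a scratch copy of the settings (the path /tmp/iscsid.conf.example and all values are just this post's examples):

```shell
# Write the example settings to a scratch file, then verify that every
# key needed for mutual CHAP is present. Values are this post's examples.
conf=/tmp/iscsid.conf.example
cat > "$conf" <<'EOF'
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:ece5618996a9
node.session.auth.password = RHEL7
node.session.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
node.session.auth.password_in = NETAPPMCHAP
EOF

# Fail loudly if any of the five session-auth keys is absent.
missing=0
for key in node.session.auth.authmethod node.session.auth.username \
           node.session.auth.password node.session.auth.username_in \
           node.session.auth.password_in; do
  grep -q "^$key = " "$conf" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "mutual CHAP settings present"
```

Point conf at the real /etc/iscsi/iscsid.conf to run the same check against your live file.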

10) On the server, restart the service and discover your iSCSI target (your storage system).
> service iscsi restart
Redirecting to /bin/systemctl restart  iscsi.service

a) Verify the target.
> iscsiadm -m discovery -t st -p 10.10.10.11
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

> iscsiadm -m node  (this should display the same as above)
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

11) On the server, manually log in to the iSCSI target (your storage array). Note there are two dashes "--" in front of targetname and login.
> iscsiadm -m node --targetname "iqn.1992-08.com.netapp:sn.84167939" --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.

a) On the NetApp storage console you should see the iSCSI session:
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1994-05.com.redhat:ece5618996a9 at IP addr 10.10.10.186

b) Verify the iSCSI session on the filer:
netapp> iscsi session show
Session 88
Initiator Information
Initiator Name: iqn.1994-05.com.redhat:ece5618996a9
ISID: 00:02:3d:01:00:00
Initiator Alias: rhel7

12) From the server, check your session.
> iscsiadm -m session -P 1
Target: iqn.1992-08.com.netapp:sn.84167939
Current Portal: 10.10.10.11:3260,1000
Persistent Portal: 10.10.10.11:3260,1000
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:ece5618996a9
Iface IPaddress: 10.10.10.186
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

13) From the server, check the NetApp iSCSI details. Note there are two dashes "--" in front of mode, targetname and portal.
> iscsiadm --mode node --targetname "iqn.1992-08.com.netapp:sn.84167939" --portal 10.10.10.11:3260

14) From the server, find and format the new lun (new disk). Your fdisk input is shown at each prompt below.
> cat /var/log/messages | grep "unknown partition table"
rhel7 kernel: [   24.102281]  sdb: unknown partition table

> fdisk /dev/sdb

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x2c025f67.

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

> fdisk /dev/sdb
Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-10485759, default 2048): <press enter>
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): <press enter>
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): p
Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xeb560917

Device Boot  Start  End       Blocks   Id  System
/dev/sdb1    2048   10485759  5241856  83  Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

15) On the server, create the Linux file system on the new partition.
> mkfs -t ext4 /dev/sdb1
mke2fs 1.42.8 (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310464 blocks
65523 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

16) Verify the partition.
> blkid /dev/sdb1
/dev/sdb1: UUID=”540997d7-ee07-42b3-a4af-612af6812d18″ TYPE=”ext4″

17) Create the mount point and manually mount the directory.
> mkdir /newiscsilun
> mount /dev/sdb1 /newiscsilun
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  4.8G  10M  4.6G   1% /newiscsilun

18) Add the new mount point to /etc/fstab.
> vi /etc/fstab
/dev/sdb1 /newiscsilun ext4 _netdev 0 0
> wq!

Note: the _netdev option is important so that the system doesn't try to mount the target before the network is available.
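One caution: /dev/sdb can renumber across reboots once more than one iSCSI lun is attached, so the fstab entry can instead be keyed on the filesystem UUID that blkid reported in step 16. A sketch that builds such a line (written to a scratch file /tmp/fstab.example here rather than the live /etc/fstab; the UUID is the example value from step 16):

```shell
# Build a UUID-based fstab line; the UUID is the example reported by
# blkid in step 16. Appended to a scratch copy, not the live /etc/fstab.
uuid=540997d7-ee07-42b3-a4af-612af6812d18
echo "UUID=$uuid /newiscsilun ext4 _netdev 0 0" >> /tmp/fstab.example
grep _netdev /tmp/fstab.example
```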

19) Test that it survives a reboot by rebooting the server. With _netdev set, iscsi starts and your CHAP logins should take place before the system attempts to mount. After the reboot, log in and verify it's mounted.

> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  5.0G  139M  4.6G   3% /newiscsilun

20) On the server you can check session stats.
> iscsiadm -m session -s
Stats for session [sid: 1, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260]
iSCSI SNMP:
txdata_octets: 17096
rxdata_octets: 748232
noptx_pdus: 0
scsicmd_pdus: 213
tmfcmd_pdus: 0
login_pdus: 0
text_pdus: 0
dataout_pdus: 0
logout_pdus: 0
snack_pdus: 0
noprx_pdus: 0
scsirsp_pdus: 213
tmfrsp_pdus: 0
textrsp_pdus: 0
datain_pdus: 204
logoutrsp_pdus: 0
r2t_pdus: 0
async_pdus: 0
rjt_pdus: 0
digest_err: 0
timeout_err: 0
iSCSI Extended:
tx_sendpage_failures: 0
rx_discontiguous_hdr: 0
eh_abort_cnt: 0

21) As root, change permissions on /etc/iscsi/iscsid.conf. I'm not sure why this clear-text CHAP-password-in-a-file issue hasn't been fixed, so just make sure only root can read and write the file.
> chmod 600 /etc/iscsi/iscsid.conf
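A quick way to confirm the mode afterward is stat's octal-permissions format. Demonstrated on a scratch file here (the check is identical for the real /etc/iscsi/iscsid.conf):

```shell
# Create a scratch file, apply the same restriction, and confirm
# that only the owner can read and write it.
f=/tmp/iscsid.conf.scratch
touch "$f"
chmod 600 "$f"
stat -c '%a' "$f"    # prints: 600
```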

22) On the NetApp storage you can verify the Lun and the server’s session.
netapp> lun show -v /vol/MCHAPVOL/RHEL7_iSCSI_MCHAP_01
/vol/MCHAPVOL/RHEL7_iSCSI_MCHAP_01      5g (5368709120)    (r/w, online, mapped)
Serial#: hoagPJvrDTup
Share: none
Space Reservation: enabled (not honored by containing Aggregate)
Multiprotocol Type: linux
Maps: ISCSI_MCHAP_RHEL7=1

netapp> iscsi session show -v
Session 90
Initiator Information
Initiator Name: iqn.1994-05.com.redhat:ece5618996a9
ISID: 00:02:3d:01:00:00
Initiator Alias: rhel7

Session Parameters
SessionType=Normal
TargetPortalGroupTag=1000
MaxConnections=1
ErrorRecoveryLevel=0
AuthMethod=CHAP
HeaderDigest=None
DataDigest=None
ImmediateData=Yes
InitialR2T=No
FirstBurstLength=65536
MaxBurstLength=65536
Initiator MaxRecvDataSegmentLength=65536
Target MaxRecvDataSegmentLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=0
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
Command Window Size: 32

Connection Information
Connection 0
Remote Endpoint: 10.10.10.186:59575
Local Endpoint: 10.10.10.11:3260
Local Interface: e0a
TCP recv window size: 131400

Command Information
No commands active

HOWTO Secure iSCSI Luns Between Oracle Solaris 11 and NetApp Storage Using Bidirectional CHAP

09 Thursday Jan 2014

Posted by Slice2 in iSCSI, NetApp, Oracle, Security, Solaris


Tags

iSCSI, NetApp, Oracle, Security, Solaris

This post demonstrates how to secure iSCSI luns between Oracle Solaris 11 and NetApp storage. Solaris calls it Bidirectional CHAP rather than Mutual CHAP. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple. Research the relationship between Solaris EFI, Solaris VTOC and lun size as well as UFS vs ZFS to make sure you choose the proper type for your environment. This was done with Solaris 11 (11/11) x86. All steps except the fdisk step near the end are the same for SPARC systems.

1) Check for the iSCSI packages. They should be installed by default.
> pkginfo | grep iSCSI
system    SUNWiscsir    Sun iSCSI Device Driver (root)
system    SUNWiscsiu    Sun iSCSI Management Utilities (usr)

2) Make sure the iSCSI service is running on your Solaris host.
> svcs | grep iscsi
online  6:41:58 svc:/network/iscsi/initiator:default

If not, start it.
> svcadm enable svc:/network/iscsi/initiator:default

3) Get your local iSCSI Initiator Node Name or iqn name on the Solaris host.
> iscsiadm list initiator-node | grep iqn
Initiator node name: iqn.1986-03.com.sun:01:e00000000000.52bcad1c

4) Make sure the iscsi service is running on the NetApp.
netapp> iscsi status

5) Create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

6) Create a lun on the volume.
netapp> lun create -s 5g -t solaris_efi /vol/MCHAPVOL/SOL11_iSCSI_MCHAP_01

7) Create an igroup and add the Solaris iscsi node name or iqn from step 3 above to it.
netapp> igroup create -i -t solaris ISCSI_MCHAP_SOL11
netapp> igroup add ISCSI_MCHAP_SOL11 iqn.1986-03.com.sun:01:e00000000000.52bcad1c
netapp> igroup show

ISCSI_MCHAP_SOL11 (iSCSI) (ostype: solaris):
iqn.1986-03.com.sun:01:e00000000000.52bcad1c (not logged in)

8) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/SOL11_iSCSI_MCHAP_01 ISCSI_MCHAP_SOL11 01

Note: Solaris EFI is for larger than 2 TB luns and Solaris VTOC for smaller disks. This lun is small just to demonstrate the configuration.

9) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.4055372815

10) On the Solaris host, configure the target (NetApp controller) to be statically discovered. Note that there are two dashes "--" in front of --static and --sendtargets. For some reason it displays as one dash in some browsers.
> iscsiadm modify discovery --static enable
> iscsiadm modify discovery --sendtargets enable
> iscsiadm add discovery-address 10.10.10.141:3260
> iscsiadm add static-config iqn.1992-08.com.netapp:sn.4055372815,10.10.10.141:3260
> iscsiadm list static-config
Static Configuration Target: iqn.1992-08.com.netapp:sn.4055372815,10.10.10.141:3260

11) Check your discovery methods. Make sure Static and Send Targets are enabled.
> iscsiadm list discovery
Discovery:
Static: enabled
Send Targets: enabled
iSNS: disabled

12) Enable Bidirectional CHAP on the Solaris host for the target NetApp controller.
> iscsiadm modify target-param --authentication CHAP iqn.1992-08.com.netapp:sn.4055372815
> iscsiadm modify target-param -B enable iqn.1992-08.com.netapp:sn.4055372815

13) Set the target device secret key that identifies the target NetApp controller. Note that Solaris supports a minimum of 12 and a maximum of 16 characters for CHAP secrets. Also, there are two dashes "--" in front of --CHAP-secret. You can make up your own secrets.
> iscsiadm modify target-param --CHAP-secret iqn.1992-08.com.netapp:sn.4055372815
Enter secret: NETAPPBICHAP
Re-enter secret: NETAPPBICHAP

14) Set the Solaris host initiator name and CHAP secret. Remember, there are two dashes "--" in front of --CHAP-secret. You can make up your own secrets.
> iscsiadm modify initiator-node --authentication CHAP
> iscsiadm modify initiator-node --CHAP-name iqn.1986-03.com.sun:01:e00000000000.52bcad1c
> iscsiadm modify initiator-node --CHAP-secret
Enter secret: BIDIRCHAPSOL11
Re-enter secret: BIDIRCHAPSOL11
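Because Solaris rejects CHAP secrets outside the 12-16 character range, a quick length check before typing them into iscsiadm can save a failed round trip. A sketch using the two example secrets from this post:

```shell
# Solaris accepts CHAP secrets of 12 to 16 characters. Check the
# lengths of this post's example secrets before feeding them to iscsiadm.
for secret in NETAPPBICHAP BIDIRCHAPSOL11; do
  len=${#secret}
  if [ "$len" -ge 12 ] && [ "$len" -le 16 ]; then
    echo "$secret: ok ($len chars)"
  else
    echo "$secret: out of range ($len chars)"
  fi
done
```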

15) Verify your target parameters. Make sure Bidirectional Authentication is enabled and Authentication type is CHAP.
> iscsiadm list target-param -v iqn.1992-08.com.netapp:sn.4055372815
Target: iqn.1992-08.com.netapp:sn.4055372815
Alias: –
Bi-directional Authentication: enabled
Authentication Type: CHAP
CHAP Name: iqn.1992-08.com.netapp:sn.4055372815
Login Parameters (Default/Configured):
Data Sequence In Order: yes/-
Data PDU In Order: yes/-
Default Time To Retain: 20/-
Default Time To Wait: 2/-
Error Recovery Level: 0/-
First Burst Length: 65536/-
Immediate Data: yes/-
Initial Ready To Transfer (R2T): yes/-
Max Burst Length: 262144/-
Max Outstanding R2T: 1/-
Max Receive Data Segment Length: 8192/-
Max Connections: 65535/-
Header Digest: NONE/-
Data Digest: NONE/-
Tunable Parameters (Default/Configured):
Session Login Response Time: 60/-
Maximum Connection Retry Time: 180/-
Login Retry Time Interval: 60/-
Configured Sessions: 1

16) Set the Bidirectional CHAP secrets on the NetApp controller.
netapp> iscsi security add -i iqn.1986-03.com.sun:01:e00000000000.52bcad1c -s chap -p BIDIRCHAPSOL11 -n iqn.1986-03.com.sun:01:e00000000000.52bcad1c -o NETAPPBICHAP -m iqn.1992-08.com.netapp:sn.4055372815

a) View the iSCSI security configuration.
netapp> iscsi security show
init: iqn.1986-03.com.sun:01:e00000000000.52bcad1c auth: CHAP Local Inbound password: **** Inbound username: iqn.1986-03.com.sun:01:e00000000000.52bcad1c Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.4055372815

17) On the Solaris host, reconfigure the /dev namespace to recognize the iSCSI disk (lun) you just connected.
> devfsadm -i iscsi    (or: devfsadm -Cv -i iscsi)

18) Log in to the server and format the disk. Note: the fdisk command below can be skipped on SPARC systems. Your input is shown at each prompt in the next sequence.
> format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c4t0d0 <VMware-Virtual disk-1.0 cyl 1824 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c5t2d0 <NETAPP-LUN-7350 cyl 2558 alt 2 hd 128 sec 32>
/iscsi/disk@0000iqn.1992-08.com.netapp%3Asn.8416793903E8,1
Specify disk (enter its number): 1
selecting c5t2d0
[disk formatted]
No Solaris fdisk partition found.

FORMAT MENU:
disk       – select a disk
type       – select (define) a disk type
partition  – select (define) a partition table
current    – describe the current disk
format     – format and analyze the disk
fdisk      – run the fdisk program
repair     – repair a defective sector
label      – write label to the disk
analyze    – surface analysis
defect     – defect list management
backup     – search for backup labels
verify     – read and display labels
save       – save new disk/partition definitions
inquiry    – show disk ID
volname    – set 8-character volume name
!<cmd>     – execute <cmd>, then return
quit
format> fdisk   (skip this command if you are on a SPARC system)
No fdisk table exists. The default partition for the disk is:

a 100% “SOLARIS System” partition

Type “y” to accept the default partition,  otherwise type “n” to edit the
partition table.
y

format> p

PARTITION MENU:
0      – change `0' partition
1      – change `1' partition
2      – change `2' partition
3      – change `3' partition
4      – change `4' partition
5      – change `5' partition
6      – change `6' partition
7      – change `7' partition
select – select a predefined table
modify – modify a predefined partition table
name   – name the current table
print  – display the current table
label  – write partition map and label to the disk
!<cmd> – execute <cmd>, then return
quit
partition> p
Current partition table (default):
Total disk cylinders available: 2557 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
0 unassigned    wm       0               0         (0/0/0)           0
1 unassigned    wm       0               0         (0/0/0)           0
2     backup    wu       0 – 2556        4.99GB    (2557/0/0) 10473472
3 unassigned    wm       0               0         (0/0/0)           0
4 unassigned    wm       0               0         (0/0/0)           0
5 unassigned    wm       0               0         (0/0/0)           0
6 unassigned    wm       0               0         (0/0/0)           0
7 unassigned    wm       0               0         (0/0/0)           0
8       boot    wu       0 –    0        2.00MB    (1/0/0)        4096
9 unassigned    wm       0               0         (0/0/0)           0

partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]: <press enter>
Enter partition permission flags[wm]: <press enter>
Enter new starting cyl[0]: <press enter>
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 4.99gb

partition> l     (This is a lowercase "L", not the numeral 1. This step labels the disk.)
Ready to label disk, continue? y

partition> q

format> q

19) Create the file system. You can choose either UFS or ZFS. Both options are shown below.

a) If you will use UFS:
> newfs -Tv /dev/rdsk/c5t2d0s0
newfs: construct a new file system /dev/rdsk/c5t2d0s0: (y/n)? y
mkfs -F ufs /dev/rdsk/c5t2d0s0 10465280 32 128 8192 8192 -1 1 250 1048576 t 0 -1 8 128 y
/dev/rdsk/c5t2d0s0:     10465280 sectors in 2555 cylinders of 128 tracks, 32 sectors
5110.0MB in 18 cyl groups (149 c/g, 298.00MB/g, 320 i/g)
super-block backups (for fsck -F ufs -o b=#) at: 32, 610368, 1220704, 1831040, 2441376,
3051712, 3662048, 4272384, 4882720, 5493056,
6103392, 6713728, 7324064, 7934400, 8544736, 9155072, 9765408, 10375744

> fsck /dev/rdsk/c5t2d0s0
> mkdir /old_ufs_filesystem
> mount /dev/dsk/c5t2d0s0 /old_ufs_filesystem
> vi /etc/vfstab and add the line below to the bottom of the file. This mounts the file system when the system boots.
/dev/dsk/c5t2d0s0 /dev/rdsk/c5t2d0s0 /old_ufs_filesystem  ufs  2 yes -
> :wq! (to save and exit the vi session)

b) Check the new mount.
> df -h | grep old_ufs_filesystem
/dev/dsk/c5t2d0s0      5.0G  5.0M 4.9G 1% /old_ufs_filesystem

20) If you will use ZFS:
a) Create a pool.
> zpool create -f netappluns c5t2d0s0

b) Create the filesystem.
> zfs create netappluns/fs

c) List the new filesystem.
> zfs list -r netappluns
NAME           USED  AVAIL  REFER  MOUNTPOINT
netappluns     124K  4.89G    32K  /netappluns
netappluns/fs   31K  4.89G    31K  /netappluns/fs

d) Use the legacy display method.
> df -h | grep netappluns
netappluns       4.9G    32K   4.9G   1%    /netappluns
netappluns/fs    4.9G    31K   4.9G   1%    /netappluns/fs

21) You are done. Hope this helps.

Install and Configure the NetApp FAS/V-Series VASA Provider v1.0.1 for vSphere

01 Wednesday Jan 2014

Posted by Slice2 in NetApp, VMware


Tags

NetApp, VASA, VMware

The FAS/V-Series VASA Provider is a software component that supports the VMware VASA (vStorage APIs for Storage Awareness) framework, first introduced in vSphere 5. It acts as an information pipeline between NetApp storage systems and the vCenter Server, enabling you to monitor relevant storage system status.
FAS/V-Series VASA Provider collects data from your storage systems and delivers information about storage topology, LUN and volume attributes, and events and alarms to the vCenter Server.

1) Download the VASA Provider at the following URL:
http://support.netapp.com/NOW/download/software/vasa_win/1.0.1/

2) After it is downloaded, move the VASA provider to your server. Double-click netappvp-1-0-1-win64.exe > click Next > Next > Install. Make sure Launch VASA Configuration is selected and click Finish.

a) On the VASA Configuration window, in the upper left, enter your vCenter user and password and click Save.

b) On the right under Storage Systems, click Add. Add your storage systems that provide NFS or VMFS datastores to vCenter.

c) Under VMware vCenter, enter your vCenter FQDN, user and password and click Register Provider. Click OK on the VASA Provider Has Been Registered pop-up window, then click OK to close the VASA Configuration window. Make sure the NetApp VASA Provider service is running by checking that its status light is green. If it isn't, manually start the service via Start > Run > services.msc.

vasa-00

3) Log out of vCenter and then log back in.

4) In vCenter, click View > Administration > Storage Providers. Under Vendor Providers, select NVP and the details will display below.

vasa-01
5) Click your Cluster > select the Storage Views tab. On the upper left of the Storage View tab next to View, click Maps. Your NetApp luns will be identified as such.

vasa-02
6) To see all of the storage profiles, click View > Management > VM Storage Profiles. At the top in the middle, select Manage Storage Capabilities. The items with type "System" are now available. You are using Storage Profiles, right?

vasa-03
7) Click View > Inventory > Datastores and Clusters. Select an NFS or VMFS Datastore provided by the NetApp array. On the Summary tab under Storage Capabilities, you should see System Storage Capability for that datastore. If you click the little blue call-out icon, the Storage Capabilities Details pop-up window appears.

vasa-04

HOWTO Secure iSCSI Luns Between Oracle Solaris 10 and NetApp Storage Using Bidirectional CHAP

27 Friday Dec 2013

Posted by Slice2 in iSCSI, NetApp, Oracle, Security, Solaris


Tags

iSCSI, NetApp, Oracle, Security, Solaris

This post demonstrates how to secure iSCSI luns between Oracle Solaris 10 and NetApp storage. Solaris calls it Bidirectional CHAP rather than Mutual CHAP. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple. Research the relationship between Solaris EFI, Solaris VTOC and lun size as well as UFS vs ZFS to make sure you choose the proper type for your environment. This was done with Solaris 10 x86. All steps except the fdisk step near the end are the same for SPARC systems.

1) You need to be running at least the Solaris 10 1/06 release. To verify, check your release file.
> cat /etc/release
Oracle Solaris 10 8/11 s10x_u10wos_17b X86

2) Check for the iSCSI packages.
> pkginfo | grep iSCSI
system    SUNWiscsir    Sun iSCSI Device Driver (root)
system    SUNWiscsiu    Sun iSCSI Management Utilities (usr)

a) For reference the iSCSI target packages are listed below. You don’t need them for this HOWTO.
SUNWiscsitgtr    Sun iSCSI Target (Root)
SUNWiscsitgtu    Sun iSCSI Target (Usr)

3) If not installed, mount the Solaris 10 DVD and install the packages. Note the SPARC path will be different: sol_10_811_sparc
If the DVD doesn’t mount automatically:
> mount -F hsfs /dev/dsk/c0t2d0s2 /mnt
> cd /mnt/sol_10_811_x86/Solaris_10/Product
If it does:
> cd /cdrom/sol_10_811_x86/Solaris_10/Product
> /usr/sbin/pkgadd -d . SUNWiscsir
> /usr/sbin/pkgadd -d . SUNWiscsiu

4) Make sure the iSCSI service is running on your Solaris host.
> svcs | grep iscsi
online  6:41:58 svc:/network/iscsi/initiator:default

If not, start it.
> svcadm enable svc:/network/iscsi/initiator:default

5) Get your local iSCSI Initiator Node Name or iqn name on the Solaris host.
> iscsiadm list initiator-node | grep iqn
Initiator node name: iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9

6) Make sure the iscsi service is running on the NetApp.
netapp> iscsi status
If not, start it (You need a license for iscsi. Check with the license command.)
netapp> iscsi start

7) Create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

8) Create a lun on the volume.
netapp> lun create -s 5g -t solaris_efi /vol/MCHAPVOL/SOL10_iSCSI_MCHAP_01

9) Create an igroup and add the Solaris iscsi node name or iqn from step 5 above to it.
netapp> igroup create -i -t solaris ISCSI_MCHAP_SOL10
netapp> igroup add ISCSI_MCHAP_SOL10 iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9
netapp> igroup show

ISCSI_MCHAP_SOL10 (iSCSI) (ostype: solaris):
iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 (not logged in)

10) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/SOL10_iSCSI_MCHAP_01 ISCSI_MCHAP_SOL10 01

Note: Solaris uses EFI labels for luns larger than 2 TB and VTOC labels for smaller disks. This lun is small just to demonstrate the configuration.

11) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939
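Typos in either IQN are the most common cause of a failed CHAP login, so it can be worth a quick shape check before pasting names into the commands below. This small helper is my own sketch, not a Solaris or NetApp utility; it only verifies the iqn.YYYY-MM.<reversed-domain> naming pattern:

```shell
# Hypothetical helper (not part of the original post): loose sanity check
# that a string follows the iqn.YYYY-MM.reversed.domain[:identifier] format
# used by both the Solaris initiator and the NetApp target in this HOWTO.
is_iqn() {
  case "$1" in
    iqn.[0-9][0-9][0-9][0-9]-[0-9][0-9].*) echo "looks like an IQN" ;;
    *) echo "not an IQN"; return 1 ;;
  esac
}
is_iqn "iqn.1992-08.com.netapp:sn.84167939"   # the target nodename above
```

It will not catch every possible typo, but it flags truncated pastes and names missing the `iqn.` date prefix.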

12) On the Solaris host, configure the target (NetApp controller) to be statically discovered. Note that the discovery options below take two leading dashes (--); some browsers render them as a single long dash, so type them carefully.
> iscsiadm modify discovery --static enable
> iscsiadm modify discovery --sendtargets enable
> iscsiadm add discovery-address 10.10.10.11:3260
> iscsiadm add static-config iqn.1992-08.com.netapp:sn.84167939,10.10.10.11:3260
> iscsiadm list static-config
Static Configuration Target: iqn.1992-08.com.netapp:sn.84167939,10.10.10.11:3260

13) Check your discovery methods. Make sure Static and Send Targets are enabled.
> iscsiadm list discovery
Discovery:
Static: enabled
Send Targets: enabled
iSNS: disabled

14) Enable Bidirectional CHAP on the Solaris host for the target NetApp controller. Again, --authentication takes two leading dashes.
> iscsiadm modify target-param --authentication CHAP iqn.1992-08.com.netapp:sn.84167939
> iscsiadm modify target-param -B enable iqn.1992-08.com.netapp:sn.84167939

15) Set the target device secret key that identifies the target NetApp controller. Note that Solaris supports a minimum of 12 and a maximum of 16 characters for CHAP secrets. You can make up your own secrets.
> iscsiadm modify target-param --CHAP-secret iqn.1992-08.com.netapp:sn.84167939
Enter secret: NETAPPBICHAP
Re-enter secret: NETAPPBICHAP

16) Set the Solaris host initiator name and CHAP secret. You can make up your own secrets.
> iscsiadm modify initiator-node --authentication CHAP
> iscsiadm modify initiator-node --CHAP-name iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9
> iscsiadm modify initiator-node --CHAP-secret
Enter secret: BIDIRCHAPSOL10
Re-enter secret: BIDIRCHAPSOL10
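Solaris rejects secrets outside the 12-16 character range, so checking the length before you type a secret twice saves a round trip. This is a hypothetical helper of my own, not an iscsiadm feature:

```shell
# Verify a CHAP secret meets the Solaris limit of 12 to 16 characters
# before committing it with iscsiadm (helper is my own sketch).
check_chap_secret() {
  len=${#1}
  if [ "$len" -ge 12 ] && [ "$len" -le 16 ]; then
    echo "OK ($len chars)"
  else
    echo "INVALID ($len chars): Solaris wants 12-16"
    return 1
  fi
}
check_chap_secret "BIDIRCHAPSOL10"   # the initiator secret used above
```

Run it against both secrets (the initiator's and the target's) since each must independently satisfy the limit.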

17) Verify your target parameters. Make sure Bidirectional Authentication is enabled and Authentication type is CHAP.
> iscsiadm list target-param -v iqn.1992-08.com.netapp:sn.84167939
Target: iqn.1992-08.com.netapp:sn.84167939
Alias: -
Bi-directional Authentication: enabled
Authentication Type: CHAP
CHAP Name: iqn.1992-08.com.netapp:sn.84167939
Login Parameters (Default/Configured):
Data Sequence In Order: yes/-
Data PDU In Order: yes/-
Default Time To Retain: 20/-
Default Time To Wait: 2/-
Error Recovery Level: 0/-
First Burst Length: 65536/-
Immediate Data: yes/-
Initial Ready To Transfer (R2T): yes/-
Max Burst Length: 262144/-
Max Outstanding R2T: 1/-
Max Receive Data Segment Length: 8192/-
Max Connections: 1/-
Header Digest: NONE/-
Data Digest: NONE/-
Tunable Parameters (Default/Configured):
Session Login Response Time: 60/-
Maximum Connection Retry Time: 180/-
Login Retry Time Interval: 60/-
Configured Sessions: 1

18) Set the Bidirectional CHAP secrets on the NetApp controller.
netapp> iscsi security add -i iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 -s chap -p BIDIRCHAPSOL10 -n iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 -o NETAPPBICHAP -m iqn.1992-08.com.netapp:sn.84167939

a) View the iSCSI security configuration.
netapp> iscsi security show
init: iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 auth: CHAP Inbound password: **** Inbound username: iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

19) On the Solaris host, reconfigure the /dev namespace to recognize the iSCSI disk (lun) you just connected.
> devfsadm -i iscsi (or devfsadm -Cv -i iscsi to also clean up stale device links with verbose output)

20) Verify CHAP configuration on the server. Restart the server and you should see the iSCSI session on the NetApp console.
> reboot

a) As the server boots, on the NetApp console you should see the following message:
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1986-03.com.sun:01:ea2fccf7ffff.52b894f9 at IP addr 10.10.10.188

21) Log in to the server and format the disk. Note: the fdisk command below can be skipped on SPARC systems. Your input follows each prompt in the sequence below.
> format
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c2t2d0 <DEFAULT cyl 2557 alt 2 hd 128 sec 32>
/iscsi/disk@0000iqn.1992-08.com.netapp%3Asn.8416793903E8,1
Specify disk (enter its number): 1
selecting c2t2d0
[disk formatted]

FORMAT MENU:
disk       - select a disk
type       - select (define) a disk type
partition  - select (define) a partition table
current    - describe the current disk
format     - format and analyze the disk
fdisk      - run the fdisk program
repair     - repair a defective sector
label      - write label to the disk
analyze    - surface analysis
defect     - defect list management
backup     - search for backup labels
verify     - read and display labels
save       - save new disk/partition definitions
inquiry    - show vendor, product and revision
volname    - set 8-character volume name
!<cmd>     - execute <cmd>, then return
quit

format> fdisk   (Note: this command is only necessary on x86 systems. If you are on SPARC, skip to the next step.)
No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y

22) Partition the disk:

format> p

PARTITION MENU:
0      - change `0' partition
1      - change `1' partition
2      - change `2' partition
3      - change `3' partition
4      - change `4' partition
5      - change `5' partition
6      - change `6' partition
7      - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p

Current partition table (original):
Total disk cylinders available: 2556 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
0 unassigned    wm       0               0         (0/0/0)           0
1 unassigned    wm       0               0         (0/0/0)           0
2     backup    wu       0 - 2555        4.99GB    (2556/0/0) 10469376
3 unassigned    wm       0               0         (0/0/0)           0
4 unassigned    wm       0               0         (0/0/0)           0
5 unassigned    wm       0               0         (0/0/0)           0
6 unassigned    wm       0               0         (0/0/0)           0
7 unassigned    wm       0               0         (0/0/0)           0
8       boot    wu       0 -    0        2.00MB    (1/0/0)        4096
9 unassigned    wm       0               0         (0/0/0)           0

partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]: <press enter>
Enter partition permission flags[wm]: <press enter>
Enter new starting cyl[0]: <press enter>
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 4.99gb

partition> l     (This is a lowercase "L", not the numeral 1. This step labels the disk.)
Ready to label disk, continue? y

partition> q

format> q
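As a sanity check on the label you just wrote, the Blocks column in the partition table is simply cylinders x tracks x sectors-per-track. For the 2556-cylinder backup slice of this lun:

```shell
# Blocks = cylinders x tracks(heads) x sectors-per-track.
# Slice 2 (backup) spans 2556 cylinders on a 128-track, 32-sector geometry:
echo $((2556 * 128 * 32))   # -> 10469376, matching the table above
```

If your computed value does not match the Blocks column, re-check the partition size you entered before creating a file system.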

23) Create the file system. You can choose either UFS or ZFS. Both options are shown below.

a) If you will use UFS:
> newfs -Tv /dev/rdsk/c2t2d0s0
newfs: construct a new file system /dev/rdsk/c2t2d0s0: (y/n)? y
pfexec mkfs -F ufs /dev/rdsk/c2t2d0s0 10465280 32 128 8192 8192 -1 1 250 1048576 t 0 -1 8 128 y
/dev/rdsk/c2t2d0s0: 10465280 sectors in 2555 cylinders of 128 tracks, 32 sectors
5110.0MB in 18 cyl groups (149 c/g, 298.00MB/g, 320 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 610368, 1220704, 1831040, 2441376, 3051712, 3662048, 4272384, 4882720,
5493056, 6103392, 6713728, 7324064, 7934400, 8544736, 9155072, 9765408, 10375744

> fsck /dev/rdsk/c2t2d0s0
> mkdir /old_ufs_filesystem
> mount /dev/dsk/c2t2d0s0 /old_ufs_filesystem
> vi /etc/vfstab and add the line below to the bottom of the file. This mounts the file system when the system boots.
/dev/dsk/c2t2d0s0 /dev/rdsk/c2t2d0s0 /old_ufs_filesystem  ufs  2 yes -
> :wq! (to save and exit the vi session)
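A malformed vfstab line can stall the next boot, so a quick field count is cheap insurance. This awk one-liner is my addition, not part of the original procedure; it verifies the seven fields Solaris expects:

```shell
# vfstab fields: device-to-mount device-to-fsck mount-point fstype
#                fsck-pass mount-at-boot options (exactly seven).
entry='/dev/dsk/c2t2d0s0 /dev/rdsk/c2t2d0s0 /old_ufs_filesystem ufs 2 yes -'
echo "$entry" | awk 'NF == 7 { print "vfstab entry OK" }
                     NF != 7 { print "bad field count: " NF }'
```

Fields that should be empty must hold a literal "-" (like the options field here) or the column count, and therefore the parse, goes wrong.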

b) Check the new mount.
> df -h | grep old_ufs_filesystem
/dev/dsk/c2t2d0s0  4.9G 5.0M 4.9G 1% /old_ufs_filesystem

24) If you will use ZFS:
a) Create a pool.
> zpool create -f netappluns c2t2d0

b) Create the filesystem.
> zfs create netappluns/fs

c) List the new filesystem.
> zfs list -r netappluns
NAME            USED  AVAIL  REFER  MOUNTPOINT
netappluns      131K  4.89G    31K  /netappluns
netappluns/fs    31K  4.89G    31K  /netappluns/fs

d) Use the legacy display method.
> df -h | grep netappluns
netappluns             4.9G    32K   4.9G     1%    /netappluns
netappluns/fs          4.9G    31K   4.9G     1%    /netappluns/fs

25) You are done. Hope this helps.

HOWTO Change the Default Browser in OnCommand System Manager v3

26 Thursday Dec 2013

Posted by Slice2 in NetApp


Tags

NetApp

You may not realize it, but you can change the browser used by System Manager v3. It defaults to Internet Explorer. Many companies lock IE down so hard with Group Policy that it breaks System Manager. To change it, perform the following.

1) Launch System Manager. Once it's up, do not log in to any of your controllers.
2) In the upper left, click Tools > Options.

ocsm-01
3) In the Browser Path section, enter the path to the browser you prefer and click Save and Close. Launch System Manager again and it will use the new browser.

ocsm-02
4) The browsers I’ve tested are as follows on Windows 7 x64.

Opera 18
C:\Program Files (x86)\Opera\launcher.exe
http://www.opera.com/computer

Firefox 26 (OCSM 3.1RC1 is buggy with this version of Firefox)
C:\Program Files (x86)\Mozilla Firefox\firefox.exe (for x64)
C:\Program Files\Mozilla Firefox\firefox.exe (for x32)
http://www.mozilla.org/en-US/

Chrome 31
C:\Users\cdm\AppData\Local\Google\Chrome\Application\chrome.exe
https://www.google.com/intl/en/chrome/browser/

 

Using HFS Standalone Web Server to Upgrade NetApp Data ONTAP and SP Firmware

23 Monday Dec 2013

Posted by Slice2 in Linux, NetApp, Windows


Tags

Linux, NetApp, Windows

For a while I have been using XAMPP as my go-to quick-and-easy web server to temporarily serve files like ONTAP or SP firmware upgrades. It's easy to use and always works. Then there was Z-WAMP, which was great because it was zero install. Again, easy to use and always works. The problem was they also carried the extra baggage of PHP, MySQL, etc. All I needed was a simple HTTP instance. And then I found HFS, which stands for HTTP File Server. It's simple, incredibly small, very portable, very easy to use, and is a standalone executable. No installation. Just double-click hfs.exe and you are ready to go.

HFS also works perfectly on Linux using Wine 1.4 and later. When you run > wine hfs.exe you will be prompted to download the Wine Gecko option; just click Cancel to continue.

From a NetApp perspective, it's perfect for updating Data ONTAP and SP firmware over the network, especially for shops that don't run CIFS or NFS, or where your Security overlords won't allow you to NFS export and mount the root volume. I run HFS from my OnCommand Unified Manager server, where I have all of my NetApp tools and utilities installed.

Download HFS here:
http://www.rejetto.com/hfs/?f=dl
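If you have no Windows box (or Wine) handy, Python 3's standard library offers the same zero-install trick. This is a sketch, not the method used in this post: the directory and image name below are stand-ins, and the demo serves a scratch directory on the post's example port 8082.

```shell
# Zero-install alternative to HFS: Python 3's stdlib serves the current
# directory over HTTP, which is all `software update` needs. In practice:
#   cd /sw/ontap && python3 -m http.server 8082        # path is hypothetical
# Self-contained demonstration against a scratch directory:
dir=$(mktemp -d)
echo "fake-ontap-image" > "$dir/814_q_image.tgz"
( cd "$dir" && exec python3 -m http.server 8082 ) >/dev/null 2>&1 &
SRV=$!
sleep 1
body=$(python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8082/814_q_image.tgz').read().decode().strip())")
kill "$SRV"
echo "$body"
```

The same firewall and port-conflict notes below apply regardless of which server you use.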

1) To start, double-click hfs.exe.
a) Select No when asked to add it to your right-click menu (unless you really want to).
b) If you need to change the default port 80, perform this step. If not, skip it. In the upper left, click the Port: 80 button and change it to something like 8082. Click OK.
Notes:
– Depending on how your NetApp applications are deployed, port 80 will probably be taken. A simple port change avoids conflicts. Don't forget to create a firewall rule if you use a non-standard port.
– If you are running this from a laptop or server without other apps using port 80, then it's probably safe to leave it on port 80.
– If you click the "You Are in Easy Mode" button to change it to "Expert Mode," you get additional transfer details. It's up to you.
c) Copy the downloaded version of Data ONTAP you will be upgrading to onto the server where you are running HFS.
d) In the HFS window on the upper left under the house/ icon, right-click and select Add files.

hfs01
e) Browse to the Data ONTAP file and select Open. It will now be listed under the home root /. Note that you can also drag and drop the file into this window.

hfs02

2) On the NetApp controller, if not already done, create the software directory and then verify your version and backup kernel.
netapp> software
netapp> software list
netapp> version
netapp> version -b

3) Download and install the Data ONTAP image from your HFS instance. Note the :8082 port definition in the URL below. If you changed it to something other than the default port 80, you must change it on the command line as well. If not, the default port 80 is correct.
netapp> software update http://10.10.10.81:8082/814_q_image.tgz

software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
software: copying to 814_q_image.tgz
software: 5% file read from location.

<And that’s it. Output of the update truncated to shorten the post>
