Category Archives: NetApp

New NetApp Releases: ConfigAdvisor, ONTAP Powershell Toolkit, VASA Provider, NFS Plug-in for VAAI, Storage Replicator for VMware SRM, Linux Host Validator, OSM for OSX, SnapManager for SharePoint

22 Sunday Dec 2013

Posted by Slice2 in NetApp


Tags

Linux, NetApp, VMware, Windows

ConfigAdvisor v3.4
Config Advisor is a configuration validation and health check tool for NetApp systems. It can be deployed at both secure sites and non-secure sites for data collection and analysis. Config Advisor can be used to check a NetApp system or FlexPod for the correctness of hardware installation and conformance to NetApp recommended settings. It collects data and runs a series of commands on the hardware, then checks for cabling, configuration, availability and best practice issues.
http://support.netapp.com/NOW/download/tools/config_advisor/download.shtml

Data ONTAP Powershell ToolKit v3.0.1
The Data ONTAP PowerShell Toolkit is a PowerShell module containing over 1300 cmdlets enabling the storage administration of NetApp controllers via ZAPI.  Full cmdlet sets are available for both 7-mode and clustered Data ONTAP.  The Toolkit also contains several cmdlets aimed at storage administration on the Windows host, including:  creating virtual disks, resizing virtual disks, reclaiming space in virtual disks, copying files, deleting files, reclaiming space on host volumes, and much more.
http://support.netapp.com/NOW/download/tools/powershell_toolkit/download.shtml

NetApp FAS/V-Series VASA Provider v1.0.1
NetApp FAS/V-Series VASA Provider for Data ONTAP operating in 7-Mode is a software component that supports the VMware VASA (vStorage APIs for Storage Awareness) framework, first introduced in vSphere 5. It acts as an information pipeline between NetApp storage systems and vCenter Server, enabling you to monitor relevant storage system status by collecting data such as the following:
1) Storage system topology
2) LUN and volume attributes
3) Events and alarms
http://support.netapp.com/NOW/download/software/vasa_win/1.0.1/download.shtml

NetApp NFS Plug-in v1.0.20 for VMware VAAI
The plug-in runs on the ESXi host and takes advantage of enhanced storage features offered by VMware vSphere. On the NetApp storage system, the NFS vStorage feature must be enabled for the ESXi host to take advantage of VMware VAAI. For details about enabling VMware vStorage over NFS, see the Data ONTAP 8.1 File Access and Protocols Management Guide For 7-Mode and the Clustered Data ONTAP File Access and Protocols Management Guide. The plug-in performs NFS-like remote procedure calls (RPCs) to the server, using the same credentials as that of an ESXi NFS client. This means that the plug-in requires no additional credentials and has the same access rights as the ESXi NFS client.
http://support.netapp.com/NOW/download/software/nfs_plugin_vaai/1.0.20/download.shtml

NetApp FAS/V-Series Storage Replication Adapter 2.1 for VMware vCenter Site Recovery Manager
NetApp FAS/V-Series Storage Replication Adapter for Data ONTAP operating in 7-Mode is a storage vendor specific plug-in to VMware vCenter Site Recovery Manager that enables interaction between Site Recovery Manager and the storage controller. The adapter interacts with the storage controller on behalf of Site Recovery Manager to discover storage arrays and their associated datastores and RDM devices, which are connected to vSphere. The adapter manages failover and test-failover of the virtual machines associated with these storage objects.
http://support.netapp.com/NOW/download/software/sra_7mode/2.1/download.shtml

Linux Host Validator and Configurator v1.0
The Config Validator tool can be used to validate Linux host settings in a NetApp SAN environment and change or configure them if necessary. The tool validates storage-stack settings such as DM-Multipath, iSCSI settings and HBA parameters on hosts connected to NetApp storage controllers running 7-Mode Data ONTAP or clustered Data ONTAP. Unfortunately, it's Red Hat-only at this time.
http://support.netapp.com/NOW/download/tools/config_validator/download.shtml

OnCommand System Manager v3.1RC1 for Mac OSX
System Manager is a graphical management interface that enables you to manage storage systems and storage objects (such as disks, volumes, and aggregates) from a web browser.
http://support.netapp.com/NOW/download/tools/ocsm/download.shtml

SnapManager for SharePoint v8.0.1
SnapManager for Microsoft SharePoint is an enterprise-strength backup, recovery, and data management solution for Microsoft SharePoint 2013 and SharePoint 2010. SnapManager 8.0 for Microsoft SharePoint includes the following highlighted new features:
1) Clustered Data ONTAP 8.2 support
2) SnapVault integration using SnapManager 7.0 for SQL Server in clustered Data ONTAP
3) SnapManager for Microsoft SharePoint Manager role based access control (RBAC) support
4) SharePoint content database cloning
5) Complete backup and restore support for all SharePoint 2013 objects
http://support.netapp.com/NOW/download/software/snapmanager_sharepoint/8.0.1/

HOWTO Secure iSCSI Luns Between Oracle Enterprise Linux 6.5 and NetApp Storage with Mutual CHAP

14 Saturday Dec 2013

Posted by Slice2 in Linux, NetApp, Oracle

≈ Leave a comment

Tags

Linux, NetApp, Oracle, Security

This post demonstrates how to enable bidirectional or mutual CHAP on iSCSI luns between Oracle Enterprise Linux 6 update 5 and NetApp storage. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple.

1) Install open-iscsi on your server.
> yum install iscsi-initiator*
> reboot (don’t argue with me, just do it!)

2) Display your server’s new iscsi initiator or iqn nodename.
> cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:523325af23
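If you want a fresh or more recognizable initiator name, you can generate one with iscsi-iname (part of iscsi-initiator-utils) and write it into the file. A quick sketch; the -p prefix and the generated suffix below are just examples:

> iscsi-iname -p iqn.1988-12.com.oracle
iqn.1988-12.com.oracle:8a9b0c1d2e3f
> vi /etc/iscsi/initiatorname.iscsi  (set InitiatorName= to the new value, then restart the iscsi service)

If you do change it, use the new iqn in the igroup and CHAP steps below.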

3) On the NetApp filer, create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

4) Create the lun in the volume.
netapp> lun create -s 5g -t linux /vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01

5) Create an igroup and add the Oracle Enterprise Linux iscsi nodename or iqn from step 2 above to it.
netapp> igroup create -i -t linux ISCSI_MCHAP_OEL6u5
netapp> igroup add ISCSI_MCHAP_OEL6u5 iqn.1988-12.com.oracle:523325af23
netapp> igroup show ISCSI_MCHAP_OEL6u5
ISCSI_MCHAP_OEL6u5 (iSCSI) (ostype: linux):
iqn.1988-12.com.oracle:523325af23 (not logged in)

6) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01 ISCSI_MCHAP_OEL6u5 01

7) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939

8) Set the CHAP secret on the NetApp controller.
netapp> iscsi security add -i iqn.1988-12.com.oracle:523325af23 -s chap -p MCHAPOEL6u5 -n iqn.1988-12.com.oracle:523325af23 -o NETAPPMCHAP -m iqn.1992-08.com.netapp:sn.84167939
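To make the flag pairing easier to see, here is the same command as a template (my reading of the 7-Mode syntax; verify with iscsi security help on your release). The -p/-n pair is the inbound secret the host must present to the filer, and the -o/-m pair is the outbound secret the filer presents back to the host during mutual CHAP:

netapp> iscsi security add -i <host iqn> -s chap -p <inbound password> -n <host iqn> -o <outbound password> -m <target iqn>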

netapp> iscsi security show
init: iqn.1988-12.com.oracle:523325af23 auth: CHAP Inbound password: **** Inbound username: iqn.1988-12.com.oracle:523325af23 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

9) On the server, edit your /etc/iscsi/iscsid.conf file and set the parameters below.
> vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1988-12.com.oracle:523325af23
node.session.auth.password = MCHAPOEL6u5
node.session.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
node.session.auth.password_in = NETAPPMCHAP
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1988-12.com.oracle:523325af23
discovery.sendtargets.auth.password = MCHAPOEL6u5
discovery.sendtargets.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
discovery.sendtargets.auth.password_in = NETAPPMCHAP
> wq!
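Before restarting the service, it's worth a quick sanity check that all ten auth lines took. A minimal sketch:

> grep -E '^(node|discovery).*auth' /etc/iscsi/iscsid.conf

You should see the five node.session.auth.* lines and the five discovery.sendtargets.auth.* lines exactly as entered above.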

10) On the server, restart the service and discover your iSCSI target (your storage system).
> service iscsi restart
> iscsiadm -m discovery -t st -p 10.10.10.11
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

> iscsiadm -m node  (this should display the same as above)
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

11) On the server, manually log in to the iSCSI target (your storage array). Note there are two dashes in front of --login; depending on the font it can look like one.
> iscsiadm -m node -T "iqn.1992-08.com.netapp:sn.84167939" --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.

Verify the iSCSI session on the filer:
netapp> iscsi session show
Session 10
Initiator Information
Initiator Name: iqn.1988-12.com.oracle:523325af23
ISID: 00:02:3d:01:00:00
Initiator Alias: oel6u5

12) Stop and start the iscsi service on the server.
> service iscsi stop
Pause for 10 seconds and then run the next command.
> service iscsi start

13) From the server, check your session.
> iscsiadm -m session -P 1
Target: iqn.1992-08.com.netapp:sn.84167939
Current Portal: 10.10.10.11:3260,1000
Persistent Portal: 10.10.10.11:3260,1000
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1988-12.com.oracle:523325af23
Iface IPaddress: 10.10.10.93
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

14) From the server, check the NetApp iSCSI details. Note there are two dashes in front of --mode, --targetname and --portal; sometimes it looks like one.
> iscsiadm --mode node --targetname "iqn.1992-08.com.netapp:sn.84167939" --portal 10.10.10.11:3260
# BEGIN RECORD 6.2.0-873.10.el6
node.name = iqn.1992-08.com.netapp:sn.84167939
node.tpgt = 1000
node.startup = automatic
node.leading_login = No
iface.hwaddress = <empty>
iface.ipaddress = <empty>
iface.iscsi_ifacename = default
<output truncated to keep the post short>

15) From the server, find and format the new lun (new disk). At the fdisk prompts, enter the single-letter commands shown below.
> cat /var/log/messages | grep "unknown partition table"
Dec 14 08:55:02 oel6u5 kernel: sdb: unknown partition table

> fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x54ac8aa4.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

> fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): c
DOS Compatibility flag is not set

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): n
Command action
e   extended
p   primary partition (1-4) <press the P key>
p
Partition number (1-4): 1
First sector (2048-10485759, default 2048): <press enter>
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): <press enter>
Using default value 10485759

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes
166 heads, 62 sectors/track, 1018 cylinders, total 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x54ac8aa4

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

16) On the server, create the Linux file system on the new partition.
> mkfs -t ext4 /dev/sdb1
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310464 blocks
65523 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

17) Verify the partition.
> blkid /dev/sdb1
/dev/sdb1: UUID="1a6e2a56-924f-4e3b-b281-ded3a3141ab4" TYPE="ext4"

18) Create the mount point and manually mount the directory.
> mkdir /newiscsilun
> mount /dev/sdb1 /newiscsilun
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  4.8G  10M  4.6G   1% /newiscsilun

19) Add the new mount point to /etc/fstab.
> vi /etc/fstab
/dev/sdb1 /newiscsilun ext4 _netdev 0 0
> wq!

Note: the _netdev option is important so that it doesn’t try mounting the target before the network is available.
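Device names like /dev/sdb can shift between boots once more than one lun is attached, so a more robust variant is to mount by the UUID you verified in step 17:

UUID=1a6e2a56-924f-4e3b-b281-ded3a3141ab4 /newiscsilun ext4 _netdev 0 0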

20) Test that it survives a reboot by rebooting the server. With _netdev set, iscsi starts and your CHAP logins should take place before the mount is attempted.
> reboot

When done rebooting, log in and verify the lun is mounted.
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1  4.8G  10M  4.6G   1% /newiscsilun

21) On the server you can check session stats.
> iscsiadm -m session -s
Stats for session [sid: 1, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260]
iSCSI SNMP:
txdata_octets: 31204
rxdata_octets: 917992
noptx_pdus: 0
scsicmd_pdus: 270
tmfcmd_pdus: 0
login_pdus: 0
text_pdus: 0
dataout_pdus: 0
logout_pdus: 0
snack_pdus: 0
noprx_pdus: 0
scsirsp_pdus: 270
tmfrsp_pdus: 0
textrsp_pdus: 0
datain_pdus: 242
logoutrsp_pdus: 0
r2t_pdus: 0
async_pdus: 0
rjt_pdus: 0
digest_err: 0
timeout_err: 0
iSCSI Extended:
tx_sendpage_failures: 0
rx_discontiguous_hdr: 0
eh_abort_cnt: 0

22) As root, change permissions on /etc/iscsi/iscsid.conf. I'm not sure why the clear-text CHAP passwords in this file haven't been addressed, so just make sure only root can read and write it.
> chmod 600 /etc/iscsi/iscsid.conf

23) On the NetApp storage you can verify the Lun and the server’s session.
>  lun show -v /vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01
/vol/MCHAPVOL/OEL6u5_iSCSI_MCHAP_01      5g (5368709120)    (r/w, online, mapped)
Serial#: hoagPJvLcRy6
Share: none
Space Reservation: enabled (not honored by containing Aggregate)
Multiprotocol Type: linux
Maps: ISCSI_MCHAP_OEL6u5=1

>  iscsi session show -v
Session 12
Initiator Information
Initiator Name: iqn.1988-12.com.oracle:523325af23
ISID: 00:02:3d:01:00:00
Initiator Alias: oel6u5

Session Parameters
SessionType=Normal
TargetPortalGroupTag=1000
MaxConnections=1
ErrorRecoveryLevel=0
AuthMethod=CHAP
HeaderDigest=None
DataDigest=None
ImmediateData=Yes
InitialR2T=No
FirstBurstLength=65536
MaxBurstLength=65536
Initiator MaxRecvDataSegmentLength=65536
Target MaxRecvDataSegmentLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=0
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
Command Window Size: 32

Connection Information
Connection 0
Remote Endpoint: 10.10.10.93:33454
Local Endpoint: 10.10.10.11:3260
Local Interface: e0a
TCP recv window size: 131400

Command Information
No commands active

Using Wireshark and Splunk to find iSCSI CHAP Negotiation Failures on VMware ESXi

02 Monday Dec 2013

Posted by Slice2 in iSCSI, NetApp, Security, VMware, Wireshark


Tags

iSCSI, NetApp, Security, VMware, Wireshark

This is a companion to the sniffing packets in ESXi post I published earlier.

Say you need to isolate traffic to troubleshoot iSCSI CHAP session negotiation failures between ESXi and NetApp storage.

Using Wireshark:

1) Dump the traffic to a pcap file and open it with Wireshark.  Before you start the capture, change directories so you can easily recover the pcap file from the datastore in vCenter.

> cd /vmfs/volumes/datastore1
> tcpdump-uw -i vmk1 -s 1514 -w esxihost01.pcap
> CTRL+C
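If the vmkernel port carries other traffic (vMotion, NFS, management), you can narrow the capture to iSCSI up front with a standard pcap filter expression; a sketch assuming iSCSI is on the default port 3260:

> tcpdump-uw -i vmk1 -s 1514 port 3260 -w esxihost01-iscsi.pcap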
a) When done, in vCenter select the ESXi host you were sniffing packets on, then click the Configuration tab > Storage.
b) Right-click datastore1 (or the datastore were your pcap file is) and select Browse datastore.
c) Click download a file > select the location and click OK.
d) Double-click the file and it will open in Wireshark.
e) In Wireshark, enter iscsi.login.T in the Filter: field in the upper left and click Apply. This shows only the iSCSI login packets. In the Info column on the right you can clearly see that packet 856 is an Authentication Failure packet.

[Screenshot: Wireshark with the iscsi.login.T filter applied, highlighting the Authentication Failure packet]

Using Splunk:

Another way to see the authentication failure is with Splunk. Assuming your NetApp storage (or any vendor's) is configured to send syslog to Splunk, you can easily find the event. Splunk is an excellent syslog server, and you can download and use it for free for up to 500 MB of indexed data a day. I won't go into the Splunk configuration in this post. I'll post that soon.

Download it from here: http://www.splunk.com/download?r=header

1) Log in to the Splunk UI, click Search to launch the Search app, and enter the string below; the results will be displayed.

> index="*" host="10.10.10.11" "iSCSI" "failed"

– Note: replace the IP address with your storage controller hostname or IP.

[Screenshot: Splunk search results showing the iSCSI CHAP authentication failure event]
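You can tighten the search with more terms and a time window; a sketch (earliest=-24h limits the search to the last 24 hours):

> index="*" host="10.10.10.11" "iSCSI" "CHAP" "failed" earliest=-24h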

Installing and Configuring the NetApp NFS Plug-in v1.0.20 for VMware VAAI

30 Saturday Nov 2013

Posted by Slice2 in NetApp, VMware


Tags

NetApp, VMware

The plug-in installs on the VMware ESXi 5.x host. It takes advantage of vSphere's enhanced storage features. On the NetApp controller, the nfs.vstorage.enable option has to be set to "on" so the ESXi host can take advantage of VMware VAAI. This plug-in performs NFS-like RPCs to the server, using the same credentials as that of an ESXi NFS client. That means the plug-in needs no other permissions and has the same access rights as the ESXi NFS client. This is supported with Data ONTAP 8.1.1 and later.

The NFS plug-in includes these features:

Copy Offload – A process that used to take a few minutes now runs in seconds. This reduces traffic on the ESXi host and lowers CPU utilization for that task.

Space Reservation – This allows you to create thick virtual disks on NFS datastores. Through the VAAI Reserve Space primitive, you reserve space for the file when it's created.

Download the plugin here: http://support.netapp.com/NOW/download/software/nfs_plugin_vaai/1.0.20/

1) Configure the NetApp Controller (this is for 7-Mode).
> options nfs.vstorage.enable on
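To confirm the setting took, run options with just the option name and the controller prints the current value:

> options nfs.vstorage.enable
nfs.vstorage.enable          on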

2) In vCenter, select an ESXi host. Select the Configuration tab and then Storage under Hardware.
a) Under Datastores, right-click datastore1 (or whatever your local datastore is named) and select Browse datastore.
b) Click the Upload icon and select Upload a file. Browse to the NetAppNasPlugin.v20.zip file and click Open > Yes.

3) Enable SSH on the ESXi host or use the console CLI.
a) In vCenter, select the host > Configuration tab > Security Profile > across from Services, click Properties.
b) Scroll down to SSH and click Options. Click Start > OK > OK.

4) Verify that VAAI is enabled on the ESXi host. The output should be 1:
> esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1

> esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
Value of HardwareAcceleratedInit is 1

If VAAI is not enabled, enable it now:
> esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
> esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove

5) Install the Plugin on the ESXi host.
> esxcli software vib install -d "/vmfs/volumes/<your path>/NetAppNasPlugin.v20.zip"
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: NetApp_bootbank_NetAppNasPlugin_1.0-020
VIBs Removed:
VIBs Skipped:

6) Reboot the ESXi host. Either through vCenter or at the command line.
> reboot

7) Verify the plugin is installed on the ESXi host. You will have to re-enable SSH in vCenter.
> esxcli software vib get | grep -i NetApp
NetApp_bootbank_NetAppNasPlugin_1.0-020
Name: NetAppNasPlugin
Vendor: NetApp
Summary: NAS VAAI NetApp Plugin
Description: NetApp NAS VAAI Module for ESX Server
Payloads: NetAppNasPlugin

8) Create an NFS export on the NetApp Controller and mount it as a new NFS datastore on the ESXi host. The steps below are specific to my configuration, but you should be able to substitute your own values.
a) On the NetApp:
> exportfs -p rw=10.10.10.0/24,root=10.10.10.0/24 /vol/vol1
(substitute your ESXi host IP range)

b) On the ESXi host:
> esxcli storage nfs add -H labnetapp01 -s /vol/vol1 -v DatastoreVAAI
(substitute your controller hostname, volume name and datastore name)
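To confirm the mount from the ESXi side, you can list the NFS datastores; the new datastore should appear with Accessible set to true:

> esxcli storage nfs list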

9) Verify that the new datastore is VAAI supported with the following command. Look for NAS VAAI Supported: YES at the bottom of the output.
> vmkfstools -Ph /vmfs/volumes/<name-of-your-datastore>
NFS-1.00 file system spanning 1 partitions.
File system label (if any): DatastoreVAAI
Mode: public
Capacity 8 GB, 8 GB available, file block size 4 KB
UUID: 69e81cd6-90fa0446-0000-000000000000
Partitions spanned (on “notDCS”):
nfs:DatastoreVAAI
NAS VAAI Supported: YES
Is Native Snapshot Capable: YES

10) You can also verify VAAI support with the following commands.
> esxcli storage core plugin list
Plugin name       Plugin class
—————-  ————
VMW_VAAIP_NETAPP  VAAI
VAAI_FILTER       Filter
NMP               MP

> esxcli storage core claimrule list --claimrule-class=VAAI | grep NETAPP
VAAI        65433  runtime  vendor  VMW_VAAIP_NETAPP  vendor=NETAPP model=*
VAAI        65433  file     vendor  VMW_VAAIP_NETAPP  vendor=NETAPP model=*

> esxcli storage core claimrule list --claimrule-class=Filter | grep NETAPP
Filter      65433  runtime  vendor  VAAI_FILTER  vendor=NETAPP model=*
Filter      65433  file     vendor  VAAI_FILTER  vendor=NETAPP model=*

HOWTO use Wireshark to read a packet capture from NetApp Data ONTAP after running the pktt command.

08 Friday Nov 2013

Posted by Slice2 in NetApp, Wireshark


Tags

NetApp, Wireshark

NetApp Data ONTAP 7 and 8 can sniff packets, but the trace file can't be viewed on the controller. You can open and manipulate the trace file in Wireshark on another host. This HOWTO uses Wireshark on Windows 7; Wireshark on Linux will work as well. You must have Wireshark installed on your Windows/Linux host before you start. You can download it here:

Windows: http://www.wireshark.org/download.html

Debian based Linux:
> apt-get install wireshark

RPM based Linux:
> yum install wireshark

1) Identify the controller NIC you want to sniff packets on. In this case we will use e0a.
netapp> ifconfig -a

e0a: flags=0xe48867<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.10.140 netmask 0xffffff00 broadcast 10.10.10.255
ether 00:0c:29:89:3f:3c (auto-1000t-fd-up) flowcontrol full
e0b: flags=0xe08866<BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:89:3f:46 (auto-1000t-fd-up) flowcontrol full
e0c: flags=0xe08866<BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:89:3f:50 (auto-1000t-fd-up) flowcontrol full
e0d: flags=0xe08866<BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:89:3f:5a (auto-1000t-fd-up) flowcontrol full
lo: flags=0x1b48049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 9188
inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1
losk: flags=0x40a400c9<UP,LOOPBACK,RUNNING> mtu 9188
inet 127.0.20.1 netmask 0xff000000 broadcast 127.0.20.1

2) Using the pktt command, start the capture on interface e0a and dump the output into /etc/log on the controller. When you run the command, a file is created in /etc/log/ with the NIC name (e0a), a date/time stamp and a .trc file extension.
netapp> pktt start e0a -d /etc/log
e0a: started packet trace
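On a busy interface the trace grows quickly. If I recall the pktt syntax correctly, you can limit the trace to traffic to or from a specific host with the -i flag (check pktt help on your release); a sketch with a hypothetical client at 10.10.10.50:

netapp> pktt start e0a -d /etc/log -i 10.10.10.50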

3) You can check the status of the packet capture and get details.
netapp> pktt status
e0a: Packet tracing enabled; packets truncated at 1514 bytes.
e0a: Trace buffer utilization = 2% of 1048320 bytes, 258 packets
e0a: 0 bytes written to file /etc/log/e0a_20131108_173928.trc
e0a: Currently tracing to file /etc/log/e0a_20131108_173928.trc
e0a: 258 packets seen; 0 packets dropped; 24936 total bytes seen

lo: Packet tracing enabled; packets truncated at 1514 bytes.
lo: Trace buffer utilization = 99% of 130816 bytes, 1011 packets
lo: 1387 packets seen; 0 packets dropped; 160568 total bytes seen

losk: Packet tracing enabled; packets truncated at 1514 bytes.
losk: Trace buffer utilization = 99% of 130816 bytes, 282 packets
losk: 40901 packets seen; 0 packets dropped; 21761277 total bytes seen

4) After a period of time you deem adequate, stop the packet capture.
netapp> pktt stop e0a
e0a: Tracing stopped and packet trace buffers released.
Fri Nov  8 17:42:25 EST [sim81:cmds.pktt.write.info:info]: pktt: 280 packets seen, 0 dropped, 32046 bytes written to /etc/log/e0a_20131108_173928.trc.

5) Verify that it has stopped.
netapp> pktt status
e0a: packet tracing not enabled

6) Open Windows Explorer on the PC/server and enter the UNC path to the /etc folder on the filer, e.g. \\10.10.10.140\etc$. If you don't have CIFS enabled and use NFS, mount the file system on your UNIX host instead.

[Screenshot: Windows Explorer open to the filer's etc$ share]

7) Browse to the log folder and locate the .trc file you just created. Double-click the file and it will load in Wireshark.

[Screenshot: the .trc trace file in the log folder]

8) You can now operate on the trace file and filter, search and analyze packets.

[Screenshot: the trace file opened in Wireshark]
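If you would rather stay at the command line, tshark (which ships with Wireshark) can read the trace directly; a sketch that filters the output down to just the iSCSI traffic (older tshark builds use -R instead of -Y):

> tshark -r e0a_20131108_173928.trc -Y iscsi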

NetApp releases new versions of 7-Mode Transition Tool, SnapManager, NFS VAAI Plugin, VSC, and two new Oracle tools.

10 Thursday Oct 2013

Posted by Slice2 in NetApp, Oracle


Tags

NetApp, Oracle

1) 7-Mode Transition Tool v1.1
The 7-Mode Transition Tool enables copy-based transitions of Data ONTAP 7G and 7-Mode FlexVol volumes and configurations to new hardware that is running clustered Data ONTAP 8.2, with minimum client disruption and retention of storage efficiency options. Attention: You can transition only network-attached storage (NAS) environments to clustered Data ONTAP 8.2 using the 7-Mode Transition Tool.
http://support.netapp.com/NOW/download/software/ntap_7mtt/1.1/

2) NetApp NFS Plug-in for VMware VAAI v1.0.20
http://support.netapp.com/NOW/download/software/nfs_plugin_vaai/1.0.20/

3) SnapManager for Exchange v7.0
http://support.netapp.com/NOW/download/software/snapmanager_e2k/7.0/

4) Single Mailbox Recovery for Exchange v7.0
http://support.netapp.com/NOW/download/software/smbr/7.0/

5) SnapManager for SharePoint v6.1.2, v7.1.1, and v8.0
SnapManager for Microsoft SharePoint is an enterprise-strength backup, recovery, and data management solution for Microsoft SharePoint 2013, 2010 and 2007.
http://support.netapp.com/NOW/download/software/snapmanager_sharepoint/8.0/
http://support.netapp.com/NOW/download/software/snapmanager_sharepoint/7.1.1/
http://support.netapp.com/NOW/download/software/snapmanager_sharepoint/6.1.2/

6) Virtual Storage Console v4.2.1
The Virtual Storage Console for VMware vSphere software is a vSphere client plug-in that provides end-to-end virtual machine lifecycle management for VMware virtual server and desktop environments running on NetApp storage.
http://support.netapp.com/NOW/download/software/vsc_win/4.2.1/

7) NetApp Storage System Plug-in for Oracle Enterprise Manager v12.1.0.2.0
The NetApp Storage System Plug-in for Oracle Enterprise Manager delivers comprehensive availability and performance information for NetApp storage systems. By combining NetApp storage system monitoring with comprehensive management of Oracle systems, Cloud Control significantly reduces the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies.
http://support.netapp.com/NOW/download/tools/ntap_storage_plugin/

8) NetApp Cloning Plug-in for Oracle Database
NetApp and Oracle have collaborated to provide the ability to quickly clone a PDB database from the Oracle Database 12c SQL command line. This integration leverages NetApp FlexClone technology, which allows you to develop and test applications faster by creating instant, space-efficient clones of PDBs that shorten design cycles and improve service levels.
http://support.netapp.com/NOW/download/tools/ntap_cloning_plugin/

HOWTO Secure iSCSI Luns Between Debian Linux 7.1 and NetApp Storage with Mutual CHAP

28 Saturday Sep 2013

Posted by Slice2 in iSCSI, Linux, NetApp, Security


Tags

iSCSI, Linux, NetApp, Security

This post demonstrates how to enable two-way or mutual CHAP on iSCSI luns between Debian Linux 7.1 and NetApp storage. The aggregate, lun and disk sizes are small in this HOWTO to keep it simple.

1) Install open-iscsi on your server.
> apt-get install open-iscsi
> reboot (don’t argue with me, just do it!)

2) Display your server’s new iscsi initiator or iqn nodename.
> cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:e6d4ee61d916

3) On the NetApp filer, create the volume that will hold the iscsi luns. This command assumes you have aggregate aggr1 already created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

4) Create the lun in the volume.
netapp> lun create -s 5g -t linux /vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01

5) Create an igroup and add the Linux iscsi nodename or iqn from step 2 above to it.
netapp> igroup create -i -t linux ISCSI_MCHAP_DEB71
netapp> igroup add ISCSI_MCHAP_DEB71 iqn.1993-08.org.debian:01:e6d4ee61d916
netapp> igroup show

ISCSI_MCHAP_DEB71 (iSCSI) (ostype: linux):
iqn.1993-08.org.debian:01:e6d4ee61d916 (not logged in)

6) Map the lun to the igroup and give it lun ID 01.
netapp> lun map /vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01 ISCSI_MCHAP_DEB71 01

7) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939

8) Set the CHAP secret on the NetApp controller.
netapp> iscsi security add -i iqn.1993-08.org.debian:01:e6d4ee61d916 -s chap -p MCHAPDEB71 -n iqn.1993-08.org.debian:01:e6d4ee61d916 -o NETAPPMCHAP -m iqn.1992-08.com.netapp:sn.84167939

netapp> iscsi security show

init: iqn.1993-08.org.debian:01:e6d4ee61d916 auth: CHAP Inbound password: **** Inbound username: iqn.1993-08.org.debian:01:e6d4ee61d916 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

9) On the server, edit your /etc/iscsi/iscsid.conf file and set the parameters below.
> vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1993-08.org.debian:01:e6d4ee61d916
node.session.auth.password = MCHAPDEB71
node.session.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
node.session.auth.password_in = NETAPPMCHAP
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1993-08.org.debian:01:e6d4ee61d916
discovery.sendtargets.auth.password = MCHAPDEB71
discovery.sendtargets.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
discovery.sendtargets.auth.password_in = NETAPPMCHAP
> wq!

10) On the server, discover your iSCSI target (your storage system).
> iscsiadm -m discovery -t st -p 10.10.10.11
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

> iscsiadm -m node  (this should display the same as above)
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

11) On the server, manually login to the iSCSI target (your storage array).
> iscsiadm -m node --targetname "iqn.1992-08.com.netapp:sn.84167939" --login

Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.

On the NetApp storage console you should see the iSCSI sessions:
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1993-08.org.debian:01:e6d4ee61d916 at IP addr 10.10.10.203
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1993-08.org.debian:01:e6d4ee61d916 at IP addr 10.10.10.203

Verify the iSCSI session on the filer:
netapp> iscsi session show
Session 49
Initiator Information
Initiator Name: iqn.1993-08.org.debian:01:e6d4ee61d916
ISID: 00:02:3d:01:00:00
Initiator Alias: deb71

12) Stop and start the iscsi service on the server.
> service open-iscsi stop
Pause for 10 seconds and then run the next command.
> service open-iscsi start

[ ok ] Starting iSCSI initiator service: iscsid.
[….] Setting up iSCSI targets:
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.
. ok
[ ok ] Mounting network filesystems:.

13) From the server, check your session.
> iscsiadm -m session -P 1

14) From the server, check the NetApp iSCSI details.
> iscsiadm --mode node --targetname "iqn.1992-08.com.netapp:sn.84167939" --portal 10.10.10.11:3260

15) From the server, find and format the new lun (new disk).
> cat /var/log/messages | grep “unknown partition table”
deb71 kernel: [ 1856.751777]  sdb: unknown partition table

> fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x07f6c360.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-10485759, default 2048): press enter
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): press enter
Using default value 10485759

Command (m for help): p
Disk /dev/sdb: 5368 MB, 5368709120 bytes
166 heads, 62 sectors/track, 1018 cylinders, total 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x07f6c360

Device Boot      Start     End               Blocks       Id  System
/dev/sdb1         2048    10485759     5241856   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Command (m for help): q

16) On the server, create the Linux file system on the new partition.
> mkfs -t ext4 /dev/sdb1
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310464 blocks
65523 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

17) Verify the partition.
> blkid /dev/sdb1
/dev/sdb1: UUID="afba2daf-1de8-4ab1-b93e-e7c99c82c054" TYPE="ext4"

18) Create the mount point and manually mount the directory.
> mkdir /newiscsilun
> mount /dev/sdb1 /newiscsilun
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1 5.0G   10M  4.7G   1% /newiscsilun

19) Add the new mount point to /etc/fstab.
> vi /etc/fstab
/dev/sdb1 /newiscsilun ext4 _netdev 0 0
> wq!

Note: the _netdev option is important so that it doesn’t try mounting the target before the network is available.
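You can also dry-run the fstab entry before rebooting: unmount the lun and let mount re-read it from /etc/fstab.

> umount /newiscsilun
> mount -a
> df -h | grep newiscsilun

If the lun comes back mounted, the fstab line is good.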

20) Test that it survives a reboot by rebooting the server. With _netdev set, iscsi starts and your CHAP logins should take place before the mount is attempted. After the reboot, log in and verify it's mounted.

> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1 5.0G   10M  4.7G   1% /newiscsilun

21) On the server you can check session stats.
> iscsiadm -m session -s
Stats for session [sid: 1, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260]
iSCSI SNMP:
txdata_octets: 69421020
rxdata_octets: 765756
noptx_pdus: 0
scsicmd_pdus: 365
tmfcmd_pdus: 0
login_pdus: 0
text_pdus: 0
dataout_pdus: 924
logout_pdus: 0
snack_pdus: 0
noprx_pdus: 0
scsirsp_pdus: 365
tmfrsp_pdus: 0
textrsp_pdus: 0
datain_pdus: 193
logoutrsp_pdus: 0
r2t_pdus: 924
async_pdus: 0
rjt_pdus: 0
digest_err: 0
timeout_err: 0
iSCSI Extended:
tx_sendpage_failures: 0
rx_discontiguous_hdr: 0
eh_abort_cnt: 0

22) As root, change permissions on /etc/iscsi/iscsid.conf. I'm not sure why the clear-text CHAP passwords in this file haven't been addressed, so just make sure only root can read and write it.
> chmod 600 /etc/iscsi/iscsid.conf

23) On the NetApp storage you can verify the Lun and the server’s session.
> lun show -v /vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01
/vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01      5g (5368709120)    (r/w, online, mapped)
Serial#: hoagPJtrPZCi
Share: none
Space Reservation: enabled
Multiprotocol Type: linux
Maps: ISCSI_MCHAP_DEB71=1

>  iscsi session show -v
Session 55
Initiator Information
Initiator Name: iqn.1993-08.org.debian:01:e6d4ee61d916
ISID: 00:02:3d:01:00:00
Initiator Alias: deb71

Session Parameters
SessionType=Normal
TargetPortalGroupTag=1000
MaxConnections=1
ErrorRecoveryLevel=0
AuthMethod=CHAP
HeaderDigest=None
DataDigest=None
ImmediateData=Yes
InitialR2T=No
FirstBurstLength=65536
MaxBurstLength=65536
Initiator MaxRecvDataSegmentLength=65536
Target MaxRecvDataSegmentLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=0
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
Command Window Size: 32

Connection Information
Connection 0
Remote Endpoint: 10.10.10.203:57127
Local Endpoint: 10.10.10.11:3260
Local Interface: e0a
TCP recv window size: 131400

Display the iSCSI Initiator Node Name or IQN from the command line.

01 Sunday Sep 2013

Posted by Slice2 in HP, iSCSI, Linux, NetApp, NetBSD, Solaris, VMware, Windows


Tags

iSCSI

At some point you will be asked by a Storage Engineer for your system's iSCSI Initiator Node Name or your iqn. This list shows you how to get your local iSCSI initiator name or iqn from the command line. This assumes the iSCSI service is installed, enabled and running. If you have a different way or want to add an OS or platform to this list, simply leave a comment and I'll add it.

AIX:
> smitty iscsi
select > iSCSI Protocol Device
select > Change / Show Characteristics of an iSCSI Protocol Device

FreeBSD (v10 and newer. Thanks to Edward Tomasz Napierala for this update):
> iscsictl -v  (only after you have established a session with your array)

HP-UX:
> iscsiutil -l

Linux:
> cat /etc/iscsi/initiatorname.iscsi

NetApp Data ONTAP: (this is a target iqn, not a host iqn)
7-Mode:
> iscsi nodename

Cluster Mode from the clustershell:
> vserver iscsi show

NetBSD: (please make this easier NetBSD developers! How about an iscsictl list_initiators command?)
> iscsictl add_send_target -a <hostname or IP of your target/storage>
Added Send Target 1
> iscsictl refresh_targets
OK
> iscsictl list_targets
1: iqn.1992-08.com.netapp:sn.84167939
2: 10.1.0.25:3260,1000
> iscsictl login -P 2
Created Session 2, Connection 1
> iscsictl list_sessions
Session 2: Target iqn.1992-08.com.netapp:sn.84167939

On the NetApp filer find the initiator:
netapp01> iscsi initiator show
Initiators connected:
TSIH  TPGroup  Initiator/ISID/IGroup
4    1000   nbsd611.lab.slice2.com (iqn.1994-04.org.netbsd:iscsi.nbsd611.lab.slice2.com:0 / 40:00:01:37:00:00 / )

Solaris 11:
> iscsiadm list initiator-node

VMware ESXi 5.1:
ESXi console:
Get the devices first:
> esxcfg-scsidevs -a | grep iSCSI
Then get the iqn (in this case vmhba33 is the iSCSI device)
> vmkiscsi-tool -I -l vmhba33

esxcli:
> esxcli -s <esxihostname or ip> -u root iscsi adapter get -A vmhba33

Windows:
c:\> iscsicli.exe

NetApp releases SnapDrive for Windows and SnapManager for MSSQL 7.0

30 Friday Aug 2013

Posted by Slice2 in NetApp, Windows


Tags

NetApp, Windows

These are major new versions with much-needed support for cDOT 8.2, PowerShell and SMB 3.

SnapDrive 7.0 for Windows
http://support.netapp.com/NOW/download/software/snapdrive_win/7.0/

SnapDrive 7.0 for Windows is a major release, adding support for the following features:
1) Clustered Data ONTAP 8.2
2) Sub-LUN cloning in restore operations
3) PowerShell cmdlet support in SMB 3.0 environments
4) Volume and share provisioning template in SMB 3.0 environments
5) Virtual Fibre Channel
6) Group Managed Service Accounts (gMSA) on Windows Server 2012

SnapManager 7.0 for Microsoft SQL Server
http://support.netapp.com/NOW/download/software/snapmanager_sql2k/7.0/

SnapManager 7.0 for Microsoft SQL Server includes several new features:
1) Support for clustered Data ONTAP 8.2.
2) Support for databases running on clustered Data ONTAP SMB 3.0 shares.
Note: Specify SMB shares with or without a closing backslash: \\ServerName\ShareName or \\ServerName\ShareName\
3) Support for archiving backups to SnapVault secondary volumes running on clustered Data ONTAP.
4) The option to restore databases to a different location on the same Microsoft SQL Server instance.
5) Performance improvements when restoring databases from a LUN that stores multiple databases.
6) The Backup report now includes the cmdlet syntax for database backups initiated from the Backup wizard or Backup and Verify option.
7) Support for running SnapManager from a group Managed Service Account.

Read these URLs to see why you should be interested in SMB 3.0.

http://blogs.technet.com/b/filecab/archive/2012/05/03/smb-3-security-enhancements-in-windows-server-2012.aspx
http://networkerslog.blogspot.com/2012/09/new-storage-space-on-smb30-in-windows.html
http://blogs.technet.com/b/windowsserver/archive/2012/04/19/smb-2-2-is-now-smb-3-0.aspx

Hidden Gems – NetApp Operations Manager Storage Efficiency Dashboard Plugin

17 Saturday Aug 2013

Posted by Slice2 in NetApp


Tags

NetApp

This is a relatively unknown plugin for DFM/Operations Manager, now called OnCommand Unified Manager. The Plugin helps to answer a number of questions related to storage utilization and storage efficiency savings.  It also helps in identifying ways to improve the storage utilization.  It is implemented as a script that can be installed on the Operations Manager server.   The script can be scheduled for execution in Operations Manager and generates a set of web pages that provide an Efficiency Dashboard for the NetApp storage systems managed by Operations Manager.  Two primary charts are produced with additional charts to provide detailed breakdowns of how the storage space is being consumed.  These charts can be produced to represent all storage systems being monitored by Operations Manager, ‘groups’ of storage systems as grouped in Operations Manager or a single storage system. I wonder why they don’t include this as a tab inside OnCommand Unified Manager by default.


Download it from here:
http://support.netapp.com/NOW/download/tools/omsed_plugin/

1) To install from the OnCommand Unified Manager browser interface:

a) Login to Operations Manager as an administrator. Click Management > Scripts.
[Screenshot: the Management > Scripts page in Operations Manager]

b) On the Scripts page, choose the option to select the omeff_oc52_windows64.zip file you downloaded above and click Add Script.

[Screenshot: adding the script on the Scripts page]

c) On the Confirm page, read the summary and click Add.

[Screenshot: the Confirm page]

d) After the script is installed, in the lower section, select the script check box and click "No" under the Schedule column.

[Screenshot: the installed script and its Schedule column]

e) On the add a schedule page, enter a name for the plugin such as Storage Efficiency Plugin and at the bottom, enter the schedule of your choice. I run it daily at 05:00 AM. Click Add Schedule when done and you should see the schedule you chose.

[Screenshot: the Add a Schedule page]

2) If you want to install from the OnCommand Unified Manager CLI:

C:\Program Files\NetApp\DataFabric Manager\DFM\bin\dfm script add omeff_oc52_windows64.zip
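I believe the dfm CLI can also confirm the script was added; something like the following should list the installed scripts (check dfm script help on your version):

C:\Program Files\NetApp\DataFabric Manager\DFM\bin\dfm script list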

3) Access the Dashboard. After the script has run the first time, the dashboard will be available. It's located at the root of your OnCommand URL; just add /dashboard.html at the end as shown in the example below.

https://nocumc.lab.slice2.com:8443/dashboard.html

[Screenshot: the Storage Efficiency Dashboard]
