Wednesday 31 May 2017

Running scripts against the Oracle ZFS appliance

So you want to run scripts against the ZFS appliance, you've read through the Oracle document titled "Effectively Managing Oracle ZFS Storage Appliances with Scripting", and you've realised it's not as straightforward as it should be.

I mean I use similar ZFS commands on the OS all the time, why not just have a terminal interface that uses the same commands? Nope - Oracle don't play that.

The first thing you have to do is set up key authentication from the host you'll be scripting from. Simply done: just log onto the ZFS appliance, drop into the shell by running "shell", and set it up the way you normally would between UNIX servers.
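
If you haven't done that dance before, it goes roughly like this (a rough sketch - I'm assuming an RSA key and that you're happy to script as root):

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub

Then in the appliance shell, append that public key to root's ~/.ssh/authorized_keys and test the login:

ssh root@zfsappliance.name.here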

Once that works, here is a script to snapshot a filesystem:

user=root
appliance=zfsappliance.name.here
pool=poolname
project=projectname
snapshot=snapname

ssh $user@$appliance 2>/dev/null << EOF
script
var MyArguments = {
  pool:         '$pool',
  project:      '$project',
  snapshot:     '$snapshot',
}
// Not used below, but handy if you want to centralise your error messages
var MyErrors = {
  PoolNotFound:    'Pool not found',
  ProjectNotFound: 'Project not found',
  UnknownError:    'Unknown error',
}
function CreateSnapshot (Arg) {
  run('cd /'); // Make sure we are at root child context level
  run('shares');
  try {
    run('set pool=' + Arg.pool);
  } catch (err) {
    printf("ERROR Specified pool %s not found\n ",Arg.pool);
    return;
  }
  try {
    run('select ' + Arg.project);
  } catch (err) {
      printf("ERROR Specified project %s not found\n ",Arg.project);
      return;
  }
  try {
    run('snapshots');
  } catch (err) {
      printf("ERROR Snapshot %s not found\n ",Arg.project);
      return;
  }
  try {
    run('snapshot ' + Arg.snapshot);
  } catch(err) {
    printf("ERROR Unable snapshot %s\n ",Arg.project);
    return;
  }
  return;
}
CreateSnapshot(MyArguments);
.
EOF

As you can see, most steps of the Javascript are checked to see if they worked. It's good practice because otherwise the script will give you zero output. Zero.

Zero output is especially irritating when the main reason for the script is to retrieve some output e.g. when you want to get a list of snapshots.

So here's an example (this one lists the shares in a project) where the output is returned:

user=root
appliance=zfsappliance.name.here 
pool=poolname
project=projectname

ssh $user@$appliance 2>/dev/null << EOF
script
var MyArguments = {
  pool:         '$pool',
  project:      '$project'
}
function ListFS (Arg) {
  run('cd /');
  run('shares');
  run('set pool=' + Arg.pool);
  run('select ' + Arg.project);
  var fs = list();
  for (var j = 0; j < fs.length; j++)
  {
    printf("%s\n", fs[j]);
  }
}
ListFS(MyArguments);
.
EOF
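
Since the whole point here is getting output back, note that you can capture it on the calling side like any other command. For example (assuming you've saved the above as listshares.sh - my name, not gospel):

shares=$(./listshares.sh)
echo "$shares"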

I've left the error checking out of this one for a leaner script. As a side note, remember that for some commands (like deletions) you'll need to prefix them with "confirm" so the script doesn't sit waiting on a confirmation prompt.
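
For example, destroying a snapshot from inside one of these scripts would look something like this (just a sketch, not tested here):

run('select ' + Arg.snapshot);
run('confirm destroy');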

Anyway, that should be enough to get you started. Happy scripting.

Monday 5 September 2016

Hardening - Setting Solaris 11 Security Settings using the compliance command

The compliance program produces security assessments and reports: essentially an evaluation of the security configuration of a system, conducted against a benchmark.

No more having a list of things you have to check and having to follow some doc to implement the settings. The compliance command makes things easy peasy.

If you don't find the compliance command, install pkg:/security/compliance.

First off, list the assessments available.

# compliance list -p
Benchmarks:
pci-dss:        Solaris_PCI-DSS
solaris:        Baseline, Recommended
Assessments:
        No assessments available

I recommend running the solaris Recommended check (if you've got cardholder information on your system, you'll need to be doing the pci-dss check instead).

compliance assess -b solaris -p Recommended
compliance report -a solaris.Recommended.2016-09-05,15:33

This outputs an html file that I usually mail myself. If you rock a GUI, then just view the html file in a browser.
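
For what it's worth, this is roughly how I mail it to myself (a sketch - if I remember right, compliance report prints the path of the html it generates; swap in your own address):

report=$(compliance report -a solaris.Recommended.2016-09-05,15:33)
uuencode "$report" report.html | mailx -s "Compliance report" me@example.com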

Tadaaa! You now have a document that not only tells you what needs to be done - but also how to do it. And when everything in your report is green, you have a report to forward on to the relevant people.

---

Some of you will end up with reports with some red in them, because there are settings you don't want to or can't change. For example, on the SuperCluster, you're going to need NFS to access your storage. Luckily compliance gives you the ability to customise its assessments.

First list the rules we want to exclude:

Service svc:/network/nfs/status is disabled or not installed OSC-40010
Service svc:/network/nfs/nlockmgr is disabled or not installed OSC-38510
Service svc:/network/nfs/server is disabled or not installed OSC-39510
Service svc:/network/nfs/rquota is disabled or not installed OSC-39010
Service svc:/network/nfs/cbd is disabled or not installed OSC-37010
Service svc:/network/nfs/mapid is disabled or not installed OSC-38010
ssh(1) is the only service binding a listener to non-loopback addresses OSC-73505

Next we create a custom assessment:

compliance tailor -t MySecurityPolicy 'set benchmark=solaris; set profile=Recommended; exclude OSC-40010; exclude OSC-38510; exclude OSC-39510; exclude OSC-39010; exclude OSC-37010; exclude OSC-38010; exclude OSC-73505; export'

You can, of course, use "include" if you needed to.
And then we run our custom security assessment:

compliance assess -t MySecurityPolicy
compliance report -a yadayadayada 
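
If, like me, you can never remember the exact assessment name, it shows up under Assessments when you list them, and that's what you plug into -a:

compliance list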

And that's it, folks. I went a step further and wrote a script to output the commands I need to implement the hardening, but I'm tired of writing this post. It's getting too long, so here it ends.



Tuesday 30 August 2016

SuperCluster DR Procedure for App Zones


NOTE: This is a procedure for testing failing over from one SuperCluster to another at a different site - not for the occurrence of an actual DR situation.

This doc is a work in progress - the process is still a bit finicky.

Pre-work
i) Set up ZFS replication for the iSCSI zone LUN and the NFS shares
ii) Make sure the zone root zpool has a different name on the source and destination systems
iii) Add the same IB IP as the zfs-sa at the source to the zfs-sa at the destination
iv) Set up key authentication to the zfs-sa at the destination
v) Create a script at the source to periodically save the zone configs (sketch below)
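
For point v, something along these lines does the trick from cron (a sketch - the output directory is made up, put it somewhere that gets replicated):

#!/usr/bin/bash
# dump the config of every configured non-global zone
for z in $(zoneadm list -c | grep -v '^global$'); do
  zonecfg -z $z export > /zoneconfigs/$z.cfg
done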

1. Snap ZFS pool

2. Snap zfs project/shares

3. Share zpool lun
On DR:
a. Create a project to put your clones under
b. List the snapshots on the zpool lun
c. Create the clone off the appropriate snap

4. Share zfs projects/shares
Do this in the GUI, for each project:
- click on Shares
- underneath Shares click on Projects
- underneath Projects click on Replica
- move the mouse over the relevant project; a pencil and a trash bin icon will show on the right; click on the pencil
- Click on Replication
- below Replication should be a bunch of icons
- click on the + icon ("Clone most recently received project snapshot")
-  You might get an error at this point. Just try again and again.
- When you get to the screen asking for new project name, use the current project name and add "_DR" and click on Continue.
Or in an actual DR situation:
Instead of the + icon, you break replication. Afterwards you will need to reverse replication before failing back.

5. Import zpool
On destination server:
cfgadm -alv
devfsadm -Cv
zpool import -R / zones

6. Recreate zone
Configure zones
Attach zones
Boot zones
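
In practice that's something like this (a sketch - assuming a zone called zone1 whose saved config from the pre-work landed in /zoneconfigs; edit the config first if the zonepath differs on this side, and add -u to the attach if the package levels differ):

zonecfg -z zone1 -f /zoneconfigs/zone1.cfg
zoneadm -z zone1 attach
zoneadm -z zone1 boot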

Thursday 21 July 2016

Copying a SuperCluster Zone

1. Snapshot the original zone
zfs snapshot -r zones/zone1@copy
2. Copy the snap across to the destination zoneroot
zfs send -vr zones/zone1@copy | zfs recv -v zones/zone2
3. Export the zone config and copy it across to the destination (use IB if possible)
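Something like this (a sketch - zone1 is the original, zone2 the copy; the destination hostname is made up):
zonecfg -z zone1 export > /var/tmp/zone2.cfg
scp /var/tmp/zone2.cfg destination-node:/var/tmp/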
4. Edit the zone config to reflect new address and new zonepath
5. Create the new zone
6. Attach and boot the new zone
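Which boils down to something like this (a sketch, using the config edited in step 4; add -u to the attach if the package levels differ between systems):
zonecfg -z zone2 -f /var/tmp/zone2.cfg
zoneadm -z zone2 attach
zoneadm -z zone2 boot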
7. Configure the new zone
Name
Solaris 11
sysconfig create-profile -g identity -o config.xml
sysconfig configure -g identity -c config.xml
Solaris 10
sys-unconfig
Edit /etc/nodename, /etc/hostname.*, /etc/hosts

Host files
vi /etc/hosts
IP
Solaris 11
ipadm delete-addr ipmp0/v4
ipadm delete-addr ipmp1/v4
ipadm create-addr -T static -a 23.88.34.157/24 ipmp0
ipadm create-addr -T static -a 192.168.55.61/22 ipmp1
ipadm set-ifprop -p standby=on -m ip net2
route -p add default 23.88.34.1
Solaris 10
vi /etc/hostname.*
vi /etc/defaultrouter
reboot
Samba config
vi /etc/samba/smb.conf
net join -U user -S ADservername
rm /export/home/samba/*/.ssh/known_hosts
Reboot
8. Configure the storage from the ZFS appliance
Log into the appropriate zfs head:
Shares -> Projects: Add new project
Edit Project:
Under General set Mountpoint to /export/projectname
Under protocols:
Share mode=none
Add NFS Exception: Network - IB IP of zone/32 - Read/write - tick root access
Under shares, add shares needed
Go into each share and set quota
vi /etc/vfstab
mount -a
Change ownership of each mountpoint
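For reference, those vfstab entries are plain NFS mounts - something along these lines (made-up names; the server is the IB address of the zfs head):
192.168.55.10:/export/projectname/u01  -  /u01  nfs  -  yes  rw,bg,hard,vers=3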
9. Put key authentication into place between original and copy.
10. Rsync the NFS shares from original to new zone

rsync -azh /u01 192.168.55.61:/

Thursday 30 June 2016

Setting up Samba Auditing

1. Define the output file
  vi /etc/syslog.conf and add the line:
local5.notice                                   /var/log/samba/audit.log
  Note: Use tabs for spacing!
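
Two small things worth doing here (assuming the stock system-log service): old-school syslogd won't create the file for you, and it needs a poke to re-read its config.

mkdir -p /var/log/samba
touch /var/log/samba/audit.log
svcadm refresh svc:/system/system-log:default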

2. Make sure the output file is rotated
logadm -A 6w -S5g -z 0 -c -p 1w -w /var/log/samba/audit.log
Where:
  -A 6w means delete files older than 6 weeks
  -S 5g means delete old files so that the total of all versions stays under 5g
  -z 0 means compress all previous versions
  -c means rotate by copying & truncating the logfile to zero length, rather than renaming it
  -p 1w means rotate after 1 week
  -w means write the settings to logadm.conf

3. Change the samba settings for the shares.
  vi /etc/samba/smb.conf
  To the [global] section add the lines:
        full_audit:prefix = %U|%I|%u|%S
        full_audit:failure = connect
        full_audit:success = connect disconnect mkdir rmdir read pread write pwrite sendfile rename unlink chmod fchmod chown fchown ftruncate lock symlink readlink link mknod
        full_audit:facility = LOCAL5
        full_audit:priority = notice
  For each of the shares you want to audit, add the line:
        vfs object = full_audit
  Note: If you want to audit all shares, add this line to the global section.

  In case you're wondering, file creates and deletes show up in the log as:
    create=pwrite
    delete=unlink

4. svcadm restart samba
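
To check it's working, hit one of the audited shares from a client and watch the log:

tail -f /var/log/samba/audit.log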

Friday 12 June 2015

Test Solaris Root Mirror

Here's the situation. Being the good UNIX SysAdmin that you are, one of the first things you do is mirror the rootpool. You do something like:

zpool attach -f rpool c0t5000CCA03C5A7C00d0 c0t5000CCA03C5C19CCd0


...wait for the mirror to finish resilvering...
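
You can keep an eye on the resilver with:

zpool status rpool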

installboot -f -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t5000CCA03C5C19CCd0

(or better, use bootadm install-bootloader - see the comments below for why)

(Notice that my disk devices don't use slices - otherwise there'd be an "s0" at the end of the disk names. Older ZFS systems needed the root disk to be on a slice, but that requirement has fallen away.)

So, to test that you can actually boot off the mirror, you go to the ok prompt and try to boot off the second disk:

shutdown -y -i0 -g0
...
ok> boot disk1
Boot device: /pci@3c0/pci@1/pci@0/pci@2/scsi@0/disk@p0  File and args:
ERROR: /packages/deblocker: Last Trap: Fast Data Access MMU Miss

So that's a bit of a bitch. Luckily, this is only a test. Start up your machine normally and then shut down with an init 0. Somehow rebooting with an init sorts this out.

(If it wasn't a test, you can try to specify the path old school. You can figure out your path - though I've had hit-and-miss success - by running devalias and scsi-probe-all, and using a path similar to /pci@400/pci@1/pci@0/pci@0/LSI,sas@0/disk@w5000cca02584ad19,0:a. Sidenote: if that doesn't work, I've had limited success by adding a to the last number before the comma.)

Either way, once you've got a booted system, you can check which disk you're booted from by running prtconf -vp | grep bootpath.

This post is a little neither here nor there - but that's because my testing has brought various results and was done while I was changing from a SAS root disk to an SSD root disk. I'll update it as I retest.

Thursday 4 June 2015

VLAN tagging in Solaris

If you want to have zones in multiple subnets but using the same physical port, you have to use VLAN tagging. VLAN tagging is pretty easy to configure on the zones (point 7), less so on the global zone.

  1. The Network guys have to do a few things for you:
    • set the network ports your nic connects to as "trunked"
    • give you the vlan id of the vlans you want to connect to (digits)
    • for aggregated NICs, set LACP to active (rather than auto)
    • set the default vlan-id of the ports to 1 
  2. NOTE: Configuring the ports as trunked means any traffic that isn't VLAN tagged goes nowhere. All or nothing, baby. 
  3. Your aggregate needs LACP activity to be active
      • dladm modify-aggr -L active -T short aggr0
  4. I use aggregates, but I think most of the same steps below apply to IPMP.
  5. I wish you could add a default vlan ID to the aggregate when you create it, but you can't (and I get the feeling that if I think really hard about it, I'll be able to see the logic behind it). Instead, you have to create a vnic on the aggregate that uses that vlan ID:
      • dladm create-vnic -v 10 -l aggr0 vnic10
  6. Now create an address on that vnic
      • ipadm create-ip vnic10
      • ipadm create-addr -T static -a 196.0.10.15/24 vnic10
  7. That sorts out the global zone. For the zones it's pretty easy. Just set the vlan-id attribute (under anet) in the zone config (see the sketch below).
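
For the record, setting it looks something like this (a sketch - the zone and link names are made up):
      zonecfg -z myzone
      zonecfg:myzone> select anet linkname=net0
      zonecfg:myzone:anet> set vlan-id=10
      zonecfg:myzone:anet> end
      zonecfg:myzone> commit
      zonecfg:myzone> exit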

NOTES:
  • The active LACP is not something I'm sure needs to be there but it worked so I'm leaving it.
  • IPMP in zones - if I recall correctly - needs vnics created for you to do IPMP within the zone. Just make sure you assign the correct vlan ID to those vnics and you should be fine.