Wednesday 27 November 2013

Sys-Unconfig on Solaris 10 Branded Zone Doesn't Give Option to Change Hostname

As the title says: I was trying to do a sys-unconfig on a Solaris 10 branded zone, and when I reboot and go to enter the new config info, I'm never asked to change the hostname. The system automatically configures the zone name as the hostname.

Some things I tried that didn't work:

  1. Configure a NIC that doesn't exist
    Since most of the googled results pointed to pulling out the network cable, I tried the logical equivalent for a zone. However, configuring a non-existent NIC doesn't work, because a zone won't boot when configured with a NIC that doesn't exist.
  2. Choose a NIC that does exist but is offline
    Same reasoning as point 1; still doesn't work though.
  3. Configure an IP that doesn't exist on our LAN
    Same reasoning as point 1; still doesn't work though.
  4. Do it with no NIC configured
    This actually works; the system asks me for a new hostname during the sysconfig. Unfortunately, it doesn't ask for any of the subsequent network info, so when I add the NIC, I'll then have to do the subnet and DNS configuration manually. No thank you.
  5. Search for files that contain the hostname and delete manually
    Just in case the sys-unconfig was not deleting these files. Didn't work.
  6. Set the bootargs to "noauto"
    Got this from an Oracle SR for when the hostname is being retrieved from the jumpstart server (which I don't have anyway). Didn't work.
What worked:
  1. After the sys-unconfig and subsequent sysconfig completed, I edited the following files and rebooted the zone:
    • /etc/inet/hosts
    • /etc/nodename
    • /etc/hostname.net0 (or whatever's relevant)
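For reference, a sketch of that cleanup; this isn't our exact procedure, just the same edit done with sed. A scratch directory stands in for the zone root (ROOT would be / when run inside the zone), and the old/new hostnames are made up:

```shell
#!/bin/sh
# Hedged sketch: swap the old hostname for the new one in the three
# files sys-unconfig leaves behind. A scratch directory is used here
# so the sketch is safe to run anywhere.
ROOT=/tmp/zoneroot-demo
OLD=zone1
NEW=newhost

# Scaffolding: fake copies of the three files.
mkdir -p "$ROOT/etc/inet"
printf '127.0.0.1 localhost\n10.0.0.5 %s\n' "$OLD" > "$ROOT/etc/inet/hosts"
printf '%s\n' "$OLD" > "$ROOT/etc/nodename"
printf '%s netmask 255.255.255.0 up\n' "$OLD" > "$ROOT/etc/hostname.net0"

# The actual fix: replace the hostname in each file, then reboot the zone.
for f in "$ROOT/etc/inet/hosts" "$ROOT/etc/nodename" "$ROOT/etc/hostname.net0"
do
    sed "s/$OLD/$NEW/g" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
cat "$ROOT/etc/nodename"   # newhost
```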

Saturday 14 September 2013

Empty root folder on Solaris10 branded zone

If you have a Solaris10 branded zone, and it is in an "unavailable" state, the root folder in the zone path will look empty. Scared the living daylights out of me! Just do an attach and the folder's contents will appear.

It might just be my understanding of how Solaris 11 does zones, though. It looks like during the attach, another filesystem gets mounted over root. I'll have to do some testing and prodding to understand what's going on.

Thursday 18 July 2013

SSH Issue on Solaris 10 Branded Zones

So suddenly one night I get a call from the night operators telling me that they can't ssh to a particular zone; even their established ssh sessions are dying. When I log in (via the global zone), the message log has loads of sshd core dumps:

Jul 11 20:41:08 hostname genunix: [ID 603404 kern.notice] NOTICE: core_log: ssh[2791] core dumped: /var/core/core_hostname_ssh_14247_103_1373568067_2791
Jul 11 20:41:15 hostname genunix: [ID 603404 kern.notice] NOTICE: core_log: ssh[2813] core dumped: /var/core/core_hostname_ssh_14247_103_1373568074_2813
Jul 11 20:41:26 hostname genunix: [ID 603404 kern.notice] NOTICE: core_log: ssh[2889] core dumped: /var/core/core_hostname_ssh_14247_103_1373568085_2889
Jul 11 20:44:37 hostname genunix: [ID 603404 kern.notice] NOTICE: core_log: ssh[6711] core dumped: /var/core/core_hostname_ssh_14247_103_1373568276_6711
Jul 11 20:47:54 hostname genunix: [ID 603404 kern.notice] NOTICE: core_log: ssh[11121] core dumped: /var/core/core_hostname_ssh_14247_103_1373568473_11121
Jul 11 20:59:34 hostname genunix: [ID 603404 kern.notice] NOTICE: core_log: ssh[25061] core dumped: /var/core/core_hostname_ssh_14247_103_1373569173_25061

I try a couple of things, none of which seems to particularly help, but the problem goes away after about half an hour. Then it comes back a couple of days later. And then again. And then it happens on some other containers.

Of course by this time, my call logged with Oracle has been escalated to the highest level. They come back with this:
It seems at this point that you have bin hit by known issue.
Bug 15781192 - SUNBT7156478-SOLARIS_11U1 double free in kernelSlottable.c kernel_slottable_ini
This was fixed in the S11u1 release .. but now we have started a backport CR for S10. At this point the only workaround is to disable pkcs11 engine in the sshd_conf and restart ssh.
And then gave the complete workaround:

The complete workaround requires three steps, all executed inside the Solaris 10 branded zone:
  1. Uninstall the pkcs11 kernel provider:
    • # cryptoadm uninstall provider='/usr/lib/security/$ISA/pkcs11_kernel.so'
  2. Disable the pkcs11 engine for sshd by adding the line "UseOpenSSLEngine no" (without the quotes) to /etc/ssh/sshd_config
  3. Restart the ssh service to pick up the change:
    • # svcadm restart ssh
EDIT: I updated to the latest patches. I'll have to take some time to reverse these workarounds and see if the problem has been fixed. I've been told it has, but I'll have to confirm for myself.

Wednesday 12 June 2013

Automated Snapshots

My test systems don't get backed up or snapped via the SAN. So I figured I'd create zfs snapshots on a regular basis just in case.

For Solaris 10 I had a script that did this for me (including sending the snapshots to a remote machine if need be), but it was really complex and a bit of a mission for anyone other than me to figure out. For Solaris 11, there is a service that takes care of the snapshotting and scheduling for you (without the option of sending to a remote site).

It's pretty easy to install and configure as well:
  1. Install the package:
    • pkg install time-slider
  2. Start the services
    • svcadm restart dbus
    • svcadm enable time-slider
  3. Choose which filesystems it snaps (properties should be inherited by child filesystems)
    • zfs set com.sun:auto-snapshot=true rpool/export
  4. Manually exclude certain filesystems
    • zfs set com.sun:auto-snapshot=false rpool/swap1
  5. Enable the snap schedules you need:
    • svcadm enable auto-snapshot:hourly
    • svcadm enable auto-snapshot:daily
The service also lets you check and modify the frequency of the snaps, as well as how many it will keep.
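For what it's worth, the naming/retention idea behind those schedules can be sketched in plain shell; the snapshot names and keep-count below are invented for the demo, not the service's real defaults:

```shell
#!/bin/sh
# Sketch of the retention idea behind time-slider's schedules:
# snapshots are named zfs-auto-snap_<label>-<timestamp>, and anything
# beyond the keep-count gets destroyed.
KEEP=3

snaps="zfs-auto-snap_hourly-2013-06-12-01h00
zfs-auto-snap_hourly-2013-06-12-02h00
zfs-auto-snap_hourly-2013-06-12-03h00
zfs-auto-snap_hourly-2013-06-12-04h00
zfs-auto-snap_hourly-2013-06-12-05h00"

# Because the timestamp sorts lexically, the oldest snapshots beyond
# the keep-count are simply the head of the sorted list.
total=$(printf '%s\n' "$snaps" | wc -l | tr -d ' ')
expire=$((total - KEEP))
printf '%s\n' "$snaps" | sort | head -n "$expire" > /tmp/expired-snaps.txt
cat /tmp/expired-snaps.txt
```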

I just wish there was a text only version of time-slider so that I don't have to install all the Gnome packages I'm never going to use.

Tuesday 11 June 2013

Hanging Out - Not in a good way

I was configuring a new Solaris 11 zone the other day when I started getting some performance problems. Notably, things would just hang. Commands like prstat and top, and even an ls, would hang or take a very long time to complete.

The real worry was that it not only affected the zone, but the global zone too!

I could log in to extra sessions with no problem but as soon as I ran a command it would just hang. And since there was only this zone on this freshly installed global zone, I was really really worried that I had some problem with the hardware.

Luckily I did some checking first before having a well-deserved nervous breakdown (I still had 8 other zones to configure and the server is going live on Saturday!). The template I had used for creating the Solaris 11 zone was a current Solaris 10 zone template that I had modified.

More luck than reasoning made the following two settings in the zone config stand out:
limitpriv: default
scheduling-class: FSS

I cleared both settings and restarted my zone and the intermittent problems went away.


*Note to self: Investigate at a later point to understand fully.
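For the record, clearing the two settings is simple; put these two lines in a zonecfg command file (the zone name below is whatever yours is):

```
clear limitpriv
clear scheduling-class
```

and feed it in from the global zone with zonecfg -z zone1 -f fixup.cfg (or run the two clear subcommands interactively in zonecfg), then reboot the zone.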

Tuesday 21 May 2013

Migrating a Solaris 10 zone to a Solaris 10 branded zone on Solaris 11


So there's an official Oracle procedure somewhere. Mine just goes into a little more detail and puts in workarounds for some of the bugs I found.

*Edit: Points 5 and 11 are not necessary with the latest Solaris packages installed.

Zone name: zone1

On old (Solaris 10) global zone:
  1. zoneadm -z zone1 ready
  2. cd /zone/path
  3.  find root -print | cpio -oP@ | gzip >/dumps/zone1.cpio.gz
  4. zonecfg -z zone1 export -f /dumps/zone1.cfg
  5. Copy config and dump across to new global zone
  6. zfs send and recv any extra filesystems across to new global zone
On new (Solaris 11) global zone
  1. Notes:
    1.  I create a zpool named "zone" where all my zone data will sit
    2.  I create a zfs filesystem zone/roots where all the zonepaths will live
    3.  Each zone gets a zfs filesystem off /zone where its mounted filesystems stem from, e.g. /zone/zone1 with /zone/zone1/home
  2. vi zone1.cfg*
    1. Change IP if needed
    2. Correct attached filesystems path if needed
    3. Set brand=solaris10
    4. Set ip-type=exclusive
    5. Change from net to anet
  3. zonecfg -z zone1 -f zone1.cfg
  4. zoneadm -z zone1 attach -a /dumps/zone1.cpio.gz
  5. Make sure the NIC gets configured on boot - fixes this
    1. vi /zone/roots/zone1/root/etc/rc3.d/S99sol11networkaround
      • #This is a workaround for Sol10 zones on Sol11
      • # Till the bug gets fixed
      • ifconfig net0 `cat /etc/hostname.net0`
      • sleep 3
      • svcadm clear svc:/network/physical:default
    2. vi /zone/roots/zone1/root/etc/hostname.net0
      • zone1 netmask 255.255.255.0 up
  6. Change root's home from /export/home/root to /root - might not be needed in your environment
    1. vi /zone/roots/zone1/root/etc/passwd
    2. mv /zone/zone1/home/root /zone/roots/zone1/root/
  7. If IP is to change, vi /zone/roots/zone1/root/etc/hosts
  8. Boot zone1 and zlogin
  9. ifconfig net0 plumb
  10. vi /etc/default/nfs and change: LOCKD_SERVERS=1024 - fixes this
  11. vi /etc/defaultrouter
  12. Reboot zone and test
  13. If you're changing the hostname, you'll have to do a sys-unconfig, and don't forget to update:
    1. /etc/hosts
    2. /etc/nsswitch.conf
    3. /etc/samba/smb.conf
    4. /etc/hostname.net0
    5. and you'll probably have to do a final reboot.

----
*Example zone1.cfg
create -b
set brand=solaris10
set zonepath=/zone/roots/zone1
set autoboot=false
set bootargs="-m verbose"
set ip-type=exclusive
add fs
set dir=/export/home
set special=/zone/zone1/home
set type=lofs
end
add fs
set dir=/oracle
set special=/zone/zone1/oracle
set type=lofs
end
add anet
set linkname=net0
set lower-link=aggr0
set allowed-address=192.12.23.52/24
set configure-allowed-address=true
set defrouter=192.12.23.1
set link-protection=mac-nospoof
set mac-address=random
end
add capped-memory
set physical=2G
end
----

Wednesday 15 May 2013

Time command Solaris 11

So all of the scripts so far that I've taken across from Solaris 10 to Solaris 11 have worked. Which is no surprise since I generally use the Bourne shell with the idea that it makes my scripts more acceptable in other environments.

One of my scripts didn't work though. It's a simple little script that writes a test file to the current directory and tells you how long it took.
#!/bin/sh

# This is a quick test of write speed.
# The filesize to write can be specified in gigabytes as a parameter.
# Doubt whether this script works if the test file takes more than 59m to write.
tempfile=gigfile.tmp
if [ $# = 1 ]; then
  filesizeGb=$1
else
  filesizeGb=1
fi
filesizeMb=`expr $filesizeGb \* 1024` || exit 1
# Find out the time taken to write the file
sync
timeforwrite=`time dd bs=1048576 count=$filesizeMb if=/dev/zero of=$tempfile 2>&1 | grep real | awk '{ print $NF }'`
timeforsync=`time sync 2>&1 | grep real | awk '{ print $NF }'`
#Take into account if time for write took more than a minute
if [ "`echo $timeforwrite | grep ':'`" != "" ]; then
  seconds=`echo $timeforwrite | awk -F':' '{ print $NF }'`
  minutes=`echo $timeforwrite | awk -F':' '{ print $1 }'`
  min2sec=`expr $minutes \* 60`
  timeforwrite=`echo "scale=4;$min2sec+$seconds" | bc`
fi
# Calculate the speed
timetaken=`echo "scale=4;$timeforwrite+$timeforsync" | bc`
writespeed=`echo "scale=2;$filesizeMb/$timetaken" | bc`
# Do some cleaning up
[ -f "$tempfile" ] && rm "$tempfile"
echo A "$filesizeGb"Gb file was written in $timetaken seconds at a speed of approximately $writespeed"Mb/s."
exit 0

The error message when running it on Solaris 11 isn't important, because it misleadingly pointed at "bc", i.e. my calculator. Looking into the script, I could see that it wasn't giving bc the correct variables to add; none, in fact.

Rather than talk you through everything, here's the conclusion. The "time" command works differently in Solaris 11 in two ways:
  1. It always outputs the time in ##m##s format, similar to the way it did in bash in Solaris 10 but not in the Bourne shell.
  2. It doesn't redirect into standard error as neatly as it did before. I'll give an example. If I wanted to store in a text file how long it takes the system to echo "hello world", previously I would run a command like this: time echo "hello world" 2>output.txt. This doesn't work in Solaris 11; I need to run it like this: (time echo "hello world") 2>output.txt
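With hindsight, a small converter that copes with both output shapes would have saved me the rewrite. A sketch (not the fix I actually applied):

```shell
#!/bin/sh
# Convert either "1m23.45s" (the Solaris 11 / bash style) or "1:23.45"
# (the old mm:ss style) into plain seconds.
to_seconds() {
    echo "$1" | awk '
        /m.*s$/ { sub(/s$/, ""); split($0, t, "m"); print t[1]*60 + t[2]; next }
        /:/     { split($0, t, ":");  print t[1]*60 + t[2]; next }
                { print $0 + 0 }
    '
}

to_seconds "1m23.45s"   # 83.45
to_seconds "1:23.45"    # 83.45
to_seconds "12.5"       # 12.5
```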


Tuesday 16 April 2013

Login Security Part 2 - Setting up a Solaris11 to authenticate to AD using SAMBA

Setting up a Solaris11 to authenticate to AD using SAMBA:
  1. Add to /etc/system and reboot (This is once off on the global zone only)
  2. vi /etc/samba/smb.conf*
  3. mv /etc/pam.conf /etc/pam.conf.bak
  4. mv /etc/pam.conf-winbind /etc/pam.conf
  5. svccfg -s name-service/switch
    > setprop config/password = "files winbind"
    > setprop config/group = "files winbind"
    > exit
  6. svcadm refresh name-service/switch
  7. net join -U ADUserThatCanAddToDomain -S ADDomainControllerName
  8. svcadm enable samba winbind
  9. getent passwd


*Truncated smb.conf:

[global]
        workgroup = <HELLO>
        #realm = <HELLO.COM>
        encrypt passwords = yes
        netbios aliases = <hostname>
        server string = <hostname>
        security = DOMAIN
        auth methods = winbind
        password server = <ADDomainControllerIP>
        unix password sync = Yes
        log level = 2 vfs:3
        syslog = 2
        log file = /var/log/samba/smb-%U-%M.log
        max xmit = 65535
        name resolve order = host bcast
        deadtime = 15
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        load printers = No
        disable spoolss = Yes
        show add printer wizard = No
        preferred master = No
        local master = No
        domain master = No
        dns proxy = No
        ldap ssl = no
        socket address =
        idmap uid = 10000-20000
        idmap gid = 10000-20000
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind use default domain = yes
        hide special files = Yes
        hide unreadable = Yes
        veto files = /lost+found/samba_recycle_bin/

Friday 5 April 2013

Login Security Part1

I'm busy looking at how we allow users to log into our systems and improving it. This post will explain how it's done currently.

Well first of all, there are two ways we let users access our Solaris machines, SSH and Samba. So running "netservices limited" pretty much closes down all the unnecessary stuff like ftp and rlogin.

Secondly, we want users to enter using their AD accounts. LDAP, methinks, is the most popular way to do this, but here we have a Samba/Winbind implementation, where our UNIX server is added into the AD domain. It's pretty simple when it works; when it doesn't, it can be frustrating. I'll do a separate post on setting up the Samba Winbind AD integration.

Lastly, when any user logs in (except root), a menu comes up. The user has to choose the relevant application user (e.g. oracle, ctma, uptime) and is changed to it. When a user logs out of the application user it's back to the menu. Exit the menu and the user is logged out.

The menu logs the user into the application user using ssh and key authentication, so the only password the user has to know is the AD password. Effectively, the only thing users can do as themselves on the machine is choose which application user to change to.

Notes:

  1. Access to Samba shares is controlled using /etc/samba/smb.conf
  2. Access to ssh login is using the AllowUsers option in /etc/ssh/sshd_config and setting up key authentication for user
  3. Because the change from user to application user is done using ssh - this screws with auditing.
*Most of this setup was in place when I got here, so I can't really take any credit for it
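The gist of the menu, as I understand our setup; the application users, key path and localhost target below are illustrative guesses, not our actual script:

```shell
#!/bin/sh
# Illustrative sketch of the login menu idea only.
MENU_KEY=/export/home/menu/.ssh/menu_key

build_ssh_cmd() {
    # Key-based ssh as the chosen application user, so the only
    # password a user ever types is their own AD one at login.
    printf 'ssh -i %s %s@localhost' "$MENU_KEY" "$1"
}

for u in oracle ctma uptime; do
    echo "Would run: $(build_ssh_cmd "$u")"
done
```

The real menu obviously loops, reads the user's choice and execs the command instead of printing it.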

Thursday 28 March 2013

Oracle Database 11gR2 Installation On Solaris 11.1 (x86-64)

Hey, this dude chose the same theme as me!

http://dbarohit.blogspot.com/2012/12/oracle-database-11g-release-2-11201.html

Pure coincidence but I think I might change my theme down the line. Anyway, I needed the info but for SPARC. I'll post my own SPARC version once I've got a procedure that I'm happy with.

Edit: Changed my blog theme.

Loads of "wrong magic number" disk errors in /var/adm/messages

If you're getting lots of the "wrong magic number" on your console:

Jun 29 20:38:11 HOSTNAME scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g5000c50017c1704b (sd33):
Jun 29 20:38:11 HOSTNAME Corrupt label; wrong magic number
Jun 29 20:38:11 HOSTNAME scsi: [ID 107833 kern.warning] WARNING: 

Then you might want to try labeling the disk in format. It worked for me. They were gatekeeper disks from the SAN, so I wasn't too worried about the data on them, but if you are (I'm not sure how labeling affects data), you might want to follow this dude's procedure (I haven't tried it): http://unix.lofland.net/2011/06/30/corrupt-label-wrong-magic-number-errors/

Wednesday 27 March 2013

Copying a Solaris 11 Zone

Sidenote: While looking for some other info, a post I read said that Oracle was starting to use the word "zone" instead of "container" going forward. I don't know how official that is but I'm going to start doing the same - it's less to type!

Okay, so one of my regular tasks in Solaris 10 is to copy a zone. My regular Solaris 10 procedure goes like this:

  1. Make a copy of the zone config (zonecfg -z zone export -f zone.cfg)
  2. Make a copy of the zonepath zfs filesystem (using zfs send and receive)
  3. Make a copy of any other filesystems  (using zfs send and receive)
  4. Edit the zone.cfg file to reference the copied zonepath and filesystems instead of the original ones. Also change the IP.
  5. Create the zone using the zone.cfg file (zonecfg -z zone1 -f zone.cfg)
  6. Attach the new zone (with -u)
  7. Boot the new zone and correct the hostname (using sys-unconfig).
For Solaris 11, points 2 and 7 have changed:
  1. Make a copy of the zone config (zonecfg -z zone export -f zone.cfg)
  2. Make a copy of the zonepath zfs filesystem (using zfs send and receive) including the zonepath's child filesystems but excluding the VARSHARE filesystem*
  3. Make a copy of any other filesystems  (using zfs send and receive)
  4. Edit the zone.cfg file to reference the copied zonepath and filesystems instead of the original ones. Also change the IP.
  5. Create the zone using the zone.cfg file (zonecfg -z zone1 -f zone.cfg)
  6. Attach the new zone (with -u)
  7. Boot the new zone, make sure export's mountpoint is /export and not /rpool/export and correct the hostname (using sysconfig configure -s).
---

*If you copied the zone and you included the VARSHARE filesystem, you're going to have problems when starting the zone. To fix this, log into the zone and do the following commands:
  zfs set canmount=noauto rpool/VARSHARE
  zfs set mountpoint=/var/share rpool/VARSHARE
  svcadm clear svc:/system/filesystem/minimal:default
  zfs set mountpoint=/export rpool/export

---

Tuesday 19 March 2013

Mounting filesystems on /var

If you're trying to mount filesystems below /var, you're going to have a bad time.

The container won't boot because it mounts the filesystems first and then /var after.

I have an application that likes to put stuff in directories under /var, so I created filesystems under /var for it. I'm first going to reinstall the application to see if I can specify directories in other places. Barring that, I'll probably just put a link to some other place under /var. It's kind of avoiding the issue, but I'm pressed for time; I'll try to solve the actual issue at a later stage.

Tuesday 12 March 2013

Solaris 11 Network Aggregation

Should be rather simple to set up:

dladm create-aggr -l net1 -l net3 aggr1
ipadm delete-ip aggr1
ipadm create-ip aggr1
ipadm create-addr -T static -a 192.168.100.172/24 aggr1

Source: http://blog.allanglesit.com/2011/03/solaris-11-network-configuration-advanced/

Monday 11 March 2013

Configuration for lockd server threads is suboptimal. Default value is 1024, configured value is 20

When you get this error in your messages:

Mar  1 10:39:07 machinename rpcmod: [ID 514475 kern.warning] WARNING: Configuration for lockd server threads is suboptimal. Default value is 1024, configured value is 20

This message is generated by a Solaris 10 branded container. The default on Solaris 11 is 1024, and somehow the message comes up on the global zone instead of the container. Log onto the container and change the value in /etc/default/nfs.
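The edit itself is a one-liner. A hedged sketch against a scratch copy rather than the container's real /etc/default/nfs:

```shell
#!/bin/sh
# Demo against a scratch copy; on the real container the target would
# be /etc/default/nfs (where the line may start out commented).
F=/tmp/nfs-default-demo
printf '#LOCKD_SERVERS=20\n' > "$F"

# Uncomment and raise the thread count to match the Solaris 11 default.
sed 's/^#*LOCKD_SERVERS=.*/LOCKD_SERVERS=1024/' "$F" > "$F.tmp" && mv "$F.tmp" "$F"
cat "$F"   # LOCKD_SERVERS=1024
```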

Saturday 9 March 2013

Solaris 10 branded zone - NIC vanishes after reboot


I’ve migrated a non-global Solaris 10 zone to a Solaris 10 branded zone. I made the zone an exclusive IP zone (samba’s AD integration i.e. winbind doesn’t work if I use a shared IP). However, when I rebooted the container, the NIC configuration got lost. I have to reconfigure it with ifconfig. Using /etc/hostname.net0 does not work. Seems to be no way to keep the config across reboots.

The problem is consistent across all the Solaris10 zones I’ve migrated.

Other symptoms: Physical network service doesn't start:

bash-3.2# svcs -xv
svc:/network/physical:default (physical network interfaces)
 State: maintenance since Wed Feb 27 13:31:54 2013
Reason: Start method exited with $SMF_EXIT_ERR_CONFIG.
   See: http://sun.com/msg/SMF-8000-KS
   See: man -M /usr/share/man -s 1M ifconfig
   See: /var/svc/log/network-physical:default.log
Impact: 6 dependent services are not running:
        svc:/milestone/network:default
        svc:/system/webconsole:console
        svc:/network/shares/group:default
        svc:/network/samba:default
        svc:/network/ssh:default
        svc:/network/winbind:default
bash-3.2#

Feedback from Oracle:
Bug 15802435 - SUNBT7182449 zonecfg configure-allowed-address does not work in solaris10 zones
which is a regression of the fix for software defect
Bug 15749195 - SUNBT7102421 allowed-address not configured at first boot after unconfiguration
The latter has been made available with Solaris 11.1, which explains why you didn't see the issue with Solaris 11.0 SRU 13.4 or earlier. Unfortunately there is no fix available officially for bug 15802435, yet.
Related error message (on the global zone):
Mar 12 14:02:53 hostname dlmgmtd[63]: [ID 183745 daemon.warning] Duplicate links in the repository: net0

EDIT: I updated to the latest patches. I've checked and now this problem is a thing of the past! Well done, Oracle, well done.

Wednesday 6 March 2013

Cannot do a "pkg update" on a new machine

"pkg update" kept telling me that there were no later packages for me in the repository. This seemed strange because after some googling I knew my packages weren't the latest.

Eventually I figured out the problem.

The default repository is the one anybody who installs Solaris has access to. However, my machine came pre-installed, and Oracle had used the support repository, so my "pkg update" needed the support repository to update from. Follow this link to change to the support repository: https://pkg-register.oracle.com/help/ (valid Oracle Support contract needed)


BEFORE
# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
solaris                               origin   online http://pkg.oracle.com/solaris/release/
# pkg list entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire                                            0.5.11-0.175.0.0.0.2.0     i--

AFTER
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F https://pkg.oracle.com/solaris/support/
# pkg list entire
NAME (PUBLISHER)                                  VERSION                    IFO
entire                                            0.5.11-0.175.1.4.0.5.0     i--
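For reference, the switch boils down to one pkg set-publisher call once you've registered and downloaded the key and certificate; the key/cert paths below are examples, yours are wherever you saved them:

```
# Drop the release origin and add the support one (key/cert paths are examples)
pkg set-publisher -k /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
                  -c /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
                  -G http://pkg.oracle.com/solaris/release/ \
                  -g https://pkg.oracle.com/solaris/support/ solaris
```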

Tuesday 5 March 2013

Specifying an IP for an exclusive IP address container

These are the container settings for Solaris 11 if you want to specify an IP in the zone config for an exclusive IP zone:

set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set allowed-address=<IP>/24
set defrouter=<routerIP>
set configure-allowed-address=true
end


NOTE: I'm having problems when migrating a Solaris 10 non-global zone to a Solaris 10 branded container and then making it an exclusive IP address. It doesn't keep the network card information. Logging a call with Oracle soon...

Tuesday 26 February 2013

Solaris 11 boot problems - Filesystem services

One of our customisations on Solaris 10 is to change root's home directory from "/" to "/export/home/root". The default on Solaris 11 is that root's home directory is "/root". Leave it like this; Solaris 11 doesn't cope well with that being changed.

UPDATE: It might be because I was trying to set root's home to "/export/home/root" specifically, rather than to anything other than "/root". Directories "/export" and "/export/home" in Solaris 11 are created automatically as part of rpool. This can cause problems even if you're migrating a Solaris 10 non-global zone to run as a Solaris 10 branded zone on Solaris 11. So after your migration, change root's home directory to "/root".

Mirroring rpool and creating a swap pool

When I order machines, I want a minimum of 4 internal disks: two for the OS and two for swap. You can't get small disks anymore, so you end up with huge amounts of swap space, but that's how I roll.

The steps are the same as Solaris 10.

First use "format" to make sure all your disks are present, and "zpool status" to see your current pools and the disks in them.

Check the partition table of your current root disk and format another disk in the same manner (i.e. all the space in one slice):

partition> p
Volume:  solaris
Current partition table (original):
Total disk cylinders available: 46873 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 46872  279.38GB    (46872/0/0) 585900000
  1 unassigned    wm       0            0         (0/0/0)             0
  2     backup    wu       0 - 46872  279.38GB    (46873/0/0) 585912500
  3 unassigned    wm       0            0         (0/0/0)             0
  4 unassigned    wm       0            0         (0/0/0)             0
  5 unassigned    wm       0            0         (0/0/0)             0
  6 unassigned    wm       0            0         (0/0/0)             0
  7 unassigned    wm       0            0         (0/0/0)             0



Setting up rpool to be mirrored:
zpool attach -f rpool <disk1s0> <disk3s0>
Wait for resilvering to finish (check zpool status) then run:
installboot -f -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<disk3s0>
 

Adding swap:
zpool create swappool mirror <disk2> <disk4>
zfs create -V 250G swappool/swap1
swap -a /dev/zvol/dsk/swappool/swap1

echo "/dev/zvol/dsk/swappool/swap1    -    -    swap   - no  -" >>/etc/vfstab

At this point you should really reboot and make sure all your changes are still there. Booting off the mirrored root disk as a test is also good practice.

T4 received and ILOM password change

I've received my new T4. Amazing how machines just get smaller and smaller. This T4 has four 8-core CPUs and 256GB of RAM. It should be faster than our current M5000s, but with slightly less redundancy.

Solaris 11 is preinstalled and the RSC is an ILOM. Default root password for ILOM is still "changeme". To change the ILOM root password:

    set /SP/users/root password=<password>

When you start up the machine, it takes you into the setup interface for Solaris11. Pretty much the same as Solaris10.