upgrade to an all-zfs root

Now that I’m successfully running a zfs root, I don’t need my old ufs root any more, so it should be a simple matter of removing the old ufs boot environment and increasing the size of the new zfs root pool.

Right?  Well, no actually: there seems to be a bug or three.

# lustatus
Boot Environment           Is       Active Active    Can    Copy     
Name                       Complete Now    On Reboot Delete Status   
-------------------------- -------- ------ --------- ------ ----------
snv_98                     yes      no     no        yes    -
snv_102                    yes      yes    yes       no     -
# ludelete snv_98
System has findroot enabled GRUB
Checking if last BE on any disk...
BE <snv_98> is not the last BE on any disk.
Updating GRUB menu default setting
Changing GRUB menu default setting to <3>
ERROR: Failed to copy file </boot/grub/menu.lst> to top level dataset for BE <snv_98>
ERROR: Unable to delete GRUB menu entry for deleted boot environment <snv_98>.
Unable to delete boot environment.

This is CR6718038/CR6715220/CR6743529. A quick workaround would be to edit /usr/lib/lu/lulib and replace the following in line 2937:
lulib_copy_to_top_dataset "$BE_NAME" "$ldme_menu" "/${BOOT_MENU}"
with
lulib_copy_to_top_dataset `/usr/sbin/lucurr` "$ldme_menu" "/${BOOT_MENU}"
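
If you’d rather script the change (and keep a backup of lulib), something like the following should do it. This is only a sketch: the line number and exact text can differ between builds, so check with grep first and eyeball the result before rerunning ludelete.

# cp /usr/lib/lu/lulib /usr/lib/lu/lulib.orig
# grep -n 'lulib_copy_to_top_dataset "$BE_NAME"' /usr/lib/lu/lulib.orig
# sed 's|lulib_copy_to_top_dataset "$BE_NAME" "$ldme_menu" "/${BOOT_MENU}"|lulib_copy_to_top_dataset `/usr/sbin/lucurr` "$ldme_menu" "/${BOOT_MENU}"|' /usr/lib/lu/lulib.orig > /usr/lib/lu/lulib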

then rerun the ludelete:

# ludelete snv_98
System has findroot enabled GRUB
Checking if last BE on any disk...
BE <snv_98> is not the last BE on any disk.
Updating GRUB menu default setting
Changing GRUB menu default setting to <3>
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <snv_102> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <snv_98> deleted.
#

Then I needed to remove the old ufs boot and swap slices. Here are the old and new layouts:

partition> print
Current partition table (original):
Total disk cylinders available: 12047 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       3 -  1962       15.01GB    (1960/0/0)   31487400
  1       swap    wu    1963 -  2224        2.01GB    (262/0/0)     4209030
  2     backup    wm       0 - 12046       92.28GB    (12047/0/0) 193535055
  3 unassigned    wm    2225 -  4182       15.00GB    (1958/0/0)   31455270
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wm    4183 -  6793       20.00GB    (2611/0/0)   41945715
  7       home    wm    6794 - 12046       40.24GB    (5253/0/0)   84389445
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wu       1 -     2       15.69MB    (2/0/0)         32130

partition>

partition> print
Current partition table (original):
Total disk cylinders available: 12047 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       3 -  4182       32.02GB    (4180/0/0)   67151700
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wm       0 - 12046       92.28GB    (12047/0/0) 193535055
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wm    4183 -  6793       20.00GB    (2611/0/0)   41945715
  7       home    wm    6794 - 12046       40.24GB    (5253/0/0)   84389445
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wu       1 -     2       15.69MB    (2/0/0)         32130

partition>
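
The repartitioning itself can be done from format(1M)’s partition menu. Roughly it goes like this, but treat it as a sketch only: the prompts are paraphrased, the cylinder numbers are from my layout above, and it assumes the old swap slice has already been taken out of /etc/vfstab and deactivated with swap -d first.

# format
(select c2t1d0 from the disk menu)
format> partition
partition> 1        (old swap slice: tag unassigned, flag wu, starting cyl 0, size 0)
partition> 3        (old ufs root slice: tag unassigned, flag wu, starting cyl 0, size 0)
partition> 0        (zfs root slice: tag root, flag wm, starting cyl 3, size 4180c to take in the freed cylinders)
partition> label
partition> quit
format> quit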

The size of my pools before:

# zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rootpool    15G  8.54G  6.46G    56%  ONLINE  -
tank        40G  5.65G  34.3G    14%  ONLINE  -
tank2     19.9G   652K  19.9G     0%  ONLINE  -
#

Then reboot, oops!!!

It just drops to the grub> prompt, because my old ufs slice held all the boot information and I’d just deleted it, so GRUB can’t find any of it . . . but this is simple to restore, as long as you have a recent dvd image handy (so it can recognise and mount the zfs pool).

Insert and boot from the dvd image, select single user mode.

Mount the rootpool as r/w on /a (it should prompt automatically for this).
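
If it doesn’t prompt, the pool can be imported and mounted by hand, along these lines (a sketch assuming the usual rootpool/ROOT/<BE name> layout, so adjust the names to suit):

zpool import -f -R /a rootpool
zfs mount rootpool/ROOT/snv_102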

At the command prompt, type:

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0

then reboot.  I love it when a plan comes together. Pool sizes after the reboot:

# zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rootpool    32G  8.54G  23.5G    26%  ONLINE  -
tank        40G  5.54G  34.5G    13%  ONLINE  -
tank2     19.9G   722K  19.9G     0%  ONLINE  -
#
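
It’s also worth a quick sanity check that the GRUB menu now only lists the remaining boot environment; bootadm will show the active menu and its entries without hunting down menu.lst by hand:

# bootadm list-menu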
