Installing vCloud Director 1.5 can be like installing a VCR. For the most part, you can get through it without reading the instructions. However, there may be some advanced or obscure features (such as programming the clock or automatically recording a channel) which require knowledge you’ll only pick up by referring to the documentation. Such is the case with vCD Transfer Server Storage. Page 13 of the vCloud Director Installation and Configuration Guide discusses Transfer Server Storage as follows:
To provide temporary storage for uploads and downloads, an NFS or other shared storage volume must be accessible to all servers in a vCloud Director cluster. This volume must have write permission for root. Each host must mount this volume at $VCLOUD_HOME/data/transfer, typically /opt/vmware/vcloud-director/data/transfer. Uploads and downloads occupy this storage for a few hours to a day. Transferred images can be large, so allocate at least several hundred gigabytes to this volume.
This is the only VMware documentation I could find covering Transfer Server Storage. A bit of extra information about Transfer Server Storage is revealed during the initial installation of the vCD cell, which basically states that at that point you should configure Transfer Server Storage to point to shared NFS storage for all vCD cells to use; if there is just a single cell, local cell storage may be used:
If you will be deploying a vCloud Director cluster you must mount the shared transfer server storage prior to running the configuration script. If this is a single server deployment no shared storage is necessary.
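For a multi-cell deployment, that shared volume is typically an NFS export mounted at the transfer location on every cell before the configuration script runs. As a rough sketch (the NFS server name and export path below are hypothetical, substitute your own):

mount nfs01.example.com:/exports/vcd-transfer /opt/vmware/vcloud-director/data/transfer

The matching /etc/fstab entry, so the mount survives reboots, would look something like:

nfs01.example.com:/exports/vcd-transfer /opt/vmware/vcloud-director/data/transfer nfs defaults 0 0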
Transfer Server Storage is used for uploading and downloading (exporting) vApps. A vApp is one or more virtual machines with associated virtual disks. Small vApps in .OVF format may consume 1GB or less depending on their contents. Larger vApps could be several hundred GBs or beyond. By default, Transfer Server Storage will draw capacity from /. Lack of adequate Transfer Server Storage capacity will result in the inability to upload or download vApps (it could also mean you're out of space on /). Long story short, if you skipped the brief instructions on Transfer Server Storage during your build of a RHEL 5 vCD cell, at some point you may run short on Transfer Server Storage and, even worse, run / out of available capacity.
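A quick way to check which filesystem is actually backing the transfer location, and how much headroom it has, is to point df at it:

df -h /opt/vmware/vcloud-director/data/transfer

On a default installation with no dedicated mount, this reports the / filesystem, which is exactly the situation described above.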
I ran into just such a scenario in the lab and thought I'd simply add a new virtual disk with adequate capacity, create a new mount point, and then adjust the contents of /etc/profile.d/vcloud.sh (export VCLOUD_HOME=/opt/vmware/vcloud-director) to point vCD to the added capacity. I quickly found out this procedure does not work: the vCD portal dies and won't start again. I did some searching and wound up at David Hill's vCloud Director FAQ, which confirms the transfer folder cannot be moved (Chris Colotti has also done some writing on Transfer Server Storage here, in addition to related content I found on the vSpecialist blog). However, we can add capacity to that folder by creating a new mount at that folder's location.
I was running into difficulties trying to extend /, so I collaborated with Bob Plankers (a Linux and virtualization guru who authors the blog The Lone Sysadmin) to identify the right steps, in the right order, to get the job done properly for vCloud Director. Bob spent his weekend time helping me out in great detail, and for that I am thankful. You rule, Bob!
Again, consider the scenario: there is not enough Transfer Server Storage capacity, or Transfer Server Storage has consumed all available capacity on /. The following steps will grow an existing vCloud Director cell virtual disk by 200GB and then extend the Transfer Server Storage by that amount. The majority of the steps will be run via SSH, the local console, or a terminal (a condensed command summary follows the list):
- Verify rsync is installed: type rsync and press Enter. All vCD supported versions of RHEL 5 (Updates 4, 5, and 6) should already have rsync installed. If a minimalist version of RHEL 5 was deployed without rsync, execute yum install rsync to install it (RHN registration required).
- Gracefully shut down the vCD Cell.
- Now would be a good time to capture a backup of the vCD cell as well as the vCD database if there is just a single cell deployed in the environment.
- Grow the vCD virtual disk by 200 GB.
- Power the vCD cell back on and at boot time go into single user mode by interrupting GRUB (press an arrow key to stop the boot timer and move the kernel selection). Press 'a' to append boot parameters, add the word single to the end (use a space separator), and hit Enter.
- Use fdisk /dev/sda to partition the new empty space (in single user mode you are already root, so sudo is not needed):
- Enter ‘n’ (for new partition)
- Enter ‘p’ (for primary)
- Enter a partition number. For a default installation of RHEL 5 Update 6, partitions 1 and 2 will be in use, so this new partition will likely be 3.
- First cylinder… it’ll offer a number, probably the first free cylinder on the disk. Hit enter, accept the default.
- Last cylinder… hit enter. It’ll offer you the last cylinder available. Use it all!
- Enter ‘x’ for expert mode.
- Enter ‘b’ to adjust the beginning sector of the partition.
- Enter the partition number (3 in this case).
- In this step, align the partition to a multiple of 128. It'll ask for the "new beginning of data" and offer a default number. Take that default number and round it up to the nearest number that is evenly divisible by 128. So if the number is 401660, I divide it by 128 to get 3137.968, round that up to 3138, and multiply by 128 again to get 401664. That's where I want my partition to start for good I/O performance, so I enter that.
- Now enter ‘w’ to write the changes to disk. It’ll likely complain that it cannot reread the partition table but this is safe to ignore.
- Reboot the vCD cell using shutdown -r now
- When the cell comes back up, we need to add that new space to the volume group.
- pvcreate /dev/sda3 to initialize it as an LVM physical volume. (If you used partition #4, then it would be /dev/sda4.)
- vgextend VolGroup00 /dev/sda3 to grow the volume group.
- Now create a filesystem:
- lvcreate --size 199G --name transfer_lv VolGroup00 to create a logical volume 199 GB in size named transfer_lv. Adjust the numbers as needed. Notice we cannot use the entire space available due to slight LVM overhead.
- mke2fs -j -m 0 /dev/VolGroup00/transfer_lv to create an ext3 filesystem on that logical volume. The -j parameter enables journaling, which makes it ext3. The -m 0 parameter tells mke2fs to reserve 0% of the space for the superuser for emergencies; normally it reserves 5%, which on a dedicated data volume is simply a waste of 5% of your virtual disk.
- Now we need to mount the filesystem at a temporary location so we can copy over the contents of /opt/vmware/vcloud-director/data/transfer first. mount /dev/VolGroup00/transfer_lv /mnt will mount it on /mnt, which is a good temporary spot.
- Stop the vCloud Director cell service to close any open files or transactions in flight with service vmware-vcd stop.
- rsync -av /opt/vmware/vcloud-director/data/transfer/ /mnt to make an exact copy of what's there. Mind the slashes; they're important. The trailing slash on the source tells rsync to copy the directory's contents rather than the directory itself.
- Examine the contents of /mnt to be sure everything from /opt/vmware/vcloud-director/data/transfer was copied over properly.
- rm -rf /opt/vmware/vcloud-director/data/transfer/* to delete the file and directory contents in the old default location. If you mount over them instead, the data will still be there consuming disk space, but you won't be able to see it (you'll see lost+found instead). Make sure you have a good copy in /mnt first!
- umount /mnt to unmount the temporary location.
- mount /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer (all one line) to mount it in the right spot.
- df -h to confirm the mount point is there and vCD data (potentially along with transient transfer storage files) is consuming some portion of it.
- To auto mount correctly on reboot:
- nano -w /etc/fstab to edit the filesystem mount file.
- At the very bottom, add a new line (with no blank lines in between) that looks like the rest, but with our new mount point. Use tab separation between the fields. It should look like this:
/dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer/ ext3 defaults 1 2
- Ctrl-X to quit, ‘y’ to save modified buffer, enter to accept the filename.
- At this time we can either start the vCD cell with service vmware-vcd start or reboot to ensure the new storage automatically mounts and the cell survives reboots. If after a reboot the vCD portal is unavailable, it’s probably due to a typo in fstab.
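As promised before the list, here is the command portion of the steps condensed into a single sequence for quick reference. It assumes the new space became partition 3 (/dev/sda3) and reuses the volume group, logical volume, and mount names from the walkthrough; adjust these to match your environment:

pvcreate /dev/sda3                                 # initialize the new partition for LVM
vgextend VolGroup00 /dev/sda3                      # add it to the volume group
lvcreate --size 199G --name transfer_lv VolGroup00 # carve out the logical volume
mke2fs -j -m 0 /dev/VolGroup00/transfer_lv         # ext3, no reserved blocks
mount /dev/VolGroup00/transfer_lv /mnt             # temporary mount for the copy
service vmware-vcd stop                            # stop the cell before touching its data
rsync -av /opt/vmware/vcloud-director/data/transfer/ /mnt
rm -rf /opt/vmware/vcloud-director/data/transfer/* # only after verifying the copy in /mnt!
umount /mnt
mount /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer
service vmware-vcd start                           # or reboot, after adding the /etc/fstab entry above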
This procedure, albeit a bit lengthy and detailed, worked well and was the easiest solution for my particular scenario. There are some other approaches which would solve this problem. One is almost identical to the above, but instead of extending the virtual disk of the vCD cell, we could add a new virtual disk with the required capacity and then mount it (sketched below). Another option would be to build a new vCloud Director server with adequate space and then decommission the first vCD server. This wasn't an option for me because the certificate key files for the first vCD server no longer existed.
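For completeness, the new-virtual-disk variant differs only at the LVM layer. The sketch below assumes the added disk shows up as /dev/sdb (the device name is an assumption, confirm it with fdisk -l):

pvcreate /dev/sdb             # a whole disk can be an LVM physical volume, no partitioning required
vgextend VolGroup00 /dev/sdb  # add the new disk to the volume group
# then continue with the lvcreate, mke2fs, copy, and mount steps exactly as above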