Portal:Cloud VPS/Admin/VM images
{{draft}}
Cloud VPS uses special VM images that contain all the customizations required for our environment.


= Builders =

There is a builder server for each Linux distribution, installed with the appropriate build tools:

{| class="wikitable"
|-
! OS !! Builder Server !! Build Tool
|-
| Debian Stretch || labs-bootstrapvz-stretch.openstack.eqiad.wmflabs || bootstrap-vz
|-
| Debian Jessie || labs-bootstrapvz-jessie.openstack.eqiad.wmflabs || bootstrap-vz (modified)
|-
| Ubuntu Trusty || vmbuilder-trusty.openstack.eqiad.wmflabs || vmbuilder
|}


= How To Build =

== Debian Stretch ==
 
* Login to <code>labs-bootstrapvz-stretch.openstack.eqiad.wmflabs</code>
 
* Build and convert image


<source lang="bash">
sudo su -
cd /target
bootstrap-vz --pause-on-error /etc/bootstrap-vz/manifests/labs-stretch.manifest.yaml
qemu-img convert -f raw -O qcow2 debian-stretch-amd64-20181121.raw debian-stretch-amd64-20181121.qcow2
</source>
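If you script these two steps, the qcow2 file name can be derived from the raw name instead of being typed twice. A minimal sketch (the <code>to_qcow2</code> helper is hypothetical, not part of the puppetized tooling):

<source lang="bash">
# to_qcow2: derive the .qcow2 file name from a .raw image name
# (hypothetical helper mirroring the qemu-img convert step above).
to_qcow2() {
  printf '%s\n' "${1%.raw}.qcow2"
}

to_qcow2 debian-stretch-amd64-20181121.raw   # prints debian-stretch-amd64-20181121.qcow2
</source>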
 
== Debian Jessie ==
 
* Login to <code>labs-bootstrapvz-jessie.openstack.eqiad.wmflabs</code>
 
* Build and convert image
 
<source lang="bash">
sudo su -
cd /target
bootstrap-vz /etc/bootstrap-vz/manifests/labs-jessie.manifest.yaml
qemu-img convert -f raw -O qcow2 debian-jessie.raw debian-jessie.qcow2
</source>


=== Custom bootstrap-vz package ===


Building Jessie images required a custom bootstrap-vz package (not needed for Stretch).

Andrew built <code>python-bootstrap-vz_0.9wmf-1_all.deb</code> using stdeb. It was built from the <code>development</code> branch on 2014-12-22 with commit <code>255f0624b49dbcf6cacccd3b2f1fa7c0cc2bcc8d</code> and the patch below.


* Checkout the bootstrap-vz source from https://github.com/andsens/bootstrap-vz
* Add this patch to make stdeb's dependencies work out properly:


  diff --git a/setup.py b/setup.py
  index f7b97ac..349cfdc 100644
  --- a/setup.py
  +++ b/setup.py
  @@ -22,11 +22,8 @@ setup(name='bootstrap-vz',
         install_requires=['termcolor >= 1.1.0',
                           'fysom >= 1.0.15',
                           'jsonschema >= 2.3.0',
  -                        'pyyaml >= 3.10',
                           'boto >= 2.14.0',
                           'docopt >= 0.6.1',
  -                        'pyrfc3339 >= 1.0',
  -                        'requests >= 2.9.1',
                           ],
         license='Apache License, Version 2.0',
         description='Bootstrap Debian images for virtualized environments',

* Alter the version tag in <code>bootstrapvz/__init__.py</code> as needed
* Install python-stdeb
* python setup.py --command-packages=stdeb.command bdist_deb
* ls deb_dist/*.deb


== Ubuntu Trusty (Legacy) ==

{{warning | Ubuntu has been deprecated. These instructions are only for historical purposes. }}


We use [https://help.ubuntu.com/12.04/serverguide/jeos-and-vmbuilder.html vmbuilder] to build our custom Ubuntu images. The vmbuilder configuration is in puppet in the labs-vmbuilder module. It can be added to a node using role::labs::vmbuilder. Here's a set of steps to build and import the images:

On vmbuilder-trusty.openstack.eqiad.wmflabs:

<source lang="bash">
puppet agent -tv
cd /srv/vmbuilder
rm -Rf ubuntu-trusty
vmbuilder kvm ubuntu -c /etc/vmbuilder.cfg -d /srv/vmbuilder/ubuntu-trusty -t /srv/vmbuilder/tmp --part=/etc/vmbuilder/files/vmbuilder.partition
</source>

Note the name of the tmp file generated; for instance: "Converting /tmp/tmpD0yIQa to qcow2, format /mnt/vmbuilder/ubuntu-trusty/tmpD0yIQa.qcow2"


= How To Test =

You can boot an image locally for testing, like this:
 
<code>
sudo qemu-system-x86_64 -nographic -serial mon:stdio -enable-kvm image_name.raw
</code>
 
If the command above does not work, you can try the following command (beware: boot logs will be suppressed):
 
<code>
qemu-system-x86_64 image_name.raw --curses
</code>
 
Having a working login account for test purposes is left as an exercise to the reader. bootstrap-vz creates one by default (login:root / passwd:test) but our config wisely disables it.
 
= How To Deploy =


Images are deployed to the OpenStack Glance service.

* Copy the .qcow2 file to /tmp on the <code>cloudcontrol1003.wikimedia.org</code> server
 
Since the file has to cross the Cloud VPS / Production boundary, you can copy it from the builder server to your laptop (using your Cloud Services root key) and then from your laptop to cloudcontrol1003 (using your production key):


<source lang="bash">
rsync --progress -v -e ssh root@labs-bootstrapvz-stretch.openstack.eqiad.wmflabs:/target/debian-stretch-amd64-20181121.qcow2 .
rsync --progress -v -e ssh debian-stretch-amd64-20181121.qcow2 cloudcontrol1003.wikimedia.org:/tmp/
</source>
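A large image crossing two hops can be silently truncated, so comparing checksums on both ends is a reasonable safeguard (a suggested extra step, not part of the documented procedure; <code>same_checksum</code> is a hypothetical helper):

<source lang="bash">
# same_checksum FILE_A FILE_B: succeed only if both files have the same
# sha256 digest (hypothetical helper for verifying the two-hop copy).
same_checksum() {
  a=$(sha256sum < "$1" | cut -d' ' -f1)
  b=$(sha256sum < "$2" | cut -d' ' -f1)
  [ "$a" = "$b" ]
}
</source>

On the remote end, running <code>sha256sum</code> on the file under /tmp on cloudcontrol1003 produces the digest to compare against.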


* Login to <code>cloudcontrol1003.wikimedia.org</code>


* Create new image in Glance:


<source lang="bash">
sudo su -
source ~/novaenv.sh
cd /tmp
openstack image create --file debian-stretch-amd64-20181121.qcow2 --disk-format "qcow2" --container-format "ovf" --public "debian-9.6-stretch"
</source>


* Test new image by booting a new VM with it (if the image is faulty, remember to delete the test VM and the faulty image)


* Get a list of existing images


<source lang="bash">
openstack image list
</source>
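When several images already carry a "(deprecated ...)" suffix, their IDs can be pulled out of the table output with a little awk. A sketch (assumes the default table layout of <code>openstack image list</code>, with the ID in the second |-separated column):

<source lang="bash">
# find_deprecated: print the ID column of rows whose name mentions
# "deprecated" (sketch; feed it `openstack image list` table output).
find_deprecated() {
  awk -F'|' '/deprecated/ { gsub(/ /, "", $2); print $2 }'
}

# Example against a captured row:
printf '| f3a1 | debian-9.5-stretch (deprecated 2018-11-21) | active |\n' | find_deprecated   # prints f3a1
</source>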


* Append "deprecated" to old images and remove properties (only if new image is working as expected)


<source lang="bash">
openstack image set --name "debian-9.5-stretch (deprecated <date>)" <old-image-id>
nova image-meta <old-image-id> delete default
nova image-meta <old-image-id> delete show
</source>
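If you script the rename, the date placeholder above can be generated rather than typed. A sketch (the variable names are illustrative; substitute the image's current name):

<source lang="bash">
# Build the deprecation name with today's date (sketch; OLD_NAME is
# illustrative, not a fixed value).
OLD_NAME="debian-9.5-stretch"
NEW_NAME="${OLD_NAME} (deprecated $(date +%Y-%m-%d))"
echo "$NEW_NAME"   # e.g. debian-9.5-stretch (deprecated 2018-11-22)
</source>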


Passing <code>--purge-props</code> to <code>openstack image set</code> should be enough to clear all properties but it's currently not available in our OpenStack version. The <code>nova image-meta</code> commands serve the same purpose but you have to delete each property individually. This should be reviewed when OpenStack is upgraded.


* Make the new image the default for new instances


<source lang="bash">
openstack image set --name "debian-9.6-stretch" <new-image-id> --property show=true --property default=true
</source>


Notice in the above <code>openstack image set</code> command the use of properties. If <code>default=true</code>, the image will be the default image selected in the instance creation interface. Purging properties removes the 'default' state.
 
= How To Delete =


TODO

= How To Deactivate Obsolete Images =

We used to use the 'show=true' property to ensure that an image appeared on Wikitech. Now instead we use the image state, where only images with state=active appear in the GUI (both on Wikitech and in Horizon). To deactivate your obsolete image:


<source lang="bash">
source /etc/novaenv.sh
openstack image set --deactivate <image-id>
</source>


If you need to reactivate it for some reason:


<source lang="bash">
source /etc/novaenv.sh
openstack image set --activate <image-id>
</source>


Please note that we usually just "deprecate" images by changing their names. Deactivating an image is a more extreme step to be used when you do not want any users to have access to it.
 
= Internals =
 
== bootstrap-vz configuration files ==
 
Bootstrap-vz uses source files from /etc/bootstrap-vz. These files are puppetized, so you'll want to disable Puppet if you change them.


{| class="wikitable"
|-
! OS !! Configuration File
|-
| Debian Stretch || /etc/bootstrap-vz/manifests/labs-stretch.manifest.yaml
|-
| Debian Jessie || /etc/bootstrap-vz/manifests/labs-jessie.manifest.yaml
|}


 
Bootstrap-vz also uses several source files that are standard local config files on the build host. For a complete list of these files, look at the 'file_copy:' section in /etc/bootstrap-vz/manifests/labs-{stretch,jessie}.manifest.yaml
 
== First Boot ==


The first boot of the VM image is a key moment in the setup of an instance in Cloud VPS.

This is usually done by means of the /root/firstboot.sh script which is called by means of /etc/rc.local.

The script will do:

* some LVM configuration
* run DHCP request for configuration
* name/domain resolution to autoconfigure the VM
* initial puppet autoconfiguration (cert request, etc)
* initial configuration of nscd/nslcd
* initial apt updates
* NFS mounts if required
* final puppet run to fetch all remaining configuration (ssh keys, packages, etc)

Until the last point, the instance may have limited connectivity or usability.


= Troubleshooting =
 
== Common Issues ==


These problems may vary from deployment to deployment (main, labtest, labtestn), but they could be common.

* Image does not have the puppet master CA, so it fails to fetch catalog (see [[phab:T181523]])
* Image does not have the puppet master CRL, so it fails to fetch catalog (see [[phab:T181523]])
* Image doesn't correctly resolve the hostname/domain name (so it fails to fetch its own puppet catalog)


== How To Inspect Disk Contents ==
 
If you want to explore and edit the disk image of a ''live instance'', follow this procedure:


* Locate the virt server in which the VM is running
<syntaxhighlight lang="shell-session">
cloudvirt1020:~ $ for i in $(sudo virsh list --all | grep i.* | awk -F' ' '{print $2}') ; do echo -n "$i " ; sudo virsh dumpxml $i | grep nova:name ; done
i-0000025d      <nova:name>puppettestingui</nova:name>
i-00000262      <nova:name>aptproxy2</nova:name>
i-00000263      <nova:name>t2</nova:name>
i-00000264      <nova:name>t3</nova:name>
</syntaxhighlight>

* Once you know the internal instance name (i-xxxxx), locate the disk file
<syntaxhighlight lang="shell-session">
cloudvirt1020:~ $ sudo virsh dumpxml i-00000264 | grep "source file" | grep disk
       <source file='/var/lib/nova/instances/09865310-b440-4dc7-99ab-fb5f35be04fb/disk'/>
</syntaxhighlight>
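The path can also be pulled out of the XML non-interactively. A small sed sketch (the <code>extract_disk</code> helper is hypothetical, and assumes the source file='...' attribute layout shown above):

<source lang="bash">
# extract_disk: print the first source-file path found in `virsh dumpxml`
# output (hypothetical helper; matches the source file='...' attribute).
extract_disk() {
  sed -n "s/.*<source file='\([^']*\)'.*/\1/p" | head -n 1
}

# Example:
sudo virsh dumpxml i-00000264 | extract_disk
</source>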


* Shutdown the machine (from inside, from horizon, using virsh or whatever)
* Copy the disk file to your home
<syntaxhighlight lang="shell-session">
cloudvirt1020:~ $ cp /var/lib/nova/instances/09865310-b440-4dc7-99ab-fb5f35be04fb/disk t3-disk.qcow2
</syntaxhighlight>


* Disable puppet and install libguestfs-tools
<syntaxhighlight lang="shell-session">
cloudvirt1020:~ $ sudo puppet agent --disable "inspecting instance disk" ; sudo aptitude install libguestfs-tools -y
[...]
</syntaxhighlight>


* Create a destination directory and mount the disk!
<syntaxhighlight lang="shell-session">
cloudvirt1020:~ $ mkdir mnt ; sudo guestmount -a t3-disk.qcow2 -m /dev/sda3 --rw mnt
</syntaxhighlight>


* You can now read/write the instance disk in the mount point
* When done, umount, copy back the instance disk and start the instance!


= See Also =


* [[Portal:Cloud_VPS/Admin/Maintenance]]: Information on Cloud VPS maintenance tasks


[[Category:VPS admin]]