OpenStack cloud for your home lab

This is going to be a series of blog posts about running OpenStack for a home lab. It’s not for everyone, but I’ve found it very useful.

OpenStack is a great set of microservices that, run together, provide cloud interfaces into your home lab. I use a Linux desktop for everything and prefer libvirt/virt-manager for most virtual machines (VMs). But at some point, it's very useful to have a VM server where you can run a script and get a virtual network with VMs. It's also really nice to have Ansible and other tools with cloud interfaces, so that you can play locally and still be compatible with the major clouds when you want to run things there.
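For example, the OpenStack CLI and Ansible's openstack.cloud modules both read a `clouds.yaml` file. A minimal sketch of a home-lab entry might look like this (the cloud name, address, and credentials here are placeholders, not my actual setup):

```yaml
# ~/.config/openstack/clouds.yaml (hypothetical home-lab entry)
clouds:
  homelab:
    auth:
      auth_url: http://192.168.1.20:5000/v3   # Keystone endpoint (placeholder)
      username: admin
      password: secret
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

With something like this in place, `openstack --os-cloud homelab server list` works against the lab the same way it would against a public cloud.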

OpenStack is run by a large number of massive companies for their internal IT, as well as by some public clouds like OVH. While it's designed for those use cases, it's quite possible to run some of these microservices locally on a single machine. The easiest approach is running the pre-built DevStack inside a VM. At some point, if this is enticing, running these services with your own customization is going to be very helpful.
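As a sketch, DevStack is driven by a `local.conf` file in the devstack checkout; a minimal one looks roughly like this (the passwords are placeholders):

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```

Running `./stack.sh` inside the VM then builds and starts the services.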

General tips for running OpenStack at home

There are a few things I’ve learned over years of running OpenStack that have a pretty big impact if you want to run it locally.

  1. If you want to use block storage, all of the officially supported drivers target big, expensive network storage systems. It’s not difficult to use a local ZFS driver instead, and it makes running VMs in your home lab a lot nicer. I have a driver in my GitLab that I’ve kept updated; it’s based on an older ZFS Cinder driver with various updates.

  2. If you use Ubuntu or another distribution that packages OpenStack, you likely get only the initial official release, without point updates. It’s not uncommon for a problem to be fixed in a minor update, and you only get those updates when you sign up for paid OpenStack support. Alternatively, you can run OpenStack from the official open-source PyPI packages and get all of the updates. If you don’t want to pay money, this is the best approach. There are official methods of deploying OpenStack, like ‘Charmed OpenStack’, but I’ve found them not flexible enough for a single-machine installation.

  3. When you make a VM in OpenStack, it gets a reservation on a compute host. That means that even if your VMs aren’t all running, they hold CPU and memory allocations. If you hit a limit, you can ‘shelve’ a VM, which removes the reservation while letting you ‘unshelve’ and use the VM later. This can also be used to move a VM from one compute host to another.

  4. A minimal set of microservices will take around 6GB to 8GB of memory. If you use ZFS, you also need to budget for the maximum ARC size you configure. This memory overhead should be considered when deciding whether to run OpenStack in your home lab.
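The shelve workflow from item 3 maps to two CLI calls; a sketch using a hypothetical server name:

```shell
# Free the CPU/memory reservation on the compute host (the VM's disk is kept)
openstack server shelve my-vm

# Later: re-schedule it, possibly landing on a different compute host
openstack server unshelve my-vm
```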
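For the ARC limit in item 4, one way to cap it on Linux is a modprobe option; the 8 GiB value here is just an example, expressed in bytes:

```ini
# /etc/modprobe.d/zfs.conf — cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes)
options zfs zfs_arc_max=8589934592
```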

OpenStack docker containers

I run OpenStack from a series of Docker containers. At some point, I plan to open source my configuration. However, in its current state it would likely cause more trouble than it’s worth for others, given the many modifications I’ve made.

These are the containers I’m currently using on my hacking OpenStack server. They should give a decent idea of how much memory the OpenStack microservices need. Note that there is no load, and this was captured a few minutes after bringing the containers up.

CONTAINER ID   NAME                                 CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
7293499a17bb   20231_1_neutron-dhcpagent_1          0.90%     484.9MiB / 125.5GiB   0.38%     0B / 0B           11.2MB / 1.09MB   67
fccd99e1897e   20231_1_neutron-metadataagent_1      0.02%     179.1MiB / 125.5GiB   0.14%     5.05kB / 11.5kB   803kB / 623kB     11
4ba7c8a0a639   20231_1_nova-conductor_1             1.36%     285MiB / 125.5GiB     0.22%     1.06MB / 723kB    578kB / 315kB     8
1787e12dd4c4   20231_1_nova-api_1                   1.28%     398.5MiB / 125.5GiB   0.31%     69.1kB / 52.6kB   17.9MB / 315kB    13
2c5eb3e78d57   20231_1_cinder-api_1                 1.32%     258.1MiB / 125.5GiB   0.20%     39kB / 22.8kB     14.9MB / 324kB    19
ac3c34e53abe   20231_1_neutron-server_1             2.86%     571.4MiB / 125.5GiB   0.44%     8.32MB / 6.24MB   23.2MB / 623kB    15
6241e2e64f92   20231_1_nova-metadata-agent_1        1.31%     184.1MiB / 125.5GiB   0.14%     36.4kB / 47.4kB   5.12MB / 315kB    8
9568ab3353ae   20231_1_horizon_1                    0.10%     144.7MiB / 125.5GiB   0.11%     524kB / 524kB     51.4MB / 3.45MB   363
ec406a510b8f   20231_1_nova-spice_1                 0.03%     119MiB / 125.5GiB     0.09%     1.37kB / 1.25kB   16.4kB / 315kB    18
fac00d4f9440   20231_1_memcached_1                  0.03%     4.945MiB / 125.5GiB   0.00%     527kB / 525kB     733kB / 0B        12
d657a7e13c83   20231_1_db_1                         1.06%     190.3MiB / 125.5GiB   0.15%     7.42MB / 10.7MB   266MB / 140MB     132
64e45fd68f49   20231_1_glance_1                     1.21%     135MiB / 125.5GiB     0.11%     0B / 0B           6MB / 319kB       5
49817a4af06f   20231_1_nova-serial_1                0.03%     122.8MiB / 125.5GiB   0.10%     1.45kB / 1.25kB   16MB / 315kB      18
928a23e36026   20231_1_rabbitmq_1                   0.40%     142MiB / 125.5GiB     0.11%     1.22MB / 1.34MB   44.8MB / 77.8kB   32
6b2e5628d9f2   20231_1_neutron-l3agent_1            1.02%     470.6MiB / 125.5GiB   0.37%     0B / 0B           3.73MB / 938kB    29
3bd3c597b34f   20231_1_nova-compute_1               1.18%     259.5MiB / 125.5GiB   0.20%     0B / 0B           73.1MB / 819kB    48
5fad100d916b   20231_1_cinder-scheduler_1           0.08%     129MiB / 125.5GiB     0.10%     72.6kB / 47.8kB   8.91MB / 324kB    3
4889264fa6ab   20231_1_keystone_1                   0.02%     184.8MiB / 125.5GiB   0.14%     1.2MB / 949kB     45.1MB / 324kB    65
838a1ea502a1   20231_1_neutron-openvswitchagent_1   0.69%     309.9MiB / 125.5GiB   0.24%     0B / 0B           21.1MB / 930kB    8
c38bcce77ccd   20231_1_placement_1                  0.01%     132.7MiB / 125.5GiB   0.10%     162kB / 69.4kB    13.6MB / 8.19kB   23
bd991e71b57a   20231_1_cinder-volume_1              1.28%     182.5MiB / 125.5GiB   0.14%     0B / 0B           9.21MB / 324kB    5
34fab8a71626   20231_1_nova-scheduler_1             5.14%     242.3MiB / 125.5GiB   0.19%     389kB / 119kB     270kB / 315kB     8
c590d43eb934   dns                                  0.00%     7.824MiB / 125.5GiB   0.01%     966B / 1.25kB     13.6MB / 8.19kB   34
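As an aside, the per-container numbers in a `docker stats --no-stream` dump like the one above can be totalled with a small script. A sketch that sums the MEM USAGE column (the function name is mine, not part of any tool):

```python
import re

# Matches the left side of the MEM USAGE column, e.g. "484.9MiB / 125.5GiB".
# The NET I/O and BLOCK I/O columns use kB/MB, so they won't match KiB/MiB/GiB.
MEM_RE = re.compile(r"(\d+(?:\.\d+)?)(KiB|MiB|GiB)\s*/")

UNIT_MIB = {"KiB": 1 / 1024, "MiB": 1.0, "GiB": 1024.0}

def total_mem_mib(stats_output: str) -> float:
    """Sum the MEM USAGE column of `docker stats --no-stream` output, in MiB."""
    total = 0.0
    for line in stats_output.splitlines():
        m = MEM_RE.search(line)
        if m:
            total += float(m.group(1)) * UNIT_MIB[m.group(2)]
    return total
```

Piping the full `docker stats --no-stream` output through this sums everything in one go.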

Adding up all of the memory usage brings me to 5,139MiB, and that does not include the 8GB ZFS ARC max limit I’ve set. I generally allocate about 16GB of memory for my underlying VM server. I will discuss these services in future blog posts, but in summary, I’m running:

  • Neutron - Virtual networking using openvswitch
  • Nova - The VM compute service (the thing that actually runs the qemu processes)
  • Cinder - Block storage, and in my case using ZFS locally
  • Horizon - Web UI that I don’t normally use, though it can be the quickest way to open a local console into a VM
  • Memcached - In memory cache, which may end up going away
  • DB - Persistent configuration for the OpenStack setup using MariaDB
  • Glance - VM image service (upload an image and use it as a template for new VMs)
  • RabbitMQ - Messaging between all of these microservices
  • Keystone - Authentication and authorization
  • Placement - Used to match a Nova compute host with the needs of a new VM
  • DNS - DNS resolution is absolutely required (I’m using Bind9)

There are a great many other OpenStack microservices that provide specialized features, like direct container management.


Comments

You can use your Fediverse (e.g. Mastodon, among many others) account to reply to this post.