
OpenStack Containers
If you’re just trying to play around with OpenStack, definitely look at DevStack first. Before I go into any significant detail, here’s my current OpenStack (2023.1) configuration:
- Docker-Compose and general setup is documented in GitLab
- It will pull a single minimized Docker image (~600MB) from my DockerHub
Why Containers?
I’ve found that packaging my OpenStack configuration inside containers helps organize and revision-control my setup quite well. A number of the containers run privileged or with host networking - if you’re going to use Open vSwitch and QEMU, they have to be privileged. Containerizing the setup does provide a little extra security, but it’s mostly to avoid installing a ton of packages on each host. It also has the added benefit that I can run and develop the OpenStack image inside a VM running on my previous OpenStack revision. I’ve even gone as far as running two VMs of my OpenStack image so I could troubleshoot and configure live VM migration.
Why sooooo complicated?
OpenStack is a real open-source cloud solution designed to support many different companies with tons of hardware. Getting it to run on a single machine requires some trade-offs. Also, if you’re deploying from scratch, there’s a lot of basic cloud architecture that will need to be defined. Take a look at this test script for a simplified configuration. For example, I often run a single machine with just the ‘admin’ user - but OpenStack has a very rich and detailed set of policies that can be customized for deployments with many different users and roles. The test script generates a pretty simple network and shows how to connect the ‘provider’ network (your actual homelab ethernet) to the ‘provider-self’ or ‘self service’ network, which in the single-machine case is just an isolated bridge inside that one machine. I’ve often run multiple VLANs on the provider network and used a firewall to configure rules for what talks to what. If you use more than one compute host, these ‘isolated’ self-service networks connect between machines over VXLAN, and this is handled properly by the underlying OpenStack code. You could also simply define a single provider network and provision VMs on that.
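The shape of that provider/self-service setup looks roughly like the following. This is a hedged sketch, not the test script itself: the network names, CIDRs, and the physical network label are placeholders you would swap for your own homelab values.

```shell
# Hypothetical names and CIDRs - adjust for your own deployment.

# Provider network: maps to the real homelab ethernet segment.
openstack network create --external --share \
  --provider-network-type flat --provider-physical-network provider provider
openstack subnet create --network provider \
  --subnet-range 192.168.1.0/24 \
  --allocation-pool start=192.168.1.100,end=192.168.1.199 \
  --gateway 192.168.1.1 --no-dhcp provider-subnet

# Self-service network: an isolated bridge on one host,
# carried over VXLAN when there is more than one compute host.
openstack network create selfservice
openstack subnet create --network selfservice \
  --subnet-range 10.0.0.0/24 selfservice-subnet

# A router connects the two so self-service VMs can reach the provider net.
openstack router create router
openstack router set --external-gateway provider router
openstack router add subnet router selfservice-subnet
```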
What’s the point?
Here’s one use case that I rather like: automatically generating a new Windows 10 Enterprise Evaluation VM. When you spin up a new one, it starts with the normal 90-day evaluation license. Automate its configuration and you can pretty easily start up a VM, do what you need to, and destroy it.
The general set up is:
- Install a Windows 10 Enterprise Evaluation ISO into a VM. Use ‘Admin’ (or whatever username you use in CloudBase Init later) so you don’t leave around an extra user.
- Make sure RDP is enabled.
- Download and install CloudBase Init. Make sure it ‘generalizes’ and shuts down the VM.
- Upload the disk image to OpenStack using the Glance service.
It’s a good idea to do a few other things, like resizing the disk volume so you end up with a smaller raw image later, but that’s not strictly necessary.
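The upload step in the list above looks roughly like this. The image name, file name, and property are examples I’m assuming for illustration, not the exact invocation:

```shell
# Hypothetical example - adjust the disk format and file to your image.
openstack image create \
  --disk-format qcow2 --container-format bare \
  --file win10-generalized.qcow2 \
  --property os_type=windows \
  win10-eval
```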
Now you can just run a single command to build a VM from that image (for example):
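That single command might look like the following. The image, flavor, network, and key names are placeholders standing in for whatever you defined earlier:

```shell
# Hypothetical names - substitute your own image, flavor, network, and key.
openstack server create \
  --image win10-eval \
  --flavor m1.large \
  --network provider \
  --key-name mykey \
  win10-test
```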
Even though this is a Windows VM, an SSH key is provided. By default, CloudBase Init will create an Admin user with a random password. It encrypts that password with the public SSH key and lets you retrieve and decrypt it using a command like:
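A sketch of that retrieval, using the legacy nova client, which fetches the encrypted password and decrypts it with the matching private key in one step (the server name and key path here are placeholders):

```shell
# Fetch and decrypt the Admin password with the matching SSH private key.
nova get-password win10-test ~/.ssh/id_rsa
```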
Assuming everything is configured correctly, you can then RDP into your new VM.
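From a Linux client, that RDP connection can be made with FreeRDP; the VM address and password here are placeholders, and the decrypted password from the previous step is what you would supply:

```shell
# Hypothetical address/password - use the VM's IP and decrypted password.
xfreerdp /u:Admin /p:'decrypted-password' /v:192.168.1.150
```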
For me, with a bunch of old spinning drives, it takes ~10 min to copy my large Windows image over to the new VM and another ~10 min for the automated Windows setup. Both of those happen automatically during VM creation.
Hack_Char's Blog