January 26, 2015

Using Puppet with WebHelpDesk to Sign Certs, with Docker

In a previous post, I showed how to use Munki with Puppet SSL Client certificates in a Docker image.

In that example, the Puppetmaster image is set to automatically sign all certificate requests. Good for testing, but not a good idea for production use.

Instead, we should look into Puppet policy-based signing to sign requests only based on some credentials or criteria we control. This means that random nodes can’t come along and authenticate to the Puppet master, and it also means that the Puppet admin won’t have to manually sign every node’s certificate request. Manually signing works great for testing, but it quickly spirals out of control when you’re talking about dozens, or hundreds (or thousands) of machines.

Puppet’s policy-based autosigning allows us to execute a script. The exit code of that script determines whether a certificate is signed or not (exit code 0 means we should sign). So we need to write a script that will check something about the client that lets us determine it’s “ours” or “safe,” and sign accordingly – or reject.
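To make that contract concrete, here's a minimal sketch of the interface Puppet expects (illustrative only, not the script we'll end up using – the hardcoded certname list is a hypothetical stand-in for a real check): Puppet invokes the script with the requesting node's certname as the first argument, pipes the CSR to stdin, and reads the exit code as the verdict.

#!/usr/bin/python
# Minimal illustrative sketch of Puppet's policy-based autosign contract.
# The hardcoded certname list below is a hypothetical placeholder for a
# real ownership check.
import sys

certname = sys.argv[1]      # Puppet passes the certname as the first argument
csr_pem = sys.stdin.read()  # the PEM-encoded CSR arrives on stdin

# Exit 0 to sign the request, non-zero to reject it.
sys.exit(0 if certname in ("trusted.example.com",) else 1)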

Well, we have a really easy way to do that – why not look up the client in inventory? We have WebHelpDesk, with its customized Postgres database, which can track inventory for us. If we’re using WebHelpDesk for inventory (as I am), then an autosign script that checks the WHD inventory for ownership would be an effective way to screen cert requests.

One of WebHelpDesk’s best features, in my opinion, is its REST API, which allows us to make requests from WebHelpDesk’s backend in a more automated fashion than via the web interface. Using the REST API, we can develop scripts that will manage information for us – such as the one I wrote, WHD-CLI.

I’ve even made a separate Docker container for it (which is admittedly better documented than the original project). We won’t actually use that container separately for this purpose, though: there’s no way to get Puppet to use an autosign script that isn’t installed locally, so having the script live in a separate Docker container doesn’t help us.

So, we have WebHelpDesk, which has inventory for our machines. We have a script, WHDCLI, which allows us to query WebHelpDesk for information about devices. We have the Puppetmaster container, which is running Puppet. Let’s combine them!

Building Puppetmaster with WHD-CLI installed:

The repo for this project is at https://github.com/macadmins/puppetmaster-whdcli. Start with the Dockerfile:

FROM macadmins/puppetmaster

MAINTAINER [email protected]

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz
WORKDIR /home/requests
RUN python /home/requests/setup.py install
ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install
ADD puppet.conf /etc/puppet/puppet.conf
ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
ADD check_csr.py /etc/puppet/check_csr.py
RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

FROM macadmins/puppetmaster
Since we have a nice Puppet master container already, we can use that as a baseline to add our WHD-CLI scripts onto.

RUN yum install -y tar python-setuptools && yum clean all
ADD https://github.com/kennethreitz/requests/tarball/master /home/requests/master.tar.gz
RUN tar -zxvf /home/requests/master.tar.gz --strip-components=1 -C /home/requests && rm -f /home/requests/master.tar.gz

Use ADD to download the Requests project. Requests is an awesome Python library for handling HTTP/S requests and connections, much more robust and usable than urllib2 or urllib3. Unfortunately, it’s not part of the standard library, so we’ll need to download a copy of the module in tarball form, then extract and install it ourselves.

WORKDIR /home/requests
The WORKDIR directive changes the working directory to /home/requests for the instructions that follow. This is equivalent to doing cd /home/requests.

RUN python /home/requests/setup.py install
Now we use the Python setuptools to install Requests so it’s available system-wide, in the default Python path.
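Once a container built from this image is running, a quick sanity check that the module landed in the default Python path looks like this (optional, but handy):

docker exec puppetmaster python -c "import requests; print requests.__version__"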

ADD https://github.com/nmcspadden/WHD-CLI/tarball/master /home/whdcli/master.tar.gz
RUN tar -zxvf /home/whdcli/master.tar.gz --strip-components=1 -C /home/whdcli && rm /home/whdcli/master.tar.gz
WORKDIR /home/whdcli
RUN python /home/whdcli/setup.py install

The same thing happens here with WHD-CLI – download and extract the tarball, change the working directory, and install the package.

ADD puppet.conf /etc/puppet/puppet.conf
In the Puppetmaster image, we already have a Puppet configuration file – but as I documented previously, it’s set to automatically sign all cert requests. Since we’re changing the behavior of the Puppet master, we need to change the configuration file to match our goals.

Here’s what the new puppet.conf looks like:

[agent]  
    certname        = puppetmaster  
    pluginsync      = true  
  
[master]  
    certname        = puppet  
    confdir     = /opt/puppet  
    vardir      = /opt/varpuppet/lib/puppet/  
    basemodulepath  = $confdir/site-modules:$confdir/modules:/usr/share/puppet/modules  
    factpath        = $confdir/facts:/var/lib/puppet/lib/facter:/var/lib/puppet/facts  
    autosign        = $confdir/check_csr.py  
    hiera_config    = $confdir/hiera.yaml  
    rest_authconfig = $confdir/auth.conf  
    ssldir          = $vardir/ssl  
    csr_attributes  = $confdir/csr_attributes.yaml  

The major change here is that the autosign directive is no longer set to “true.” Now it’s set to $confdir/check_csr.py, a Python script that will be used to determine whether or not a certificate request gets signed. Note also the csr_attributes = $confdir/csr_attributes.yaml directive – that’ll come into play in the script as well.
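As a quick check after the image is built and running, you can ask Puppet what it thinks the autosign setting is (the --configprint option is a long-standing Puppet flag); it should echo back the path to check_csr.py rather than true:

docker exec puppetmaster puppet master --configprint autosign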

ADD com.github.nmcspadden.whd-cli.plist /home/whdcli/com.github.nmcspadden.whd-cli.plist
Add in a default WHD-CLI configuration plist. This will be used by WHD-CLI to get API access to WebHelpDesk.
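For reference, the file is a standard XML plist carrying the WebHelpDesk URL and API key. Here’s a rough sketch of its shape – the apikey key is referenced later in this post, but the whdurl key name is my illustrative assumption, so check the copy of the file in the repo for the exact keys (the whd hostname and 8081 port match the docker --link and -p values used later):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>apikey</key>
    <string>YOUR-WHD-API-KEY</string>
    <key>whdurl</key>
    <string>http://whd:8081</string>
</dict>
</plist>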

ADD check_csr.py /etc/puppet/check_csr.py
Here’s the actual script that will be run whenever a certificate request is received on the Puppet master. An in-depth look at it comes later.

RUN touch /var/log/check_csr.out
RUN chown puppet:puppet /var/log/check_csr.out

As we’ll see later in-depth, the script will log its results to a logfile in /var/log/check_csr.out. To prevent possible permissions and access issues, it’s best to create that file first, and make sure it has permissions where the Puppet master can read and write to it.

RUN cp -Rfv /etc/puppet/ /opt/
RUN cp -Rfv /var/lib/puppet/ /opt/varpuppet/lib/

These last two commands are copies of those from the original Puppetmaster image. Since we’re adding in new stuff to /etc/puppet, it’s important for us to make sure all the appropriate files end up in the right place.

As usual, you can either build this image yourself from the source:
docker build -t name/puppetmaster-whdcli .
Or you can pull from the Docker registry:
docker pull macadmins/puppetmaster-whdcli

Crafting Custom CSR Attributes:

The goal of an autosign script is to take information from the client machines (the Puppet nodes) and determine whether we can sign it based on some criteria. In this use case, we want to check whether the client nodes are devices we actually own, or know about in some way. We have WebHelpDesk as an asset-tracking system that contains information about our assets (such as serial number, MAC address, etc.), and we already have a script that allows us to query WHD for such information.

So our autosigning script, check_csr.py, needs to do all of these things. According to Puppet documentation, the autosigning script needs to return 0 for a successful signing request, and non-zero for a rejection. A logical first choice would be to ask the client for its serial number, and then look up the serial number to see if the machine exists in inventory, and exit 0 if it does – otherwise reject the request.

The first question is, how do we get information from the client? This is where the csr_attributes.yaml file comes into play. See the Puppet documentation on it for full details.

In a nutshell, the csr_attributes.yaml file allows us to specify information from the node that goes into the CSR (certificate signing request), which can then be extracted by the autosigning script and parsed for relevance.

Specifically, we can use the CSR attributes to pull two facts: the serial number, and whether the machine is physical, virtual, or a Docker container.

This is the csr_attributes.yaml file that will be installed on clients:

---  
extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: mySerialNumber  
  1.3.6.1.4.1.34380.1.2.1.2: facter_virtual

The two keys under extension_requests are special Puppet OIDs that allow us to add arbitrary attributes to the CSR – essentially they’re labels for what kind of data can be put into the CSR.

Here’s an example of what it looks like in a VMware Fusion VM, after installation:

sh-3.2# cat /etc/puppet/csr_attributes.yaml

extension_requests:  
  1.3.6.1.4.1.34380.1.2.1.1: VMYNypomQeS5  
  1.3.6.1.4.1.34380.1.2.1.2: vmware 

The serial number has been replaced with what the VM reports, and the “virtual” fact is replaced by the word “vmware”, indicating that Facter recognizes this as a virtual machine from VMware. This will be important in our script.

For convenience, I have a GitHub repo for installing these attributes (built with Whitebox Packages) available here. A release package is available for easy download.
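If you’d rather roll your own package, a postinstall along these lines would produce the same file. This is a hypothetical sketch, not the packaged installer’s actual script – in particular, the sp_serial_number and virtual fact names are assumptions about your Facter version and platform, so verify them with facter first:

#!/usr/bin/python
# Hypothetical postinstall sketch: write /etc/puppet/csr_attributes.yaml
# from Facter facts on the client. The release package linked above may
# do this differently.
import subprocess

def fact(name):
    # Ask Facter for a single fact's value.
    return subprocess.check_output(['facter', name]).strip()

YAML_TEMPLATE = """---
extension_requests:
  1.3.6.1.4.1.34380.1.2.1.1: %s
  1.3.6.1.4.1.34380.1.2.1.2: %s
"""

with open('/etc/puppet/csr_attributes.yaml', 'w') as f:
    f.write(YAML_TEMPLATE % (fact('sp_serial_number'), fact('virtual')))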

The Autosigning Script:

The autosign script, when called from the Puppetmaster, is given two things: the hostname of the client requesting a certificate is passed as an argument to the script, and the contents of the CSR file itself are passed to the script via stdin. So our script needs to parse an argument, and then read what it needs from stdin.

The full script can be found on GitHub. Here’s a pared-down version of the script, with many of the logging statements removed for easier blog-ability:

#!/usr/bin/python

import sys
import whdcli
import logging
import subprocess

LOG_FILENAME = '/var/log/check_csr.out'

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info('Start script')

hostname = sys.argv[1]

if hostname == "puppet":
    logger.info("It's the puppetmaster, of course we approve it.")
    sys.exit(0)

certreq = sys.stdin.read()

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

lineList = output.splitlines()

strippedLineList = [line.lstrip() for line in lineList]
strippedLineList2 = [line.rstrip() for line in strippedLineList]

try:
    trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
except ValueError:
    logger.info("No serial number in CSR. Rejecting CSR.")
    sys.exit(1)
    
serial_number = strippedLineList2[trusted_attribute1+1]
logger.info("Serial number: %s", serial_number)     

try:
    trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
except ValueError:
    logger.info("No virtual fact in CSR. Rejecting CSR.")
    sys.exit(1)

physical_fact = strippedLineList2[trusted_attribute2+1]

if physical_fact == "virtual" or physical_fact == "vmware":
    logger.info("Virtual machine gets autosigned.")
    sys.exit(0)
elif physical_fact == "docker":
    logger.info("Docker container gets autosigned.")
    sys.exit(0)

# Now we get actual work done
whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)
if not w.getAssetBySerial(serial_number):
    logger.info("Serial number not found in inventory.")
    sys.exit(1)

logger.info("Found serial number in inventory. Approving.")
sys.exit(0)

Let’s take a look at some of the notable parts of the script:

logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO)
This sets the basic log level. This script has both INFO and DEBUG logging, so if you’re trying to diagnose a problem or get more information from the process, you could change level=logging.INFO to level=logging.DEBUG. It’s much noisier, so best for testing and probably not ideal for production.

Migrating the logging to standard output, so that you can use docker logs, is a good candidate for future optimization.
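A minimal version of that change is to point basicConfig at a stream instead of a filename (whether docker logs actually captures it depends on how Puppet handles the autosign script’s output, so treat this as a starting point rather than a guarantee):

logging.basicConfig(stream=sys.stdout, level=logging.INFO)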

hostname = sys.argv[1]
The hostname of the client is the only command-line argument passed to the script. In a default OS X test VM, this would be “mac.local”, for example.

certreq = sys.stdin.read()
The actual contents of the CSR get passed in via stdin, so we need to read it and store it in a variable.

cmd = ['/usr/bin/openssl', 'req', '-noout', '-text']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = proc.communicate(certreq)

Here, we make an outside call to openssl. Puppet documentation shows that we can manually parse the CSR for the custom attributes using OpenSSL, so we’re going to do just that in a subprocess. We pass the contents of certreq to the subprocess’s stdin, so in essence we are doing this, with the CSR text supplied on stdin rather than via -in:
/usr/bin/openssl req -noout -text
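If you ever want to inspect a saved CSR by hand, the file-based equivalent works too – for example, against one of the pending request files we’ll see in the Troubleshooting section:

/usr/bin/openssl req -noout -text -in /opt/varpuppet/lib/puppet/ssl/ca/requests/testvm.local.pem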

Once we do some text parsing and line stripping (since the CSR is very noisy about linebreaks), we can pull out the first custom attribute, the serial number: trusted_attribute1 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.1:")
If there’s no line in the CSR containing that data, the CSR didn’t have our csr_attributes.yaml installed – which means it’s almost certainly not a machine we recognize (or at least not in a desired state, and should be addressed). Thus, reject.

trusted_attribute2 = strippedLineList2.index("1.3.6.1.4.1.34380.1.2.1.2:")
Our second attribute is the Facter virtual fact. If we don’t find that either, then we still have an incorrect CSR, and thus it gets rejected.

if physical_fact == "virtual" or physical_fact == "vmware":
This was mostly for my own convenience, but I decided it was safe to Puppetize any virtual machine, such as a VMware Fusion VM (or ESXi, or whatever). As VMs tend to be transient, I didn’t want to spend time approving these certs constantly as I spun test VMs up and down. Thus, they get autosigned.

elif physical_fact == "docker":
If it’s a Docker container getting Puppetized, autosign as well, for mostly the same reasons as above.

Once the CSR is parsed for its contents and some basic sanity checks are put into place, we can now actually talk to WebHelpDesk:

whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
w = whdcli.WHD(whd_prefs, None, None, False)

Parse the .plist we passed into the Puppetmaster image earlier for the API key and URL of WebHelpDesk, and load up the API. Note the False at the end of the WHD() call – that specifies that we don’t want verbose logging. If you’re trying to debug behavior and want to see all the details in the log file, specify True here (or eliminate the extra arguments and just call whdcli.WHD(whd_prefs), since the other three arguments are optional).

if not w.getAssetBySerial(serial_number):
This is the real meat, right here – w.getAssetBySerial() is the function call that checks to see if the serial number exists in WebHelpDesk’s asset inventory. If this serial number isn’t found, the function returns False, and thus we reject the CSR by exiting with status code 1.

Putting It All Together:

So, we’ve got WebHelpDesk in a Docker image, using our customized Postgres. We’ve got our new-and-improved Puppetmaster with WHD-CLI. We’ve got our client configuration install package. We have all the pieces to make this work; let’s assemble them into a nice machine:

  1. First, run the data container for the Postgres database for WHD:
    docker run -d --name whd-db-data --entrypoint /bin/echo macadmins/postgres-whd Data-only container for postgres-whd

  2. Run the Postgres database for WHD:
    docker run -d --name postgres-whd --volumes-from whd-db-data -e DB_NAME=whd -e DB_USER=whddbadmin -e DB_PASS=password macadmins/postgres

  3. Run WebHelpDesk:
    docker run -d -p 8081:8081 --link postgres-whd:db --name whd macadmins/whd

  4. Configure WebHelpDesk via the browser to use the external Postgres database (see the penultimate section on Running WebHelpDesk in Docker for details).

  5. Once WebHelpDesk is set up and you’re logged in, you need to generate an API key. Go to Setup -> Techs -> My Account -> Edit -> API Key: “Generate” -> Save.

  6. Copy and paste the API key into com.github.nmcspadden.whd-cli.plist as the value for the “apikey” key. If you haven’t cloned the repo for this project, you can obtain the file itself:
    curl -O https://raw.githubusercontent.com/macadmins/puppetmaster-whdcli/master/com.github.nmcspadden.whd-cli.plist

  7. Create a data-only container for Puppetmaster-WHDCLI:
    docker run -d --name puppet-data --entrypoint /bin/echo macadmins/puppetmaster-whdcli Data-only container for puppetmaster

  8. Run Puppetmaster-WHDCLI. Note that I’m passing in the absolute path to my whd-cli.plist file, so make sure you alter the path to match what’s on your file system:
    docker run -d --name puppetmaster -h puppet -p 8140:8140 --volumes-from puppet-data --link whd:whd -v /home/nmcspadden/com.github.nmcspadden.whd-cli.plist:/home/whdcli/com.github.nmcspadden.whd-cli.plist macadmins/puppetmaster-whdcli

  9. Complete the Puppetmaster setup:
    docker exec puppetmaster cp -Rf /etc/puppet /opt/

  10. Configure a client:

    1. Install Facter, Hiera, and Puppet on an OS X VM client (or any client, really – but I tested this on a 10.10.1 OS X VM).
    2. Install the CSRAttributes.pkg on the client.
    3. If your Puppetmaster is not available in the client’s DNS, you’ll need to add the IP address of your Docker host to /etc/hosts.
    4. Open a root shell (it’s important to run the Puppet agent as root for this test):
      sudo su
    5. Run the Puppet agent as root:
      # puppet agent --test
    6. The VM should generate a certificate signing request and send it to the Puppet master, which parses the CSR, notices that it’s a virtual machine, autosigns it, and sends the cert back.
  11. You can check the autosign script’s log file on the Puppetmaster to see what it did:
    docker exec puppetmaster tail -n 50 /var/log/check_csr.out

Here’s sample output from a new OS X VM:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: vmware
INFO:__main__:Virtual machine gets autosigned.

Here’s sample output from that same VM, but I manually changed /etc/puppet/csr_attributes.yaml so that the virtual fact is “physical”:
INFO:__main__:Start script
INFO:__main__:Hostname: testvm.local
INFO:__main__:Serial number: VM6TP23ntoj2
INFO:__main__:Virtual fact: physical
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): whd
INFO:__main__:Serial number not found in inventory.

Try this on different kinds of clients: Docker containers (a good candidate is the Munki-Puppet container, which needs to run Puppet to get SSL certs), physical machines, other platforms. Test it on a machine that isn’t in WebHelpDesk’s inventory and watch it get rejected from autosigning.

Troubleshooting:

Manually run the script:

If you get a CSR that gets rejected and you’re not sure why, you can manually run the check_csr.py script on the rejected (or rather, disapproved) CSR .pem file. Assuming the hostname is “testvm.local”, run docker exec -it puppetmaster /bin/bash to open a Bash shell on the container, then:
cat /opt/varpuppet/lib/puppet/ssl/ca/requests/testvm.local.pem | /opt/puppet/check_csr.py "testvm.local"

Then, you can check the logs to see what the output of the script is. Assuming you’re still in the Bash shell on the container: tail -n 50 /var/log/check_csr.out

Test WHDCLI:

If you’re running into unexpected failures with the autosigning scripts, or you’re not getting the results you expect, you can try manually running the WHDCLI to see where the problem might be:
docker exec -it puppetmaster /usr/bin/python
Once you’re in the Python interpreter, load up WHD-CLI:

>>> import whdcli
>>> whd_prefs = whdcli.WHDPrefs("/home/whdcli/com.github.nmcspadden.whd-cli.plist")
>>> w = whdcli.WHD(whd_prefs)

If you get a traceback here, it’ll tell you the reason why it failed – perhaps a bad URL, bad API key, or some other HTTP authentication or access failure. Embarrassingly, in my first test, I forgot to Save in WebHelpDesk after generating an API key, and if you don’t hit the Save button, that API key disappears and never gets registered to your WHD account.

Assuming that succeeded, try doing a manual serial lookup, replacing it with an actual serial number you’ve entered into WHD:

>>> w.getAssetBySerial("serial")

The response here will tell you what to expect – did it find a serial number? It’ll give you asset details. Didn’t find a match? The response is just False.

Conclusions

Important Note: Although this post uses Docker as the basis for all these tools, you can use the WHD-CLI script with a non-Docker Puppet master to accomplish the same thing. You’d just need to change the WHD URL in the whd-cli.plist file.

One of the best aspects of Docker is that you can take individual pieces, these separate containers, and combine them into amazing creations. Just like LEGO or Minecraft, you take small building blocks – a Postgres database, a basic Nginx server, a Tomcat server – and then you add features. You add parts you need.

Then you take these more complex pieces and link them together. You start seeing information flow between them, and seeing interactions that were previously much more difficult to set up in a non-Docker environment.

In this case, we took separate pieces – WebHelpDesk, its database, and the Puppetmaster – and combined them for great effect. Combine this again with Munki-Puppet, and now you’ve got a secure Munki SSL environment with your carefully curated Puppet signing policies. There are more pieces we can combine later, too – in future blog posts.

Nick McSpadden

I'm Client Systems Manager for Schools of the Sacred Heart, San Francisco. I'm in charge of all OS X and iOS deployments to our faculty, staff, and students.
