Why you should automate your Puppet code testing with Litmus and Onceover

Puppet shifted traditional infrastructure management towards software development, because we are developing Config as Code, even if it’s declarative.
This shift brings new challenges. A Puppet codebase can become too big, complex, and monolithic. We have to test our code, but how should we do it, and when?
We’ve seen many cases where Puppet modules have no unit tests (RSpec) at all, and where Catalog compilation tests are executed manually, which can take hours or even days depending on the size of the codebase.

What about Continuous Integration?
At first sight, CI/CD might not seem relevant for Puppet code development: we’re not ‘building’ artifacts per se, and Puppet is about run-time configuration management, not build-time.
But if you look closer and apply the principles of CI/CD to Puppet code development, there are plenty of benefits to enjoy.
The main advantage is testing, or even better, automated testing! For both Puppet modules and Puppet Control repositories, we can use a pipeline that does all kinds of testing before the code is released for deployment.
Test coverage for Puppet code
Before we dive in, let’s consider the two areas for testing:
Puppet modules, which are used in a Roles and Profiles hierarchy
The Puppet Control Repository, which contains the Hieradata, Roles and Profiles
It is our opinion that testing a complex codebase of Hieradata, Roles and Profiles against a testing harness across multiple operating systems doesn’t scale and eventually becomes too expensive in resources and time. And even then, we can’t be a hundred percent sure.
To determine ‘what, when, and how to test’ we can use the test pyramid with white-box tests at the bottom and black-box tests at the top.

As a refresher:
White-box testing: tests the internal structure and workings of a component, as opposed to its functionality. The code is visible to the tester.
Black-box testing: tests the functionality without looking at the internal code structure. The code is invisible to the tester.
We’ve created some example code for Puppet module and Puppet Control Repository testing, which can be found here.
The pipelines are implemented with GitLab CI, which has an easy Pipeline as Code DSL that uses a single YAML file, .gitlab-ci.yml.
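To give an idea of its shape, here is a minimal skeleton for a Puppet testing pipeline (the stage names are our choice, not copied from the example repos); the jobs sketched in the sections below each plug into one of these stages:

# .gitlab-ci.yml: skeleton of a Puppet testing pipeline
stages:
  - syntax
  - unit
  - acceptance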
Now let’s have some fun with Puppet code testing.

Puppet module testing
Our code example has a notify class that sends an arbitrary message to the Puppet agent run-time log, which we’ll use for testing.

By using the Puppet Development Kit, we can create a boilerplate repository, which has a pipeline template and a predefined set of Puppet testing tools (https://github.com/puppetlabs/puppet-module-gems/blob/main/config/dependencies.yml).
The first step is Static Analysis: syntax checking and linting with several testing tools, for example puppet-syntax, metadata-json-lint and puppet-lint.

This is the first line of defence: if the syntax is incorrect, we should not continue.
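In the pipeline, this boils down to running the rake tasks those gems provide. A minimal sketch of such a job (the Ruby image and the job name are our assumptions):

lint:
  stage: syntax
  image: ruby:2.7
  script:
    - bundle install --jobs "$(nproc)"
    # rake tasks provided by puppet-syntax, puppet-lint and metadata-json-lint
    - bundle exec rake syntax lint metadata_lint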
Next, we perform Unit testing with rspec-puppet, using parallel_tests for parallel execution. In our test we expect the catalog to contain our notify resource, whose message will show up in the agent run-time log.
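A minimal rspec-puppet spec for such a notify class could look like this (the class and message names are placeholders, not the exact ones from the example repo):

# spec/classes/notify_spec.rb
require 'spec_helper'

describe 'example_module' do
  # White-box test: only compiles the catalog, no real target node required
  it { is_expected.to compile.with_all_deps }

  # The catalog should contain a Notify resource; at run-time the
  # agent writes its message to the agent log
  it { is_expected.to contain_notify('greeting').with_message(%r{hello}) }
end

Running bundle exec rake parallel_spec then executes these specs in parallel via parallel_tests.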
With that, our Puppet module has passed the Unit test. The great thing about RSpec is that it doesn’t require actual target nodes, because it’s a white-box test.
Lastly, we want to test the Puppet module functionally against a target system. For this we use a great tool called Litmus. Litmus facilitates parallel test runs and running tests in isolation. Each step is standalone, allowing other operations between test runs, such as debugging or configuration updates on the test targets.
Because this is a black-box test, we’re using a container side-car in the GitLab pipeline: a ‘service’ that runs a Docker-in-Docker container, which is accessible from our test job container.
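A sketch of such an acceptance job, built on the rake tasks Litmus provides (the Ruby image and the Docker variables are our assumptions; the ‘el’ provision list is defined below):

acceptance:
  stage: acceptance
  image: ruby:2.7
  services:
    - docker:dind                 # the Docker-in-Docker side-car
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - bundle install --jobs "$(nproc)"
    - bundle exec rake 'litmus:provision_list[el]'    # spin up the test targets
    - bundle exec rake 'litmus:install_agent'         # install the Puppet agent
    - bundle exec rake 'litmus:install_module'        # install the module under test
    - bundle exec rake 'litmus:acceptance:parallel'   # run the acceptance specs
    - bundle exec rake 'litmus:tear_down'             # clean up the targets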

In our example we’d like to perform the acceptance test against a CentOS 7 and a CentOS 8 system, defined in the following provision list (provision.yaml):
---
el:
  provisioner: docker
  images:
    - litmusimage/centos:7
    - litmusimage/centos:8

The job has successfully tested the Puppet module against a CentOS 7 and a CentOS 8 system.
All tests combined should give us enough confidence in the Puppet Module, which is a basic component used in a Roles and Profiles hierarchy.
Now let’s have a look at the Puppet Control Repository.
Puppet Control Repository testing
Our Puppet Control Repo example contains some example Roles and Profiles which are tested and released in the CI pipeline.
As said before, it’s too complex to functionally test the many combinations of Roles, Profiles and target node operating systems. Instead, we perform only Static Analysis and Unit tests on the Puppet Control Repository codebase.
Onceover is a great way to test Control Repo code. It’s well maintained by its developers and recommended by Puppet.
Static Analysis
As always, let’s start with Static Analysis in the pipeline. Onceover provides the plugin onceover-codequality, which checks syntax and lint rules.
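In the pipeline this becomes a one-line job (a sketch; the job and stage names are illustrative):

codequality:
  stage: syntax
  script:
    - bundle exec onceover run codequality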

Testing the Catalog compilation
The main purpose of Onceover is its spec-testing capability: we define the spec tests in onceover.yaml and use factsets as testing fixtures. Based on the test configuration in onceover.yaml, Onceover tries to compile the catalog for each class/node combination before the code is released to the Puppet Master. This means we catch our coding errors earlier in the process.
# onceover.yaml
# Classes to be tested
classes:
  - role::augeastest
  - role::database_server
  - role::webserver
  - role::goldload_server
  - role::loadbalancer
  - role::example
# Nodes to test the classes on; this refers to a 'factset' or 'nodeset'
nodes:
  - RHEL-7.4
  - CentOS-7.0-64
...
Let’s run the test:
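In the pipeline, that’s a single Onceover command (the job name is illustrative):

catalog-compile:
  stage: unit
  script:
    - bundle exec onceover run spec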

With Onceover, we don’t have to test in the field against real nodes.
Deploy our code change with r10k
Lastly, after testing and peer review, we’d like to deploy our code change using r10k in the pipeline. This is a theoretical concept for now, because it’s not yet implemented in our example repo.

The idea is that this release should not be fire-and-forget: the r10k deploy job must end successfully for the pipeline to succeed.
In the deploy job we envision a micro-service that is responsible for r10k deployment, exposing an API that can POST a new deployment and also GET the status of that deployment.
This way we can wait for the r10k deployment to finish, so that the job can end successfully.
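As an illustration only, such a deploy job could look like the sketch below. The service URL, endpoints and JSON fields are entirely hypothetical, since the micro-service doesn’t exist yet, and the job image is assumed to have curl and jq available:

deploy:
  stage: deploy
  script:
    # POST a new deployment; the hypothetical service runs
    # 'r10k deploy environment' for us and returns a deployment id
    - |
      DEPLOY_ID=$(curl -sf -X POST "https://r10k-deployer.example.com/deployments" \
        -d '{"environment":"production"}' | jq -r '.id')
    # GET the status until the deployment is finished, so the job
    # (and with it the pipeline) only succeeds when r10k did
    - |
      until curl -sf "https://r10k-deployer.example.com/deployments/${DEPLOY_ID}" \
            | jq -e '.status == "finished"' > /dev/null; do
        sleep 10
      done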
TL;DR
By implementing static analysis, unit and acceptance tests for Puppet modules, and static analysis and unit tests for the Puppet Control repository, all in a CI/CD pipeline (in our case GitLab), we increase our confidence in code changes without manual testing.

