Collaboratively work on a Puppet module. Vagrant + r10k.

Once a puppet module is in the forge it is quite easy to share it with other people so they can try it out. But until then, it can be somewhat cumbersome. There are several reasons why releasing a module to the forge can be delayed :

  • a dependent module has not been officially released yet
  • a pull request is pending on another module one is relying on
  • a pull request has been merged but one needs to wait for the maintainer to actually cut a new release

All those reasons make collaborating on a puppet module a little bit harder than a simple puppet module install.

In order to tackle this issue and make collaboration on (and demos of) a puppet module easier, let's see how this problem can be solved painlessly and in a version-controlled way using Vagrant and r10k.


r10k

r10k is a project that allows one to specify one's puppet module dependencies in a file (Puppetfile) and make a proper deployment on a puppet master or in a local folder.

Those dependencies can be expressed either as a forge puppet module version number or as a git repository. When specifying git, the checkout can be pinned to master, a specific branch, a specific commit, etc…

mod 'puppetlabs/ntp', '3.0.3'

mod 'stdlib',
:git => 'git://'
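
For instance, to follow work that has not been released upstream yet, the git source can be pinned explicitly. A minimal sketch of such a Puppetfile entry (the repository URL, module name and branch name are placeholders, not real endpoints):

mod 'mymodule',
  :git => 'git://github.com/someuser/puppet-mymodule.git',
  :ref => 'my-feature-branch'   # a branch, tag or commit SHA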

Vagrant (provisioner)

I am sure you’ve heard about vagrant at least once. It’s a great project that makes collaboration easier by scripting the boot + provisioning of a VM. No more excuses like “it works on my machine”.
By simply using the same Vagrantfile and the same base box, two users are sure to have the same result. One of the great features of vagrant is provisioning on VM creation.
This is the feature that will be used here. To make the point, the following Vagrantfile will be used :


Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "MYBASEBOX"
  config.vm.provision "puppet" do |puppet|
    puppet.module_path = 'modules'
  end
end


By default, vagrant will look for a manifest to run in ./manifests/default.pp, hence it is not necessary to specify it here since we will be dropping the file in the correct path.
With the aforementioned Vagrantfile, vagrant will boot a VM using the MYBASEBOX box and apply the manifests/default.pp manifest, passing the modules/ folder located in the same directory as your Vagrantfile as the --modulepath option to puppet (a.k.a. “the glue”).

The script, whatever you want to call it, will simply create the appropriate folders and run the r10k install command to retrieve the modules specified in the Puppetfile.
It will then copy an example manifest into manifests/default.pp for vagrant to run. This is what a basic version of this script could look like :


#!/bin/bash
# Install r10k if it is not already present
if ! gem list r10k | grep -q r10k; then
  gem install r10k
fi
mkdir -p modules manifests
# Deploy the modules listed in the Puppetfile into modules/
PUPPETFILE=./Puppetfile PUPPETFILE_DIR=modules r10k --verbose 3 puppetfile install
cp modules/path/to/example.pp manifests/default.pp

It is up to you to customize it to your needs.
In this example a file called example.pp was located at the root of the module, but you may prefer to keep it in an examples folder; simply adapt the script accordingly.

Show time

Long story short, simply run the following command and see the magic happen

vagrant up


By keeping the Puppetfile and the script in a git repository, one can easily and repeatedly share the progress on a given puppet module at any time, without having to wait for an upstream release. QED.


Enable network namespaces in CentOS 6.4

By default, CentOS 6.4 does not support network namespaces. If one wants to test the new virtualization platforms (Docker, OpenStack & co…) on a CentOS server, not all features will be available.
With OpenStack for example, Neutron won’t work as expected, since it actually needs network namespaces to create networks.

Fortunately, RedHat – through RDO – provides a kernel that gets this feature backported.

So, before updating the kernel, if one runs :

#> ip netns list

s/he will be presented with the following error message : Object “netns” is unknown, try “ip help”.

The following steps need to be performed to install the new kernel and enable the network namespace feature :

#> yum install -y
#> yum install kernel iproute
#> reboot

And that’s it. Really.

Now one can run

#> ip netns add spredzy
#> ip netns list

spredzy should get displayed.

If everything is working, one should have the following kernel and iproute packages installed :


Note : the kernel package release string carries an “openstack” mention, and the iproute one a “netns” mention.
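
A quick way to check, sketched below (exact version strings will vary) :

#> rpm -qa 'kernel*' 'iproute*'

The kernel package release string should carry the openstack mention and the iproute one the netns mention.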

The Foreman PXE provisioning with Libvirt

More than just a Puppet management interface, The Foreman can handle the whole lifecycle of servers, from their creation and provisioning (PXE + kickstart/preseed) to their management (puppet). Today’s blog post will highlight how to use the provisioning feature of The Foreman with Libvirt’s DHCP server (dnsmasq), for local testing purposes.

Prerequisites

  • An instance of a VM running The Foreman on libvirt; for this post version 1.3.0 of The Foreman is used, and CentOS 6.4 will be deployed.

Create the Operating System (The Foreman)

The Operating System

At first, simply fill in the first four fields and click Submit. We will get back to it at a later point.

Path : More -> Provisioning -> Operating Systems -> New Operating System

Edit OS

The Architecture

Add an architecture one will be supporting for a set of OSes.

Path : More -> Provisioning -> Architectures -> New Architecture

Edit Architecture

The Installation Media

In our case, the CentOS installation media already exists; one still has to click on CentOS and specify RedHat as the Operating System family.

If you have a local mirror of the CentOS repositories you can simply make the path point to it; installation will be much faster.

Path : More -> Provisioning -> Installation Media

Edit Installation Media

The Partition Table

A RedHat default partition table is already present; for the purpose of the demo we will be using it, but you might want to create your own. Do not forget to specify the Operating System family.

Path : More -> Provisioning -> Partition Tables

Edit Partition Tables

The Templates

The provisioning template section is where one defines one’s kickstart/preseed, PXE, gPXE, etc… scripts.

One can define snippets that can be embedded within scripts.

For the purpose of the demo we will be using two pre-existing scripts :

  • Kickstart Default PXELinux (PXELinux)
  • Kickstart Default (provision)

Once one clicks on a template, one needs to go to the Association tab on the presented page to associate it with the proper OS. Here it needs to be done twice: for the Kickstart Default PXELinux and for the Kickstart Default scripts.

Path : More -> Provisioning -> Provisioning Templates

Edit Provisioning Template

The Operating System

And back to the Operating System to bind it all together.

Path : More -> Provisioning -> Operating Systems -> CentOS 6.4

First you should be presented with the following page; pick the right options (Architecture, Partition Tables, Installation Media) for your OS.

Edit OS – OS

Now go to the Templates tab and associate the templates accordingly.

Edit OS – Templates

You can now save the OS.

Create the domain (The Foreman)

Nothing fancy here, simply fill in what is prompted. In the current scenario we don’t use The Foreman as a DNS.

Path : More -> Provisioning -> Domains -> New Domain

Edit Domain

Create the Subnet (The Foreman)

Here the Network Address is the one from your libvirt dnsmasq configuration. Normally you can guess it from a simple ifconfig eth0; otherwise, on the host, run virsh net-dumpxml default, assuming you run the default network. The same applies to the Network Mask.
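
For reference, the check mentioned above, run on the libvirt host, would look like this (assuming the default network) :

virsh net-dumpxml default

The <ip address='…' netmask='…'> element of the output carries the two values The Foreman asks for.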

Select the appropriate domain (cf. Create the Domain) and then, most importantly, make sure the smart proxy name is selected in the TFTP Proxy box.

Path : More -> Provisioning -> Subnets -> New Subnet

Edit Subnet

Create the VM with PXE boot (Libvirt)

Create the New VM with a PXE boot

node1 – PXE

For now you can stop the VM since the DHCP server is not configured yet. Please note the MAC address of the virtual machine, it will be needed in a later section.

Configure dnsmasq for IP attribution and PXE boot (Libvirt)

Note your foreman VM and your node1 VM MAC addresses.

Stop your foreman VM now.

1. Destroy the network

virsh net-destroy default

2. Edit the current network to assign static IPs

virsh net-edit default


Change the <dhcp> section from :

<ip address='' netmask=''>
  <dhcp>
    <range start='' end='' />
  </dhcp>
</ip>

to :

<ip address='' netmask=''>
  <dhcp>
    <range start='' end='' />
    <host mac='52:54:00:CB:C3:C6' name='foreman' ip='' />
    <host mac='52:54:00:89:2A:7E' name='node1' ip='' />
    <bootp file='pxelinux.0' server='' />
  </dhcp>
</ip>
3. Restart the network

virsh net-start default

What is done here at step 2 is the static assignment of IP addresses by the DHCP server, and the configuration of PXE boot.

Static Assignment of IP addresses

<host mac='52:54:00:CB:C3:C6' name='foreman' ip='' />

Here we tell dnsmasq that the device with MAC address '52:54:00:CB:C3:C6' will always be assigned the IP ''.

PXE Boot Configuration

<bootp file='pxelinux.0' server='' />

Here we tell devices that wish to PXE boot to fetch the file pxelinux.0 from the given TFTP server – here, the foreman VM.
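
As a sanity check, one can verify on the libvirt host that the new entries made it into the active network definition :

virsh net-dumpxml default | grep -E 'host mac|bootp'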

You can now start the foreman VM, not node1 yet.

Create the Host (The Foreman)

Here, fill in the information as needed; the parts specific to PXE provisioning are the Network and Operating System tabs.

  • In the Network tab, fill in the MAC address, the configured domain and subnet, and the IP address assigned in the DHCP server.
  • In the Operating System tab, select the Operating System you want your VM to run. (cf. Create the Operating System)

Path : Hosts -> New Host

Edit Network Host

Edit Operating System Host

Start the VM (Libvirt)

Simply start the node1 VM; it will be assigned its static IP address and will retrieve pxelinux.0 from the foreman server, as specified in the DHCP configuration. The installation may take some time.

Once the VM has automatically rebooted, go to The Foreman’s Hosts page: node1 will be in a ‘No Changes’ state, meaning the build was successful and puppet connected. The VM is now fully managed by The Foreman.


One can configure as many OSes as one wants, with fully configurable kickstart/preseed scripts, themselves dynamically parametrizable. As of today, The Foreman is a solid solution to manage the whole lifecycle of servers, from creation to provisioning to management, providing the user with detailed – and filterable – reports of what is going on. On a personal note, I would say that if you are managing puppet servers and you are not using The Foreman, you are doing it wrong. QED.


Samba standalone + OpenLDAP

On the web there are many tutorials about setting up a Samba server as one’s Domain Controller (DC), but only a few about setting up a standalone Samba server relying on an external OpenLDAP for authentication. While it is actually quite a simple process, it needs a fair amount of configuration on both ends, the Samba server and the OpenLDAP one, before it can be functional.

This post shows how to set up a Samba 3.6 server to rely on an external OpenLDAP 2.4 server, both being hosted on CentOS 6.4.

The Samba Server

Authorize the use of LDAP system-wide

In order for the Samba server to be able to rely on the OpenLDAP one, the use of LDAP needs to be enabled system-wide. To do so, the authconfig configuration needs to be updated the following way :

authconfig --enableldap --update

This simply edits the /etc/nsswitch.conf file and appends ldap to the passwd, shadow, group, netgroup and automount entries.

Install the samba packages

Simply run

yum install samba samba-common

Note : This article is about Samba version 3.6, not Samba4. So do install the samba* packages and not the samba4* packages.

Copy and install the Samba schema in the OpenLDAP server

Note : Since those steps need to be done before the smb.conf configuration, this section appears here, even if logically it belongs to “The OpenLDAP server”.

By default, the OpenLDAP server doesn’t speak the Samba language. One needs to add the samba LDAP schema to it. From the Samba server, once the samba packages are installed, simply copy the samba.ldif file located at /usr/share/doc/samba-3.6.9/LDAP/samba.ldif to your OpenLDAP cn=schema directory :

scp /usr/share/doc/samba-3.6.9/LDAP/samba.ldif user@openldap:/etc/openldap/slapd.d/cn=config/cn=schema

On the OpenLDAP server, the file needs to be renamed following the pattern – cn={X}samba.ldif – where X is the highest number available + 1. On a default OpenLDAP installation, the highest number available is 11 (cn={11}collective.ldif); thus, the samba.ldif file needs to be renamed cn={12}samba.ldif.
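
Sketched as commands on the OpenLDAP server (paths as per a default CentOS install) :

cd '/etc/openldap/slapd.d/cn=config/cn=schema'
mv samba.ldif 'cn={12}samba.ldif'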

Edit the cn={12}samba.ldif file at lines 1 and 3 so it looks like this :

dn: cn={12}samba
objectClass: olcSchemaConfig
cn: {12}samba

Finally, restart the slapd service so the new schema is loaded correctly.

The smb.conf

In Samba there are three password storage backends available by default :

  • smbpasswd – deprecated
  • tdbsam – the one enabled by default; it relies on a local database of users, filled via the smbpasswd -a command
  • ldapsam – relies on an external LDAP directory

To make your standalone Samba server rely on OpenLDAP, simply change this chunk of configuration :

security = user
passdb backend = tdbsam

into this one :

security = user
passdb backend = ldapsam:ldap://
ldap suffix = dc=wordpress,dc=com
ldap admin dn = cn=admin,dc=wordpress,dc=com

  • ldap suffix : the suffix of your DIT
  • ldap admin dn : this is optional. If the OpenLDAP server denies anonymous requests, then one needs to specify an admin dn entry. Also, if your LDAP tree does not have a SambaDomain entry yet, specifying the ldap admin dn configuration will create it automatically. If using ldap admin dn, one needs to set the admin dn password by running smbpasswd -W.

Save and exit the file, then restart the smb service. After a few seconds one can run net getlocalsid and will be presented with a line looking like :

SID for domain SAMBA-SERVER is: S-1-5-21-2844801791-3392433664-1093953107

If you set ldap admin dn in smb.conf, the SambaDomain entry was created automatically and net getlocalsid returns its value; if you set it manually, net getlocalsid should return your own SambaDomain information.

Set samba to start automatically at boot time – chkconfig smb on – and the Samba server is all set to receive requests from existing LDAP users.

The OpenLDAP server

In order for an OpenLDAP server to be Samba aware, some attributes need to be added to the appropriate entries. Make sure the samba schema has been loaded into OpenLDAP, as explained earlier.


SambaDomain

This entry can be automatically created by the Samba server – if one wants – and contains general information about the Samba behavior. The most important piece of information found here is the SID, the Security IDentifier of the domain. It will be needed for the configuration of the Samba Groups and Users entries.


SambaGroupMapping

This is an auxiliary objectClass that should be added to every posixGroup entry one wants to work with in Samba. It has only two mandatory attributes: the SambaSID, a unique ID within the SambaDomain, and the SambaGroupType, which defines the type of the group.

The SambaSID is composed of the SID + RID

  • SID : From the SambaDomain entry
  • RID : Relative IDentifier, a unique id within the SambaDomain

The defined SambaGroupType values are :

  • 2: Domain Group
  • 4: Local Group (alias)
  • 5: Builtin
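
As an illustration, adding the SambaGroupMapping objectClass to an existing posixGroup could look like the following LDIF, to be fed to ldapmodify (the DIT and the RID 2001 are hypothetical; the SID is the one from the net getlocalsid example above) :

dn: cn=engineering,ou=Groups,dc=wordpress,dc=com
changetype: modify
add: objectClass
objectClass: sambaGroupMapping
-
add: sambaSID
sambaSID: S-1-5-21-2844801791-3392433664-1093953107-2001
-
add: sambaGroupType
sambaGroupType: 2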


SambaSamAccount

This is probably the trickiest, yet most scriptable, part. This is the auxiliary objectClass that should be added to every posixAccount entry one wants to work with in Samba. It contains the Samba credentials. For Samba to authenticate an LDAP hosted user, the latter needs to have the following attributes set :

  • SambaAcctFlags : defines the user type (permissions)
  • SambaLMPassword : the LanMan password hash
  • SambaNTPassword : the NT password hash
  • SambaPwdLastSet : timestamp of the last password update
  • SambaSID : the unique identifier within the SambaDomain

To obtain this information, one can run this script; it needs the Perl module Crypt-SmbHash to be installed.

Usage : ./script username password

This will give the following output :

:0:47F9DBCCD37D6B40AAD3B435B51404EE:82E6D500C194BA5B9716495691FB7DD6:[U          ]:LCT-4C18B9FC

The fields are, in order: the LM password hash, the NT password hash, the account flags, and the last password change time (LCT, a hex timestamp).

For the SambaSID value, refer to the SambaGroupMapping section; the same logic applies here.
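
Putting it all together, a minimal sketch of the attributes added to an existing posixAccount entry (hypothetical DIT and RID; the hashes are the ones from the script output above, and the sambaPwdLastSet value is the decimal form of LCT-4C18B9FC) :

dn: uid=jdoe,ou=People,dc=wordpress,dc=com
changetype: modify
add: objectClass
objectClass: sambaSamAccount
-
add: sambaSID
sambaSID: S-1-5-21-2844801791-3392433664-1093953107-3002
-
add: sambaAcctFlags
sambaAcctFlags: [U          ]
-
add: sambaLMPassword
sambaLMPassword: 47F9DBCCD37D6B40AAD3B435B51404EE
-
add: sambaNTPassword
sambaNTPassword: 82E6D500C194BA5B9716495691FB7DD6
-
add: sambaPwdLastSet
sambaPwdLastSet: 1276688892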

Once the SambaDomain, SambaGroupMapping and SambaSamAccount entries are applied where they have to be, the Samba server is ready to authenticate users against the OpenLDAP server.


Making a standalone Samba server rely on an external OpenLDAP is not a difficult process, but it does involve quite a lot of configuration. In this article, neither the iptables nor the SELinux side of things has been addressed, but you should definitely set them up accordingly. Go ahead, add people to your DIT and see how they can access their own Samba share. QED

Effective backup/recovery process for OpenLDAP

Making sure to never lose any piece of data is a really difficult task. A point-in-time backup (snapshot) of a permanently living and changing environment does not match zero-data-loss expectations.

In today’s post the focus will be put on an OpenLDAP backup/recovery process designed to never lose a bit of data – well, maybe the last transaction in case of a power outage.

Most online resources describe the OpenLDAP backup/recovery process as :

  • For backup : running a slapcat command in a cron job and sending the output to a backup server
  • For recovery : getting the last meaningful backup from the backup server and reloading it with a slapadd command

Simple, isn’t it ? Well, it is simple, but it simply does not prevent important data loss. Let’s highlight two cases that demonstrate the limits of this backup plan.

Case 1

Let’s take a moderately busy service that inserts an average of 1,000 new users daily into its directory. Backups are made (using the slapcat command) every day at midnight. Now, for some reason, one day at 8:00 pm, a hard drive crashes (no RAID), or the filesystem gets corrupted, or whatever reason you want to come up with… It is time for recovery. We set up a new VM or a new drive, set up OpenLDAP again, fetch the last meaningful backup and load it with a slapadd command. The OpenLDAP server is back to its yesterday state, but what about the 900 entries that got inserted today ? Simply gone. That is why you must have a redundant set of OpenLDAP servers via replication. But replication is not a backup plan in itself.

Case 2

As a precaution you set up a master/slave schema (a.k.a. provider/consumer in LDAP terms). So even if the main OpenLDAP server crashes you do have an up-to-date copy. Now, since to err is human, if an employee inadvertently removes an important set of data, this change will be replicated to all your slave OpenLDAP servers and the data won’t be recoverable. Recovering yesterday’s backup will leave you in the same state as Case 1 and data will have been lost.


Design of an infrastructure effective for backup/recovery process

To be able to almost never lose a bit of OpenLDAP data, the infrastructure to deploy relies heavily on the accesslog overlay provided by OpenLDAP.

The accesslog overlay is used to keep track of all or selected operations on a particular DIT (the target DIT) by writing details of the operations as entries to another DIT (the accesslog DIT). The accesslog DIT can be searched using standard LDAP queries. Accesslog overlay parameters control whether to log all or a subset of LDAP operations (logops) on the target DIT, to save related information such as the previous contents of attributes or entries (logold and logoldattr) and when to remove log entries from the accesslog DIT

Definition from

The accesslog is mainly used for replication/audit purposes. In the above schema, our slaves will never be masters of any other OpenLDAP server; they use the accesslog as a real-time backup in case the master OpenLDAP server becomes unavailable for any reason.

Backup Process

As simple as described by most resources out there, the backup process will be a slapcat command – run as a cron job – of the needed DITs and their relative accesslog DITs :

#> slapcat -n 2 > maindit-bk.ldif
#> slapcat -n 3 > maindital-bk.ldif
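
Wrapped in a cron job this could look like the following sketch (paths and schedule are assumptions to adapt) :

# /etc/cron.d/ldap-backup : nightly dump of the main DIT and its accesslog
0 0 * * * root slapcat -n 2 > /backup/ldap/maindit-bk.ldif
5 0 * * * root slapcat -n 3 > /backup/ldap/maindital-bk.ldif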

Recovery Process

This is how the recovery process would work :

  1. Load the last meaningful backup of the needed DIT with the slapadd command
  2. Load the accesslog from either the backup or the slave accesslog – whichever fits best; do not forget to clean the accesslog if you are trying to recover from an erroneous action
  3. Set the DIT to be a consumer of the freshly loaded accesslog


Step 1 : Simulate data loss

#> service slapd stop
#> slapcat -n 2 > maindit-backup.ldif
#> service slapd start
#> ldapadd -x -w 'test' -D 'cn=Manager,dc=domain,dc=com' -f user.ldif
#> service slapd stop
#> slapcat -n 3 > maindit-accesslog-backup.ldif

At this point, there are two backup files :

  • maindit-backup.ldif, which has everything but the last entry
  • maindit-accesslog-backup.ldif, which does have the addition of the user

Step 2 : Recovering a clean OpenLDAP server

  1. Install a new VM with the appropriate packages and configuration [only if necessary]
  2. If you are reusing a corrupted OpenLDAP server, move the bdb files of your corrupted database away (mv /var/lib/ldap/{yourdbname}/*.bdb /backup/ldap/{yourdbname})
  3. Enable the accesslog and syncprov modules
  4. Reload the needed DIT with slapadd
  5. Create an accesslog db that will be used as provider
  6. Reload the accesslog db with its backup
  7. Configure syncrepl on the main DIT to be a consumer of the accesslog provider (see the sketch after this list)
  8. Restart slapd
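
For step 7, the syncrepl stanza in the main database section could look like this minimal, slapd.conf style sketch (rid, suffix, bind credentials and the cn=accesslog suffix are assumptions to adapt) :

# Main DIT consuming the local accesslog DIT (delta-syncrepl)
syncrepl rid=001
         provider=ldap://localhost
         bindmethod=simple
         binddn="cn=Manager,dc=domain,dc=com"
         credentials=test
         searchbase="dc=domain,dc=com"
         logbase="cn=accesslog"
         logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
         syncdata=accesslog
         type=refreshAndPersist
         retry="60 +"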

At this point your OpenLDAP server is back up to date data-wise and no data has been lost.


Not that simple, right ? It needs a bit more than two lines of shell script. A long-observed behavior is that people/companies do backups but do not test recovery. Recoveries are tested when the backup plan is created, but then left aside and almost never exercised. Some companies, on the other hand, take recovery to its extreme and deploy the last night’s backup to production every day. This way the recovery process is well tested and they don’t fear failure. Whichever way one decides to go, make sure to always have a data loss-less backup/recovery plan, up-to-date documentation that goes along with it, and your nagios check_ldap plugin up and running. QED

Network Access Server with a RaspberryPi : Part 1 – DNS

The RaspberryPi did not land in the market unnoticed. For about $35 you get a ready-to-work computer.
Many people have done amazing things with it – from IoT to distributed computation – and others use it as a full-stack home media player. Others surely have a spare RaspberryPi and don’t know what to do with it; the answer is an SMB-grade Network Access Server (NAS).

This 3-part series intends to show how to use a RaspberryPi as a Network Access Server with enterprise services.

This specific blog post will be about providing a LAN with a local DNS resolver using dnsmasq, which will improve the overall internet speed of the clients on the LAN and allow a network administrator to configure host names in an easy fashion.

Note: The RaspberryPi is running Raspbian as its Operating System.

DNS Primer

It is taken for granted that the reader knows the basic function of a DNS server: translating a name into an IP address.

In order to be effective, a DNS server has to match two criteria :

  • Proximity
  • Cacheability


Proximity

In the wild, there are two kinds of DNS server network types.

The first one is the anycast network type. With anycast, several geographically separated DNS servers listen on the same IP address; the DNS server closest to you in terms of hops will answer your query, providing you with the lowest latency.

The second one is the unicast network type. With unicast, a single server listens on a single IP address. Meaning: if you live in California and your DNS provider has its servers in California you will have low latency, but if a country-side European resident uses the same DNS server, s/he will have a much higher latency.

Bottom line on proximity: the closer, the better. The closest you can get to your computer – beyond the computer itself – is your LAN. Having a DNS resolver on your LAN provides one with the second lowest possible latency.


Cacheability

One of the biggest challenges for public DNS resolvers is cacheability, more precisely shared cacheability. Due to the scale of the infrastructure deployed by those public DNS resolvers, maintaining a common cache is a big technical challenge in itself.

When you query a name, a DNS server answers with the IP corresponding to the hostname and then caches the IP <> hostname association for the duration of the TTL. So a user may think that next time s/he hits the DNS server for the exact same host name (within the TTL) the DNS query will be faster; well, not necessarily.

Be it a unicast or an anycast network type, nothing ensures one will end up on the exact same server two times in a row (load balancers, etc…).

Bottom line on cacheability: by caching locally on a single server (the Pi) you won’t need to worry about shared caches. The cache will always be in sync with itself.

A note on dnsmasq name server feature

By deploying a DNS resolver within one’s local network, both the proximity and cacheability issues are tackled. Last but not least, dnsmasq deployed on one’s local network will act as an authoritative source for the local device names defined in /etc/hosts. No more need to deal with BIND and DNS records such as ‘router A XXX.XXX.XXX.XXX’. Simply by inserting the line ‘XXX.XXX.XXX.XXX router’ in your hosts file, your DNS server will provide the correct IP address.

Installation & Configuration


sudo apt-get install dnsmasq dnsmasq-base
sudo update-rc.d dnsmasq defaults


As with most programs, dnsmasq’s configuration can be edited in the /etc/dnsmasq.conf file or by dropping configuration snippets in the /etc/dnsmasq.d directory.

In order to keep a clean configuration, only the listen-address parameter will be edited in /etc/dnsmasq.conf.
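
For example, assuming the Pi’s LAN address is (a hypothetical value), the change would be :

listen-address=,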


Then, the extra configuration will be written in specific files under /etc/dnsmasq.d/


server=                       # Primary DNS server
server=                       # Secondary DNS server

server=/   # Specific DNS server for a given domain name

bogus-nxdomain=               # Return NXDOMAIN as it should (IP applies to OpenDNS)

all-servers                   # Query all listed DNS servers; the fastest to answer wins
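
Then restart dnsmasq so the new configuration is taken into account :

sudo service dnsmasq restart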

Make the RaspberryPi your computer’s default DNS

Once everything is set up, you need to let your computer know which DNS server to use. There are several options for this :

  • Configure it directly in your DHCP server if you have access to it (recommended)
  • In Linux, either configure NetworkManager or your /etc/resolv.conf file to have the right DNS server
  • In Windows configure your connection accordingly to use the right DNS

Also, the /etc/hosts file on the Pi will be edited to highlight the name server feature of dnsmasq (IP addresses omitted here) :

printer    printer.localdomain
router     router.localdomain
storage    storage.localdomain


To test the performance of using the RaspberryPi as a DNS server, the following script was run 10 times from a laptop connected to the router via WiFi.
Using the RaspberryPi as DNS server


sleep 2 && dig | grep 'Query time:'
yguenane@laptop:~$ repeat 10 ./
;; Query time: 102 msec
;; Query time: 31 msec
;; Query time: 28 msec
;; Query time: 29 msec
;; Query time: 32 msec
;; Query time: 29 msec
;; Query time: 29 msec
;; Query time: 30 msec
;; Query time: 28 msec
;; Query time: 29 msec

Using OpenDNS as DNS server


sleep 2 && dig @ | grep 'Query time:'
yguenane@laptop:~$ repeat 10 ./
;; Query time: 103 msec
;; Query time: 131 msec
;; Query time: 133 msec
;; Query time: 132 msec
;; Query time: 134 msec
;; Query time: 131 msec
;; Query time: 131 msec
;; Query time: 133 msec
;; Query time: 134 msec
;; Query time: 133 msec

Using Google PublicDNS as DNS server


sleep 2 && dig @ | grep 'Query time:'
yguenane@laptop:~$ repeat 10 ./
;; Query time: 136 msec
;; Query time: 135 msec
;; Query time: 131 msec
;; Query time: 131 msec
;; Query time: 131 msec
;; Query time: 132 msec
;; Query time: 132 msec
;; Query time: 136 msec
;; Query time: 131 msec
;; Query time: 131 msec

One can see the – big – response time difference between the RaspberryPi and the public DNS servers once the entry is cached.

For the name feature, one can simply ping printer and see that the IP address configured in /etc/hosts gets pinged.

The cache can be tuned via the cache-size, no-negcache, local-ttl and neg-ttl options. Refer to the man page for more details.


BIND is a great product, it does well what it has been conceived for, but the entrance barrier might be high for a non networking-related profile. Dnsmasq is a lightweight yet mature alternative for SMBs. It allows one totally unfamiliar with DNS records to easily set up a name server for an entire network.
In this first part we only focused on the DNS feature of dnsmasq, but it has much more to provide. The next part will focus on the DHCP and PXE server features.

Create Puppet modules with solid foundations

Over the last year, the team at PuppetLabs has done a great job making the forge a better place. Also, during this time, they have been pushing puppet module authors to create better modules. By better, three characteristics can be highlighted: testable – thanks to rspec-puppet; style compliant – thanks to puppet-lint; and input-validated – thanks to the validation functions in PuppetLabs’ stdlib module. This post walks you through the process of creating a puppet module that takes advantage of all three.

Note : It is assumed that you have Puppet, gems and bundler already installed

Create the module

Simply run the following command. If you are root, by default the folder will be in /etc/puppet/modules/; if you are logged in as a regular user, you’ll find it in ~/.puppet/modules (it relies on $modulepath).

puppet module generate yguenane-mkdir

This will generate the boilerplate for a new mkdir module by creating the directory structure and files. Note : the module name needs to follow the pattern forgeusername-modulename.

Since the forge’s last update, new files got promoted to first-class citizens :

  • README : this one still gets displayed as the home page of your module profile
  • CHANGELOG : the changelog gets its own tab. One can quickly see the activity of a module without going to the module project page
  • LICENSE : the license file gets its own tab also. The license tab contains – as you would guess – the license text

Bring a set of extra features to your module


The .gemfile

Create a .gemfile file at the root of your module with the following content :

source ''

puppetversion = ENV['PUPPET_VERSION']
gem 'puppet', puppetversion, :require => false
gem 'puppet-lint'
gem 'rspec-puppet'
gem 'puppetlabs_spec_helper', '>= 0.1.0'

Install the bundled gems with the following command

bundle install --gemfile .gemfile

Some of you might ask why .gemfile instead of the traditional and conventional Gemfile; this is PuppetLabs’ explanation :

Aside: Gemfiles and the Puppet Module Tool. In our modules, we name our Gemfiles .gemfile instead of Gemfile. Dotfiles are automatically ignored when packaging with the Module Tool. We recommend this practice in order to avoid clutter on end-users’ systems, but it is not strictly required.

Everything we need to start working on a puppet module has been installed. Now, time for configuration.


The Rakefile

First of all, a Rakefile needs to be created at the root of your module with the following content :

require 'rubygems'
require 'puppetlabs_spec_helper/rake_tasks'
require 'puppet-lint'

Then execute BUNDLE_GEMFILE=.gemfile bundle exec rake help to see which tasks one can run.

Note: here we preceded our bundle command with the BUNDLE_GEMFILE environment variable to tell bundler which file to use. You can rename your .gemfile to Gemfile; since this is the default file you will then not need to specify it, but remember to rename it back before packaging the module.

rake build            # Build puppet module package
rake clean            # Clean a built module package
rake coverage         # Generate code coverage information
rake help             # Display the list of available rake tasks
rake lint             # Check puppet manifests with puppet-lint
rake spec             # Run spec tests in a clean fixtures directory
rake spec_clean       # Clean up the fixtures directory
rake spec_prep        # Create the fixtures directory
rake spec_standalone  # Run spec tests on an existing fixtures directory


rspec-puppet

rspec-puppet is a project that brings rspec features to puppet module testing. It is led by puppet community member @rodjek. The necessary gems were installed when you ran bundle install earlier.

rspec-puppet provides the rspec-puppet-init binary to help you get started. Simply run rspec-puppet-init at the root of your module and a set of directories will be created under your spec/ folder.

The only thing to configure here is the spec/spec_helper.rb file, which should contain this content :

require 'rubygems'
require 'puppetlabs_spec_helper/module_spec_helper'
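
To give an idea of what a test looks like, here is a minimal sketch of a spec file for a hypothetical mkdir::dir defined type (spec/defines/dir_spec.rb; the resource names are assumptions) :

require 'spec_helper'

describe 'mkdir::dir' do
  # the title is the directory the defined type is expected to manage
  let(:title) { '/tmp/demo' }

  it { should contain_file('/tmp/demo').with_ensure('directory') }
end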


The .fixtures.yml

More likely than not, your module will depend on other modules. Thus your rspec tests will need to know about those modules in order to pass. This is the exact purpose of the .fixtures.yml file located at the root of your module. Before running the tests, it will download the modules yours depends on and set them up as fixtures, so your tests can actually be run against them.

A .fixtures.yml file looks like this

fixtures:
  repositories:
    stdlib: "git://"
    dep2: "git://"
  symlinks:
    mymodule: "#{source_dir}"


The .travis.yml

This file aims to tell travis-ci how to run your test suite. For those who do not know travis-ci: it is a hosted continuous integration service that you can use freely to test your open source projects. Refer to the official website for more information on how to use it.

This is the travis configuration I use for my own modules

language: ruby
rvm:
  - 1.8.7
  - 1.9.3
  - ruby-head
script:
  - "rake spec SPEC_OPTS='--format documentation'"
env:
  - PUPPET_VERSION="~> 2.7.0"
  - PUPPET_VERSION="~> 3.0.0"
  - PUPPET_VERSION="~> 3.1.0"
matrix:
  exclude:
    - rvm: ruby-head
    - rvm: 1.9.3
      env: PUPPET_VERSION="~> 2.7.0"
    - rvm: ruby-head
      env: PUPPET_VERSION="~> 2.7.0"
gemfile: .gemfile

Please refer to the official documentation for technical details, explanations of the keywords, and key concepts to extend this configuration file.

Use them all together

For the following tests I will be using the yguenane-mkdir module available on github. This sample module has been configured as mentioned above.


With the current version of this module, executing the following command

BUNDLE_GEMFILE=.gemfile bundle exec rake lint

will output the following result

manifests/dir.pp - WARNING: defined type not documented on line 1
tests/init.pp - WARNING: line has more than 80 characters on line 5
tests/init.pp - WARNING: line has more than 80 characters on line 6
tests/init.pp - WARNING: line has more than 80 characters on line 9

Thanks to this feature you can make sure your module conforms to the PuppetLabs style guide. Some checks can be too restrictive for some situations (i.e. 80chars); in order for puppet-lint not to complain about them, you can disable a check by adding the line PuppetLint.configuration.send("disable_check") to your Rakefile, where check is the condition being checked.

For example, for the 80 characters constraint the line would be :
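
PuppetLint.configuration.send('disable_80chars')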


The list of checks is available here.


With the current version of this module, executing the following command

BUNDLE_GEMFILE=.gemfile bundle exec rake spec

will output the following result


Finished in 0.28395 seconds
2 examples, 0 failures

It ran the tests under the spec/{classes,defines} folders, and 2 tests out of 2 passed.

Once your module is puppet-lint compliant and passes all its tests, it is ready to be uploaded to the forge. Simply use puppet module build to create the tarball and upload it.
Done ! A new style-guide compliant and testable puppet module is on the forge !


If you are using travis-ci, do not forget to add the travis badge to your README to let people know the status of the builds. Information here.


Even if it can be seen as a long process at first, it is definitely worth it :

  • Contributing to the module becomes simpler: by running the tests a contributor knows the module’s status during contributions
  • Testing on multiple puppet and ruby versions becomes trivial using travis-ci
  • This point is more subjective, but when I see a puppet module with tests and a changelog, I have a better feeling about the dedication one is putting into h(er|is) module

The more puppet module authors build modules this way, the better the quality of the forge and of the modules you can find there will be. QED

PS: The actual module is available at