
IPAM: Bringing it all together

When I left off, I had explained how I manage the first half of my IPAM solution, requesting an IP address during system deployment. This time I will describe how we maintain that information during the life of the system and reuse the IP once the system is decommissioned. I have recently published the code that makes this possible at Puppet Forge.

One of the requirements of this setup is the ability to update DNS servers automatically. We did not want to maintain DNS information by hand; doing so would be just about as unreliable and labor-intensive as using a spreadsheet. I looked at a couple of different options for name services, but not for long: ISC BIND comes with Red Hat Enterprise Linux, and I’m familiar with it.

ISC BIND and Puppet

I spent some time looking at Puppet modules that are available for BIND configuration. There are some good candidates out there both on Puppet Forge and GitHub. The problem I ran into was that none of these modules allowed for automated updates of zone data. There were modules that would export information from a node, but I needed to get information from our one authoritative source, the phpIPAM system.

BIND Configuration File

Our setup is such that a server can be primary for some zones and a slave for others. I needed to be able to indicate at run time whether a system was a primary or a slave for any particular zone and write the BIND configuration file to fit that need. It was important to accomplish this without duplicating code, while keeping it clear from a host’s data file what role the system should play for each zone.

My solution for the configuration file was to use a Puppet template and populate the variables from centrally located data, in my case hiera data. Server-wide configuration settings such as logging, access control lists, and forwarders are declared in the top-level hiera data file, hieradata/common.yaml. The domain or zone definitions contained in the configuration file are handled differently, because their form is dictated by whether the system is acting as a primary or slave. The domain configuration settings need to be defined at the host or server level, and there may be one or many zones defined for a server. A hash of hashes was the best way to address this data. Each domain or reverse lookup zone has its own hash with the data needed to write its configuration file entry.
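To make that structure concrete, the zone data for one name server might look something like the hash of hashes below. The zone names and keys are purely illustrative, and it is written as a Ruby literal here; the module itself reads the equivalent structure from host-level hiera YAML.

# Illustrative only: hypothetical zone definitions for a single name server.
zones = {
  'example.com' => {
    'zone_type' => 'master',        # this host is authoritative for the zone
    'masters'   => [],              # nothing to pull from when we are the master
  },
  'example.org' => {
    'zone_type' => 'slave',         # this host only replicates the zone
    'masters'   => ['10.20.4.53'],  # where to pull zone transfers from
  },
  '4.20.10.in-addr.arpa' => {
    'zone_type' => 'master',        # reverse lookup zone for 10.20.4.0/24
    'masters'   => [],
  },
}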

If you look at the Puppet template file I wrote for the configuration file, the first part looks straightforward, but where I define the zones, your eyes may begin to cross. When designing our networks, the networking folks wanted us to have to create our subnets for production, testing, etc. only once, so they settled on CIDR /22 subnets. While this makes really good sense to minimize the work required when you outgrow a current subnet definition, creating reverse lookup zones in BIND for CIDR /22 subnets was new to me. What made the most sense was to split anything larger than CIDR /24 into multiple /24 subnets. That is why the last section of my template gets a bit complicated: it handles reverse subnets larger than CIDR /24. When I sat down to write support for zones smaller than CIDR /24… let’s just say that code has not been released yet.

Forward Zone Files

Regardless of whether a zone is configured as a primary or slave, there is a certain amount of data that won’t change. In light of that, and because I try not to duplicate settings, I put the definitions for zones in the common YAML file. Putting the zone information in a common place also gave me one place to add CNAME records that are not covered by our authoritative IPAM source. By querying the phpIPAM database for a given domain and then merging that data with hiera data, I can build a JSON array to loop over in my template. While the complete zone template is a bit more complicated, the portion that creates the A resource records wasn’t too hard to write. Below is an example taken from the puppet-bind code in the CoverMyMeds organization on github.com.

templates/fwd_zone_file.erb
.....
<% require 'resolv' -%>
<% @merged_zone.each do |key,value| -%>
<% if value =~ Resolv::IPv4::Regex -%>
<%= key.gsub("\.#{@name}", '') -%> IN A <%= value %>
<% else -%>
<%= key.gsub("\.#{@name}", '') -%> IN CNAME <%= value %>
<% end -%>
<% end -%>

In my @merged_zone variable above I have both A and CNAME resource records; by checking whether the value is an IP address, I can decide whether a record should be written as an A record or as a CNAME.
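To make that merge concrete, here is a rough sketch in plain Ruby of how such a merged hash could be assembled. The hostnames and addresses are made up, and the real module does this with its phpIPAM function and hiera lookups inside Puppet.

# Illustrative only: building the hash the forward zone template loops over.
require 'json'

# A records as returned from the phpIPAM query (hostname => IP)
ipam_records = {
  'web01.example.com' => '10.20.4.10',
  'db01.example.com'  => '10.20.4.20',
}

# Extra CNAMEs kept in hiera (alias => target), since phpIPAM only tracks IPs
hiera_cnames = {
  'www.example.com' => 'web01.example.com',
}

# The template simply loops over one merged hash
merged_zone = ipam_records.merge(hiera_cnames)
puts JSON.pretty_generate(merged_zone)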

Reverse Zone Files

Reverse lookup zones were easy to handle at first; I have code in the defined type manifests/zone_add.pp that checks whether the zone is a reverse zone and then passes it to a different defined type. The template file for the reverse zone is much simpler than the forward zone. That held true until, as I mentioned above, I began to deal with subnets larger than CIDR /24.
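To give a sense of why it is simpler, a minimal sketch of a reverse zone body could look like the following. The variable name is hypothetical rather than what the module actually uses, and it assumes the data has already been reduced to host-octet and hostname pairs.

<%# Hypothetical sketch, not the module's actual reverse zone template. -%>
<%# @reverse_records maps the host octet of each address to its FQDN. -%>
<% @reverse_records.each do |octet, fqdn| -%>
<%= octet %> IN PTR <%= fqdn %>.
<% end -%>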

Puppet custom functions

All of the work above would be, in my mind, useless if I had to maintain all of the information in hiera. I identified that shortcoming in the modules I looked at when I began this work. By getting my data from hiera only, I would have to manually maintain that information and would be in a situation only slightly better than using a spreadsheet. Puppet custom functions to the rescue! By writing custom functions to connect to an API on the phpIPAM system, I can pull the data needed to build our zone files.

In the BIND module I use a single custom function to pull the data I need for zone files. The function is simple Ruby code that takes four arguments. The first three arguments are used to connect to phpIPAM; the final argument is the domain to pull data for. I wrote the code used on phpIPAM in such a way that you can pass a subnet as the domain argument and get back the data for that subnet in order to write reverse zone files.
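For illustration, the overall shape of such a function, written in the legacy parser-function style, might look like the sketch below. The function name, the API route, the authentication header, and the JSON layout are all assumptions made for the example, not the module’s actual code.

# Hypothetical sketch of a parser function that asks phpIPAM for a zone's records.
require 'net/http'
require 'uri'
require 'json'

module Puppet::Parser::Functions
  newfunction(:ipam_zone_data, :type => :rvalue) do |args|
    host, app_id, token, domain = args

    # The exact route and auth header depend on how the phpIPAM API is set up.
    uri = URI("https://#{host}/api/#{app_id}/zone/#{domain}/")
    request = Net::HTTP::Get.new(uri)
    request['token'] = token

    response = Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
      http.request(request)
    end

    # Hand back hostname => IP pairs for the manifest to merge with hiera data.
    JSON.parse(response.body)['data']
  end
end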

A custom Puppet function was also the solution for reverse zone files that cover more than a CIDR /24. Using the function contained in cidr_zone, I’m able to get the CIDR /24 subnets that make up the larger CIDR block and then call the first function with each of those subnets as arguments.
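To illustrate the idea, here is a plain Ruby sketch of that split; it is not the actual cidr_zone code, just the arithmetic behind it.

# Rough sketch: enumerate the /24 networks contained in a larger IPv4 block.
require 'ipaddr'
require 'socket'

def split_into_24s(cidr)
  prefix = cidr.split('/').last.to_i
  return [cidr] if prefix >= 24

  base  = IPAddr.new(cidr).to_range.first.to_i
  count = 2**(24 - prefix)                    # e.g. a /22 contains four /24s

  (0...count).map do |i|
    "#{IPAddr.new(base + i * 256, Socket::AF_INET)}/24"
  end
end

# A /22 such as 10.20.4.0/22 yields 10.20.4.0/24 through 10.20.7.0/24,
# each of which gets its own reverse zone (4.20.10.in-addr.arpa, and so on).
puts split_into_24s('10.20.4.0/22')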

Decommission that server

Now that I have covered how we keep track of and resolve systems that are in service on the network, the only thing left is to describe how a system is removed from the network. It’s just as important to be able to remove unused IP addresses as to add them. In the automated assignment article I wrote, I mentioned mkvm.rb, to which I added a plugin in order to get an IP from our phpIPAM system. In the same way, when it’s time for a system to be decommissioned, a separate Ruby program I wrote called rmvm.rb removes the system from our VMware infrastructure and removes the IP reservation from phpIPAM. Because the phpIPAM system is our one authoritative source for IP addresses, BIND will automatically be updated the next time Puppet runs, keeping our DNS records accurate.
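The phpIPAM side of that cleanup amounts to an authenticated DELETE against its REST API. A minimal sketch of that call is below; the route, header name, and function name are assumptions for illustration rather than the actual rmvm.rb code, and the exact API path depends on your phpIPAM version and configuration.

# Hypothetical sketch: release an address back to phpIPAM after a VM is destroyed.
require 'net/http'
require 'uri'

def release_ip(ipam_host, app_id, token, address_id)
  uri = URI("https://#{ipam_host}/api/#{app_id}/addresses/#{address_id}/")
  request = Net::HTTP::Delete.new(uri)
  request['token'] = token

  response = Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
    http.request(request)
  end
  response.is_a?(Net::HTTPSuccess)
end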

I have enjoyed doing this work. While it was challenging at times and the code may never be perfect, I have achieved all of the goals that were stipulated when I began. Thanks for following along on this journey; I hope you have enjoyed it.

Comments

  1. If I read this (and your code) correctly, zone files are only updated by a Puppet run.

    If that’s the case, how do you keep your DNS servers returning consistent data? Without something kicking off the Puppet run after a change, the result would be a bunch of servers randomly getting new zone files based on the Puppet agent’s timing.

    Do you just run puppet continuously with runinterval=0 in your puppet.conf? If so, what kind of extra load does that put on your DNS servers?

    • The only servers that get updates from Puppet are the masters; the slaves then replicate that data when the masters are updated. We do run our masters on a more frequent basis than other systems, usually every ten to fifteen minutes.

      I suppose if you were updating all the DNS servers via Puppet you could have some discrepancies, but I have only one master per data center, so I haven’t really had any problems.