
Automating Sentry With Puppet

Just about a year ago we started using Sentry to collect and track our application exceptions. At the time, we deployed Sentry version 7.0.0-DEV using a very basic Puppet module. Our Puppet module started out just installing Sentry from GitHub and not doing much else. As we collectively learned more about how to use and support Sentry, our Puppet module grew increasingly complex. Dan wrote about our initial foray into Sentry in January of this year, and we’ve come a long way since!

We’ve just successfully completed the process of upgrading to Sentry version 7.7.1. Part of this upgrade was a clean-up effort on our Puppet module, with the express intention of open sourcing it. I’m happy to report that that effort is complete! You can download covermymeds/sentry from the Puppet Forge today!

What Is Sentry?

Sentry is “a modern error logging and aggregation platform.” Sentry offers a hosted solution, which is probably very good. But we have a strong requirement to protect much of our data. As such, we preferred to run Sentry ourselves, in our own data centers.

Thankfully, Sentry is an excellent open source project. It is robust and reasonably well documented. Most importantly, it is an easy-to-install Python application!

Automating Sentry Installation

It’s possible to install Sentry from source, but the Sentry team also publishes it on PyPI. We’re already using the stankevich/python module for other Python-related projects internally, so it was an easy decision to simply install Sentry with pip.
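As a rough sketch, installing Sentry into a virtualenv with stankevich/python looks something like the following. This is illustrative only, not the module’s actual manifest; the paths and the pinned version are assumptions.

```puppet
# Illustrative sketch: install Sentry via pip into a virtualenv
# using the stankevich/python module. Paths and version are examples.
class { 'python':
  pip        => true,
  virtualenv => true,
  dev        => true,
}

python::virtualenv { '/srv/sentry':
  ensure => present,
}

python::pip { 'sentry':
  ensure     => '7.7.1',
  virtualenv => '/srv/sentry',
  require    => Python::Virtualenv['/srv/sentry'],
}
```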

Our module installs all of the necessary dependencies for Sentry, as well as a couple of custom Python scripts. The module does not handle the installation of memcached, Redis, or PostgreSQL. Here our work on a Puppet Redis module helped immensely, as well as the excellent puppetlabs/postgresql and saz/memcached modules.
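Since the module leaves the backing services to you, a node definition might pair it with those modules along these lines (a hedged sketch; class names come from the modules mentioned above, and all tuning parameters are omitted):

```puppet
# Backing services Sentry depends on, managed outside the sentry module.
# Sketch only: real deployments will pass parameters to each class.
class { 'postgresql::server': }
class { 'memcached': }
class { 'redis': }
```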

Truth be told, there’s not a whole lot of magic with the installation aspects of the module. We’re more than happy with this, though, and it’s a testament to the maturity of the Sentry project!


Running Sentry With mod_wsgi

The Sentry project has examples of running Sentry behind an Apache or nginx reverse proxy, but no documentation for using mod_wsgi. mod_wsgi is our preferred method of running Python applications in a virtualenv. We run several Python apps via mod_wsgi, and wanted Sentry to follow the same pattern in order to ease our support burden.

We’ve been using Sentry via mod_wsgi for over a year now with great success. We want others to enjoy it, too!
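For the curious, wiring Sentry into mod_wsgi with puppetlabs/apache looks something like the sketch below. The server name, paths, process counts, and WSGI script location are all illustrative assumptions, not our production configuration.

```puppet
# Illustrative mod_wsgi vhost for Sentry; names and tuning are assumptions.
apache::vhost { 'sentry.example.com':
  port                        => 443,
  ssl                         => true,
  docroot                     => '/srv/sentry',
  wsgi_daemon_process         => 'sentry',
  wsgi_daemon_process_options => {
    'python-home' => '/srv/sentry',  # the virtualenv
    'processes'   => '4',
    'threads'     => '15',
  },
  wsgi_process_group          => 'sentry',
  wsgi_script_aliases         => { '/' => '/srv/sentry/sentry.wsgi' },
}
```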

Automating Sentry Projects

The real secret sauce with our Sentry Puppet module, though, is the automatic creation of Sentry projects for our applications. Through the power of Puppet exported resources, each of our applications exports a Sentry project. The Sentry server collects and instantiates all of these exported resources. Applications can then ask the Sentry server for their project DSN, which gets injected into the application’s vhost environment for use by the application code.

What this means is that a brand new project can be configured to report to Sentry with zero human intervention. It also means that new servers can be provisioned for existing applications and they, too, will report to Sentry correctly. This has been a huge productivity win for both developers and operations.

This pattern does have a couple of caveats, of course. First, your Puppet infrastructure needs to know about your applications in some way. Second, your applications need to know how to contact Sentry to find their DSN.

At CoverMyMeds, we have a profile::appserver profile that is applied to all of our app servers. This profile includes a custom fact that looks for deployed applications and presents them as a comma-separated list for use within our Puppet manifests. This profile also identifies each application’s language, and creates another custom fact for that.

Inside our profile::appserver manifest, we convert the list of deployed apps into an array, and then invoke sentry::source::export{ $array_of_apps: }. The sentry::source::export defined type explicitly looks up the application’s primary language via a fact named ${name}_lang, where “name” here is the name of the application being exported.
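A condensed, hypothetical version of that defined type might look like this. The parameter names and the use of stdlib’s getvar are assumptions, not the module’s exact code.

```puppet
# Hypothetical sketch of sentry::source::export; not the module's exact code.
define sentry::source::export (
  $tag = 'production',
) {
  # Look up the app's primary language via its custom fact, e.g. "foobar_lang".
  $platform = getvar("::${name}_lang")

  # Export a uniquely named project resource for the Sentry server to collect.
  @@sentry::source::project { "${name}-${::hostname}":
    project  => $name,
    platform => $platform,
    tag      => $tag,
  }
}
```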

Finally, we have another custom fact that loops through each deployed application and fetches the DSN from the Sentry server. The Sentry server publishes application DSNs at /dsn/${app_name} on itself. The custom fact caches the DSN value on the client, so the lookup only has to happen once per application server.

(We considered creating an exported resource for the DSNs themselves, which the app servers would then collect, but things got very complicated, very quickly: the app server exports a resource, which the Sentry server collects, which in turn exports a new resource which the app server collects. We elected to have the app servers fetch the DSN via HTTPS directly and be done with it.)

As a practical example, say we have a Ruby application named foobar deployed onto an app server named server01. When the app server executes Puppet it will compile its list of facts. This will cause our deployed_apps fact to contain “foobar”. There will also be a foobar_lang fact with a value of “Ruby”. The server will create a sentry::source::export { 'foobar': } resource. This resource will look for (and find!) a custom fact named foobar_lang. Then this resource will export the following:

@@sentry::source::project { "foobar-server01":
    project  => 'foobar',
    platform => 'Ruby',
    tag      => 'production',
}

When the Sentry server next runs Puppet, it will collect that exported resource and instantiate it. Multiple application servers can export the same application, because the exported resource includes the app server hostname. The Sentry server will collect all of these uniquely named resources, even if they’re all for the same application. The actual sentry::source::project resource first checks to ensure that a Sentry project doesn’t already exist for the application. That’s why we explicitly duplicate the project name as a parameter to the defined type.
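On the Sentry server side, collecting every exported project is a one-line space-ship collector (a sketch; filtering on the tag parameter is an assumption):

```puppet
# Collect and instantiate all exported Sentry project resources (sketch).
Sentry::Source::Project <<| tag == 'production' |>>
```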

When server01 next runs Puppet, its custom fact will check for a foobar_dsn cache file. It won’t exist, since this is the first time Puppet has run since the Sentry server collected the exported project. So the custom fact will make an HTTPS request to the Sentry server for the /dsn/foobar URL. This returns the DSN that Sentry created for the foobar project, which becomes a foobar_dsn fact available to our manifests. server01 will then use that fact’s value to create an environment variable for the foobar application vhost, and the foobar code can start reporting to Sentry!
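Injecting the fact into the vhost might look like this with puppetlabs/apache (illustrative only; the real profile, server name, and paths differ):

```puppet
# Illustrative: expose the foobar_dsn fact to the app via SetEnv.
apache::vhost { 'foobar.example.com':
  port    => 443,
  docroot => '/srv/foobar/current/public',
  setenv  => ["SENTRY_DSN ${::foobar_dsn}"],
}
```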


Wow, explaining a complex Puppet exported resource configuration in prose is harder than I thought! I hope the above makes sense, at least at a high level. Hopefully the Puppet code makes clear anything I muddied above.

We’ve been extremely happy with Sentry in general, and with our hands-free project creation process in particular. We hope you’ll find it as useful as we do!

Download covermymeds/sentry today!
