Moving away from Puppet: SaltStack or Ansible?

Over the past month at Lyft we’ve been working on porting our infrastructure code away from Puppet. We had some difficulty coming to agreement on whether we wanted to use SaltStack (Salt) or Ansible. We were already using Salt for AWS orchestration, but we were divided on whether Salt or Ansible would be better for configuration management. We decided to settle it the thorough way: by implementing the port in both Salt and Ansible and comparing them over multiple criteria.

First, let me start by explaining why we decided to port away from Puppet: we had a complex Puppet code base with around 10,000 lines of actual Puppet code. The code was originally spaghetti-oriented, and over the past year or so it had been converted to a new pattern that used Hiera and Puppet modules split up into services and components. It’s roughly the role pattern, for those familiar with Puppet. The code base was a mixture of these two patterns, and our DevOps team was comprised almost entirely of recently hired members who were not very familiar with Puppet and were unfamiliar with the code base. It was large, unwieldy and complex, especially for our core application. The DevOps team was getting accustomed to the Puppet infrastructure; however, Lyft is strongly rooted in the concept of ‘if you build it, you run it’. The DevOps team felt that the Puppet infrastructure was too difficult to pick up quickly and would be impossible to introduce to our developers as the tool they’d use to manage their own services.

Before I delve into the comparison, we had some requirements of the new infrastructure:

  1. No masters. For Ansible this meant using ansible-playbook locally, and for Salt this meant using salt-call locally. Using a master for configuration management adds an unnecessary point of failure and sacrifices performance.
  2. Code should be as simple as possible. Configuration management abstractions generally lead to complicated, convoluted and difficult to understand code.
  3. No optimizations that would make the code read in an illogical order.
  4. Code must be split into two parts: base and service-specific, where each would reside in separate repositories. We want the base section of the code to cover configuration and services that would be deployed for every service (monitoring, alerting, logging, users, etc.) and we want the service-specific code to reside in the application repositories.
  5. The code must work for multiple environments (development, staging, production).
  6. The code should read and run in sequential order.

Here’s how we compared:

  1. Simplicity/Ease of Use
  2. Maturity
  3. Performance
  4. Community

Simplicity/Ease of Use


Ansible

A couple of team members had a strong preference for Ansible, as they felt it was easier to use than Salt, so I started by implementing the port in Ansible, then implementing it again in Salt.

As I started, Ansible was indeed simple. The documentation was clearly structured, which made learning the syntax and general workflow relatively easy. The documentation is oriented toward running Ansible from a controller rather than locally, which made the initial work slightly harder to pick up, but it wasn’t a major stumbling block. The biggest issue was needing an inventory file with ‘localhost’ defined and needing to pass -c local on the command line. Additionally, Ansible’s playbook structure is very simple. There are tasks, handlers, variables and facts. Tasks do the work in order and can notify handlers to run actions at the end of the run. Variables can be used via Jinja in playbooks or in templates. Facts are gathered from the system and can be used like variables.
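
As a sketch of how those four concepts fit together, here’s a minimal masterless playbook. The names and files here are illustrative, not from our port, and the syntax is Ansible 1.x, current at the time:

```yaml
# Hypothetical playbook: a variable, a fact, a task, and the handler
# it notifies. Run locally via an inventory containing 'localhost'.
- hosts: localhost
  connection: local
  vars:
    ntp_package: ntp
  tasks:
    - name: Ensure ntp is installed
      apt: name={{ ntp_package }} state=present
    - name: Ensure ntp config is in place
      template: src=ntp.conf.j2 dest=/etc/ntp.conf
      notify: Restart ntp
    - name: Show a gathered fact
      debug: msg="Running on {{ ansible_hostname }}"
  handlers:
    - name: Restart ntp
      service: name=ntp state=restarted
```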

Developing the playbook was straightforward. Ansible always runs in order and exits immediately when an error occurs. This made development relatively easy and consistent. For the most part it also meant that when I destroyed my vagrant instance and recreated it, the playbook ran consistently.

That said, as I was developing I noticed that my ordering was occasionally problematic and I needed to move things around. As I finished porting sections of the code I’d occasionally destroy and re-up my vagrant instance, re-run the playbook, and notice errors in my execution. Overall, though, ordered execution was far more reliable than Puppet’s unordered execution.

My initial playbook was a single file. As I went to split base and service apart I noticed some complexity creeping in. Ansible includes tasks and handlers separately, and the format changes when they’re included, which was confusing at first. My playbook was now: playbook.yml, base.yml, base-handlers.yml, service.yml, and service-handlers.yml. For variables I had: user.yml and common.yml. As I was developing I generally needed to keep the handlers open so that I could easily reference them from the tasks.
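
The resulting layout looked roughly like this (a sketch using the include syntax of Ansible 1.x; the file names are the ones listed above):

```yaml
# playbook.yml -- top-level playbook pulling in the split task and
# handler files plus the variable files.
- hosts: localhost
  connection: local
  vars_files:
    - common.yml
    - user.yml
  tasks:
    - include: base.yml
    - include: service.yml
  handlers:
    - include: base-handlers.yml
    - include: service-handlers.yml
```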

The use of Jinja in Ansible is well executed. Here’s an example of adding users from a dictionary of users:

- name: Ensure groups exist
  group: name={{ item.key }} gid={{ }}
  with_dict: users

- name: Ensure users exist
  user: name={{ item.key }} uid={{ }} group={{ item.key }} groups=vboxsf,syslog comment="{{ item.value.full_name }}" shell=/bin/bash
  with_dict: users

For playbooks Ansible uses Jinja for variables, but not for logic. Looping and conditionals are built into the DSL: with/when/etc. control how individual tasks are handled. This is important to note, because it means you can only loop over individual tasks. A downside of Ansible doing logic via the DSL is that I found myself constantly needing to consult the documentation for looping and conditionals. Since Ansible controls its own logic, though, it has a pretty powerful feature: variable registration. Tasks can register data into variables for use in later tasks. Here’s an example:

- name: Check test pecl module
  shell: "pecl list | grep test | awk '{ print $2 }'"
  register: pecl_test_result
  ignore_errors: True
  changed_when: False

- name: Ensure test pecl module is installed
  command: pecl install -f test-1.1.1
  when: pecl_test_result.stdout != '1.1.1'

This is one of Ansible’s most powerful tools, but unfortunately Ansible also relies on it for pretty basic functionality. Notice what’s happening above: the first task checks the status of a shell command, then registers it to a variable so it can be used in the next task. I was displeased to see it took this much effort to do something so basic; this should be a feature of the DSL. Puppet, for instance, has a much more elegant syntax for it:

exec { 'Ensure redis pecl module is installed':
  command => 'pecl install -f redis-2.2.4',
  unless  => 'pecl list | grep redis | awk \'{ print $2 }\'';
}

I was initially very excited about this feature, thinking I’d use it often in interesting ways, but as it turned out I only used the feature for cases where I needed to shell out in the above pattern because a module didn’t exist for what I needed to do.

Some of the module functionality was broken up into a number of different modules, which made it difficult to figure out how to do some basic tasks. For instance, basic file operations are split between the file, copy, fetch, get_url, lineinfile, replace, stat and template modules. This was annoying when referencing documentation, where I needed to jump between modules until I found the right one. The shell/command module split is much more annoying, as command will only run basic commands and won’t warn you when it’s stripping code. A few times I wrote a task using the command module, then later changed the command being run. The new command actually required the use of the shell module, but I didn’t realize it and spent quite a while trying to figure out what was wrong with the execution.
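
The failure mode looks roughly like this (hypothetical task names; the point is that the command module does not interpret shell operators):

```yaml
# Broken: the command module runs the executable directly, so the pipe
# and everything after it are handed to pecl as literal arguments.
- name: Check test pecl module (broken)
  command: pecl list | grep test

# Works: the shell module passes the line through /bin/sh, so the pipe
# behaves as expected.
- name: Check test pecl module
  shell: pecl list | grep test
```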

I found the input, output, DSL and configuration formats of Ansible perplexing. Here’s some examples:

  • Ansible and inventory configuration: INI format
  • Custom facts in facts.d: INI format
  • Variables: YAML format
  • Playbooks: YAML format, with key=value format inline
  • Booleans: yes/no format in some places and True/False format in other places
  • Output for introspection of facts: JSON format
  • Output for playbook runs: no idea what format

Output for playbook runs was terse, which was generally nice. Each playbook task output a single line, except for looping, which printed the task line, then each sub-action. Loop actions over dictionaries printed the dict item with the task, which was a little unexpected and cluttered the output. There is little to no control over the output.

Introspection for Ansible was lacking. To see the value of variables in the format actually presented inside of the language it’s necessary to use the debug task inside of a playbook, which means you need to edit a file and do a playbook run to see the values. Getting the facts available was more straightforward: ‘ansible -m setup hostname’. Note that hostname must be provided here, which is a little awkward when you’re only ever going to run locally. Debug mode was helpful, but getting in-depth information about what Ansible was actually doing inside of tasks was impossible without diving into the code, since every task copies a python script to /tmp and executes it, hiding any real information.
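
For example, to inspect the registered variable from the earlier pecl task, you have to add a task like this to the playbook and do a full run:

```yaml
# Prints the full registered dictionary (stdout, rc, etc.) during a run.
- name: Show the registered pecl result
  debug: var=pecl_test_result
```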

When I finished writing the playbooks, I had the following line/word/character count (wc output):

 15     48     472   service-handlers.yml
 463    1635   17185 service.yml
 27     70     555   base-handlers.yml
 353    1161   11986 base.yml
 15     55     432   playbook.yml
 873    2969   30630 total

There were 194 tasks in total.


Salt

Salt is initially difficult. The organization of the documentation is poor and its text is dense, making it hard for newcomers. Salt assumes you’re running in master/minion mode and uses absolute paths for its states, modules, etc. Unless you’re using the default locations, which are poorly documented for masterless mode, it’s necessary to create a configuration file. The documentation for configuring the minion is dense and there are no guides for common configuration modes. States and pillars both require a ‘top.sls’ file which defines what will be included per-host (or whatever host-matching scheme you’re using); this is somewhat confusing at first.

Past the initial setup, Salt was straightforward. Salt’s state system has states, pillars and grains. States are the YAML DSL used for configuration management, pillars are user defined variables and grains are variables gathered from the system. All parts of the system except for the configuration file are templated through Jinja.
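
As a sketch of how these pieces fit together (file names and values here are illustrative, not from our port):

```yaml
# top.sls -- maps hosts to the state files they receive; '*' matches
# all hosts, which is what you want when running masterless.
base:
  '*':
    - base
    - service

# base.sls (fragment) -- a state pulling a value from a pillar and
# branching on a grain; both are rendered through Jinja before the
# states execute.
Ensure ntp is installed:
  pkg.installed:
    - name: {{ pillar.get('ntp_package', 'ntp') }}

{% if grains['os_family'] == 'Debian' %}
Ensure ntp is running:
  service.running:
    - name: ntp
{% endif %}
```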

Developing Salt’s states was straightforward. Salt’s default mode of operation is to execute states in order, but it also has a requisite system, like Puppet’s, which can change the order of the execution. Triggering events (like restarting a service) is documented using the watch or watch_in requisite, which means that following the default documentation will generally result in out-of-order execution. Salt also provides the listen/listen_in global state arguments which execute at the end of a state run and do not modify ordering. By default Salt does not immediately halt execution when a state fails, but runs all states and returns the results with a list of failures and successes. It’s possible to modify this behavior via the configuration. Though Salt didn’t exit on errors, I found that I had errors after destroying my vagrant instance then rebuilding it at a similar rate to Ansible. That said, I did eventually set the configuration to hard fail since our team felt it would lead to more consistent runs.
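
The listen_in pattern looks roughly like this (illustrative state IDs): deploying the config file queues a restart that fires once, at the end of the run, without reordering anything.

```yaml
Ensure apache config is deployed:
  file.managed:
    - name: /etc/apache2/apache2.conf
    - source: salt://apache/apache2.conf
    - listen_in:
      - service: Ensure apache is running

Ensure apache is running:
  service.running:
    - name: apache2
```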

My initial state definition was in a single file. Splitting this apart into base and service states was very straightforward. I split the files apart and included base from service. Salt makes no distinction between states and commands being notified (handlers in Ansible); there’s just states, so base and service each had their associated notification states in their respective files. At this point I had: top.sls, base.sls and service.sls for states. For pillars I had top.sls, users.sls and common.sls.

The use of Jinja in Salt is well executed. Here’s an example of adding users from a dictionary of users:

{% for name, user in pillar['users'].items() %}
Ensure user {{ name }} exist:
  user.present:
    - name: {{ name }}
    - uid: {{ }}
    - gid_from_name: True
    - shell: /bin/bash
    - groups:
      - vboxsf
      - syslog
    - fullname: {{ user.full_name }}
{% endfor %}

Salt uses Jinja for both state logic and templates. It’s important to note that Salt uses Jinja for state logic because it means that the Jinja is executed before the state. A negative of this is that you can’t do something like this:

Ensure myelb exists:
  boto_elb.present:
    - name: myelb
    - availability_zones:
      - us-east-1a
    - listeners:
      - elb_port: 80
        instance_port: 80
        elb_protocol: HTTP
      - elb_port: 443
        instance_port: 80
        elb_protocol: HTTPS
        instance_protocol: HTTP
        certificate: 'arn:aws:iam::879879:server-certificate/mycert'
      - health_check:
          target: 'TCP:8210'
    - profile: myprofile

{% set elb = salt['boto_elb.get_elb_config']('myelb', profile='myprofile') %}

{% if elb %}
Ensure cname points at ELB:
  boto_route53.present:
    - name:
    - zone:
    - type: CNAME
    - value: {{ elb.dns_name }}
{% endif %}

That’s not possible because the Jinja running 'set elb' is going to run before 'Ensure myelb exists', since the Jinja is always rendered before the states are executed.

On the other hand, since Jinja is executed first, it means you can wrap multiple states in a single loop:

{% for module, version in {
       'test': ('1.1.1', 'stable'),
       'hello': ('1.2.1', 'stable'),
       'world': ('2.2.2', 'beta')
   }.items() %}
Ensure {{ module }} pecl module is installed:
  pecl.installed:
    - name: {{ module }}
    - version: {{ version[0] }}
    - preferred_state: {{ version[1] }}

Ensure {{ module }} pecl module is configured:
  file.managed:
    - name: /etc/php5/mods-available/{{ module }}.ini
    - contents: "extension={{ module }}.so"
    - listen_in:
      - cmd: Restart apache

Ensure {{ module }} pecl module is enabled for cli:
  file.symlink:
    - name: /etc/php5/cli/conf.d/{{ module }}.ini
    - target: /etc/php5/mods-available/{{ module }}.ini

Ensure {{ module }} pecl module is enabled for apache:
  file.symlink:
    - name: /etc/php5/apache2/conf.d/{{ module }}.ini
    - target: /etc/php5/mods-available/{{ module }}.ini
    - listen_in:
      - cmd: Restart apache
{% endfor %}

Of course, something similar to Ansible’s register functionality isn’t available either. This turned out to be fine, though, since Salt has a very feature-rich DSL. Here’s an example of a case where it was necessary to shell out:

# We need to ensure the current link points to src.git initially
# but we only want to do so if there’s not a link there already,
# since it will point to the current deployed version later.
Ensure link from current to src.git exists if needed:
  file.symlink:
    - name: /srv/service/current
    - target: /srv/service/src.git
    - unless: test -L /srv/service/current

Additionally, as a developer who wanted to switch to Salt or Ansible partly because they’re written in Python, it was very refreshing to use Jinja for logic in the states rather than something built into the DSL, since I didn’t need to consult DSL-specific documentation for looping or conditionals.

Salt is very consistent when it comes to input, output and configuration. Everything is YAML by default. Salt will happily give you output in a number of different formats, including ones you create yourself via outputter modules. The default output of state runs shows the status of all states, but can be configured in multiple ways. I ended up using the following configuration:

# Show terse output for successful states and full output for failures.
state_output: mixed
# Only show changes
state_verbose: False

State runs that don’t change anything show nothing. State runs that change things will show the changes as single lines, but failures show full output so that it’s possible to see stacktraces.

Introspection for Salt was excellent. Both grains and pillars were accessible from the CLI in a consistent manner (salt-call grains.items; salt-call pillar.items). Salt’s info log level shows in-depth information of what is occurring per module. Using the debug log level even shows how the code is being loaded, the order it’s being loaded in, the OrderedDict that’s generated for the state run, the OrderedDict that’s used for the pillars, the OrderedDict that’s used for the grains, etc.. I found it was very easy to trace down bugs in Salt to report issues and even quickly fix some of the bugs myself.

When I finished writing the states, I had the following line/word/character count (wc output):

527    1629   14553 api.sls
6      18     109   top.sls
576    1604   13986 base/init.sls
1109   3251   28648 total

There were 151 salt states in total.

Notice that though there are 236 more lines of Salt, there are fewer characters in total. This is because Ansible’s short format makes its lines longer but uses fewer lines overall, which makes it difficult to compare the two directly by lines of code. The number of states/tasks is a better metric anyway, though.


Maturity

Both Salt and Ansible are currently more than mature enough to replace Puppet. At no point was I unable to continue because a necessary feature was missing from either.

That said, Salt’s execution and state module support is more mature than Ansible’s, overall. An example is how to add users. It’s common to add a user with a group of the same name. Doing this in Ansible requires two tasks:

- name: Ensure groups exist
  group: name={{ item.key }} gid={{ }}
  with_dict: users

- name: Ensure users exist
  user: name={{ item.key }} uid={{ }} group={{ item.key }} groups=vboxsf,syslog comment="{{ item.value.full_name }}" shell=/bin/bash
  with_dict: users

Doing the same in Salt requires one:

{% for name, user in pillar['users'].items() %}
Ensure user {{ name }} exist:
  user.present:
    - name: {{ name }}
    - uid: {{ }}
    - gid_from_name: True
    - shell: /bin/bash
    - groups:
      - vboxsf
      - syslog
    - fullname: {{ user.full_name }}
{% endfor %}

Additionally, Salt’s user module supports shadow attributes, where Ansible’s does not.

Another example is installing a Debian package from a URL. Doing this in Ansible takes two tasks:

- name: Download mypackage debian package
  get_url: url= dest=/tmp/mypackage_0.1.0-1_amd64.deb

- name: Ensure mypackage is installed
  apt: deb=/tmp/mypackage_0.1.0-1_amd64.deb

Doing the same in Salt requires one:

Ensure mypackage is installed:
  pkg.installed:
    - sources:
      - mypackage:

Another example is fetching files from S3. Salt has native support for this where files are referenced in many modules, while in Ansible you must use the s3 module to download a file to a temporary location on the filesystem, then use one of the file modules to manage it.
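
In Salt that native support looks roughly like this (the bucket and key are made up, and it assumes S3 credentials are configured in the minion config):

```yaml
# file.managed can pull its source straight from S3, no temp file or
# second state required.
Ensure mypackage config is deployed:
  file.managed:
    - name: /etc/mypackage.conf
    - source: s3://mybucket/mypackage.conf
```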

Salt has state modules for the following things that Ansible did not have:

  • pecl
  • mail aliases
  • ssh known hosts

Ansible had a few broken modules:

  • copy: when content is used, it writes POSIX non-compliant files by default. I opened an issue for this and it was marked as won’t fix. More on this in the Community section.
  • apache2_module: always reports changes for some modules. I opened an issue and it was marked as a duplicate. A fix is in a pull request, open as of this writing with no response since June 24, 2014.
  • supervisorctl: doesn’t handle a race condition properly where a service starts after it checks its status. A fix is in a pull request, open as of this writing with no response since June 29, 2014. An earlier pull request unsuccessfully fixed it on Aug 30, 2013; the issue is still marked as closed, though there are reports of it still being broken.

Salt had broken modules as well, both of which were broken in the same way as the Ansible equivalents, which was amusing:

  • apache_module: always reports changes for some modules. Fixed in upcoming release.
  • supervisorctl: doesn’t handle a race condition properly where a service starts after it checks its status. Fixed in upcoming release.

Past basic module support, Salt is far more feature-rich:

  • Salt can output in a number of different formats, including custom ones (via outputters)
  • Salt can output to other locations like mysql, redis, mongo, or custom locations (via returners)
  • Salt can load its pillars from a number of locations, including custom ones (via external pillars)
  • If running an agent, Salt can fire local events that can be reacted upon (via reactors); if using a master it’s also possible to react to events from minions.


Performance

Salt was faster than Ansible for state/playbook runs. For no-change runs Salt was considerably faster. Here’s some performance data for each, for full runs and no-change runs. Note that these runs were relatively consistent across large numbers of system builds in both vagrant and AWS and the full run times were mostly related to package/pip/npm/etc installations:


Salt:
  • Full run: 12m 30s
  • No change run: 15s


Ansible:
  • Full run: 16m
  • No change run: 2m

I was very surprised at how slow Ansible was when making no changes. Nearly all of this time was related to user accounts, groups, and ssh key management; in fact, I opened an issue for it. Ansible takes on average 0.5 seconds per user, and this extends to other modules that loop over large dictionaries. As the number of managed users grows, our no-change (and full-change) runs will grow with it. If we double our managed users we’ll be looking at 3-4 minute no-change runs.

I mentioned in the Simplicity/Ease of Use section that I had started this project by developing with Ansible and then re-implementing in Salt, but as time progressed I started implementing in Salt while Ansible was running. By the time I got half-way through implementing in Ansible I had already finished implementing everything in Salt.


Community

There are a number of ways to rate a community. For open source projects I generally consider a few things:

  1. Participation

In terms of development participation, Salt has four times the number of merged pull requests (471 for Salt vs. 112 for Ansible) over a one-month period at the time of this writing. It also has three times the total number of commits. Salt is also much more diverse from the perspective of community contribution: Ansible is almost solely written by mpdehaan, and nearly all of Salt’s top 10 contributors have more commits than Ansible’s #2 committer. That said, Ansible has more stars and forks on GitHub, which may imply a larger user community.

Both Salt and Ansible have a very high level of participation. They are generally always in the running with each other for the most active GitHub project, so in either case you should feel assured the community is strong.

  2. Friendliness

Ansible has a somewhat bad reputation here. I’ve heard anecdotal stories of people being kicked out of the Ansible community. While originally researching Ansible I found some examples of rude behavior toward well-meaning contributors. I did get a “pull request welcome” response on a legitimate bug, which is an anti-pattern in the open source world. That said, the IRC channel was incredibly friendly, and all of the mailing list posts I read during this project were friendly as well.

Salt has an excellent reputation here. They thank users for bug reports and code. They are very receptive and open to feature requests. They respond quickly on the lists, email, twitter and IRC in a very friendly manner. The only complaint that I have here is that they are sometimes less rigorous than they should be when it comes to accepting code (I’d like to see more code review).

  3. Responsiveness

I opened 4 issues while working on the Ansible port. 3 were closed won’t fix and 1 was marked as a duplicate. Ansible’s issue reporting process is somewhat laborious. All issues must use a template, which requires a few clicks to get to and copy/paste. If you don’t use the template they won’t help you (and will auto-close the issue after a few days).

Of the issues marked won’t fix:

  1. user/group module slow: Not considered a bug that Ansible can do much about. Issue was closed with basically no discussion. I was welcomed to start a discussion on the mailing list about it. (For comparison: Salt checks all users, groups and ssh keys in roughly 1 second)
  2. Global ignore_errors: Feature request. Ansible was disinterested in the feature and the issue was closed without discussion.
  3. Content argument of copy module doesn’t add end of file character: The issue was closed won’t fix without discussion. When I linked to the POSIX spec showing why it was a bug the issue wasn’t reopened and I was told I could submit a patch. At this point I stopped submitting further bug reports.

Salt was incredibly responsive when it comes to issues. I opened 19 issues while working on the port. 3 of these issues weren’t actually bugs and I closed them on my own accord after discussion in the issues. 4 were documentation issues. Let’s take a look at the rest of the issues:

  1. pecl state missing argument: I submitted an issue with a pull request. It was merged and closed the same day.
  2. Stacktrace when fetching directories using the S3 module: I submitted an issue with a pull request. It was merged the same day and the issue was closed the next.
  3. grains_dir is not a valid configuration option: I submitted an issue with no pull request. I was thanked for the report and the issue was marked as Approved the same day. The bug was fixed and merged in 4 days later.
  4. Apache state should have enmod and dismod capability: I submitted an issue with a pull request. It was merged and closed the same day.
  5. The hold argument is broken for pkg.installed: I submitted an issue without a pull request. I got a response the same day. The bug was fixed and merged the next day.
  6. Sequential operation relatively impossible currently: I submitted an issue without a pull request. I then went into IRC and had a long discussion with the developers about how this could be fixed. The issue was with the use of watch/watch_in requisites and how it modifies the order of state runs. I proposed a new set of requisites that would work like Ansible’s handlers. The issue was marked Approved after the IRC conversation. Later that night the founder (Thomas Hatch) wrote and merged the fix and let me know about it via Twitter. The bug was closed the following day.
  7. Stacktrace with listen/listen_in when key is not valid: This bug was a followup to the listen/listen_in feature. It was fixed/merged and closed the same day.
  8. Stacktrace using new listen/listen_in feature: This bug was an additional followup to the listen/listen_in feature and was reported at the same time as the previous one. It was fixed/merged and closed the same day.
  9. pkgrepo should only run refresh_db once: This is a feature request to save me 30 seconds on occasional state runs. It’s still open at the time of this writing, but was marked as Approved and the discussion has a recommended solution.
  10. refresh=True shouldn’t run when package specifies version and it matches. This is a feature request to save me 30 seconds on occasional state runs. It was fixed and merged 24 days later, but the bug still shows open (it’s likely waiting for me to verify).
  11. Add an enforce option to the ssh_auth state: This is a feature request. It’s still open at the time of this writing, but it was approved the same day.
  12. Allow minion config options to be modified from salt-call: This is a feature request. It’s still open at the time of this writing, but it was approved the same day and a possible solution was listed in the discussion.

All of these bugs, except for the listen/listen_in feature could have easily been worked around, but I felt confident that if I submitted an issue the bug would get fixed, or I’d be given a reasonable workaround. When I submitted issues I was usually thanked for the issue submission and I got confirmation on whether or not my issue was approved to be fixed or not. When I submitted code I was always thanked and my code was almost always merged in the same day. Most of the issues I submitted were fixed within 24 hours, even a relatively major change like the listen/listen_in feature.

  4. Documentation

For new users Ansible’s documentation is much better. The organization of the docs and the brevity of the documentation make it very easy to get started. Salt’s documentation is poorly organized and is very dense, making it difficult to get started.

While implementing the port, I found the density of Salt’s docs to be immensely helpful and the brevity of Ansible’s docs to be infuriating. I spent much longer periods of time trying to figure out the subtleties of Ansible’s modules since they were relatively undocumented. Not a single Ansible module has its variable registration dictionary documented, which required me to write a debug task and run the playbook every time I needed to register a variable, which was annoyingly often.

Salt’s docs are unnecessarily broken up, though. There’s multiple sections on states. There’s multiple sections on global state arguments. There’s multiple sections on pillars. The list goes on. Many of these docs are overlapping, which makes searching for the right doc difficult. The split of execution modules and state modules (which I rather enjoy when doing salt development) make searching for modules more difficult when writing states.

I’m a harsh critic of documentation though, so for both Salt and Ansible, you should take this with a grain of salt (ha ha) and take a look at the docs yourself.


Conclusion

At this point both Salt and Ansible are viable and excellent options for replacing Puppet. As you may have guessed by now, I’m more in favor of Salt. I feel the language is more mature, it’s much faster and the community is friendlier and more responsive. If I couldn’t use Salt for a project, Ansible would be my second choice. Both Salt and Ansible are easier, faster, and more reliable than Puppet or Chef.

As you may have noticed earlier in this post, we had 10,000 lines of puppet code and reduced that to roughly 1,000 in both Salt and Ansible. That alone should speak highly of both.

After implementing the port in both Salt and Ansible, the Lyft DevOps team all agreed to go with Salt.

Comments

  • “Ansible’s issue reporting process is somewhat laborious. All issues must use a template, which requires a few clicks to get to and copy/paste. If you don’t use the template they won’t help you (and will auto-close the issue after a few days).”

    Laborious? A few clicks is laborious?? The point of a template is that it forces useful information to be given which is why people used issue trackers like BugZilla, Trac, etc.:



    Yet somehow bugs manage to be filed and resolved.

    Salt’s dev process is misleading. The turnaround time on a bug isn’t helpful. The listen/listen_in stacktrace was fixed with a two-line fix in a file that’s thousands of lines long, with few comments and lots of ugliness in it. It’s slightly amateurish too, like they’ve never written a state machine before. The other bug related to listen_in: again, no comments, and the bug is in a method that’s already too long, with four layers of loops.

    I’m glad you have the courage to link directly to the issues you opened letting the reader judge for themselves the dev process that Salt and Ansible have.

    The bugs that you reported for Salt are actual bugs, real bugs that are caused by low quality code and low quality documentation. The bugs you reported for Ansible are…not really bugs apparently? Did you have other issues with it that are closer to the bugs you found in Salt? Because if the same level of bugs were found in Ansible as in Salt, I’d be hesitant to use Ansible. But since I’ve seen the code of Salt, yuck, no thanks, I’ll stick to Ansible even if it means enduring a long no-change playbook run-time. Would rather have that kind of problem to deal with than the apparently dozens of little one-line fixes and patches that Salt needs.

    • I found roughly the same number of bugs in Ansible as I did in Salt. I simply stopped reporting the Ansible ones when my bugs were being closed without discussion.

      The majority of the Salt bugs were relatively minor. As I mentioned in the post, I reported them rather than working around them because I knew they’d be addressed, and all of them were. For some of them I requested that tests be written, and they were as well. I should note that I was running against the development version of Salt but the stable version of Ansible, and I still ran into roughly the same number of bugs.

      Thanks for your comment. Though it’s pretty aggressive in tone, I appreciate your input.

      • That’s useful to know, I’ll have to keep a closer eye on Ansible then.

        Sorry for the aggressive tone, I get a little touchy when it comes to code quality.

  • You said:

    > After implementing the port in both Salt and Ansible,
    > the Lyft DevOps team all agreed to go with Salt.

    I had a question… *all* agreed? Like, unanimously the team was like: “yeah, Salt is the way to go.”?


    • We had a vote and everyone gave Salt the thumbs up. For sure part of the reason for this is that we had already done all of our orchestration in Salt.

  • Timothy

    Hi Ryan,

    Thanks for a detailed analysis of both Salt and Ansible. I found your post extremely interesting.

    I wonder if the effort you went to in order to implement Salt and Ansible would be similar to that of a greenfield implementation of Puppet? Was this an option you had considered? From your post, I’m not sure whether you don’t like Puppet (and wanted to move to something else), or like Puppet but didn’t want to fix the implementation.


    • I’ve used Puppet for ages and think it’s a perfectly fine language (you’ll find a bunch of puppet posts on this blog). A complete rewrite from scratch would have likely decreased the line count considerably and maybe the execution time as well (I’m not going to mention just how long that was, but it was considerable). That said, I think a rewrite in Puppet would have taken me considerably longer. Puppet’s DSL has a lot of quirks and you have to be very explicit about ordering. Also, there were a number of things that would have been just as difficult and painful in the long run. For instance, Puppet doesn’t have native support for git, pip, virtualenv, npm, gems, pecl, apache modules, etc. etc.. All of these either require custom ruby, or implementations of these things in Puppet’s DSL, which is what we had. As time goes on Puppet seems to stay stale with its native features, while Salt and Ansible add more and more modules every release.

      Additionally, our team and quite a bit of our organization are familiar with Python and less so with Ruby, so we have somewhat of a bias there as well, which makes the thought of a Puppet rewrite painful.

      • XANi

        A lot of those have an “official version” (like vcsrepo or apache) maintained by Puppet Labs; they’re just not packaged with Puppet.

        And should they be? Usually those things are specific to a certain architecture: someone not using Apache would not want the Apache module included, and the same goes if they don’t manage repos via Puppet. Adding those would just add “bloat” to the main packages, and importing a module into your own Puppet repo is easy enough (with the bonus of upgrading when you want, not every time you install a new Puppet version).

        • It doesn’t add much bloat to either Ansible or Salt. Managing those modules via Puppet is an annoying thing in itself.

      • Lowe

        > Puppet doesn’t have native support for git, pip, virtualenv,
        > npm, gems, pecl, apache modules, etc. etc..

        The package resource has support for a bunch of these.

        • Hm. It does indeed. Have most of those been around for a while? The docs don’t indicate which version they were added in.

  • I won’t take up any space on your blog post, but I did take the time to make various comments on the above on Hacker News, which may avoid some repetition.

    • Yep! I’ve been replying there. Alas, HN won’t let me post any more comments.

  • Hi,

    We had similar requirements in our company and ended up building our own tool for building containers in Docker and shipping those. So far it’s working out really well, particularly in the “ease of learning” department.

    To take each of your requirements in turn wrt ShutIt:

    No masters.

    ShutIt builds containers for shipping, so there is no concept of a master.

    Code should be as simple as possible.

    What could be simpler than “pure bash”, wrapped in a transparent and simple python framework?

    No optimizations that would make the code read in an illogical order.

    ShutIt is “pure ordered”.

    Code must be split into two parts: base and service-specific, where each would reside in separate repositories. We want the base section of the code to cover configuration and services that would be deployed for every service (monitoring, alerting, logging, users, etc.) and we want the service-specific code to reside in the application repositories.

    These are shared infra, while custom modules can be cut and kept private.

    You can also build “meta-modules” which simply require other modules and do nothing else. These then form the base layer of our dev builds.

    The code must work for multiple environments (development, staging, production).

    ShutIt’s highly configurable, so you can code whatever you want wrt different environments.

    The code should read and run in sequential order.

    ShutIt demands sequential ordering.

    Any questions, please mail me.


  • Hey,

    I wrote a round-up about Salt and Ansible a while ago. It’s highly opinionated, but maybe an interesting read for some.


    • I’ve read yours in the past. It was helpful. Thanks!

  • Atul Atri

    Thanks for this detailed analysis.

    What are your thoughts on Chef? It seems more popular than Ansible or Salt.

    • Both Puppet and Chef are currently more popular than Salt or Ansible. They have both been around for a much longer period of time and have a very large ecosystem. I don’t have any really strong feelings on Chef, since I don’t have a lot of experience with it.

  • James Abley

    What’s a DevOps team? Do you mean web operations?

    DevOps is a cultural movement trying to break down silos and encourage a way of working across an entire organisation. [1] Having a DevOps team sounds very odd.


    • Lyft runs on the concept of “If you build it, you run it”, so the developers are also responsible for the operations of their applications. In general this works very well, but it’s still good to have a team that helps coordinate that work between teams and that writes tools and processes that strengthen that experience throughout the organization.

    • John Hogenmiller

      Actually, a DevOps team sounds very natural to me. There’s really no such thing as a DevOps engineer (though if you put that on your resume, recruiters will cream themselves in excitement). The big focus of DevOps is blending the sysadmin and development teams into one coherent whole.

      From the article you linked: “Suddenly the technical team starts trying to pull together as one. An ‘all hands on deck’ mentality emerges, with all technical people feeling empowered, and capable of helping in all areas. The traditionally problematic areas of deployment and maintenance once live become tractable – and the key battlegrounds of developers (‘the sysadmin built an unreliable platform’) versus sysadmins (‘the developers wrote unreliable code’) begins to transform into a cross-disciplinary approach to maximizing reliability in all areas.”

  • Jeroen Rosenberg

    Nice write up. I’m curious about your thoughts about Chef. You only mention it briefly in the last paragraph. In your opinion does it suffer from the same downsides as Puppet compared to Ansible and Salt? I’m a former Puppet user and just started getting my hands on Chef and I find it for instance more structured than Puppet. I’m curious about your experience.

    • I mostly left Chef out because I don’t have a lot of experience with it and we weren’t strongly considering it because we wanted something Python based. Sorry I can’t be more help there.

  • martin

    Hey Ryan, thank you very much for the detailed comparison; it was quite an entertaining read, perfect for those like me who are just getting started in DevOps.

  • Michael Fischer

    Hi Ryan,

    I’d contend the EOL bug in Ansible isn’t really a bug. Yes, text files should generally contain a trailing newline, but I think you made a false assumption that “copy” is specifically for creating text files.

    Assume for a moment that copy _did_ put a newline at the end, but the user intended for there not to be one (e.g., the input supplied was, say, a serialized C struct), and that broke an application that depended on the binary format of the file being correct. Then there’d be a different, and in my view “real” bug, in that the copy operation did not apply its argument literally.

    So I think the response to your bug was correct. I realize it’s not the 99% use case, but many other programs, as well as the standard C library, operate the way I described. C’s write(2) call doesn’t implicitly append a newline to text files before close(2) either.

    • It’s hard to compare Ansible’s copy->content function to the standard C library :). It’s unlikely that the content argument of copy is used for binary files often, since it’s a massive pain to manage binary files in YAML. The default behavior should work in a POSIX-compliant way for the most common use case, and there should be an argument to disable it.

      • David Karban

        I must strongly disagree here. I often copy .tar.gz, .rpm, and .deb files, and I really, really want a 1:1 copy of the file.

        I feel that the copy module should be neutral; if there is a problem with a trailing newline, it should be fixed in the source file.

        Just my 2 cents :).

        • The specific bug was referring to using the content argument of the copy module, which is either provided directly in the task or from a var. In either case, it’s coming from YAML, and it’s difficult to provide binaries directly in YAML (unless you base64 encode them). Seeing as it’s almost always text being created when that argument is used, it makes sense for it to create text files correctly by default, with an additional option to not add the trailing newline.
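
          For illustration, a minimal hypothetical task using copy’s content argument (the file path and string here are made up), where the trailing-newline behavior is what’s at stake:

          ```yaml
          # The literal string below is written to dest; whether Ansible
          # appends a trailing newline is the behavior under discussion.
          - name: write motd from inline content
            copy:
              content: "Welcome to this host"
              dest: /etc/motd
          ```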

          • David Karban

            Thanks for the clarification; it may make sense with the content parameter.

  • David Vestal

    Ryan, thank you for writing this up. I am brand new to Salt, coming from Puppet, and there were many things in this post that are helpful for my migration.

  • Will

    Did you use fireball with Ansible? Did it make a difference?

    • Nope. I was using ansible-playbook -c local, so it wasn’t using SSH at all. It would have been slower over SSH, fireball or not.
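
      As a sketch of what that masterless setup looks like (the playbook name and task are hypothetical, not from the actual port):

      ```yaml
      # Run on the node itself, no SSH involved:
      #   ansible-playbook -c local -i 'localhost,' site.yml
      - hosts: localhost
        connection: local
        tasks:
          - name: ensure ntp is installed
            apt:
              name: ntp
              state: present
      ```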

  • Pingback: The explosion of sysadmin configuration complexity | Smash Company

  • Marcus Wright

    As an employee of a startup, you should be acutely aware of how funding works with startups and fuels a business to awesomeness, yet your team chose a tool that hasn’t received any Series A funding whatsoever to fuel its own infrastructure.

    What could possibly go wrong with a decision as short-sighted as that?

    • Well, as it stands, SaltStack Inc. seems to be doing fairly well without funding. You should probably ask them about their motivations. I’m not very worried about the situation at this time.