Reloading grains and pillars during a SaltStack run

If you use the grain/state pattern a lot, or if you use external pillars, you’ve probably stumbled upon a limitation with grains and pillars.

During a Salt run, if you set a grain or update an external pillar, the change won’t be reflected in the grains and pillar dictionaries. The data has been updated at its source, but it hasn’t been reloaded into the in-memory data structures that Salt creates at the beginning of the run. From a performance point of view this is good, since reloading grains, and especially external pillars, is quite slow.

To address this limitation, I submitted a PR adding the global state arguments reload_grains and reload_pillar. These work similarly to reload_modules (and in fact, they imply reload_modules).

For example, if you’re using the etcd external pillar, the following will now work:

Ensure etcd key exists for host:
  module.run:
    - name: etcd.set
    - key: /myorg/servers/myhost
    - value: {{ grains['ip_interfaces']['eth0'][0] }}
    - profile: etcd_profile
    - reload_pillar: True

Ensure example file has pillar contents:
  file.managed:
    - name: /tmp/test.txt
    - contents_pillar: servers:myhost
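
The reload_grains argument works the same way when you set a grain mid-run. As a sketch (the grain name and the consuming state here are hypothetical):

Ensure role grain is set:
  grains.present:
    - name: role
    - value: webserver
    - reload_grains: True

Ensure motd reflects role grain:
  file.managed:
    - name: /etc/motd
    - contents_grains: role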

Note, though, that jinja in sls files is executed before the states are executed, so this will still not work:

Ensure etcd key exists for host:
  module.run:
    - name: etcd.set
    - key: /myorg/servers/myhost
    - value: {{ grains['ip_interfaces']['eth0'][0] }}
    - profile: etcd_profile
    - reload_pillar: True

Ensure example file has pillar contents:
  file.managed:
    - name: /tmp/test.txt
    - contents: {{ pillar.servers.myhost }}

Jinja used in template files, on the other hand, is rendered when the state that uses the template runs, so that’ll work without issue.
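
For example (a sketch; the source path is hypothetical), moving the pillar lookup into a template defers it until the state actually runs:

Ensure example file has pillar contents:
  file.managed:
    - name: /tmp/test.txt
    - source: salt://example/test.txt.jinja
    - template: jinja

Here salt://example/test.txt.jinja would simply contain {{ pillar['servers']['myhost'] }}.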

This change has been merged into Salt’s develop branch and will be included in the first stable release of 2015 (Lithium).

  • Moshe Suberri

    Ryan,

    Thank you for all your posts.

    Given that “jinja in sls files is executed before the states are executed”, what is the best way to load dynamic grains?

    For example:
    State one: create AWS EC2 instance A.
    State two: here we need instance A’s id – how do we get it?

    Thanks,
    Moshe

    • There’s not really a way to do this. For the boto_* salt modules we handle things like this automatically. For instance, when an autoscale group is created, we set the name tag. For future references to the autoscale group we find the autoscale group id by looking up the name tag. The idea being that in the sls code you just reference things by their name and stuff should automatically work. Of course, this isn’t perfectly handled in every module, but if it’s not implemented for something you need you should open a bug so that we can fix it.

      • Moshe Suberri

        Ryan,

        Thanks for your reply.

        After much searching I also came to the realization that Salt states cannot handle dynamic info. This is because all of the pre-processing (rendering the sls files to YAML) is done before any of the Salt states are applied.

        Probably the only way to use dynamic info is via Salt custom modules, which somewhat defeats the idea of state management.

        Ansible has a nice and easy way to get dynamic info and apply it to the next task/state when needed.

        I wonder why Salt does not provide on-the-fly rendering, that is, rendering a Jinja statement when it is needed and not at the start.

        Moshe

        • It’s due to differing designs between Salt and Ansible, and you’re right that you can’t take the output from one state and apply it to the next: Salt renders the jinja, then renders the YAML, then iterates through the states. There are pros and cons to this. A pro for Salt is that you can wrap a large number of states in a complex jinja loop and do really neat, complex things with very little code; Ansible doesn’t have the ability to do this. A con is, of course, what you’ve described: it’s difficult to take dynamic data and apply it between states.
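
          To illustrate that pro (a hypothetical sketch), a few lines of jinja in an sls file can expand into one state per item at render time:

          {% for username in ['alice', 'bob', 'carol'] %}
          Ensure user {{ username }} exists:
            user.present:
              - name: {{ username }}
          {% endfor %}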

          Of course, that feature is also somewhat of a con for Ansible, because Ansible relies on it like a crutch. Rather than combining complex actions inside single plays, it pushes the complexity down to the user, who has to take the (undocumented) outputs of plays and string them into jinja inputs for other tasks.

          For instance, Salt’s boto_asg state (autoscale groups in AWS) combines launch configuration, scaling policies, scaling alarms, and cloudwatch alarms. You write one (simplified) state and it wires everything together for you automatically. Additionally, it creates the resources by name so that they can be referenced elsewhere without relying on outputs. Here the burden of the complexity falls on the module writer/maintainer, making the experience better for the user.
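
          As a rough sketch of what that looks like (the values here are hypothetical and the parameter list is abbreviated; see the boto_asg docs for the full interface):

          Ensure autoscale group exists:
            boto_asg.present:
              - name: myapp-asg
              - launch_config_name: myapp-lc
              - launch_config:
                - image_id: ami-12345678
                - key_name: mykey
                - instance_type: m3.medium
              - availability_zones:
                - us-east-1a
              - min_size: 1
              - max_size: 3
              - scaling_policies:
                - name: ScaleUp
                  adjustment_type: ChangeInCapacity
                  scaling_adjustment: 1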

          Ansible’s implementation of launch configs/autoscale groups is very, very difficult to use effectively because you need to stitch the pieces together yourself, and I’m not sure it can even fully automate the process in its current implementation.