In the first and second parts of this series, we got to grips with SaltStack Config: understanding the terminology, getting the minions talking to the master, and even installing NGINX on an Ubuntu server without even logging on to it. But now we need to make it a bit more interesting, and ensure we are deploying OUR web app to the server and keeping it consistent. Here we go….

So in the previous post I left you with the default NGINX holding page, deployed using SaltStack Config – which is all well and good – but honestly, that’s not what we really want. We want our content up there.

It’s a good idea at this point to create some file structure that separates the NGINX install files from the files specific to our web pages. So I’ve created a ‘webconfig’ directory, with the state file in its root and our content in a ‘files’ subdirectory.
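To make that concrete, here’s the layout I’ll be assuming for the rest of this post (the individual files are coming up shortly):

webconfig/
├── webconfig.sls        # the state file
└── files/
    ├── index.html.jinja # our web content
    └── webconfig.jinja  # the nginx site config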

First up I’m going to dust off a few old HTML books and write some rather basic code. My HTML skills are rather rusty, so please don’t judge me too harshly!

Saltenv: base

Path: /webconfig/files/index.html.jinja

<!-- 
Name: index.html.jinja
Description: Basic HTML for a simple web page
-->

{% set host = salt['grains.get']('host') -%}
<!DOCTYPE html>
<html>
<head>
<title>Custom HTML provided by {{ host }}</title>
</head>
<body>
<h1>nginx is installed successfully using SaltStack Config</h1>
<p>The host information below was pulled from a salt grain. This is being served from:</p>
 
<h2>{{ host }}</h2>
<p>Configured by Sam Akroyd @ www.samakroyd.com</p>
</body>
</html>

Before we carry on, we’ve just come across a new file type: Jinja. Jinja templates allow us to construct a config dynamically based on information that has come from Salt, such as grains. In the example above, we’ve used the grains.get function to pull the name of the host.
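If you’d like to see what a grain returns before templating against it, you can query it straight from the salt master. A quick sketch, assuming your minion’s ID is ‘web01’:

salt 'web01' grains.get host

This should print the minion’s hostname, which is exactly the value Jinja substitutes into the page above.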

We’ll use Jinja in the default site configuration too, to build in a little bit of logic. Let’s create a new file:

Saltenv: base

Path: /webconfig/files/webconfig.jinja

{%- set interface = 'ens160' if salt['grains.get']('env') == 'Development' else 'eth0' -%}
{%- set addr = salt['network.interface_ip'](interface) -%}
 
server {
    listen {{ addr }}:80;
 
    root /var/www/sam;
    index index.html index.htm;
 
    server_name {{ addr }};
 
    location / {
        try_files $uri $uri/ =404;
    }
}

This is pretty clever. The first line creates a variable called ‘interface’ by checking whether the ‘env’ grain is set to ‘Development’. If it is, the site is exposed directly on ens160 – perfect, because it’s likely we don’t have a load balancer within Dev. If it’s not Development, it’s prod, and access to the site will be via a load balancer, which passes traffic to the server on eth0.
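One thing to be aware of: ‘env’ is not a grain that exists out of the box, so this logic only works if you’ve set it on the minion yourself. As a rough sketch, you could set it from the master like this (minion ID ‘web01’ assumed):

salt 'web01' grains.setval env Development

grains.setval writes the value into the minion’s grains file, so it persists across restarts.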

The second line then looks up the IP address of the chosen interface, and that address is passed into the server block, both as the address to listen on over port 80 (plain HTTP traffic) and as the server_name.
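You can sanity-check what network.interface_ip will return before rendering the template. A quick check from the master, again assuming the ‘web01’ minion ID:

salt 'web01' network.interface_ip eth0

The address this prints is what will appear in both the listen directive and the server_name.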

Next is the state file to push all these out to the minion and restart the service:

# Name: webconfig.sls
# Description: State file to deploy the jinja templates

# Copy the html files to the minion
deploy_index_html:
  file.managed:
    - name: /var/www/sam/index.html
    - source: salt://webconfig/files/index.html.jinja
    - template: jinja
    - makedirs: True

# Copy config files to the minion
deploy_config:
  file.managed:
    - name: /etc/nginx/sites-enabled/sam
    - source: salt://webconfig/files/webconfig.jinja
    - template: jinja
    - makedirs: True

# Delete default sites-enabled
delete_default_config:
  file.absent:
    - name: /etc/nginx/sites-enabled/default

# Restart the nginx service
restart_nginx:
  service.running:
    - name: nginx
    - watch:
      - file: deploy_config

Nothing majorly surprising here. We’re copying the content and config files to the minion, deleting the default site, and restarting the nginx service. The watch requisite on restart_nginx means the service is only restarted when the config file actually changes. All relatively standard.
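If you want to check what this state would do without actually changing anything, salt’s test mode is handy. A sketch from the command line, assuming the state file is saved as webconfig/webconfig.sls in the base saltenv:

salt 'web01' state.apply webconfig.webconfig test=True

test=True reports the changes each state would make without applying them, which is a nice safety net before running the job for real.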

Getting it live was just as easy as before. We need to create another job, which I’ve called ‘nginx config’, and the settings are reasonably self-explanatory.

Once this job has run against your minion, you should be able to revisit the URL you visited earlier and see your own content appear…
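If you’d rather check from a terminal than a browser, a quick curl does the same job (substitute your minion’s address):

curl http://<minion-ip>/

You should see the HTML we templated above, with the minion’s hostname substituted into both the title and the h2.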

The next step is to put some further intelligence into this using beacons and reactors to enable self-healing infrastructure….