
For a long time now, I’ve recommended a distributed firewall solution and preached the benefits: increased security, distributed load, improved DDoS resistance, and system-by-system fine tuning. But these benefits only come if the system is being maintained appropriately, and that can be a pretty big challenge. I’ve been asked several times how I manage distributed firewall rules. In my experience, using an automation engine (in our case salt) and a firewall management utility (in our case ferm) has made the rules easy to manage, distribute, and most importantly understand.
The first challenge is around organization. The simpler your organization schema is, the less documentation you have to write and the easier it is for other people to pick it up. It also increases performance, because you have fewer rules for packets to be compared against before being accepted or rejected. Security, automation, monitoring, scripting: my mantra is layers, and that is definitely in play here. We have multiple layers of organization.
The first layer of organization is the IP numbering of the hosts. I assigned a block of IP addresses to each task, large enough for it to scale but small enough that it is not wasteful. Tasks include: web server, SIE Remote Access server, administrative machines, and customer boxes. Most tasks required few enough hosts to fit in a /29, but a few required a /28. Note that these are merely logical designations and not IP subnets, so you don’t lose 3 useful IPs out of a /29. This allows us to be more specifically secure, in that we can tie firewall rules to allow (or deny) types of servers access to specific systems without needing a rule for each server.
Our IP organization continues with the class of server: dev, qa, staging, or production. Each class gets a subsection of the task block. In most cases prod gets half, with qa, staging, and dev splitting the remainder. That means that for most things production is inside a /30 for the purposes of the firewall configuration. This allows for things like production to be locked down so that only the production automation engine can touch production systems, and only the right production systems can hit the production databases.
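To make that concrete, here is a hypothetical numbering plan for a single task; the addresses and the layout are invented for illustration, not our real allocation:

# Hypothetical plan for the "web" task (logical blocks, not routed subnets)
web:
  block: 10.0.0.0/29            # the whole task fits in a /29
  production: 10.0.0.0/30       # prod gets half, so most prod rules match a /30
  qa_staging_dev: 10.0.0.4/30   # qa, staging, and dev split the remainder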
The second layer of organization is in our salt automation. Salt has two built-in classification systems: pillars and grains. We use pillars since they are defined on the server side, which makes management a little simpler. We create pillars to track datacenter, rack number, and physical location, details that may not seem important for systems management until you have a remote hands technician plug a server into the wrong switch. We also use pillars to track the dev, qa, staging, and production status, as well as the task to which a server is assigned. Salt deploys the appropriate software and firewall rules in a uniform, centralized manner, which significantly reduces the manual labor and improves consistency. It simply doesn’t forget one of the rules.
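As a rough sketch of that classification (the key names here are hypothetical, not necessarily the exact pillar layout we use), the per-host pillar data might look something like this:

# Hypothetical classification pillar for one host
datacenter: dc1
rack: 12
class: production     # dev, qa, staging, or production
task: web             # which task block the host belongs to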
We continue the organization down to the naming of the files. Where it is possible to use custom configuration file names, we append a code indicating that the file is centrally managed; if a file is temporarily independently managed pending automation, that is also indicated in the file name. We also put a note at the top of the file as a gentle reminder that if things aren’t fixed the right way, they won’t stay fixed.
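For example, a managed file might carry a header along these lines (the exact wording is illustrative; the SM prefix on the file names later in this article is presumably the centrally managed code):

# SM: this file is centrally managed by salt.
# Local edits will be overwritten on the next salt run; fix it in the
# salt file root instead.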
Also important to organize are the personnel groups. This won’t necessarily be by role: you may have a group of support staff and developers that needs access to one group of servers but only that group, a QA team that only needs access to some services on that group of servers and several others, an operations team that needs access to all the servers, and a team of DBAs that only needs access to the database servers. You may even want to divide access up between production, quality assurance, staging, and development servers. Having your personnel divided up by role and task from a server access perspective allows you to simplify securing your servers from unnecessary internal threats as well.
The final layer is our monitoring system, where we use host groups to watch the services that are expected to be on each system to make sure everything is running as expected. The host group organization pretty closely matches the task organization in salt. The organization allows a smaller staff to watch more servers with more reliability, security and uptime.
Now that we have the baseline provided, we can get into actually implementing the distributed firewall. I’m going to assume for the sake of sanity that you have cleaned out all your previous firewall configurations, or you’re using a hardware firewall. If that’s not the case, I’d suggest clearing or disabling the old firewall at the same time as pulling in the new configuration. Also, start on non-production systems. You’re going to break things; let’s figure out what before we get to production.
A couple of notes about the salt configuration. Some of this may not make sense until you’ve read through the ferm configuration section. I would strongly recommend reading through all of this a couple of times and getting comfortable with it before you begin working with it. This is far from the only way to do it, so feel free to adjust things to your needs. I’m not going to go through the initial salt installation in this document, as it’s a bit outside the scope, and many people will use the concepts behind this with their current automation engine. But I will provide a couple of recommendations from what we’ve learned the hard way: use multiple masters and use git as a back end.
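Neither recommendation is covered in detail here, but as a minimal sketch (the host names and repository URL are placeholders), they boil down to something like this:

# /etc/salt/minion: point each minion at more than one master
master:
  - saltmaster1.dc1.mydomain.local
  - saltmaster2.dc2.mydomain.local

# /etc/salt/master: serve states out of git (keep roots if you still have local states)
fileserver_backend:
  - gitfs
  - roots
gitfs_remotes:
  - https://git.mydomain.local/ops/salt-states.git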
REMINDER: Do this on testing systems. Do not start with production. Murphy’s Law will make you regret it.
In your file root, you’ll create a folder for each of the applications you’ll install, with an init.sls and the files that you’ll be providing. Through the scope of this document we’ll be looking into a file root tree that looks like this:
top.sls
ferm/init.sls
ferm/ferm.conf
ferm/SM-base.conf
ferm/SM-services.conf
ferm/SM-internal-all.conf
ferm/SM-internal-web.conf
ferm/SM-internal-monitor.conf
ferm/SM-trusted-all.conf
ferm/SM-trusted-dba.conf
ferm/SM-trusted-ops.conf
The top.sls will contain the subdirectories to run:
top.sls:
base:
  '*':
    - ferm
And those point to the init.sls to run:
ferm/init.sls:
ferm:
  pkg:
    - installed
  service:
    - enabled
    - watch:
      - pkg: ferm
      - file: /etc/ferm/ferm.conf
      - file: /etc/ferm/conf.d/*

/etc/ferm/ferm.conf:
  file.managed:
    - source: salt://ferm/ferm.conf
    - user: root
    - group: root
    - mode: 644

/etc/ferm/conf.d:
  file.directory:
    - user: root
    - group: root
    - mode: 755
    - makedirs: True
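
# NOTE: the description below also places SM-base.conf in conf.d; a state
# along these lines (an assumption, mirroring the unconditional states
# around it) would do that:
/etc/ferm/conf.d/SM-base.conf:
  file.managed:
    - source: salt://ferm/SM-base.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d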

{% if salt['pillar.get']('services') %}
/etc/ferm/conf.d/SM-services.conf:
  file.managed:
    - source: salt://ferm/SM-services.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-services.conf:
  file.absent
{% endif %}

/etc/ferm/conf.d/SM-internal-all.conf:
  file.managed:
    - source: salt://ferm/SM-internal-all.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d

{% if salt['pillar.get']('services:internal-web') %}
/etc/ferm/conf.d/SM-internal-web.conf:
  file.managed:
    - source: salt://ferm/SM-internal-web.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
{% else %}
/etc/ferm/conf.d/SM-internal-web.conf:
  file.absent
{% endif %}

{% if salt['pillar.get']('services:internal-monitor') %}
/etc/ferm/conf.d/SM-internal-monitor.conf:
  file.managed:
    - source: salt://ferm/SM-internal-monitor.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
{% else %}
/etc/ferm/conf.d/SM-internal-monitor.conf:
  file.absent
{% endif %}

/etc/ferm/conf.d/SM-trusted-all.conf:
  file.managed:
    - source: salt://ferm/SM-trusted-all.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d

{% if salt['pillar.get']('services:trusted-dba') %}
/etc/ferm/conf.d/SM-trusted-dba.conf:
  file.managed:
    - source: salt://ferm/SM-trusted-dba.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-trusted-dba.conf:
  file.absent
{% endif %}

{% if salt['pillar.get']('services:trusted-ops') %}
/etc/ferm/conf.d/SM-trusted-ops.conf:
  file.managed:
    - source: salt://ferm/SM-trusted-ops.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - require:
      - file: /etc/ferm/conf.d
{% else %}
/etc/ferm/conf.d/SM-trusted-ops.conf:
  file.absent
{% endif %}
Breaking down the above, it installs ferm, then configures the service to restart if salt changes any configuration file or updates the package.
It puts in place ferm.conf, makes the folder /etc/ferm/conf.d, and then in that folder places SM-base.conf, SM-internal-all.conf and SM-trusted-all.conf. If there are any services at all, it puts in SM-services.conf, and then if any of those services should be in SM-trusted-ops.conf or SM-internal-web.conf it’ll build and place the correct file(s). There is more information on that process under the ferm configuration, later.
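Before rolling this out widely, it can help to do a dry run against a single test minion; the target name below is a placeholder, and test=True reports what would change without changing anything:

# Preview the changes on one test host
salt 'webqa1.dc1.mydomain.local' state.sls ferm test=True

# Apply for real once the preview looks right
salt 'webqa1.dc1.mydomain.local' state.sls ferm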
You’ll also end up in the pillar root, with a structure that looks like this:
top.sls
services/public/http
services/public/https
services/internal-all/http
services/internal-all/https
services/internal-web/postgres
services/internal-monitor/snmp
services/internal-monitor/postgres
services/trusted-all/icmp
services/trusted-dba/postgres
services/trusted-ops/postgres
services/trusted-ops/ssh
services/trusted-ops/snmp
Inside the top.sls is where you will define which systems get which pillars.
top.sls:
base:
  'web1.dc1.mydomain.local':
    - services/public/http
    - services/public/https
    - services/trusted-ops/snmp
    - services/internal-monitor/snmp
  'db1.dc1.mydomain.local':
    - services/internal-web/postgres
    - services/trusted-ops/postgres
    - services/trusted-ops/snmp
    - services/trusted-dba/postgres
    - services/internal-monitor/postgres
    - services/internal-monitor/snmp
Inside each of those is an init.sls which includes the pillar structure.
services/public/http/init.sls:
services:
  public:
    http:
      protocol: tcp
      port: 80
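The other service pillars follow the same pattern. For instance, the trusted-ops ssh pillar would presumably look like this (a sketch, assuming ssh on its standard TCP port):

services/trusted-ops/ssh/init.sls:
services:
  trusted-ops:
    ssh:
      protocol: tcp
      port: 22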
Once you have the structure completely built out, and your organization standard codified into salt, you’ll need to actually configure ferm to be able to verify that everything works. We’ll cover that in the next article.
Travis Hall is a System Administrator for Farsight Security, Inc.
Read the next part in this series: Distributed Firewall Configuration Part 2: Ferm Configuration with Salt