Thursday, December 10, 2009

Test builds of upcoming Firewall Builder v4.0 available

This version brings a lot of new features and improvements in the GUI and compilers. The most important ones are support for high availability configurations (clusters), a redesign of the GUI, implementation of an "undo" function in the GUI and the "immediate compile" feature.

Discussion is taking place on the fwbuilder-discussion mailing list (subscribe to the list here).

New packages are available as Firewall Builder v3.1. This version number is temporary; the code will be released as Firewall Builder v4.0 when it is ready. I do not want to imply that this is a public beta of v4.0 just yet, although it is very close.

I would like to know your opinion on the new GUI before I go full public beta.

Please take a look at the new version. It has many new features and overall I feel it has matured a lot. Come on, we now have undo :-)

Packages are available in the usual place:

http://www.fwbuilder.org/nightly_builds/fwbuilder-3.1/

As always, grab the latest build for testing.

Quick summary of new features in v3.1 (to be released as v4.0):

- undo for all operations with objects and rules

- use the main menu item View/Undo stack to open a window that shows the undo stack. Clicking a row in this window executes all commands up to and including that row. The highlighted row corresponds to the last command that has been executed.

- The object editor picks up and saves changes into the object when you hit Return in a text input field or move keyboard focus away from the field (i.e. click on another one). This works almost identically to the "automatic save changes" mode of v3.0.x, except that in v3.0 the change was saved into the object only when you opened a different object in the editor.

- "Immediate compile". Just highlight a rule in a policy or NAT rule set and hit the 'x' key (or use the context menu item). This compiles that one rule and shows the generated script or configuration lines in the bottom panel of the GUI. This is great for quickly checking what is being generated for a rule you are working on.

- The editor panel, object tree and undo stack are now dockable windows. You can insert them into the main window (this is the default) or detach them and move them around on the screen. In the detached state they can overlap the main window, which should help when you use the application on a small screen (laptop).

- The tree, rules and the editor are not tightly synchronized anymore. Before, a single click in the tree would open the object in the editor. This caused problems if you wanted to populate a group of objects and needed to switch between object libraries or open many tree branches: it tended to switch the object shown in the editor even when you did not want or need to. In the new version this does not happen. The group opened in the editor stays there, and you can navigate the tree and click in it freely until you find the object you want, then just drag it into the group.

- Support for HA configurations. This includes a new object type, "Cluster", that encapsulates the abstraction of the cluster, its interfaces and rules. The following failover protocols are supported: heartbeat, OpenAIS and VRRP on Linux; CARP on BSD; and the PIX's own protocol (I guess it does not have a special name). State synchronization protocols are supported as well.

- Support for these HA protocols means the program can automatically add rules to permit packets that carry these protocols between firewalls, and can configure VRRP interfaces with their own addresses and so on. I tried to explain this in more detail in the Release Notes, which are shown when you start the program.

- On Linux we can now generate a script to configure VLANs, bonding interfaces and bridges, update all of these dynamically, and also dynamically update IP addresses of interfaces. In the older versions, including v3.0, the generated script removed all secondary addresses from interfaces and then added them back. The new version only removes addresses that have been removed in the GUI and adds new ones (see the sketch after this list).

- The generated Linux/iptables script has a standard structure with command line options "start", "stop" and a few others.

- Generated scripts are assembled from fragments that we call "configlets". You can override the configlets that come with the package and use your own to modify the generated script. This means you can change the generated script without having to modify C++ code and rebuild the application.

- Configlets use a very simple macro language that supports variable expansion and an "if" construct for conditionals.

- There has been a change in the generated script for rules that use dynamic interfaces. I implemented a change suggested by one of the users, who noticed that dynamic IPv6 addresses were not handled properly in v3.0.x and suggested a fix. The behavior is now as follows: 1) if you use a dynamic interface in a rule, the program generates a shell function that reads all IP addresses of this interface and uses them in the rule (before, it would only read and use the first address); 2) it does the same for IPv6 addresses if the rule belongs to an IPv6 rule set. A sketch of the idea follows below.
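
To illustrate the idea, here is a rough sketch only; the interface name, port and chain are made-up examples and this is not the code fwbuilder actually generates (for IPv6 the same would be done with "ip -6"):

# read all IPv4 addresses of a dynamic interface and expand the rule
# into one iptables command per address
getaddr() {
    ip -4 addr show dev "$1" | awk '/inet /{split($2, a, "/"); print a[1]}'
}

for addr in $(getaddr eth0)
do
    iptables -A INPUT -d "$addr" -p tcp --dport 22 -j ACCEPT
done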


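And here is the sketch referenced in the interface address item above: a rough illustration of updating only the addresses that changed instead of flushing everything. The interface name and addresses are made-up examples, not what the generated script really contains.

#!/bin/sh
# Rough illustration of the incremental address update idea.
DESIRED="192.0.2.10/24 192.0.2.11/24"    # addresses defined in the GUI (example values)
CURRENT=$(ip -4 addr show dev eth1 | awk '/inet /{print $2}' | tr '\n' ' ')

for a in $CURRENT
do
    case " $DESIRED " in
        *" $a "*) ;;                      # still wanted, keep it
        *) ip addr del "$a" dev eth1 ;;   # removed in the GUI, delete it
    esac
done

for a in $DESIRED
do
    case " $CURRENT " in
        *" $a "*) ;;                      # already configured
        *) ip addr add "$a" dev eth1 ;;   # new in the GUI, add it
    esac
done
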
Thank you
Vadim

Friday, September 18, 2009

Firewall Builder v3.0.7 Released

This release fixes a security issue with temporary file handling in the generated iptables script that affects only Linux systems where Firewall Builder is used to generate static routing configuration. It also significantly improves the performance of the batch compile operation and fixes a few other minor problems. All users are encouraged to upgrade.

Tuesday, August 18, 2009

Firewall Builder v3.0.6 released

This is a bug-fix release. It comes with GUI improvements that fix problems with printing large rule sets, and with additional optimization of the generated iptables and PF configurations.

Wednesday, July 15, 2009

Full set of Fedora rpms

We now have a full set of Fedora Core 9, 10 and 11 rpms for both 32 and 64 bit architectures. RPMs will be distributed from the test builds site and the rpm repositories. Instructions on how to configure your system to use our repositories are here:
http://www.fwbuilder.org/docs/firewall_builder_packages.html


Test builds are here: http://www.fwbuilder.org/nightly_builds/fwbuilder-3.0

Wednesday, July 8, 2009

My talk on 10th Libre Software Meeting

I gave a talk about Firewall Builder at the 10th Libre Software Meeting in Nantes, France. Slides (pdf) are available on the conference web site here.

Thursday, June 18, 2009

New Users Guide is now available for download

360 pages of detailed documentation on all aspects of the program: screenshots, examples of policy and NAT rules, examples of generated iptables, pf and Cisco configurations, and more. 12 MB PDF. Download it here.

Wednesday, June 17, 2009

Firewall Builder 3.0.5 released

This is a bug-fix release that improves program stability. It is recommended for production use and everybody is encouraged to upgrade. We now offer deb and rpm repositories; the "stable" repositories now host packages v3.0.5-b1076. This page explains how to set up apt and yum to use our repositories.

Monday, June 15, 2009

New HOWTO: "Using IP Service Object in Firewall Builder "

This HOWTO demonstrates how the IP Service object can be used to match the IP protocol by number, IP options (lsrr, ssrr, timestamp and others), as well as TOS or DSCP codes. Examples for iptables, PF and IOS access lists conclude the HOWTO.

Thursday, June 4, 2009

New HOWTO: Using Built-in Policy Importer

Did you know that you can import an existing iptables or Cisco router configuration into fwbuilder? This HOWTO explains how to do it.

Monday, May 25, 2009

New HOWTO: Using Addressable Objects in Firewall Builder

This HOWTO explains how objects that translate into IP addresses or groups of addresses can be configured and used in rules. This includes IPv4 and IPv6 addresses, IPv4 and IPv6 networks, physical addresses, address ranges, host objects and groups of addressable objects. The HOWTO includes screenshots and lots of examples of rules and the corresponding generated configuration for iptables, PF, Cisco IOS access lists and PIX.

Tuesday, May 19, 2009

New HOWTO: Using Advanced Object Types in Firewall Builder

This HOWTO explains how Address Table, DNS Name and User Service objects work when used in rules of IPv4 and IPv6 policies for iptables and PF firewalls.

Monday, May 18, 2009

rpm and deb repositories are now available for stable and testing fwbuilder packages

We now have repositories for rpm and deb packages; all packages are signed with a GPG key (key PACKAGE-GPG-KEY-fwbuilder.asc, id 0xEAEE08FE). Two separate repositories are maintained for each package type: "stable" and "testing". Stable serves packages that have been officially released, while testing serves nightly builds. Instructions on how to set up yum and apt to access the repositories can be found here.

Tuesday, May 12, 2009

Building Ubuntu .deb packages on Amazon AWS

I just got my virtual build farm working. It uses Amazon AWS and currently consists of 6 Ubuntu AMIs: Hardy, Intrepid and Jaunty, each in 32 and 64 bit. Starting with Fwbuilder v3.0.5 build 926, I'll be building Ubuntu .deb packages using these machines.

A bit of background: until now, I've been running builds using virtual machines in VMware Server 2.0.0 on Ubuntu Hardy 64bit. I still have that server, but it has reached capacity because, besides the build virtual machines, I also use it for all sorts of testing and experiments. I have FreeBSD, OpenBSD, CentOS, a couple of Fedora machines, a Vyatta virtual appliance and a few others running on it. They do not run all at once, but sometimes I have 6 or 7 virtual machines running, which is a bit too much. The server has 4 GB of RAM and a 2.3 GHz Intel Core 2 Duo CPU, which is pretty good, but not enough if I start several 64bit virtual machines. Now that my scripts work and I can run builds on AWS, I am going to use this VMware server mostly as the virtual "lab".

General Idea

The whole process is controlled by a few scripts that I run on my development machine. I am on a Mac, but these are just shell scripts (maybe with a few bash-specific features) and will work on Linux just the same.

First, I identified several AMIs that suit my needs. For now these are three Ubuntu versions, each in 32 and 64 bit architecture. The AMIs are just basic minimal installs with no desktop environment. I launch them using the AWS command line tools and provide a script that runs at boot time. They call this parameterized launch; see the detailed explanation here. This script installs missing packages and does other things on the machine to make it ready for use. My build scripts then wait for the machine to come up and start sshd, then log in, check out the source code from svn and do the build. In the end they upload the generated packages to the nightly builds site and shut down the virtual machine.

I have one 4 GB AWS volume that I attach to the build machine when it starts up. I use this volume to store the apt cache so that package installation does not consume network bandwidth and works faster. I also use ccache to speed up builds and keep the ccache repository on this volume as well, so that it persists between builds.


Virtual build farm setup

To configure and control the virtual machines, I use both the online AWS Management Console and the command line tools. The online console is a convenient way to check status and start or stop machines manually, but the actual build is automated and uses the command line tools.

First of all, I had to download and configure the AWS command line tools. They are offered for download here:

AWS API Tools

Follow the documentation to set up the certificate and private key and make these tools work. You'll need to configure several environment variables: EC2_HOME, EC2_CERT, EC2_PRIVATE_KEY and JAVA_HOME. My scripts expect these variables to be configured.
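
For reference, a typical environment setup might look like the following (all paths are placeholders rather than values from my setup; adjust them to wherever the tools, certificate and key are installed):

export JAVA_HOME=/usr/lib/jvm/java-6-sun          # or wherever your JRE lives
export EC2_HOME=$HOME/ec2-api-tools
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-PLACEHOLDER.pem
export EC2_CERT=$HOME/.ec2/cert-PLACEHOLDER.pem
export PATH=$PATH:$EC2_HOME/bin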

Directory structure of my build environment looks like this:

src/fwb3
src/fwb3/aws
src/fwb3/tools
src/fwb3/source


The libfwbuilder and fwbuilder modules are checked out into src/fwb3/source. A bunch of Python scripts in src/fwb3/tools orchestrate the build process and do all platform-dependent things, so that I can use the same scripts on Linux, Mac and Windows. The directory src/fwb3/aws is for the AWS scripts and configuration files. I have the following scripts in the aws directory:

aws/ami_list
aws/build-all-ami.sh
aws/setup-ami-ubuntu.sh
aws/start_ami.sh
aws/stop_ami.sh


The "start" and "stop" scripts have an obvious purpose. The script setup-ami-ubuntu.sh is the one used as a parameter when starting an AMI; it is copied over to the machine and runs on it when it boots. The script build-all-ami.sh is a wrapper that starts all machines one by one, performs the builds and shuts them down. The file ami_list is a configuration file that lists the AMIs I am using.

Here is what the ami_list file looks like:


ami01 ami-005db969 m1.large setup-ami-ubuntu.sh Hardy 64bit
ami02 ami-ef48af86 m1.small setup-ami-ubuntu.sh Hardy 32bit


The first column is an alias I use internally to refer to a particular AMI; then come the AMI ID, the instance type, the name of the setup script (I have only one at this time, but when I add Fedora machines I expect to have another) and finally a comment.

The script start_ami.sh takes one parameter, the AMI alias (from the first column in ami_list), and uses ec2-run-instances to start the instance. Here is what this script looks like:



#!/bin/sh
#
# This script starts AMI, waits for it to come up and saves its DNS name
# in the machine name mapping file in aws directory
#
# EC2 environment variables must be set up for this script to work.
# We could automatically find EC2_HOME, but there is no way to know
# where certificate and key are located.
#
# Usage:
#
# start_ami.sh machine_name
#
# Where machine_name is our internal name such as ami01
# Mapping of the machine name to AMI ID is done in the file aws/ami_list


AMI_LIST="aws/ami_list"
AWS_LOG="aws.log"
MACHINE_NAME=$1
MACHINE_DNS_FILE="aws/$MACHINE_NAME"
VOLUME1_ID="VOLUME-ID"
VOLUME_ZONE="us-east-1a"

if test -z "$EC2_HOME"
then
echo "Set up EC2_HOME EC2_PRIVATE_KEY EC2_CERT environment variables"
exit 1
fi

if test -z "$MACHINE_NAME"
then
echo "Usage: start_ami.sh machine_name"
echo "Machine name is defined in file aws/ami_list"
exit 1
fi

if test -d "aws"
then
    if test -f $AMI_LIST
    then
        set $(grep $MACHINE_NAME $AMI_LIST)
        AMI=$2
        TYPE=$3
        SETUP_SCRIPT=$4
        COMMENT="$5 $6 $7 $8"
    else
        echo "AMI list file $AMI_LIST not found"
        exit 1
    fi
else
    echo "Run this script from the top of build environment directories"
    exit 1
fi

cat /dev/null > $MACHINE_DNS_FILE

echo "Starting AMI $MACHINE_NAME $AMI $COMMENT"

cat aws/$SETUP_SCRIPT | sed "s/@MACHINE@/$MACHINE_NAME/" > /tmp/ami-setup.sh

# Note: launch instance in the same availability zone where our volume is.
$EC2_HOME/bin/ec2-run-instances -k ec2key -z $VOLUME_ZONE -t $TYPE -f /tmp/ami-setup.sh $AMI > $AWS_LOG 2>&1

# output looks like this:
#
# RESERVATION r-0757c76e 424466753135 default
# INSTANCE i-56ff8f3f ami-005db969 pending ec2key 0 m1.large 2009-05-11T01:46:34+0000 us-east-1c aki-b51cf9dc ari-b31cf9da

grep INSTANCE $AWS_LOG > /dev/null || {
    cat $AWS_LOG
    echo
    echo "Instance failed to start, aborting"
    exit 1
}

set $(grep INSTANCE $AWS_LOG)
INSTANCE=$2

echo "Instance $INSTANCE started"

# when instance is running, the output of ec2-describe-instances is like this:
#
# RESERVATION r-0757c76e 424466753135 default
# INSTANCE i-56ff8f3f ami-005db969 ec2-67-202-12-64.compute-1.amazonaws.com domU-12-31-35-00-2C-D2.z-2.compute-1.internal running ec2key 0 m1.large 2009-05-11T01:46:34+0000 us-east-1c aki-b51cf9dc ari-b31cf9da

CNTR=""
while :;
do
    S=$($EC2_HOME/bin/ec2-describe-instances $INSTANCE | grep $INSTANCE)
    echo $S | grep running >/dev/null && {
        set $S
        MACHINE_DNS=$4
        break
    }
    sleep 10
    CNTR="${CNTR}O"
    if test "$CNTR" = "OOOOOOOOOOOO"
    then
        echo "Instance failed to start after 2 min timeout"
        exit 1
    fi
done

echo "Instance $INSTANCE is running"

echo "Attaching volume $VOLUME1_ID to instance $INSTANCE as /dev/sdf"
$EC2_HOME/bin/ec2-attach-volume $VOLUME1_ID -i $INSTANCE -d /dev/sdf

# Remove existing host key for this machine
ssh-keygen -R $MACHINE_DNS

# Wait for ssh to come up and read host key

TIMEOUT="O"
while :;
do
    echo "Waiting for ssh access..."
    if test "$TIMEOUT" = "OOOOOOOOOOOO"
    then
        echo "Timeout waiting for ssh access"
        exit 1
    fi
    sleep 20
    ssh -AX -o StrictHostKeyChecking=no root@$MACHINE_DNS 'uname -a' && break
    TIMEOUT="${TIMEOUT}O"
done

echo $MACHINE_DNS > $MACHINE_DNS_FILE
exit 0




This script starts by checking the argument and setting up global variables. Note that the volume ID I am using is hard-coded at the beginning. It is also important to know the name of the availability zone this volume is in, because AWS does not let you attach a volume to an instance running in a different zone. I therefore have to start my instances in the same zone as my volume.
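
If you are not sure which zone a volume is in, the command line tools can tell you (the volume ID below is just a placeholder):

$EC2_HOME/bin/ec2-describe-volumes vol-00000000
# the availability zone, e.g. us-east-1a, appears in the VOLUME line of the output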

The script runs ec2-run-instances to start the instance, then uses ec2-describe-instances to determine when it starts running. There is a timeout in this wait, so it won't get stuck there forever. After the instance has come up and started running, the script attaches the volume and tries to log in via ssh. Sshd comes up a little later, so the script has to wait in another loop for that. Once sshd is up, the script stores the DNS name of the new instance in the file aws/$MACHINE_NAME (e.g. aws/ami01) so that it can be used by other scripts to access this machine.

Note how it removes the ssh host key before trying ssh for the first time. This is because AWS assigns IP addresses, and therefore DNS names, to instances dynamically, and you may get the same address and name as a machine you already had in the past, even though its ssh host key will have changed. To avoid conflicts, I remove the stored host key.


Perhaps the most interesting part is the setup script that is used for the parameterized launch and runs on the machine itself. This script makes it possible to take a generic AMI and turn it into a build server (or anything else) automatically. Here is this script for the Ubuntu machines:




#!/bin/sh

# This script sets up AWS instance running Ubuntu
#

MACHINE_ALIAS=@MACHINE@
SVN_SERVER=""
SVN_USER="vadim"
NIGHTLY_BUILDS_SERVER=""
EXT_VOLUME_DEV="/dev/sdf"
EXT_VOLUME_PART="${EXT_VOLUME_DEV}1"

exec 3>&1
exec 1> /root/ami_setup.log
exec 2>&1

KNOWN_HOSTS="SSH HOST KEY HERE"

cd /root

cat <<-EOF > /root/wait.sh
TIMEOUT="O"
while :;
do
test -f /root/machine_ready && break
if test "\$TIMEOUT" = "OOOOOOOOOOOO"
then
echo "Machine setup takes over 2 min, aborting"
exit 1
fi
TIMEOUT="\${TIMEOUT}O"
sleep 10
done
echo "Successful machine setup, can continue"
EOF
chmod +x /root/wait.sh

echo $KNOWN_HOSTS > /root/.ssh/known_hosts

echo "Running ssh to pick up ssh host keys"
# Note that since this is a boot-time setup script and ssh agent
# is not running when it is executed, these two ssh commands will in
# fact fail to log in. But they pick up host keys anyway.
ssh -o StrictHostKeyChecking=no ${SVN_USER}@${SVN_SERVER} 'uname -n'
ssh -o StrictHostKeyChecking=no ${NIGHTLY_BUILDS_SERVER} 'uname -n'

# wait for the volume to attach
while :;
do
    e2fsck -y $EXT_VOLUME_PART > /dev/null && break
    sleep 5
done

mkdir -p /data
mount $EXT_VOLUME_PART /data

# Set up ccache dir
mkdir -p /data/$MACHINE_ALIAS/.ccache
ln -s /data/$MACHINE_ALIAS/.ccache /var/tmp/.ccache
ln -s /data/$MACHINE_ALIAS/.ccache /root/.ccache

# Set up directory for apt cache
mkdir -p /data/$MACHINE_ALIAS/cache/apt
mkdir -p /data/$MACHINE_ALIAS/cache/apt/archives
mkdir -p /data/$MACHINE_ALIAS/cache/apt/archives/partial
mkdir -p /data/$MACHINE_ALIAS/cache/debconf/
mkdir -p /data/$MACHINE_ALIAS/cache/ldconfig/
mkdir -p /data/$MACHINE_ALIAS/cache/man
chown man /data/$MACHINE_ALIAS/cache/man
mv /var/cache /var/cache.bak
ln -s /data/$MACHINE_ALIAS/cache /var/cache

echo "Installing packages"

aptitude update

aptitude -y install g++ \
libqt4-dev libqt4-gui libqt4-core qt4-dev-tools \
libsnmp-dev \
subversion python-svn \
make libtool autoconf libxml2-dev libxslt1-dev python-paramiko \
fakeroot checkinstall ccache

mkdir -p src

# Set host name to machine alias so that our build scripts can
# recognize it by this name. In particular this will be the name
# scripts will use to find per-host override files under
# fwb3/machines/

hostname $MACHINE_ALIAS

touch /root/machine_ready




Note that the machine name in this script is defined using a macro, @MACHINE@. The start_ami.sh script replaces it with the machine alias before the script is passed to ec2-run-instances.

The first thing this script does is create another script, /root/wait.sh, which we use later to determine whether the AMI has completed the setup process.

Then it connects to the svn and nightly builds servers using ssh to pick up their host keys. For the actual build process to be fully unattended, the machine must already have these host keys; otherwise it would stop during svn checkout and package upload, asking the operator whether the key should be accepted. From a security standpoint, it would be better to copy these keys into the body of the script and transfer them that way. I am going to have to try this later.
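
One possible way to do that would be to collect the keys once on a trusted machine with ssh-keyscan and paste the output into the KNOWN_HOSTS variable of the setup script (the host names below are placeholders; the real ones are blanked out in the script above):

ssh-keyscan -t rsa svn.example.org nightlybuilds.example.org > collected_keys
# review collected_keys, then embed its contents in the KNOWN_HOSTS variable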

After that, the script tries to mount the external volume. It is possible that this script will run before the volume is attached, so it first runs e2fsck in a loop to check whether the volume is available.

Once the volume is mounted as /data, the script creates some directories on it. Note that the directories are created in a tree that starts with the machine alias, so that the same volume can be used with different virtual build machines. The script creates a cache directory with subdirectories used by apt, man and a few others. Keeping the apt cache persistent helps minimize network bandwidth usage, since we download and install lots of packages every time we start a virtual build machine. I also keep the ccache directory on this volume to speed up repetitive builds.

Once this is done, the script runs "aptitude update" to download package spec files and then installs the bunch of dependency packages I need to actually build fwbuilder.

In the end, it sets the hostname to the machine alias and touches the file /root/machine_ready, which serves as a flag indicating that the setup process is complete.

This script is a good place to do other operations that prepare the machine; for example, you can set up an svn tunnel configuration if access to your svn repository requires it, or maybe create some directories on the file system.
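
For example, a hypothetical addition to the setup script that defines an svn+ssh tunnel for the root user could look like this (the key path and options are made up for illustration, not taken from my setup):

mkdir -p /root/.subversion
cat >> /root/.subversion/config <<'EOF'
[tunnels]
ssh = ssh -i /root/.ssh/build_key -o StrictHostKeyChecking=no
EOF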

Finally, here is the fragment of the script build-all-ami.sh that performs the actual build:




build_on_ami() {
    machine_name=$1
    build_script=$2
    MACHINE=$(< aws/$machine_name)
    ssh -AXt root@$MACHINE /root/wait.sh && \
        ssh -AXt root@$MACHINE "cd src; svn co $SVN_URL fwb3" && \
        scp $CLIENT_FILE root@${MACHINE}:src/fwb3 && \
        ssh -AXt root@$MACHINE "cd src/fwb3 && ./tools/${build_script} $UPLOAD"
}

{
for a in $MACHINE_LIST
do
    aws/start_ami.sh $a && build_on_ami $a build-deb.py
    aws/stop_ami.sh $a
done




Note how the function build_on_ami reads the machine's DNS name from the file aws/$MACHINE_NAME created by start_ami.sh. It then logs in to the machine and runs the script /root/wait.sh, which waits for the setup process to complete. After that, it checks out the build environment and performs the build.

Generally, I am quite happy with this system. It is not too complicated and is easy to extend. To add Fedora systems, I should only need to write a setup script and probably make minor changes to the build-all-ami.sh script.

Amazon AWS machines are not very fast. /proc/cpuinfo shows quite a bit of nominal horsepower, but the compile process seems to be slower than I would expect at that speed. "Large" type machines (64 bit) are more expensive ($0.40 per hour, compared to $0.10 per hour for the "small" 32 bit machines), so I suggest doing all debugging of the scripts on 32bit machines. It looks like my build on 6 machines costs me about $1.50 (with three large and three small instances, that works out to 3 x $0.40 + 3 x $0.10 = $1.50 for roughly an hour of build time on each), which is pretty good.

Sunday, March 29, 2009

Firewall Builder v3.0.4 released

I am pleased to announce the release of Firewall Builder v3.0.4. This is a significant bugfix release that includes several important improvements, as well as fixes for the bugs reported during the three months since v3.0.3 was released. All users are encouraged to upgrade to v3.0.4. Among other things, I would emphasize the following fixes and improvements:

  • Main menu item "File/Open recent" has been added.

  • Rule action icons have been changed to make them recognizable for red-green color-blind users.

  • IPv6 addresses of firewall interfaces can now be discovered via SNMP. SNMP discovery also works on Windows.

  • Generation of static routing commands is now supported for Cisco IOS and PIX platforms.

  • CustomService object can now specify protocol and address family

  • Rule sets can be IPv4-only, IPv6-only or combined. In the latter case the program intelligently chooses which objects from the rules it should use to generate the firewall configuration, and produces configs for both address families from the same rule set.

  • Built-in policy installer can work over IPv6

  • Built-in policy installer recognizes sudo password prompt. There is no need to configure password-less sudo rights for the firewall management account anymore.


Complete Release Notes v3.0.4

Monday, March 9, 2009

Using Built-in Revision Control in Firewall Builder

New HOWTO: Using Built-in Revision Control in Firewall Builder, published as part of the Firewall Builder CookBook. See the HOWTO here

UbuntuLinux Help published "Getting Started with Firewall Builder"

UbuntuLinuxHelp just published my HOWTO "Getting Started with Firewall Builder", here is direct link. Thanks!

Sunday, February 22, 2009

Really brief introduction to Firewall Builder

Here is an Introduction to Firewall Builder for the impatient, showing how one can create a functional firewall policy to protect a desktop machine and activate it in just 13 slides. The demo spends more slides showing how to compile and install the firewall policy than how it was created :-)

Saturday, February 21, 2009

Updated HOWTO on built-in policy installer

This HOWTO has been updated and extended to reflect features available in the latest versions of Firewall Builder. You can find updated document here.

Monday, February 16, 2009

How to block IP addresses from any country

New Firewall Builder CookBook recipe: How to block IP addresses from any country.
It uses the geolocation API provided by http://blogama.org, following up on and expanding the author's HOWTO "Blocking IP address of any country with iptables" found at http://blogama.org/node/62. In addition to showing how the iptables script demonstrated in the original HOWTO can be generated with Firewall Builder using Address Table objects, I also demonstrate how the same set of objects can be used to produce a configuration for PF.

Monday, February 2, 2009

Packages for Fedora Core 10 and Ubuntu Intrepid i386

Two new virtual machines came online: Fedora Core 10 and Ubuntu Intrepid (i386). I will build Firewall Builder packages on these machines from now on. The Fedora Core 9 machine has been retired, so there will be no more rpms for FC9.

Here is the latest set of operating systems and architectures I build binary packages for:

Ubuntu Hardy i386
Ubuntu Hardy amd64
Ubuntu Intrepid i386
Ubuntu Intrepid amd64
Fedora Core 10 i386
CentOS 5.2 i386


Friday, January 2, 2009

Support for static routing commands for PIX and IOS

Support for static routing commands for PIX ("route <interface> <destination> <gw>") has recently been contributed to the project by Steven Mestdagh <steven at openbsd.org>. Thank you Steven!

This is done in a way similar to the routing support for Linux: you just add rules to the "Routing" rule set object. I then extended his code to add support for the "ip route" commands for IOS. In the case of IOS there is no "Interface" column in the routing rules, but you can put either an object representing the gateway or the router's interface object in the "gateway" column; in the latter case fwbuilder generates the command "ip route <destination> <interface_name>".

Firewall Builder now supports generation of static routing commands for three platforms: Linux, PIX and IOS. This is available in v3.0.4 build 732 and later; you can download packages from the nightly builds site at http://www.fwbuilder.org/nightly_builds/fwbuilder-3.0/

Please test, your feedback is very welcome.