Thursday, 12 May 2016

Automated Install for OpenStack Kilo on Ubuntu Server

I've been messing around with OpenStack lately. There is the excellent DevStack system for building test deployments, but I wanted something that would survive a reboot, which meant needing a full OpenStack install. There are some great docs on OpenStack's website for installing Kilo on Ubuntu 14.04.

To automate things I've scripted that process, with a few tweaks, and made it available on GitHub:

https://github.com/unixhead/mattstack/.

It's meant for installing on a single host to play around with. There's no telemetry or orchestration module, and it just uses basic legacy nova networking.

How to use it!

  • Download and install the Ubuntu Server 14.04 (Trusty) image. It won't work on any other version! You don't need to specify any software components during the install, although SSH server might be handy.
  • Download the build-kilo.sh script from https://raw.githubusercontent.com/unixhead/mattstack/master/build-kilo.sh
  • Edit the script ("nano build-kilo.sh") and set the variables at the top; they have descriptive comments explaining what needs doing.
  • Run "chmod +x build-kilo.sh" to make it executable.
  • Run the script as root: "sudo su -" and then "./build-kilo.sh" (the full command sequence is recapped below).
  • Reboot at the end of the install and you should have a working OpenStack Kilo build.
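
For reference, the whole thing boils down to a handful of commands. Here's a rough sketch of one way through it, doing everything as root to keep the paths simple (adjust to taste):

sudo su -
wget https://raw.githubusercontent.com/unixhead/mattstack/master/build-kilo.sh
nano build-kilo.sh       # set the variables at the top before running
chmod +x build-kilo.sh
./build-kilo.sh
reboot                   # reboot once the script finishes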

If you want to rebuild then you're probably best off re-installing Ubuntu server and starting from scratch.

There are a few niggles with the original build process, such as HTTP 500 errors when trying to perform various operations, which I resolved by configuring Keystone not to use memcached. I also had some issues with QEMU because the nova-compute-qemu package wasn't installed and /etc/nova/nova.conf wasn't quite right; that matters when deploying OpenStack in something like VirtualBox without KVM support. The script should sort those problems out.
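
For the record, the QEMU part of the fix amounts to roughly the following; this is a sketch based on the Kilo install guide rather than a copy of the script, so check build-kilo.sh for exactly what it changes:

# install the QEMU flavour of the compute service
apt-get install -y nova-compute-qemu

# and tell libvirt to use plain QEMU instead of KVM, i.e. in
# /etc/nova/nova-compute.conf (or nova.conf):
#   [libvirt]
#   virt_type = qemu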



Saturday, 2 January 2016

Vagrant Lab for HAProxy

This article is about setting up a lab using Vagrant to play with the HAProxy load balancer.

If you want the TL;DR version, where you just copy/paste a few lines and the lab gets created, then this will do the job; it's explained in more detail below. It's great that with tools like VirtualBox and Vagrant such a lab can be set up so easily; this would have taken days to build prior to virtualization!

#install software
sudo apt-get install virtualbox vagrant git

#configure host-only subnet address in virtualbox
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.28.128.1 --netmask 255.255.255.0
VBoxManage dhcpserver modify --ifname vboxnet0 --ip 172.28.128.1 --netmask 255.255.255.0 --lowerip 172.28.128.100 --upperip 172.28.128.250

#install the lab files, for some reason the box fails to auto-download so install it manually, should be fixed in future
git clone https://github.com/unixhead/haproxy-basic-lab && cd haproxy-basic-lab 
vagrant box add hashicorp/precise32 https://vagrantcloud.com/hashicorp/boxes/precise32/versions/1.0.0/providers/virtualbox.box


#run the lab
vagrant up


Now there is a slight caveat in that I use Linux Mint, and the versions of Vagrant & VirtualBox in its repositories don't quite work together, so I had to manually install Vagrant from the website. But never let the truth get in the way of a good story.

My VirtualBox host-only network uses the range 172.28.128.0/24 and the network to be created is shown below: a simple load balancer in front of two web servers. The client is also the hypervisor hosting the VMs. It's very similar to the configuration used in the Vagrant tutorial. VirtualBox by default uses 192.168.0.0/24 for its host-only networks, but that overlaps with a few places I work so I had to change it.
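
If you want to confirm that the host-only interface and its DHCP server really did pick up the new range before running "vagrant up", these will show the current settings:

VBoxManage list hostonlyifs    # vboxnet0 should show 172.28.128.1 / 255.255.255.0
VBoxManage list dhcpservers    # should show the 172.28.128.100-250 pool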



The files needed are listed below; save them all to the same directory and run "vagrant up" in it. You can download them all in one go from GitHub with the command:

git clone https://github.com/unixhead/haproxy-basic-lab

Vagrantfile - The configuration for Vagrant itself, providing three VMs using the 32-bit Ubuntu (precise32) image: one HAProxy load balancer and two web servers.

Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
    v.cpus = 1
  end

  config.vm.define "lb" do |config|
    config.vm.hostname = "lb"
    config.vm.network "private_network", ip: "172.28.128.10"
    config.vm.provision :shell, path: "bootstrap-haproxy.sh"
  end

  config.vm.define "web1" do |config|
    config.vm.hostname = "web1"
    config.vm.network "private_network", ip: "172.28.128.11"
    config.vm.provision :shell, path: "bootstrap-apache.sh"
  end

  config.vm.define "web2" do |config|
    config.vm.hostname = "web2"
    config.vm.network "private_network", ip: "172.28.128.12"
    config.vm.provision :shell, path: "bootstrap-apache.sh"
  end

end
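
Once that's saved, the usual Vagrant commands apply, for example:

vagrant status          # list the three VMs and their state
vagrant ssh lb          # log into the load balancer (or web1 / web2)
vagrant provision lb    # re-run bootstrap-haproxy.sh after editing it
vagrant destroy -f      # tear the whole lab down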


bootstrap-apache.sh - This script runs on the webservers after Vagrant has built them. It installs Apache with PHP, then sets the web root to the current Vagrant project directory.

#!/usr/bin/env bash
# install Apache and PHP, then point the web root at the shared /vagrant folder

apt-get update
apt-get install -y apache2 php5

# replace the default /var/www with a symlink to the Vagrant project directory
if ! [ -L /var/www ]; then
  rm -rf /var/www
  ln -fs /vagrant /var/www
fi


bootstrap-haproxy.sh - This runs on the load balancer after build; it installs HAProxy and copies the provided configuration file into place.

#!/usr/bin/env bash
# install HAProxy (plus hatop for monitoring), drop in the config, enable and start it

apt-get update
apt-get install -y haproxy hatop
cp /vagrant/haproxy.cfg /etc/haproxy
echo "ENABLED=1" > /etc/default/haproxy
service haproxy start
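
If the load balancer doesn't seem to be answering, a quick way to check that HAProxy actually started on the lb VM is something like:

vagrant ssh lb -c "sudo service haproxy status"
vagrant ssh lb -c "sudo netstat -lntp | grep :80"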


haproxy.cfg - A basic HAProxy configuration that load balances port 80 across the two web servers in TCP mode, using round robin.

frontend http
  bind *:80
  mode tcp
  option tcplog

  default_backend web-backend

backend web-backend
  balance roundrobin
  mode tcp
  server web1 172.28.128.11:80 check
  server web2 172.28.128.12:80 check


index.php - A basic web index that shows which web server was accessed by printing the server's hostname. Both web servers serve the same file from the shared /vagrant folder.

<?php
echo gethostname() . "\n";
?>


To test it, simply run curl a few times against the IP address of the load balancer; the replies show that the web sessions are being balanced across both hosts:

matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
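
The same test as a quick one-liner, if you prefer (assuming a bash shell on the client):

for i in $(seq 1 6); do curl -s 172.28.128.10; done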


Tuesday, 10 February 2015

Cisco IOS TCL - Reset Interface if DHCP Fails

I've got some devices where DHCP doesn't always work properly for a number of reasons; running shut/no shut on the interface of the Cisco router seems to fix it. To automate that I've knocked up a TCL script.

The script itself:


#script to check if an interface has an IP address and reset it if not.
#copy to flash via TFTP or write it using the technique described here:
# http://sabotage-networks.blogspot.co.uk/2013/02/applying-ciscos-new-licenses-without.html
#
#schedule it to run every 60 minutes with kron:
#
# kron policy-list checkInterface
#  cli tclsh flash:/checkIP.tcl
#  exit
#
# kron occurrence checkInterface in 60 recurring
#  policy-list checkInterface
#


#set this to the name of the WAN interface
set interface "fa0/0"

#while DHCP has not assigned an address the interface shows
#"Internet address will be negotiated using DHCP", so a match on
#DHCP means there is no address present
set output [exec "show interface $interface | include Internet address"]
if {[regexp {DHCP} $output]} {
    #no ip address found, reset the interface
    puts "no ip address found, restarting interface $interface"
    ios_config "interface $interface" "shutdown"
    after 5000
    ios_config "interface $interface" "no shutdown"
}


Sunday, 17 February 2013

Applying Cisco's New Licenses Without Network Servers

Cisco have a new licensing method that involves installing an XML license on the end device. The license you buy is a code, but rather than just entering that onto the device you have to go to Cisco.com and associate the code with a device using its part and serial numbers. They then generate an XML license file which you are supposed to download and install on the device.

The supported ways of doing this are FTP, SCP, TFTP and HTTP, which is no use if you're in a locked-down environment, especially when working remotely. Luckily most of their boxes now include TCL, so you can fudge it and paste the license straight on via a terminal. Thanks muchly to http://www.internetworkpro.org/wiki/Edit_files_using_TCL. The license looks something like this:
<?xml header stuff?>
<SomeStuff></SomeStuff>
<SomeMoreStuff></SomeMoreStuff>
<license><![CDATA[loadsandloadsofrandomgarbagethatisfartoolongtofitonasinglelineofxmlsoyouneedtosplitthislineupintoseveraldifferentvariablesthisfieldcontainsabinaryloadofgunkpretendingitsopenandinteroperablebecauseitsxml]]></license>
<EvenMoreStuff></EvenMoreStuff>

The trick is to use TCL. You create a TCL variable containing the license file data and write it to a text file on the flash memory. The problem is that the license file contains a blob in a CDATA section that is longer than the maximum tclsh line length. One way around this is to break the file down into multiple lines, store each as a separate variable and write the lot into the same file without any line returns in between.
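
If you have the license file on a Linux or Mac machine first, something like fold can pre-chop that long CDATA line into paste-sized chunks for you. This is just a convenience sketch; the chunk size is arbitrary as long as each piece fits comfortably on a terminal line:

fold -w 200 keyfile.lic > keyfile-chunks.txt   # split into 200-character lines
cat -n keyfile-chunks.txt                      # numbered chunks to paste as line1, line2, ...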

Several things to watch out for:
  • Don't put extra carriage returns in as the license will not be valid
  • Don't paste carriage returns as it seems to mess up the TCL shell - paste one line at a time then hit enter
  • The +> prompt means TCL is still accepting input for the same variable.
The commands are:
Router#tclsh
Router(tcl)#set file [open "flash:keyfile.lic" w+]

Router(tcl)#set line1 {
+><?xml header stuff?>
+><SomeStuff></SomeStuff>
+><SomeMoreStuff></SomeMoreStuff>
+><license><![CDATA[loadsandloadsofrandomgarbagethatisfartoolongtofitonasinglelineofxml}
Router(tcl)#set line2 {<soyouneedtosplitthislineupintoseveraldifferentvariablesthisfieldcontainsabinary>}
Router(tcl)#set line3 {<loadofgunkpretendingitsopenandinteroperablebecauseitsxml]]></license>
<EvenMoreStuff></EvenMoreStuff>}


Router(tcl)#puts -nonewline $file $line1
Router(tcl)#puts -nonewline $file $line2
Router(tcl)#puts -nonewline $file $line3
Router(tcl)#close $file

Router(tcl)#tclquit
Router#license install flash:keyfile.lic


Now you have the license in place, so all is great! Except that you need to reboot to activate it; hope you weren't running any live services on this box!


Wednesday, 31 October 2012

Bluecoat Terminal Length

The Bluecoat SGOS equivalent of term len 0 is line-vty in config mode:

Bluecoat#(config) line-vty
Bluecoat#(config line-vty) length ?
(0 for no pausing)
Bluecoat#(config line-vty) length 0


Handy for grabbing the text config.


Wednesday, 27 June 2012

EIGRP RTP Unicast Fallback

Having just started studying for ROUTE to refresh a variety of Cisco exams, I had a look at EIGRP and got far too involved in RTP. Probably all you need to know for the ROUTE exam is that, in the context of EIGRP, RTP is the Reliable Transport Protocol and it's used to ensure reliable delivery of updates. But to dig a little deeper...

RTP (not the same as the Real-time Transport Protocol used for voice) can use both unicast and multicast. On an Ethernet LAN, routing information is transmitted via multicast (unless the neighbours are statically defined with "neighbor" statements). RTP adds its own reliability with sequence numbers and a state table on the updating router that keeps track of the acknowledgements from neighbours. If any neighbour does not respond, RTP falls back to trying unicast transmission.

To test it I built this flat network with 3 EIGRP neighbours on the same subnet:




The addresses used are:
  • R1 - 192.168.0.1
  • R2 - 192.168.0.2
  • R3 - 192.168.0.3


In this scenario the routing update messages are sent using multicast. For removing a route the "query" type message is used. I'll shut down a loopback interface on R3, which will generate an EIGRP query. The packet dump below shows the query being multicast (to 224.0.0.10). The two neighbours then acknowledge it via unicast.



On R3 you see the following in the output of "debug eigrp packet"; it shows the process:
  1. R3 sending the query messages
  2. Both R1 and R2 responding via unicast.
*Mar 1 00:20:35.567: EIGRP: Enqueueing QUERY on FastEthernet0/0 iidbQ un/rely 0/1 serno 27-27
*Mar 1 00:20:35.571: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.1 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 27-27
*Mar 1 00:20:35.571: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.2 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 27-27

*Mar 1 00:20:35.575: EIGRP: Sending QUERY on FastEthernet0/0
*Mar 1 00:20:35.575: AS 1, Flags 0x0, Seq 34/0 idbQ 0/0 iidbQ un/rely 0/0 serno 27-27

*Mar 1 00:20:35.587: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:20:35.591: AS 1, Flags 0x0, Seq 0/34 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

*Mar 1 00:20:35.603: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:20:35.607: AS 1, Flags 0x0, Seq 0/34 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

[snip]


Now to test the unicast fallback by blocking the multicast updates reaching R1. This is quite tricky, as these multicast packets are required to keep the EIGRP neighbour relationships up. My cunning plan is to increase the EIGRP hold timer so that I can drop multicast without disrupting the neighbours.

Because the hold timer is not a local setting but an "advertised value", I actually need to set it on R2 & R3 which will then tell R1 not to worry if it doesn't see any hellos for the next ten minutes.

R3(config)#int f0/0
R3(config-if)#ip hold-time eigrp 1 600

R1(config)#int f0/0
R1(config-if)#ip access-group DENYEIGRP in

R1#show ip access-list DENYEIGRP
Extended IP access list DENYEIGRP
10 deny ip any host 224.0.0.10 log (4 matches)
20 permit ip any any (27 matches)


At this point the EIGRP neighbours are all up and R1 is not expecting to hear from R3 for the next ten minutes. Now I'll shut down the interface on R3 again to generate an EIGRP query message. The Wireshark output is shown below:



The debug output on R3 is shown below; you can see the phases of the RTP mechanism:
  1. R3 multicasts a query to 224.0.0.10.
  2. R2 responds via unicast (you can see the text peerQ un/rely 0/1 indicating a unicast message).
  3. R1 does not respond as it has not seen the message.
  4. Meanwhile R2 completes the exchange with R3 via unicast.
  5. R3 then realises there is an outstanding response from R1 and retries the query via unicast, showing:
    *Mar 1 00:09:13.995: EIGRP: Sending QUERY on FastEthernet0/0 nbr 192.168.0.1, retry 1, RTO 3321
    *Mar 1 00:09:13.995: AS 1, Flags 0x0, Seq 18/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1 serno 18-18
  6. R1 now responds via unicast and the exchange completes as normal, as can be seen in the full debug output below.


The complete debug output is:
R3(config-if)#shut
R3(config-if)#
*Mar 1 00:09:11.775: EIGRP: Enqueueing QUERY on FastEthernet0/0 iidbQ un/rely 0/1 serno 18-18
*Mar 1 00:09:11.779: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.1 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 18-18
*Mar 1 00:09:11.779: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.2 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 18-18

*Mar 1 00:09:11.783: EIGRP: Sending QUERY on FastEthernet0/0
*Mar 1 00:09:11.783: AS 1, Flags 0x0, Seq 18/0 idbQ 0/0 iidbQ un/rely 0/0 serno 18-18

*Mar 1 00:09:11.799: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.799: AS 1, Flags 0x0, Seq 0/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

*Mar 1 00:09:11.811: EIGRP: Received REPLY on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.811: AS 1, Flags 0x0, Seq 17/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/0

*Mar 1 00:09:11.815: EIGRP: Enqueueing ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.815: Ack seq 17 iidbQ un/rely 0/0 peerQ un/rely 1/0
*Mar 1 00:09:11.819: EIGRP: Sending ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.819: AS 1, Flags 0x0, Seq 0/17 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 1/0

*Mar 1 00:09:13.995: EIGRP: Sending QUERY on FastEthernet0/0 nbr 192.168.0.1, retry 1, RTO 3321
*Mar 1 00:09:13.995: AS 1, Flags 0x0, Seq 18/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1 serno 18-18

*Mar 1 00:09:14.019: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.019: AS 1, Flags 0x0, Seq 0/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

*Mar 1 00:09:14.027: EIGRP: Received REPLY on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.031: AS 1, Flags 0x0, Seq 19/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/0

*Mar 1 00:09:14.031: EIGRP: Enqueueing ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.031: Ack seq 19 iidbQ un/rely 0/0 peerQ un/rely 1/0

*Mar 1 00:09:14.035: EIGRP: Sending ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.035: AS 1, Flags 0x0, Seq 0/19 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 1/0




Wednesday, 2 May 2012

WCCP Redirect ACLs and Masks


This article is about WCCP redirect ACLs, masks and how they relate to TCAM usage on Cisco switches. It's quite important to understand if you're doing WCCP, as you want to ensure forwarding is done in hardware, which runs at wire speed, rather than in software, which will cause considerable CPU usage and potentially performance issues.

This is quite a difficult subject to explain and I'm not entirely sure I've done it that well here. The info has been pulled in from a variety of sources and I'm also not entirely sure it's all correct, as a few bits don't quite tie together. It's been rewritten several times and I'm still not entirely happy; however, here is the info, warts and all.

A very basic recap on WCCP.

WCCP redirects traffic as it passes through a switch or router, which acts as the WCCP server. This is for things like proxy servers or WAN optimisers, which are the WCCP clients. The server has redirect ACLs that specify what traffic will be sent to the WCCP client device. On Cisco routers/switches these ACLs are not stateful, so you have to capture traffic flows going in both directions.
This diagram shows the example setup: the proxy server is the WCCP client and the switch is the WCCP server.

For example to grab HTTP from LAN to WAN you would have:

ip wccp 100 redirect-acl HTTP_LAN_TO_WAN
ip access-list extended HTTP_LAN_TO_WAN
 permit tcp 10.0.0.0 0.0.0.255 any eq 80


Then to grab the return traffic:

ip wccp 200 redirect-acl HTTP_WAN_TO_LAN
ip access-list extended HTTP_WAN_TO_LAN
 permit tcp any eq 80 10.0.0.0 0.0.0.255


These are then configured on interfaces to capture traffic. Cisco supports both ingress and egress redirection; however, the switches will only do hardware forwarding for ingress WCCP.

int gi0/1
 description LAN
 ip wccp 100 redirect in


int gi0/2
 description WAN
 ip wccp 200 redirect in


With this configuration alone nothing will happen. You need to add a WCCP client and tell it to communicate with the WCCP server, i.e. you need to configure WCCP on the proxy server and tell it to talk to the switch; it will then start chatting and negotiate certain parameters.


Once that is done the WCCP server will start redirecting traffic. If no WCCP clients are active then the server will just forward traffic as per normal. If one or more WCCP clients are active then the switch will load balance traffic between them depending on configuration.

TCAM

TCAM stands for Ternary Content-Addressable Memory. It is used for hardware forwarding: packets are compared against the TCAM table, which tells the switch or router how to forward them. If an entry isn't found in the TCAM table then the packet must be software routed, which is not desirable.
Ternary means there are three possible values: 0, 1 and "don't care". "Don't care" is represented by an x in this doc, and just to really confuse you I'll use 0x to prefix any hex values.


Redirect ACLs

The redirect ACL tells the WCCP server what traffic to intercept and divert to the WCCP client(s); any traffic not matching is passed as normal. As this ACL is likely to be applied on an interface seeing a lot of traffic (probably all transit traffic for the network), you want it to run entirely in hardware and be as fast as possible. There are a few rules with regards to TCAM usage and this ACL:

  • Each permit statement in the ACL requires at least one TCAM entry.
  • Each load balanced path requires at least one TCAM entry.
  • The number of load balanced paths can be calculated with the number of bits in the assignment mask (see below).
  • In all cases except where the mask is 0x0, deny statements use fewer TCAM entries than permit statements. This is because traffic that isn't being redirected doesn't need to be load balanced, so a deny statement only takes up one TCAM entry.




The Mask.

Cisco switches only support hardware forwarding for WCCP mask-based assignment, not the hash method. The mask is a hexadecimal value that does several things:
  • Restricts how many WCCP clients can be part of the load balancing arrangement.
  • Affects the TCAM usage by WCCP.
  • Defines what IP addresses are load balanced to which WCCP clients.
The last point is critical for WAN optimisers, which work in pairs by forming shared byte caches. If you have a farm of WAN optimisers, e.g. in a data centre, then you want remote sites to always speak to the same member of the farm to avoid having to maintain multiple shared caches, i.e. all hosts within a certain subnet will be load balanced to the same WCCP client.

The masks are written in hex and usually configured in hex, but I found that to make sense of them it's best to convert them to binary. Also convert the IP addresses to binary and think of the mask being applied bit by bit.

The mask is configured on the WCCP client, e.g. the WAN Optimiser or Proxy Server, which then informs the server during the WCCP session negotiation.

How the Switch Uses the Mask

On Cisco switches all combinations of bits in the mask are used to create different values. These values are applied to the IP addresses in the redirect ACL to create entries in the forwarding table (TCAM), which the switch uses to forward the traffic to WCCP clients.

For example a mask of 0x10 in binary is represented as 0001 0000.
The available combinations of bits are: 0000 0000 and 0001 0000
Because it's a ternary mask we are only interested in the specific bit set in the original mask; the other zeros all become "don't care" values, so the two masks the forwarding table will end up using are:
xxx1 xxxx
xxx0 xxxx

These masks would be applied against the ACL and used to create the TCAM forwarding paths for the traffic: any IP address with a 1 in that bit position would match the first mask and any with a 0 there would match the second. If you configure this mask and then look at the WCCP session, it appears as below:

Switch#show ip wccp 100 detail
WCCP Client information:
   WCCP Client ID:    192.168.0.100
   Protocol Version:   2.0
   State:     Usable
   Redirection:    L2
   Packet Return:    L2
   Packets Redirected:   0
   Connect Time:     00:01:07
   Assignment:     MASK

   Mask SrcAddr DstAddr SrcPort DstPort
   ---- ------- ------- ------- -------
   0000: 0x00000010 0x00000000 0x0000 0x0000

   Value SrcAddr DstAddr SrcPort DstPort CE-IP
   ----- ------- ------- ------- ------- -----
   0000: 0x00000000 0x00000000 0x0000 0x0000 0xC0A80064 (192.168.0.100)
   0001: 0x00000010 0x00000000 0x0000 0x0000 0xC0A80064 (192.168.0.100)



Mask Load Balancing.

The number of bits in the mask determines how many devices you can load balance traffic between.

A mask of 0x0 does not allow load balancing and will give a single path only (useful if you only have a single WCCP client and are short on TCAM).

A mask of 0x1 allows for load balancing between 2 WCCP clients only. The binary mask values can be either 0 or 1.

A mask of 0x3 allows for up to 4 WCCP clients as it's made up from 2 bits and available mask values can be 00, 01, 10 and 11.


The default mask is 0x1741. In binary that is 0001 0111 0100 0001, so 6 bits are used, which allows for 2^6 = 64 WCCP clients. I have no idea why Cisco chose this number; even their own WAAS troubleshooting guide recommends you don't use it. Because it has a bit in the least-significant (rightmost) position it will alternate the load balancing on every single IP address, and if they wanted a 6-bit mask then 0011 1111 (0x3F) would make more sense. Possibly there is some mathematical significance I haven't seen, possibly it works best with their hardware, possibly it was made up at random, or possibly this entire article is wrong and I don't understand the masks at all. Take your pick.

Mask IP Address Matching.

The simplest example is a mask of 0x1. As these masks are used against IP addresses the value would be converted to 32 bits and represented in TCAM as xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxx1


The ACL is "permit tcp 10.0.0.0 0.0.0.255 any eq http". With a mask of 0x1 it would produce two forwarding paths which will match IP traffic as follows:
Path 1 - mask 0 - 10.0.0.0, 10.0.0.2, 10.0.0.4, 10.0.0.6, 10.0.0.8...
Path 2 - mask 1 - 10.0.0.1, 10.0.0.3, 10.0.0.5, 10.0.0.7...


With two WCCP clients, one would receive HTTP traffic from hosts with IPs matching path 1 and the second client would receive traffic from hosts matching path 2.


With a mask of 0x10, the binary value is 0001 0000 (in TCAM this would be xxxxxxxx.xxxxxxxx.xxxxxxxx.xxx1xxxx). This will load balance clients in "chunks" of 16 addresses.

If the ACL is "permit tcp 10.0.0.0 0.0.0.255 any eq http" then this will create two groups and distribute traffic as follows:
Mask 0 - 10.0.0.0 to 10.0.0.15, 10.0.0.32 to 10.0.0.47, 10.0.0.64 to 10.0.0.79.....
Mask 1 - 10.0.0.16 to 10.0.0.31, 10.0.0.48 to 10.0.0.63, 10.0.0.80 to 10.0.0.95......

If there were two active WCCP clients then you'd see the traffic distributed as above.
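
The bucket an address lands in is just the masked bits of its last octet, so you can convince yourself of the chunks-of-16 behaviour with a quick bit of shell (a sketch of the same AND operation the switch performs):

# mask 0x10: bucket = bit 4 (value 16) of the last octet
for last in 3 18 40 50 200; do
  echo "10.0.0.$last -> bucket $(( (last & 0x10) >> 4 ))"
done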

For large solutions you may want to distribute using a different pattern. With WAN optimisers you want the same pair of optimisers to speak to each other rather than have a branch office device communicate with several different data centre devices, as it would either have to maintain several different copies of the byte caching tables or you'd end up with the optimiser cluster forwarding traffic internally to keep the same device peerings. For a system where you wanted to split subnets on a /21 boundary and have up to four WCCP clients in your farm, you'd choose a mask as follows:

/21 in binary would look like this: 11111111.11111111.11111xxx.xxxxxxxx
The WCCP mask could be xxxxxxxx.xxxxxxxx.xxxx1xxx.xxxxxxxx
But this would only allow for 2 possible mask values, so only 2 WCCP clients.
To allow 4 WCCP clients you need 2 bits, the mask becomes xxxxxxxx.xxxxxxxx.xxx11xxx.xxxxxxxx
In hex that is shown as 0x1800

This would give four available combinations/masks of:
xxxxxxxx.xxxxxxxx.xxx00xxx.xxxxxxxx shortened to mask 00
xxxxxxxx.xxxxxxxx.xxx01xxx.xxxxxxxx shortened to mask 01
xxxxxxxx.xxxxxxxx.xxx10xxx.xxxxxxxx shortened to mask 10
xxxxxxxx.xxxxxxxx.xxx11xxx.xxxxxxxx shortened to mask 11


With an ACL of "permit tcp 10.0.0.0 0.255.255.255 any eq http" the split would be:

00 - 10.0.0.0 - 10.0.7.255, 10.0.32.0 - 10.0.39.255...
01 - 10.0.8.0 - 10.0.15.255, 10.0.40.0 - 10.0.47.255...
10 - 10.0.16.0 - 10.0.23.255, 10.0.48.0 - 10.0.55.255...
11 - 10.0.24.0 - 10.0.31.255, 10.0.56.0 - 10.0.63.255...
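
Same idea again, this time pulling the two bucket bits out of the third octet; buckets 0 to 3 here correspond to the 00/01/10/11 masks above (again just a sketch of the AND the switch does):

# mask 0x1800: bucket = bits 3-4 of the third octet
for third in 0 8 16 24 32 40 56; do
  echo "10.0.$third.0/21 -> bucket $(( (third & 0x18) >> 3 ))"
done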


Weighted Load Balancing.

I've said above that each available forwarding path equals a single WCCP client. This is not necessarily the case, as you can weight WCCP clients. Consider a case with two WAN optimisers (A and B) of different specifications, where A can process twice as much traffic as B. In that event you would want at least three forwarding paths, two of them pointing to A and one to B, so your mask needs to use at least two bits. This is another area I'm a bit hazy on; I would think you'd need a multiple of three to make this work properly, but the number of forwarding paths is always a power of two...

TCAM Usage

The equation for working out TCAM usage is defined as:

2^<mask bits> * <acl entries>

To include all entries in the ACL the full definition would be:
( 2^<mask bits> * <number of permit statements in redirect acl> ) +   <number of deny statements in redirect ACL>


On a 3750 the WCCP TCAM is shared with the ACL TCAM. You have to run the routing SDM template to support WCCP, and it supports a maximum of 1024 entries. So if using the default mask (6 bits, giving 64 forwarding paths per permit entry) you can have up to 1024/64 = 16 entries in the redirect ACL and no other ACLs on the switch.

If you wanted to capture HTTP and HTTPS traffic, split the network into blocks of 16 addresses and allow for 8 forwarding paths in your farm, then your ACL may be:

permit tcp 10.0.0.0 0.255.255.255 any eq 80
permit tcp 10.0.0.0 0.255.255.255 any eq 443

And your mask may be 0x70 (xxxxxxxx.xxxxxxxx.xxxxxxxx.x111xxxx)


This would result in 8 forwarding paths, each being created for both of the ACL entries, a total usage of 16 TCAM entries. If you are matching traffic in both directions it's a total of 32 TCAM entries used for WCCP.
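
Plugging that example into the formula above, just to show the arithmetic (shell used for want of a better calculator):

mask_bits=3   # 0x70 has three bits set, giving 2^3 = 8 forwarding paths
permits=2     # the two permit lines (ports 80 and 443)
denies=0
echo $(( (2 ** mask_bits) * permits + denies ))   # 16 TCAM entries per direction, 32 for both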


