Wednesday, 27 June 2012

EIGRP RTP Unicast Fallback

Having just started studying for ROUTE to refresh a variety of Cisco certifications, I had a look at EIGRP and got far too involved in RTP. Probably all you need to know for the ROUTE exam is that, in the context of EIGRP, it's the Reliable Transport Protocol and it's used to ensure reliable delivery of updates. But to dig a little deeper....

RTP (not the same as the Real-time Transport Protocol) can use both unicast and multicast. On an Ethernet LAN, routing information is transmitted via multicast (unless the neighbours are defined as unicast ones with neighbour statements). RTP adds its own reliability with sequence numbers and a state table on the updating router which keeps track of the acknowledgements from neighbours. If any neighbour does not respond, RTP falls back to retrying via unicast.
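
If it helps to picture that mechanism, here's a rough Python sketch of the logic. It's purely illustrative: the transport object and its methods are invented for the example, and this is not how IOS actually implements RTP.

# Conceptual sketch of RTP's "multicast first, retry unacked neighbours via unicast"
# behaviour. The transport object and its methods are made up for illustration.
class RtpSender:
    def __init__(self, neighbours):
        self.neighbours = set(neighbours)   # peers learned via hello
        self.seq = 0                        # RTP sequence number

    def send_reliable(self, packet, transport):
        self.seq += 1
        outstanding = set(self.neighbours)  # state table: who still owes an ACK
        transport.multicast(packet, seq=self.seq)
        for attempt in range(1, 17):        # EIGRP retries up to 16 times
            outstanding -= transport.collect_acks(seq=self.seq)
            if not outstanding:
                return True                 # everyone acknowledged this sequence number
            for peer in outstanding:
                transport.unicast(peer, packet, seq=self.seq)   # unicast fallback
        return False                        # any neighbours still outstanding get reset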

To test it I built this flat network with 3 EIGRP neighbours on the same subnet:




The addresses used are:
  • R1 - 192.168.0.1
  • R2 - 192.168.0.2
  • R3 - 192.168.0.3


In this scenario the routing update messages are sent using multicast. When a route is removed, the "query" type message is used. I'll shut down a loopback interface on R3, which will generate an EIGRP query. The packet dump below shows the query being multicast (to 224.0.0.10). The two neighbours then acknowledge this via unicast.



On R3 you see the following in the output of "debug eigrp packet"; it shows the process:
  1. R3 sending the query messages
  2. Both R1 and R2 responding via unicast.
*Mar 1 00:20:35.567: EIGRP: Enqueueing QUERY on FastEthernet0/0 iidbQ un/rely 0/1 serno 27-27
*Mar 1 00:20:35.571: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.1 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 27-27
*Mar 1 00:20:35.571: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.2 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 27-27

*Mar 1 00:20:35.575: EIGRP: Sending QUERY on FastEthernet0/0
*Mar 1 00:20:35.575: AS 1, Flags 0x0, Seq 34/0 idbQ 0/0 iidbQ un/rely 0/0 serno 27-27

*Mar 1 00:20:35.587: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:20:35.591: AS 1, Flags 0x0, Seq 0/34 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

*Mar 1 00:20:35.603: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:20:35.607: AS 1, Flags 0x0, Seq 0/34 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

[snip]


Now to test the unicast fallback by blocking the multicast updates on R1. This is quite tricky, as these multicast packets are also required to keep the EIGRP neighbour relationships up. My cunning plan is to increase the EIGRP hold timer so that I can drop multicast without disrupting the neighbours.

Because the hold timer is not a local setting but an "advertised value", I actually need to set it on R2 and R3 (only R3 is shown below, R2 gets the same command), which will then tell R1 not to worry if it doesn't see any hellos from them for the next ten minutes.

R3(config)#int f0/0
R3(config-if)#ip hold-time eigrp 1 600

R1(config)#int f0/0
R1(config-if)#ip access-group DENYEIGRP in

R1#show ip access-list DENYEIGRP
Extended IP access list DENYEIGRP
10 deny ip any host 224.0.0.10 log (4 matches)
20 permit ip any any (27 matches)


At this point EIGRP neighbours are all up and R1 is not expecting to hear from R3 for the next ten minutes. Now I'll shut down the interface on R3 again to generate an EIGRP query message. The wireshark output is shown below:



The debug output on R3 is shown below; you can see the phases of the RTP mechanism:
  1. R3 multicasts a query to 224.0.0.10.
  2. R2 responds via unicast (you can see the text peerQ un/rely 0/1 indicating a unicast message).
  3. R1 does not respond as it has not seen the message.
  4. Meanwhile R2 completes the exchange with R3 via unicast.
  5. R3 then realises there is an outstanding response from R1 and retries the query via unicast, showing:
    *Mar 1 00:09:13.995: EIGRP: Sending QUERY on FastEthernet0/0 nbr 192.168.0.1, retry 1, RTO 3321
    *Mar 1 00:09:13.995: AS 1, Flags 0x0, Seq 18/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1 serno 18-18
  6. R1 now responds via unicast and the exchange completes as normal (you can see this retry and the rest of the exchange towards the end of the debug output below).


The complete debug output is:
R3(config-if)#shut
R3(config-if)#
*Mar 1 00:09:11.775: EIGRP: Enqueueing QUERY on FastEthernet0/0 iidbQ un/rely 0/1 serno 18-18
*Mar 1 00:09:11.779: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.1 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 18-18
*Mar 1 00:09:11.779: EIGRP: Enqueueing QUERY on FastEthernet0/0 nbr 192.168.0.2 iidbQ un/rely 0/0 peerQ un/rely 0/0 serno 18-18

*Mar 1 00:09:11.783: EIGRP: Sending QUERY on FastEthernet0/0
*Mar 1 00:09:11.783: AS 1, Flags 0x0, Seq 18/0 idbQ 0/0 iidbQ un/rely 0/0 serno 18-18

*Mar 1 00:09:11.799: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.799: AS 1, Flags 0x0, Seq 0/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

*Mar 1 00:09:11.811: EIGRP: Received REPLY on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.811: AS 1, Flags 0x0, Seq 17/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/0

*Mar 1 00:09:11.815: EIGRP: Enqueueing ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.815: Ack seq 17 iidbQ un/rely 0/0 peerQ un/rely 1/0
*Mar 1 00:09:11.819: EIGRP: Sending ACK on FastEthernet0/0 nbr 192.168.0.2
*Mar 1 00:09:11.819: AS 1, Flags 0x0, Seq 0/17 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 1/0

*Mar 1 00:09:13.995: EIGRP: Sending QUERY on FastEthernet0/0 nbr 192.168.0.1, retry 1, RTO 3321
*Mar 1 00:09:13.995: AS 1, Flags 0x0, Seq 18/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1 serno 18-18

*Mar 1 00:09:14.019: EIGRP: Received ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.019: AS 1, Flags 0x0, Seq 0/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/1

*Mar 1 00:09:14.027: EIGRP: Received REPLY on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.031: AS 1, Flags 0x0, Seq 19/18 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/0

*Mar 1 00:09:14.031: EIGRP: Enqueueing ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.031: Ack seq 19 iidbQ un/rely 0/0 peerQ un/rely 1/0

*Mar 1 00:09:14.035: EIGRP: Sending ACK on FastEthernet0/0 nbr 192.168.0.1
*Mar 1 00:09:14.035: AS 1, Flags 0x0, Seq 0/19 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 1/0




Wednesday, 2 May 2012

WCCP Redirect ACLs and Masks


This article is about WCCP redirect ACLs, masks and how they relate to TCAM usage on Cisco switches. It's quite important to understand if you're doing WCCP, as you want to ensure forwarding is done in hardware, which runs at wire speed, rather than in software, which causes considerable CPU usage and potentially performance issues.

This is quite a difficult subject to explain and I'm not entirely sure I've done it that well here. The info has been pulled in from a variety of sources and I'm also not entirely sure it's all correct, as a few bits don't quite tie together. It's been re-written several times and I'm still not entirely happy, but here is the info, warts and all.

A very basic recap on WCCP.

WCCP redirects traffic as it passes through a switch or router, which acts as the WCCP server. The traffic is redirected to things like proxy servers or WAN optimisers, which are the WCCP clients. The server has redirect ACLs that specify what traffic will be sent to the WCCP client device. On Cisco routers/switches these ACLs are not stateful, so you have to capture traffic flows going in both directions.
This diagram shows the example setup: the proxy server is the WCCP client and the switch is the WCCP server.

For example to grab HTTP from LAN to WAN you would have:

ip wccp 100 redirect-acl HTTP_LAN_TO_WAN
ip access-list extended HTTP_LAN_TO_WAN
 permit tcp 10.0.0.0 0.0.0.255 any eq 80


Then to grab the return traffic:

ip wccp 200 redirect-acl HTTP_WAN_TO_LAN
ip access-list extended HTTP_WAN_TO_LAN
 permit tcp any eq 80 10.0.0.0 0.0.0.255


These are then configured on interfaces to capture traffic. Cisco supports both ingress and egress redirection, however the switches will only do hardware forwarding for ingress WCCP sessions.

int gi0/1
 description LAN
 ip wccp 100 redirect in


int gi0/2
 description WAN
 ip wccp 200 redirect in


With this configuration alone nothing will happen. You need to add a WCCP client and tell it to communicate with the WCCP server, i.e. you need to configure WCCP on the proxy server and tell it to talk to the switch. It will then start chatting and negotiate certain parameters.


Once that is done the WCCP server will start redirecting traffic. If no WCCP clients are active then the server will just forward traffic as per normal. If one or more WCCP clients are active then the switch will load balance traffic between them depending on configuration.

TCAM

Stands for Ternary Content-Addressable Memory. It is used for hardware forwarding: packets are compared against the TCAM table, which tells the switch or router how to forward them. If an entry isn't found in the TCAM table then the packet must be software routed, which is not desirable.
Ternary means there are three possible values: 0, 1 and "don't care". "Don't care" is represented by an x in this doc and, just to really confuse you, I'll use 0x to prefix any hex values.
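
To picture the "don't care" idea, here's a tiny Python sketch of how a ternary entry matches a field: only the bits the mask cares about have to be equal. This is my own illustration, obviously not how the silicon works.

# A ternary entry is a (value, mask) pair: bits set in the mask must match,
# bits clear in the mask are "don't care" (the x's used in this article).
def ternary_match(field, value, mask):
    return (field & mask) == (value & mask)

# Entry meaning "xxx1 xxxx": value 0b00010000, mask 0b00010000
print(ternary_match(0b10100001, 0b00010000, 0b00010000))   # False - that bit is 0
print(ternary_match(0b10110000, 0b00010000, 0b00010000))   # True  - that bit is 1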


Redirect ACLs

The redirection ACL tells the WCCP server what traffic to intercept and divert to the WCCP client/s, any traffic not matching is passed as normal. As this ACL is likely to be applied on an interface seeing a lot of traffic (probably all transit traffic for the network) then you want it to run entirely in hardware and be as fast as possible. There are a couple of rules with regards to TCAM usage and this ACL:

  • Each permit statement in the ACL requires at least one TCAM entry.
  • Each load balanced path requires at least one TCAM entry.
  • The number of load balanced paths can be calculated with the number of bits in the assignment mask (see below).
  • In all cases except where the mask is 0x0, deny statements use fewer TCAM entries than permit statements. This is because traffic matching a deny isn't load balanced (it simply isn't redirected), so a deny statement only takes up 1 TCAM entry.




The Mask.

Cisco switches only support hardware forwarding for WCCP mask-based assignment, not the hash method. The mask is a hexadecimal value that does several things:
  • Restricts how many WCCP clients can be part of the load balancing arrangement.
  • Affects the TCAM usage by WCCP.
  • Defines what IP addresses are load balanced to which WCCP clients.
The last point is critical for WAN optimisers, which work in pairs by forming shared byte caches. If you have a farm of WAN optimisers, e.g. in a data centre, then you want remote sites to always speak to the same member of the farm to avoid having to maintain multiple shared caches, i.e. all hosts within a certain subnet will be load balanced to the same WCCP client.

The masks are written in hex and usually configured in hex, but I found that to make sense of them it's best to convert them to binary. Also convert the IP addresses to binary and think of the mask being applied bit by bit.

The mask is configured on the WCCP client, e.g. the WAN Optimiser or Proxy Server, which then informs the server during the WCCP session negotiation.

How the Switch Uses the Mask

On Cisco switches all combinations of bits in the mask are used to create different values. These values are applied to the IP addresses in the redirect ACL to create entries in the forwarding table (TCAM), which the switch uses to forward the traffic to WCCP clients.

For example a mask of 0x10 in binary is represented as 0001 0000.
The available combinations of bits are: 0000 0000 and 0001 0000
Because it's a ternary mask we are only interested in the specific bit used in the original mask; the other zeros all become "don't care" values, so the two masks the forwarding table will end up using are:
xxx1 xxxx
xxx0 xxxx

These masks would be applied against the ACL and used to create the TCAM forwarding paths for the traffic: any IP address with a 1 in the 5th bit position would match the first mask and any with a 0 in that position would match the second. If you configure this mask and then look at the WCCP session, it appears as below:

Switch#show ip wccp 100 detail
WCCP Client information:
   WCCP Client ID:    192.168.0.100
   Protocol Version:   2.0
   State:     Usable
   Redirection:    L2
   Packet Return:    L2
   Packets Redirected:   0
   Connect Time:     00:01:07
   Assignment:     MASK

   Mask SrcAddr DstAddr SrcPort DstPort
   ---- ------- ------- ------- -------
   0000: 0x00000010 0x00000000 0x0000 0x0000

   Value SrcAddr DstAddr SrcPort DstPort CE-IP
   ----- ------- ------- ------- ------- -----
   0000: 0x00000000 0x00000000 0x0000 0x0000 0xC0A80064 (192.168.0.100)
   0001: 0x00000010 0x00000000 0x0000 0x0000 0xC0A80064 (192.168.0.100)
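
You can reproduce that value table yourself by enumerating every combination of the mask bits. A quick Python sketch (my own, not anything Cisco publishes):

from itertools import combinations

def mask_values(mask):
    bits = [1 << i for i in range(32) if mask >> i & 1]    # the bits the mask cares about
    return sorted(sum(combo) for r in range(len(bits) + 1)
                  for combo in combinations(bits, r))

print([hex(v) for v in mask_values(0x10)])    # ['0x0', '0x10'] - the two values above
print([hex(v) for v in mask_values(0x3)])     # ['0x0', '0x1', '0x2', '0x3']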



Mask Load Balancing.

The number of bits in the mask determines how many devices you can load balance traffic between.

A mask of 0x0 does not allow load balancing and will give a single path only (useful if you only have a single WCCP client and are short on TCAM).

A mask of 0x1 allows for load balancing between 2 WCCP clients only. The binary mask values can be either 0 or 1.

A mask of 0x3 allows for up to 4 WCCP clients as it's made up from 2 bits and available mask values can be 00, 01, 10 and 11.


The default mask is 0x1741. In binary that is 0001 0111 0100 0001, so 6 bits are used, which allows for 2^6 = 64 WCCP clients. I have no idea why Cisco chose this number; even their own WAAS troubleshooting guide recommends you don't use it. Because it has a bit in the rightmost position it will load balance alternating every single IP address, and if they wanted a 6 bit mask then 0011 1111 (0x3F) would make more sense. Possibly there is some mathematical significance I haven't seen, possibly it works best with their hardware, possibly it was made up at random or possibly this entire article is wrong and I don't understand the masks at all. Take your pick.
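
That "bits in the mask" rule is easy to sanity check (again, just my own Python scribble):

for mask in (0x0, 0x1, 0x3, 0x1741, 0x3F):
    bits = bin(mask).count("1")
    print(hex(mask), "->", bits, "bits ->", 2 ** bits, "forwarding paths")
# 0x0 -> 0 bits -> 1 forwarding paths
# 0x1 -> 1 bits -> 2 forwarding paths
# 0x3 -> 2 bits -> 4 forwarding paths
# 0x1741 -> 6 bits -> 64 forwarding paths
# 0x3f -> 6 bits -> 64 forwarding paths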

Mask IP Address Matching.

The simplest example is a mask of 0x1. As these masks are used against IP addresses the value would be converted to 32 bits and represented in TCAM as xxxxxxxx.xxxxxxxx.xxxxxxxx.xxxxxxx1


The ACL is "permit tcp 10.0.0.0 0.0.0.255 any eq http". With a mask of 0x1 it would produce two forwarding paths, which match IP traffic as follows:
Path 1 - mask 0 - 10.0.0.2, 10.0.0.4, 10.0.0.6, 10.0.0.8...
Path 2 - mask 1 - 10.0.0.1, 10.0.0.3, 10.0.0.5, 10.0.0.7...


With two WCCP clients, one would receive HTTP traffic from hosts with IPs matching path 1 and the second client would receive traffic from hosts matching path 2.


With a mask of 0x10, the binary value is 10000 (in TCAM this would be xxxxxxxx.xxxxxxxx.xxxxxxxx.xxx1xxxx). This will load balance clients in "chunks" of 16 addresses.

If the ACL is "permit tcp 10.0.0.0 0.0.0.255 any eq http" then this will create two groups and distribute traffic as follows:
Mask 0 - 10.0.0.0 to 10.0.0.15, 10.0.0.32 to 10.0.0.47, 10.0.0.64 to 10.0.0.79.....
Mask 1 - 10.0.0.16 to 10.0.0.31, 10.0.0.48 to 10.0.0.63, 10.0.0.80 to 10.0.0.95......

If there were two active WCCP clients then you'd see the traffic distributed as above.
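
The groupings above are just the masked bits of each address, which you can check with a few lines of Python (a sketch using the standard ipaddress module, nothing WCCP-specific):

from ipaddress import ip_address

def bucket(ip, mask):
    return int(ip_address(ip)) & mask       # the masked bits pick the forwarding path

for ip in ("10.0.0.1", "10.0.0.15", "10.0.0.16", "10.0.0.31", "10.0.0.32"):
    print(ip, "->", hex(bucket(ip, 0x10)))
# 10.0.0.1 -> 0x0
# 10.0.0.15 -> 0x0
# 10.0.0.16 -> 0x10
# 10.0.0.31 -> 0x10
# 10.0.0.32 -> 0x0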

For large solutions you may want to distribute using a different pattern. With WAN optimisers you want the same optimisers to speak to each other, rather than have a branch office device communicate with several different data centre devices: it would either have to maintain several different copies of the byte caching tables, or the optimiser cluster would end up forwarding traffic internally to keep the same device peerings. For a system where you wanted to split subnets on a /21 boundary and have up to 4 WCCP clients in your farm, you'd choose a mask as follows:

/21 in binary would look like this: 11111111.11111111.11111xxx.xxxxxxxx
The WCCP mask could be xxxxxxxx.xxxxxxxx.xxxx1xxx.xxxxxxxx
But this would only allow for 2 possible mask values, so only 2 WCCP clients.
To allow 4 WCCP clients you need 2 bits, the mask becomes xxxxxxxx.xxxxxxxx.xxx11xxx.xxxxxxxx
In hex that is shown as 0x1800

This would give four available combinations/masks of:
xxxxxxxx.xxxxxxxx.xxx00xxx.xxxxxxxx shortened to mask 00
xxxxxxxx.xxxxxxxx.xxx01xxx.xxxxxxxx shortened to mask 01
xxxxxxxx.xxxxxxxx.xxx10xxx.xxxxxxxx shortened to mask 10
xxxxxxxx.xxxxxxxx.xxx11xxx.xxxxxxxx shortened to mask 11


With an ACL of "permit tcp 10.0.0.0 0.255.255.255 any eq http" the split would be:

00 - 10.0.0.0 - 10.0.7.255, 10.0.32.0 - 10.0.39.255...
01 - 10.0.8.0 - 10.0.15.255, 10.0.40.0 - 10.0.47.255...
10 - 10.0.16.0 - 10.0.23.255, 10.0.48.0 - 10.0.55.255...
11 - 10.0.24.0 - 10.0.31.255, 10.0.56.0 - 10.0.63.255...
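
Again, those ranges are easy to sanity check with a couple of lines of Python (illustrative only):

from ipaddress import ip_address

for ip in ("10.0.0.1", "10.0.7.255", "10.0.8.0", "10.0.16.0", "10.0.24.0", "10.0.32.0"):
    value = (int(ip_address(ip)) & 0x1800) >> 11   # pull out the two mask bits
    print(ip, "->", format(value, "02b"))
# 10.0.0.1 -> 00
# 10.0.7.255 -> 00
# 10.0.8.0 -> 01
# 10.0.16.0 -> 10
# 10.0.24.0 -> 11
# 10.0.32.0 -> 00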


Weighted Load Balancing.

I've said above that each available forwarding path equals a single WCCP client. This is not necessarily the case, as you can weight WCCP clients. Consider a case with two WAN optimisers (A and B) of different specifications, where A can process twice as much traffic as B. In that event you would want at least 3 forwarding paths, 2 of them pointing to A and 1 to B, so your mask needs to use at least 2 bits. This is another area I'm a bit hazy on; I would think you'd need a multiple of 3 to make this work properly, but you can only ever have a power-of-two number of forwarding paths...

TCAM Usage

The equation for working out TCAM usage is defined as:

2^<mask bits> * <acl entries>

To include any deny entries in the ACL as well, the full definition would be:
( 2^<mask bits> * <number of permit statements in redirect acl> ) +   <number of deny statements in redirect ACL>
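
As a small Python helper (just the formula above wrapped in a function, not an official Cisco calculator):

def wccp_tcam_entries(mask, permits, denies=0):
    paths = 2 ** bin(mask).count("1")       # forwarding paths created by the mask bits
    return paths * permits + denies         # each deny only needs a single entry

print(wccp_tcam_entries(0x1741, permits=1))       # 64 - default mask, one permit line
print(wccp_tcam_entries(0x700, permits=2))        # 16 - three mask bits, the HTTP/HTTPS example below
print(wccp_tcam_entries(0x700, permits=2) * 2)    # 32 - redirect ACLs in both directions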


On a 3750 the WCCP TCAM is shared with the ACL TCAM. You have to run the routing SDM template to support WCCP and it supports a maximum of 1024 entries. So if using the default mask (64 forwarding paths per permit statement) you can have at most 1024/64 = 16 permit entries in the redirect ACL, and that's with no other ACLs on the switch.

If you wanted to capture HTTP and HTTPS traffic, split the network by /24 and allow for 8 forwarding paths in your farm then your ACL may be:

permit tcp 10.0.0.0 0.255.255.255 any eq 80
permit tcp 10.0.0.0 0.255.255.255 any eq 443

And your mask may be 0x700 (xxxxxxxx.xxxxxxxx.xxxxx111.xxxxxxxx)


This would result in 8 forwarding paths, each being created for both of the ACL entries, a total usage of 16 TCAM entries. If you are matching traffic in both directions it's a total of 32 TCAM entries used for WCCP.




Tuesday, 17 January 2012

Network Notes - IBM PowerHA / HACMP

Some info on the networking features of HACMP (High Availability Cluster Multiprocessing). This is now called PowerHA SystemMirror for AIX. It allows up to 16 nodes in a cluster. As of v7.1 the cluster can use multicast to communicate, previous versions used UDP broadcasts. The cluster heartbeats are sent both via LAN and SAN for redundancy.


Terminology:


Boot IP: The address bound to the physical interface (e.g. ifconfig blah x.x.x.x).
Service IP: The VIP to which clients connect to hit the actual service, can exist on any interface on any cluster member.
Persistent IP: Used to reach a host for management. Also called node VIP and can exist on any interface on a single cluster member.
HWAT - Hardware Address Takeover: MAC address follows the IP when failing over.
IPAT - IP Address Takeover: Moves the service IP between interfaces and cluster members.

There are two methods of doing IPAT, via replacement and by alias.

IPAT via Replacement


This is the older method. It uses HWAT, so no gratuitous ARP is required as the MAC address fails over with the service IP; however, port security clearly cannot be used! You need two interfaces in the same VLAN: one configured with a real IP address (the boot IP) and one with any IP (the standby IP) that need not be routable. When HACMP starts it replaces the real IP address on NIC 1 with a VIP in the same subnet. A failover moves both VIP and MAC onto NIC 2. You can only have one service VIP per adapter pair.


IPAT via Aliasing.


The newer and recommended method, it requires a network that can support gratuitous ARP as HWAT is not used. The service IP is the only routable address needed. The 2 NICs are configured with IP addresses on different subnets that need not be routable. The service VIP is an alias address on the interface and fails over as an alias. You can have as many service VIPs as you want on an interface.

Heartbeats.


The boot IPs seem fairly pointless; however, network heartbeats are broadcast/multicast from the boot IPs, so each NIC's boot IP should be allocated from the same subnet on every node. An example allocation is:

Node service IP 10.0.0.10

Node 1
NIC1 boot IP 192.168.0.1
NIC2 boot IP 192.168.10.1
Persistent IP 10.0.0.101

Node 2
NIC1 boot IP 192.168.0.2
NIC2 boot IP 192.168.10.2
Persistent IP 10.0.0.102

Node 3
NIC1 boot IP 192.168.0.3
NIC2 boot IP 192.168.10.3
Persistent IP 10.0.0.103



Routing.


Any routes should be configured via the service IP subnet and the persistent/node IP subnet. You should not use the boot addresses as they may not always be reachable (e.g. after a NIC failover). You can use the service IP to manage the system, but it might not be on that node if the cluster has failed over, so it's better to use a persistent IP. Service and persistent IPs can be on the same subnet or different ones; if different, you'll either need multiple IPs configured on the VLAN interface or static routing configured on the AIX box, as they'll both be in the same VLAN. I would KISS and have both on the same VLAN and same subnet.


Wednesday, 17 August 2011

Cisco ASA 8.4 - Global Access Lists

A handy new feature in version 8.4 of the ASA software is the ability to use global access lists.


The Cisco ASA allows security levels to be applied to interfaces: traffic is automatically allowed from a higher to a lower security level interface but not vice versa. It's probably designed for the fairly common use case of a perimeter device between a LAN and the internet. The internet link is set to security level 0 and the inside interface to 100, and all LAN traffic is then allowed to flow out. This is shown below:



Prior to version 8.3, access lists (ACLs) had to be applied on an interface and in a direction, e.g.

access-list MYACL extended permit tcp any any eq www
access-group MYACL in interface outside
As soon as an ACL is applied to an interface, that interface passes traffic based on the ACL rather than on security levels. However it gets complicated, as traffic coming in another interface that would previously have been allowed is still allowed: in the example above, if you permit port 80 in from the internet, all outgoing LAN traffic is still allowed.


Now in version 8.4, Cisco have added the ability to have a single global ACL that applies to all traffic regardless of which interface it uses. This is how most other firewalls work, so it's a welcome change. To do this you create the ACL and then apply it with:

access-list MYACL extended permit tcp any any eq www
access-group MYACL global

When a global ACL is applied, it removes all behaviours based on security levels from ALL interfaces. So in the original example, you would need a rule in your global ACL that permits LAN hosts access to the internet. The any/any rule above is a good example of what not to do, as it now globally means "any address" rather than being specific to a particular interface.

Update 2015: it appears that not quite "ALL" security level behaviours are removed; you still need the same-security-traffic command to allow traffic to flow between interfaces with the same security level, regardless of ACL.



Wednesday, 13 April 2011

Evaluation Assurance Levels - EAL

EAL stands for Evaluation Assurance Level and is a security certification for IT products measured against a set of common security criteria. The main source of information on EAL levels is the Common Criteria portal, where you can find details of approved products and information on the criteria used for the EAL certifications.

Who uses it?


Your average network bod may not come across EAL very often. It tends to crop up in areas that are regulated by government bodies such as CESG, who will often require EAL4 certified products for certain secure environments. However, you don't just buy EAL4 kit and become government approved; it fits into a much larger security framework such as ISO27k, dealing with everything from who gets into the building to how you manage changes to IT systems.

How does a product get EAL certified?


It is assessed against a set of common criteria by an approved agency. The developer of the system produces a Security Target (ST) document containing a list of features to be assessed. The ST is based on the criteria here. The process is long and expensive; according to Wikipedia, vendors were spending $1 - $2.5 million to gain EAL4 certification in the 1990s.


What do you get when EAL certified?


Certified products are listed on the Common Criteria portal along with the rating granted, the ST it was assessed against and the assessment report, e.g. here (PDF) is the ST for the Cisco ASA as a firewall and here (PDF) is the assessment report. It's interesting to note that the EAL4 VPN certificate was issued separately, so an ASA acting as both firewall and VPN endpoint is not a valid EAL4 solution; strictly speaking you would need two in series, each performing one task.

So what does it mean to a network engineer?


Probably not a lot. It's a policy requirement for many places, but the assessment is only against the device, not against the specific implementation of it. You could deploy an EAL4 firewall with a policy of "permit any any" and it's still an EAL4 device! At that point the other security mechanisms should have stopped you from putting it on the network.

If you are involved in hardware selection for a regulated organization then you may need to use EAL4 devices in certain situations.

What is required to meet the various levels?


The EAL process is broken down to cover the following aspects of a system:
Development, documentation, life-cycle support, security target evaluation, testing, vulnerability assessment.

Each EAL level goes into slightly more detail. For example, the "development" area at EAL1 requires a basic functional specification to be provided by the developer. EAL2 requires that same functional specification but expanded to include details of security enforcement; it also requires a security architecture description and a basic design. The specifics of those items are detailed here.

How long does it take to get EAL4?


It seems to vary from a very long time to aeons; certainly it's measured in years rather than months. A look at the NIAP CCEVS in-evaluation and evaluated lists for firewalls shows a few examples:
Checkpoint R65 HFA01 on IPSO is recorded as submitted in Oct 2005, although R65 was released in 2007, so the process was started early during development. It passed in March 2009. So that's 4 years to get certified, and the product went EOL in March 2011, 2 years later.
Cisco ASA 8.3 as a VPN was submitted in November 2009 and has still not passed; the predicted date is June 2011.
Palo Alto submitted various devices in December 2009 and they are still in progress.

What exactly is certified?


The certification is issued against a specific software release and hardware platform.

A specific version of the software you say? As in....minor version??


That is how the cert is written. The Cisco ASA obtained EAL4 for firewall purposes on version 7.0(6) of its OS, which was released in August 2006. Cisco have been patching and updating that for 5 years! The ASA is now up to release 8.4, which has been submitted again to CCEVS (the scheme run by NIST and the NSA) for evaluation.

In reality there will be a security assessor on the ground who will review the solution and hopefully be sensible about using a modern patched version of the OS and judge it acceptable to meet an EAL4 requirement, even if it's not strictly what's on the EAL4 certificate.

I don't know anyone who would tell you with a straight face that using a 5 year old OS on a firewall is going to increase your security!

What about high end firewalls?


There is a bit of a gap: if you need an EAL4 firewall with 10 gig throughput then you're out of luck, as the only one that's passed assessment is the Checkpoint Power-1 on the 5075/9075, and that went end of life last month (March 2011). The closest is the Cisco 5580, which has been submitted for EAL4, due November 2011, and is arguably similar enough to the 5540 to be acceptable; however it has recently been announced as being binned in favour of the 5585, so after August 2011 you can't buy one any more!

The security market moves quickly compared to the EAL assessments, which makes things tricky.

The top end Cisco firewall platform is the 5585, not even showing as submitted for EAL evaluation yet.
Checkpoint has R71 under assessment now, predicted result in November 2011.
Palo Alto has various items aiming for November 2011, but their flagship model, the PA-5000, is not listed as under assessment; it only recently hit the market in the UK so EAL certification may not have been discussed yet.
Juniper have EAL4 for their ScreenOS platform, the SSG, which goes EOL in 2013. They have EAL3 for Junos 9.3 on the SRX platform; the current version is 10.4. There doesn't appear to be any indication that the SRX security platforms have been submitted for EAL4 certification, although it would be surprising if that really were the case, as governments would be ditching Juniper en masse before 2013.

So until November 2011 there are no EAL4 10gig firewalls. You'll have to build a farm of 1gig ones instead!

What alternative schemes are there?


FIPS-140 from NIST.
CAPS, the CESG Approved Product Scheme.

Is it worth me buying EAL4 products?


If you have to ask then probably not. If your business is regulated and the agencies setting those policies define EAL4 as a requirement then you have no choice.

For companies with the option I would say it's a helpful indicator, but I would certainly weigh other aspects above the EAL status when selecting a device:

  • Performance.

  • Price.

  • Published security tests and exploits.

  • Staff familiarity.

  • Internal testing.



An EAL4 certificate does indicate that the product was developed following good practices and has a well defined and documented architecture. These are clearly good things in terms of stability and security. However, not having EAL4 doesn't necessarily mean the product hasn't followed a good development process and isn't secure; it just means the manufacturer hasn't paid for it to be assessed.


Monday, 14 February 2011

Legacy FRTS & Subinterfaces

FRTS and subinterfaces. This page follows on from the previous article on legacy FRTS configuration here and shows the default behaviour of FRTS with subinterfaces.

The legacy frame-relay traffic shaping has to be enabled on a physical interface. Any subinterfaces will then inherit the configuration, which is 56kbps by default. The network is shown below:



In the example below FRTS is turned on but not configured, so both subinterfaces are shaped to 56kbps (I'm using screenshots as the output of "show traffic-shape" doesn't like this site's layout).

R1#show run | begin interface Serial0/0
interface Serial0/0
no ip address
encapsulation frame-relay
no fair-queue
clock rate 2000000
frame-relay traffic-shaping
!
interface Serial0/0.102 point-to-point
ip address 192.168.12.1 255.255.255.0
snmp trap link-status
frame-relay interface-dlci 102
!
interface Serial0/0.103 point-to-point
ip address 192.168.13.1 255.255.255.0
snmp trap link-status
frame-relay interface-dlci 103


As shown below, the target rate is 56000b/s



This config applies a map-class to one of the subinterfaces, shaping it to 2Mbps:

map-class frame-relay TEST_MAP
frame-relay traffic-rate 2000000 2000000


interface Serial0/0.102
frame-relay class TEST_MAP


The remaining subinterface remains at 56kbps:



You can apply the map-class to the physical interface; the sub-interfaces then inherit its settings:



Applying other maps to the subinterfaces overrides any inherited settings:

map-class frame-relay TEST_MAP_2
frame-relay traffic-rate 128000 128000


interface Serial0/0.103
frame-relay class TEST_MAP_2






Sunday, 13 February 2011

Frame Relay Traffic Shaping - Legacy Configuration

This is a basic lab to play around with frame-relay traffic shaping, FRTS. It uses the legacy configuration method rather than MQC. INE have a great article here describing the other options.


This article assumes some knowledge of QoS terms such as CIR, Bc, Be and Tc.

The lab used looks like this:



I'll use the GNS3 built in frame switch to make life easier, the config is below:



The basic router configurations are:
hostname R1
!
interface Serial0/0
ip address 192.168.0.1 255.255.255.0
encapsulation frame-relay
clock rate 2000000


hostname R2
!
interface Serial0/0
ip address 192.168.0.2 255.255.255.0
encapsulation frame-relay
clock rate 2000000


In this mode no shaping is enabled; WFQ is the default queueing for serial interfaces at or below E1 speed (2.048Mbps).


R2#show int s0/0
Serial0/0 is up, line protocol is up
Internet address is 192.168.0.2/24
Encapsulation FRAME-RELAY, loopback not set
Queueing strategy: weighted fair
Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Conversations 0/1/256 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
Available Bandwidth 1158 kilobits/sec



To turn on FRTS use the commands as below:

R1(config)#int s0/0
R1(config-if)#frame-relay traffic-shaping


This gives the interface a default configuration of 56kbps with Bc set to 7000 bits. This can cause problems with subinterfaces as they'll end up at 56k unless configured otherwise. The queueing method is also changed to FIFO.

R1#show int s0/0
Serial0/0 is up, line protocol is up
Internet address is 192.168.0.1/24
Encapsulation FRAME-RELAY, loopback not set
Queueing strategy: fifo
Output queue: 0/40 (size/max)

R1#show traffic-shape

Interface Se0/0
       Access Target    Byte   Sustain   Excess    Interval  Increment Adapt
VC     List    Rate     Limit   bits/int bits/int  (ms)      (bytes)  Active
102            56000     875    7000      0         125       875       -


The actual configuration is done in a map-class:

R1(config)#map-class frame-relay TEST_MAP


The options are configured using the frame-relay command:

R1(config-map-class)#frame-relay ?
adaptive-shaping Adaptive traffic rate adjustment, Default = none
bc Committed burst size (Bc), Default = 7000 bits
be Excess burst size (Be), Default = 0 bits
cir Committed Information Rate (CIR), Default = 56000 bps
congestion Congestion management parameters
custom-queue-list VC custom queueing
end-to-end Configure frame-relay end-to-end VC parameters
fair-queue VC fair queueing
fecn-adapt Enable Traffic Shaping reflection of FECN as BECN
fragment fragmentation - Requires Frame Relay traffic-shaping to be
configured at the interface level
holdq Hold queue size for VC
idle-timer Idle timeout for a SVC, Default = 120 sec
interface-queue PVC interface queue parameters
ip Assign a priority queue for RTP streams
mincir Minimum acceptable CIR, Default = CIR/2 bps
priority-group VC priority queueing
tc Policing Measurement Interval (Tc)
traffic-rate VC traffic rate
voice voice options

There are a couple of ways to shape traffic. The traffic-rate command sets the average rate and the peak rate; IOS then calculates Bc and Be based on a time interval of 125ms. To set the rate to 128kbps and the peak rate to 256kbps:
R1(config-map-class)#frame-relay traffic-rate 128000 256000
R1(config-if)#^Z
R1#show traffic-shape
Interface Se0/0
       Access Target    Byte   Sustain   Excess    Interval  Increment Adapt
VC     List    Rate     Limit   bits/int bits/int  (ms)      (bytes)  Active
102            128000  18000   128000    128000    125       2000 -

Note that Tc (interval) is still 125ms.

Looking at the numbers, IOS calculates Bc as CIR x Tc, which is 128000 x 0.125 = 16000 bits (the 2000 byte increment above), and Be as the peak rate minus the CIR, which is 256000 - 128000 = 128000 bits (16000 bytes); together they give the 18000 byte limit shown.
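
The arithmetic behind that output is easy to reproduce (my own back-of-envelope Python, matching the show traffic-shape fields rather than anything IOS actually runs):

# Derived values for "frame-relay traffic-rate 128000 256000" with Tc = 125ms
cir, peak, tc = 128000, 256000, 0.125      # bits/s, bits/s, seconds

bc = int(cir * tc)                         # committed bits per interval
be = peak - cir                            # excess bits
increment = bc // 8                        # bytes added to the token bucket each Tc
byte_limit = (bc + be) // 8                # bucket depth in bytes

print(bc, be, increment, byte_limit)       # 16000 128000 2000 18000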



You can also specifically configure the committed information rate (CIR) and committed burst (Bc) in the map-class; this lets you change the value of Tc, which is calculated as Bc/CIR, as below on R2:
map-class frame-relay TEST_MAP_R2
frame-relay cir 128000
frame-relay bc 12800

R2#show traffic-shape

Interface Se0/0
       Access Target    Byte   Sustain   Excess    Interval  Increment Adapt
VC     List    Rate     Limit   bits/int bits/int  (ms)      (bytes)  Active
201            128000    1600   12800     0             100       1600 -
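
The same back-of-envelope check for the cir/bc method (again, just my own arithmetic in Python):

cir, bc = 128000, 12800            # bits/s and committed bits per interval

tc = bc / cir                      # 0.1 s, the 100 ms interval shown above
increment = bc // 8                # 1600 bytes per interval
byte_limit = (bc + 0) // 8         # 1600 bytes, as Be defaults to 0

print(tc, increment, byte_limit)   # 0.1 1600 1600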


You can also see the shaping configuration by looking at the PVC:
R2#show frame pvc 201

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

DLCI = 201, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0

input pkts 8 output pkts 7 in bytes 622
out bytes 588 dropped pkts 0 in pkts dropped 0
out pkts dropped 0 out bytes dropped 0
in FECN pkts 0 in BECN pkts 0 out FECN pkts 0
out BECN pkts 0 in DE pkts 0 out DE pkts 0
out bcast pkts 2 out bcast bytes 68
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
pvc create time 00:27:55, last time pvc status changed 00:27:55
cir 128000 bc 12800 be 0 byte limit 1600 interval 100
mincir 64000 byte increment 1600 Adaptive Shaping none

pkts 1 bytes 34 pkts delayed 0 bytes delayed 0
shaping inactive
traffic shaping drops 0
Queueing strategy: fifo
Output queue 0/40, 0 drop, 0 dequeued
