Sunday 22 October 2023

Linux Shortcuts and Auto Updating AppImages

A minor annoyance with software deployed as auto-updating AppImages is that each update breaks any shortcuts you've created, because the version number in the filename changes.

The trick is not to point the shortcut directly at the file, e.g.

  /home/me/software/myappname-v32831.38474-linux-x86_64.AppImage 
  


Instead, create the shortcut to run the "find" command, so that it locates any file in that folder starting with the same text and launches it, regardless of the version number.

find /home/me/software/ -name 'myappname-*' -exec {} \;


This could introduce problems, so there are a few assumptions/caveats:
  • The app is saved/run from /home/me/software/ in the above example
  • You're aware this is just launching any file in that folder with the right name, so if other users can access it then they could replace it and you'll launch their file
  • The AppImage update process needs to delete the old file; otherwise you'd have to modify the find command to pick the newest version by date, as sketched below
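
If old versions do hang around, a sketch of a date-sorted variant (assuming GNU find and the same example path and filename as above) could be used as the shortcut command instead:

#run the most recently modified matching AppImage
sh -c 'find /home/me/software/ -name "myappname-*.AppImage" -printf "%T@ %p\n" | sort -n | tail -n 1 | cut -d" " -f 2- | xargs -r -I{} {}'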


Saturday 22 January 2022

3rd Party Firewalls in Azure

You can use 3rd party firewalls in Azure but there are some differences in how High Availability works.

Standard firewall H/A works via lower-level network communication to move IP addresses between devices (e.g. gratuitous ARP), but the underlying network in Azure/AWS/etc. won't support that approach. There are APIs to inform Azure that an IP address has moved to a different device, but at the time of writing this approach results in very slow failover (1 min+).

The current vendor pattern architectures use a load balancer and two separate active firewalls to provide resilience. In Azure there are two main types of Load Balancer:
  • "Public" which resembles load balancer on a stick (or one-armed load balancing) without SNAT, just changes the destination IP.
  • "Private" or "Internal" which is a sort of software defined version of routed load balancing, traffic is forwarded to the backend pool members but the destination IP is not changed.

Inbound Flows

For an inbound connection to a public IP address that fronts a single service, use a Public load balancer with the firewalls set as the backend. The load balancer will then re-write the destination address to whichever firewall it decides to use. This needs the firewall to DNAT traffic to the actual destination, and to SNAT so that return packets go back via the same firewall. This looks like the diagram below (red text refers to IP addresses):
This is fairly limited because not many networks would deploy a pair of firewalls to front a single service. Multiple services are more complicated and there are a few big constraints:
  • The firewall normally uses destination IP address to direct traffic, but in the above scenario the load balancer has set the destination IP as the firewall itself.
  • With typical one-armed load balancers you can have a different SNAT address for each service, but Azure Load Balancer doesn't do SNAT.
  • A "private" type load balancer would maintain the original destination IP address, but you can't apply public IP addresses to them, Microsoft's use-case for those is strictly internal.
  • A "public" type load balancer could be put infront of a private one to have separate load balancers for service and firewalls, but the backend pool hosts need to be on the local subnet with the Azure Load Balancer.
The only option for non-web traffic is to use the "public" type load balancer with either a separate IP address on every firewall for every service, or a separate port on every firewall for every service, which gets complicated fairly quickly and becomes problematic at scale; a rough sketch of the pattern is below.
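
This isn't a vendor reference design, just an illustration using the Azure CLI; the resource names are hypothetical, the probe port is arbitrary and the firewall NICs would still need adding to the backend pool.

#public Standard load balancer with the two firewall NVAs as the backend,
#publishing one service on TCP/443
az network public-ip create -g fw-rg -n svc1-pip --sku Standard

az network lb create -g fw-rg -n fw-external-lb --sku Standard \
  --public-ip-address svc1-pip --frontend-ip-name svc1-frontend \
  --backend-pool-name firewall-pool

az network lb probe create -g fw-rg --lb-name fw-external-lb -n fw-probe \
  --protocol Tcp --port 443

az network lb rule create -g fw-rg --lb-name fw-external-lb -n svc1-https \
  --protocol Tcp --frontend-port 443 --backend-port 443 \
  --frontend-ip-name svc1-frontend --backend-pool-name firewall-pool \
  --probe-name fw-probe

#every additional service needs another frontend IP (or another port) plus
#another rule and matching DNAT/SNAT policy on the firewalls
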
For web traffic a nicer solution is to use an Application Gateway on the outside to load balance the service (backend pool = actual servers) and, on the inside, a private/internal load balancer doing the firewall H/A (backend pool = firewalls). The destination IP address for the entire flow is the backend server, so no DNAT is required, and neither is one IP on every firewall per service.
There is also a "Gateway Load Balancer" feature that seems to do away with the need for SNAT on the firewalls, but I've not played with it yet: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/nva-ha.

Outbound Flows

Outbound traffic can be load balanced using a "private" load balancer sat on the "inside" of the network, which maintains the original destination while providing H/A via the available firewalls. This looks the same as the first diagram.
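
A minimal sketch of that internal load balancer plus a default route pointing at it, again with hypothetical names and addressing; the HA ports rule (protocol All, ports 0) spreads all traffic across the firewall pool.

#internal Standard load balancer on the inside subnet with an HA ports rule
az network lb create -g fw-rg -n fw-internal-lb --sku Standard \
  --vnet-name hub-vnet --subnet inside-subnet \
  --frontend-ip-name inside-frontend --private-ip-address 10.0.1.100 \
  --backend-pool-name firewall-pool

az network lb probe create -g fw-rg --lb-name fw-internal-lb -n fw-probe \
  --protocol Tcp --port 22

az network lb rule create -g fw-rg --lb-name fw-internal-lb -n ha-ports \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name inside-frontend --backend-pool-name firewall-pool \
  --probe-name fw-probe

#send outbound traffic from the internal subnets via the LB frontend IP
az network route-table create -g fw-rg -n inside-routes
az network route-table route create -g fw-rg --route-table-name inside-routes \
  -n default-via-firewalls --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.100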

References

Vendor Model Architectures
https://docs.fortinet.com/document/fortigate-public-cloud/6.0.0/use-case-high-availability-for-fortigate-on-azure/224311/basic-concepts
https://www.paloaltonetworks.com/resources/reference-architectures
https://blogs.cisco.com/security/secure-cloud-for-azure

F5 docs on cloud failover (Azure seems slower than other platforms):
https://clouddocs.f5.com/products/extensions/f5-cloud-failover/latest/userguide/performance-sizing.html

MS docs on public/private load balancers in Azure:
https://docs.microsoft.com/en-us/azure/load-balancer/components


Thursday 20 May 2021

GDPR and Appropriate Security Controls

GDPR article 32 requires "appropriate" controls to protect personal data, but what exactly is appropriate? The ICO has published various cases that can be used to gauge their expectations.

Having a risk assessment helps to qualify the approach and show due diligence, but it's a subjective process so results will vary, and if the ICO disagree with the outcome then the financial penalties can be large. Marriott claimed a risk-based approach supported their decision to focus database monitoring and encryption on cardholder data; however, the ICO disagreed with that conclusion and held them liable for not giving personal data the same level of protection.

Industry regulations such as PCI can help indicate that controls are "appropriate".

Conclusion

The ICO appear to take the state of the art as their baseline and seem to have a fairly idealistic view of implementing enterprise security. They look at recommendations from bodies such as the NCSC, NIST, etc. alongside industry regulations, and consider any deviation from those as an indicator of negligence which increases the liability. Sensible efforts will be considered, such as Marriott's MFA implementation, which ultimately turned out to be incomplete; that was not counted against them in the judgement because an independent audit had informed them the control was in place.

A few particular controls were called out in multiple cases and should be on any organisation's radar:
  • Application Whitelisting.
  • Multi-factor authentication.
  • Detection of configuration change.
  • Privileged access management (PAM) and implementation of least privilege.
  • Risk Assessment of personal data storage and processing.
  • Awareness of good practice and current issues with technologies in use.
  • Strict control over remote access.
  • Compliance with internal security policies and relevant industry regulation.
Network segregation was also discussed in some cases, highlighting that segregating the IP network is not the whole story if the same Active Directory is permitted into all network areas. This is an important consideration for organisations who may be implementing segregation, particularly after the Maersk NotPetya incident.

The specific items called out in some of the big cases, which indicate the expected controls, are:

Ticketmaster

https://ico.org.uk/media/action-weve-taken/2618609/ticketmaster-uk-limited-mpn.pdf
  • Hacked via a 3rd party JavaScript chat bot they'd included on their website; the 3rd party was itself compromised.
  • The 3rd party had ISO 27001, but this was not considered relevant by the ICO as it's not a software security standard.
  • The ICO used blog posts and Stack Overflow questions about the risks of including 3rd party JavaScript on websites as evidence that this was a recognised issue, combined with supply chain articles by NIST and the NCSC.

BA

https://ico.org.uk/media/action-weve-taken/mpns/2618421/ba-penalty-20201016.pdf
  • Referenced the CPNI GPG (Good Practice Guide) from April 2015 on assessing supply chain risk.
  • Mentioned various NCSC and NIST documents recommending MFA.
  • Highlighted that BA's own internal policy mandated use of MFA.
  • However their implementation of Citrix did not apply MFA to all access.
This highlights the need to test the implementation of security controls to ensure they are working and effective, or at least audit their configuration.
  • Did not have a risk assessment of the Citrix solution or the applications accessed through it.
  • Had not locked down the services available by Citrix.
  • Lack of app whitelisting.
  • Lack of server hardening.
  • Restrictions on apps being opened were limited to stopping users clicking the icon, but apps could still be run via File->Open.
  • Environment was pen tested, but scope appears to have been limited so many issues were not detected.
  • Called out use of hardcoded passwords.
  • Suggested logging access to certain files containing hardcoded passwords would be a suitable control.
  • Lack of implementation of "least privilege" principles.
  • Lack of monitoring of unexpected (e.g. guest) account logins.
  • No use of PAM.
  • Limited monitoring.
  • Used PCI DSS but were not compliant with it.
  • Left debug logging in place on live systems, increasing data available to attackers.
  • Lack of File Integrity Monitoring (FIM).
  • No ability to detect changes to the website code.

Marriott

Marriott thought MFA was implemented and had even audited it, but there were undiscovered gaps. The ICO accepted this and did not include it in their assessment.

  • Insufficient monitoring of privileged accounts - access to systems was not logged, noting from other cases that logging alone is of little value unless someone is checking the logs or being alerted.
  • Insufficient monitoring of databases - Guardium was deployed, but only on tables storing cardholder data; a risk-based approach had been used to choose where to monitor, but this was not deemed adequate by the ICO. The SOC/SIEM was not logging user access to databases, and boundary controls were not enough without internal monitoring.
  • Control of critical systems - app whitelisting, monitoring/alerting.
  • Encryption - no justification/risk assessment of data held without encryption.

DSG

https://ico.org.uk/media/action-weve-taken/mpns/2616891/dsg-mpn-20200107.pdf
  • It was deemed that PAN (Primary Account Number - i.e. card data) does constitute personal data, so be wary of this: data is considered PII if people can be indirectly identified by it, phone numbers being a common example that many may not initially consider to be PII. See: https://www.gdpreu.org/the-regulation/key-concepts/personal-data/
  • PCI DSS is not in itself indicative of appropriate security for PII, but certifications like this can be helpful in deeming what level is considered appropriate; it sounds like DSG had some issues with PCI compliance.
  • Segregation was considered as both network/IP and Active Directory, the inference being that segregating your network but not your AD is probably not appropriate.
  • Not having a host-based firewall was called out, despite the fact that it wouldn't have prevented this attack. The ability to detect changes to the configuration of these local firewalls was also called out as a requirement.
  • Inadequate patching on domain controllers.
  • No logging/monitoring in place to detect and respond to attacks.
  • Outdated versions of Java.
  • Not strictly controlling privileged access rights - i.e. no PAM.
  • Not using standard builds with hardening built in.
  • Patching of devices was not compliant with their own policy.
  • The case notes state that application whitelisting is considered an "appropriate" control.


Monday 25 November 2019

Powershell for AD Querying

Powershell commands for mucking about with AD:

Basic info on the user:
Get-ADUser username

List all groups a user is in:
Get-ADPrincipalGroupMembership username | select name


List all users in a group
Get-ADGroupMember "Groupname" | select name


List all groups in the AD
Get-ADGroup -SearchBase "OU=GROUPS_OU,DC=domain,DC=com" -Properties member -Filter * | Select-Object name, @{n='count';e={$_.member.count}} | Sort-Object count -Descending


Batch file to run powershell:
@echo off & setlocal
set batchPath=%~dp0
powershell.exe -ExecutionPolicy ByPass -file "%batchPath%file.ps1"


Thursday 12 May 2016

Automated Install for OpenStack Kilo on Ubuntu Server

I've been messing around with OpenStack lately. There is the excellent DevStack system for building test deployments, but I wanted something that would survive a reboot, which meant needing a full OpenStack install. There are some great docs on the OpenStack website for installing Kilo on Ubuntu 14.04.

To automate things I've scripted the process above with a few tweaks, available on github:

https://github.com/unixhead/mattstack

It's meant for installing on a single host and playing around with; there's no telemetry or orchestration module and it just uses the basic legacy nova networking.

How to use it!

  • Download and install the Ubuntu Server 14.04 (Trusty) image. It won't work on any other version! You don't need to specify any software components during the install, although SSH server might be handy.
  • Download the build-kilo.sh script from https://raw.githubusercontent.com/unixhead/mattstack/master/build-kilo.sh
  • Edit the script ("nano build-kilo.sh") and set the variables at the top, they have descriptive comments explaining what needs doing.
  • Run "chmod +x build-kilo.sh" to make it executable.
  • Run the script as root. "sudo su -" and then "./build-kilo.sh"
  • Reboot at the end of the install and you should have a working OpenStack Kilo build. The steps are consolidated into a few commands below.
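
Roughly the same steps as a copy/paste block (using sudo on the script rather than a root shell):

#run on a fresh Ubuntu Server 14.04 (Trusty) install
wget https://raw.githubusercontent.com/unixhead/mattstack/master/build-kilo.sh
nano build-kilo.sh            #set the variables at the top first
chmod +x build-kilo.sh
sudo ./build-kilo.sh          #must run as root
sudo reboot                   #reboot once the install completes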

If you want to rebuild then you're probably best off re-installing Ubuntu server and starting from scratch.

There are a few niggles with the original build process, such as getting error 500 / denied messages when trying to perform various operations, resolved by changing Keystone not to use memcached. I also had some issues with QEMU due to not having the nova-compute-qemu package installed and /etc/nova/nova.conf not being quite right; this is needed for deploying OpenStack in something like Virtualbox without KVM support. The script should sort those problems out, but the manual fix is sketched below.
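
For reference, the manual QEMU fix is roughly the following sketch (run as root); the script may handle the config edit differently:

#install the plain-QEMU compute package for hosts without KVM support
apt-get install -y nova-compute-qemu
#then in /etc/nova/nova.conf, under the [libvirt] section, set:
#  virt_type = qemu
service nova-compute restart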



Saturday 2 January 2016

Vagrant Lab for HAProxy

This article is about setting up a lab using Vagrant to play with the HAProxy load balancer.

If you want the TL;DR version where you just copy/paste a few lines and the lab gets created, then this will do the job; it's explained in more detail below. It's great that with tools like Virtualbox and Vagrant such a lab can be set up so easily - this would have taken days to build prior to virtualization!

#install software
sudo apt-get install virtualbox vagrant git

#configure host-only subnet address in virtualbox
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.28.128.1 --netmask 255.255.255.0
VBoxManage dhcpserver modify --ifname vboxnet0 --ip 172.28.128.1 --netmask 255.255.255.0 --lowerip 172.28.128.100 --upperip 172.28.128.250

#install the lab files; the box fails to auto-download for some reason so add it manually (should be fixed in future)
git clone https://github.com/unixhead/haproxy-basic-lab && cd haproxy-basic-lab 
vagrant box add hashicorp/precise32 https://vagrantcloud.com/hashicorp/boxes/precise32/versions/1.0.0/providers/virtualbox.box


#run the lab
vagrant up


Now there is a slight caveat in that I use Linux Mint and the current versions of Vagrant & Virtualbox aren't quite right, so I had to manually install Vagrant from the website, but never let the truth get in the way of a good story.

My Virtualbox host-only network uses the range 172.28.128.0/24 and the network to be created is shown below: a simple load balancer in front of two web servers. The client is also the hypervisor hosting the VMs. It's very similar to the configuration of the Vagrant tutorial system. Virtualbox by default uses 192.168.56.0/24 for its host-only networks, but that overlaps with a few places I work so I had to change it.



The files needed are listed below, save them all to the same directory and run "vagrant up" in it. You can download them all in one go from github with the command:

git clone https://github.com/unixhead/haproxy-basic-lab

Vagrantfile - The configuration for Vagrant itself, providing 3 VMs using the Ubuntu 32-bit image: one HAProxy load balancer and two web servers.

Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
    v.cpus = 1
  end

  config.vm.define "lb" do |config|
    config.vm.hostname = "lb"
    config.vm.network "private_network", ip: "172.28.128.10"
    config.vm.provision :shell, path: "bootstrap-haproxy.sh"
  end

  config.vm.define "web1" do |config|
    config.vm.hostname = "web1"
    config.vm.network "private_network", ip: "172.28.128.11"
    config.vm.provision :shell, path: "bootstrap-apache.sh"
  end

  config.vm.define "web2" do |config|
    config.vm.hostname = "web2"
    config.vm.network "private_network", ip: "172.28.128.12"
    config.vm.provision :shell, path: "bootstrap-apache.sh"
  end

end


bootstrap-apache.sh - This script runs on the webservers after Vagrant has built them. It installs Apache with PHP, then sets the web root to the current Vagrant project directory.

#!/usr/bin/env bash

apt-get update
apt-get install -y apache2 php5
if ! [ -L /var/www ]; then
  rm -rf /var/www
  ln -fs /vagrant /var/www
fi


bootstrap-haproxy.sh - This runs on the load balancer after build, it installs HAProxy and copies the provided configuration file.

#!/usr/bin/env bash

apt-get update
apt-get install -y haproxy hatop
cp /vagrant/haproxy.cfg /etc/haproxy
echo "ENABLED=1" > /etc/default/haproxy
service haproxy start


haproxy.cfg - Basic HAProxy configuration for load balancing port 80 between two web servers.

frontend http
    bind *:80
    mode tcp
    option tcplog

    default_backend web-backend

backend web-backend
    balance roundrobin
    mode tcp
    server web1 172.28.128.11:80 check
    server web2 172.28.128.12:80 check


index.php - A basic web index to show which web server was accessed by printing the server's hostname. Both web servers will load the same file.

<?php
echo gethostname() . "\n";
?>


To test it, simply run curl a few times against the IP address of the load balancer; the replies show that the web sessions are being balanced across both hosts:

matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
matt@client haproxy-lab $ curl 172.28.128.10
web2
matt@client haproxy-lab $ curl 172.28.128.10
web1
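
A quick loop does the same check if you don't fancy typing curl repeatedly:

#hit the load balancer VIP a few times and watch the responses alternate
for i in $(seq 1 6); do curl -s 172.28.128.10; done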


Tuesday 10 February 2015

Cisco IOS TCL - Reset Interface if DHCP Fails

I've got some devices where DHCP doesn't always work properly for a number of reasons, and running shut/no shut on the Cisco router interface seems to fix it. To automate that I've knocked up a TCL script.

The script itself:


#script to check if an interface has an IP address and reset it if not.
#copy to flash via TFTP or write it using the technique described here:
# http://sabotage-networks.blogspot.co.uk/2013/02/applying-ciscos-new-licenses-without.html
#
#set as a kron job to be run every 60 minutes with:
#
# kron policy-list checkInterface
#  cli tclsh flash:/checkIP.tcl
#  exit
#
# kron occurrence checkInterface in 60 recurring
#  policy-list checkInterface
#


#set this to name of WAN interface
set interface "fa0/0"

#when no lease has been obtained, the line reads "Internet address will be
#negotiated using DHCP", so a match on DHCP means the interface has no IP
set output [exec "show interface $interface | include Internet address"]
if {[regexp (DHCP) $output]} {
#no ip found, reset interface
puts "no ip address found, restarting interface $interface"
ios_config "interface $interface" "shutdown";
after 5000;
ios_config "interface $interface" "no shutdown";
}
