ILOVEYOU – 20 Years On

This month marks 20 years since the ILOVEYOU virus hit computer networks. For me, it represented a milestone in my security career. Up until that point security was a technical challenge, solving problems associated with the global distribution of public keys for secure email exchange. (As an aside, I’ve blogged on this many times; it is a challenge still not resolved in a usable way today.)

My first exposure to ILOVEYOU was when the Nexor CEO came into our office confessing he may have clicked something, and his computer was now behaving strangely. The remedy was fairly easy: disconnect from the network and rebuild the PC. To be honest, as a technologist, it was quite exciting at the time, seeing a real live virus in action.

The learning was more important. Security was about far more than technology; it was also about people and process.

I could go on about how the CEO should not have clicked the link, but the last 20 years have shown those links will still be clicked no matter how much education we provide. Don’t get me wrong: education is still vital and will reduce the number of incidents, but incidents will still happen.

The more interesting part of that 20-year-old incident was the learning around incident response. We were able to contain the incident, because we (sort of) knew what we were doing and took no risks – we went for a rebuild, despite the inconvenience to the CEO. What we had unwittingly created was an early example of an “incident response plan”. This was about process, relatively simple technical steps (rebuild a PC) and some post-event briefings. It was not long after that I started to understand where emerging standards like BS7799, which became ISO 27001, fitted into the overall security story.

This month, 20 years later, I’ve just briefed one of my team who is creating an incident response plan for a customer. Who would have thought such a simple incident would have direct relevance 20 years later!

Lockdown 2020 – Learning Python3

Lockdown 2020 gave me the opportunity to learn a new skill – Python3 & GitHub DevOps.

My career started in the 1980s/early 1990s as a software developer, primarily in C for an open source package implementing the full seven-layer ISO stack for a directory service. If used today it would be akin to an internal Active Directory Forest merged with external DNS search. Since then, during my journey through management and consulting, I’ve dabbled with bits of code, mainly Perl.

Last year, to make my smart home work, I needed a few bug fixes in an open source platform called Home Assistant, so I started to learn bits of Python3. Then, after a torturous acceptance process (due to the high coding standards), a minor upgrade I implemented was accepted. While torturous, it was a great lesson in how a modern open source release works – very different from when I ran the Isode open source releases in the early 1990s.

In normal times, I spend some of my ‘spare’ time supporting the admin team of Nottingham Leander Swimming Club. We use a platform called Swim Club Manager (SCM), and had a few Perl scripts to help with data cleansing and verification. Lockdown came, and I decided to upgrade the scripts to Python3 and make them available to other SCM users.

I thought it would take me a few evenings, but with some significant scope creep, and many more than a few evenings later, the result is SCM Helper. With support from the SCM community to iron out a few bugs related to test cases I had not considered, a number of people are now using it to clean their data sets.

Learning points…

  • Python3
  • Multi-threaded Python3 (wow, if only I had this back in the day – I had to do threading by hand to make distributed directory search work).
  • Object-oriented programming (this was a big deal, as my C background was very much procedural).
  • Tcl/Tk GUI via tkinter. I’ve never developed a windowed GUI before – easier than I thought it would be (once I realised tkinter was not thread safe).
  • Git (again, how easy it is now.  Had to use FTP of tar balls back in the day!)
  • GitHub DevOps (at a trivial level)
  • Creating Python packages, via PyPi
  • Yaml for configuration
  • Compiling Python to a Windows.exe
  • New support tools (black, flake8, pylint, isort, codespell, Python setuptools, twine and pyinstaller)
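To illustrate the threading and tkinter points above, here is a minimal sketch (not SCM Helper’s actual code – the server names and task are invented) of the pattern that makes multi-threading so much easier than hand-rolled threading, and of the standard workaround for tkinter not being thread safe: worker threads never touch widgets, they only post results to a queue that the main (GUI) thread drains.

```python
# Run slow work on a thread pool, hand results back via a queue,
# because tkinter must only be touched from the main thread.
import queue
from concurrent.futures import ThreadPoolExecutor

results: "queue.Queue[str]" = queue.Queue()

def search(server: str) -> None:
    # A stand-in for a slow task such as a distributed directory
    # lookup. Workers only put results on the queue - no GUI calls.
    results.put(f"result from {server}")

servers = ["dsa1", "dsa2", "dsa3"]
with ThreadPoolExecutor(max_workers=3) as pool:
    for server in servers:
        pool.submit(search, server)
# Leaving the 'with' block waits for all workers to finish.

# In a real tkinter app the main thread would poll the queue with
# root.after(100, poll_queue); here we simply drain it.
found = sorted(results.get() for _ in servers)
print(found)  # → ['result from dsa1', 'result from dsa2', 'result from dsa3']
```

The key design point is that the queue is the only shared object, so no locks are needed in application code.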

My reflection on the learning? Wow, that was relatively easy, even if it took a lot longer than I planned. If only these tools had been available in the 80s/90s, they would easily have cut the time to develop the Isode/Quipu application in half. (I’m not sure the code performance would be as good, but with the significant processing power of today, I’m not sure that would have mattered.) Added to that, the widespread availability of Python packages to do most of what I needed (HTTP, crypto, YAML and CSV parsing, and schema validation, to name a few) enabled me to focus on the application, and not support tools.
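To give a flavour of the data cleansing and verification side, here is a hedged sketch of the kind of check SCM Helper performs – the field names, date rule and sample data are invented for illustration, not SCM’s real schema – using only the standard library’s csv module.

```python
# Parse CSV member records and validate each row against simple
# rules, collecting human-readable problems for the admin team.
import csv
import io
import re

DATA = """name,dob,email
Alice Smith,2008-03-14,alice@example.com
Bob Jones,14/03/2008,bob-at-example.com
"""

DOB_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # ISO dates only

def validate(row: dict) -> list:
    """Return a list of problems found in one CSV row."""
    errors = []
    if not DOB_RE.match(row["dob"]):
        errors.append(f"{row['name']}: bad date of birth {row['dob']!r}")
    if "@" not in row["email"]:
        errors.append(f"{row['name']}: bad email {row['email']!r}")
    return errors

problems = []
for row in csv.DictReader(io.StringIO(DATA)):
    problems.extend(validate(row))

for problem in problems:
    print(problem)
```

In the real tool the rules and file come from configuration rather than being hard-coded, but the shape – read, validate, report – is the same.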

Software Developers of today have it so easy!

Final note: good job I picked Python3, as Python2 went end-of-life in April 2020!

Security, Privacy and False Positives in the Covid-19 App

The COVID-19 app is being trialled on the Isle of Wight, and has already created lots of public debate. The debate centres on security and privacy. However, there is one part of security that has so far not been aired – false positives.

Before I discuss false positives, I want to spend a few paragraphs on the security and privacy debate. The two topics are often confused, but are quite different.


The security aspects of the app have been covered reasonably well so far.

There is a missing piece of the evidence – assurance of how the product is built. How can we be certain the source code published is the source code the app runs? How do we know it has not been accidentally or maliciously modified? This is a topic I spend a lot of time helping customers with in my day job, with encouragement from the NCSC, whose secure application development guidance makes clear this is an important topic. So perhaps, to reassure the public, we can expect more information in due course on how the app is built.


The privacy aspect is getting greater public discussion. The NCSC blog describes what information is shared by the app and talks about anonymisation. In the privacy world anonymisation is an important and difficult topic, which is why GDPR specifically calls it out. As research has shown, de-anonymisation (or re-identification) is relatively easy when you have access to multiple data sources that you can triangulate (see “Estimating the success of re-identifications in incomplete datasets using generative models”). This is where security and privacy deviate. The assurance above may demonstrate the app is secure, but does not prevent the de-anonymisation risk – by design the app assigns a unique identifier to each phone. The NCSC blog states a Data Privacy Impact Assessment will shortly be published; this will be an important document, which I hope will give expert opinion on the risk of de-anonymisation.
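To make the triangulation point concrete, here is a toy sketch – the data and field names are entirely made up – of how an “anonymised” dataset can be re-identified by joining it with a second, public dataset on shared quasi-identifiers such as postcode area and birth year.

```python
# Toy re-identification by triangulation: neither dataset alone
# names the user behind "u1", but joining them does.
anonymised = [
    {"id": "u1", "postcode": "NG1", "birth_year": 1972, "visits": 9},
    {"id": "u2", "postcode": "NG7", "birth_year": 1985, "visits": 2},
]
public = [
    {"name": "A. Example", "postcode": "NG1", "birth_year": 1972},
]

def reidentify(anon, known):
    """Map anonymised IDs to names via matching quasi-identifiers."""
    matches = {}
    for a in anon:
        for k in known:
            if (a["postcode"], a["birth_year"]) == (k["postcode"], k["birth_year"]):
                matches[a["id"]] = k["name"]
    return matches

print(reidentify(anonymised, public))  # → {'u1': 'A. Example'}
```

The research cited above shows that with enough quasi-identifiers this kind of match is unique for most of the population, which is why “anonymised” is such a loaded claim.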

False Positives

Now onto the main point of this blog – false positives.

The Covid-19 app is designed for a member of the public to report they have virus symptoms and alert their contacts. These contacts, and members of their household, will be requested to self-isolate for a period. What I cannot see is what happens if the initial person’s symptoms turn out to be a false alarm – will the contacts be re-contacted to say they no longer need to isolate?

Now, I’m paranoid – my job requires me to be – but what happens if a malicious person picks up on this? What if they go into a public space and deliberately make close contact with as many people as they possibly can, then go home and make a false report that they have symptoms? Presumably, the close contacts they made will get an alert and be required to self-isolate? What if this is a coordinated action by a hostile group of people – could they get the app to impose a new lockdown by stealth?

In assessing the app, NCSC will have done a threat assessment, and I am certain this threat will have been considered – it is an element of their mission by being part of GCHQ. In considering the threat they will have considered mitigations. There is talk in the NCSC blog of “algorithms” that assess whether contacts need to be informed based on the risk. Perhaps the likelihood of it being a false positive is considered in the risk assessment.

How do you assess the likelihood of a false positive? De-anonymisation could be a useful contributory technique!





Reblog: How Do You Remain Savvy With Your Supply Chain

Reblog. Original (TechUK)

By now, we must all be aware that Cyber Security is a prominent issue – we recently heard mainstream news reports about Ransomware hitting the NHS and often hear about the latest data theft of millions of passwords or credit cards.

Home users should be starting to get the message about keeping our devices up to date, choosing good passwords, and even using two factor authentication where possible. But do we spend sufficient time thinking about the products or services we buy?


The last post: CyberMatters comes to an end

CyberMatters started almost 5 years ago, as a proof of concept blog platform for Nexor. Over that time, we’ve covered a wide range of topics, from general security advice on passwords to commentary on topics of the day and discussion of the latest technology concepts Nexor has been working on.

Over the last few months I’ve focused my efforts on looking at the issues of secure information exchange in the cloud – how can the concepts and architectures Nexor has applied to traditional environments morph and adapt to protecting cloud environments.

The infographic below is a summary of a white paper we’ve released at CYBERUK today discussing our views on how these techniques can be used to enable the Cloud for secure information sharing and exchange.

The Cloud is undoubtedly becoming a core technology we all use – CyberMatters has always run in the cloud using WordPress SaaS! Securing the cloud is becoming a specialist discipline, and I’ve been given the opportunity to build a specialist cloud security team at Nexor.

As part of this, Nexor has been consolidating our branding and web presence. The Qonex brand, built to focus our Cloud and IoT activities, will be rolled back into Nexor as core business, as will CyberMatters.

Consequently, this will be CyberMatters’ last blog post; my future mumblings will be posted on the Nexor blog.

I hope you’ve found the blog of value and interest over the last 5 years, and want to take this opportunity to thank you for your readership, comments, feedback and encouragement. I do hope we’ll meet again at

Update 2021

The blog refers to a historic site called   The current site has nothing to do with the original developed by me, and is using the content without permission.

The “NHS” Attack

The poor and inaccurate reporting of the NHS Ransomware incident over the weekend has irked CyberMatters into coming out of hibernation. With so much to say, it’s hard to know where to start.

WannaCrypt ransomware demand

Not targeted

First, the NHS was not targeted by a Cyber Attack. The attack affected ANY system that was vulnerable; the sad fact is the NHS was vulnerable, as were many other global organisations, and thus the attack was able to succeed.

By Friday evening, and over the weekend, the media were taking interviews from various industry ‘experts’. Sadly, too many were using the opportunity to push their latest and greatest product feature that would provide protection. Let’s be clear, if any product supplier says their product would have prevented the incident, their comment should be taken with a pinch of salt. THERE IS NO MAGIC BULLET PROTECTION. (However, there were also some very good reports from proper experts).

Defence in Depth

A solution requires an organisation to have a defence in depth strategy, as long promoted in this blog.

Protection measures are needed on all interfaces that can bring malware into the IT systems – email, web sites, CDs and memory sticks, etc. These need to be multi-layered – e.g., both boundary and endpoint protection – and multi-faceted – e.g., anti-virus, sandboxing, limited user rights and advanced verification techniques.

A defence in depth strategy will then assume these measures have failed, and provide mitigations to prevent the spread. These typically include patching and network segmentation.

The next layer will then assume these have failed, and provide monitoring mechanisms to look for suspicious network behaviour, such as unusual network traffic.

If these protect and detect measures fail, you then need to enact pre-planned response measures.

The NHS scenario

It is too early to tell, but it is my belief the NHS was so badly hit because its defence in depth strategies were not effective.

Boundary protection systems let the malware in (and, to be fair, this is likely in most organisations, unless excellent user training and advanced data verification tools are used), and the lack of patching allowed the malware to spread.

Then, due to the lack of segmentation, the only response mechanisms were to shut all systems down until a more detailed assessment could be made.

Cyber Essentials

My first reaction on hearing of the way the malware was spreading was that this would be a good advert for Cyber Essentials. To this end, I thought Amber Rudd, Home Secretary, presumably briefed by Ciaran Martin, head of NCSC, missed an opportunity to promote implementing Cyber Essentials as immunisation. But her detailed words reveal why…

She said there were three key mitigations: patching, anti-virus and backups. Cyber Essentials is a prevent strategy, and does not include the prepare element of backups. Maybe a lesson learnt that should feed into a revision of Cyber Essentials?

What went well?

Part of the NCSC’s £1.9bn is spent on the Cyber Information Sharing Partnership (CiSP), which incorporates information from the UK Computer Emergency Response Team. By 3pm, the incident was being discussed by experts, and by 4pm the relevant Microsoft patch had been identified. If you are not part of CiSP, I recommend consulting CiSP as part of your incident response plans.

The NCSC were also quick to publish specific mitigation advice by Sunday.

Windows XP

Much of the press debate has centred on unpatched Windows XP systems. Irrespective of the rights or wrongs of Microsoft not providing updates, this issue has been known for a long time. For example, government departments running Windows XP would not be allowed to connect to the government public sector network, forcing departments to resolve the issue.

The NHS ‘defence’ is that legacy applications do not work on newer Windows systems. Again, whether that is the full truth matters not. If you know this risk exists, then you MUST deploy defence in depth, and most importantly segmentation and isolation strategies, to manage the risk.

Nexor – how did we react?

We became aware of the issue via open source monitoring mid-afternoon on Friday. We convened an ad-hoc security incident response meeting and consulted CiSP to determine the nature of the issue, from which we were able to establish that the March Microsoft patch provided immunity. Cyber Essentials demands we roll out patches quickly, so we could be confident the immunity would be effective, but we decided to double-check our patch management records in any case. By 5pm we concluded we were OK this time.

Who to trust?

One of the hard parts of all this is knowing who to trust. Who is giving an accurate and balanced story, versus plugging a corporate position? This is hard to answer. The best I can come up with at the moment, other than word-of-mouth / reputation, is to check whether the person giving advice is on the Trusted Security Advisors Register – not perfect, but the closest we have right now.