The COVID-19 app is being trialled on the Isle of Wight, and has already generated plenty of public debate. That debate centres on security and privacy. However, there is an aspect of security that has so far not been aired – false positives.
Before I discuss false positives, I want to spend a few paragraphs on the security and privacy debate. The two topics are often confused, but they are quite different.
The security aspects of the app have been covered reasonably well so far.
- NCSC have described the architecture in the blog “The security behind the NHS contact tracing app”
- The source code has been published on GitHub.
- Pen Test Partners, who have an excellent track record in exposing weaknesses in consumer products, have performed an initial analysis without raising any significant issues.
There is one missing piece of evidence – assurance of how the product is built. How can we be certain the published source code is the source code the app actually runs? How do we know it has not been accidentally or maliciously modified? This is a topic I spend a lot of time helping customers with in my day job, encouraged by NCSC, whose secure application development guidance stresses its importance. So perhaps, to reassure the public, we can expect more information in due course on how the app is built.
The privacy aspect is getting greater public discussion. The NCSC blog describes what information is shared by the app and talks about anonymisation. In the privacy world, anonymisation is an important and difficult topic, which is why GDPR specifically calls it out. As research has shown, de-anonymisation (or re-identification) is relatively easy when you have access to multiple data sources that you can triangulate (see “Estimating the success of re-identifications in incomplete datasets using generative models”). This is where security and privacy diverge. The assurance above may demonstrate the app is secure, but it does not prevent the de-anonymisation risk – by design, the app assigns a unique identifier to each phone. The NCSC blog states a Data Privacy Impact Assessment will shortly be published; this will be an important document, which I hope will give expert opinion on the risk of de-anonymisation.
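To make the triangulation risk concrete, here is a minimal sketch of a linkage attack. Both datasets, all field names, and all values are invented for illustration – nothing here comes from the app itself. The idea is simply that an “anonymised” dataset can be joined to a public one on shared quasi-identifiers (postcode, age, sex), re-attaching names to sensitive records.

```python
# Hypothetical illustration of a linkage (re-identification) attack.
# Both datasets are invented; neither relates to the real app.

# An "anonymised" release: direct identifiers removed,
# quasi-identifiers (postcode, age, sex) retained.
anonymised = [
    {"postcode": "PO30", "age": 34, "sex": "F", "diagnosis": "covid+"},
    {"postcode": "PO31", "age": 52, "sex": "M", "diagnosis": "covid-"},
]

# A second, public dataset (e.g. an electoral roll) that includes names.
public = [
    {"name": "Alice", "postcode": "PO30", "age": 34, "sex": "F"},
    {"name": "Bob",   "postcode": "PO31", "age": 52, "sex": "M"},
]

def reidentify(anonymised, public):
    """Join the two datasets on their shared quasi-identifiers."""
    keys = ("postcode", "age", "sex")
    matches = []
    for a in anonymised:
        for p in public:
            if all(a[k] == p[k] for k in keys):
                matches.append({"name": p["name"], "diagnosis": a["diagnosis"]})
    return matches

print(reidentify(anonymised, public))
```

Every quasi-identifier combination that is unique across both datasets re-identifies a person – which is exactly why GDPR treats anonymisation as harder than simply deleting the name column.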
Now onto the main point of this blog – false positives.
The Covid-19 app is designed to let a member of the public report that they have virus symptoms and alert their contacts. These contacts, and members of their households, will be asked to self-isolate for a period. What I cannot see is what happens if the initial person’s symptoms turn out to be a false alarm – will the contacts be re-contacted and told they no longer need to isolate?
Now, I’m paranoid – my job requires me to be – but what happens if a malicious person picks up on this? What if they go into a public space, deliberately make close contact with as many people as they possibly can, then go home and make a false report that they have symptoms? Presumably, the close contacts they made will get an alert and be required to self-isolate. What if this is a coordinated action by a hostile group of people – could they get the app to impose a new lockdown by stealth?
In assessing the app, NCSC will have done a threat assessment, and I am certain this threat will have been considered – it is part of their mission as part of GCHQ. In considering the threat, they will have considered mitigations. The NCSC blog talks of “algorithms” that assess whether contacts need to be informed based on the risk. Perhaps the likelihood of a report being a false positive is considered in that risk assessment.
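One way such a mitigation could work is sketched below. This is purely a toy model under my own assumptions – the app’s real algorithm is not described here, and the function, weights, and threshold are all invented. The idea is that a contact-risk score could weight proximity and duration, and then discount alerts that originate from an unconfirmed self-report rather than a confirmed test, which would blunt the mass-false-report attack.

```python
# Toy sketch only: the real app's risk algorithm is not public knowledge
# to me, and every number below is an invented assumption.

def contact_risk(duration_minutes, distance_metres, report_confirmed):
    """Toy risk score: longer, closer contacts score higher;
    unconfirmed (self-reported) index cases are discounted."""
    proximity = max(0.0, 1.0 - distance_metres / 10.0)  # 0..1, closer is higher
    exposure = min(duration_minutes / 15.0, 1.0)        # saturates at 15 minutes
    confidence = 1.0 if report_confirmed else 0.5       # assumed discount factor
    return proximity * exposure * confidence

ALERT_THRESHOLD = 0.5  # invented cut-off

# A 15-minute contact at 2 m from a confirmed case would trigger an alert,
# but the identical contact from an unconfirmed self-report would not.
print(contact_risk(15, 2.0, True) >= ALERT_THRESHOLD)
print(contact_risk(15, 2.0, False) >= ALERT_THRESHOLD)
```

Under this toy model, a hostile group self-reporting symptoms would generate scores below the alert threshold until a confirmed test backed the report – one plausible shape a false-positive mitigation could take.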
How do you assess the likelihood of a false positive? De-anonymisation could be a useful contributory technique!?