The genesis of Reviewfacility.eu with Michiel Leenaars

Michiel Leenaars

In pandemic times, the Internet has proven its potential to help keep our societies going by facilitating remote collaboration, distance learning, and innumerable other use cases while we are unable to travel to work and meet. Some of these technologies already existed, but matured more rapidly than ever at the immense scale required for their deployment – like video conferencing solutions. For other problems, new solutions had to be improvised or created. Many of these solutions somehow involved the Internet, or local networking technologies like Bluetooth. The free and open source expertise of the Next Generation Internet community comes in handy for evaluating and maturing these solutions. At the request of the European Commission, representatives of the NGI community set up a technical review facility that provides independent security and privacy analysis of COVID-19 related technology. The Emergency Tech Review Facility is a collaborative, community-focussed effort to quickly and transparently analyse COVID-19 solutions.

In this second in a short series of interviews, we talk to project lead Michiel Leenaars from NLnet Foundation to learn more about the reviewfacility.eu platform and the larger effort behind it. If you are interested, you are invited to join in on the crowdsourced mission to map out the problems and solutions of emergency tech like contact tracing apps – even for a small task. It takes the collaborative work of many to arrive at trustworthy solutions that work for everyone, and only one subject to call out the emperor’s new clothes. Everything happens out in the open.

What’s your motivation to set up the Emergency Tech Review Facility?

When the corona pandemic happened, it became clear that there was an explosion of new solutions to solve many different (and sometimes cascading) problems. Everything was happening in parallel, and there was little coordination or shared understanding beyond the individual projects. Essentially, there was a lot of reinventing the wheel, based on whatever technology those in need of a solution had experience with or could get their hands on. And that situation is not efficient, nor is it likely to yield optimal results.

Obviously, the way most people in government and the health sector work with IT projects is through traditional procurement, with a lot of paperwork, involvement of external consultants to outsource the risk, and many checks and balances. These ‘oil tanker style’ procurement procedures are far from lightweight, and that is putting it mildly. In this case, there simply was no time to get anything done the traditional way. We observed a fascinating mix of grassroots and top-down efforts springing up all across the globe.

We wanted to help converge the solution space and get the necessary facts on the table to make the right decisions to move forward – with as much confidence as possible. At the same time, we wanted to provide practical support and rally brain power in support of the work being done. Why develop a home-grown solution everywhere, when in reality there is no competition? Why act as a bystander when people with good intentions are trying to save lives, but are humans with limitations too? There is great value in making things comparable, to help understand whether assumptions hold and to benchmark what is out there. Sub-optimal approaches can be replaced by better approaches. And bad approaches can be stopped before they have bad consequences.

How can things improve in such a situation?

It became obvious that the most viable solutions being developed were grounded in free and open source software. Free or libre software is software just like any other, but it is created and distributed under terms that empower the users above anything else. There is a license attached to the source code that allows anyone to run the software for any purpose, as well as to study, change, and distribute it and any adapted versions too. Free software has the great benefit and flexibility of ‘permission-free innovation’, meaning that you have full access to everything and can adapt it to whatever use case you have – without asking anyone for permission. Having the source code available also allows anyone – security experts, digital rights organisations, and individual citizens – to investigate whether an application contains any back doors, whether introduced intentionally or by accident.

That is a critical aspect in this particular situation. Within the NGI initiative, one of the four pillars is ‘trustworthiness’ – trust based on assumptions is just so much weaker than trust based on actual hard evidence and transparency. Trust has to be earned and justified, and if possible proven with boring math and proof systems – especially so with something as complex and privacy-invasive as COVID-19 contact tracing. If you ask a citizen of any country on the planet to place a bet on whether a random new IT project from her or his government would be a big failure or a big success, where do you think most people would put their money? Why would an app created in an emergency situation magically be any different? If we want a solution, we should look at this not as a collection of isolated projects, but as a collaborative global path-finding effort. By working together, using best-of-breed solutions and continuous open feedback, the chances of getting good results vastly increase.

You are trying to establish whether we should trust COVID-19 apps. Should we?

That judgment call is actually not up to us; we are just facilitators. Our humble goal is to get as many technical facts on the table as possible, and to crowd-source mapping out the solution space. Once the facts are there, everyone should be able to make their own informed decisions. When important new facts come in, these decisions can and should be revisited. Several apps have already been pulled by national governments, or their developers have completely changed their designs. That responsiveness is actually a sign of great strength. There is no magic, and COVID-19 apps are not too big to fail, nor should they be – progressive insight is better than a hop, skip and jump of faith into the abyss.

Sufficient adoption in a democratic region like Europe is only going to happen on a voluntary basis, meaning that the population needs to trust the authorities in charge to get this right. It is interesting to see that in some countries, the population is historically much more cautious when it comes to trusting their government. An app does not exist in a vacuum but in a real political context, and when organisations like Amnesty warn about apps being used for political gain in danger zones, they know what they are talking about. If your government has an ongoing trust issue, you must work extra hard for people to trust your contact tracing solution. But of course, that doesn’t mean it is impossible. That is what trustworthiness means to us.

Individual citizens might be willing to take a leap of faith when you push the fear button, but digital rights communities, security researchers and academics are not so easily impressed by political power play or bluff. And any seed of doubt can grow into a field of discontent. When a majority of people start thinking of contact tracing apps as turning them into ‘walking antennas’ because the apps are amateurish and they have legitimate fears about their safety, who can blame them? You need to actively counter doubt with solid arguments – not just rhetorically, but technically.

Is that data really so sensitive? People already share a lot with corporations like Facebook and Google, so why not with governments and health authorities?

I think that is a flawed argument: a lot of people are very cautious with their privacy, perhaps more than most people think. And that holds true in particular for people who need to be cautious, like journalists, anti-corruption investigators and whistle-blowers, but also politicians and judges. Governments have a responsibility and the highest of moral standards to live up to; that is what makes them legitimate. The burden of proof is on the side of those promising solutions – you can’t apply a ‘move fast and break things’ approach in such critical situations. When the inventor of Bluetooth states that he does not believe Bluetooth is a suitable basis for this type of technology at this point in time, or security researchers point to recent severe vulnerabilities in Google’s implementation, it is up to developers to come up with structured research and cold hard facts that prove these allegations wrong. No answer likely means no trust.

To summarise: our primary goal is not to establish whether or not we should use contact tracing apps, but to bring together every scrap of knowledge about them. Our work is not limited to apps; we also put a spotlight on alternatives, like the several dedicated open hardware designs crafted especially to solve some of the challenges posed by apps. We hope that by benchmarking the different solutions, and contributing to their maturity, we can help spread the best solutions everywhere. At some point we will need to consolidate and get the best people from everywhere to collaborate on a handful of great solutions, rather than maintaining a hard-to-understand swamp of weak apps and homegrown backends.

What do you do if you cannot say whether these apps are safe?

We help apps become better, in whatever they need to be better at. We offer independent security and privacy analyses, but can also help with localisation and internationalisation, accessibility, copyright compliance, protocol verification, reproducible builds and packaging, etcetera. Through the EC, member states can approach us to strengthen the technology they are developing to address the challenges resulting from COVID-19. That technology obviously includes contact tracing and information solutions, but is not limited to them.

The independent nature of the security and privacy analysis in particular is a key aspect. An outside perspective is more likely to uncover hidden flaws everyone has scurried past, or assumptions that were made earlier but no longer hold. How can we proliferate best practices without independently looking into the security of each of the candidates? The European Commission and the Member States are obviously committed to the development of trustworthy pandemic-related technologies, but they can only make decisions based on the information they have. And from the other side, parliaments, civil society, but also individual citizens want to properly understand the situation they find themselves in. They have a need and a right to be informed. That’s where we come in. In the end, it should be easy for everybody to understand how certain apps or protocols affect the way health data is shared, and what alternatives are available.

Currently, Google and Apple have a convenient ‘off the shelf’ solution, and quite a few countries have adopted it. It is unfortunate indeed that the actual implementations by both companies are not open source; that means they cannot be properly audited from a security point of view. Other countries have resisted that temptation, which makes them less reliant on proprietary technologies that are not open to inspection. There is a reason why pretty much all the solutions in Europe are being built as open source, but for this critical component the source code is not available. We hope to resolve that at some point, of course – but such dependency on big tech is hard to navigate.

Since transparency is essential, we work in the open. The data is collected in a structured, public wiki where anyone can contribute. We coordinate security and privacy audits in a number of real-time chat channels that are open to anyone interested, and we invite you to join in at participate.reviewfacility.eu. We are also setting up a forum where people can exchange experiences and ideas on both technical and non-technical issues.

What may we expect in the future?

We hope to build a neutral and transparent environment that is of interest to a rich set of other stakeholders and experts. This is new and uncharted territory, but once the ecosystem is up and running, we hope there is enough value for the community to become self-sustaining until its work is done. Our societies are built on top of many centuries of humanist ideals and high moral standards, whether you label these as human rights, democratic processes, justice or empowerment. The scramble for technology as a solution to contain COVID-19 was understandable, but now it is paramount that we use these standards to protect our health, our data, and our open societies.

We cannot emphasise enough the importance of having a broad community work on these issues, as the field is evolving so fast that it is beyond any single expert’s capability. Together we can share insights and best practices – which is what our work within the Tech Review Facility is all about: combining knowledge with flexibility and creativity to work on the development of tools we’re all happy to use, in the knowledge that they respect our privacy and security. With the right tools and the right data to hand, we can all be part of the solution – from better decision-making by governments to better participation from each of us. So join in.

Michiel Leenaars is head of NGI Zero and Director of Strategy at NLnet
