
Insecurity by transparency

Tags: security, open source, security-by-obscurity

By Marco W. Soijer

So, over $300m in cryptocurrency got stolen. There is a strong argument against those $300m actually being worth much, as no central bank is backing it and cryptocurrencies are little more than a combination of a bubble, a Ponzi scheme and an environmental disaster, as Agustín Carstens, head of the Bank for International Settlements, summarised it back in 2018.

But that is not what this post is about. In this week's SANS NewsBites, Jake Williams brings up an interesting aspect of this case: the attackers apparently noticed that a security fix had been uploaded to GitHub, where the open-source software for the cryptocurrency platform is hosted. In between the security fix being published — thus documenting the exact vulnerability in the existing code — and all the machines running that code having been updated, attackers have a window of opportunity. They can exploit a weak spot that they do not even have to find themselves.

Sharing fixes with adversaries

Trying to make systems secure by hiding the internal mechanisms by which they are protected — frequently called “security by obscurity” — is a bad idea. If your assets are valuable enough, those mechanisms will not remain secret for long. Someone will reverse-engineer them, leak them, steal them or simply guess them, leaving you without any protection at all. In cryptography, the opposite and nowadays broadly accepted view — known as Kerckhoffs's principle — is that everything, except for key material, can and even should be disclosed; security comes from the mathematical effort that is needed to break those keys by brute force. Keys are easier to protect than full security concepts; and if they are breached, they can easily be replaced.
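A small sketch makes the principle concrete. In the following Python fragment, the algorithm (HMAC with SHA-256, here standing in for any published mechanism) is fully public, yet forging a valid authentication tag requires the secret key — and a leaked key can simply be rotated. The messages and key sizes are illustrative, not taken from any real system.

```python
import hashlib
import hmac
import secrets

def sign(message: bytes, key: bytes) -> bytes:
    """Authenticate a message. The code may be read by anyone;
    per Kerckhoffs's principle, security rests on the key alone."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Only the key is secret.
key = secrets.token_bytes(32)
tag = sign(b"transfer 10 coins to Alice", key)

# Verification in constant time, to avoid timing side channels.
assert hmac.compare_digest(tag, sign(b"transfer 10 coins to Alice", key))

# If the key is breached, replacing it restores security --
# unlike a concept that relied on staying secret.
new_key = secrets.token_bytes(32)
assert not hmac.compare_digest(tag, sign(b"transfer 10 coins to Alice", new_key))
```

Nothing about the mechanism needs hiding; rotating the 32-byte key is the entire recovery procedure.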

But back to the cryptocurrency case and its open-source aspects. A logical consequence of Kerckhoffs's principle is that open-source software beats closed-source solutions when it comes to security. You cannot and must not expect your security mechanisms to remain secret; in consequence, the more people who look at the code, the better your chances that someone finds a flaw, reports it and corrects it before any attacker does. And as you do not want any insufficiently reviewed and tested code to be deployed, the sources are published well before the code is actually run, so that anyone in the team can have a look at it and raise questions. It all takes a bit of time, but in the end, it ensures that only mature code is deployed to operational machines.

Furthermore, in typical decentralised open-source scenarios, the team developing the software are not the same people who are responsible for potentially thousands of machines running the software. Each and every operator of such a system must be aware of the patch being available and then deploy it. In between, they may want to check on its quality and relevance themselves and decide when, if at all, the update is to be installed.

And therein lies the catch when such code actually fixes a security flaw that was there all along.

Illustration of the process with developers, quality control and adversary

Open source, but closed deployment

The situation is somewhat similar to any patch that is made available: with its publication, adversaries know about it and can actively seek and target vulnerable systems. But there is one decisive difference: in the usual closed-source case, updates are available at least some weeks before the original flaw is disclosed. System operators at least have a chance to keep their systems up to date and fully patched before life is made easy for the attackers. With the open-source review and distribution of updates, this is no longer the case. In the race between adversaries and system operators, the latter do not even get a head start.
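The difference amounts to simple date arithmetic. The toy timeline below uses made-up dates, not those of the actual incident; it only illustrates how the operators' head start collapses to zero when publishing the fix is itself the disclosure.

```python
from datetime import date

def head_start(patch_available: date, flaw_disclosed: date) -> int:
    """Days operators can patch before attackers learn of the flaw."""
    return (flaw_disclosed - patch_available).days

# Hypothetical closed-source case: the binary patch ships weeks
# before the underlying vulnerability is disclosed.
closed_patch_available = date(2022, 1, 1)
closed_flaw_disclosed = date(2022, 1, 22)

# Hypothetical open-source case: pushing the fix to the public
# repository *is* the disclosure, so the two events coincide.
open_fix_published = date(2022, 1, 1)
open_flaw_disclosed = open_fix_published

print(head_start(closed_patch_available, closed_flaw_disclosed))  # 21
print(head_start(open_fix_published, open_flaw_disclosed))        # 0
```

With slow-moving operators, the open-source head start is effectively negative: attackers read the diff before many systems are even aware a patch exists.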

So what to do about it? I guess the solution is to develop security-critical software as open source — meaning that the code is fully disclosed, at least to all stakeholders, but preferably to a broader public, too — yet to maintain it within a closed team. Updates should be published in binary form and pushed to operational machines without depending on the users' action. In parallel, the source code can be disclosed, but by the time a security-critical update is out for review, it is already in place everywhere.

February 2022
