Security at Antithesis: A Manifesto
Software has bugs. People make mistakes. We’re probably more aware of this than the average company, since finding bugs and mistakes made by smart and well-meaning people is literally all we do all day. This makes us come at security from a different perspective than most. We assume that most of our code is full of bugs and assume that any of our employees could make a braindead mistake. Our goal is to have security anyway. Out of the crooked timber of humanity, no straight thing was ever made, but read on to see how we try our damnedest.
The organization of this document is as follows: we examine in turn each of the Eight Principles that we follow when building secure systems at Antithesis. For each principle we first explain what it means in the abstract, and then use concrete examples to show how it plays out in our security architecture. This isn’t a comprehensive reference to everything we do, and if you’re looking for our official security policy you can find that here. But if you’re trying to decide whether to trust us, this is probably more informative than a reference or a policy, because it shows you how we will think about future decisions and tradeoffs.
The Eight Principles
1. Know your goals
The first step of building secure systems is to know what you’re actually trying to accomplish. A smart-ass might say: “the goal is to be as secure as possible”, but that isn’t actually true. The most secure possible computer is powered off, locked in a safe, at the bottom of the sea. There is some margin at which security trades off against other values, and you owe it to yourself and your customers to make explicit where you draw the lines, where you’ll cave, and where you’ll stand firm.
Another reason it’s important to be clear and precise about your goals is that sometimes it’s actually different forms of security that are in conflict. The classic example of this is the password policy that makes passwords very hard for a remote intruder to guess, but encourages all your employees to write them on sticky notes stuck to their monitors. That example is contrived (and these days we have password managers, yay!), but there are many more subtle situations where different security goals are in tension with one another, and you have to decide.
2. Be clear what you trust
Most software is insecure garbage, and most human beings can be bribed or blackmailed by a sufficiently dedicated attacker. So it’s tremendously comforting to be able to draw a boundary around a component and say: “even if this part were actively malicious, it wouldn’t matter, because it still wouldn’t be able to do anything bad.”
We believe that to have any hope of real security, you must be able to mistrust the vast majority of your code and your systems. You must concentrate all the security responsibilities of your architecture into a tiny subset of components. If you do that, then fanatically scrutinizing that tiny trusted subset won’t be too onerous, and there’s maybe a chance you can get it right.
3. Security as the default
The curse of security engineering is that to design a secure system you need to get everything right, whereas to attack a system you only need to find one mistake. A corollary to this is that whenever possible you should start with something maximally secure (within reason, remember the computer at the bottom of the sea), and then add exceptions or relax particular properties as you convince yourself that doing so is safe. Going the other way – starting from something insecure and then trying to band-aid or patch your way to security – is too likely to leave something overlooked.
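To make that concrete, here’s a minimal sketch (in Python, with role and permission names invented purely for illustration) of what starting from maximally secure looks like in code: an explicit allowlist of things you’ve convinced yourself are safe, with everything else falling through to denial.

```python
# A minimal default-deny sketch. The roles and actions here are hypothetical,
# made up for illustration; the point is the shape of the check.

ALLOWED = {
    # (role, action) pairs that have been explicitly reviewed and granted.
    ("build-runner", "read:artifacts"),
    ("build-runner", "write:build-logs"),
    ("on-call-engineer", "read:build-logs"),
}

def is_allowed(role: str, action: str) -> bool:
    """Deny unless this exact (role, action) pair has been explicitly granted.

    Anything nobody thought about falls through to the safe answer: no.
    """
    return (role, action) in ALLOWED

assert is_allowed("build-runner", "read:artifacts")
assert not is_allowed("build-runner", "delete:artifacts")  # never granted, so denied
```

Relaxing a property means adding a line to the allowlist, which is a visible, reviewable act; forgetting something leaves it denied rather than exposed.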
4. No hidden state
Conventional operating systems are big blobs of mutable state, whose current configuration admits no shorter description than the history of every change ever made to them. Then, in the cloud and in our data centers, we repeat the same mistake at a higher level of abstraction, building complex architectures whose true condition can only be determined via archaeology. This isn’t just a maintainability disaster, it’s a security disaster. How many times have breaches happened via an unpatched server that everybody forgot existed, or an OS process that nobody even thought about? Modern tools let us make our systems fully declarative, both at the level of whole architectures, and all the way down to individual systems.
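Here’s a toy sketch of what “fully declarative” buys you (Python, with server names and configurations invented for illustration, not our real tooling): the desired state lives in one reviewed declaration, and anything running that isn’t in it shows up as drift instead of being quietly forgotten.

```python
# A toy sketch of declarative infrastructure, with invented server names.
# The desired state is a single version-controlled declaration; anything
# running that isn't declared is drift to be flagged, not archaeology.

DESIRED = {
    "web-1": {"os_image": "base-2024.06", "open_ports": [443]},
    "db-1": {"os_image": "base-2024.06", "open_ports": [5432]},
}

def diff(desired: dict, observed: dict) -> list[str]:
    """Report every way reality differs from the declaration."""
    problems = []
    for name in observed.keys() - desired.keys():
        problems.append(f"{name}: running but not declared anywhere")
    for name in desired.keys() - observed.keys():
        problems.append(f"{name}: declared but missing")
    for name in desired.keys() & observed.keys():
        if desired[name] != observed[name]:
            problems.append(f"{name}: configuration has drifted")
    return problems

# e.g. an unpatched server everybody forgot existed shows up immediately:
observed = {**DESIRED, "legacy-box": {"os_image": "base-2019.01", "open_ports": [22, 80]}}
print(diff(DESIRED, observed))  # ['legacy-box: running but not declared anywhere']
```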
5. Limit your credentials in scope and time
Getting pickpocketed is a drag, but it’s a lot worse if you’re carrying around the key to the bank vault that contains your life savings. So it might be a good idea not to carry that key around with you all the time, but only to get it when you need to go to your bank vault. The same is true of user accounts, machine roles, and all other forms of authorization. Whenever possible, use the account with the bare minimum ability to accomplish what you want to do. This is called the Principle of Least Privilege, and it’s an important way to contain the damage when (not if) something goes wrong.
Man, wouldn’t it be great if that bank vault key only worked for 5 minutes after it came out of your pocket? That way the pickpocket wouldn’t just have to figure out how to steal it from you, they’d also have to figure out a really fast way to get across town, or take care to steal it at the exact right moment when you’re walking past the bank. It’s hard to make physical keys behave this way, but computerized tokens and credentials can be programmed only to work for a very short time. That way, if an attacker steals such a credential they don’t get access for long, and have a limited window to establish a more permanent foothold.
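To make that concrete, here’s a toy sketch (Python standard library only, with a made-up secret, scope names, and lifetime, not our actual credential format) of a credential that is limited in both scope and time: it says exactly what it may do, and it stops working a few minutes after it’s minted.

```python
# A toy sketch of a credential limited in both scope and time. The secret,
# scope names, and 5-minute lifetime are invented for illustration; real
# systems would use an established token format, but the shape is the same.

import base64, hashlib, hmac, json, time

SECRET = b"not-a-real-secret"  # stand-in signing key, for illustration only

def mint_token(scope: str, lifetime_seconds: int = 300) -> str:
    """Issue a token good for one narrow scope, for a few minutes."""
    claims = {"scope": scope, "expires_at": time.time() + lifetime_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def check_token(token: str, required_scope: str) -> bool:
    """Reject unless the signature, the scope, AND the expiry all check out."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["scope"] == required_scope and time.time() < claims["expires_at"]

vault_key = mint_token("open:bank-vault", lifetime_seconds=300)
assert check_token(vault_key, "open:bank-vault")               # works right now
assert not check_token(vault_key, "open:somebody-elses-vault")  # wrong scope: denied
# ...and five minutes from now, the very same token stops working entirely.
```

If an attacker does manage to steal such a token, the blast radius is one narrow scope for a few minutes, not the whole vault forever.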
6. Follow the process
We all hate bureaucracy, but when it comes to security, having a process is a lifesaver. Process makes it hard to forget to do things. Japanese train conductors use a ritualized set of hand gestures to avoid fatal accidents, and when doctors use checklists the rate of medical error declines sharply. Process is also a powerful tool for catching bad actors.
People feel socially awkward about calling out others who are pushing the boundaries in a way that makes them uncomfortable, but if there’s a rule or a process or a handbook it depersonalizes the issue. “Hey sorry, but you can’t do that, it violates the process.” We try to keep our software teams as free-spirited as possible, but if there’s one place to create bureaucracy and formal processes, it’s security.
7. Threats come from every direction
The strongest castle walls are useless if somebody opens the gates to the enemy. But not every insider threat is the result of malice. Employees can get phished, or have their laptops stolen, or perform an administrator action with security consequences that they didn’t intend. It’s dangerous to assume that anybody acting with an employee’s credentials has the company’s best interests at heart. That goes double for administrators, superusers, and senior leaders in the company, who are the most likely targets for witting or unwitting subversion.
8. Check your work, then have somebody else check it
In fields as diverse as accounting and software testing, it’s good to have both an internal audit function and an external audit function. The internal audit is what you iterate against – a reviewer, or a “red team”, or a test suite. Somebody you know well, somebody who’s dedicated to catching your mistakes. In an area like security where a single mistake can be fatal, you’d be a fool to do without it. But if you only have internal audits, then you don’t have protection against your auditors getting sloppy.
Also, over time you’ll get more and more used to the audit, and you’ll start unconsciously figuring out ways to bypass it. That’s where the external audit comes in. The external audit keeps your internal audit honest. It’s people from the outside, people who don’t play by your organization’s rules, people whose one job is finding ways your internal audit got lazy, and then telling you about them so you can fix them.