No Negative Access Rules

If you don’t want someone to be able to delete the database, do not give them authority to delete the database.

Azure recently announced that it is implementing negative access rules. A rule of this nature lets you say "User U cannot perform action A on resource R."

This is an anti-pattern. It is a panic response to breaches caused by principals (users or agents) that have far more power than they need to perform their operations. What is the alternative?

Create agents with minimal access: exactly what they need to get their work done and no more.

Let's break this down. I am going to use a web application to explain what this should look like. This application takes orders online and sends them to a back-end processing system for fulfillment. As such, there are roughly four agents for different parts of the overall system.

  • The web server
  • The web server's database
  • The messaging broker
  • The back-end fulfillment system

Ulysses is the end user. Ulysses wants to buy a thing. Ulysses opens his browser and clicks through the pages. We'll assume that the web server is not running as root on the remote server, as that would be bad and we've all grown beyond that point. It is running as, say, the STORE user. This user can render pages to send back to Ulysses, and can query the database to get categories, products, SKUs, and a few more things. For write access, this user can write to the order database, and can write order messages to the message queuing system. The STORE user can also read from the order update queue to change the order management database, although this is likely done by a different agent in practice.
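The STORE user's grants above can be sketched as a small default-deny permission set. This is a toy model, not any particular cloud provider's API; the permission names and the `is_allowed` helper are illustrative.

```python
# Illustrative least-privilege role for the hypothetical STORE user.
# Anything not explicitly listed is denied.

STORE_PERMISSIONS = {
    ("read", "catalog"),             # categories, products, SKUs
    ("write", "order_db"),           # record new orders
    ("write", "order_queue"),        # send order messages to fulfillment
    ("read", "order_update_queue"),  # receive status updates
}

def is_allowed(permissions, action, resource):
    """Default deny: an action is permitted only if explicitly listed."""
    return (action, resource) in permissions

print(is_allowed(STORE_PERMISSIONS, "write", "order_db"))   # True
print(is_allowed(STORE_PERMISSIONS, "delete", "order_db"))  # False
```

Note that deleting the database never needs a deny rule here: it is simply absent from the allow list.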

OK, I think you can see how this same kind of pattern would be implemented by the order fulfillment system. Let's go back to our cloud provider and discuss a little further.

Odette is our owner. She is the one that pays the cloud provider to set up her store to sell things. Odette has contracted Carlita to modify her web application and customize it for her needs. Dabney is her database administrator. In addition, she uses a service from her cloud provider to set up the application in the first place.

Which of these users should have authority to delete the database? None, obviously. But how does the database get created in the first place? Automated processes set up by the cloud provider. Modifications to the database should happen via git, with code review required by at least one other person to accept a merge request. The merged change then goes into a staging branch, is deployed to staging, and again requires manual approval to be pushed to the live server.

The actual pushing of changes from one stage to another is performed by agents with only the ability to perform the operations they need. Creating and dropping tables is handled by the database management agent. And yes, this one, if compromised, could destroy the database. However, another agent performs backups prior to any operations that alter the tables. A deleted table can be recreated by rolling back to a previous version, acknowledging that all data that came in post-corruption will be lost.
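The backup-before-alter discipline can be sketched in a few lines. The `BackupAgent` and `SchemaAgent` classes here are hypothetical stand-ins for two separately-credentialed agents; the point is that neither can do the other's job, and a failed migration rolls back to the snapshot.

```python
class BackupAgent:
    """Can only snapshot and restore; cannot alter tables."""
    def __init__(self, db):
        self.db = db
    def snapshot(self):
        return dict(self.db)
    def restore(self, snap):
        self.db.clear()
        self.db.update(snap)

class SchemaAgent:
    """Can only apply migrations; cannot take backups."""
    def __init__(self, db):
        self.db = db
    def apply(self, migration):
        migration(self.db)

def run_migration(backup_agent, schema_agent, migration):
    snap = backup_agent.snapshot()     # always back up first
    try:
        schema_agent.apply(migration)
    except Exception:
        backup_agent.restore(snap)     # roll back to the pre-migration state
        raise

db = {"orders": ["id", "sku"]}
backup, schema = BackupAgent(db), SchemaAgent(db)

def bad_migration(tables):
    del tables["orders"]               # destructive step succeeds...
    raise RuntimeError("broken")       # ...then the migration fails

try:
    run_migration(backup, schema, bad_migration)
except RuntimeError:
    pass
print(db)  # the pre-migration state was restored
```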

It ain’t easy. It ain’t simple, and it ain’t free. But it is all necessary.

So why are negative rules such a big deal? Surely adding an explicit deny rule on top of all the explicit allow rules provides an added layer of security? There are a few problems with this approach. First, it only actively denies actions you have actually foreseen as problematic. For instance, it would not catch that the STORE user could delete the order export queue, or that the order update agent could alter the order update table into a broken format. You want to deny everything by default. Then, to test whether a user can perform a specific action, you can query the role assignments to see if the user holds the dangerous permission. If they do, you probably need to create a new, more fine-grained role that has only the permissions required.
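Querying role assignments for dangerous permissions might look like the following. The role names, assignments, and `DANGEROUS` set are made up for illustration; the idea is to audit what a user actually holds rather than stack deny rules on top.

```python
# Toy role model: roles are sets of (action, resource) permissions.
ROLES = {
    "store":    {("read", "catalog"), ("write", "order_db")},
    "db_admin": {("create", "table"), ("drop", "table")},
}
ASSIGNMENTS = {"ulysses": ["store"], "dabney": ["db_admin"]}

# Permissions we consider dangerous enough to flag.
DANGEROUS = {("drop", "table"), ("delete", "database")}

def dangerous_grants(user):
    """Report which dangerous permissions a user actually holds."""
    held = set()
    for role in ASSIGNMENTS.get(user, []):
        held |= ROLES[role]
    return held & DANGEROUS

print(dangerous_grants("ulysses"))  # set() -- nothing to deny
print(dangerous_grants("dabney"))   # {('drop', 'table')} -- split this role
```

When the audit flags a grant, the fix is a narrower role, not a deny rule.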

A good example of a system that works this way is SELinux. In an SELinux-hardened system, everything is forbidden until explicitly allowed. The critical tool that makes SELinux workable is permissive mode. When you run a non-production system in permissive mode, you see what would have been denied by the existing set of rules, and you make new rules that would allow those actions. Then you go back into enforcing mode, run, and make sure you can get through the required workflow with all required actions allowed and all forbidden actions denied. This workflow replaces the explicit deny rules above.
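This is not SELinux itself, but a toy model of the workflow: default deny, log what would have been denied in permissive mode, turn the log into allow rules, then flip to enforcing.

```python
class Policy:
    """Default-deny policy with an SELinux-style permissive mode."""
    def __init__(self, permissive=False):
        self.allowed = set()
        self.permissive = permissive
        self.audit_log = []

    def check(self, subject, action, obj):
        if (subject, action, obj) in self.allowed:
            return True
        self.audit_log.append((subject, action, obj))  # record the denial
        return self.permissive  # permissive: log it, but let it through

# Run a non-production workload in permissive mode...
policy = Policy(permissive=True)
policy.check("store", "read", "catalog")

# ...then turn every logged denial into an explicit allow rule.
for entry in policy.audit_log:
    policy.allowed.add(entry)

policy.permissive = False  # back to enforcing mode
print(policy.check("store", "read", "catalog"))   # True: now allowed
print(policy.check("store", "drop", "order_db"))  # False: still denied
```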

Deny rules add an additional complication. Now you need to explicitly order your rules. First, you have the default rule of Deny All. Then you have the explicit allow rules, and finally the deny rules. But what happens if someone comes by later and wants an exception to a deny rule? Maybe this account should not be able to delete production databases, but staging ones are just fine? If your blanket deny stops them from deleting all databases, you might be stuck.
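Here is a sketch of that trap. The rules and resource names are hypothetical; the deny rule is evaluated after the allows, so the blanket pattern blocks the staging exception that was explicitly allowed.

```python
import fnmatch

# Allow rules: ops is explicitly allowed to delete the staging database.
ALLOW = {("ops", "delete", "staging_db")}
# Blanket deny added later "for safety": no deleting any database.
DENY = {("ops", "delete", "*_db")}

def check(subject, action, resource):
    if (subject, action, resource) not in ALLOW:
        return False  # default deny
    for (s, a, pattern) in DENY:
        if s == subject and a == action and fnmatch.fnmatch(resource, pattern):
            return False  # explicit deny overrides the allow
    return True

print(check("ops", "delete", "staging_db"))  # False: the deny wins, and
                                             # the intended exception is stuck
```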

And that is what the deny rules should be: unit tests that show an action is denied, not actual rules themselves. Because all actions should be denied by default, and if you can accidentally allow one action, you probably accidentally allowed a bunch of other actions as well. So, do not make deny actions part of your enforcement rule set. Make them part of your explicit tests after the fact.
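Concretely, the deny rules become assertions run against the policy after the fact. The `check` function here stands in for whatever policy-evaluation call your system exposes.

```python
# Default-deny policy: only what is listed is allowed.
ALLOWED = {("store", "read", "catalog"), ("store", "write", "order_db")}

def check(subject, action, resource):
    return (subject, action, resource) in ALLOWED

# "Deny rules" expressed as tests, not as enforcement rules.
def test_store_cannot_delete_database():
    assert not check("store", "delete", "order_db")

def test_store_cannot_delete_queue():
    assert not check("store", "delete", "order_queue")

test_store_cannot_delete_database()
test_store_cannot_delete_queue()
print("all deny tests passed")
```

If one of these tests ever fails, the fix is to narrow the allow rules, not to add a deny.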

Deny rules have been a part of many access control systems over the years. This is not the first time this discussion has come up, nor will it be the last. This is one of those "must have" features that keeps popping up. It's an anti-pattern and should be avoided. The rational developer can only say no so many times before someone does an end run around them and puts negative rules in the system.
