Quiz Question:
What is wrong with this model of computation?

by Eric Drexler on 2011/08/03

In the news today:
“Governments, IOC and UN hit by massive cyber attack” (BBC)

How did the attack work? In a mind-numbingly ordinary way:

“An email would be sent to an individual with the right level of access within the system; attached to the message was a piece of malware which would then execute and open a channel to a remote website giving them access…they sometimes embedded themselves in the network and [tried to] spread across different systems within an organisation.”

In short:

  • A person with broad authority ran a bit of code.
  • The code, operating with this broad authority, wreaked havoc.

Quiz questions:

  1. Why did the code inherit the person’s authority?
  2. Is there a good reason for allowing this?
  3. In the current model of computation, is it easy and natural to grant limited authority to individual computational objects?
  4. What alternative model of computation (not an added security layer!) makes it natural to grant limited authority? What is it called? (Links, please.)

Questions for thought and discussion:

  1. Why does the current computational model grant authority in this indiscriminate way? How does this lead to “sandboxing”?
  2. What would be the main costs and benefits of moving computation toward the alternative model? How would this model play with the existing software base?
  3. What are the leading implementations of this model today, at the language and operating system levels? In your opinion, should they be promoted more vigorously?



simon August 3, 2011 at 7:10 pm UTC

Is this the sort of thing you’re looking for?

Carl Lumma August 3, 2011 at 7:25 pm UTC

You’re noticing problems with ACL-based security. The alternative is capability-based security.

http://en.wikipedia.org/wiki/Capability-based_security

As noted there, the E programming language and associated OSs like EROS are attempts to build capability-based computing systems.
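
To make the contrast concrete, here is a minimal sketch in Python rather than E (the function names and the file below are invented for illustration): under ambient authority, code inherits everything its caller can do, while under capability discipline it can touch only what it is explicitly handed.

# Ambient-authority style: the function runs with the whole of the user's
# authority; nothing stops it from opening a different, sensitive path.
def process_report_ambient(path):
    with open(path) as f:
        return summarize(f.read())

# Capability style: the caller hands over exactly one already-opened,
# read-only file object; the function can use that object and nothing else.
def process_report_capability(report_file):
    return summarize(report_file.read())

def summarize(text):
    return text[:80]   # stand-in for the real work

if __name__ == "__main__":
    # The user, not the code, decides which single file to expose.
    # Assumes a file named "report.txt" exists in the working directory.
    with open("report.txt") as f:
        print(process_report_capability(f))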

Vasilii Artyukhov August 4, 2011 at 12:21 am UTC

Dave August 4, 2011 at 12:18 pm UTC

Even more basic: why do we assume it is normal or desirable to send code to people?

Why do people use javascript instead of CSS?
Why do people use flash for navigation?
And why do people accept such things when they are sent?

J Storrs Hall August 4, 2011 at 2:37 pm UTC

The attack works with memes, too.

Matt Palmer August 4, 2011 at 2:40 pm UTC

Simon got there before me with a link to erights. More specifically, have a look at Mark Miller’s PhD thesis, in which he equates object-oriented composition of programs with capability-based security. This permits security-by-reachability arguments: an object can’t do anything to something it doesn’t hold an explicit reference to.

http://www.erights.org/talks/thesis/
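
A rough sketch of that equivalence in Python (class and method names invented; Python, unlike E, does not actually enforce encapsulation, so this illustrates the pattern rather than a real enforcement mechanism): attenuating authority is just ordinary object composition, and code handed only the narrow facade cannot reach the wider object.

class Folder:
    def __init__(self):
        self._docs = {}
    def read(self, name):
        return self._docs[name]
    def write(self, name, text):
        self._docs[name] = text

class ReadOnlyFolder:
    """A facade that holds a Folder reference but re-exports only read()."""
    def __init__(self, folder):
        self._folder = folder
    def read(self, name):
        return self._folder.read(name)

inbox = Folder()
inbox.write("memo", "quarterly numbers")

viewer = ReadOnlyFolder(inbox)
print(viewer.read("memo"))   # allowed: this reference was passed in
# viewer exposes no write(), so code given only `viewer` has read-only
# authority over the folder (by reachability of the references it holds).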

Lex Spoon August 7, 2011 at 5:59 pm UTC

The attack you describe, Eric, is devastating. It is simple and very hard to defend against.

Let me share a point of optimism, though. Web browsers, for all of their problems and ugliness, have successfully implemented sandboxes. A web-based email program can display untrusted content without that content getting full rights to your account. On a technical note, such an email program would need to be careful to host the content on a separate domain name from the domain of the email program itself; otherwise malicious content would still be able to hack your email.

I don’t think it’s the technical issues driving the rampant security problems we see. As we can see with web browsers, engineers find solutions to these problems when they are confronted with them. The dynamic is more that modern user platforms are built out of a number of separate third-party components. Those components all need a certain minimum authority over your personal data to do their work. Pretty much anything in the system is fair game for a third-party component to use, so you can’t design a single sandbox for all apps. You have to make it configurable, with each app getting different access rights.

There have been attempts to do so, but I don’t think they’re going well. One of the better efforts I’ve seen is the Android platform’s. However, I don’t think even that one is accomplishing its goal. Users install an application and are asked whether they approve of all the permissions the app wants to be given. It’s better than nothing, but I’m sure most people are like me and have come to just say “OK” no matter what the app asks for.

Alexander August 10, 2011 at 1:23 pm UTC

I am late to the class, but I would still like to join in and raise my hand to answer your questions as best I can.

1) Today the standard paradigm for allocating and managing security is based on ACLs, as others have noted. I would add that it is also implicitly based on user identity and ambient authority: a person granted a given level of authority automatically confers that same authority on the entire environment he operates in.

2) NO. As we have discovered in today’s age of networking and collaborative computing, the model has so many problems that people have invested enormous time and energy developing fixes, which then get exploited in a fraction of the time it took to implement them.

3) & 4) The current model is too permissive and, I suspect, provably impossible to secure (it would be interesting to find out whether there has been research on this).

Better to explicitly break authority up into small tokens, each carrying one specific type of authority and naming its designated receiver. In other words, what researchers call capabilities, or informally, keys. We can then explicitly allocate only the minimum set of tokens an object needs to operate and withhold unnecessary or overly powerful tokens from everything else.

Plus, it can be shown that it is possible to build provers that scan an object’s capability graph and demonstrate, once and for all, whether the object can perform actions that would cause havoc.

Even better, since capabilities are basically pointers to objects in the computing model’s universe, objects that do not possess a capability for a specific object need not even know of its existence. So not only do we have limited propagation of authority, but also a limited amount of discoverable information about other components.
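
As a toy illustration of that kind of reachability analysis (a minimal sketch in Python; the graph and object names are invented, not drawn from any real system), an object’s total authority is just whatever it can reach by following the capabilities it holds:

from collections import deque

# Toy capability graph: each object maps to the set of objects it holds
# capabilities (references) to. The entries are invented for illustration.
cap_graph = {
    "mail_client":    {"inbox", "renderer"},
    "renderer":       {"sandboxed_heap"},
    "inbox":          set(),
    "sandboxed_heap": set(),
    "disk":           set(),   # nothing above holds a capability to this
}

def reachable_authority(start, graph):
    """Everything `start` can ever influence by following capabilities."""
    seen, queue = {start}, deque([start])
    while queue:
        for target in graph[queue.popleft()]:
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen - {start}

print(reachable_authority("mail_client", cap_graph))
# prints {'inbox', 'renderer', 'sandboxed_heap'}; "disk" is provably out of reach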

http://www.cap-lore.com/CapTheory/
http://www.capros.org/
http://www.coyotos.org/

Both CapROS and Coyotos are based on EROS, which in turn is based on KeyKOS. They are being developed along different paths but share many of the key ideas.

Right now development on Coyotos appears to have stalled, and CapROS seems to have slowed down. My regret is that the principal developer, Jonathan Shapiro, departed Johns Hopkins, where he did much of this work, before my arrival there as a staff programmer.

As of now, those two microkernels are the state of the art in capability-based systems and should be heavily promoted and explored. I’ve long been interested in them but don’t know how to promote them or encourage more widespread development. It’s my dream that someday they will spawn a Linux-scale development ecosystem.

The main costs, obviously, are inertia and the inevitable problems that will crop up when migrating existing code to a different security model, not to mention that users would have to revise their existing preconceptions about security and permission management. I believe that the standard user logins and permission settings we have today could be integrated with capabilities fairly easily: we would just predefine groups of capabilities corresponding to the familiar rwx permission bits, as in the sketch below.
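
A minimal sketch of what such a mapping might look like, in Python (the capability names and bundles are invented for illustration, not taken from CapROS, Coyotos, or any other real system):

# Predefined bundles of fine-grained capabilities corresponding to the
# familiar rwx permission bits. All names here are illustrative only.
CAP_BUNDLES = {
    "r": {"read_bytes", "stat"},
    "w": {"write_bytes", "truncate"},
    "x": {"execute"},
}

def caps_for_mode(mode_bits):
    """Translate a 3-bit rwx mode (e.g. 0b101 for r-x) into a capability set."""
    caps = set()
    for bit, letter in zip((0b100, 0b010, 0b001), "rwx"):
        if mode_bits & bit:
            caps |= CAP_BUNDLES[letter]
    return caps

print(caps_for_mode(0b101))   # r-x becomes {'read_bytes', 'stat', 'execute'}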

The biggest challenge would be educating users and figuring out how to leverage all of the advantages of capabilities, which go well beyond security. Introducing capabilities at the level of memory addressing and hardware architecture also makes interesting things possible.

This comment is clearly getting too long, so I will sign off by saying that this is a rich field that has been overlooked for too long. I only learned of it less than two years ago myself, and it has been on my mind constantly ever since.

Arowx August 18, 2011 at 11:34 pm UTC

We just need some of these smart chips to guard our computers…

http://www.kurzweilai.net/ibm-unveils-cognitive-computing-chips-combining-digital-neurons-and-synapses

As long as we put an off switch on them it should be fine! ;0)

Mike Warot November 5, 2011 at 3:10 pm UTC

Quiz -
1. Because traditionally the user was (or knew, or worked with) the programmer, and was assumed to know what he was doing.

2. In the past, the odds of a rogue program were almost exactly zero, so using administrative time and effort to further segregate things would have been wasted.

3. The system calls supplied by Linux, Windows, etc. are not geared toward it, so it is neither natural nor easy to grant limited capabilities to a program.
Virtualization, and the rise of VMware and its competitors, is a direct result of the missing capability model in contemporary operating systems. In such an environment, the program (a virtual machine) is given specific access to a set of resources at run time.

4. Capability-based security (“cabsec” for short) is the model of choice. I’ve tagged some entries on Delicious with cabsec; you can review them here:
http://www.delicious.com/ka9dgx/cabsec

I’m interested in helping out if you’re gearing up for a project.

Thought and discussion –
1. It does it this way because historically the user and the programmer were the same person, or at least in the same organization. It made sense to give each group a sandbox, plus permission to read a common set of tools. All of this was determined by system administrators; the groups then managed their own affairs within their sandboxes.

Needless to say, that model is insane to use in an era of modern code.

2. The cost is refactoring programs to accommodate a new security paradigm, in which resources are supplied to a program instead of just grabbed ad hoc.
The benefit is that the user would have explicit control over the resources given to a program, which can prevent a large class of security problems.
If widely adopted, it would make the internet more secure by shrinking the population of hosts that can be compromised and exploited.

3. There are no widely used capability based operating systems that I’m aware of at this time. There are features of things that are like capabilities, which should be promoted as such, to help popularize the model and move it into the realm of toolsets people consider using.

Chris Phoenix November 13, 2011 at 2:03 am UTC

Alexander: You mention users having to learn to manage permissions in a different way. Any scheme that requires users to manage permissions explicitly is doomed to be insecure, because of the number of users who will click what they’re told to click, whether or not the teller is trusted or even known.

Mike: Historically, the odds of a “rogue” program have always been high, if “rogue” means buggy enough to trash any data in its memory space (and sometimes even disk space). I’d argue that this didn’t matter as much prior to computers becoming general-purpose multi-tasking information appliances.

It’s kind of hard to corrupt a deck of punched cards. (Now someone with more gray hairs than I will tell a story of a program that accidentally made the card reader spit a six-inch deck onto the floor.) And before PCs had hard disks, you’d use a different floppy for every program you ran, and you’d only run one at a time, because that’s all the computer could handle.

Yes, mainframes had virtual memory, but they typically had complete separation between users’ processes and data. That made the problem a lot easier.

And applications used to be a single hunk of data from a single source, with little or no ability to run code from other sources. Heck, emails and documents used to be completely incapable of carrying executable code.

So it’s not that the security model was always crazy-broken. It’s that we’re trying to do things today that we have never done before: execute code acquired automatically from untrusted sources, and give it access to some but not all of our data, according to what the users would want if they took half an hour to think about each decision.

This is not to defend the continued use of the old security model. If it’s broken, and there’s a better replacement, then use the replacement.

But a technology that’s not, from the user’s point of view, a drop-in substitution, will fail. If for no other reason, it’ll create enough confusion to create new opportunities for person-hacking.

“””
Your data is in an old format and needs to be brought up to date. To use this handy app you’ve just downloaded, update your data format by this procedure:
1) Go to the Start menu
2) Select “Run”
3) Type “format all”
4) Hit enter three times very quickly.
“”””

Chris

Martin February 8, 2013 at 2:18 am UTC

I was wondering what you guys think of these capability-related projects:

http://genode.org/documentation/general-overview/index

http://www.cl.cam.ac.uk/research/security/capsicum/

They look very promising to me.
