Sunday, October 11, 2015

The .CABSEC fork in the road

Computer security is a mess, and to fix it I believe that Linux, and indeed the entire open source stack, needs to be forked. I humbly suggest we call this the .CABSEC fork, so that anyone who sees the name knows exactly what it is.

Cabsec (CApability Based SECurity) is a term coined by Doc Searls in response to a private email I sent him a long time ago, asking for advice about promoting the idea of capability based security. The problem is that the term capability has a bunch of meanings, most of which didn't fit my vision of the solution to computer security. Choosing a new term helps make things Google friendly... which made sense, so I've been trying to be consistent about using it ever since.

I claim no special gifts of skill or wisdom; I'm just a guy with an idea that is self-consistent and seems to me (and to anyone I can talk to in person long enough to explain it) to offer a genuine solution to almost all of our computer security woes... which I now call cabsec.

Cabsec is the principle of least privilege applied across the entire open source stack, if I can nudge things the right way. The core idea is to flip the default assumption, that programs can be made trustworthy, on its head, and go with the proven reality that this is false.

Not trusting your applications means you have to change things, mostly in the user interface; the code that actually gets things done doesn't need much work. Instead of having the program ask the user to choose a file and then open it itself, the program asks the OS to do both, and works with the handle it gets back (called a capability in cabsec). A rough sketch of the difference is below.
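Here's a minimal sketch of that change, in Python purely for illustration. cap_request_file is a hypothetical call into a trusted runtime, simulated here with input() + open() so the sketch actually runs; nothing about its name or shape is a real API.

```
# A minimal sketch of the cabsec-style change. cap_request_file is a
# hypothetical call into a trusted runtime; it is simulated here with
# input() + open() so the sketch actually runs.

def cap_request_file(prompt, mode="r"):
    # In a real cabsec system the OS would show the file chooser and hand
    # back a handle; the program itself could not call open() at all.
    return open(input(prompt + ": "), mode)

# Today (ambient authority): the program names and opens any path it wants.
def load_config_ambient(path):
    with open(path) as f:
        return f.read()

# Cabsec style: the program asks for "a file the user picks" and works with
# whatever handle (capability) it is given, and nothing else.
def load_config_cabsec():
    with cap_request_file("Choose a config file") as f:
        return f.read()
```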

So the changes for any given program don't need to be huge, but because of the scope (every program ever written!) it adds up to a huge flipping amount of work. I just want to introduce a simple naming convention to make it easier to coordinate... let's name all of these forks .CABSEC forks.

Because of the way Git works, it's possible to fork a project and keep the fork synchronized at low cost, picking up any upstream changes that don't directly conflict with it. This means the mainline stack could keep going and keep feeding the new fork while the work gets done. The workflow is sketched below.
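This is just the standard fork-and-track workflow; the repository URLs here are placeholders, not real projects.

```
# Keep a .CABSEC fork in sync with its upstream project (URLs are placeholders)
git clone https://example.org/some-project-cabsec.git
cd some-project-cabsec
git remote add upstream https://example.org/some-project.git

# Repeat as often as you like; only directly conflicting changes need
# hand-merging, everything else comes across for free
git fetch upstream
git merge upstream/master
```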

Overall, the plan would be to build a set of support APIs that make the required capability security calls on top of existing Linux, in userland. This would allow testing and development of the concept without any changes to the Linux kernel. If it turns out to work well, we can then push the changes into the kernel, or migrate to a different OS that supports cabsec natively. A userland sketch of the idea follows.
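To make that concrete, here is what such a userland support API might look like. The class and function names are invented for illustration; a real cabsec library would hand out handles from a trusted broker process rather than opening files itself.

```
# A userland sketch of the support API idea, using nothing beyond stock
# Python on Linux. Names are invented for illustration; a real cabsec
# library would obtain handles from a trusted broker process.
import os

class FileCapability:
    """Wraps an already-open file descriptor; the holder gets only the
    operations granted, on only this one file."""
    def __init__(self, fd, writable=False):
        self._fd = fd
        self._writable = writable

    def read(self, size=65536):
        return os.read(self._fd, size)

    def write(self, data):
        if not self._writable:
            raise PermissionError("capability is read-only")
        return os.write(self._fd, data)

def grant_read(path):
    # The trusted side (user, shell, or broker) opens the file and hands the
    # untrusted program nothing but this narrow handle.
    return FileCapability(os.open(path, os.O_RDONLY))
```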

I might be stupid, crazy, nuts, or right... only time will tell.  Thanks for your time and attention.

Saturday, October 10, 2015

The root cause of our security woes

This is a cut/paste of a comment I posted to /.

The root cause of all of these security problems has been in plain sight since about 1970, yet only a few people are even aware of it. It's obvious once you see it, and then the scope of fixing things comes clearly into focus. So, do you really want to take on forking every program to build a new version of it? If so, you can fix this; if not... this will keep happening, and government will try to fix it by fiat, badly.
The cause is that our operating systems operate on the assumption that programs can be trusted. This makes it almost impossible to launch an executable safely, because there is no OS enforced way to limit the side effects of execution.
Only an operating system that requires specifying the resources to feed to a given instance of execution can limit the side effects by design, instead of luck.
It doesn't have to be user-unfriendly, because the OS can always handle prompting for file names and the like... in fact, if done properly, the user might not even need retraining to use the new fork of their favorite program, because for all intents and purposes it acts the same, with the same dialog boxes and so on.
The principle of least privilege is the solution to this whole mess, but it has to be applied from the kernel all the way up the stack. This is a lot of forking work to do.
Do you dare to take up the challenge, or will you let someone else try the latest band-aid instead?

Saturday, June 16, 2012

Genode

I've just come across Genode, which looks like it may offer a reasonably quick route to capability based security for all of us. They are aiming to self-host development ("eating their own dog food", as the term goes) by the end of 2012.

It builds on the work of L4 and all the other microkernels, providing a way to run on 8 different microkernels in total.

I'll do what I can to help push this along. I'm sorry it took so long to find... Google isn't a great way to search, but it's the best we have so far.

Saturday, May 26, 2012

Why ACL/UIC based security is futile: the IED example

I wrote this on Slashdot in reply to comments about the relative security of Windows vs Linux.


This is like arguing about the odds of an IED (Improvised Explosive Device) killing you based on the brand of vehicle you're driving. If you have territory which is denied to your enemies, you don't have IEDs at all.
Both Windows and Linux let any old program tunnel into things and leave all sorts of crap wherever, as a default course of action. They assume that the user is the logical point at which security questions should be answered, which was fine back when it was just kids in CS101 trying to get their C programs to compile. However, times have changed, and now any program can take out a system (just like an IED looks like litter before it kills you).
Linux is no more secure than Windows in the big picture. They both lack capability based security, and thus both suck.
Capability based security isn't a magic bullet, it's more like being able to keep the enemy out of your territory.

Saturday, November 5, 2011


Eric Drexler asks some interesting questions, and has points for discussion... here are my answers.



Quiz -
1. Because traditionally the user was (or knew, or worked with) the programmer, and was assumed to know what they were doing.

2. In the past, the odds of a rogue program were almost exactly zero, so using administrative time and effort to further segregate things would have been wasted.

3. The system calls supplied in Linux, Windows, etc. are not geared towards it, so it is neither natural nor easy to grant limited capabilities to a program.
  Virtualization, and the rise of VMware and its competitors, is a direct result of the missing capabilities model in contemporary operating systems. In such an environment the program (a virtual machine) is given specific access to a set of resources at run time. (There is a small sketch of the one narrow exception Linux does have, just after these answers.)

4. CApability Based SECurity (Cabsec for short) is the model of choice. I've tagged some entries at Delicious with cabsec; you can review them here:
http://www.delicious.com/ka9dgx/cabsec
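As promised above, the one narrow corner of Linux that already behaves like a capability grant is an inherited file descriptor: a parent can open a file and pass the descriptor to a child at launch. The sketch below is plain Python, with the file path chosen only for illustration. The catch, and the reason this doesn't count as capability security, is that nothing stops the child from also calling open() on anything else it pleases.

```
# The one capability-like grant Linux already has: an already-open file
# descriptor handed to a child process at launch time. The child can use
# *this* file without naming it -- but it still keeps the ambient right to
# open() anything else, which is exactly the problem cabsec would fix.
import os, subprocess, sys

if len(sys.argv) > 1 and sys.argv[1] == "--child":
    fd = int(sys.argv[2])                 # descriptor granted by the parent
    print("child read:", os.read(fd, 100))
else:
    fd = os.open("/etc/hostname", os.O_RDONLY)       # parent grants one file
    subprocess.run([sys.executable, __file__, "--child", str(fd)],
                   pass_fds=[fd])                     # descriptor survives exec
```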

I'm interested in helping out if you're gearing up for a project.

Thought and discussion -
1. It does it this way because historically the user and programmer were the same person, or at least in the same organization. It made sense to give each group a sandbox, and permissions to read a common set of tools. All of this was determined by system administrators. The groups then managed their own affairs within their sandbox.

Needless to say, that model is insane to use in an era of modern code.

2.  The cost is refactoring programs to accommodate a new security paradigm, where resources are supplied to a program instead of just grabbed ad hoc.
The benefit is that the user would have explicit control over the resources given to a program, which can prevent a large class of security problems.
If widely adopted, it would make the internet more secure by decreasing the population of hosts which can be compromised and exploited.

3. There are no widely used capability based operating systems that I'm aware of at this time. There are features of existing systems that behave like capabilities, and those should be promoted as such, to help popularize the model and move it into the realm of toolsets people will actually consider using.

Tuesday, December 21, 2010

A project in the works

I'm putting together a project to implement Cabsec on a small scale. I find myself wanting to play with an idea that doesn't seem to have any current implementations... so I'm recruiting a few friends to get it coded up, and boiled down to something that might be interesting to others.

Friday, April 16, 2010

A brilliant way to deal with spam.

I read this comment on a question about spam on MetaFilter, and came away inspired. He uses unique email addresses in a way that is pretty much the definition of a revocable capability. I've had other friends with the same idea in the past. Now to figure out how to do it for myself. A tiny sketch of the idea is below.
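Here is the pattern in Python, just to make it concrete; the domain and helper names are made up, and a real setup would live in your mail server's alias map rather than in a script.

```
# A tiny sketch of "one unique address per sender" as a revocable capability.
# The domain and helpers are made up; a real setup lives in the mail server.
import secrets

aliases = {}   # alias address -> the sender it was handed to

def grant(sender):
    alias = "me+" + secrets.token_hex(4) + "@example.com"
    aliases[alias] = sender
    return alias                 # hand this address to exactly one sender

def revoke(alias):
    aliases.pop(alias, None)     # once revoked, mail to it just bounces

def is_live(alias):
    return alias in aliases      # only live capabilities get delivered

newsletter = grant("some-newsletter.example")
print(is_live(newsletter))       # True: mail flows
revoke(newsletter)               # they leaked or sold the address? cut it off
print(is_live(newsletter))       # False: capability revoked
```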

Tuesday, March 16, 2010

Cabsec by example... the fun game

There is a fun game called Bubble Breaker... I like to play it while I'm waiting for things to compile, format, etc... but it has one big problem: it plays sounds, and doesn't provide a mute. I have a program that I trust, except for the annoying sounds.

In a cabsec world, I would simply not supply it with the ability to write to the sound channel, and it would still work. It's the inability to express my desire to simply NOT MAKE SOUND that is frustrating. Sound is a simple thing that doesn't permanently affect my system, yet I also have no way to express other, more crucial limits.

This is the heart of cabsec: the ability to explicitly supply capabilities to a program, instead of having to manually block off everything. A sketch of what such a launch might look like is below.
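Here's roughly what I'd like to be able to write. launch() and the capability names are invented for illustration; no real launcher with this API exists.

```
# A sketch of launching a program with an explicit capability list, assuming
# a hypothetical cabsec launcher. launch() and the capability names are
# invented for illustration.

def launch(program, capabilities):
    # A real launcher would start the program in an environment where the
    # only reachable resources are the ones listed; everything else,
    # including the sound device, simply does not exist for it.
    print("starting", program, "with:", ", ".join(sorted(capabilities)))

launch("bubble-breaker", {
    "display:window",       # draw its game board
    "input:pointer",        # take my clicks
    "storage:high-scores",  # one small private file
    # note what is NOT here: no "audio:output", so no annoying sounds,
    # and no network, and no access to the rest of my files
})
```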

Wednesday, March 3, 2010

Bush Era cybersecurity - my response

So, the Obama administration has declassified part of the "cybersecurity" planning of the Bush administration... the story hit Slashdot, and here's my response.




Initiative #9. Define and develop enduring "leap-ahead" technology, strategies, and programs. One goal of the CNCI is to develop technologies that provide increases in cybersecurity by orders of magnitude above current systems and which can be deployed within 5 to 10 years. This initiative seeks to develop strategies and programs to enhance the component of the government R&D portfolio that pursues high-risk/high-payoff solutions to critical cybersecurity problems. The Federal Government has begun to outline Grand Challenges for the research community to help solve these difficult problems that require 'out of the box' thinking. In dealing with the private sector, the government is identifying and communicating common needs that should drive mutual investment in key research areas.

(Emphasis mine)

I propose instead that we consult the results of the previous R&D work that has been active in this area since the 1960s, and learn the lessons of problems already solved. This is low risk (as we've already paid for it), high payoff.

Let's get capability based security into the hands of the masses. This will remove their machines from the threat pool. It would also allow those inside the government to manage security in a much more granular (and thus more effective) manner.

This can be fixed, and it doesn't require a high risk, just due diligence, and hard work.

Monday, January 4, 2010

Capabilities, still out on the fringe, and misunderstood

I recently posted a comment on the Slashdot story "You won't recognize the internet in 2020", which said:

It's not the Internet switching fabric that is the problem, it's the end nodes. None of our PCs is provably secure, and it's highly likely none will be by 2020 either, as the money appears to be going into the wrong places in research. Capability Based Security has been around since the 1980s, and yet it's not even being funded to try to get it ready for widespread use by 2020.

Until the ends of the internet are secure, it's not going to be secure. It almost seems the money is always being spent in places where it won't really help the end user, but will allow more control by the authorities. (Or maybe I'm just a bit paranoid?)

Well, there is some hope, because it did get moderated up to +5 in short order. However, one of the replies to my comment shows there is still work to do in raising awareness of the benefits of capabilities:

Capability Based Security hinges on the operating system being inviolate. The problem is programmable computers by their very nature offer the opportunity to reprogram the whole system. This is not a bad thing, because it allows the same device to be used in various different ways (Linux, Windows, OSX etc) - diving deeper, it allows more efficient software (patches) to be added to the system by anyone with the desire to accomplish some task, or make the system run more efficiently.

With a capability based security system in place, OSs would collapse into one 'approved' version - and the general purpose nature of the computer would be lost (a game console would be the current model for such a system I would think).

I addressed this with a followup:

Actually, it's not the whole system that has to be inviolate, just the kernel. There are projects to produce a provably correct L4 microkernel, for example. This would allow the user to have a machine they could then trust to give away only the resources they chose.

Don't confuse a locked down kernel with a locked down computer. With the current OS selections you have, it's not possible to make that distinction, but it doesn't have to be this way. The problem boils down to the default-permissive environment that we're all used to thinking and modeling our systems on top of. Capability based systems are a default-deny environment, but you are free to give away as much as you want to a program of your choice.

So... there is some awareness, and cause for hope, but much work remains.

Thursday, December 3, 2009

A form to show off capabilities

The form below uses a capability (the string in the Token field) to replace the last data entry in The world's simplest capability demo... try entering some text and hitting submit. You'll see your text appear in the last slot on the linked page.


[Embedded form: a "Your text" field, a "Token (which entry do you wish to replace?)" field, and a Submit button.]



Less talk, more code

I can blog and talk for the rest of my life, and I doubt it would matter much. The fact is that it's very hard to wrap your mind around something as different as capabilities without some good old-fashioned examples to play with.

So... I've written one, using Google App Engine. It's a simple message board with the ability to append a message, and the ability to replace an existing message, provided you have a capability.

To make it easy, the capability strings are right out there for you to use and abuse. It's VERY simple code and easy to mangle, so please be gentle. The heart of it is a check along the lines of the sketch below.
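I won't paste the whole thing here, but the core of the capability-checked replace is roughly this; the sketch is plain Python for illustration, not the literal App Engine handler.

```
# A rough sketch of the capability check at the heart of the demo; plain
# Python for illustration, not the literal App Engine handler.
import secrets

messages = []          # the public message board
capabilities = {}      # token string -> index of the message it may replace

def append(text):
    messages.append(text)
    token = secrets.token_urlsafe(16)        # the unguessable string IS the
    capabilities[token] = len(messages) - 1  # right to replace this one entry
    return token

def replace(token, text):
    if token not in capabilities:            # no token, no authority
        raise PermissionError("unknown capability")
    messages[capabilities[token]] = text

t = append("hello world")
replace(t, "hello, capabilities")            # works, because we hold the token
print(messages)
```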





Tuesday, December 1, 2009

The Mine! Project - Capabilities for the web

I've watched the demo videos (the second video gets closest to capabilities), and it appears that The Mine! Project is going to be building capability based security for the web, without specifically mentioning it. This will be a very nice step forward, in my humble opinion.

They want to allow someone to give a "minekey" as a proxy for a relationship. This could be considered equivalent to a capability. The minekey can be revoked at any time, which is a pretty good method of control. They don't try to solve covert channels, nor do they try to build in DRM.

Doc Searls talks about VRM as a way to counteract the mining of our data, and that approach, along with the efforts of others, seems to be yielding fruit like this, and in other areas.

Sunday, November 29, 2009

Capabilities explained... a Google tech talk worth watching

I highly recommend you watch http://www.youtube.com/watch?v=EGX2I31OhBE which is a Google tech talk video about Object Capabilities, heavy on practical reasons, explanation, and conversational inquiry from the audience.

It might take you a while to unpack all of the jargon that has arisen over the decades of Capabilities based research, but you'll walk away with knowledge well worth the effort.

Wednesday, April 29, 2009

Missing the point on Slashdot... yet again

Slashdot gets close to the truth... and then totally blows it, as usual. 

A recent story pointed out that there will be funding for Minix, one of the goals being to figure out how to build a provably secure OS kernel.

From the project proposal: (warning: pdf)

The most serious reliability and security problems are those relating to the operating system. The core problem is that no current system obeys the POLA: the Principle Of Least Authority. The POLA states that a system should be partitioned into components in such a way that an inevitable bug in one component cannot propagate into another component and do damage there. Each component should be given only the authority it needs to do its own job and no more. In particular, it should not be able to read or write data belonging to another component, read any part of the computer’s memory other than its own address space, execute sensitive instructions it has no business executing, touch I/O devices it should not touch, and so on. Current operating systems violate this principle completely, resulting in the reliability and security problems mentioned above.

So... in my opinion, this is the key takeaway: build something actually secure, instead of trotting out the tired old "language X is insecure" chestnut or other assertions.

Slashdot encourages responses based on emotion and ego, and doesn't provide proper incentives to actually help discover truth and learn new things. There has to be a better way.

Thursday, May 22, 2008

Capabilities Summarized

One of the challenges in digging up information about capability based security is finding Google search terms that have value. It's like learning magic spells. I learned a new one from the video in the previous post...

Ambient Authority - google search

Here's a nice post that summarizes a lot of what Capabilities is all about from Julien Couvreur.

Tuesday, May 20, 2008

Object Capabilities for Security - YouTube

This video at YouTube looks very interesting... I hope to be able to watch the whole thing later today.

As an educational resource it's pretty good so far.

Update 5/22/2008 - It was VERY useful, and I learned some new terms, like Ambient Authority, and got some new examples to use.

Saturday, May 17, 2008

AppArmor

AppArmor is a least-privilege system for Linux which uses the Linux Security Modules interface. Every "armored" application has a profile which specifies the privileges the program requires to do its job. It's not clear to me right now whether this project is still maintained, as Novell was leading it but has since bowed out by laying off the programmers it had on the project.

Tony Jones, while giving an overview of AppArmor to the Linux Kernel Mailing List, said:

AppArmor is *not* intended to protect every aspect of the system from every other aspect of the system: the intended usage is that only a small fraction of all programs on a Linux system will have AppArmor profiles. Rather, AppArmor is intended to protect the system against a particular threat.

Now, this isn't a true capabilities system, in that the profiles use names and are explicit, but it does help enforce least privilege, so it's a very strong step in the right direction. A sketch of what a profile looks like is below.
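For flavor, a profile reads something like this; the application path and the rules are made up for illustration, not taken from any real profile.

```
# /etc/apparmor.d/usr.bin.example -- hypothetical application, made-up rules
#include <tunables/global>

/usr/bin/example {
  #include <abstractions/base>

  # read its own configuration, append to its own log, speak plain TCP,
  # and nothing else -- no sound, no home directory, no spawning programs
  /etc/example/** r,
  /var/log/example.log w,
  network inet stream,
}
```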

BeyondTrust | Privilege Manager

I came across BeyondTrust, which might be useful for people in a Windows environment, because it helps move users toward a least privilege configuration. It's definitely not a capabilities based system, but still, you might find it useful.

It allows an administrator to grant the right to run certain things without handing over the administrative password.

Persevere - First impressions

The Persevere project is an open source set of tools for persistence and distributed computing using intuitive standards-based JSON interfaces of HTTP REST, JSON-RPC, JSONPath, and HTTP Channels. The core of the Persevere project is the Persevere Server. The Persevere server includes a Persevere JavaScript client, but the standards-based interface is intended to be used with any framework or client.

The interesting thing about this is that they mention capabilities in their security model, and they offer support for pluggable security modules. So, even if they don't do "pure" capabilities, someone else could add a library that does.

Friday, May 16, 2008

A tweet in the wilderness, calling for help.

Thomas Hawk recently tweeted:
I wish Blogger's moderate comments system was smart enough to whitelist people. I hate having to reapprove legit users over and over again.

Now, this is a call for capabilities if I've ever seen one. He wants to be able to delegate a capability to someone.

OATH: Open sourcing the mark of the beast??

OATH - initiative for open authentication | All users, all devices, all networks.

Ok, this one creeps me out a bit... they really, REALLY, REALLY want to make sure the user who is connected to whatever little box really is who they say they are. This project seems to want to build the backend to the REAL ID act of 2005.

Aside from my personal aversion, they are a STRONG IDENTITY project. You would have one set of keys to the kingdom that would open everything. One ring to rule them all.

LBNL: Delegating responsibility in digital systems

Here's an interesting article from LBNL about Object Capability Systems, which they call ocaps. They argue that the need to have a user to blame is one of the reasons that drove the adoption of the ACL security model. They then go on to introduce Horton, a system to help merge the best features of the ACL and Capabilities models.

I don't understand the rest of it for now; it's way over my head. But I now understand a bit more about the ACL vs. Capabilities history, and that's enough for me.

OAuth

OAuth is:

An open protocol to allow secure API authentication in a simple and standard method from desktop and web applications.


OAuth is a limited implementation of capabilities. A token allows proxy access to a resource on the internet. This eliminates the need to share authentication information.

They do a lot of great things. Their home page is clean and simple. They have example code in many programming languages. They have a FAQ section, chat and a wiki.

What is Capabilities Digest?

I'm pushing an agenda, Capabilities as a means of fixing a lot of the problems with computer security. The most effective way to push an agenda in 2008 appears to be the same one that has worked for a very long time... find an area to focus on, and try to occupy it. Traditionally this occupation is in terms of knowledge or skill.

I'm spending a lot of time, and enduring innumerable frustrating searches, on this topic. Capability based security is not even close to Google friendly. Because there isn't a specific set of buzzwords to describe the concepts involved, the terms that do get used are sufficiently common that most searches return a ton of noise. I've spent a lot of time finding things of interest, so I'm sharing what I find on this topic in this one space.

I'll keep original articles and other thoughts at my regular blog, and occasionally link back to it.

I'll also be pointing out things that are related, but near misses.

For example, I came across OAuth, which is about delegating access to Internet accessible resources, in a standard way, without the need to share authentication information. It's a good step in the overall evolution of security, but is not capabilities oriented.

I'll also be using Labels (tags) on the posts, with Hit or Miss to indicate if a given post is about a find that is or is not truly capabilities based.

In summary... I'm setting myself up as a gatekeeper to judge what is/isn't capabilities.