IP: Beyond Carnivore: FBI Eyes Packet Taps (fwd)
Original Message from Sun, 21 Oct 2001 14:14:50 +0200 (MET DST):
-- 
Eugen* Leitl <leitl>
______________________________________________________________
ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.lrz.de/~ui22204
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3
---------- Forwarded message ----------
Date: Sun, 21 Oct 2001 06:07:48 -0400
From: David Farber
Reply-To: farber@cis.upenn.edu
To: ip-sub-1@majordomo.pobox.com
Subject: IP: Beyond Carnivore: FBI Eyes Packet Taps
From: Monty Solomon
Subject: Beyond Carnivore: FBI Eyes Packet Taps
October 18, 2001
Beyond Carnivore: FBI Eyes Packet Taps
By Max Smetannikov
Expect the FBI to expand its Internet wiretapping program, says a source familiar with the plan.
Stewart Baker, a partner with law firm Steptoe & Johnson, is a former general counsel to the National Security Agency. He says the FBI has spent the last two years developing a new surveillance architecture that would concentrate Internet traffic in several key locations where all packets, not just e-mail, could be wiretapped. It is now planning to begin implementing this architecture using the powers it has under existing wiretapping laws.

[Commentary:]

The info in the Interactive Week article is basically the same info from the National Journal article previously posted here, which leads me to suspect that Baker is simply repeating the same rumor to everyone who'll write about it. But it is interesting that they say "router manufacturers" here. I believe that what Baker "heard" was simply the FBI going out to people like Cisco, some of the larger network providers, and people responsible for provisioning NAPs, and saying "we want you to implement the additions to IPSEC that the IETF refused to implement."

(For background: the FBI, DOJ, and DoD -- the "usual suspects" -- presented a series of recommendations to the IETF last year that would have created "packet accounting" features in IPSEC protocols and future IP protocols. They were rejected by the IETF, which stated at the time that building exploits into a protocol designed for security was counterintuitive. See http://www.ietf.org for more info.)

Now, it is entirely possible that, given the public pressure arising from the 9-11 attacks, individual manufacturers (read: Cisco) might bow to such pressure and build some of these features into future products AND into future software builds for existing products. So I think this is what Baker "heard" -- not that the FBI has any such system in place or would have one anytime soon, but rather that the FBI will re-present these proposals one-on-one to Cisco and a few major network providers, and in effect get the impact of their previously rejected proposals implemented to cover perhaps 80% or more of the traffic in the domestic US.

And besides access to the majority of US packet traffic, they would have access to some part of international traffic too. It's beyond the scope of this email, but keep in mind that many non-US NAPs are really connected to one another VIA the USA: in effect, bug the US NAPs and you get access to almost all the traffic from Pacific Rim countries like Japan, Australia, etc., and to small parts of Western Europe, not to mention parts of Africa and the Middle East that uplink via satellite instead of a wired connection.

An enterprising reporter might make an interesting article out of trying to track down exactly what parts of the IETF proposal the FBI wants (Declan?), and someone could post copies of the draft proposal as first released at ietf.org (JYA?). But I digress :)
http://www.interactiveweek.com/article/0,3658,s%3D605%26a%3D16678,00.asp
For archives see: http://www.interesting-people.org/archives/interesting-people/
All the more reason to use Linux routers and firewalls. Especially if Cisco pulls a Larry Ellison.

-- 
Harmon Seaver, MLIS
CyberShamanix
Work 920-203-9633  Home 920-233-5820
hseaver@cybershamanix.com
http://www.cybershamanix.com/resume.html
On Sun, 21 Oct 2001, Harmon Seaver wrote:
All the more reason to use Linux routers and firewalls. Especially if Cisco pulls a Larry Ellison.
Nope, Plan 9. http://plan9.bell-labs.com

-- 
 ____________________________________________________________________
 The people never give up their liberties but under some delusion.
                                          Edmund Burke (1784)

    The Armadillo Group       ,::////;::-.          James Choate
    Austin, Tx               /:'///// ``::>/|/      ravage@ssz.com
    www.ssz.com            .',  ||||    `/( e\      512-451-7087
                           -====~~mm-'`-```-mm --'-
 --------------------------------------------------------------------
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall, secure on installation without you needing to tweak stuff. Hell, you can even tell it to encrypt swap pages.

----------------------Kaos-Keraunos-Kybernetos---------------------------
 + ^ + :Surveillance cameras|Passwords are like underwear. You don't /|\
  \|/  :aren't security.  A |share them, you don't hang them on your/\|/\
<--*-->:camera won't stop a |monitor, or under your keyboard, you   \/|\/
  /|\  :masked killer, but  |don't email them, or put them on a web  \|/
 + v + :will violate privacy|site, and you must change them very often.
--------_sunder_@_sunder_._net_------- http://www.sunder.net ------------

On Sun, 21 Oct 2001, Jim Choate wrote:
On Sun, 21 Oct 2001, Harmon Seaver wrote:
All the more reason to use Linux routers and firewalls. Especially if Cisco pulls a Larry Ellison.
Nope, Plan 9.
On Sun, 21 Oct 2001, Sunder wrote:
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall,
<shrug> You mean there are OS'es that don't?
secure on installation without you needing to tweak stuff.
<shrug> You mean all(!) OS'es don't do this already?
Hell you can even tell it to encrypt swap pages.
<shrug> You mean all OS'es don't allow you to mount individual filesystems through an encryption layer?

- Authored by the same Bell Labs crew that wrote Unix in the first place. Plan 9 was specifically designed to 'fix' the problems of Unix. It of course has its own problems. There is active support by the authors currently.
- Has had many years of production use internal to Bell Labs.
- Open Source, no license required to build and distribute your own version.
- Fully distributed in both process and file space.
- Has a unique three (3) kernel approach: I/O - Auth, File, Process.
- No 'root' user.
- Supports IPv6 (default), IPv4, and IL (it's custom to Plan 9).
- Filesystem is fully transitive; everything is treated like a file. This creates some unique opportunities to make publicly shared but privately maintained resource pools. Hangar 18 is an attempt to do just this.
- The filesystem is structured and featured in such a way that RDBMS sorts of solutions are moot. These functions are built into the filesystem itself (though not through SQL compliance).
- Encryption (currently DES, needs fixing) built right in.
- Doesn't use passwords. Instead it uses tickets (i.e. certificates).
- Anonymity features with respect to both process and file space are not going to be hard to build in; Pike estimated at one point about 150 lines of rc besides the actual crypto algorithm.
- Global mobile log-in out of the box.
- Has a wickedly new GUI.
- Supports Inferno (run-time included) so that you can access one of the leading 'Internet Appliance' work environments. Plan 9 isn't real-time, but Inferno is. (It makes my Lego Mindstorm look like a directory tree; makes programming real-time hardware operations rather easy.)

http://plan.bell-labs.com

Another Open Source OS to look at for inspiration is Unununium. It is a kernel-less OS; everything is a module that can be loaded in/out at run time as required. Has some very interesting applications with respect to distributed computing.
There is a working implementation available from SourceForge.
on Sun, Oct 21, 2001 at 10:30:57AM -0500, Jim Choate (ravage@einstein.ssz.com) wrote:
On Sun, 21 Oct 2001, Sunder wrote:
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall,
<shrug> You mean there are OS'es that don't?
secure on installation without you needing to tweak stuff.
<shrug> You mean all(!) OS'es don't do this already?
Hell you can even tell it to encrypt swap pages.
<shrug> You mean all OS'es don't allow you to mount individual filesystems through an encryption layer?
- Authored by same Bell-Labs crew that wrote Unix in the first place. Plan 9 was specifically designed to 'fix' the problems of Unix. It of course has its own problems. There is active support by the authors currently.
This says nothing about current development. Word I've heard (from someone tangentially involved with the project) was that the release was something of a desperation move. As someone who watches free software licences closely, the Plan 9 license is one of the more twisted bits of corporate-authored licenses. Not necessarily bad, but it reeks of compromise clauses speaking to internal battles. Rumor was that a codebase that had been stable for a couple of years saw a slew of commits in the weeks leading to the public release.
- Has had many years of production use internal to Bell - Labs.
How about its external use track record?
- Open Source, no license required to build and distribute your own version.
The license is *not* OSI certified, nor is it considered Free Software by the FSF.

OSI approved licenses list: http://www.opensource.org/licenses/index.html
FSF discussion of Plan 9 License: http://www.fsf.org/philosophy/plan-nine.html
- Fully distributed in both process and file space
Meaning...?
- Has a unique three (3) kernel approach; I/O - Auth, File, Process
- No 'root' user.
This is a plus. There are other systems which provide this, from Guardian GNU/Linux to, IIRC, Jon Shapiro's EROS. EROS shares a number of design similarities with Plan 9, as I understand, though I can't admit to more than a nodding acquaintance with either.
- Supports IPv6 (default), IPv4, and IL (it's custom to Plan 9).
Ditto GNU/Linux.
- Filesystem is fully transitive; everything is treated like a file. This creates some unique opportunities to make publicly shared but privately maintained resource pools. Hangar 18 is an attempt to do just this.
What does this mean? How does this compare with, say, GNU/Linux and /proc?
- The filesystem is structured and featured in such a way that RDBMS sorts of solutions are moot. These functions are built into the filesystem itself (though not through SQL compliance).
How does this compare with, say, journaled filesystems? I'm not challenging, I don't understand the statement above and am not familiar with the technology.
- Encryption (currently DES, needs fixing) built right in.
Built into what? Filesystems? Networking? How does this differ from a GNU/Linux approach of providing encrypted filesystems and/or FreeSWAN and/or SSH as modules and/or userspace.
- Doesn't use passwords, Instead it uses tickets (ie certificates).
...which are granted via...? Passwords, perhaps?
- Anonymity features with respect to both process and file space are not going to be hard to build in; Pike estimated at one point about 150 lines of rc besides the actual crypto algorithm.
- Global mobile log-in out of the box.
- Has a wickedly new GUI.
Oh, now *that's* compelling....
- Supports Inferno (run-time included) so that you can access one of the leading 'Internet Appliance' work environments. Plan 9 isn't real-time, but Inferno is. (It makes my Lego Mindstorm look like a directory tree, makes programming real-time hardware operations rather easy)
What's Inferno?

-- 
Karsten M. Self <kmself@ix.netcom.com>    http://kmself.home.netcom.com/
 What part of "Gestalt" don't you understand?        Home of the brave
  http://gestalt-system.sourceforge.net/             Land of the free
Free Dmitry! Boycott Adobe! Repeal the DMCA!  http://www.freesklyarov.org
Geek for Hire                    http://kmself.home.netcom.com/resume.html
On Sun, 21 Oct 2001, Karsten M. Self wrote:
This says nothing about current development. Word I've heard (from someone tangentially involved with the project) was that the release was something of a desperation move. As someone who watches free software licences closely, the Plan 9 license is one of the more twisted bits of corporate-authored licenses. Not necessarily bad, but it reeks of compromise clauses speaking to internal battles. Rumor was that a codebase that had been stable for a couple of years saw a slew of commits in the weeks leading to the public release.
??? Plan 9 was released Open Source in 2000. Prior to that it had a weird 'no commercial use' clause. It apparently was intended as an internal-use-only project. Forces both internal and external began fighting for the release of the Rev. 2 code (which it turns out can't be done), so instead Pike and the others created a Rev. 3. They are now working on what is called the "2000 Release". Haven't had a chance to try it yet.
How about its external use track record?
None, see commentary above. I've been an avid follower of Plan 9 since '86, when the first papers started to appear. Its current state with respect to apps is about where Linux was in '92. With respect to the code, it works and works well. The fathers of Unix did as good a job here as well.
The license is *not* OSI certified, nor is it considered Free Software by the FSF.
<shrug> Ask me if I care. Read Lessig's "Code".
- Fully distributed in both process and file space
Meaning...?
Meaning that through your I/O server you see an effective pool of many processors on which you can then use to execute your programs. It means that the filesystem components that appear 'local' may actually not be. It means that things like backups and such can be left to the filesystem to take care of 'under the covers' so to speak.
- Filesystem is fully transitive, everything is treated like a file. This creates some unique opportunities to make publicly shared but privately maintaned resource pools. Hangar 18 is an attempt to do just this.
What does this mean? How does this compare with,
Transitive means that A mounts B, C mounts A and gets B free. Plan 9 does this, managed by a set of authorization layers for fine control, native. This means that when Hangar 18 goes online you can mount /hangar18 into your filespace (via Plan 9 or Linux NFS services) and you will get all the resources that Hangar 18 mounts through that point.

ftp is a good example. In Plan 9 you 'mount' the ftp server to your file system. If you ever go out and walk that part of the file space tree and request a file, it only then goes and gets it. You can control its lifetime (to manage disk space, for example) via local cache controls. A 'lazy update' mechanism, very efficient of network and local resources.
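[Editor's note: the transitive, lazy-fetch behaviour described above can be sketched roughly in Python. This is a toy illustration with invented names, not Plan 9's actual 9P protocol -- just the "A mounts B, C mounts A and gets B free" idea plus fetch-on-first-read caching.]

```python
# Toy sketch of transitive, lazy mounts. NOT Plan 9's 9P protocol;
# just an illustration of "A mounts B, C mounts A and gets B free".

class Namespace:
    """A file namespace that can mount other namespaces at paths."""

    def __init__(self, name, files=None):
        self.name = name
        self.files = dict(files or {})   # locally served files
        self.mounts = {}                 # mount point -> other Namespace
        self.cache = {}                  # lazily populated local cache

    def mount(self, point, other):
        self.mounts[point] = other

    def read(self, path):
        # Serve from local cache if we've already fetched this file.
        if path in self.cache:
            return self.cache[path]
        # Walk mount points: a path under a mount is resolved by the
        # mounted namespace -- recursively, so mounts are transitive.
        for point, ns in self.mounts.items():
            if path.startswith(point + "/"):
                data = ns.read(path[len(point) + 1:])
                self.cache[path] = data  # 'lazy update': fetch on first use
                return data
        return self.files[path]

# B serves a file; A mounts B; C mounts A and reaches B's file "for free".
b = Namespace("B", {"paper.txt": "plan 9 paper"})
a = Namespace("A"); a.mount("b", b)
c = Namespace("C"); c.mount("a", a)
print(c.read("a/b/paper.txt"))  # fetched (and cached) only now
```

Nothing crosses the "network" here until the first read walks into the mounted subtree, which is the disk- and bandwidth-friendly behaviour described for the ftp example.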
say, GNU/Linux and /proc?
Irrelevant comparison.
- The filesystem is structured and featured in such a way that RDBMS sorts of solutions are moot. These functions are built into the filesystem itself (though not through SQL compliance).
How does this compare with, say, journaled filesystems? I'm not challenging, I don't understand the statement above and am not familiar with the technology.
You can build a journaled filesystem layer onto Plan 9 through scripts that define how the various servers are supposed to journal the individual components.
- Encryption (currently DES, needs fixing) built right in.
Built into what?
The network layer. The traffic between any two Plan 9 boxes is encrypted with keys dependent upon the individual boxes (or larger classes if you desire) if the system is so configured. You can also use this to encrypt branches of your filesystem. Plan 9 provides SSH.
- Doesn't use passwords, Instead it uses tickets (ie certificates).
...which are granted via...?
However the resource owner chooses. I'm using 'small worlds network' models for my 'web of trust'...
Passwords, perhaps?
But the passwords don't go across the network, therefore they're not 'used' in the conventional sense.
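[Editor's note: the "password never crosses the network" idea can be sketched with a standard HMAC challenge-response. This is a generic construction for illustration, not Plan 9's actual ticket/certificate protocol.]

```python
# Generic HMAC challenge-response sketch -- NOT Plan 9's actual ticket
# scheme. Both sides hold a shared secret; only a keyed hash of a fresh
# challenge crosses the wire, never the secret itself.
import hmac, hashlib, os

SECRET = b"correct horse battery staple"  # shared out of band

def server_challenge():
    return os.urandom(16)                 # fresh nonce per attempt

def client_response(secret, challenge):
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(secret, challenge, response):
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = server_challenge()
response = client_response(SECRET, challenge)        # this crosses the wire
print(server_verify(SECRET, challenge, response))    # True
print(server_verify(b"wrong", challenge, response))  # False
```

An eavesdropper sees only the nonce and the digest; replaying the digest fails because the server issues a fresh nonce each time.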
- Has a wickedly new GUI.
Oh, now *that's* compelling....
It should be, X Windows sucks.
- Supports Inferno (run-time included) so that you can access one of the leading 'Internet Appliance' work environments. Plan 9 isn't real-time, but Inferno is. (It makes my Lego Mindstorm look like a directory tree, makes programming real-time hardware operations rather easy)
What's Inferno?
Another OS, intended for real-time control of "Internet Appliances". You run it alongside Plan 9.
on Sun, Oct 21, 2001 at 06:29:16PM -0500, Jim Choate (ravage@EINSTEIN.ssz.com) wrote:
On Sun, 21 Oct 2001, Karsten M. Self wrote:
This says nothing about current development. Word I've heard (from someone tangentially involved with the project) was that the release was something of a desperation move. As someone who watches free software licences closely, the Plan 9 license is one of the more twisted bits of corporate-authored licenses. Not necessarially bad, but it reeks of compromise clauses speaking to internal battles. Rumor was that a codebase that had been stable for a couple of years saw a slew of commits in the weeks leading to the public release.
??? Plan 9 was released Open Source in 2000.
Summer, June/July, IIRC. I've done a couple of look-ups since. There's been little additional news or information (I'm not saying none, I'm saying little). OpenBSD, a relatively little-known free 'nix, gets rather more press and community coverage. While there's nothing wrong with a small, dedicated core, I'm just commenting that there doesn't appear to be much broader appeal.
The license is *not* OSI certified, nor is it considered Free Software by the FSF.
<shrug> Ask me if I care.
A fair number of people respect the opinions of the OSI and FSF, if only because they don't feel like combing through licensing terms themselves. I'm active on the OSI's license-discuss list, and see quite a few proposed licenses and terms. I'm rather convinced that novelty, all else being equal, is bad. Compellingly advantageous licensing language may be of some interest. The MozPL is the last license I'd consider to have provided this (it's a community-friendly mixed-mode copyleft + proprietary-use license). Poor licensing choices are one of several key modes of failure for free software projects. If Plan 9 proceeds forward, I expect to see another two or three significant licensing revisions.
Read Lessigs "Code".
At my side. What specifically?
- Filesystem is fully transitive, everything is treated like a file. This creates some unique opportunities to make publicly shared but privately maintaned resource pools. Hangar 18 is an attempt to do just this.
What does this mean? How does this compare with,
Transitive means that A mounts B, C mounts A and gets B free. Plan 9 does this, managed by a set of authorization layers for fine control, native. This means that when Hangar 18 goes online you can mount /hangar18 into your filespace (via Plan 9 or Linux NFS services) and you will get all the resources that Hangar 18 mounts through that point. ftp is a good example. In Plan 9 you 'mount' the ftp server to your file system. If you ever go out and walk that part of the file space tree and request a file it only then goes and gets it. You can control its lifetime (to manage disk space for example) via local cache controls. A 'lazy update' mechanism, very efficient of network and local resources.
Interesting. Some similarity then with the autofs system under GNU/Linux, in which remote filesystems may be mounted by various methods, including FTP, though transitivity isn't generally included IIRC.
- Encryption (currently DES, needs fixing) built right in.
Built into what?
The network layer. The traffic between any two Plan 9 boxes is encrypted with keys dependent upon the individual boxes (or larger classes if you desire) if the system is so configured. You can also use this to encrypt branches of your filesystem. Plan 9 provides SSH.
Always or at discretion? Again, possible with GNU/Linux, though not as trivial as desirable at present.
But the passwords don't go across the network, therefore they're not 'used' in the conventional sense.
Interesting. Somewhat like, say, SSH RSA key authentication, but at OS level?

Peace.
On Sun, 21 Oct 2001, Karsten M. Self wrote:
Summer, June/July, IIRC. I've done a couple of look-ups since. There's been little additional news or information (I'm not saying none, I'm saying little). OpenBSD, a relatively little-known free 'nix, gets rather more press and community coverage.
You need to be on the mailing list. There are almost constant changes. You can also visit the wiki link at Bell Labs for the most current info.
proposed licenses and terms. I'm rather convinced that novelty, all else being equal, is bad.
Can't disagree more.
on Sun, Oct 21, 2001 at 08:30:19PM -0500, Jim Choate (ravage@einstein.ssz.com) wrote:
On Sun, 21 Oct 2001, Karsten M. Self wrote:
Summer, June/July, IIRC. I've done a couple of look-ups since. There's been little additional news or information (I'm not saying none, I'm saying little). OpenBSD, a relatively little-known free 'nix, gets rather more press and community coverage.
You need to be on the mailing list. There are almost constant changes. You can also visit the wiki link at Bell Labs for the most current info.
I'll stop by.
proposed licenses and terms. I'm rather convinced that novelty, all else being equal, is bad.
Can't disagree more.
Care to expand (off list if you wish)? It's an area of interest.

Nutshell argument: license interactions are factorial. Interaction complexity reduces the overall value of a codebase, and tends to marginalize minority licenses. By various measures (Debian package listings, SourceForge projects), the GPL or LGPL are applied to some 84% of free software. A tally from January of this year:

Of the roughly 8,800 listed projects with a license on SourceForge:

    8,384 are based on an OSI approved license.
      208 are based on an other or proprietary license.
      235 are public domain.

Of the OSI licenses, the breakdown is as follows (note that results may vary daily as projects are added and removed):

    GNU GPL:      6,178   74%
    GNU LGPL:       844   10%
    BSD:            480    6%
    Artistic:       302    4%
    MozPL:          114    1%
    MIT:            110    1%
    Python:          78    1%
    QPL:             60    1%
    zlib/libpng:     46    1%
    IBM-PL:          10    1%
    MITRE (CVW):      4    0%

As mentioned, 84% of projects are licensed under the GPL or LGPL. Compatibly licensed projects include software under the BSD (revised), MIT, Artistic, and Python (most recent) licenses. Major QPL projects are licensed compatibly with the GPL. Major MozPL projects are licensed compatibly with the GPL. Given some room for variance (there are non-compatible BSD and MozPL projects), some 90-95% of projects are likely licensed under terms compatible with the GNU GPL.

Noncompatibility puts you in a rather small mindshare camp, with a serious sacrifice of network effects (Metcalfe's Law). This does assume that a project's intent is to become relatively widely used and supported by broad mindshare. As these are among the principal technical advantages offered by free software / open source, it's not an advantage to discard lightly.

Per the FSF's analysis, Plan 9 is, again, not open source, free software, or GPL compatible. This is a significant strategic handicap. Moreover, the bulk of terms in the Plan 9 license serve the corporate interests of the software's owner -- there's little quid pro quo for the developer or community.
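[Editor's note: the quoted shares can be rechecked from the raw tallies given in the post; a quick sketch, using the figures as stated:]

```python
# Recompute the license shares from the SourceForge tallies quoted above.
tallies = {
    "GNU GPL": 6178, "GNU LGPL": 844, "BSD": 480, "Artistic": 302,
    "MozPL": 114, "MIT": 110, "Python": 78, "QPL": 60,
    "zlib/libpng": 46, "IBM-PL": 10, "MITRE (CVW)": 4,
}
total = 8384  # projects under an OSI-approved license, as given

for name, n in tallies.items():
    print(f"{name:12s} {n:5d}  {100 * n / total:5.1f}%")

# The "84%" claim is GPL plus LGPL:
print(f"GPL + LGPL share: {100 * (6178 + 844) / total:.0f}%")
```

GPL plus LGPL comes to 7,022 of 8,384, i.e. about 84%, consistent with the figure in the argument.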
This is typical of corporate licenses, particularly first drafts. The evolution of IBM's own Jikes licensing is instructive. If the code exists for its own purposes, it may not matter. From a broader community perspective, you could do better.

Peace.
On Sun, 21 Oct 2001, Karsten M. Self wrote:
Nutshell argument: license interactions are factorial.
How so? Proof?
Interaction complexity reduces overall value of a codebase, and tends to marginalize minority licenses.
Interaction for who, the author or the user?

All licenses start out in the minority. It's a competition, in a way.

I've also got some questions about exactly which of the Plan 9 licenses the reviews were for. There have been several over the last couple of years. As objections have been raised, they've been addressed. I'll send a URL along to the list...
on Mon, Oct 22, 2001 at 02:20:34AM -0500, Jim Choate (ravage@einstein.ssz.com) wrote:
On Sun, 21 Oct 2001, Karsten M. Self wrote:
Nutshell argument: license interactions are factorial.
How so? Proof?
Sorry. Combinatorial. Not quite as extreme.

From a legal standpoint, interactions of all combinations of licenses must be considered. The interesting cases usually reduce to a much smaller number. The trend in free software licensing has been a strong reluctance to accept novel licenses. A strong case for benefit is generally requested; many licenses boil down to ego, corporate politics, or a failure to understand free software / open source concepts -- the licenses simply aren't either (again, Plan 9 is a case in point). There's also been a tendency among major projects to seek compatibility (usually through dual or multiple licensing) with the GPL, Sun and Mozilla being two cases in point.
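[Editor's note: the combinatorial point can be made concrete. With n distinct licenses in a codebase, every unordered pair must be checked for compatibility -- C(n, 2) = n(n-1)/2 checks, growing quadratically. A small sketch with hypothetical license names:]

```python
# Pairwise license-interaction count: with n distinct licenses in play,
# there are C(n, 2) = n*(n-1)/2 unordered pairs to check for
# compatibility -- quadratic growth, hence the pressure to converge
# on a handful of well-understood licenses.
from itertools import combinations

licenses = ["GPL", "LGPL", "BSD", "MIT", "MozPL", "Plan 9"]
pairs = list(combinations(licenses, 2))
print(len(pairs))  # 6 licenses -> 15 pairwise checks

for n in (2, 6, 12, 30):
    print(n, "licenses ->", n * (n - 1) // 2, "pairwise checks")
```

Doubling the number of licenses roughly quadruples the compatibility checks, which is the overhead argument in a nutshell.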
Interaction complexity reduces overall value of a codebase, and tends to marginalize minority licenses.
Interaction for who, the author or the user?
Interaction between licenses. It's more overhead for the developer to deal with.

Case in point: Tom's Root/Boot, GNU/Linux on a floppy, 1.77 MB. The licenses themselves comprised some 50KB, significant for this task. Terms for compliance that require license and binary to occupy the same media in use are not acceptable for the technical task (fortunately none of the major free software licenses require this).

OpenBSD has eliminated several packages from Daniel J. Bernstein due to his licensing clauses, despite their being technically excellent (if non-standards-compliant) software.

Any number of proposals cross the OSI's door which exclude specific types of use or transfer. It's too much overhead for developers to consider most of these; they'll stick to a half-dozen or so known (or highly similar) licenses. Again, GPL, LGPL, BSD/MIT, and Mozilla cover a broad range of strategic interests.
All licenses start out in the minority. It's a competition, in a way.
What are you competing for? What characteristics of a license will "win" the competition? This isn't software domination; it's more a protocol for collaborative development. Once you've got that nailed down, stop dicking with the damned lawyers and start writing code.

Peace.
On Mon, 22 Oct 2001, Karsten M. Self wrote:
The trend in free software licensing has been strong reluctance to accepting novel licenses.
Right, that's why there are so many of them out there...
Interaction for who, the author or the user?
Interaction between licenses. It's more overhead for the developer to deal with.
Interaction between licenses for who?...

You're using a flawed model. There are three 'roles': author, distributor, user. Any license must interact with all three roles. The fact is that the license doesn't affect the developer nearly as much as the distributor and the end user. You're only looking at a single layer of interactions.

There is another aspect you're completely ignoring: unless one license prohibits(!) use with another license, the interaction (outside of "Can I make money off it?") is nil -- both for developers and users.
All licenses start out in the minority. It's a competition, in a way.
What are you competing for? What characteristic of a license will "win" the competition?
Utility, which license brings the maximum benefit to all three roles.
This isn't software domination,
Yes, it is.
it's more a protocol for collaborative development. Once you've got that nailed down, stop dicking with the damned lawyers, and start writing code.
One shoe doesn't fit all.
on Tue, Oct 23, 2001 at 05:13:38PM -0500, Jim Choate (ravage@einstein.ssz.com) wrote:
On Mon, 22 Oct 2001, Karsten M. Self wrote:
The trend in free software licensing has been strong reluctance to accepting novel licenses.
Right, that's why there are so many of them out there...
Noting, as above, the majority having very low adoption.
Interaction for who, the author or the user?
Interaction between licenses. It's more overhead for the developer to deal with.
Interaction between licenses for who?...
You're using a flawed model. There are three 'roles': author, distributor, user. Any license must interact with all three roles. The fact is that the license doesn't affect the developer nearly as much as the distributor and the end user. You're only looking at a single layer of interactions.
I'm primarily looking at author/developer interactions. However, all actors are considered.

Distributors are concerned with licensing -- this is generally the exposure point for commercial liability, and high-profile distributors will have an aversion to novel or obscure licenses. To this extent, licensing is somewhat like cryptography: well-established, well-understood licenses which have stood the test of time are considered lower risk. Again, corporate licenses tend to speak to ghosts in the corporate closet (IBM: patents; Sun: compatibility and standards control; Corel: Canadian law).

Another advantage of selecting a widely used license is that it acquires a strong institutional resistance to sudden change. In the DJB instance cited previously, and the IPFilters licensing "revision" which also affected OpenBSD, licenses which were authored at the sole discretion of a single author were unilaterally modified, or had their interpretation unilaterally changed, to terms not acceptable for broader use. This is less likely where a broader constituency is represented. The authorship / revision issue is one I've put some thought to; there are a few possible solutions.
There is another aspect you're completely ignoring, unless one license prohibits(!) use with another license the interaction (outside of "Can I make money off it?") is nil - both for developers and users.
Free software largely precludes significant revenue streams from software sales. Not always -- Red Hat continues to generate significant revenues from box and corporate sales, though it is moving to a subscription (Red Carpet) and services model. Still, in large part, your benefit is going to come from indirect revenues: services, hardware, publications (e.g.: O'Reilly). Eric Raymond's list from CatB still largely stands. In this case, appeal to developers is *quite* significant, as this distributes your cost structure effectively to unaffiliated partners. This particular lesson is one that a large number of free software businesses (including the one I was affiliated with for 18 months) fail to grasp. There's little specific benefit in direct control of code. IBM, incidentally, is a company that Seems To Get It(tm). Users, similarly, should be concerned with long term viability of code. A project with a single sponsor, and one with troubling financial prospects to boot, doesn't gain much credibility despite their "free software" status. Licensing compatibility emphasizes the inherent "code escrow" powers of free software licensing by providing the possibility that the compelling features of the project might be continued, or at least incorporated into another project, should the initial sponsor fail.
All licenses start out in the minority. It's a competition, in a way.
What are you competing for? What characteristic of a license will "win" the competition?
Utility, which license brings the maximum benefit to all three roles.
Which is served by a mix of factors, significant among them, license compatibility.
This isn't software domination,
Yes, it is.
No. Software itself competes on a different level. It's influenced by the licensing, but isn't fully dictated by it. Poor license choice can severely hamper adoption, development, use, and credibility.
it's more a protocol for collaborative development. Once you've got that nailed down, stop dicking with the damned lawyers, and start writing code.
One shoe doesn't fit all.
There are a good four or five well established "shoes" (GPL, LGPL, BSD/MIT, Mozilla, IBM PSL) which serve a broad range of needs, with a pretty good track record for interoperability. But that's just one idiot to another. Peace. -- Karsten M. Self <kmself@ix.netcom.com> http://kmself.home.netcom.com/ What part of "Gestalt" don't you understand? Home of the brave http://gestalt-system.sourceforge.net/ Land of the free Free Dmitry! Boycott Adobe! Repeal the DMCA! http://www.freesklyarov.org Geek for Hire http://kmself.home.netcom.com/resume.html
On Tue, 23 Oct 2001, Karsten M. Self wrote:
Noting, as above, the majority having very low adoption.
No Open Source license has that big a penetration into the market when compared to non-Open Source. Half a smidgen and a smidgen, when faced with a dump truck... In addition, the Open Source movement is less than 20 years old, approximately ten years old with respect to providing real alternatives to closed source. That's a pretty short period, as things go, to be making claims of stability of any sort, including licenses.
I'm primarially looking at author/developer interactions. However, all actors are considered.
Distributors are concerned with licensing
No, they're concerned with making money. They have to live with licenses.
exposure point for commercial liability, and high-profile distributors will have an aversion to novel or obscure licenses.
Sure didn't slow down the Open Source folks.
To this extent, licensing is somewhat like cryptography: well established, well understood licenses which have stood the test of time, are considered lower risk.
By this argument you should be a real proponent of the Closed Source model ;)
Again, corporate licenses tend to speak to ghosts in the corporate closet
ALL licenses speak to ghosts in the closet. If somebody didn't have some sort of goal involved they wouldn't license the software in the first place (it wouldn't even be written). The distinction between some group of teenagers or college kids and your favorite corporation is specious. The only difference is the number of marbles in the bucket.
Another advantage of selecting a widely used license is that it acquires a strong institutional resistance to sudden change.
Which is a major disadvantage in my book.
Free software largely precludes significant revenue streams from software sales.
But it doesn't preclude them via other channels. To say that simply because the water got dirty when we washed the baby we shouldn't keep the baby just doesn't work for me. Hint: the software is not the point, ever. That the 'standard corporate model' doesn't work with Open Source is an obvious consequence of understanding the intent of Open Source and should surprise nobody.
Not always -- Red Hat continues to generate significant revenues from box and corporate sales, though it is moving to a subscription (Red Carpet) and services model. Still, in large part, your benefit is going to come from indirect revenues: services, hardware, publications (e.g.: O'Reilly). Eric Raymond's list from CatB still largely stands.
Which simply demonstrates why these failed efforts support the rule. Open Source is a totally different head.
In this case, appeal to developers is *quite* significant, as this distributes your cost structure effectively to unaffiliated partners.
No users, no point in doing the software. I have to disagree. Irrespective of intent or scale, the customer must come first in all situations. Developers don't buy products (or services as a consequence of use) at anywhere near the scale of consumers. Developers also don't do service.
Users, similarly, should be concerned with long term viability of code.
Of this I agree.
software" status. Licensing compatibility emphasizes the inherent "code escrow" powers of free software licensing by providing the possibility that the compelling features of the project might be continued, or at least incorporated into another project, should the initial sponsor fail.
That's fine if you can deal with the 'inertia' of the 'escrow'... it's actually counterproductive to all concerned in the vast majority of cases. One of the hopes (admittedly not realized to a great extent to date) of Open Source was innovation: providing a forum for experimentation that was low cost but high quality.
Which is served by a mix of factors, significant among them, license compatibility.
For users, the compatibility they care about is whether the products will co-exist. Outside of that most users couldn't care less. Distributors want to sell whatever will solve customer issues and increase their ROI, $. If you want to understand a situation, follow the $$$, not the lawyers.
On Sun, 21 Oct 2001, Karsten M. Self wrote:
Summer, June/July, IIRC. I've done a couple of look-ups since. There's been little additional news or information (I'm not saying none, I'm saying little). OpenBSD, a relatively little-known free 'nix, gets rather more press and community coverage.
Little-known? That's unfair. OpenBSD is a fairly well known operating system, among the members of its target audience.
Poor licensing choices are one of several key modes of failure for free software projects. If Plan 9 proceeds forward, I expect to see another two or three significant licensing revisions.
Explain the popularity of Unix, then. -MW-
on Tue, Oct 23, 2001 at 04:56:35PM -0700, Meyer Wolfsheim (wolf@priori.net) wrote:
On Sun, 21 Oct 2001, Karsten M. Self wrote:
Summer, June/July, IIRC. I've done a couple of look-ups since. There's been little additional news or information (I'm not saying none, I'm saying little). OpenBSD, a relatively little-known free 'nix, gets rather more press and community coverage.
Little-known? That's unfair. OpenBSD is a fairly well known operating system, among the members of its target audience.
I'll leave grasping the concept of "relative mindshare" as an exercise to the reader. I use and admin an oBSD box myself.
Poor licensing choices are one of several key modes of failure for free software projects. If Plan 9 proceeds forward, I expect to see another two or three significant licensing revisions.
Explain the popularity of Unix, then.
I think Unix is an exemplar of my points, for its heyday. Again, "relative" is a core concept. I couldn't do much better than Kernighan and Pike in _The UNIX Programming Environment_, Chapter 10, Epilog, written in 1984: The UNIX operating system is well over ten years old, but the number of computers running it is growing faster than ever. For a system designed with no marketing goals or even intentions, it has been singularly successful.... The main reason for its commercial success is probably its portability -- the feature that everything but small parts of the compilers and kernel runs unchanged on any computer.... But the UNIX system was popular long before it was of commercial significance... The 1974 CACM paper by Ritchie and Thompson generated interest in the academic community....Through the mid-1970s UNIX knowledge spread by word of mouth: although the system came unsupported and without guarantee, the people who used it were enthusiastic enough to convince others to try it too.... Why did it become popular in the first place? ...[I]t was designed and built by a small number (two) of exceptionally talented people, whose sole purpose was to create an environment that would be convenient for program development, and who had the freedom to pursue that ideal.... In that early system were packed a number of inventive applications of computer science, including stream processing (pipes), regular expressions, language theory...and more specific instances like the algorithms in diff.... The UNIX system has since become one of the computer market's standard operating systems, and with market dominance has come responsibility and the need for "features" provided by competing systems. As a result, the kernel has grown in size by a factor of 10 in the past decade.... [A]lthough UNIX has begun to show some signs of middle age, it's still viable and still gaining in popularity. And that popularity can be traced to the clear thinking of a few people in 1969....
Although they didn't expect their system to spread to tens of thousands of computers, a generation of programmers is glad that it did. This reads very much like a history of GNU/Linux, a similarity that struck me at some of the recent Linux10 celebrations and preparations. Change a few names and dates, and change user share to millions rather than thousands, and you're about on-base. What K&P don't get into is the licensing terms of the first Unix systems. AT&T was enjoined by its ongoing anti-trust restrictions (originating in 1911) which prohibited it from selling computer systems (1949, 1956 agreements). This meant that Unix was largely freely distributable among computer systems of the day in the standard format for data interchange: research computers on university and corporate campuses, via magtape. It rose to dominance on the thousands of such systems in existence from 1975 to 1985. This didn't change until 1985 and the final anti-trust settlement and breakup of AT&T, at which point bitter battles for control greatly hampered the Unix market, with impacts felt to this day in the *BSD / GNU/Linux split. Unix's success was based on its portability, liberal distribution terms, practical applications, "building blocks" architecture, extensibility, and continued viability with growth in size and popularity. Note that today, the comparative advantage along many of these dimensions lies with GNU/Linux. Again, it's highly portable, licensing is more liberal than proprietary Unices (with an ideological and pragmatic scuffle between the BSDs and Linux on licensing terms and adoption), and the core "small, simple, does one thing" applications philosophy largely persists. Note that GNU/Linux is now growing in market and mindshare, at a cost to both UNIX and alternative server systems, particularly at the small end of the scale. See Christensen's _The Innovator's Dilemma_ for a general illustration of principles. Peace.
This entire view misses the(!) one most important component of Unix's (and Linux's) success: they were first. There was NO credible competition. The same thing can be said for Apache and BIND and many other apps. It isn't that they were the best, they were simply the first - and they get to reap market inertia as a result. However, and it's a doozy, this won't last. As the Open Source market expands and takes over pretty much completely you'll see this dominance begin to decrease. Why? Because of the component nature of the software. On Wed, 24 Oct 2001, Karsten M. Self wrote: [Long standard history of Unix deleted]
on Wed, Oct 24, 2001 at 09:53:35PM -0500, Jim Choate (ravage@EINSTEIN.ssz.com) wrote:
This entire view misses the(!) one most important component of Unix's (and Linux's) success, they were first.
Not hardly. I wasn't keeping notes when K&R were designing their gaming platform, but history seems to recall OS/360, Multics, TICO, ITS, VMS. A bit of quick Googling suggests the PDP-7 had its own native operating system (the PDP-11 certainly did), certainly more than what a couple of guys hanging around a broom closet could hammer out in a few days. Throughout the 1970s and 80s, Ken Olsen was selling VAXs running VMS and complaining bitterly about snake oil (I guess there's a bunch of snakes out there). However, to quote someone's response to Tim May in this list recently, I'm just one of the dilettantes posting here out of ignorance for some free research on the part of the rest of you. Someone who was around at the time is going to have a better answer than me. When Linus started Linux, he was bootstrapping with Minix, and trying to get around its limitations. For PC Unix, there was already Xenix and one or more of the very forgettably named SCO products (not Xenix). The Jolitzes were wresting BSD from Berkeley. FSF had been working on the HURD since 1983 (originally as TRIX), in fits and starts. By the time Larry McVoy wrote "The Sourceware Operating System Proposal" in 1993, it still wasn't clear whether FreeBSD or Linux was the cart to hitch the horse to. http://www.redhat.com/knowledgebase/otherwhitepapers/whitepaper_freeunix.htm... The ultimate success of Linux doesn't have a single factor -- it meets most of the marks set in the excerpt I posted from K&P. I'd argue that licensing played a role, as did the fact it wasn't encumbered by the AT&T/UCB lawsuits, and most people give Linus himself strong credit for his project management skills and personality. Topics covered extensively elsewhere. Peace.
On Wed, 24 Oct 2001, Karsten M. Self wrote:
on Wed, Oct 24, 2001 at 09:53:35PM -0500, Jim Choate (ravage@EINSTEIN.ssz.com) wrote:
This entire view misses the(!) one most important component of Unix's (and Linux's) success, they were first.
Not hardly.
Yes, very particularly in fact.
I wasn't keeping notes when K&R were designing their gaming platform, but history seems to recall OS/360, Multics, TICO, ITS, VMS. A bit of quick Googling suggests the PDP-7 had its own native operating system (the PDP-11 certainly did), certainly more than what a couple of guys hanging around a broom closet could hammer out in a few days.
Being 'first' doesn't imply they were 'alone'. You misrepresent reality to your own end. What happened is there were a cloud of near-miss OS'es. In the case of Unix it was the right one because of its scale and the mechanism it was distributed by (its development process was also out of the ordinary, not being some massive long-running committee) and the particular paradigms that it chose to use. There was a low cost distribution mechanism so curious people could buy in for a low cost. The same can be said for Linux, but in that case there were many factors. FSF was around and pushing Hurd but they were looking for somebody else to do it. Minix and other near-miss kernels had been around for a few years so it's not like Linux popped out of thin air. There was no low cost distribution mechanism that was popular prior to about '94. It's no accident that Linux took off at the same time that the Internet in general took off (about '94). In both cases it was a synergy of issues. The license was a key, but not singular, component (as you would have us believe). Another factor of equal if not greater import was the cost/ease of actual/physical distribution.
on Thu, Oct 25, 2001 at 07:52:11AM -0500, Jim Choate (ravage@einstein.ssz.com) wrote:
On Wed, 24 Oct 2001, Karsten M. Self wrote:
on Wed, Oct 24, 2001 at 09:53:35PM -0500, Jim Choate (ravage@EINSTEIN.ssz.com) wrote:
This entire view misses the(!) one most important component of Unix's (and Linux's) success, they were first.
Not hardly.
Yes, very particularly in fact.
I wasn't keeping notes when K&R were designing their gaming platform, but history seems to recall OS/360, Multics, TICO, ITS, VMS. A bit of quick Googling suggests the PDP-7 had its own native operating system (the PDP-11 certainly did), certainly more than what a couple of guys hanging around a broom closet could hammer out in a few days.
Being 'first' doesn't imply they were 'alone'. You misrepresent reality to your own end.
Define your market or relevant niche, with specificity.
On Thu, 25 Oct 2001, Karsten M. Self wrote:
Being 'first' doesn't imply they were 'alone'. You misrepresent reality to your own end.
Define your market or relevant niche, with specificity.
Computers intended for single-user interactive processing. When looking at cost/performance/features for the OS'es current in the late 60's, none were really effective (I'm excluding Language-in-ROM machines - not that any of them were stellar in performance). What would one day become engineering workstations and personal computers (which are the same thing today). A new class of machines was coming out (my first machine was a PDP 8e running BASIC) and while there were plenty of tools they tended to be vertical in intent or else not general purpose enough for this sort of computing. Look at the first couple of years of Byte or Dr. Dobb's for more specific examples (remember Godbout?) in the personal computer market. Which happens to be one of the primary reasons Unix was developed; there were no realistic choices in the market for this paradigm. So a solution came trotting along. We're facing the same sort of thing today with respect to 'grid computing' and such. All the current OS'es (Linux incl.) are focused on the old style of solutions. We'll also find that our current views of what IP means are just as antiquated.
on Fri, Oct 26, 2001 at 08:35:36PM -0500, Jim Choate (ravage@einstein.ssz.com) wrote:
On Thu, 25 Oct 2001, Karsten M. Self wrote:
Being 'first' doesn't imply they were 'alone'. You misrepresent reality to your own end.
Define your market or relevant niche, with specificity.
Computers intended for single-user interactive processing.
The problem I've got with this response is that Unix and GNU/Linux aren't computers, they're operating systems. Unix was written to run on those computers "that didn't exist", largely the PDP 7 and 11. I was seeing the market as the *operating systems* running on these computers. While I'll concede that Unix and GNU/Linux probably drove hardware, the fact is that both emerged in environments where there were existing OSs running, almost always preinstalled, on the hardware of choice for each system: RSX-11D, TWENEX, VMS. The Jargon File has TWENEX users migrating to Unix in the 1980s. For larger systems, VM/CMS still has its fans. I guess the question would be: what other OSs were popular in research environments at the time? What benefits did Unix offer? What timeframe are we discussing? Again, public availability of Unix seems to have come after 1974.
A new class of machines was coming out (my first machine was a PDP 8e running BASIC) and while there were plenty of tools they tended to be vertical in intent or else not general purpose enough for this sort of computing. Look at the first couple of years of Byte or Dr. Dobb's for more specific examples (remember Godbout?) in the personal computer market.
As I've indicated, I'm not as old as you think I am. Unix and I are close to the same age. My real awareness starts in the early to mid 1980s, some exceptions. Incidentally, if you want to reminisce, there's a DEC timeline here: http://www.montagar.com/dfwcug/VMS_HTML/timeline/1964-3.htm http://www.montagar.com/dfwcug/VMS_HTML/timeline/DECHISTORY.HTM
Which happens to be one of the primary reasons Unix was developed; there were no realistic choices in the market for this paradigm. So a solution came trotting along.
I'm unconvinced. Again, the PDP series, notably the '7 & '11, as well as the HP 3000, stand out in searches as significant mini systems of the day. I have to assume they included operating systems. And again, GNU/Linux emerged in a universe of PC operating systems: DOS, Macintosh, OS/2, Xenix, Minix, BSDi. In both cases, the newcomer (Unix/Linux) emerged as a technically inferior system, but (rapidly or otherwise) outpaced its competition due to architecture, licensing, and social factors. Regarding your comment (two posts back) that Linux was coincident with the Internet: yes, I agree that this was a formative factor. I have no doubt that if Linus hadn't come along, another solution would have emerged, the time was ripe. GNU/Linux happened to be best-of-breed.
We're facing the same sort of thing today with respect to 'grid computing' and such. All the current OS'es (Linux incl.) are focused on the old style of solutions. We'll also find that our current views of what IP means will be found to be as antiquated.
References?
On Fri, 26 Oct 2001, Karsten M. Self wrote:
The problem I've got with this response is that Unix and GNU/Linux aren't computers, they're operating systems. Unix was written to run on those computers "that didn't exist", largely the PDP 7 and 11.
An OS without a computer is worthless. What drives the architecture of OS'es (other than mental masturbation) is applications and applications environments. In the very late 60's there was a growth in the computers::person ratio coupled with a great increase in #_computers as a whole. This led to a problem of scale and scope. Problems that Unix was able to resolve in a usable way (as 30 years of use will attest). Most other OS'es weren't. Not that Unix was the only alternative (eg CP/M). However, the sorts of problems faced in day to day business/activity create a 'natural' schism. That is based around the distinctions between design/engineering and business-home/industry. Unix found a first home in the first. The 'smaller' OS'es found homes in the second. Each expanded into the other's realm until today. Whence we have several sets of originally niche market solutions. These solutions have now saturated the market. However, there are forces that are changing the market radically, moving toward a real 'network is the computer' model. The reality is that the four horsemen of the network (software, hardware, infrastructure, law) are going to be replaced in the next 5 or so years with an almost completely different model. These differences will serve to amplify the current stresses and schisms in our societies.
On Sun, 21 Oct 2001, Jim Choate wrote:
Transitive means that A mounts B, C mounts A and gets B free. Plan 9 does this, managed by a set of authorization layers for fine control, native.
This could be bad. Say B doesn't want to allow access to C for its file systems. Then what? Any mechanisms to prevent A from resharing it? It's also bad depending on how it's implemented. Say A has a low bandwidth connection to B such as ISDN, but it has a high bandwidth connection to C. If C uses A to get to B, that slows you down. Does it treat the file systems differently or is it just resharing a mounted file system?
This means that when Hangar 18 goes online you can mount /hangar18 into your filespace (via Plan 9 or Linux NFS services) and you will get all the resources that Hangar 18 mounts through that point. ftp is a good example.
Yeah, so now you're eating double the bandwidth of hangar18. Say hangar18 is connected over a T1 and nothing else. You mount its file systems over nfs or whatever Plan 9 uses and do cd /ftp2.sourceforge.net, then you copy a file from there. Well now, you've just effectively doubled the traffic to hangar18. If the owner is paying a flat rate, it's not so bad, you're just wasting bandwidth. But if it's at a colocation it will be enough to go over the percentile rating and have to pay for extra usage. Even if the two boxes are on the same local network (broadcast domain), you'd still saturate your hubs/switches by doubling this traffic, and even slow down access (latency) as you're using Plan 9 to translate from ftp to its native scheme to nfs. Further, you're eating lots of extra cpu cycles on the Plan 9 box(es) while they do this. What if hangar18 has a few ftp sites mounted and you run the equivalent of the find command (which recursively searches for files)? Does it go over the ftp connections and walk every single connection on the Internet? That's kind of insane. It'll take forever and eat up bandwidth at both ends. And as you say below, it would even cache some of it... ugh... hope there are limits to this.
In Plan 9 you 'mount' the ftp server to your file system. If you ever go out and walk that part of the file space tree and request a file, only then does it go and get it. You can control its lifetime (to manage disk space, for example) via local cache controls. A 'lazy update' mechanism, very efficient with network and local resources.
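For concreteness, the namespace operations being described look roughly like this from a Plan 9 shell. This is a hedged sketch from memory: the host names are made up, and the exact ftpfs/import/bind argument forms should be checked against the man pages before trusting it.

```shell
# Plan 9 namespace sketch (rc shell); host names are illustrative.

# Attach a remote ftp site into the local namespace. Nothing is
# transferred until a file under /n/ftp is actually walked and read:
ftpfs ftp.example.org

# Import another Plan 9 box's /n. Anything hangar18 has itself
# mounted under its /n (other ftp sites, other boxes) comes along
# transitively -- the behavior discussed above:
import hangar18 /n /n/hangar18

# Graft it wherever you want it in the local tree:
bind /n/hangar18 /hangar18
```

The point of the sketch is that the "transitive mount" isn't a special feature; it falls out of everything (including remote services) being a file tree that can itself be imported.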
Yeah, it's "cool" but wasteful.
The network layer. The traffic between any two Plan 9 boxes is encrypted with keys dependent upon the individual boxes (or larger classes if you desire) if the system is so configured. You can also use this to encrypt branches of your filesystem. Plan 9 provides SSH.
Just DES? And no disk encryption???? What about swap? Any ip filtering / firewalling mechanisms? Any routing mechanisms? Any NATting mechanisms? The clustering / distributing computing aspects of it are cool. But... I'm still not convinced...
- Doesn't use passwords, Instead it uses tickets (ie certificates).
...which are granted via...?
However the resource owner chooses. I'm using 'small worlds network' models for my 'web of trust'...
So it's like kerberos in some ways?
It should be, X Windows sucks.
X Windows by itself, without a window/desktop manager? Yes, that would suck balls. X Windows with Ximian GNOME or KDE, nope - it rocks, though not on old hardware.
What's Inferno?
Another OS, intended for real-time control of "Internet Appliances". You run it along side Plan 9.
Whoop. :) You can control most internet appliances through a web gui or telnet/ssh command line anyway.
On Thu, 25 Oct 2001, Sunder wrote:
On Sun, 21 Oct 2001, Jim Choate wrote:
Transitive means that A mounts B, C mounts A and gets B free. Plan 9 does this, managed by a set of authorization layers for fine control, native.
This could be bad. Say B doesn't want to allow access to C for its file systems. Then what? Any mechanisms to prevent A from resharing it? It's also bad depending on how it's implemented.
http://plan9.bell-labs.com
Sunder wrote:
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall, secure on installation without you needing to tweak stuff. Hell you can even tell it to encrypt swap pages.
I'd really like to use OBSD for my always-on server, but there are a few shortcomings. - New Java stuff: I need to have Java servlets, JSP, and all that rot available from my web site, and last time I tried, a few months ago, the new Java stuff just wasn't there yet. Eighty-five step installation procedure which either didn't work quite right or was too much for my tiny brain. (The procedure was actually for FBSD, but it didn't work there, either, so the chances of getting it working on OBSD were negligible.) - Encrypted file systems: I want my main server to have TCFS or equivalent, so if the machine is seized the feebs would see a tiny boot partition and a large, strongly-encrypted main partition. I tried a few encrypted file systems a while back, and the couple I found for OBSD weren't there yet, either; they typically dumped core when I tried to use them. (I see that Dr Evil posted a message on this subject last May on a list archived at Geocrawler, so I guess the shortcoming hasn't been fixed since I last looked at it in depth.) (Yes, in theory I can work on either of these myself, but in practice I'm already involved in two free projects and just can't spread any thinner.) Out-of-the-box OBSD seems less crackable than out-of-the-box Linux and I'd like to use it, but it just doesn't have the two features I really want. For now, I'll take my chances on securing my Linux server as best I can. -- Steve Furlong Computer Condottiere Have GNU, Will Travel 617-670-3793 "Good people do not need laws to tell them to act responsibly while bad people will find a way around the laws." -- Plato
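For what it's worth, the crypto OBSD did ship with at the time can be enabled in a few lines. A sketch from memory for OpenBSD of that era (device names, file paths, and sizes are illustrative; check the vnconfig(8) man page before relying on it):

```shell
# OpenBSD (circa 2.9/3.0) sketch; paths and sizes are illustrative.

# Encrypted swap pages, the feature Sunder mentions:
sysctl -w vm.swapencrypt.enable=1    # or set it in /etc/sysctl.conf

# An encrypting vnode "disk" as a stand-in for a real encrypted
# file system such as TCFS:
dd if=/dev/zero of=/home/cryptfile bs=1m count=256
vnconfig -ck -v svnd0 /home/cryptfile   # prompts for the passphrase
newfs /dev/rsvnd0c
mount /dev/svnd0c /mnt/secret
```

This doesn't give the seized-machine scenario above (the boot partition and kernel stay in the clear), but it covers swap and a sensitive-data partition.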
On Sun, 21 Oct 2001, Steve Furlong wrote:
- Encrypted file systems: I want my main server to have TCFS or equivalent, so if the machine is seized the feebs would see a tiny boot partition and a large, strongly-encrypted main partition. I tried a few encrypted file systems a while back, and the couple I found for OBSD weren't there yet, either; they typically dumped core when I tried to use them. (I see that Dr Evil posted a message on this subject last May on a list archived at Geocrawler, so I guess the shortcoming hasn't been fixed since I last looked at it in depth.)
You can do this in Plan 9, and also spread the file's pieces hither and yon...

--
"The people never give up their liberties but under some delusion."
    -- Edmund Burke (1784)
James Choate, The Armadillo Group, Austin, Tx
ravage@ssz.com    www.ssz.com    512-451-7087
On Oct 21, 2001 at 11:43:12AM -0400, Steve Furlong wrote:
Sunder wrote:
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall, secure on installation without you needing to tweak stuff. Hell you can even tell it to encrypt swap pages.
I'd really like to use OBSD for my always-on server, but there are a few shortcomings.
- New Java stuff: I need to have Java servlets, JSP, and all that rot available from my web site, and last time I tried, a few months ago, the new Java stuff just wasn't there yet. Eighty-five step installation procedure which either didn't work quite right or was too much for my tiny brain. (The procedure was actually for FBSD, but it didn't work there, either, so the chances of getting it working on OBSD were negligible.)
In 3.0 (out around Dec. 1st) you have jakarta-tomcat and jserv ports that might be what you need. I don't use it myself though, so I don't know how well it works, or how easy it is to configure.
- Encrypted file systems: I want my main server to have TCFS or
I think Linux is better at this. What about www.rubberhose.org? Works best on Linux, it seems. I'll play around with it sometime during the next two weeks when I get the new SuSE Linux. Plan 9 looks really, really cool too, though. ;-)

Have a nice day
Morten
--
Morten Liebach <morten@hotpost.dk>
PGP-key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD796A4EB
https://pc89225.stofanet.dk/ || http://pc89225.stofanet.dk/
Steve Furlong wrote:
Sunder wrote:
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall, secure on installation without you needing to tweak stuff. Hell you can even tell it to encrypt swap pages.
I'd really like to use OBSD for my always-on server, but there are a few shortcomings.
Does OBSD have a kernel optimized for use as a router like linux does? That's really important if you want a full-time router. http://master-www.linuxrouter.org:8080/ Likewise, the linux bios would be very useful here -- does obsd have a bios port? www.linuxbios.org (snip)
- Encrypted file systems: I want my main server to have TCFS or equivalent, so if the machine is seized the feebs would see a tiny boot partition and a large, strongly-encrypted main partition. I tried a few encrypted file systems a while back, and the couple I found for OBSD weren't there yet, either; they typically dumped core when I tried to use them. (I see that Dr Evil posted a message on this subject last May on a list archived at Geocrawler, so I guess the shortcoming hasn't been fixed since I last looked at it in depth.)
You need to look at the linux cryptoapi, which is fully functional at this point http://www.kernel.org/pub/linux/kernel/people/hvr/ and which can also be used to encrypt both swap *and* boot partition if you want (using initrd). I agree, Plan 9 looks very interesting, but then, so does MOSIX http://www.mosix.org/ which is also a distributed (kernel-implemented) OS based on linux.

--
Harmon Seaver, MLIS
CyberShamanix
Work 920-203-9633    Home 920-233-5820
hseaver@cybershamanix.com
http://www.cybershamanix.com/resume.html
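For anyone wanting to try the cryptoapi loopback approach Harmon describes, the setup looked roughly like this. A sketch, assuming a 2.4 kernel with the cryptoapi/crypto-loop patches applied; the cipher name, loop device, and file paths are illustrative:

```shell
# Create a 64 MB container file, attach it to a loop device with AES via
# the cryptoapi-patched losetup, then make and mount a filesystem inside.
dd if=/dev/urandom of=/home/cryptfile bs=1M count=64
losetup -e aes /dev/loop0 /home/cryptfile   # prompts for a passphrase
mke2fs /dev/loop0
mkdir -p /mnt/crypt
mount /dev/loop0 /mnt/crypt
```

All of the above requires root, and the exact losetup flags varied between cryptoapi patch versions.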
On Oct 21, 2001 at 04:13:26PM -0500, Harmon Seaver wrote:
I agree, Plan 9 looks very interesting, but then, so does MOSIX http://www.mosix.org/ which is also a distributed (kernel implemented) OS based on linux.
In our local Linux User Group we tried setting it up 2 years ago, and it was quite easy, and really funny to see how the jobs were shuffled around. It worked very well, but (at least back then) there was no crypto in it, only a distributed filesystem, but we didn't try that.

Have a nice day
Morten
--
Morten Liebach <morten@hotpost.dk>
PGP-key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD796A4EB
https://pc89225.stofanet.dk/ || http://pc89225.stofanet.dk/
on Sun, Oct 21, 2001 at 04:13:26PM -0500, Harmon Seaver (hseaver@cybershamanix.com) wrote:
Steve Furlong wrote:
Sunder wrote:
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall, secure on installation without you needing to tweak stuff. Hell you can even tell it to encrypt swap pages.
I'd really like to use OBSD for my always-on server, but there are a few shortcomings.
Does OBSD have a kernel optimized for use as a router like linux does? That's really important if you want a full-time router. http://master-www.linuxrouter.org:8080/
I suspect the oBSD response would be that you don't have to. Anecdotal evidence (and Linus's own comments) suggests that Linux networking is weaker than it could be; the BSD stack has traditionally held a security advantage. That said, oBSD's overall performance is generally considered to be lower than that of GNU/Linux. What with hardware prices, adding redundant hardware might be considered preferable to trying to handle the load on one high-performance box. But that's also not my area of expertise. Back to the cornfield....

Peace.

--
Karsten M. Self <kmself@ix.netcom.com>    http://kmself.home.netcom.com/
What part of "Gestalt" don't you understand?
Home of the brave: http://gestalt-system.sourceforge.net/    Land of the free
Free Dmitry! Boycott Adobe! Repeal the DMCA! http://www.freesklyarov.org
Geek for Hire: http://kmself.home.netcom.com/resume.html
Why Plan-9? I'd say go with OpenBSD. :) Built in crypto, built in firewall, secure on installation without you needing to tweak stuff. Hell you can even tell it to encrypt swap pages.
"Built-in crypto" is a big overstatement for OpenBSD. Unfortunately, Win 2000 has more built-in crypto than OpenBSD does. Hint: Try to create an encrypted FS on OpenBSD. Now try on Windows 2000.
On 22 Oct 2001, Dr. Evil wrote:
"Built-in crypto" is a big overstatement for OpenBSD. Unfortunately, Win 2000 has more built-in crypto than OpenBSD does. Hint: Try to create an encrypted FS on OpenBSD. Now try on Windows 2000.
You trust Win2k's encryption? Are you CRAZY? You're trusting a closed source product to do what it advertises to do, every time? And it does encrypt the swap, does it? Excuse me -- professes to. Thanks for wetting my keyboard with beer via nasal passage.
"Built-in crypto" is a big overstatement for OpenBSD. Unfortunately, Win 2000 has more built-in crypto than OpenBSD does. Hint: Try to create an encrypted FS on OpenBSD. Now try on Windows 2000.
You trust Win2k's encryption? Are you CRAZY?
No and no.
You're trusting a closed source product to do what it advertises to do, every time? And it does encrypt the swap, does it? Excuse me -- professes to.
I didn't say I trust it. I just said it's there, and it isn't there in OpenBSD. Why doesn't OpenBSD support such a basic thing as an encrypted FS? There's encryption built in everywhere else except the one place which makes all the difference if the machine itself is stolen. I think there are Two Great Encryption Taboos: encrypted voice and encrypted FS. I would like to see OpenBSD support an encrypted FS in its default kernel, thus making it the first OS with such a feature (I don't count hacks such as loopback FS).
Thanks for wetting my keyboard with beer via nasal passage.
Beer is precious. Don't waste it on your keyboard.
On 22 Oct 2001, Dr. Evil wrote:
"Built-in crypto" is a big overstatement for OpenBSD. Unfortunately, Win 2000 has more built-in crypto than OpenBSD does. Hint: Try to create an encrypted FS on OpenBSD. [...]
dd if=/dev/zero of=diskimage bs=1024k count=1024
vnconfig -ck svnd0 diskimage
[enter a passphrase]
newfs /dev/svnd0c
mount /dev/svnd0c /mnt

--
mailto:zem@zip.com.au
F289 2BDB 1DA0 F4C4 DC87 EC36 B2E3 4E75 C853 FD93
http://zem.squidly.org/
"I'm invisible, I'm invisible, I'm invisible.."
"Built-in crypto" is a big overstatement for OpenBSD. Unfortunately, Win 2000 has more built-in crypto than OpenBSD does. Hint: Try to create an encrypted FS on OpenBSD. [...]
dd if=/dev/zero of=diskimage bs=1024k count=1024
vnconfig -ck svnd0 diskimage
[enter a passphrase]
newfs /dev/svnd0c
mount /dev/svnd0c /mnt
I am aware of that, but it's a hack, and it doesn't work well. For example, it has no way of detecting when you enter an incorrect password. Anyway, for an OS which prides itself on built-in crypto, why do we have to mess around with loopback? There are many FS features, such as being able to change read, write and execute perms for owner, group and others, which don't require a loopback FS. How is this any different from that? If it were really integrated crypto, I would be able to do mount -k /dev/sd0c and it would do the right thing. Even better, I would be prompted for a password during boot so it could boot from an encrypted fs. This is a glaring hole in OpenBSD's crypt-everywhere mantra.
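One partial workaround for the no-wrong-passphrase-detection problem, sketched here: keep a known marker file on the encrypted volume and check for it after mounting. With the wrong key the decrypted filesystem is garbage, so either the mount fails or the marker is missing. The function and marker name below are made up for illustration; this is not part of vnconfig or any OpenBSD tooling:

```shell
# After 'mount /dev/svnd0c /mnt', verify the passphrase was right by
# looking for a marker file created when the fs was first set up.
check_mount() {
    mnt="$1"
    if [ -f "$mnt/.passphrase-ok" ]; then
        echo "passphrase ok"
        return 0
    fi
    echo "wrong passphrase or fresh fs"
    return 1
}
```

Touch the marker once after creating the filesystem; thereafter a wrong passphrase shows up as a failed mount or a failed marker check.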
On 23 Oct 2001, Dr. Evil wrote:
vnconfig -ck svnd0 diskimage [...]
I am aware of that, but it's a hack, and it doesn't work well. For example, it has no way of detecting when you enter an incorrect password.
Sure. Just noting that the capability is there, since it's easy to overlook. It works reliably in my experience.
Anyway, for an OS which prides itself on built-in crypto, why do we have to mess around with loopback? There are many FS features, such as being able to change read, write end execute perms for owner, group and root, which don't require a loopback FS. How is this any different from that? If it were really integrated crypto, I would be able to do
mount -k /dev/sd0c
This I don't understand. Can you describe a scenario under which an encrypted fs is valuable enough to justify typing one command, but not two? OpenBSD's target audience is not exactly clueless newbies. Or is speed so important that you'd sacrifice security? Any encrypted fs will take a performance hit; I think you'll find loopback overhead is insignificant next to the crypto.
and it would do the right thing. Even better, I would be prompted for a password during boot so it could boot from an encrypted fs.
Is booting from an encrypted fs ever useful? Use read-only media if tampering is a concern. Configure and mount other encrypted filesystems from /etc/rc. If you can install and maintain OpenBSD, you can manage that.
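Mounting from the startup scripts, as suggested, might look like the following fragment. The paths are hypothetical; on an OpenBSD box of this era the commands would go in something like /etc/rc.local:

```shell
# Attach the encrypted image at boot (prompts for the passphrase on the
# console), check it, and mount it. A wrong passphrase typically
# surfaces as fsck failing on the garbage it decrypts to.
vnconfig -ck svnd0 /crypt/diskimage
fsck -p /dev/svnd0c || echo "fsck failed -- possibly a wrong passphrase"
mount /dev/svnd0c /crypt
```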
This is a glaring hole in OpenBSD's crypt-everywhere mantra.
It's worth noting their primary goal is network security, not crypto. Rubber hoses don't factor significantly in their threat model.

--
mailto:zem@zip.com.au
F289 2BDB 1DA0 F4C4 DC87 EC36 B2E3 4E75 C853 FD93
http://zem.squidly.org/
"I'm invisible, I'm invisible, I'm invisible.."
On Oct 23, 2001 at 01:38:02PM +1000, zem wrote:
This is a glaring hole in OpenBSD's crypt-everywhere mantra.
It's worth noting their primary goal is network security, not crypto. Rubber hoses don't factor significantly in their threat model.
I also think it's important to remember that OpenBSD is quite conservative in many ways; it's a part of the security philosophy. But I agree, a stable and functional encrypted FS would fit right in. (Maybe I should learn to code :-)

Have a nice day
Morten
--
Morten Liebach <morten@hotpost.dk>
PGP-key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD796A4EB
https://pc89225.stofanet.dk/ || http://pc89225.stofanet.dk/
On Sun, 21 Oct 2001, Harmon Seaver wrote:
All the more reason to use Linux routers and firewalls. Especially if Cisco pulls a Larry Ellison.
-- Harmon Seaver, MLIS
That's fine and dandy for ds1's, and maybe even enough for the majority of fractional ds3 customers, but how are you going to apply this to people with oc12 handoffs? Even oc3 handoffs are going to be *really* difficult boxes to build using COTS/PC technology.

--
Yours,
J.A. Terranson
sysadmin@mfn.org

If Governments really want us to behave like civilized human beings, they should give serious consideration towards setting a better example: Ruling by force rather than consensus; the unrestrained application of unjust laws (which the victim-populations were never allowed input on in the first place); the State policy of justice only for the rich and elected; the intentional abuse and occasional destruction of entire populations merely to distract an already apathetic and numb electorate... This type of demagoguery must surely wipe out the fascist United States as surely as it wiped out the fascist Union of Soviet Socialist Republics.

The views expressed here are mine, and NOT those of my employers, associates, or others. Besides, if it *were* the opinion of all of those people, I doubt there would be a problem to bitch about in the first place...
measl@mfn.org wrote:
On Sun, 21 Oct 2001, Harmon Seaver wrote:
All the more reason to use Linux routers and firewalls. Especially if Cisco pulls a Larry Ellison.
-- Harmon Seaver, MLIS
That's fine and dandy for ds1's, and maybe even enough for the majority of fractional ds3 customers, but how are you going to apply this to people with oc12 handoffs? Even oc3 handoffs are going to be *really* difficult boxes to build using COTS/PC technology.
There are a number of router manufacturers that do a lot more than use PC hardware. This one beats the Cisco 7500: http://www.imagestream-is.com/News_1-26-01.html I'm sure the hardware to deal with oc12s will soon be forthcoming, if it isn't already available. Besides which, if you and I run a vpn between our routers, do we really care if it goes thru a feeb checkpoint? Remailer software could be modified to tunnel between themselves, not just encrypt, etc. Of course, the whole concept of what they're talking about is impossible to implement. Easy to order, but I can't see how it would ever work in reality, not well enough to keep the net actually functioning.

--
Harmon Seaver, MLIS
CyberShamanix
Work 920-203-9633    Home 920-233-5820
hseaver@cybershamanix.com
http://www.cybershamanix.com/resume.html
participants (14)
-
David Honig
-
Dr. Evil
-
Eugene Leitl
-
Harmon Seaver
-
Jim Choate
-
Jim Choate
-
Karsten M. Self
-
measl@mfn.org
-
Meyer Wolfsheim
-
mikecabot@fastcircle.com
-
Morten Liebach
-
Steve Furlong
-
Sunder
-
zem