Terabit Trader – Guaranteed $20K Per Day Software

Terabit Trader – Who does not want to make $20K per day?…For Life

I received an email two or three days ago informing me about a new groundbreaking system that could change my life forever. Now, 99.99% of these offers are fraudulent, and I had almost forgotten about it when a new email arrived today. I decided to give it a go and let you know what is going on.

Terabit Trader – Optical Data Transmission Trading Software

The story unfolds…

In February 2020 they discovered a new top-secret method of transferring data, many times faster than ordinary broadband connections.

…the folks jumping on the bandwagon right now are making life-changing cash with it, overnight… from the CBX Business News.

At last, a program that works?… Millions in cash overnight; I want a piece of this pie, please.

A revolution in binary options trading. I see: just because we can transfer data faster, that means we can beat the machines on the binary markets and make tons of money.

The Real Show Begins

A man shows up announcing that during the last 3 months he transformed 27 random people into millionaires. And guess what: now it is our turn, your turn, my turn.

Do you want to start making over $850 per hour?…right now?

Or you can count on over $20K in your bank account in less than 24 hours, all done for you by his team.

The process is 100% risk free.

The most closely guarded secret of Binary trading

The team is continuously transferring data via two forms of Optical Data Transfer (ODT 1 and ODT 2). Impressive and simple. The signals travel at almost the speed of light… I told you, it is a revolution.

The software is highly automated and places the trade at the exact required time.

On top of that, the Terabit protocol makes it mathematically impossible to lose even a single trade.

His name is Richard Heffner, CEO and Binary Options Mega Crusher.

And you may ask… why is he doing it for free?

Well, the offer is not exactly free. You see, in order to place a trade you need to fund your account at the broker's website first. This is when Richard makes a commission off you.

Final Words

This one reminds me of the Quantum Code, another piece of similar data-transfer-speed BS.

It is a new era of binary options trading scams: an era of new technology miracles, incredible data-transfer speeds, no losses when trading, and video productions that deceive newbies while promising to make YOU a millionaire.

Well, you can become a millionaire, but you have to work for it. Hard work; sorry, there is no other way.

If you want to build a real online business with long-term potential, just like I did, then continue here.

Title: Evolving the Land Information System into a Cloud Computing Service

Abstract

The Land Information System (LIS) was developed to use advanced, flexible land surface modeling and data assimilation frameworks to integrate extremely large satellite- and ground-based observations with advanced land surface models to produce continuous high-resolution fields of land surface states and fluxes. The resulting fields are extremely useful for drought and flood assessment, agricultural planning, disaster management, weather and climate forecasting, water resources assessment, and the like. We envisioned transforming the LIS modeling system into a scientific cloud-computing-aware web and data service that clients could easily set up and configure for use in addressing large water management issues. The focus of this Phase 1 project was to determine the scientific, technical, and commercial merit and feasibility of the proposed LIS-cloud innovations that address what are currently barriers to broad LIS applicability. We (a) quantified the barriers to broad LIS utility and commercialization (high performance computing, big data, user interface, and licensing issues); (b) designed the proposed LIS-cloud web service, model-data interface, database services, and user interfaces; (c) constructed a prototype LIS user interface including abstractions for simulation control, visualization, and data interaction; (d) used the prototype to conduct a market analysis and survey to determine potential market size and competition; (e) identified LIS software licensing and copyright limitations and developed solutions; and (f) developed a business plan for development and marketing of the LIS-cloud innovation. While some significant feasibility issues were found in the LIS licensing, overall a high degree of LIS-cloud technical feasibility was found.

Software is a Long Con

I had a conversation with a bridge engineer one evening not long ago. I said, “Bridges, they are nice, and vital, but they fall down a lot.”

He looked at me with a well-worn frustration and replied, “Falling down is what bridges do. It’s the fate of all bridges to fall down, if you don’t understand that, you don’t understand bridges.”

“Ok, I do understand that,” I replied. “But they fall down a lot. Maybe if we stepped back and looked at how we’re building bridges –”

“You can’t build a bridge that doesn’t fall down. That’s just not how bridges work.”

I took a deep breath. “What if you could build a bridge that didn’t fall down as often?”

“Not practical — it’s too hard, and besides, people want bridges.” By now, he was starting to look bored with the conversation.

“I bet if you slowed down how you build bridges, you could make ones that lasted decades, even in some cases, centuries. You might have to be thoughtful, set more realistic expectations, do a lot more of the design of a bridge before you start building it, but..”

He interrupted me again. “Look, you’re not a bridge engineer, so you don’t really understand how bridges work, but people want bridges now. So no one is going to build a bridge like that, even if it were possible, and I’m not saying it is.”

“But people get hurt, sometimes die, on these bridges.”

“Bridges fall down. Sometimes people are on them when they do. That’s not my fault as a bridge engineer, that’s literally how gravity works,” he said.

“I know there will always be accidents and problems with bridges, but I really do think that you could build them with careful planning, and maybe shared standards and even regulations in such a way that bridge collapses could be rare. Some of the problems with bridges are faults we’ve known about for decades, but they still get built into bridges all the time.”

He took a deep breath, and pinned me with a stare. “Even if we could, and it’s still entirely possible that no one can build these mythical bridges you’re talking about, that would slow down the building of bridges. People need bridges to get places. No one could afford to build bridges that slowly, and people would complain.” He stretched out the –plaaaain in complain, in a way that made clear this was the end of the argument and he’d won.

“They might not complain if they didn’t fall off bridges so often,” I mumbled.

He heard me. “Unlike you, people know that bridges fall down.”

Just then, a friend of mine, also a writer, also interested in bridges, stopped by.

“Hey guys!” he said. “So it looks like there’s a crew of Russian bridge destroyers with hammers and lighters who are running around in the middle of the night setting fires to bridges and knocking off braces with hammers. They started in Ukraine but they’re spreading around the world now, and we don’t know if our bridges are safe. They’ve studied bridges carefully and they seem to be good at finding where they’re most flammable and which braces to knock off with their hammer.”

We both regarded my friend a long moment, letting it sink in. I turned back to the bridge engineer and said, “Maybe we need to make them out of non-flammable material and rivet them instead of using exposed braces and clamps.”

But he was already red in the face, eyes wide with anger and fear. “GET THE RUSSIANS!” he screamed.

OK, obviously it’s not bridges I’m talking about, it’s software. And that other writer is Wired’s Andy Greenberg, who wrote a piece not that long ago on Russian hacking.

Greenberg’s detailed and riveting story focuses largely on the politics of hacking, and the conflict between an increasingly imperialist Russia, and Ukraine, with an eye towards what it means for America. For people who respond to such attacks, like FireEye and Crowdstrike, these kinds of events are bread and butter. They have every reason to emphasize the danger Russia (or a few years ago, China) pose to the USA. It’s intense, cinematic stuff.

It’s also one of a long sequence of stories in this vein. These stories, some of which I’ve written over the years, show that our computers and our networks are the battlegrounds for the next great set of political maneuvers between nation-states. We the people, Americans, Russians, whomever, are just helpless victims for the coming hacker wars. We have Cyber Commands and Cyber attack and Cyber defense units, all mysterious, all made romantic and arcane by their fictional counterparts in popular media.

But there’s another way to look at it. Computer systems are poorly built, badly maintained, and often locked in a maze of vendor contracts and outdated spaghetti code that amounts to a death spiral. This is true of nothing else we buy.

Our food cannot routinely poison us. Our electronics cannot blow up and burn down our houses. If they did, we could sue the pants off whoever sold us the flawed product. But not in the case of our software.

The Software Is Provided “As Is”, Without Warranty of Any Kind

This line is one of the most common in software licenses. In developed nations, it is a uniquely low standard. I cannot think of anything infrastructural that is held to such a low standard. Your restaurants are inspected. Your consumer purchases are enveloped in regulations and liability law. Your doctors and lawyers must be accredited. Your car cannot stop working while it is going down the freeway and kill you without consequences, except maybe if it’s caused by a software bug.

It is to the benefit of software companies and programmers to claim that software as we know it is the state of nature. They can do stupid things, things we know will result in software vulnerabilities, and they suffer no consequences because people don’t know that software could be well-written. Often this ignorance includes developers themselves. We’ve also been conditioned to believe that software rots as fast as fruit. That if we waited for something, and paid more, it would still stop working in six months and we’d have to buy something new. The cruel irony of this is that despite being pushed to run out and buy the latest piece of software and the latest hardware to run it, our infrastructure is often running on horribly configured systems with crap code that can’t or won’t ever be updated or made secure.

People don’t understand their computers. And this lets people who do understand computers mislead the public about how they work, often without even realizing they are doing it.

Almost every initial attack comes through a phishing email. Not just initial attacks on infrastructure — initial attacks on everything — begin with someone clicking on an attachment or a link they shouldn’t. This means most attacks rely on a shocking level of digital illiteracy and bad IT policy, allowing malware to get to end-user computers, and failing to train people to recognize when they are executing a program.

From there, attackers move laterally through systems that aren’t maintained, or written in code so poor it should be a crime, or more often, both. The code itself isn’t covered by criminal law, or consumer law, but contract law. The EULAs, or End User Licensing Agreements (aka the contracts you agree to in order to use software), which are clicked through by infrastructure employees, are as bad as, or worse than, the ones we robotically click through every day.

There are two reasons why I gave up reporting on hacking attacks and data breaches. One is that Obama’s Department of Justice had moved their policies towards making that kind of coverage illegal, as I’ve written about here. But the other, more compelling reason, was that they had gotten very, very boring. It’s always the same story; no one is using sophistication, so why would you bother? It’s dumb to burn a zero day when you can send a phishing mail. It’s dumb to look for an advanced zero day when you can just look for memory addressing problems in C, improperly sanitized database inputs, and the other programmatic problems we solved 20 years ago or more.
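To make that concrete, here is a minimal, hypothetical sketch in C of one of those decades-old bug classes: a database query assembled by pasting untrusted input straight into the SQL string, next to the parameterized version that has been the well-known fix for a very long time. It uses SQLite purely for illustration; the table, column, and function names are invented for this example.

    #include <sqlite3.h>
    #include <stdio.h>

    /* Unsafe: the attacker-controlled 'owner' string becomes part of the SQL
     * text itself, so input like  x' OR '1'='1  rewrites the query, and a
     * long enough value overflows the fixed-size buffer: two of the
     * long-solved bug classes in four lines. */
    void lookup_unsafe(sqlite3 *db, const char *owner) {
        char sql[128];
        sprintf(sql, "SELECT balance FROM accounts WHERE owner = '%s';", owner);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Safer: the input is bound as data and is never parsed as SQL. */
    void lookup_safe(sqlite3 *db, const char *owner) {
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db,
                "SELECT balance FROM accounts WHERE owner = ?;",
                -1, &stmt, NULL) != SQLITE_OK)
            return;
        sqlite3_bind_text(stmt, 1, owner, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("balance: %d\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);
    }

The safer version is nothing exotic; it is exactly the kind of fix the paragraph above is talking about, documented for decades and still skipped every day.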

Programmers make the same mistakes over and over again for decades, because software companies suffer no consequences when they do. Like pollution and habitat destruction, security is an externality. And really, it’s not just security, it’s whether the damn things work at all. Most bugs don’t drain our bank accounts, or ransom our electrical grids. They just make our lives suck a little bit more, and our infrastructure fail a little more often, even without any hackers in sight.

When that happens with a dam, or a streetlight, or a new oven, we demand that the people who provided those things fix the flaws. If one of those things blows up and hurts someone, the makers of those things are liable for the harm they have caused. Not so if any of these things happen because of software. You click through our EULA, and we are held harmless no matter how much harm we cause.

When I became a reporter, I decided I never wanted my career to become telling the same story over and over again. And this is, once again, always the same story. It’s a story of software behaving badly, some people exploiting that software to harm other people, and most people not knowing they could have it better. I’m glad people like Andy Greenberg and others at my old Wired home, the good folks at Motherboard and Ars Technica, and others, are telling these stories. It’s important that we know how often the bridges burned down.

But make no mistake, as long as we blame the people burning the bridges and not the people building them, they will keep burning down.

And shit software will still remain more profitable than software that would make our lives easier, better, faster, and safer. And yeah, we would probably have to wait a few more months to get it. It might even need a better business model than collecting and selling your personal information to advertisers and whomever else comes calling.

I could keep writing about this; there’s a career’s worth of pieces to write about how bad software is, and how insecure it makes us, and I have written many of those pieces. But like writing about hackers compromising terrible systems, I don’t want to write the same thing telling you that software is the problem, not the Chinese or the Russians or the boogeyman du jour.

You, the person reading this, whether you work in the media or tech or unloading container ships or selling falafels, need to learn how computers work, and start demanding they work better for you. Not everything, not how to write code, but the basics of digital and internet literacy.

Stop asking what the Russians could do to our voting machines, and start asking why our voting machines are so terrible, and often no one can legally review their code.

Stop asking who is behind viruses and ransomware, and ask why corporations and large organizations don’t patch their software.

Don’t ask who took the site down, ask why the site was ever up with a laundry list of known vulnerabilities.

Start asking lawmakers why you have to give up otherwise inalienable consumer rights the second you touch a Turing machine.

Don’t ask who stole troves of personal data or what they can do with it, ask why it was kept in the first place. This all goes double for the journalists who write about these things — you’re not helping people with your digital credulity, you’re just helping intel services and consultants and global defense budgets and Hollywood producers make the world worse.

And for the love of the gods, stop it with emailing attachments and links. Just stop. Do not send them, do not click on them. Use Whatsapp, use Dropbox, use a cloud account or hand someone a USB if you must, but stop using email to execute programs on your computer.

Thanks to my Patrons on Patreon, who make this and my general living possible. You can support more of this work at Patreon.

Well, I think we should get the Russians and the hackers.

But otherwise, wow. Well said, Quinn. We have been shipping our software engineering offshore and shipping people who have no stake in America here on temporary visas to do work Americans could do at $3 an hour more. It’s like having your bridges built by engineering firms with no legal liability and no intentions of ever using the bridge.

May the road rise up and meet us, metaphorically speaking. Or at least the software.

“Stop using email to execute programs on your computer”? Might as well stop communicating electronically, when every picture, every document, every video, as well as every link, is an opportunity to execute malware, whether it is attached or on Dropbox or wherever. In the interim I act as if my computer and communications are always compromised. You left out “big data” tracking every click and mouse movement and selling them. Per netstat -f, I have 28 allegedly Akamai TCP ports open right now.

The “two factor” identification nightmare of phone companies supplying passwords with minimal spoofing is a bridge designed to fail. This is how CWA got Brennan’s email account and phone hacked. And shades of Murdoch’s UK and US phone hacking, what was probably done to Weiner as well.

Notice Bannon’s involvement with gamer virtual money sales, when does a reseller just become a simple fence? When does gamer hacking roll into vote hacking. This is not rocket science connecting the dots when you have a gamergater and a Russian stooge standing together on Assange’s embassy steps.

I see what you did here. I suggest you just wear a “Putin bought me” t-shirt.

Sorry about the coarseness. That is my problem. I have a foot in mouth disorder. I think most of us are grumpy because there is no solution to the people will die law of the jungle business model short of going into communities. No technical person really believes their computer, phone, ATM, transactions, or “savings” are secure. Non technical people think, “why should I not use my debit card instead of cash, it is so easy, that is the way it should work. Why should I not use my phone to purchase items over the net.” Have multiple emails, be careful of two factor ID hacks, never use a debit card.

The people who brought technology to the world are not “nice” people. Gates, Jobs, Zuckerberg, Ellison, Bezos to name a few are known by the gossip of their ruthlessness.

I am a bridge engineer.

Or rather, I am a software engineer. I once built a bridge, but it was made out of logs and I was 12. It has probably fallen down and rotted by now. Still, I’ve learned a little about engineering since then. Knowing how bridges and software are built, there are some fundamental differences.

The first is the halting problem. Bridges, and other infrastructure governed by the laws of physics, do not suffer from an equivalent of the halting problem. By way of example, there are Roman bridges still in use today. They were some of the first bridges, and they may very well outlast human civilization. A bridge made out of nickel would see the death of our sun intact. Unlike civil engineering, in software engineering it is exceedingly difficult to prove mathematically that some Turing machines have no bugs. As they increase in complexity, this becomes impossible. This is an immutable property of software; it is not an immutable property of bridges.

The second fundamental difference is that software is primarily a social science, not an engineering discipline. Primarily being the operative word. Computer science is all about the study of Turing machines; it’s a branch of mathematics. Software engineering is a misnomer: it’s primarily about communication with Turing machines, but mostly with other humans interacting with the software and code. The Turing machines just execute the results of those human interactions over and over again. Regulating good code would be more like regulating polite speech, or a soccer match, than building material standards. There are more variants of communication with Turing machines than there are languages between humans at this point. Amongst each one there are too many dialects, accents, and pidgins to count. You can count the types of bridges on your hands and toes.

I think the first problem is intractable, because mathematics says it is intractable. I also think that “perfect is the enemy of good”, and that the second problem is where efforts should be spent. We now exist in an age where our algorithms have collectively become a form of super consciousness which is changing our society as fast as we change them. By changing the economic pressures which govern how software is written, we can change how software is written and how it influences our society. In other words, the solutions to improving software will be social, not technical, as you point out.

“Bridges” will still fall down though. They will always fall down, but we can change how fast they fall, and how much damage they cause when they do.

this is certainly a discussion whose time has come. for the last 40 years individuals have been mesmerized and delighted with the machines they could buy (or build) for personal use. those machines, personal computers, were novel, mysterious, and allowed much easier typing, much faster communications with others, and rapid access to an enormous universe of information.

but those machines had problems, problems we happily overlooked in part because of our fascination with and desire for the newest and best of that technology, and in part because we believed each time when we were told that we were buying “greatly improved” equipment – storage capacity of a megabyte, then 10 megabytes, processing speed from 3 mhz to 33 ghz (and multiple processors on the same cpu chip) – amazing. hypnotizing. but dysfunctional in the long run.

from its beginning in the 80s, the personal computer’s hardware changed by the month. in five years time an expensive and well-built machine could be passe or worse, no longer compatible with other equipment.

software similarly became outdated within years, either because it was not sufficiently capable, because it didn’t fit with newer equipment, or because its originator refused to support it in favor of a newer product.

hard drives were reliable, mostly. but occasionally catastrophically not, and fragile. over time the hardware interface, the plug, that connected a device like a hard drive or a disk drive to the cpu would change repeatedly then become unsupported.

worst of all were several extremely serious general problems with using a personal computer (used by individuals and organizations):

– the data stored on round floppy disks, square floppy disks, compact disks, digital video disks, flashdrives could not be reliably stored over time. this is due to the storage medium becoming outmoded and thus unreadable, and to the “fading” over time of the stored electronic characters.

consider photographs printed on photopaper 100 yrs ago. they are still readily revealing their content. consider a typed manuscript from the 1920s. still readable and copiable.

– it has come to pass in the last decade that no information stored on a personal computer is safe from being stolen without extraordinary effort to secure it. no information stored on a computer is safe from government(s) spying.

– the ordinary user of email, whatsapp, cloud storage, etc. has no idea of exactly where info he sends to others is going to go, nor to whom. further, he has no idea of how much information about himself he is revealing/making available to unknown others and organizations.

– the use of computer input devices (peripherals) like scanners, flashdrives, hard drives and output devices like sd cards, flashdrives, etc PLUS widely varying and obtuse instructions for transferring data, absolutely guarantee a lot of data is lost in transfer efforts which the originator would rather not have been lost.

– the operation of this machine (personal computer) and its components is completely opaque to the user without extraordinary effort.

humans have been way too accepting and complacent about this useful new machine, whose manufacturers have settled most of the costs of poor engineering, very rapid obsolescence, lack of privacy, and incompetent long term storage capability on individual users.

it’s time for an accounting. making great fortunes by constantly dumping new deficient machines and software on the market is no longer acceptable.

out of time for now, but this is an extremely important topic to discuss – and cuss :)

it has come to pass in the last decade that no information stored on a personal computer is safe from being stolen without extraordinary effort to secure it.

It was about 70 years after the Model T before we collectively decided that you shouldn’t sell a model of car unless it passes a crash test.

Generations had to come and go. By then the vast majority of Americans couldn’t remember a time when ordinary folk didn’t use cars every day. That’s what it takes. Nobody has any interest at all in real quality or safety until the product has become so completely boring that everyone has one, but nobody thinks about it. Only then does actual quality (as opposed to superficial, showy quality) start to matter.

But, maybe we’re getting there. Slowly but surely.

You know, you were so close to the actual cause but you didn’t say it. If governments, states and individuals allowed bridges that would fall down in a year they could pay like 1/1000 of the price, perhaps less. Heck you could throw a plank over a creek and drive your car over it, instant bridge!

If software purchasers (Often you) refused to buy software that was poorly built and not guaranteed, you might have some safe software that would last. You aren’t interested in paying 1000x the price, are you? So enjoy these planks and give me my $30.

” an increasingly imperialist Russia,”

This wholesale attack on an entire profession using a phony analogy is amazing. Those of us who have made our careers in IT have much to answer for, but this stupid post could well have been produced by a monkey at a typewriter.

The analogy sucks. Equating mechanical structures with logic is false. If you were going to insist on making that kind of silly construction, bridges and highways would be the equivalent of hardware while cars, trucks and busses along with the all too human people who drive them would be software and the liveware that operates on the hardware. To get an idea how silly the post’s proposition is, turn it around. If bridges and cars had evolved as profoundly as hardware and software have over the last 50 years we would have bridges over the oceans, cars would traverse them in seconds and the cost per crossing would be pennies instead of the trillion or so dollars we desperately need to spend domestically on aging bridges and transportation infrastructure to keep it from collapsing.

Reliability testing/debugging is hard, and expensive. At the gross level it is pretty straightforward, most cars run off the showroom floor while most hardware and software executes out of the box. But, I would no more buy the first model year of a new car than the initial release of a new generation of hardware or software. There are too many unknowns, things that do not work exactly as expected, or have short serviceable lives or plain old screw ups. By the second model year, or version 2.0, early adopters have identified many of the bugs, and manufacturers have fixed them. The more standard deviations out you get from base functioning the harder it is to identify potential or low incidence problems and the sometimes bizarre ways people will operate cars or software. The amount of testing needed to find lower order bugs rises exponentially and varies inversely with probability.

Consider the space shuttle. It used rudimentary guidance computers designed in the ’60s (you know, when lots of our bridges were new). It was so primitive it used magnetic core memory, wires with magnets strung on them; semiconductor memory had not been invented yet. The programs were kept on magnetic tape. For each mission they used three tapes, one to control launch, one to run orbital maneuvers and the third to control re-entry. They mounted the first tape for lift off, loaded the second one when in orbit, and the third one to return to earth. That system was in use into this millennium while the astronauts were taking notebook computers, tablets and digital cameras to do their jobs. Why? Because that extremely simple system had been thoroughly debugged. NASA knew exactly how it worked and exactly what each of a limited number of instructions would do every time for every possible path through the programs.

Are any of us excited about the opportunity to trade our current systems for the extra time and effort required by simpler systems that have been debugged? Will you take 3 tapes to pay for your groceries or gas at a pump? One tape to initialize the transaction, the second tape to process it or run the gas pump, and the third to complete the transaction and turn off the pump. That kind of thing is the price of holding back technology until it is completely debugged and absolutely reliable. As it is today people are bitching at the extra seconds it takes to authenticate credit cards with chips. Mag stripes were easier and quicker, but they were hackable.

Moore’s law drove hardware development for half a century. It was pretty simple. Semiconductor density doubled roughly every 18 months, so 36 months, 3 years, is two Moore cycles, which yields a 4 to 1 density increase. That makes everything that had gone before obsolete. Thank an electrical engineer. Since I started in the IT business in the late ’70s that is more than a dozen Moore cycles. That is why IT is cheap today. There has been no equivalent change in bridge technology, and bridges built in the ’70s are falling down. Disk drives, for example, went for about $1,000 per megabyte when I started; a 20MB drive was about $20k and the size of a washing machine. Current 3.5″ drives that hold several terabytes are around $25 per terabyte, which works out to a tiny fraction of a cent per megabyte. Bridge prices have not come down equivalently. We have reached a point where densities are no longer doubling every year and a half, but other changes in technology keep happening so the Moore equation still holds. Every three years it is all obsolete. Embrace what it does for us, don’t rage at it.

Semiconductors got faster along with getting denser. That has meant that periodically interfaces between components have had to change to keep up. Think cars: if we had maintained the Model T starting user interface we would still be using hand cranks. Technology changed and so did the starting interface. Happens with IT too, but quicker, think Moore. Who’d a thunk it?

Some things like homes and coffee pots are commonly purchased. Others, like apartments, mineral rights and software are commonly leased. Cars roll both ways. If you don’t like leasing software or anything else, or don’t like the terms, the answer is simple, don’t do it. No one is holding a gun at your head compelling you to lease software under terms you do not agree with. Ain’t capitalism wunnerful?

In short, the author can put his phony bridge analogy and profound ignorance where the sun don’t shine. Maybe if he goes back to his typewriter in a million years he will produce “War and Peace”.

Remember, don’t click on those links in your email. Phishing catches someone every few minutes, be careful it is not you. Liveware is the biggest bug, always has been, always will be. Regards, Archy

lefty, i have read your long comment twice now. it is thoughtful and informative.

my beginning statement below (“as usual. lefty you miss the point”) was inaccurate, reflexive, and churlish. i apologize.

your fine comment and the comments of some others here represent the views of people who have worked to construct this new computer-based communications system we are using more and more.

other comments like mine represent the views of users/consumers of the system and its products.

both sides have valid stories to tell.

thanks again for a point of view filled with some very interesting historical background.

Is Free Software Inevitable?

Linas Vepstas

February-July 2001

There have been political [RMS], anthropological [ESR] and organizational [CBBrowne] analyses of free software and its implications for society, but few discussions of its underlying economic theory and its repercussions. In this essay, I review some of the economic forces acting on Free Software producers and consumers, and try to place these in a macroeconomic framework. I will be focusing primarily on corporate interests, in large part because the nature and role of free software in a business environment is widely misunderstood, permeated with myths, and prone to propaganda.

There are many free software users because the software market has matured (as all markets do) into a commodity market, where price, not features, dominates buying decisions. This alone can explain most of the growing popularity of free (gratis) software among consumers. But it is more interesting to examine the forces driving the producers: the forces acting on those who create free software. Why is free software getting produced, who is doing it, and will this trend continue indefinitely?

There are parallels that can be drawn between Free (libre) Software and free trade, and, more accurately, to markets controlled by intellectual property barriers. These parallels, understood as powerful economic forces, may be the right way of understanding the continuing vitality of free software. I will argue that the crux of the analogy is that these economic forces act indirectly, shaping markets rather than being markets. As such, they provide leverage that direct investment cannot match, and thus are far more powerful than first appearances. A good economic theory should provide a better way to predict and understand what the future may hold in store for Free Software. Based on the observations herein, it would seem that the dominance of Free Software is indeed inevitable. Free software wins not only because consumers like the price, but because producers like the freedom.

This article assumes a basic knowledge of the Free Software Movement and of the goals of Open Source. [additional overview references needed here.] [need URL’s for other references].

Introduction

How can we then explain the growing popularity of free software and open source? Eric S. Raymond tries to explain it all with an anthropological study in “Homesteading the Noosphere”. Richard Stallman categorically states that Free Software is about Freedom (Liberte) and not Price (Gratuit). ESR’s observations may explain the popularity of Linux with hobbyists, hackers, and IT professionals, while RMS’s appeal to Kantian imperatives offers no explanation. C.B. Browne discusses the pros and cons of authoritarian/centralized vs. distributed/anarchic development processes [Linux and Decentralized Development]. This may explain the success of the development model, but not the adoption by end-users. Finally, the popular observations and shrugged shoulders invoked during conversations about ‘business models’ are belied by the growing adoption of open source by the hardcore corporate and business community. There’s a force driving this growth, but what is it?

Looking for an economic explanation seems particularly promising when dealing with corporate interests. This is because corporations tend to act much more like classical economic self-interested agents than do individuals. Indeed, to understand the actions of individuals participating in the free software movement, one has to appeal to sociological and anthropological principles [ESR], since the seemingly altruistic behavior of individuals is unexplainable with textbook economic arguments. But when dealing with corporations, we might expect that behavior is driven by almost entirely selfish reasons: maximization of both short- and long-term profits. Corporate motives are not subject to conjecture: they are usually plain to deduce.

Economic equations have two sides: producers and consumers. The forces acting on one can be nearly independent of the forces acting on the other. In a mature marketplace, the producers are as interchangeable as the consumers: one is pretty much as good as another. Thus, what matters are the forces between consumers and marketplace as a whole, and the forces between the producers and the marketplace as a whole.

To consumers, the software marketplace appears to be a mature, commodity market, or so I will argue immediately below. In a mature market, there is little product differentiation, and buying decisions are made primarily on price. This alone can explain most of the popularity of free (gratis) software.

To software producers, the software marketplace appears to be a complex, shifting seacoast, fraught with dangers, and a difficult place for anyone to make money (with the exception of Microsoft). The forces acting on producers are the ones that the majority of this essay will be devoted to. These forces are giving rise to a new class of producers, non-traditional producers who can ‘afford’ to create free (gratis) software, and give it away. They can afford to do this because their dominant costs are tied up in issues of intellectual property rights and freedoms (liberte). They do not recoup their software development investment through direct software sales, but instead, through other means. The need for source code access can outweigh the cost of underwriting ‘free’ software development. I claim that these needs will be powerful engines driving the continued production of free software. Free software wins not only because consumers like the price, but because producers like the freedom.

Innovation, Commodity Technology and Mature Markets

The process of introducing new, unique and powerful features that users want is called ‘innovation’. It would appear that innovation in mass-market server and desktop software has stopped or stalled. One web server is pretty much like another; it should be no surprise that the cheapest one, Apache, should gain market dominance. Of course, there are differences: some web servers may scale better than others on high-end, cluster machines. Other web-servers may install more easily, or may use a much smaller RAM footprint. There is an immense variety of ways in which web servers can distinguish themselves from one another. But the plain fact is that most of these features are not important or interesting to the majority of web-server buyers. Just about any modern web-server is ‘good enough’ for the majority of the market. And therefore, the market will choose the cheapest one.

Am I saying that Apache is merely ‘good enough’, and that Apache developers are stupid and incapable of innovating? No. I am merely stating that Apache didn’t need to be ‘the best’ to win market dominance, it only needed to be ‘the cheapest’. And in one important sense, innovation does continue: with every new whiz-bang feature that is added to Apache, an additional 1% of the market is able to look at it and declare: ‘this is good enough for me’. The same situation currently exists for other server subsystems: domain name servers, mail delivery agents, security tools, and, of course, server operating systems. This alone can explain most of the current popularity of Free Software in the server market.

The same situation has also existed in desktop systems for over a decade. The windowing systems in OS/2, the Macintosh, MS Windows 3.1 and the X Window System (to name the major ones) have been ‘good enough’ from almost the beginning, being distinguished primarily by price and the availability of the platform on which they ran. For over a decade, there have been dozens of word processors and spread sheets that were ‘good enough’. More recently, the same has become true of mail readers and office suites. Microsoft has built its software dominance by leveraging the ‘good enough’ phenomenon. Microsoft desktop products didn’t have to be better than the competition’s (although they often were), they merely had to be more easily available. (I say ‘more easily available’, not ‘cheaper’, because for the home user, factors such as the need to drive somewhere to purchase a competing product, or the need to go through a possibly daunting install process, are powerful offsets to price differences. This is called ‘convenience cost’, and the power of bundled, pre-installed software is that its total convenience cost makes up for its price-tag.) Microsoft achieved market dominance by failing to innovate, and by stifling innovation: it created products that were ‘good enough’ and were cheaper than the competition. If an innovator introduced a revolutionary product that could demand high prices and rich profits, Microsoft only needed to create something similar that was ‘good enough’. Microsoft could then leverage its distribution channel (viz., bundling, pre-installs) to offer a lower ‘convenience cost’ to win the market.

In a commodity market, the low-priced offer wins the business. To the extent that software is a commodity, then free (gratis) software will displace popular, commercial, non-free software. As we shall see below, Free Software is a disruptive innovation, and its continued development is powered by this fact. However, from the above arguments, we see that the popularity of free (gratis) software is due to the lack of product differentiation that is the hallmark of ordinary market maturation.

Is Free Software Like Free Trade?

The plaintive cries of the ‘intellectual property’ industry remind me of a similar refrain heard in the hallways of Washington, sung to international trade representatives and policy wonks: “How can we survive when the flood of cheap products and labor from Mexico will put us out of business?” (NAFTA) “How can the Japanese rice farmer survive, and Japan guarantee its economic independence, when cheap American rice is allowed to be imported?” The answer is, of course, that one doesn’t. The inefficient producers do indeed go out of business. When the trade barriers come down, the protectionists are the losers. Some survive, others grow to take their place: some may discover new, niche, upscale markets, while others figure out how to reduce overhead and be profitable with reduced margins. Technology innovation often leads to ‘disruptive dislocations’ [The Innovator’s Dilemma, Clayton M. Christensen]. I think the same fate awaits the software industry, and more broadly, the intellectual property (viz. entertainment) industry.

Is this bad? Is there something wrong with this? Let me take sides with the trade liberals, who seem to have won: the loss of jobs and income from lowered trade barriers is more than offset by the broader and cheaper array of products. Once the dislocation is behind you, the total benefit of free trade exceeds the loss of jobs. Far from starving, or scratching for a living, the first world has a greater variety of foodstuffs and international cuisines available than ever before. It’s not a zero-sum game: by removing economic friction and encouraging efficient markets, the whole world benefits. Indeed, the benefits usually go far beyond any original vision: one finds entirely new, unimagined industries springing up, built on the backs of the now-cheap, commodity products. [The Innovator’s Dilemma, Clayton M. Christensen]

On the flip side, dislocations have been severe (e.g. the American steel industry, the British coal mining industry), as thousands of workers were displaced and were unable to find comparable employment. Dislocation is undeniable and hard; in some ideal world, one might hope that somehow the industries and the politicians might find some softer way through the transition. This would not be pure capitalism: in traditional pure capitalism, the worker be damned, it’s all about profits. But I argue below that the high-tech industry has been wracked with deep and powerful dislocations and paradigm shifts for half a century. These seem almost routine: every high tech worker knows that what one did five years ago is not what one is doing now. But this is quite palatable: the high-tech sector, as a whole, has not shrunk. Open Source/Free Software will not cause it to shrink.

As it is with trade barriers, so it is with proprietary software licenses. A proprietary license is a barrier, much like a trade barrier: it prevents the efficient, frictionless flow of information, and uses this barrier in an effort to scoop up profits (vast profits or windfalls, in many cases). Access is denied, forbidden, and protected by the law and enforced by the arms of the state: reverse engineering software is like smuggling: a black art, and now illegal in yet a new way with the passage of the DMCA. Of course, some companies will sell you their source code, usually for a steep price. The analogy: a stiff fine, a tariff, import duties. This suits the producer quite well, but hardly the consumer.

(Side Note: I am not arguing that all notions of intellectual property be abandoned. Although in its current form, the copyright laws, and in particular the patent laws in the United States appear to be severely broken, this does not imply that copyright or patents can serve no purpose. They do not need to be abandoned, although there is a clear need for reform.)

[Editorial correction from Richard M. Stallman:

There is a partial similarity between free software and globalization, but also a major difference.

The advocates of “free” trade, and neoliberalism in general, argue that it creates wealth. That is true–but it also concentrates wealth. The result is that only the rich benefit. The poor gain little; they may even lose, as has happened in the US.

Free software is different, because it works against the concentration of wealth. (Copyright is a major factor for concentration.) So when free software creates more wealth, the benefits are general.

This is how free software can be beneficial, while global “free” trade is harmful.

I put the “free” in “free trade” in scare quotes because it is a misnomer. Trade is still restricted, but now it is restricted by copyrights and patents rather than specific laws. These treaties do not eliminate control over trade; rather, they transfer it from governments, which sometimes respond to the well-being of their citizens, to corporations (usually foreign), which don’t recognize a concern for the public.
]

My goal here is to try to explain why open source is popular among businesses, why that popularity is growing, and why it’s reasonable to believe that it will only grow more. I try to pin this on ‘classical’ economic ideas of self-interest: corporations are going for it because it’s cheaper than the alternative, and I only hoped to point out how it’s ‘cheaper’.

The focus here should be on the role that patents, secrets and proprietary rights play in software, and the thesis is really centuries old: barriers provide economic ‘friction’ that allow the rights-holders to derive economic ‘rents’. What’s new here is the simple observation that free software is built without these barriers, and thus (a) free software companies won’t make big bucks, (b) users of free software derive an economic advantage by avoiding the need to pay ‘rents’ to the rights-holders.

The flaw in my analogy is that of comparing to ‘free trade’, which implies ‘tariffs’, when in fact the analogy should be to ‘trade that is prevented due to the action of patents and secrets’. This is partly literary sleight-of-hand: most people believe patents are good, and that free trade is good. So if I say ‘patents are bad’, I get booed, but if I say ‘free trade is good’, I get cheered. But they’re really similar statements.

(as RMS points out, large corporations are pro free-trade globalization, because they have something better: they have in place copyright and patent protections. If these didn’t exist, we might expect that large corporations would be anti-globalization.)

A Case Study

Company D, acting as the developer or lead contractor to an industry consortium C, developed a powerful and expensive web-based system. The technology used a proprietary web server from company X. The system was large: it consisted of hundreds of Unix-based SMP servers and a handful of mainframes located in two cities, mirrored in real time so that a major catastrophe in one city would not disrupt the operation of the system. It cost tens of millions of dollars, approaching hundreds, to develop and operate. It was used by millions of web users on a regular basis.

As with any such large, complex system, it was beset by a variety of problems. One of the more serious problems was a ‘memory leak’. During normal operation, the web-server/dynamic web-page subsystem ‘leaked memory’, using larger and larger amounts of the installed RAM in each of the servers. After a few days or a week, an alarmingly large part of RAM became unavailable, and soon enough, all 4GB of installed RAM on each server would become exhausted. Preventative maintenance was scheduled: at first weekly, then twice weekly, on Wednesday and Sunday nights, the servers were rebooted to reclaim that memory. The reboot was not entirely simple: one had to check that all customers had logged off, so as not to disrupt any pending transactions. Log files and communications subsystems had to be carefully closed before a reboot, in order not to set off automatic error detection alarms. After the reboot, each system was examined to make sure it had come on line properly and was functioning and usable. This maintenance process was carried out by a small army of graveyard-shift high-tech workers, sysadmins. Rebooting hundreds of servers twice a week is no small task. The salary expense alone amounted to millions of dollars annually.

In order to correct this problem, a special SWAT team was assigned to locate and eliminate the bug. One little problem: the web server was proprietary, and the source code was not available to the SWAT team. A specialized, pure-custom debugger was developed to trace and match up all calls to the C library’s malloc() and free(). Calls that did not match up in pairs pointed at the memory leak. After tracing through and compiling millions of calls, three leaky suspect areas were uncovered. The web server company X provided a special debug version of their binary, and we were thus able to explore stack traces surrounding these leaks. One could very loosely say that a portion of the server was reverse-engineered. Armed with reams of data, we scheduled a set of conference calls and eventually visits to the vendor.
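For readers who want a feel for what such a tool involves, here is a minimal, hypothetical sketch of a malloc/free interposer in C, assuming Linux, glibc, and LD_PRELOAD. It is only the shape of the idea, not the team’s actual purpose-built debugger, and a production version would need re-entrancy guards and thread safety; pairing up the logged pointers offline reproduces, in miniature, the “calls that did not match up in pairs” analysis described above.

    /* leaktrace.c: build with  gcc -shared -fPIC -o leaktrace.so leaktrace.c -ldl
     * and run the target as     LD_PRELOAD=./leaktrace.so ./your_server
     * Every allocation and release is logged to stderr; any pointer that is
     * malloc'd but never free'd is a leak suspect. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *(*real_malloc)(size_t) = 0;
    static void (*real_free)(void *) = 0;

    static void emit(const char *tag, void *p, size_t sz) {
        char buf[96];          /* write(2), not printf: stdio may itself allocate */
        int n = snprintf(buf, sizeof buf, "%s %p %zu\n", tag, p, sz);
        if (n > 0) write(2, buf, (size_t)n);
    }

    void *malloc(size_t size) {
        if (!real_malloc)
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        void *p = real_malloc(size);
        emit("malloc", p, size);
        return p;
    }

    void free(void *p) {
        if (!real_free)
            real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");
        emit("free", p, 0);
        real_free(p);
    }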

The meeting on vendor premises was surreal. We spent the morning setting up a system that exhibited the problem, in the presence of a support tech who was unfamiliar with our complaint or its severity. As we attempted to explain the problem, the recurring refrain was that ‘it is not our code, the bug must be in your code’. We made some progress in the afternoon in trying to explain to and convince the support tech, but the day ended in frustration as we escalated to a manager, and then a second-line manager. Both managers, unprepared, unprompted, blurted out that it can’t possibly be their problem, it must be our problem. (After all, their product was version 3, many years old, millions of users, and no other customer had ever registered such a complaint. Our perception was that no other customer had ever stressed their server so strongly, had used their product in a 24×7, 100% CPU-load situation). The next day, a fresh batch of techies came by to examine the problem, and after much persuasion, left with heads shaking. More escalations, and on the third day, our last day, we finally met with a real programmer, someone who actually developed the product. Unprompted, the words fell out of her mouth: ‘it can’t be our problem (we use Purify, the memory debugger), it must be your problem’. We left for the airport with a promise from the developer and her management that she would look into it. However, according to our on-site, permanent support rep, the demo machine exhibiting the problem was powered off and left untouched for weeks on end.

Followup phone conferences with management were scheduled weekly, and with upper management monthly. No forward progress was made. There were more plane trips, more on-site visits. The problem seemed to remain unresolved, festering. This went on endlessly, without relief, without news. Finally, after 3 months of severe frustration, and some strong, take-charge lobbying by the local architects, it was decided that the whole system be ported to Apache. At least that way, we would have access to the source code, and could be masters of our own destiny.

Another (possibly more important) factor contributed to this decision. The system was slow, with painfully long response times for certain web pages. A good part of the performance problem was that a large part of the function was coded in a semi-interpreted proprietary programming language supported by this server. There were some attempts made to find alternate compilers/interpreters for this language, with the idea being that maybe we could get a performance boost by tweaking the interpreter. But this proved infeasible: an alternative could not be found, and even if it had been, the thought of trying to optimize a compiler was a bit dubious. Compilers are ‘hard’, and the investment in improving one would be a risky and difficult one, possibly having little to show for it. Lack of access to the source code of the interpreter/compiler for this language proved to be another stumbling block.

It was decided that it would be re-written entirely in Java, at the expense of 15 million dollars: the cost of 4 or 5 departments of coders working for over a year.

This should have been the end of the story, at which point I could point out the obvious advantages of open source. If the system had been designed on top of Apache from the beginning, a fortune could have been saved. Almost anything would have been cheaper than what we went through: we could have paid dozens of programmers for a year to hunt down one solitary, single bug: that would have been cheaper than the army of sysadmins doing twice weekly reboots, and the army of programmers redesigning everything in Java. The mistake of not going with Apache was due to a reasonable decision backed by the traditional business reasoning of the late 90’s: Apache was an unknown. Company X was a large, powerful, widely acclaimed company. Company D was even larger and more powerful, the epitome of business computing. This was a professional, company-to-company relationship: each could be counted on to defend the other, to do what it takes for a successful web-site launch. The pockets were deep, seemingly bottomless: no matter how bad things got, there was a cushion, a safety net in picking a popular, respected product. It seemed to be sheer lunacy, absolute madness to think that one could ascribe such powers, such trust, such dependability to some rag-tag group of volunteers known as the Apache Foundation. Picking Apache over web-server X would have been career suicide, not a rational decision.

But that’s not the end of the story. Amnesia set in, starting with the choice of Java for the new version. Late in the development cycle, the new system was finally able to be tested for extended periods of time. After 30 hours or so of 100% cpu load, the JVM, the Java Virtual Machine, locked up, locked up hard. Our conclusion: no other user had ever run a JVM in a 100% cpu load, 24×7, high-availability environment before. In principle, this was fixable: our company had a source code license from Sun, and thus should have had many of the benefits of open source. With one little catch. Fear of ‘intellectual property contamination’ meant that only a restricted set of employees actually had the ability to view the source code. We didn’t want rank and file engineers to accidentally learn the secrets of Java and unwittingly use Sun’s patented ideas in other products. It was clean, honest and upright policy. Unfortunately, the Java developers were an ocean and six time zones away. And, as you might surmise, a bug that shows up only after 30 hours of heavy load is not the kind of bug that is easy to diagnose, and once diagnosed, is not easy to fix. It remained a tangle for months. Eventually, some work-arounds and an upgrade to a newer, more recent version of the JVM made the bug disappear; it was never actually fixed (fixing it was no longer important). And another bit of amnesia set in at this point: since everything now worked, the original motivation to move to Apache disappeared. The transition was never made.

In retrospect, it was a very costly misadventure. Not only was there a big hit to profits, but there was a hit to revenues. The consortium members were quite upset with the shenanigans. Schedules had been delayed by years, performance was questionable, and much of the development and maintenance cost was shouldered by the consortium members. The worst had come to pass: there was a breakdown of trust between the consortium and the prime contractor, with rancorous and acrimonious accusations flowing between executives on both sides. By 2000, the whole thing had been essentially wound down, with major portions of the technology sold off for tens of millions of dollars, and only a skeletal operating crew remaining. A potential business valued in the billions of dollars, with decades of happy customers and users, was not to be.

Analysis

The largest of companies, with seemingly the best intentions of offering customer support, had failed to do so. In light of this, plaintively, I want to ask: what can a mere mortal, a small company or even a mid-size company, ever hope to gain from ‘customer support’ from a proprietary software company? If a giant can’t get it, what hope do you have?

The powerful, personal lesson that I take away from this is never, ever use proprietary software for something important. Even if your life depends on it. Especially if your life depends on it. If a bug needs fixing, I can hire a programmer, or two or three or a dozen, to fix it. The cost will be less than waiting for the vendor to fix it for me. I am bound to make many business mistakes in the future. The free software movement is still young, and many, many important technologies are not available as free software, and those that are, are frequently immature. This is a handicap: it’s hard to build an all-Linux solution. But the cost (free!) and the potential to be the master of one’s own domain make it all seem worthwhile.

Is this story unique? At this scale, maybe. But this story played out at the dawn of corporate awareness of open source. Thousands of other large technology projects have failed at thousands of other large corporations. I suspect that few post-mortem analyses have ever pinned the blame on the lack of source code. But this, I suspect, is because no one ever thought that source access was a viable or even a remote possibility, or that the lack of it could be a root cause of failure. Few executives or technology officers have given it much thought, heard about it at seminars, or discussed it with their peers at other corporations. It hasn’t entered the consciousness of business at large on any broad scale. I couldn’t say that there has ever been a Harvard Business Review article written on such matters, or that a class has been taught at the University of Chicago Business School analyzing such business case studies. We are at the very dawn of understanding. But slowly, inevitably, and surely, it will enter business thinking. Pundits, advisors and consultants will eventually start noticing, and they’ll advise and consult in a noisy fashion, with splash and panache. And business leaders will come to hear such stories, and even become inured to them, before the light-bulb goes off with a bright ‘ah-ha!’: open source removes barriers and stumbling blocks, and open source provides strong financial incentives and rewards.

Notes:

  • Linux in Business – Case Studies provides a large collection of studies and stories of the adoption of Linux by businesses.
  • Component Use May Limit Reliability. Discussion of software reliability issues.

Altruistic Behavior by Capitalists and Corporations

Neither the GPL nor the BSD license prohibits the private, internal use of enhanced software. One can make ‘proprietary’ changes to GPL’ed software and use them internally. However, there is an economic incentive not to keep these changes secret and proprietary, at least not for long: the maintenance and upkeep costs of proprietary changes. If a proprietary change is made to version 1.0 of some GPL’ed software, then the user of this modified version cannot easily take advantage of the newer version 2.0 of the same software. This is because the proprietary changes would need to be integrated into the newer version 2.0. This can cost as much as, or more than, the original modifications, so much so as to be untenable. The internals of the software change, possibly requiring a major redesign of the proprietary enhancement. Even if the enhancement is easily ported, there can be non-trivial costs of test and validation, especially when the software is part of a complex system or server environment, where high uptime and dependability are requirements. In the medium-to-long run, these maintenance and upkeep concerns provide a powerful incentive to donate the enhancements so that they become integrated back into the main body of the work. In this way, the maintenance and upkeep costs are spread across the community, rather than remaining concentrated in the hands of the modifier.

Thus we see that profit-driven, capitalistic, greedy corporations can be driven (and are being driven) into contributing significant developments to the free software pool. In fact, the greedier, the better: a sharp and quick-witted corporation will act quickly to shed costs onto the community. It may develop a needed feature, and dump it, half-baked and untested, onto the community. Then, with a bit of cheap and diplomatic mailing list behavior, encourage the pool of ‘volunteers’ to debug and polish its contributions. “Release early, release often” is a mantra already understood by individual open source developers, and it will be easily learned by self-interested corporations.

Thus, we see that the kinds of behaviors that are described as ‘altruistic’, and even derisively labelled ‘communistic’ by some [Mundie?], can emerge as a result of not-so-enlightened, self-interested, greedy motivations.

An Example of Indirect Investment

Let us imagine a new processor, the SuperProc, developed by Microelectronics Inc. This is a low-power, embedded processor that Microelectronics Inc. wants to sell to Detroit, so that, e.g. Cadillac might use it in their anti-lock braking system, engine control, and the air-conditioning system. Microelectronics Inc. is an expert in designing chips, and its business is selling chips, not selling software. To make the chip all the more appealing, it needs to arrange for a development environment, consisting of a compiler, an assembler, and a C library. It has several choices: develop this technology in-house, subcontract it to a compiler/development tool specialty shop, or modify the GNU gcc/binutils/glibc toolchain. The first option is inappropriate: after all, Microelectronics is a hardware, not a software company. The last two options are practical, and the choice is really determined by the cost, and the question of whether the proprietary toolchain might have better features than the GNU toolchain. Assuming that the GNU toolchain is chosen, then we see again that the development of free software has been funded by a corporation acting strictly in its own competitive interests.
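To make the toolchain option concrete, here is a minimal sketch of what SuperProc development might look like from the customer’s side, assuming Microelectronics ships a GNU-based cross toolchain. The register address, the function, and the ‘superproc-elf-’ tool prefix are all hypothetical illustrations, not real products.

    /* abs_sensor.c -- toy illustration of bare-metal C code for the
     * hypothetical SuperProc embedded CPU.  The register address and the
     * cross-compiler name below are invented for illustration only. */
    #include <stdint.h>

    /* Hypothetical memory-mapped register holding the wheel-speed count. */
    #define WHEEL_SPEED_REG ((volatile uint32_t *)0x40001000u)

    /* Return the current wheel speed; the anti-lock logic would poll this. */
    uint32_t read_wheel_speed(void)
    {
        return *WHEEL_SPEED_REG;
    }

    /* With a GNU toolchain retargeted to SuperProc, the build is just the
     * familiar gcc invocation behind a different prefix, e.g.:
     *
     *     superproc-elf-gcc -O2 -ffreestanding -c abs_sensor.c
     *
     * The point of the sketch: Microelectronics pays only for the backend
     * work; the compiler, assembler, linker and library machinery already
     * exist in the free toolchain. */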

The moral of this story is that free software is developed indirectly, as a side effect of developing and marketing the main product. Unlike a for-profit, pure-software house, Microelectronics does not have to allocate a budget for marketing and advertising its software toolchain. It does not need a sales force to get it into the hands of customers. If it picks the GNU route, it doesn’t even have to track the number of copies or licenses, or pay any royalties. It can mostly skip support questions: except for serious bugs, it can refer needy customers to Cygnus Solutions (now a part of RedHat) for support. It has none of the overhead associated with a traditional pure-software company.

Imagine that some proprietary compiler/toolchain company had the idea to create a toolset for SuperProc. Without direct support from Microelectronics, it would have a very hard time making a business case. It’s not the R&D costs, it’s the marketing and sales costs that would eat away at the plan. By comparison, Microelectronics only needs to pay for the R&D, and thus can get a toolchain for a tiny fraction of the cost that it would take a traditionally-structured software market to deliver the same. Again, we have an analogy to free trade. By removing the proprietary barrier, a more efficient market results.

It’s also important to note that there is an indirect trickle-down effect at play as well. If Microelectronics were to hire Cygnus Solutions to develop the toolset, then some small fraction of the total investment would go into enhancing and improving the CPU-independent parts of the toolchain. Although Microelectronics is ostensibly investing in the SuperProc backend only, de facto, the front end gains a bit as well. The front-end improvements are shared by all CPUs, and in particular, by PC users running on Intel chips. This benefit accrues to Intel users even though the investment is for a different platform entirely.

Altruistic Behavior by Governmental Entities

Other Ways That Corporations Support Free Software

Think of it this way: suppose you are the manager of some 50 technical employees, and a dozen sysadmins and support personnel. Some are working on product, others are setting up Linux firewalls, and so on. But really, day to day, can you account for their time? That guy who has been struggling to set up an automated intrusion detection system on Linux for the last two months: how do you know he hasn’t been sending large patches, enhancements and bug-fixes to LIDS, all done on company time? And if you did find out, would you reprimand him? Direct experience shows that this sort of process is going on all the time; what is not known is how large the effect is.

But perhaps the real point of such an example is that this kind of behavior isn’t possible with proprietary software. The same employee may spend just as much time combing over newsgroups, documentation and the like, exchanging messages with peers, hunting for advice on configuration and the like. But just at the point where the employee finally becomes conversant, comfortable with the technology, proprietary software bars them from productive participation. They spend their time devising work-arounds and inventing clever hacks, when it might have been easier to just find and fix the bug. Open source projects and proprietary software both eventually ‘grow’ groups of strong, knowledgeable, committed users (after much time and invested energy). However, open source projects can ‘harvest’ contributions from their user groups that proprietary software vendors must of necessity leave ‘fallow’.

Unfortunately, there is little data available that might show how much of GNU/Linux was developed on unpaid time, vs. during working hours (whether management approved or not). There is a wide range of opinions on this matter.

Open Standards: A History of Computing

Note, however, that there is a gray area between the lowest levels and the highest levels of the system. In the layer between the web application server and the details of the application, there might be some generic programming interfaces. When these are first created, there is an incentive to keep them proprietary: they provide a competitive advantage, as anyone wishing to create a competing system would need to reinvent these generic services. But time does not stand still. As time progresses, proprietary systems that used to provide a competitive edge are eroded, and either go extinct or become open. To understand this, let’s look at the history of computer technology from the marketing point of view.

In the 1950’s, computers were sold without operating systems (much as many embedded application chips are today). It was up to the buyer to create the environment they needed. By the 1960’s, this was no longer the case. Computer manufacturers were briefly able to tout: ‘New! Improved! Now comes with Operating System!’. But this didn’t last long; eventually, all computers were sold with operating systems, and there was no particular point in using advertising space to announce that it came with an operating system. The buyer already assumed that. The buyer was basing their purchasing decision on other factors. The bar had been raised.

This scenario repeats itself over and over. In the 1980’s, Unix workstation vendors could and did compete on the basis of technology that we now take for granted: a windowing system (Sun, SGI and NeXT had NeWS, the rest the X Window System), networking capabilities (TCP/IP vs. IBM’s SNA and other protocols), distributed file systems (Sun’s NFS vs. IBM’s SNA-based file system, ‘Distributed Services (DS)’, and later the Andrew File System (AFS) vs. the Distributed File System (DFS)), distributed computing environments (IBM, HP & Apollo’s DCE vs. (I forget the name) Sun’s stuff), windowing toolkits (Sun’s OpenLook vs. HP’s Motif vs. the academic (Carnegie-Mellon) Andrew Toolkit), 3D systems (SGI’s GL/IrisGL/OpenGL vs. IBM’s & HP’s PHIGS/PEX), and programming languages (C++ vs. NeXT’s Objective-C). During the interval when these technologies were hotly disputed, a tremendous amount of advertising ink was spilled and PR hot air vented on promoting one technology over another. Customers were acutely aware of the differences, and made buying decisions based on the perceived competitive advantages. However, as time passed, things settled down. These days, all Unix customers take for granted that a workstation comes with X11, TCP/IP, NFS, Motif and OpenGL. Advertisements no longer make a big deal out of this; in fact, advertisements and PR completely fail to mention these technologies. Purchasing decisions are based on other factors. The bar had been raised.

In short, as time progresses, the level of sophistication rises. A company cannot have a competitive advantage when all of its competitors can match all of its features, feature-for-feature. One advertises what the competition does not have. That’s how one distinguishes oneself from competitors.

In the course of the competition, the competitors learned another lesson: “Open Standards”. This was not an easy lesson. Sun’s NeWS was considered by many to be a superior windowing system. Sun held on to it quite tightly: the licensing terms were restrictive (keeping Sun in control) and expensive. There were attempts to license it to large corporations (DEC, Microsoft), but only a few smaller, non-threatening corporations (as SGI was at the time) picked up on it. In some cases (IBM), Sun did not even make an offer to license. The restrictive terms and the lack of an offer drove away IBM, HP, DEC and all the other big players. As a result, the other vendors rallied around, and were driven into the arms of, MIT’s X Window System. Today, X11 dominates and NeWS is extinct. Conversely, Sun seemed not to value NFS: it was given to all the Unix vendors. By the time that IBM introduced Distributed Services, it was too late. DS had some technical advantages: it had client-side caching, for example, which NFS of that era did not. It also allowed the sharing of volumes with mainframes; no other Unix machine did this. But it was too late. NFS had already taken over. On the window-toolkit side, Sun kept OpenLook proprietary until it was too late. Motif had won.

SGI was particularly clever with GL. GL gave SGI a tremendous competitive advantage in the 3D graphics market. It was only when the other workstation vendors finally stopped bickering and started throwing their weight behind PHIGS that SGI realized it was threatened. It acted quickly and decisively: it remolded GL into OpenGL and licensed it quite freely. OpenGL won, while PHIGS has become irrelevant. Of particular note was SGI’s ability to ‘raise the bar’ even after it had opened OpenGL. While all the other vendors were readying their first release of OpenGL, SGI rolled out new (de facto proprietary) features and enhancements to OpenGL. Sure, one could get OpenGL from IBM, but SGI’s implementation had more stuff, more sophisticated stuff. And it was also moving on other fronts: SGI encouraged programmers to code to its higher-level, more sophisticated Performer and Inventor 3D object systems, instead of the low-level OpenGL. It had trapped its customers and fought hard to keep them trapped by raising the bar. Stuff that was below the bar was freely and openly shared. Stuff below the bar no longer provided a competitive advantage; on the contrary, by sharing the stuff below the bar, one could protect one’s investment in the technology. The protection was protection from extinction. The Open Systems of the late-80’s and early-90’s did an excellent job of shutting out proprietary interlopers while generating billions in revenues.

This same phenomenon continues today, and carries over easily to Open Source/Free Software. Suppose that company D of the case study above had indeed developed a generic but proprietary set of functions in its code. It can derive a competitive advantage by keeping this interface proprietary, and can keep this advantage for years. But one day, an open, free implementation of a similar set of functions arises. It may not be anywhere near as good as Company D’s implementation, but it does have a popular following. What should company D do? If it keeps its interfaces proprietary forever, it will wake up one day to find that maintenance and upkeep costs are an anchor chain around its feet. The interfaces no longer provide a competitive advantage; rather, they have become a cost center. Company D’s only rational decision is to wait until the last minute, and then liberate its proprietary technology. It might not need to make it completely free, but it does have to make it cheap enough that it will crush any competing implementations. These days, with the rise of the GPL, it may well mean that Company D is best off just GPL’ing its work, since anything else will drive away future adoption of its technology. If company D is successful in opening the stuff below the bar, then it will have protected its investment. The ‘opening’, ‘liberating’, or ‘freeing’ of technology is nothing new in the computer industry. It’s a theme that has been playing out for five decades. It is not about to stop.

Monopoly Forces

IBM has enjoyed a monopoly position in mainframe operating systems for three decades. It has two mainframe operating system products: VM and MVS. They are very different in design and capabilities. MVS is the operating system that was developed explicitly for sale/licensing to its mainframe customers. VM was developed internally, in part as a research project, and eventually became widely deployed within IBM. It had two or three very powerful features that no other operating system had ever had, and that, for the most part, most still don’t have. First and foremost, it implements the concept of a ‘Virtual Machine’. Every user gets their own copy of a virtual machine. In this machine, one can then boot any operating system that one desires: from the user’s point of view, it looks like ‘bare metal’, and just as with real bare metal, one can do anything one pleases. Similar systems are available for PCs these days: for example, VMware allows you to boot and run both MS Windows and Linux on the same PC at the same time. VM had the interesting property that one could ‘crash’ a virtual machine without disturbing other users. The VMware analog would be having Windows crash without disrupting the Linux virtual machine. (VM is superior to VMware in that the mainframe hardware has specific support for VM, whereas Intel chips (and most RISC chips) do not. The hardware support makes VM much simpler and faster than VMware.) VM also had a built-in hardware debugger, and was small, fast and lightweight.

Eventually, the existence of VM became known to IBM customers, and it was not long before they begged, pleaded, wheedled and threatened IBM into selling it. IBM eventually begrudgingly complied. Customers loved it: it allowed a sysadmin to partition a machine into several parts: one part ran the stable production environment, while other partitions ran the latest, new-fangled, experimental stuff that was still being debugged. You could use the same machine for two things at once: one could try out new software without endangering the stability of the older software.

IBM did not really want to (and still does not really want to) sell VM. It wants to put all of its development resources and dollars into MVS. It doesn’t really want to deal with the cost of customer support, the cost of sales, the cost of marketing for VM. It doesn’t want to have to enhance VM just because customers demand it. It would rather have it go away. It can charge much higher prices for MVS while slowly adding VM-like features to MVS (e.g. LPAR partitions). It can make more money licensing MVS. It has no competitors that are driving it to innovate MVS, or to lower the price of MVS. It’s stupid to compete with oneself. When it let the genie out of the bottle, it found itself in exactly that stupid situation: VM was applying competitive forces on MVS.

What should IBM do? Because it enjoys a monopoly, it has no incentive to open up VM. There is no competitor offering anything as good as or better than VM. IBM’s most rational move is to bury VM as best it can, and this is precisely the strategy that IBM is following. VM is now quite old, and hasn’t been kept up. While still interesting to a large segment of customers, it’s slowly withering.

Microsoft has never found itself in a VM/MVS situation. But it does enjoy a monopoly, and therefore feels no pressure to open its older technologies. The ‘open standards’ scenario cannot play out in the Microsoft world, because there is no competition that causes Microsoft to rethink its proprietary strategy. One could argue that, for example, Samba is providing a competitive pressure that should force Microsoft into opening up its file-server software. But two factors prevent this. Culturally, Microsoft has no experience in opening anything. Secondly, if Microsoft opened up its file server, it seems highly unlikely that it could save on development or support costs; nor would it be able to add new customers by opening it up. More likely, it would lose a small income stream, with nothing to show for it.

The ‘Open Standards’ history unfolded because of competitive pressures. The GPL’ing of software will also benefit from competitive pressures (see, for example, the database market). But in a monopoly environment, there is no incentive to open anything at all.

(footnote: MVS has been known under several different names during its product history, including OS/390 and OpenEdition, and currently, as z/OS. Name changes typically coincide with major feature/functional improvements to the operating system.)

Proprietary Software in Niche Markets

Let us take as an example the market for web-server performance measurement tools. This market is currently dominated by fewer than a half-dozen vendors: Mercury Interactive, [get names of others]. There are relatively few customers: few companies have the kind of web-server infrastructure that is complex enough that performance needs to be analyzed. This fact, coupled with the fact that creating measurement/stress software is hard, means that the vendors must charge high prices for their product. As of this writing, prices run from $5K to $20K per seat. The use of web servers is expanding, and so with careful management, these companies could be quite profitable.

Let us now imagine the following scenario. A small but growing business is building a complex web infrastructure. It has reached the point where management decides that some performance analysis and tuning is called for. One person, possibly part-time, is assigned to the task. This person, possibly unfamiliar with the niche for such tools, begins scrounging. They may find a few free tools, and they may find one of the niche vendors. It’s a part-time job, and ‘not that big a deal’, and so the decision is made to use one of the free tools. The proprietary tools may be considered ‘overkill’, or maybe just too expensive to merit purchase. But such projects have a way of getting out of control. The user may add one tiny feature to the free tool, and fold it back to the tool maintainer. Management asks for a few more reports, and a few more features get added. Before long, the user develops a certain amount of loyalty to their tool. Even though, in retrospect, it would have been cheaper to buy the expensive proprietary tool, it is now too late. The free tool has advanced, and it has a loyal following. This process, repeated over and over, leads to a progressively more and more sophisticated free tool.
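As a concrete sketch of what the ‘free tool’ in this story might look like at its humblest, below is a toy HTTP load generator written against plain POSIX sockets. It times a burst of sequential GET requests against a server; the default host, port, and request count are arbitrary assumptions for illustration, and this is not a description of any existing tool or vendor product.

    /* loadgen.c -- a toy HTTP load generator, the kind of bare-bones free
     * tool described in the scenario above.  Defaults (localhost, port 80,
     * 100 requests) are arbitrary; this is an illustrative sketch. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
        const char *host = (argc > 1) ? argv[1] : "localhost";
        const char *port = (argc > 2) ? argv[2] : "80";
        int requests = (argc > 3) ? atoi(argv[3]) : 100;

        /* Resolve the target once, reuse the address for every request. */
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0) {
            fprintf(stderr, "cannot resolve %s:%s\n", host, port);
            return 1;
        }

        char request[512];
        snprintf(request, sizeof request,
                 "GET / HTTP/1.0\r\nHost: %s\r\n\r\n", host);

        int ok = 0;
        double start = now_sec();
        for (int i = 0; i < requests; i++) {
            int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, res->ai_addr, res->ai_addrlen) < 0 ||
                write(fd, request, strlen(request)) < 0) {
                close(fd);
                continue;
            }
            /* Drain the reply; only the timing matters for this toy. */
            char buf[4096];
            while (read(fd, buf, sizeof buf) > 0)
                ;
            close(fd);
            ok++;
        }
        double elapsed = now_sec() - start;

        printf("%d/%d requests succeeded in %.2f s (%.1f req/s)\n",
               ok, requests, elapsed, ok / (elapsed > 0 ? elapsed : 1e-9));
        freeaddrinfo(res);
        return 0;
    }

Crude as it is, exactly this kind of sixty-line starting point is what tends to sprout ‘a few more reports and a few more features’ once it lands inside a company.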

The point here is that at each stage of the game, more-or-less rational decisions were made based on corporate self-interest and profits, and yet a free software system resulted. This result is completely counter-intuitive if one believes that all software is developed only by companies in the software-product business. It makes no sense for one of the niche vendors to give away their product; nor is it likely that some startup will find the business case to develop and give away software to such a small niche. The absence of these latter two economic incentives does not deter the advance of free software. Free software enters indirectly.

There is a peripheral question that we should deal with: who is the lead maintainer of the free code? Presumably, the answer is that the heaviest user acts as the maintainer. It is not particularly costly or time-consuming to maintain a free project: services such as SourceForge [sourceforge] make it easy and cheap. The lead maintainer derives some advantage by getting an ever-improving product with little direct investment, and may even be conferred some marketing points in recognition of its services. Provided such a tool becomes popular enough, it may even be possible to sustain a small consulting business whose focus is maintaining this tool. At this point, the full free-software dynamic kicks in and drives advancement.

The ‘end game’ is also worth noting. The proprietary vendors face two futures. In one, they are driven out of business. In the other, they must continue to add features to their proprietary product to stay ahead, or to migrate to a related, unfilled niche. In this way, free software drives competition and innovation: those who sit still get clobbered.

[This section may benefit from a re-write with a more compelling example from another segment, or from additional factual details about the history and status of the current free web-performance tools.]

Are There Any Jobs for Software Developers?

Proprietary software will not disappear. Proprietary software will simply have to be better than open source software. There’s an old joke: ‘Microsoft makes the worst software in the world. That’s because anyone who makes software that is worse than Microsoft’s can’t stay in business.’ This joke works quite well when refurbished with ‘Open Source’ in place of ‘Microsoft’. A clever marketer would say that Microsoft has been ‘raising the bar’, has been ‘innovating’, and that Microsoft’s competitors have been failing to ‘innovate’. Of course, the very same can be said about Open Source/Free Software. The movement does advance, and if your software concern can’t keep up, you’re SOL. No doubt, many failing companies in the future will blame Free Software the way they used to blame Microsoft. Many software executives will come to passionately hate Free Software for this reason. But of course, this logic is flawed: the problem is not with innovation or the lack thereof, but with the fact that the mainstream commercial software marketplace is mature, and the cheapest product is ‘good enough’ for most users. Free software is slowly taking the crown of ‘cheapest’ away from Microsoft.

But we have two questions to answer: the first, who will hire programmers? the second, what business plans will succeed? The answer to the first question will not be changed much by the growing acceptance of free software. The vast majority of software programmers develop software that is not meant for direct sale. Since their employers don’t derive revenues from direct software sales, open source does not pose a threat to their continued employment.

There are also many programmers involved in the development and sale of shrink-wrapped, retail consumer software. Their jobs will be threatened. But this is nothing new; Microsoft has been trying to wipe out, and succeeding in wiping out, entire segments of the consumer retail shrink-wrap market. Compare the titles for sale today to those that were for sale a decade ago, and now ask: ‘how many of these are large, ongoing concerns, focused purely or mostly on software development?’ Their numbers, and their variety, have collapsed. [need data to support this]

The other major segment of direct software sales that employs programmers is the business software segment. These guys are not threatened, or shouldn’t be threatened; they just have to stay on their toes. Ten years ago, there was some good competition in the compiler market. But Microsoft won, Borland lost, and most of the rest of us use gcc. IBM is still trying to push its Tobey/Visual compiler, but it’s not destined for market dominance. The others have found niches or have failed outright.

Five years ago, there was a busy market for web application development tools and application servers. Since then, this market has more or less consolidated, with the dominant technologies being Microsoft’s ASP, Sun’s JSP, and the open source PHP. The ASP/JSP/PHP battle will rage for a long time, because the first two have powerful backers, while the third has raw economic forces on its side. In the meantime, Microsoft is trying to change the rules with its .net strategy, much as SGI tried to change the rules with Performer/Inventor after opening OpenGL.

Today, there is a battle raging between Microsoft SQL, Oracle, IBM’s DB2, and Postgres. Informix and Sybase were clobbered a while ago. The database programmers at Oracle should well feel a threat to their job security, but they currently see Microsoft, not Postgres, as the threat. Oracle is already reacting: it is focusing on non-database development and sales. DB2 has, and will likely hold, the high end of multi-terabyte databases. Postgres will probably gut the low-end and midrange business, leaving Microsoft to mount ever more vehement attacks on Free Software. It’s going to get uglier.

Can the world economy today sustain thousands of database-internals programmers? No, no more than it could sustain thousands of compiler developers ten years ago, or thousands of web-server developers five years ago. Free software was not to blame for the earlier market consolidations. In the future, it will serve as a convenient scapegoat, but even if open source didn’t exist, the ‘raising of the bar’ in software features would continue as it always has.

(Some readers of this section may be disappointed that I didn’t answer the questions directly: that company A with business plan B would be hiring programmers with skill-set C. I don’t answer this question directly because there are literally thousands of answers, and it doesn’t take much imagination to discover even a few of them. The point that I am trying to make is that Free Software poses no more of a threat to stable employment for programmers than any previous threat, be it Microsoft, or be it the hydraulic digger to steam-shovel engineers [The Innovator’s Dilemma, Clayton M. Christensen].)

Conclusions

There are some important corollaries to this claim. Although Free Software may become a large contributor to global economic output, equaling or exceeding the size of all proprietary software put together, pure-play open source companies, such as Red Hat, are unlikely to become as profitable or as big as Microsoft. Without other business models or revenue streams, Red Hat fundamentally cannot trap income based on licenses the way that Microsoft can, because it is not the exclusive owner of the intellectual property embodied in Linux. No pure open source company will be able to do this: the economic benefit of open source is distributed, not concentrated.

A second corollary is that open source will not kill Microsoft, although it will impact potential future revenues. The amount of damage is uncertain, but Microsoft is very strong, very shrewd, and involved in many ventures. The advent of the PC did not kill IBM mainframes, it restructured the flow of revenues, and did limit some upside potential. Microsoft is likely to have hundreds of billions of dollars in assets for decades to come. It just won’t be a monopoly player in some of the markets that it would like to be in.

These raw economic forces will be powerful engines of change. The adoption of Open and Free Software will grow by orders of magnitude, and the vitality of its developer community will increase and expand for decades if not longer into the future. Open Source and Free Software will become the predominant, central feature of the post-industrial world and will reach into all facets of life.

Notes, TBD

myth — free software is developed at university by kids freeloading on parents. In real life, programmers have to eat.

Who wrote much of the existing free software? Need to find some study that covers this.

explain differences between free and open source. Debate BSD vs. GPL. this difference is vital in economic and decision making terms.
