As programmers we’re continually accused of doing a sloppy job. There are countless programs in the wild, crashing, locking up and accidentally writing “I am a fish” a million times over someone’s mid-term essay. The effect? Something like this:
This damn computer and excel r fuckin my life up! Hatin life right now
— MissAlauren (and everyone else at one time or another)
It’s experiences like this that cause people to rant about Microsoft and curse the anonymous programmers who suddenly (yet inevitably) betrayed them. We all know this; it’s burned into our very souls by countless hours of tech support provided to family and friends. Time after time we see that programmers who do quick, sloppy work make other people suffer. And so we try, we try so damn hard not to be like that. We try to be the good programmer who checks every return value and handles every exception.
If we stopped at competent error handling and sufficient testing, all would be well. In truth, we actually go too far and, it has to be said, largely in the wrong direction.
A vast proportion of software at work today is horribly over-engineered for its task. And I’m not talking about the interfaces, about having too many controls or options for the users. These are, indeed, terrible sins but they are the visible ones. The worst of the overengineering goes on under the surface, in the code itself.
You’re Doing It Wrong
Have you ever seen someone using the strategy pattern when they should’ve used a 5 line switch statement? There are a million ways to turn something like this:
switch (operation) {
    case OP_ADD: return a + b;
    case OP_SUBTRACT: return a - b;
    case OP_MULTIPLY: return a * b;
    default: throw new UnknownOperationException(operation, a, b);
}
… into a hideous, malformed mutant beast like this one, which I haven’t inlined because it’s far too long.
The most insidious cause of overengineering is over-generalizing. We will over-generalize anything given half a chance. Writing code to work with a list of students? Well, we might want to work with teachers and the general public someday, better add a base People class and subclass Student from that. Or Person and then EducationPerson and then Student. Yes, that’s better, right?
Only, now we have three classes to maintain each with their own virtual methods and interfaces and probably split across three different files plus the one we were working in when a one-line dictionary would have been fine.
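In Python, the two approaches might look like this side by side (names invented to match the example):

```python
# The over-generalized version: a class tower before any real work happens.
class Person:
    def __init__(self, name):
        self.name = name

class EducationPerson(Person):
    pass

class Student(EducationPerson):
    def __init__(self, name, grade):
        super().__init__(name)
        self.grade = grade

# The simple version: the one-line dictionary that would have been fine.
student_grades = {"Alice": "A", "Bob": "B"}
```

Three classes, three files, zero extra functionality.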
Perhaps we do it because it’s relaxing to rattle off three classes worth of code without needing to pause and think. It feels productive. It looks solid, bulletproof, professional. We look back on it with a comforting little glow of self-satisfaction – we’re a good programmer, no messy hacks in our code.
Except, this doesn’t make us good programmers. Overengineering like this isn’t making anyone’s lives better; it’s just making our code longer, more difficult to read and work with and more likely to contain or develop bugs. We just made the world a slightly worse place. It lies somewhere between tossing an empty drinks bottle on the street and grand theft auto.
The extra effort caused by our overengineering carries a hefty opportunity cost:
- Less time spent refining the user experience
- Less time spent thinking about the meaningful implications of the feature we’re working on
- Less time available to look for bugs and – with harder-to-read code – more time spent debugging them
Yes, by overengineering the Student class you indirectly ruined MissAlauren’s day.
We have to stop championing each ridiculous feat of overengineering and call it what it is. It’s not ‘future-proof’, because we can’t see the future. It’s not robust, it’s hard to read. Applying a generic solution to a single case isn’t good programming, it’s criminal overengineering because like it or not somebody, somewhere will pay for it.
Don’t Worry, Be Happy
I suspect all the best programmers have already realized this, but they’re not shouting about it loudly enough for everyone else to hear. Paul Graham is completely right when he suggests that succinctness is valuable:
Use the length of the program as an approximation for how much work it is to write. Not the length in characters, of course, but the length in distinct syntactic elements – basically, the size of the parse tree. It may not be quite true that the shortest program is the least work to write, but it’s close enough… look at a program and ask, is there any way to write this that’s shorter?
— Paul Graham, The Hundred Year Language
He’s actually talking about language design here; indeed, in Succinctness is Power he’s careful to note that it’s clearly possible to write a program that’s too succinct. This is because, these days, Paul Graham is more a language designer than a working programmer. Otherwise he might have said:
If you’re about to take a hundred lines to write what you could in ten, stop and ask yourself this: what the fuck?
— Mark, Criminal Overengineering
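Graham’s yardstick (the size of the parse tree) is easy to play with in Python using the standard ast module; counting nodes gives a rough measure of ‘distinct syntactic elements’:

```python
import ast

def parse_tree_size(source):
    """Rough count of distinct syntactic elements: parse tree nodes."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

# Two ways of writing the same filter:
verbose = """
result = []
for x in items:
    if x > 0:
        result.append(x)
"""
succinct = "result = [x for x in items if x > 0]"
# The list comprehension wins on this measure, as you'd hope.
```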
When I feel tempted to over-generalize or over-engineer a bit of code, it’s often because of fear. Fear that someone will find a really good reason I shouldn’t have done it the easy way. Fear that I’ll have to rewrite the code again. Fear of finding myself on the wrong side of an argument about the merits of the visitor pattern. But fear does not naturally lead us to the most elegant solutions.
Next time you feel the compulsion to write a nice, general solution to a simple case, stop and ask yourself what’s stopping you just writing it the simple, specific, succinct way:
- Am I worried I’ll have to rewrite it?
- Am I worried someone will criticize it or that I’ll look bad?
- Am I worried that it’s not professional enough?
Are any of these true? Then relax. Don’t worry. You worry, you call me, I make you happy.
Just write the code the simple, specific way and then add a short comment, something like: Replace with the Strategy pattern if this gets any bigger.
This is the perfect solution. It’s a great reminder to you next time you come here about what you wanted to do. It shows other programmers on your team that you considered the ‘correct’ way to do it and have a good reason not to do it just yet. It’s very hard to argue with a comment like that, because you’re not arguing about the strategy pattern vs the switch statement, you’re arguing about whether to use the strategy pattern after 3 cases or after 4 cases – not a discussion that can reflect badly on you, in any case.
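In Python, the simple-plus-comment version might be nothing more than this (a sketch, with illustrative operation names, not code from the post):

```python
def calculate(operation, a, b):
    # Replace with the Strategy pattern if this gets any bigger.
    if operation == "add":
        return a + b
    if operation == "subtract":
        return a - b
    if operation == "multiply":
        return a * b
    raise ValueError(f"unknown operation: {operation}")
```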
A few months later you can go back and look at how many of your comments eventually turn into more complicated, over-engineered code. I’ll bet you it’s not very many. That’s how much time and effort you’ve saved, right there. That’s setting yourself free to pursue the solution and that’s making the world a slightly better place.
Note: Yield Thought has moved to http://yieldthought.com – check there for the latest posts!
Note: This post isn’t about the iPad. It’s about me and you, our bosses and most of all it’s about normal people. It just starts with a story about the iPad, because that’s the way it happened.
What did Yahoo’s bosses say when they saw Google’s homepage for the first time? Why are 37signals so militant about saying ‘no’ to extra features? What did the Apple engineers think when Jobs told them to make a phone with one button?
Last weekend I spent twenty minutes playing with an iPad on a stand in an airport. I opened Safari and read xkcd, Penny Arcade and Hacker News. I flicked through the pictures of some sunkissed holiday by the sea. I played a couple of not very good games. I wrote a short document. I watched a video. At the end of twenty minutes I wandered away feeling slightly uneasy, thinking:
Is that all?
As a programmer, I’m comforted by screens full of settings. When playing a new game the first thing I do is find the options and tweak the hell out of it before I’ve even played a single level. The iPad left me feeling somehow uncomfortable, as if I was missing some core element. Had I really seen all it could do?
That was when I saw it: in a handful of minutes on completely unfamiliar hardware and software (no, I don’t have an iPhone), with an unusual multitouch interface I’d just ‘done’ things without having to think about them, without having to learn anything, without having to struggle. The gap between wanting to do something and doing it was so short that, for twenty minutes, it ceased to exist.
Don’t worry, we’re almost at the end of the iPad bit.
I was asking myself what the iPad could do. The iPad wasn’t doing anything – it was letting me do what I wanted. It had been designed by people who loved me more than their product (as Gandhi says you should). Was that all? Yes, because playing around for twenty minutes was all I wanted to do.
The user interface should be like a soundtrack barely noticed by the user
— Steve Capps
Everything we create should aspire to this, should leave us – as programmers – wondering if that’s all and if we shouldn’t perhaps add a bit more. Scott Berkun (a genius and a craftsman) said all of this more than ten years ago and I’ve known about it for at least half that time, but it hasn’t really changed the way I write software because it’s too hard to just know when something’s simple enough.
The feeling of ‘is that all?’, however, the uncomfortable suspicion that I can’t really ship a product with just one button, that all the important companies have login screens – this feeling proves we are on the right track. It is an excellent guide. Our world is full of self-indulgent interfaces clamoring for our attention. Why should we keep making this worse? We have to be brutal with our interfaces. Strip everything out. Consider every single piece of text as being a waste of the user’s time reading it, every control an unnecessary, unpleasant intrusion.
The user’s attention is a limited resource and we should heavily optimize to minimize our impact upon it. We must always, always remember that nobody wants to use our software – they want to finish their work and go play outside.
It’s hard. It’s risky. It’s easy to defend a new dialog as full of buttons as the old one. Our colleagues and managers live in bizarro world, believing our software has value independent of the things it helps people to achieve. They don’t want the new startup screen to have just 10% of the controls of the old one.
That’s not progress! Progress means more! Deleting things isn’t doing work! It’s anti-work!
— A stupid person near you (or, possibly, you yourself)
I’ve felt this, even if I haven’t said it. There’s this massive tension between writing something to humbly serve people you’ve never met and may never meet, and writing something your boss and colleagues will approve of. Yet we have to try, because the way software has been written for the last twenty years is making people unhappy.
Our calling, our duty, is to write software that will make our colleagues, bosses and competitors scoff and say “Is that all?” while making the lives and work of real people simpler, easier and less stressful. Our customers will love us for it – we just need the courage to cut and hack and tear down everything that’s not necessary to get their work done and to put it out there for them to use.
Postscript: What am I doing about this? My startup, CodeChart, is making profiling very simple and very beautiful; the old generation of tools are so ridiculously overcomplicated that most people never use them. It’s in closed beta at the moment, but have a look at our getting started guide to see how it works and feel free to sign up for the beta if you’ve got some .NET code you want to look at. I know, I know, other languages – including my beloved python – are coming later!
Don’t stop listening until you’ve heard my personal message around 1:44 😉 In fact, it keeps on getting better!
If anyone makes a nice cut summarizing the software development sections (or just a loop ending at 1:16) send me a link!
Anyone know of any more songs secretly about programming and startups out there?
I start in the middle of a sentence and move both directions at once.
— John Coltrane
Newspaper reporters are taught to write fractal articles: summarize the entire article in the title. Elaborate a little in the first sentence, then fill out the most important details in the first paragraph. A reader should be able to stop at any point without having missed any important details. We should approach programming projects in the same way.
As a child – after some experimentation and a lot of thought – I decided the best way to eat cornflakes was as follows:
- Pour the cool milk over the golden roasted flakes
- Sprinkle the one allowed teaspoon of sugar over the top
- Start eating around the edges, saving the sugary middle section for one last big spoonful of joy at the end
I stand by that decision. In fact, I’ve noticed I do similar things in other areas of my life. I’m sure a psychologist would talk for hours on the subject. Luckily for you I’m not a psychologist, I’m a programmer. And it turns out that this is an awful way to work on software projects.
Has this ever happened to you? You wake up one day with a great new idea for applying bayesian filtering to twitter streams to filter out the pictures-of-Joel’s-new-puppy spam. You’re totally convinced it’s what the world needs. It’s the startup that’s finally going to help you break out of your day job maintaining PHP payroll software for supermarket shelf stockers. So what do you do? You do this:
- Fire up your IDE and start a new website project
- Whip up a login page and get the user account basics set up
- Decide OpenID’s really where it’s at these days and hit stackoverflow for a good OpenID provider plugin
- Run into problems getting it to accept Google accounts and spend half the night debugging it
Wait, what? How did this happen? Getting OpenID working isn’t fun. It’s almost the definition of not fun.
I didn’t want to do all this, I just wanted to make an awesome bayesian twitter filter, but somehow there’s all this stuff I have to get through first.
— Me (swear words redacted)
My hard disk is littered with projects that I started, got half way through setting up without ever really getting to the good bit, then abandoned. I suspect yours is, too.
The right way to start a bayesian twitter filter is to apply a bayesian filter to content from a twitter stream. I know. It looks like this:
- Google for some bayesian filter code
- Dump whatever’s in your twitter client logs to a file and write three lines of python to parse it into a form the bayesian filter can work with
- Train the filter and see what happens
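To make those three steps concrete, here’s a minimal sketch of the middle of the middle: a tiny naive-Bayes classifier inlined so the example stands alone (in practice you’d use whatever filter code your googling turned up):

```python
from collections import Counter
import math

def train(labelled_tweets):
    """labelled_tweets: list of (text, label) pairs,
    e.g. ("new puppy pics!", "spam")."""
    word_counts = {}           # label -> Counter of words
    label_counts = Counter()   # label -> number of tweets
    for text, label in labelled_tweets:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest naive-Bayes log score."""
    words = text.lower().split()
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        n = sum(counts.values())
        score = math.log(label_counts[label] / total)
        for w in words:
            # +1 (Laplace) smoothing so unseen words don't zero everything out
            score += math.log((counts[w] + 1) / (n + len(counts) + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Train it on whatever’s in your twitter client logs, see whether the output is even vaguely sensible, and only then worry about accounts and OpenID.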
Compared to the original approach it looks awesome, right? So what stops us approaching all projects like this? Well, there’s something beguiling about wanting to get the framework right from the start this time. It’s more comfortable starting with something we already know how to solve. Sometimes we have a clear vision of how it should end up in our heads and simply start to create that vision from the beginning through to the end.
Start in the middle
— Paul Graham (lightly paraphrased)
Lean startups and the Minimum Viable Product are all about starting in the middle. Paul Graham’s advice for startups can be summed up as ‘first solve the interesting part of the problem, then build the business around it’, but the process is also fractal – starting in the middle applies right down to the level of writing a new class, or a single function. First write some code that solves the problem even if it’s imperfect or partial, then expand it out with your favourite blend of accessors, inheritance and polymorphism (Note: don’t even bother with that bit unless you hate yourself).
I’ve seen four key benefits to starting in the middle:
Benefit 1: Ideas that turn out to be impossible or just plain bad are discovered early. This is classic lean startup advice: fail early.
Benefit 2: Spend most of your time solving interesting problems and not fine-tuning framework skills. Which would you rather get better at?
Benefit 3: Discover interesting things while your project is still young and flexible enough to adapt to them.
Benefit 4: Once you’ve solved a problem, you’re so motivated to use it that you finish up the surrounding areas in no time. You add extra users because you want to show it to your friends; you add keyboard shortcuts because you’re getting tired of using the mouse all the time. This is programming the right way around – first the need, the desire, and then the solution.
I’ve recently seen all of these benefits while working on my own side-project-turned-startup. Ages ago I had this great little idea for making profiling so simple that it just told you which line of code was slowest in a nice little report and I whipped up some C# code to do just that. The results weren’t making much sense, so I tried plotting the data to a canvas to see what was going on. Pretty soon I was looking at a poor man’s sketch of this:
Instantly I knew I’d been working on the wrong thing; seeing the execution of a program laid out before me in all its glory was so rich and so interesting; something I had no hope of summarizing in a small table of figures. I just had to explore it – I added function names, colour, a breakdown of time spent in each and over time it grew into such a valuable part of my toolkit that I’ve started sharing it with the rest of the world.
Would I have changed direction if I had already created a website, a login system, a messaging layer, a database schema all geared around the original idea? No. I’d have reached the interesting part of the problem with a half-finished framework and close to zero energy and enthusiasm. The discouragement at seeing the futility of my cherished profiling-via-pdf idea would’ve seen me put the whole thing back on the shelf and go play Altitude to forget about it.
So start in the middle, start with the interesting, challenging, core of the problem you’re trying to solve. Cut down everything else to ridiculous minima and see what happens; you may create something fascinating.
Different environments give rise to different programming styles. Over the last few years there’s been a massive trend for software to move from:
Work for a whole year then release a fixed binary, then write for another year.
to the more dynamic, web 2.0 ideal of:
Whip up a web application and upload each feature as soon as it’s done.
This is a much more efficient way to do business, but it’s a very different kind of programming environment and benefits from a very different programming style – not the one we’ve all picked up by copying API conventions and old-fashioned corporate practices.
If you’re able to make your software available to customers instantly, there is one optimal strategy: code for flexibility.
The flexibility of your code is defined by the ease with which you can modify it to fulfill some purpose you hadn’t envisaged at the time you wrote it.
This is important, because when you give your customers a rapidly-iterating product you’re going to find your initial guess at what they wanted was completely wrong. In fact, all your guesses are probably wrong. You’re going to spend years learning from your customers and adapting your software in all sorts of unexpected ways to fit their needs as precisely as possible.
For a long time I confused generality with flexibility; I was always worrying about how I might want to use a class in the future and trying to abstract out all the common use cases right from the start. As it turns out, this is the opposite of flexibility; it builds rigidity into the project. The moment you realise that actually it makes much more sense to flip your design so that users vote for *each other’s* questions, a little part of you dies inside, because you now have five levels of abstraction and four different database tables to rewrite.
In fact, flexible code is specific code. We want to write code that expresses, as succinctly as possible, a solution to a problem. It’s quicker to read and understand code with fewer syntactic elements, such as variables, functions and classes. It’s also quicker to repurpose.
We also build flexibility – or rigidity – into the large-scale structure of our programs. I’ve often fallen in love with the idea of encapsulation for its own sake; I thought that each part of my program should be as well-hidden from the rest as possible, with only one or two well-specified interfaces connecting things. This is a very tempting dream that results in a kind of tree-like structure of classes. This is extremely rigid, because by design it’s difficult to get access to classes in another branch of the tree.
The best way to build for flexibility is drawn from the Lisp world and championed by Paul Graham: when working on a problem, write code that builds up into a language for describing and reasoning about the problem succinctly.
You can magnify the effect of a powerful language by using a style called bottom-up programming, where you write programs in multiple layers, the lower ones acting as programming languages for those above.
This way the solution ends up being a clear, readable algorithm written in this language – the language of the problem domain. In non-lisp languages you end up with a set of functions and classes that make it easy to reason about the problem, a little like the standard string and mail classes make it easy to work with strings and email.
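As a rough Python sketch, with an invented library-loans domain: the lower layer defines the vocabulary, and the top-level function reads almost like the problem statement:

```python
# Lower layer: tiny functions that act as a language for the domain.
# (Days are plain integers here to keep the sketch self-contained.)
def overdue(loan, today):
    """A loan is overdue once today is past its due day."""
    return today > loan["due"]

def days_late(loan, today):
    return max(0, today - loan["due"])

def fine_for(loan, today, rate_per_day=0.50):
    return days_late(loan, today) * rate_per_day

# Upper layer: the actual answer, written in that language.
def total_fines(loans, today):
    return sum(fine_for(loan, today) for loan in loans if overdue(loan, today))
```

When the requirements shift, you mostly rearrange the top layer; the vocabulary underneath survives.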
Building up code for reasoning about the problem domain is vital for flexibility, because although our solution to the problem might change drastically as we get extra insights from our customers, the domain of the problem will probably only change incrementally as we expand into new areas. Adopting this flatter, domain-language approach to program design almost always increases flexibility.
Another aspect of flexibility is robustness. A function or class that makes assumptions about the circumstances it is called in is going to break as soon as we repurpose it in some other part of the code; sooner or later I always forget that the caller needs to check the file exists, or that the user isn’t null and already has a validated address. There are two schools of thought on how to deal with this – contract programming (essentially visible, machine-verified preconditions) and defensive programming (make no assumptions, handle all the exceptions you can locally and try hard not to destroy any persistent state if things go wrong). It doesn’t really matter which you use, but use one of them.
Many of the extreme programming principles help us write flexible code – the c2 team recognized that:
Change is the only constant, so optimize for change.
They looked at this from a project management perspective, but code guidelines like Don’t Repeat Yourself and You Ain’t Gonna Need It are well-recognized as being fundamental to producing flexible code. In fact, these two oft-quoted principles are worth looking at more closely in terms of how they create flexibility.
Don’t Repeat Yourself: When modifying a bit of code it doesn’t help if there are three or four undocumented places it also needs to be changed; I never remember them all. A perfectly good way to deal with this is to add a comment in each block mentioning the link rather than to refactor right away. Over-zealous refactoring tends to merge lots of bits of code together that look kind of similar but ultimately end up going in different directions. At this point very few people take the logical step of splitting a class or subsystem back into two more specialized units; instead it’s over-generalized again and again until it’s completely unmaintainable.
Refactoring should always be Just In Time, because the later you leave it the better you understand the problem; refactor too soon and you’ll combine things in a way that restricts you later, building rigidity into the system.
You Ain’t Gonna Need It: This doesn’t just apply to extra functionality. It applies to all elements of your code, from accessors (a waste of time in most cases) to overall class structure. You’re not going to need to handle N kinds of book loan, you’re going to handle at most two, so don’t write some complicated code to handle the general case. You’re not going to need three classes, one interface and two event types for your “please wait” animation because there’s only ever going to be one and it’s always going to be while printing. Just write a function with a callback and release – the simpler it is, the quicker we can write it today and the more easily we can modify it tomorrow.
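The function-with-a-callback version really is that small; a Python sketch (names invented for illustration):

```python
def with_please_wait(do_work, on_done):
    """Show the one animation we need, do the work, call back when done -
    no interfaces, no event types, no class hierarchy."""
    print("Please wait...")   # stand-in for the only animation there will ever be
    on_done(do_work())
```

Tomorrow, if a second kind of wait really appears, this is trivially easy to change. That’s the point.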
There is a tension here. On the one hand, we want to write our software by constructing recombinable blocks that represent the problem domain, yet on the other we want to write the simplest thing possible – and writing reusable blocks of code isn’t usually the simplest thing possible.
This tension is resolved when we stop looking at a program as a snapshot of its state at any one time and instead look at it as an evolving codebase with many possible futures. So we write the simplest thing possible, yet while doing so we keep an eye on its potential for being refactored into nice, reusable blocks and avoid making poor choices that will make that process difficult. The potential for refactoring is just as valuable as the refactoring itself, but by deferring the refactoring we keep our options open.
The ability to see the code you’re writing today and its possible evolution through time is something you can focus on learning, but it also grows with experience. It’s one of the things that makes a great hacker’s code subtly better than a newbie’s code – both start writing simple code that addresses the problem directly, but the newbie makes all sorts of small structural mistakes that get in the way of the code’s growth, whereas the hacker knows when it’s time to turn this set of functions into a class, or to abstract out this behaviour, and has already prepared the way for doing so. The effect is to make programming look effortless again, naturally flowing from the simple into the complex.
I’m not a great hacker who does this as naturally as breathing. I have tasted it, have seen glimpses of it in my own work and in other people’s, but I still make mistakes; I mistime my refactoring – sometimes too early, sometimes too late. I still bind objects to each other too tightly here, or too loosely there. Despite my failings, just aiming at this goal makes me far more productive than I’ve ever been before. It’s not an all-or-nothing premise. The closer you come, the smoother and lower-friction your development will be.
We can gather all these heuristics together into a concise manifesto:
A Manifesto for Flexibility
- Write new code quickly, as if you’re holding your breath. Cut corners and get it in front of users as quickly as possible.
- Keep your code specific, with a clear purpose. Don’t over-generalize and don’t refactor too early – it should be as simple as possible, but not simpler.
- Stay aware that all code is thrown away or changed dramatically. Hold the image of the refactoring you’d like to do in your mind and avoid doing things that will make it more difficult – this is slightly better than doing the refactoring now.
- Recognize the point at which simple code becomes messy code and refactor just enough to keep the message clear. Just In Time refactoring is not the same as no refactoring.
- Build up a language for reasoning about your domain and express your application in that language. Avoid building up a large, rigid hierarchy of over-encapsulated classes that embody your particular solution to the problem.
Postscript: You should do this in a startup and you should do it in personal projects, but you shouldn’t do it everywhere. If, say, you’re writing the Java or .NET API then none of this applies to you because:
- Millions of people will abuse your API in every way possible the very second it’s released.
- Every time you change something a howling mob will descend upon your office, making it difficult to get a good parking spot.
When the Wolfram Alpha iPhone app was released for $50, we all laughed. iPhone apps cost $2, not $50! When the music industry complains about piracy we’re unsympathetic – they should stop trying to charge $20 for a handful of MP3s! $10 for an eBook? That’s insane! It’s just bits!
There’s a kind of background smugness in the software industry about digital distribution dropping content prices to a commodity level, but although we deny the truth of it, we’re in exactly the same situation. The days of selling software to consumers for $50 a time are numbered, and that number can be stored in 11 bits. The bright side is, everyone will be better off. Well, except for any poor fools still trying to sell software for $50, that is.
What’s driving the change? Several things. Digital content stores are springing up, with friction-free single-click purchasing and delivery of software. The iPhone App Store. Steam. WiiWare. XBox Arcade. In every case, a strong indie scene is springing up, and typical prices have dropped from $50 for consumer software or games down to $15, $5, even 99c. Are developers cutting their own throats? Will our jobs all be outsourced to the lowest bidder? Will we become minimum-wage factory workers, churning out dirt-cheap software in a 3rd world sweatshop?
No, we’re all going to become rich. Commoditization may drive the price of a loaf of bread down to the cost of making a loaf of bread, but how much does it cost to make a copy of a piece of software? Well, almost nothing. And yet people will pay $2 for it, just because it’s less buggy than the free version, because it makes their lives better. It doesn’t have to be very good to make their lives $2 worth of better, either.
Last year Left 4 Dead’s developers posted some fascinating sales figures about a promotion on Steam. The cheaper their game got, the more money they made. Not just more sales, more profit. A lot more profit. Jeff Atwood discussed this in some detail in a post which is well worth reading later.
Making more money by selling software for less hasn’t always been possible, which is perhaps why nobody’s noticed it yet. With a traditional software sale there’s the hidden cost to the customer of going to the effort of finding it on the web, evaluating it, creating an account, entering your credit card details and your address, deciding whether you really, really trust this website and so on. Let’s say this was $20 worth of effort, although the real figure varies wildly between customers. Anyone trying to sell software this way for just $5 instead of $40 found it wasn’t selling 8 times as well. This is because it wasn’t 8 times cheaper to the customer – it cost them $5 + $20 in time and effort instead of $40 + $20; an overall $25 instead of $60. Sure, this is still a saving of just over 50%, but the software company is now taking $5 per copy instead of $40 on maybe twice as many sales. The numbers just don’t work.
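That arithmetic is worth sanity-checking (the $20 friction figure is the post’s own assumption):

```python
FRICTION = 20  # assumed hidden cost, in dollars, of a traditional purchase

def total_cost(price, friction=FRICTION):
    """What the purchase really costs the customer, all-in."""
    return price + friction

old = total_cost(40)    # $60 all-in
new = total_cost(5)     # $25 all-in
saving = 1 - new / old  # about 58%, nowhere near the 87.5% price cut

# Remove the friction and the customer finally sees the full cut:
frictionless_saving = 1 - total_cost(5, friction=0) / total_cost(40, friction=0)
```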
Friction-free payment and zero-cost distribution take this $20 of pain away, meaning at $5 your app doesn’t just appeal to people prepared to spend a total of $25 in time and money, but to the vastly larger group who will pay $5 for a latte if they can have it right there and then without filling in any paperwork. Soon, someone will make this work for web apps too. OpenID sign-in is a good start, but we need OpenID payment as soon as possible. I’m surprised you can’t sign in and pay with your Amazon account for more web services – after all, everyone has one of those already.
On the developer’s side of the fence, a standardized hardware platform decreases the support burden per sale dramatically. If you can test your software on half a dozen iPhone models and reasonably expect it to work on 99.9% of the iPhones out there, you drop your per-sale support costs to almost nothing. This is also necessary. In the web-app space these costs have always been low, and even direct per-user costs such as compute time, data storage and bandwidth are dropping all the time.
Given the right environment a software company can make just as much money selling 1,000,000 units for $2 each as 10,000 for $200 each, only now 990,000 extra people are benefitting from the software, and the core audience of 10,000 have all saved $198 to spend on other things. Where will that money go? It won’t disappear. It’ll go elsewhere in the economy. A lot of it will be spent on more software – because there are a ton of tiny problems that would be worth solving for $2 that aren’t worth $200. Our society will start using and buying software ubiquitously. By 2020 almost all consumer software will sell for less than $10 and there’ll be more demand for software developers than ever before.