yield thought

it's not as hard as you think

Archive for the ‘Programming’ Category

Criminal Overengineering

As programmers we’re continually accused of doing a sloppy job. There are countless programs in the wild, crashing, locking up and accidentally writing “I am a fish” a million times over someone’s mid-term essay. The effect? Something like this:

This damn computer and excel r fuckin my life up! Hatin life right now
— MissAlauren (and everyone else at one time or another)

It’s experiences like this that cause people to rant about Microsoft and curse the anonymous programmers who suddenly (yet inevitably) betrayed them. We all know this; it’s burned into our very souls by countless hours of tech support provided to family and friends. Time after time we see that programmers who do quick, sloppy work make other people suffer. And so we try, we try so damn hard not to be like that. We try to be the good programmer who checks every return value and handles every exception.

If we stopped at competent error handling and sufficient testing, all would be well. In truth, we actually go too far and, it has to be said, largely in the wrong direction.

A vast proportion of software at work today is horribly over-engineered for its task. And I’m not talking about the interfaces, about having too many controls or options for the users. These are, indeed, terrible sins but they are the visible ones. The worst of the overengineering goes on under the surface, in the code itself.

You’re Doing It Wrong

Have you ever seen someone using the strategy pattern when they should’ve used a 5 line switch statement? There are a million ways to turn something like this:

switch (operation)
{
    case OP_ADD:      return a + b;
    case OP_SUBTRACT: return a - b;
    case OP_MULTIPLY: return a * b;
    default:          throw new UnknownOperationException(operation, a, b);
}

… into a hideous, malformed mutant beast like this one, which I haven’t inlined because it’s far too long.

The most insidious cause of overengineering is over-generalizing. We will over-generalize anything given half a chance. Writing code to work with a list of students? Well, we might want to work with teachers and the general public someday, so we’d better add a base People class and subclass Student from that. Or Person, and then EducationPerson, and then Student. Yes, that’s better, right?

Only now we have three classes to maintain, each with its own virtual methods and interfaces, probably split across three different files plus the one we were working in, when a one-line dictionary would have been fine.
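To make the contrast concrete, here’s the kind of thing the simple version could look like. This is only a sketch in Python (the grade-averaging task is invented for the example), but the point stands: the entire ‘data model’ is the one-line dictionary.

# All we actually need: a mapping from student names to their grades.
# No Person, no EducationPerson, no interfaces spread across three files.
student_grades = {"Ada": [92, 88, 95], "Grace": [85, 91, 79]}

def average_grade(name):
    grades = student_grades[name]
    return sum(grades) / len(grades)

print(average_grade("Ada"))  # 91.67, give or take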

Perhaps we do it because it’s relaxing to rattle off three classes’ worth of code without needing to pause and think. It feels productive. It looks solid, bulletproof, professional. We look back on it with a comforting little glow of self-satisfaction – we’re a good programmer, no messy hacks in our code.

Except this doesn’t make us good programmers. Overengineering like this isn’t making anyone’s lives better; it’s just making our code longer, more difficult to read and work with, and more likely to contain or develop bugs. We just made the world a slightly worse place. It lies somewhere between tossing an empty drinks bottle on the street and grand theft auto.

The extra effort caused by our overengineering carries a hefty opportunity cost:

  1. Less time spent refining the user experience
  2. Less time spent thinking about the meaningful implications of the feature we’re working on
  3. Less time available to look for bugs and – with harder-to-read code – more time spent debugging them

Yes, by overengineering the Student class you indirectly ruined MissAlauren’s day.

We have to stop championing each ridiculous feat of overengineering and call it what it is. It’s not ‘future-proof’, because we can’t see the future. It’s not robust, it’s just hard to read. Applying a generic solution to a single case isn’t good programming, it’s criminal overengineering, because, like it or not, somebody, somewhere will pay for it.

Don’t Worry, Be Happy

I suspect all the best programmers have already realized this, but they’re not shouting about it loudly enough for everyone else to hear. Paul Graham is completely right when he suggests that succinctness is valuable:

Use the length of the program as an approximation for how much work it is to write. Not the length in characters, of course, but the length in distinct syntactic elements – basically, the size of the parse tree. It may not be quite true that the shortest program is the least work to write, but it’s close enough… look at a program and ask, is there any way to write this that’s shorter?
— Paul Graham, The Hundred Year Language

He’s actually talking about language design here; indeed, in Succinctness is Power he’s careful to note that it’s clearly possible to write a program that’s too succinct. This is because, these days, Paul Graham is more a language designer than a working programmer. Otherwise he might have said:

If you’re about to take a hundred lines to write what you could in ten, stop and ask yourself this: what the fuck?
— Mark, Criminal Overengineering

When I feel tempted to over-generalize or over-engineer a bit of code, it’s often because of fear. Fear that someone will find a really good reason I shouldn’t have done it the easy way. Fear that I’ll have to rewrite the code again. Fear of finding myself on the wrong side of an argument about the merits of the visitor pattern. But fear does not naturally lead us to the most elegant solutions.

Next time you feel the compulsion to write a nice, general solution to a simple case, stop and ask yourself what’s stopping you just writing it the simple, specific, succinct way:

  1. Am I worried I’ll have to rewrite it?
  2. Am I worried someone will criticize it or that I’ll look bad?
  3. Am I worried that it’s not professional enough?

Are any of these true? Then relax. Don’t worry. You worry, you call me, I make you happy.

Just write the code the simple, specific way and then add a short comment, something like: Replace with the Strategy pattern if this gets any bigger.
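In the calculator example from earlier, that might look something like this (sketched in Python purely for illustration; the language doesn’t matter):

def apply_operation(operation, a, b):
    # Replace with the Strategy pattern if this gets any bigger.
    if operation == "add":
        return a + b
    if operation == "subtract":
        return a - b
    if operation == "multiply":
        return a * b
    raise ValueError("unknown operation: %r" % operation)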

This is the perfect solution. It’s a great reminder, the next time you come back here, of what you wanted to do. It shows other programmers on your team that you considered the ‘correct’ way to do it and have a good reason not to do it just yet. It’s very hard to argue with a comment like that, because you’re not arguing about the strategy pattern vs the switch statement; you’re arguing about whether to move to the strategy pattern after three cases or after four – not a discussion that can reflect badly on you, in any case.

A few months later you can go back and look at how many of your comments eventually turned into more complicated, engineered code. I’ll bet you it’s not very many. That’s how much time and effort you’ve saved, right there. That’s setting yourself free to pursue the solution and that’s making the world a slightly better place.

Note: Yield Thought has moved to http://yieldthought.com – check there for the latest posts!


Written by coderoom

June 23, 2010 at 8:15 am

Posted in Programming


Is That All?


Note: This post isn’t about the iPad. It’s about me and you, our bosses and most of all it’s about normal people. It just starts with a story about the iPad, because that’s the way it happened.

What did Yahoo’s bosses say when they saw Google’s homepage for the first time? Why are 37signals so militant about saying ‘no’ to extra features? What did the Apple engineers think when Jobs told them to make a phone with one button?

Last weekend I spent twenty minutes playing with an iPad on a stand in an airport. I opened Safari and read xkcd, Penny Arcade and Hacker News. I flicked through the pictures of some sunkissed holiday by the sea. I played a couple of not very good games. I wrote a short document. I watched a video. At the end of twenty minutes I wandered away feeling slightly uneasy, thinking:

Is that all?
— Me

As a programmer, I’m comforted by screens full of settings. When playing a new game the first thing I do is find the options and tweak the hell out of it before I’ve even played a single level. The iPad left me feeling somehow uncomfortable, as if I was missing some core element. Had I really seen all it could do?

That was when I saw it: in a handful of minutes on completely unfamiliar hardware and software (no, I don’t have an iPhone), with an unusual multitouch interface I’d just ‘done’ things without having to think about them, without having to learn anything, without having to struggle. The gap between wanting to do something and doing it was so short that, for twenty minutes, it ceased to exist.

Don’t worry, we’re almost at the end of the iPad bit.

I was asking myself what the iPad could do. The iPad wasn’t doing anything – it was letting me do what I wanted. It had been designed by people who loved me more than their product (as Gandhi says you should). Was that all? Yes, because playing around for twenty minutes was all I wanted to do.

The user interface should be like a soundtrack barely noticed by the user
— Steve Capps

Everything we create should aspire to this, should leave us – as programmers – wondering if that’s all and if we shouldn’t perhaps add a bit more. Scott Berkun (a genius and a craftsman) said all of this more than ten years ago and I’ve known about it for at least half that time, but it hasn’t really changed the way I write software because it’s too hard to just know when something’s simple enough.

The feeling of ‘is that all?’, however, the uncomfortable suspicion that I can’t really ship a product with just one button, that all the important companies have login screens – this feeling proves we are on the right track. It is an excellent guide. Our world is full of self-indulgent interfaces clamoring for our attention. Why should we keep making this worse? We have to be brutal with our interfaces. Strip everything out. Treat every single piece of text as a waste of the user’s time to read, every control as an unnecessary, unpleasant intrusion.

The user’s attention is a limited resource and we should heavily optimize to minimize our impact upon it. We must always, always remember that nobody wants to use our software – they want to finish their work and go play outside.

It’s hard. It’s risky. It’s easy to defend a new dialog as full of buttons as the old one. Our colleagues and managers live in bizarro world, believing our software has value independent of the things it helps people to achieve. They don’t want the new startup screen to have just 10% of the controls of the old one.

That’s not progress! Progress means more! Deleting things isn’t doing work! It’s anti-work!
— A stupid person near you (or, possibly, you yourself)

I’ve felt this, even if I haven’t said it. There’s this massive tension between writing something to humbly serve people you’ve never met and may never meet, and writing something your boss and colleagues will approve of. Yet we have to try, because the way software has been written for the last twenty years is making people unhappy.

Our calling, our duty, is to write software that will make our colleagues, bosses and competitors scoff and say “Is that all?” while making the lives and work of real people simpler, easier and less stressful. Our customers will love us for it – we just need the courage to cut and hack and tear down everything that’s not necessary to get their work done, and to put it out there for them to use.

Postscript: What am I doing about this? My startup, CodeChart, is making profiling very simple and very beautiful; the old generation of tools are so ridiculously overcomplicated that most people never use them. It’s in closed beta at the moment, but have a look at our getting started guide to see how it works and feel free to sign up for the beta if you’ve got some .NET code you want to look at. I know, I know, other languages – including my beloved python – are coming later!


Written by coderoom

June 6, 2010 at 8:46 pm

Posted in Programming


A Song Is Worth 1093 Words


Natural programmer. Ended up a rock star. Pity.

If someone had sent me this earlier in the week I could’ve saved us all 1093 words. Much love to Craig Lyons and some random youtuber for the developer anthem of the week.

Don’t stop listening until you’ve heard my personal message around 1:44 😉 In fact, it keeps on getting better!

If anyone makes a nice cut summarizing the software development sections (or just a loop ending at 1:16) send me a link!

Anyone know of any more songs secretly about programming and startups out there?

Written by coderoom

May 20, 2010 at 8:51 am

Start In The Middle


I start in the middle of a sentence and move both directions at once.
— John Coltrane

Newspaper reporters are taught to write fractal articles: summarize the entire article in the title. Elaborate a little in the first sentence, then fill out the most important details in the first paragraph. A reader should be able to stop at any point without having missed any important details. We should approach programming projects in the same way.

As a child – after some experimentation and a lot of thought – I decided the best way to eat cornflakes was as follows:

Cornflakes, by Mike Haufe

  1. Pour the cool milk over the golden roasted flakes
  2. Sprinkle the one allowed teaspoon of sugar over the top
  3. Start eating around the edges, saving the sugary middle section for one last big spoonful of joy at the end

I stand by that decision. In fact, I’ve noticed I do similar things in other areas of my life. I’m sure a psychologist would talk for hours on the subject. Luckily for you I’m not a psychologist, I’m a programmer. And it turns out that this is an awful way to work on software projects.

Has this ever happened to you? You wake up one day with a great new idea for applying Bayesian filtering to twitter streams to filter out the pictures-of-Joel’s-new-puppy spam. You’re totally convinced it’s what the world needs. It’s the startup that’s finally going to help you break out of your day job maintaining PHP payroll software for supermarket shelf stockers. So what do you do? You do this:

  1. Fire up your IDE and start a new website project
  2. Whip up a login page and get the user account basics set up
  3. Decide OpenID’s really where it’s at these days and hit stackoverflow for a good OpenID provider plugin
  4. Run into problems getting it to accept Google accounts and spend half the night debugging it

Wait, what? How did this happen? Getting OpenID working isn’t fun. It’s almost the definition of not fun.

I didn’t want to do all this, I just wanted to make an awesome Bayesian twitter filter, but somehow there’s all this stuff I have to get through first.
— Me (swear words redacted)

My hard disk is littered with projects that I started, got half way through setting up without ever really getting to the good bit, then abandoned. I suspect yours is, too.

The right way to start a Bayesian twitter filter is to apply a Bayesian filter to content from a twitter stream. I know. It looks like this:

  1. Google for some Bayesian filter code
  2. Dump whatever’s in your twitter client logs to a file and write three lines of python to parse it into a form the Bayesian filter can work with
  3. Train the filter and see what happens (a rough sketch of these steps follows below)
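For the sake of concreteness, here’s roughly what those three steps could look like. This is only a sketch: the naive Bayes classifier is hand-rolled rather than borrowed, and the tab-separated file of hand-labelled tweets ("tweets_labelled.txt") is invented for the example.

from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train(labelled_tweets):
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # how many tweets carry each label
    for label, text in labelled_tweets:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocabulary = len({w for counts in word_counts.values() for w in counts})
    total_tweets = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(label_counts[label] / total_tweets)
        for word in tokenize(text):
            # Laplace smoothing keeps unseen words from zeroing the probability
            score += math.log((word_counts[label][word] + 1) / (total_words + vocabulary))
        scores[label] = score
    return max(scores, key=scores.get)

# Step 2: parse the log dump into (label, text) pairs
labelled = []
with open("tweets_labelled.txt") as f:   # hypothetical hand-labelled dump
    for line in f:
        label, _, text = line.partition("\t")
        labelled.append((label.strip(), text.strip()))

# Step 3: train the filter and see what happens
word_counts, label_counts = train(labelled)
print(classify("look at this adorable puppy photo!", word_counts, label_counts))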

Compared to the original approach it looks awesome, right? So what stops us approaching all projects like this? Well, there’s something beguiling about wanting to get the framework right from the start this time. It’s more comfortable starting with something we already know how to solve. Sometimes we have a clear vision of how it should end up in our heads and simply start to create that vision from the beginning through to the end.

Start in the middle
— Paul Graham (lightly paraphrased)

Lean startups and the Minimum Viable Product are all about starting in the middle. Paul Graham’s advice for startups can be summed up as ‘first solve the interesting part of the problem, then build the business around it’, but the process is also fractal – starting in the middle applies right down to the level of writing a new class, or a single function. First write some code that solves the problem even if it’s imperfect or partial, then expand it out with your favourite blend of accessors, inheritance and polymorphism (Note: don’t even bother with that bit unless you hate yourself).

I’ve seen four key benefits to starting in the middle:

Benefit 1: Ideas that turn out to be impossible or just plain bad are discovered early. This is classic lean startup advice: fail early.
Benefit 2: Spend most of your time solving interesting problems, not fine-tuning framework skills. Which would you rather get better at?
Benefit 3: Discover interesting things while your project is still young and flexible enough to adapt to them.
Benefit 4: Once you’ve solved a problem, you’re so motivated to use it that you finish up the surrounding areas in no time. You add extra users because you want to show it to your friends; you add keyboard shortcuts because you’re getting tired of using the mouse all the time. This is programming the right way around – first the need, the desire, and then the solution.

I’ve recently seen all of these benefits while working on my own side-project-turned-startup. Ages ago I had this great little idea for making profiling so simple that it just told you which line of code was slowest in a nice little report, and I whipped up some C# code to do just that. The results weren’t making much sense, so I tried plotting the data to a canvas to see what was going on. Pretty soon I was looking at a poor man’s sketch of this:

Visualizing a program's execution

Instantly I knew I’d been working on the wrong thing; seeing the execution of a program laid out before me in all its glory was so rich and so interesting, something I had no hope of summarizing in a small table of figures. I just had to explore it – I added function names, colour, a breakdown of time spent in each, and over time it grew into such a valuable part of my toolkit that I’ve started sharing it with the rest of the world.
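I can’t reproduce that first canvas here, but as a toy illustration of the general idea (laying calls out along a time axis instead of squashing them into a table), something like the following does the trick. The sample data is made up; real numbers would come from the instrumented program.

import matplotlib.pyplot as plt

# Invented (function, start, end) samples standing in for real profiler output
samples = [
    ("main",          0.0, 10.0),
    ("load_data",     0.5,  3.2),
    ("parse",         1.0,  2.8),
    ("analyse",       3.5,  9.0),
    ("render_report", 9.1,  9.9),
]

fig, ax = plt.subplots()
for row, (name, start, end) in enumerate(samples):
    ax.barh(row, end - start, left=start)    # one bar per call, positioned in time
    ax.text(start, row, name, va="bottom", fontsize=8)

ax.set_xlabel("time (s)")
ax.set_yticks([])                            # the labels on the bars are enough
plt.show()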

Would I have changed direction if I had already created a website, a login system, a messaging layer, a database schema all geared around the original idea? No. I’d have reached the interesting part of the problem with a half-finished framework and close to zero energy and enthusiasm. The discouragement at seeing the futility of my cherished profiling-via-pdf idea would’ve seen me put the whole thing back on the shelf and go play Altitude to forget about it.

So start in the middle, start with the interesting, challenging, core of the problem you’re trying to solve. Cut down everything else to ridiculous minima and see what happens; you may create something fascinating.


Written by coderoom

May 18, 2010 at 9:26 am

Chicken Little and 3.3.1’s Great Big Loophole


I thought someone else would say this, but either they haven’t or they didn’t say it loud enough and now I can’t take the waiting any more, so here goes:

Chicken Little

Oh noes, my pythons iz banned!

On April 8th, Apple added some onerous conditions to section 3.3.1 of their iPhone Developer Agreement, explicitly prohibiting interpreters, translation layers and cross-platform toolkits from the App Store. It set off a wave of discussion that still echoes around to this day, and it pretty much killed Flash dead.

Much as I hate Flash, that’s not what I want to talk about. I want to talk about the reaction from most of the programming community:

The sky is falling, the sky is falling!
— Pretty much everyone

Some bloggers even complained that kids wouldn’t be able to grow up learning how to program in Apple’s Brave New World. What?

You can write in any language you want so long as it compiles to Javascript and runs in the browser, or runs on a server somewhere online
— The oddly-overlooked truth

Local applications are already dead. Whether they’re on the desktop or on the phone, their days are numbered. The resurgence in phone apps for the iPhone / iPad is a temporary blip. The future is in the cloud, in the browser and on servers.

Where will kids learn to program in Apple’s new world? On programming sites, interpreting their code in the browser, pulling in web services the way you and I learned to pull in local APIs. You don’t like JavaScript? Don’t worry – you have options and they’re only going to keep getting better. Suddenly Bespin doesn’t look so dumb any more, does it? Mix in Github and free online hosting services like Google App Engine and you can see the parts are already assembling.

In fact, with 3.3.1 Apple has shot itself in the foot by ensuring that all the best developers are going to work extra hard to get their applications running in the browser; a bit of an own goal for iAd and a gift to Google – and the rest of us. After all, web apps are fundamentally easier to develop and support.

So here’s to Apple’s 3.3.1 clause and all its consequences: Thanks, Steve!

Written by coderoom

May 7, 2010 at 7:45 am

Posted in Business, Programming


TDD without the T


While discussing our natural tendency to spend too much time and effort refactoring code, Jul (aka -jul-) raised an interesting point:

This is why I think TDD is good. According to the rules, you should not write a single line of code before having a failing test, but you should not write more than one failing test. Then write some code which makes the test pass, but then, start over (eg. don’t write code).

If you follow this cycle, you won’t code l’art pour l’art.

Haha.

Odds are you’ll end up writing tests l’art pour l’art.
— Jul, commenting on 7 Reasons To Hate Your Code

Note for those coming from digg: “l’art pour l’art” can be translated as “just for the sake of it”. Much love, my friends.

When I discovered TDD back in 2003 it came like a breath of fresh mountain air sweeping through the midst of my abstraction-freak analysis paralysis. At last I could code relaxed again, without trying to over-engineer every single feature. It set me free.

Years later I couldn’t help noticing I was writing more test code than program code and that I’d stopped working on side projects, because every time I started I thought:

Ok, I’ll hack out a prototype for this cool little game in no time. So, well, I guess I’d best start by writing a few tests for the core mechanics. Right, time to get my framework out. And… uh… ugh, maybe I’ll go watch some Buffy instead.

Writing tests turned me off the whole enterprise so much that eventually I decided I’d just write without them. And nothing awful happened! In fact, it was as much a breath of fresh air as I’d first felt when I started. What gives?

It wasn’t until reading Jul’s comment yesterday that I realized why this was: I hadn’t stopped doing TDD, I’d just stopped doing the T part.

Essentially, the core of many test-driven development processes looks something like this:

  1. Make a list of features and prioritize them
  2. Pick the most important or fundamental feature
  3. Write test cases for it and implement just enough code to make them pass
  4. Repeat from step 2 until you’ve got something useful

I hadn’t stopped doing this, I’d just skipped the “write test cases” part. Writing just enough code to implement the tests you would have written turns out to work just as well!

Let me be clear – I both like and appreciate unit tests. There are many places in which I still use and rely on them. You’d be crazy to skip the unit tests when you’re writing a complicated translation function, or implementing a novel data-analysis algorithm. However, it was still an eye-opener for me to realize that there are places in which they’re a waste of time. Shock! Call Kent Beck! No, it’s true. Albert Einstein backs me up:

Make things as simple as possible, but not simpler
— Albert Einstein (somewhat paraphrased)

Thanks, Al. In this case, testing your accessors and adding half a dozen interface patterns just to support a burgeoning test suite isn’t as simple as possible. Writing no tests at all for your parsing code is too simple. Finding the perfect level of minimalism between code, tests and bugs is difficult, but liberating.
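For instance, the kind of test that does earn its keep looks something like this. The parse_duration helper is hypothetical, invented here to stand in for ‘your parsing code’:

import unittest

def parse_duration(text):
    # Turn strings like "1h30m" into a number of seconds.
    total, digits = 0, ""
    for ch in text:
        if ch.isdigit():
            digits += ch
        elif ch in ("h", "m", "s"):
            if not digits:
                raise ValueError("missing number before %r" % ch)
            total += int(digits) * {"h": 3600, "m": 60, "s": 1}[ch]
            digits = ""
        else:
            raise ValueError("unexpected character: %r" % ch)
    return total

class TestParseDuration(unittest.TestCase):
    def test_hours_and_minutes(self):
        self.assertEqual(parse_duration("1h30m"), 5400)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_duration("soon")

if __name__ == "__main__":
    unittest.main()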

The truth is, doing TDD for all those years changed my programming style, moulded it into a new, more efficient form that stays efficient even when I stop writing tests for everything. I guess this is common; maybe you should try abandoning your test-first framework for your next side-project and see how it works out for you.

TDD without the T: you may find it surprisingly refreshing.


Written by coderoom

April 27, 2010 at 7:15 am

Posted in Programming


7 Reasons To Hate Your Code


Incompetence is the cause of many ills in the software world, but increasingly I’m seeing a certain kind of competence as being just as destructive. You see, there are a lot of programmers who care deeply about their code. I did, but it turns out there are good reasons we shouldn’t do that; reasons why we should hate our code…

Reason 1: It keeps you open to change

The most productive way to write software is to release early and iterate quickly with feedback from real users, right? It saves us from implementing the wrong features, it helps build a community and so on. What people don’t talk about as often is the impact it has on our relationship with the code. Putting a program in front of users is terrifying because they might not like it. The more lovingly we craft and polish every class before release, the more carefully we plan our abstractions and structures, the more emotionally attached we are to the way we have made it.

The result is that we instinctively resist making significant changes to the code based on early feedback, even though this was the whole point. It’s an unconscious, emotional reaction that we rationalize away to ourselves by claiming the users just don’t understand the feature, or that it just needs a bit more polish when, seen objectively, the best thing to do might be to restructure it completely to do something rather different.

In fact, the best way to remain responsive to your users is to hate your code.

Reason 2: Jeff Atwood says that you should

You can tell a competent software developer from an incompetent one with a single interview question: “What’s the worst code you’ve seen recently?”
If their answer isn’t immediately and without any hesitation these two words: “My own,” then you should end the interview.
— Jeff Atwood

Every now and again people attack Jeff for his assertion that we write bad code, for his arms-wide-open group humility. I think people misunderstand him. The message is not you are a bad programmer, it’s that part of being – of becoming – a great programmer is writing code that you’re ashamed of. If you think all the code you write is wonderful, you’re probably not learning anything.

Reason 3: If you don’t hate your code, you’re wasting too much time on it

Andy Brice famously wrote that if you aren’t embarrassed by v1.0 you didn’t release it early enough – you spent too much time polishing it to meet your expectations instead of giving it to the user and finding out what they think of it. A corollary of this is that if you aren’t embarrassed by your own code then you spent time crafting and reworking it until you loved it. This is time that you should have spent trying to make a product the user will love.

Nobody cares if you refactor to use the Chain of Responsibility pattern or not; they want simple, elegant features that work first time. We should spend our time delivering that, instead.

Reason 4: It’s what Jesus would do

No one can serve two masters. For you will hate one and love the other; you will be devoted to one and despise the other. You cannot love both your code and your users.
— Matthew 6:24 (lightly paraphrased)

Reason 5: Loving your code for its own sake will make your product suck

Loving your own code is a kind of hubris. It presumes that you’re writing the code so that you can work on it in the future, or can show off a neat trick to your colleagues.

This love is misplaced; your code isn’t going to be judged on its internal beauty and consistency. It’s going to be judged by people who just want to get their work done. They’ll love it if it makes that easier and they’ll hate it if it distracts them from that task, if it makes their lives harder.

Creator: I love the power of Unix/AJAX/C#/whatever.
Customer: I want to finish my work and go play outside.
— Scott Berkun

I once spent six months crafting a beautifully-constructed version 2.0 of our company’s product. We’d started again so we could use unit tests throughout, the code was completely modular and pleasant to work on. There was none of the spaghetti-code complexity that dogged the first version.

We demoed an early prototype to management and marketing who said:

Great, so it’ll be ready next month?

Aghast, I pointed out we still had to add a whole bunch of features and do a ton of testing. Their faces fell, and at that point I realized that it didn’t matter how beautiful the new codebase was. It would cost too much to bring it up to the feature level of the old one.

Its intrinsic beauty wasn’t worth anything; only user-visible features weighed in the balance. The project was cancelled, and rightly so.

To worry about code aesthetics more than the aesthetics of the product itself… is akin to a song writer worrying about the aesthetics of the sheet music instead of the quality of sounds people hear when the band actually plays
— Scott Berkun

Reason 6: Gandhi would want you to

Hate the User class, love the user.
— Mahatma Gandhi (lightly paraphrased)

Reason 7: It will make you a better developer

Like most people, I got into programming because I wanted to write things for myself and for my friends. During this enthusiastic newbie phase I was a happy user and a happy developer. My code sucked but I didn’t know that yet.

Later on I got into programming for its own sake; I ended up caring deeply about the kind of design patterns I was applying and whether this for loop could be refactored a third time. Eventually this abstraction-freak phase left me almost incapable of creating any real value, because I invariably got lost in the upper echelons of the software design and never finished any useful features.

Of course, when I started writing professionally I had to care about the features, and had to see my beautiful code mutilated again and again in the name of profit, of getting this feature out in time for the next release. This is enough to make anyone into a bitter veteran.

Eventually I started discovering the joy of programming again, via dynamic languages and web frameworks that made it possible to get useful programs out of the door on day one. I started writing side projects I actually wanted to use. I became free to write the minimum necessary to make something useful. I’d become the ‘guru’, who mistakenly believes he’s better than everyone else because he’s finally become productive again.

But I haven’t become great yet. What might the next step be? I don’t know. Reading more code? Writing less code? Perhaps. But at the moment I have a feeling that to grow in another dimension – creating awesome software – I need to sacrifice the pride I’ve built up in my art:

The seventh step of humility is that a monk not only admits with his tongue but is also convinced in his heart that he is inferior to all and of less value, humbling himself and saying with the Prophet: I was exalted, then I was humbled and overwhelmed with confusion and again, It is a blessing that you have humbled me so that I can learn your commandments.
— The Rule of St Benedict 7.51-54

Until today I found this quote somehow ridiculous or inhuman, yet now I’m beginning to believe that overcoming our egos and admitting that we are writing unworthy code but writing it anyway is a vital step in becoming truly great programmers. Perhaps we can let go and accept that, ultimately, the code never really mattered at all.

Postscript: I’m not saying we should set out to write bad code. It’s vitally important to have an idea of how the code should end up, but instead of working and working until we achieve that and then shipping the product we must write the least we can get away with while gradually approaching the ideal. Understanding this distinction is crucial.


Written by coderoom

April 22, 2010 at 3:45 pm