Speed vs Predictability

Understanding which trait is most important to a project can be a big factor in whether that project is seen as successful. It can also help determine the best way to “manage” a project (in a “project management” sort of way).

A traditional waterfall approach to a project attempts to restrict change, and so looks good from a predictability point of view. We know the requirements up front (or think we do), we have a design we have signed off on, and we can therefore know with some certainty the end date for this project.

When it comes to speed though, a waterfall approach has some issues. Firstly, there’s Student Syndrome to consider – the fact that many people will only start to fully apply themselves to a task at the last possible moment before a deadline. This can lead to wasting any buffers built into the original estimates and helps to explain Parkinson’s law – work expands so as to fill the time available for its completion.

Waterfall plans tend to have larger tasks on them with deadlines that are further away. Even if the work is broken down, the focus tends to remain on the larger phases. For example, even though I know I should complete my task today, I know that it only really matters in 3 months’ time when the larger phase reaches its deadline. I can convince myself that things will get better, that I will get lucky with the next task and so catch up.

Agile approaches focus more on embracing change and so can appear to be less predictable. We don’t really know what we are creating or when it will be ready, but you will have an opportunity to really shape what we develop, incorporating the best ideas of the time and deciding when we have enough value to release.

The Agile practices of daily standups (or scrums) and short sprints create a culture of constant targets and could be seen as a way of speeding up development. There is little scope for either Student Syndrome or Parkinson’s law to take effect here. Daily standups attempt to break down that large task we might have had with a waterfall approach and turn it into smaller daily commitments. By setting a daily target, apathy is never allowed to set in.

However, it is also this practice of daily commitments which could be seen to force teams to aim lower than they would normally. Rather than saying “I think I would like to get this done if things go well”, the group aims instead for a commitment – “I will definitely get this amount completed”. The very nature of it being a commitment means less is targeted and anyone attempting to commit to more than seems reasonable is questioned by the team.

This same thinking can also be applied to the “sprints” that an agile team will take part in; they could be seen as less like sprints and more like strolls, as the amount they are “sprinting” for is reduced in scope in order to achieve a commitment. That’s not strictly true: a team should be tracking velocity, taking items from the backlog to fill the sprint and moving items that weren’t completed into a future sprint. However, teams, and perhaps more importantly their stakeholders, want commitments. Even if it isn’t seen as a commitment by the team, the innocent question at the start of a sprint – “what can I expect to see in the product after this sprint?” – can lead to an unspoken expectation of a commitment.

However, if a team constantly meets its daily commitments (or sprint objectives) then it is going to be more predictable, which could be seen as more valuable in some projects.

As a thought experiment, let us consider a task so important that we need to devote every single minute of our time to completing it. Surely, this is the route to completing it as soon as possible. Stopping work for a daily standup isn’t going to help us progress this task. The standup does offer the chance for the group to offer assistance in creative ways: perhaps someone else has already solved the problem you need to solve in another project, perhaps someone else is available to look at the problem with you and remove the blockages that have slowed your progress, perhaps the task can be broken down and worked on in parallel. So the standup is a bit of a gamble: if nothing changes, the time is wasted, but attending could save you time on your project.

This is one of the reasons the standup should be as short as possible; for everyone attending and taking the gamble that they will get a benefit from being there, if the gamble doesn’t pay off then the loss should be as small as possible.

By reducing scope a little with the daily commitment, the group increases predictability, but also gains a little spare capacity to help those people who might not make their commitment. A bet that nearly always pays off is a very attractive proposition, and I think this is one of the most powerful aspects of a daily standup.
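
To make that gamble concrete, here’s a quick expected-value sketch. The numbers are purely illustrative assumptions, not measurements from a real team:

```python
# Rough expected-value sketch for attending a daily standup.
# All figures are illustrative assumptions.

standup_cost_minutes = 15        # time each attendee spends in the standup
chance_of_useful_help = 0.2      # how often the standup unblocks an attendee
minutes_saved_when_helped = 240  # time saved when it does (e.g. someone already
                                 # solved your problem in another project)

expected_saving = chance_of_useful_help * minutes_saved_when_helped
net_benefit = expected_saving - standup_cost_minutes

print(f"Expected saving per attendee per day: {expected_saving:.0f} minutes")
print(f"Net benefit after the standup's cost: {net_benefit:.0f} minutes")
```

On these assumptions the bet pays off comfortably; a team could plug in its own figures to check whether its standup is earning its keep.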

I don’t think there is a simple answer to the question of whether speed or predictability is more important, but I do know from past experience that choosing the right approach is very important.



Recruitment Talk

I’ve written a few times about recruitment.  In particular you can read what I had to say on Job Applications and Interviews.  There are even some amusing stories from my Interviews.

This time I’d like to share a talk I did for GeekUp Nottingham (@geekupnotts) which was all about hiring and getting hired.

You can watch the video below or view my slides here.  You probably want to skip the first 18 minutes of the video (it was being streamed live, but we had a few technical issues getting started).  The excellent @samwessel is the first speaker, well worth a watch.  My talk doesn’t start until about 52 minutes in.


Progress – Part 3 (What still needs improving?)

In the previous parts, we looked at how things used to be in the development team, and then at how things are now.  This time, let’s look at the areas that still need improving.

As a small company trying to get started, it was important that we used our ability to react quickly as one of our strengths.  Speed was very important in the development team, but one of the ways we achieved that speed was by taking on technical debt.

Now that we are larger, speed is less important and quality and predictability are the main requirements.  Our code base is larger and areas of inflexible design have surfaced.  We need to refactor into smaller replaceable components so we can keep the product fresh and modern.  We can count this as another area of technical debt that we need to reduce.  We need to find ways of reducing our debt whilst maintaining our ability to ship real benefits to customers.

One of the main techniques we plan to use to improve our quality and reduce technical debt is Test Driven Development.  We’ve been keenly watching Uncle Bob’s Clean Coders videos, as his use of TDD has been a revelation.

As a test team we haven’t made as many changes as we could.  We have started the process of automating our testing, but too much is still manual.  We spend a long time on our regression tests; often not finding any bugs, but using as much as 2 weeks of our time.  When we’re aiming for a monthly release cycle (or shorter) this is more than we can afford.  One line from hearing Uncle Bob talk about his work on FitNesse stuck with us: “if the tests pass, we ship”.  That has to be our goal as a development and testing team: having such a comprehensive set of automated tests that, if they pass, we ship.
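
As a sketch of what that goal looks like in code, here’s a minimal automated regression check in Python.  The function under test and its expected values are hypothetical stand-ins for real product behaviour; a real suite would run unattended as part of the nightly build:

```python
# Minimal sketch of an automated regression check, in the spirit of
# "if the tests pass, we ship".  apply_discount is a hypothetical
# stand-in for real product behaviour.

def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to pennies."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_is_identity():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 120)
    except ValueError:
        return  # expected: invalid input is refused
    raise AssertionError("expected ValueError for an invalid percentage")
```

A runner such as pytest would collect and run these automatically; only when every test passes does a build become a release candidate.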

A constant battle in project management, between stakeholders and the project team, is understanding when a project will be complete.  Agile methodologies attempt to address that issue by offering a new version regularly and letting the product owner decide when a version is complete enough to ship.  However, unless the whole business is truly agile, this doesn’t really work; marketing want to announce what’s coming for the next year, and expectations are set with customers.  No-one wants an estimate or to be told to wait 2 weeks at a time to see if the system is what they want.  What they really want is a commitment!  So whilst we can work in 2 weekly sprints, and can release software to our customers monthly, we need to plan for something like quarterly commitments.  To have any chance of keeping these commitments, we need to use some of the more traditional project management tools to manage change (and restrict it).  We need to remove risk, and we need to estimate the size of tasks more accurately.

Whilst our stated aim has been for a monthly release, these have been stretching out to nearer 3 months and that starts increasing the pressure to delay the next release.  No-one wants to take the risk that if their sponsored feature misses the current release that it will then be a further 3 months before it can be released.

One of the tools that we can use to change this is to make our behaviour visible.  We have started to track our key metrics: by what date do we aim to have all our new code developed?  How many features and fixes are still in development?  How much have we already passed to the test team?  Once we have this information, we can measure our progress, track our confidence in making the release and then adjust our plans accordingly.  By using a daily stand-up with the key members of the team, we keep visibility and remind everyone of our targets.  We can also display these metrics in a public place so that everyone is aware of them.
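
A minimal sketch of that kind of metric tracking might look like the Python below.  The dates and case counts are made up for illustration; in practice they would come straight out of Fogbugz:

```python
# Sketch of release-confidence tracking from a few key metrics.
# All numbers are hypothetical, for illustration only.
from datetime import date

code_complete_target = date(2012, 9, 28)  # date we aim to finish new code by
today = date(2012, 9, 14)

cases_in_development = 12    # features and fixes still being worked on
cases_with_test_team = 30    # already handed over for testing
cases_closed_per_day = 1.5   # recent development throughput, averaged

days_remaining = (code_complete_target - today).days
days_needed = cases_in_development / cases_closed_per_day

print(f"Days until code-complete target: {days_remaining}")
print(f"Days of development still needed: {days_needed:.0f}")
print(f"Cases with the test team: {cases_with_test_team}")
print("On track" if days_needed <= days_remaining else "At risk - adjust the plan")
```

Reviewing a read-out like this at the daily stand-up turns a vague feeling about the release into numbers everyone can see move.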

One of the thought experiments we can run is to find out which parts of our process would stop us releasing software on a weekly basis.  At the moment that is our manual testing, but as we automate more of that we should keep our eyes on what else might be slowing us down.  If we get to a point where we could release weekly, then the next step is to look at what would prevent us releasing daily.

One of the big benefits of a daily, automatically built, tested and ready for release version of our software would be within our support team.  Our support developers still spend a lot of their time producing hot fixes and sending them to customers.  These are manually built, put together and tested by the developer who made the code change.  Automating this process would free up more time to help other customers and reduce the occasional mistakes we make following this manual process.

If you’ve followed all 3 of these posts on progress then hopefully you’ll have seen how far we have come as a team; from sound principles, but poor practice, to a more modern approach with a vision of where we still need to improve.  It really is a credit to the team members, both past and present that have helped drive the change whilst maintaining a high quality product for our customers to use and our business to sell.



Progress – Part 2 (How things are now)

Last time we looked at how things used to be within my development team.  This time, let’s look at how things are now and some of the improvements that have been made.

We now have 3 broad teams of developers, each team with its own focus.  Within the teams we still have many smaller projects in progress, often with just a single developer working on their own.  The teams are focussed on front end / user interface developments, back end / 3rd party integrations, and customer support.  We’ve introduced a variety of languages into the product – we now use Python, C#, Silverlight and JavaScript instead of just sticking with C++.  Some people have started to specialise within those teams and our user interface team now includes a graphic designer – we include “user experience” as a serious part of our process.

By splitting the developers into these teams (and putting them in different rooms) we manage to focus on all areas of our product.  As we saw last time, with everyone in the same room and no defined roles, we only managed to focus on the current interesting issue within the team (usually whoever had just phoned with a problem).

Our release schedule is more defined.  Partly thanks to Johanna Rothman’s book Manage It!, we realised that in order to challenge our failure to release every year, we needed to shorten our release cycle, not extend it.  We now aim for a monthly service pack release, a quarterly feature release and a yearly major upgrade.

This change in release cycle forced us to address our build and release process.  We now run an automated nightly build via TeamCity.  This also allows our test team to always have something to test, rather than waiting months between builds.

Our source control also needed to change, and we got good mileage from a switch to subversion.  However, it recently became apparent that we needed more flexibility in our branching strategy to cope with the different team goals and release cycles, so a switch to distributed version control is well underway.  As we were already big fans of Fogbugz, we decided to try Kiln, which is based on Mercurial.

Fogbugz is the hub of everything we do in development.  We track every change to our product through a case in Fogbugz, every change is proposed, reviewed for acceptance, implemented and then tested.  Fogbugz produces the output for our release notes (which now accompany every release, with every change noted), and contains a wiki of all the really useful information we want to share as a team.

We now have a dedicated team to handle incoming support phone calls and email, so developers no longer have to do this.  Whilst Fogbugz did a great job of handling those incoming emails, it wasn’t structured enough for the support team to use effectively – a wiki was just too general to store the information the support team required.  They have now switched to using SuperOffice, which does have the advantage of being the same system that the sales team uses.  As a database of customers, their equipment, previous support issues and maintenance contract details it excels.  As a piece of software to use to drive your daily tasks (handling support) it is one of the worst combinations of technology and user interface design I’ve had the displeasure of using!

As well as adding a dedicated support team, we have also increased the number of people in our testing team.  With a daily build available, the test team always have something to test.  They have also improved the way they test.  They have more representative hardware available and focus on finding the faults our customers care about.  They also now test whether our software will downgrade as well as upgrade.

As well as improving the hardware available to the test team, the hardware available to the developers has also improved.  Dual (or even triple) screen setups are now the norm.  We have laptops available for each person who wants their own, whilst we still have a few available to share for occasional use.  In fact we have a dedicated budget to spend on whatever equipment or software we need as a development team.  We now use Visual Studio 2010 or Komodo as our IDE with tools such as ReSharper, NCrunch and Visual Assist X available to help and are constantly looking at new ways of improving our development environment.

Our physical working environment has also improved.  We shuffled people around into different rooms to help keep the teams focussed.  We even managed to build a partition wall to divide the largest and noisiest of our rooms down into two smaller and therefore more focussed work areas.  Our tired, old, straight-edged desks that had served us well for 15 years were replaced with modern curved desks that all matched.  With a more consistent look to our desks, we were also able to add matching book cases, cupboards and storage space.  With the disruption of changing furniture we also took the opportunity to bring decorators in to repaint the walls and brighten our working environment.

We now have a budget for training.  This allows us to purchase almost any book we think looks interesting, but also allows us to purchase training videos, go to external training courses and attend conferences.  This year we’ve had people at DevWeek 2012, ACCU and Usability Week as well as regular sessions for the whole team to watch Clean Coders.  It’s not just having a training budget that helps with training, but a general increase in the importance of training.  We now run regular Coding Dojos to practice and improve our skills and look at other free ways of improving.  We believe that Practice is important.  We are also far more involved in the local developer community.  We always have at least a couple of us at the monthly Geekup meetings – watch out for me giving a talk at one in the future!

Over the years we’ve doubled the size of the team.  Whilst bringing in new people brings in new ideas, it is important that those people are at least as talented as the current team.  Our previous interview process was very hit and miss, and whilst it certainly isn’t perfect now, it’s definitely improved.  We now put ‘juggling’ at the heart of our interview process, with a series of simulations to try and identify if the candidate can do the job.  We still keep the interview friendly and fun, but we do need to know that the person we are interviewing can do what they say they can.

Whilst we can be very proud of the progress we have made, there are still lots of areas we can improve.  Let’s look at those areas next time.


Progress – Part 1 (How things used to be)

I started as Software Development Manager about 5 years ago and I’d like to share some of the changes that have occurred since then.  Some of those changes I’ve worked hard to create, some were pushed by others within the business, but collectively I believe we’ve achieved a lot.

So, let’s start with looking at how things used to be in the development team…

We aimed to release a new version of our software every 6 to 12 months.  This involved collecting a big list of requirements and dividing them up amongst the team.  However, what actually happened was that we started with more than we could do.  This was fuelled by a business need to aim high and stretch ourselves.   As the months progressed we picked up more “must do” requirements that we added to the next release and before we knew it our release had taken 18 months to finish!  We often delayed a release whilst we waited to complete something that no-one actually wanted.  The pressure to delay a release while we added something new just got bigger the longer a release was delayed – if the next release was going to be a year away, we couldn’t possibly wait, so it was better to delay the current release a few more weeks whilst we added that item.  A vicious circle of ever delayed releases was created and getting to “done” was extremely difficult.

Not only was this bad for morale, it was bad for our own processes.  As we didn’t have an official release of software, we would send out builds done from a developer’s PC that included the bits a particular customer required.  As you can imagine, trying to keep track of who was running what and supporting our customers was very problematic.  No one had a standard, official release of software, so we relied on knowing each customer’s quirky software versions.

This was only made worse by the tools we used.  We used Visual SourceSafe for our version control.  It was always pretty reliable for us, so I don’t want to knock it too much, but it didn’t help us branch and merge our software to handle our various customers and changing requirements.  At least we didn’t suffer from some of its well known limitations, but we had outgrown it and needed more if we were to improve.

As I mentioned, we often released software direct from a developer’s PC.  Again our tools did nothing to help with this.  We didn’t have a dedicated build machine (well, that’s not quite true – all official releases were done from my machine, and that relied on me not doing anything to upset the build).  The build wasn’t automated in any way, so it was time consuming and error prone.  We gradually progressed from a checklist I put together so I didn’t forget something from the release, to a home brewed C++ application that did some of the work for us.  However, a build would still take a day to put together and was often unreliable or had missing components.

Another consequence of having such large release cycles was the workload on the test team (all one of them).  They could go months with nothing to test and then we’d apply huge amounts of pressure to get the software tested and released when we were finally ready.  Increasing the size of the test team was not really an option when they were only busy one month in 12, but for that one month we were drastically short of testers.

Our development team had very little structure.  Whilst we had a few Senior members on the team, that generally just meant they got given some of the more difficult developments on the list.  We all sat in a large open plan office, which was great for everyone knowing what everyone else was doing, but terrible for concentration and solving difficult problems.  The other major drawback was that any support issues would take over the whole team.  We just couldn’t focus on delivering the new features as we got too engrossed in solving the interesting problems our customers were experiencing.

Which brings us to the next area; support.  We only employed developers, so they were the ones answering the support enquiries.  They would answer the phone and respond to the emails and the quality of those communications would vary drastically – some would go to great lengths to explain the problem and show how clever they were to have found a resolution, others would expose their grumpier side at the need to explain a trivial issue for the fifteenth time!  Developers could lose whole days or weeks with customers on issues that should have been addressed with better information, training or sales support.  They weren’t writing any software, which for a software developer was unrewarding and for the business was unproductive.

Again, as we only had developers on the team, as well as covering support we were also called upon to do all of the installations, to attend meetings with the sales team and to offer pre-sales support.

When we all sat in the office, working on a desktop PC was no problem, in fact they offered good performance for the price, but once we started going out of the office we struggled.  No-one had their own laptop, so we shared an aging Dell laptop.  Somehow this was never quite right, never had the latest source code or tools installed.  Before anyone went on-site they would need to spend a day updating and preparing the laptop for their visit.  As well as sharing a laptop we would share a mobile phone – at least this didn’t require any setup!

It wasn’t just laptops that were lacking in the equipment department.  Developers worked from a single CRT display and still developed using Visual Studio 6.  The test team didn’t have a single representative hardware platform to test on, they just used the 2nd hand developer PCs.  This made any form of compatibility and performance testing very difficult to complete.

The test team not only lacked representative hardware platforms to test on, they also lacked test systems to exercise our product properly, i.e. the sorts of systems our customers were using.  They’d developed test scripts that were very effective at finding bugs in the first few versions of software, but no longer found the issues our customers were experiencing.  In fact several releases of software were sent to customers where even the basic functionality didn’t work!

Another area that we covered as developers was I.T. support.  Perhaps unsurprisingly this area was also lacking.  We ran a central server with limited storage space and an unconvincing backup process.  Our remote access into the office was also unreliable.

I’m pleased to say that the developers I had in my team were all very talented and had a fantastic understanding of the product we worked on, but that wasn’t always the case.  We’d had several colleagues in the past who could talk a good game, but really couldn’t perform.  As we had no defined interview process, we were very much in danger of repeating this mistake (and we should all learn from our mistakes).

The final area to mention was the general state of the office we worked in.  Whilst it was located in a beautiful and picturesque location, inside the space was being wasted, the walls looked shabby and the desks looked very tired.

Next time we’ll look at how these areas have changed from the primitive practices described above to something more modern.  However, before we do, I think it is worth noting that even with these obstacles to success, the team still triumphed.  No matter how good or bad the environment, processes and other practices, it is the people that matter the most.


Learning from Mistakes

Wouldn’t it be great if everyone learnt from their mistakes?  Well, I like to think I’m a glass half full sort of person, so I believe that people do learn from their mistakes.  However, what happens when they don’t realise it was a mistake?  How can they learn?

I’ve a story here of a mistake that has been made but not recognised, and I think I might be contributing to hiding that mistake.  This is really tricky, as the consequences of letting someone see the full extent of their mistake are not going to be pleasant for anyone.  We’re all supposed to be on the same side, so shouldn’t we always try and help each other out?  Isn’t tidying up after someone else the right thing to do?  I’m sure it is, but if that stops them realising they made a mistake in the first place, then actually I’m the only person who suffers, and those mistakes will keep being made and I’ll keep being asked to sort them out.

So, here’s the story, and I think it will be very familiar to many software companies.  The software development backlog is growing and the board is becoming impatient that their ideas are not being realised quickly enough.  They are tempted by an offer too good to be true; why not outsource the development?  Fixed cost, plenty of people saying yes, and guaranteed good results or you don’t have to pay; what’s not to like?  Don’t worry about those negative know-it-alls who caution against giving up control of how the software is written and the hidden costs of supporting the outsourced software for the next 5 years.

I’d like to say this story has a happy ending, and in a way I suppose it does.  Of course, the outsourced software was terrible (as we said it would be), the next 6 months were lost as a team of 5 people tried to bring it back in line, and 2 of the original team quit the company afterwards.  Instead of accepting the internal estimates of trusted experts with a track record of producing quality and being realistic, the words of the outsourced Johnny-come-lately were believed.  As we saw previously, the true cost of development was not considered.  The actual costs were more and the end results below original expectations.  It has only been the hard work and expertise of the team inheriting the outsourced software that has produced the eventual good results.  They have dealt with the mistake and helped tidy up the mess.  After this work, the software looks pretty good, and customer feedback is now very positive.  So here’s the problem: the mistake isn’t recognised, and the lesson isn’t learnt.  Even worse, the ends are now starting to justify the means, which is only going to encourage more mistakes like this.

So what should have been done differently?  Should we have left the product to rot, perhaps refused to touch it at all and made it clear that every single piece of that outsourced software was wasted?  Maybe, but is that really what a team does to each other?

Note:  If you’d like some help learning from your mistakes, I can recommend this essay from Scott Berkun: http://www.scottberkun.com/essays/44-how-to-learn-from-your-mistakes/

You can also find lots more of these great essays in Scott’s book Mindfire: Big Ideas for Curious Minds


The True Cost of Developing Software

How much does it cost to develop software?

I remember in my early days of software development I worked for an outsourced software development company and we followed a Waterfall approach to software development.  We had neat phases that we could assign effort and timescales to and so we could agree a cost for each stage.  We prided ourselves on being able to offer fixed cost quotes for any development work that was required.  These costs were well defined and plenty of techniques existed to make sure they stayed roughly in line with expectations.  The whole nature of contracted software development was focussed around minimizing changes and predicting costs.  Don’t tell our customers, but making them happy was only part of the story, the real aim was to deliver what was agreed at the time that was agreed for the cost that was agreed.  If the customer agreed to something they didn’t really want, then whose fault was that?

The last stages of any development effort would largely consist of testing, handing over a final version and then moving into maintenance.  How long would the maintenance stage last?  2 weeks?  Perhaps a few months?  Maybe we’d offer 3 months of defect fixing free of charge, but after that we could charge for each additional fix.

If it took us 6 months to build the product and we offered 3 months maintenance then the total cost of the product was 9 months.  Actually, we’d charge for 3 months maintenance, but only expect to actually work on the product for 1 of those months (if that), so the actual cost would really only be around 7 months.  Brilliant, that’s 2 months of being paid for nothing!

Now let’s look at how that has changed now that I work for a product focussed company.  We take a more Agile approach to software development, so the comparisons are not quite the same, but we can still consider the costs of producing version 1 and maintaining that version.  These are largely the same, but now the onus is on us to find the defects and fix them before our customers do.  Previously, if the customer didn’t find the defect quickly enough, we didn’t need to fix it.  If we did need to fix it, then there was also the possibility that we made some more money.  As we know, the cost of fixing defects increases with the time from when they were introduced to when they are discovered and fixed.  This leads to the cost of maintenance increasing with time, rather than having a fixed cut off point as it would with our outsourced development.  So now, we are not only more motivated to find the defects (so we find more), we also pay more of a premium to fix them (rather than someone else paying for them to be fixed).
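
To put rough numbers on that, here’s a tiny sketch using the often-quoted rule of thumb that the cost of a fix multiplies at each later stage of discovery.  The multipliers are illustrative, not measured data:

```python
# Illustrative sketch of defect-fix cost growing with discovery delay.
# The stage multipliers are a common rule of thumb, not measured data.

base_fix_cost = 1.0  # relative cost of fixing a defect the moment it is written
cost_by_stage = {
    "during development": 1,
    "in code review": 2,
    "in system test": 10,
    "after release": 50,
}

for stage, multiplier in cost_by_stage.items():
    print(f"Defect found {stage}: {base_fix_cost * multiplier:.0f}x the cost")
```

Whatever the exact multipliers, the shape is the same: maintenance gets more expensive the longer a defect survives, which is exactly why finding defects ourselves, early, matters more on a product than it did on a fixed-term contract.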

The second factor with working on a product is that you don’t want to just stop at version 1.  You want to get to version 2 just as quickly as you did with version 1.  In fact, you don’t want to stop at version 2, really great software takes 10 years, so you need to think about all the versions you are going to build over those 10 years.

Let’s compare the use of outsourced development with that of doing it in house.  Let’s imagine that by outsourcing version 1 you can reduce the costs by 50%.  But, now you have a problem with taking that to version 2.  You could pay the outsourced company again to take the software to version 2, but very soon you are going to be tied into their rates and only they will be able to take the product forward.  Do you really want your product controlled by a 3rd party?

Alternatively, you could take the product in-house for maintenance and produce version 2 yourself.  But now you have the problem of finding the right people to take ownership of this software.  It wasn’t written in-house to your company coding standards and with long-term maintenance in mind (remember, the outsourced development team only has to get to version 1; version 2 is someone else’s problem).  Bringing this back in-house is going to introduce a ramp-up cost and delay.  You then end up paying for the extra overheads of having to develop in-house (which is exactly the cost you were trying to remove by outsourcing in the first place).

So, let’s take a look at the costs again.  By outsourcing, you saved 50% of 6 months, but now that the software is back in-house, you still end up paying the full cost for the next 9 and a half years!  Even worse, you pay back more than you saved trying to bring the outsourced software back into line, as it will take longer for the new in-house team to get up to speed than if you had simply built the product in-house to start with!
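The arithmetic above can be sketched out explicitly. This is just an illustrative model using the figures from this post (6 developer-months to build version 1, a 50% outsourcing saving, a 10-year product life); the 4-month ramp-up penalty is my own assumed figure, not one from the text:

```python
# Illustrative cost model for the outsourcing comparison.
# All costs are in developer-months. The ramp-up figure is an
# assumption for this sketch, not a number from the post.

BUILD_MONTHS = 6            # time to build version 1
PRODUCT_LIFE_MONTHS = 120   # "really great software takes 10 years"
OUTSOURCE_SAVING = 0.5      # assumed 50% saving on version 1
RAMP_UP_MONTHS = 4          # assumed cost of bringing the code back in-house

# Building in-house: you pay the full rate for the whole product life.
in_house_cost = PRODUCT_LIFE_MONTHS

# Outsourcing version 1: a cheaper build, then a ramp-up penalty,
# then the full rate for the remaining nine and a half years anyway.
outsourced_cost = (
    BUILD_MONTHS * (1 - OUTSOURCE_SAVING)
    + RAMP_UP_MONTHS
    + (PRODUCT_LIFE_MONTHS - BUILD_MONTHS)
)

print(f"In-house: {in_house_cost} months, outsourced: {outsourced_cost} months")
```

Under these assumptions the 3 months saved on version 1 are more than wiped out by the ramp-up, which is the point: the saving only ever applies to version 1, while the other nine and a half years cost the same either way.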


Posted in costs, estimates, planning, software development | 2 Comments

My First Computer

One of my favourite interview questions is asking how long someone has been interested in computers. Someone with a long-term interest will usually be more motivated to learn and improve, i.e. the sort of person I want to work with.

Now, before I offend anyone by suggesting that this is the only route to success, it certainly isn’t. I’ve worked with great programmers who haven’t followed this route and not so great ones who have. It’s more important that you enjoy your work now than how long you’ve enjoyed it.

Anyway, here’s my story of how I started with computers and programming. Let’s go back to the early ’80s and the arrival of the home computer…


The Commodore VIC20 – my first computer

My Dad came home from work one day with a Commodore VIC20. It came with a tape player, a plug-in Space Invaders game cartridge and a joystick (whilst writing this post, I realised that it wasn’t actually a Space Invaders game, but was in fact a Galaxian clone). Immediately, I was hooked. I can remember getting up early before school, sneaking downstairs whilst everyone else was still asleep and enjoying some serious alien blasting practice on Star Battle.

Star Battle Game Cartridge

A Screenshot of Star Battle on the VIC20

Our library of games slowly expanded as new games were purchased, most of which required loading from tape, using the immortal commands:

LOAD
RUN

(with the machine’s “PRESS PLAY ON TAPE” prompt, and a long wait, in between).
One of the greatest aspects of the VIC20 and home computing at the time was that games were often printed in magazines and books. I remember spending hours typing in a program from a book that promised the most exciting game I could imagine. Most of the time, however, I ended up bitterly disappointed with the result. Defusing a bomb? Guess the correct number. Gunfight in the wild west? Tap the space bar as soon as possible when “draw” appears on the screen.

The other common result was that the program wouldn’t run at all! Either I had mistyped the program or the listing was incorrect to start with. Having invested hours of my life, and with the promise of alien lands, I was always determined not to be beaten by a simple programming error.

This was my first introduction to debugging. I didn’t particularly understand the strange commands I had typed in, but I did start to spot patterns. I’d always manage to get a program at least partly running, which was usually good enough, even if it would later crash.

Whilst writing this post, I found this amazing website that provides a full VIC20 emulated environment inside your browser.  Why not take a look?

The Commodore 64

Eventually the VIC20 was replaced with a Commodore 64. The games got better, a lot better actually, but I still enjoyed typing in programs from books and magazines. I also got a few books that showed little snippets of programs. How to generate sound, how to draw colour to the screen, how to accept user input, that sort of thing. With this new knowledge I remember creating my first program – a flight simulator.

Now, before you start thinking I was some sort of child genius, let me explain what this flight simulator would actually do. It was loosely based on a commercial game that mainly consisted of getting the right speed and adjusting the flaps at the right time. My game started with a nicely drawn blue sky, green grass and grey ASCII runway. When you pressed a key the engine noise would start and gradually increase. I then convinced my younger brother that he needed to pull back on the joystick at the right time to take off. However, I didn’t know how to program joystick input, so all the program would ever do (or could ever do) was rev the noise up and up and then play a crashing sound as the engine overheated. It was then game over! This was an important lesson in the power of imagination and how the suggestion of underlying intelligence can fool a user.

Commodore 64 Floppy Disk Drive

As I got better with the C64 and BASIC programming I started writing more of my own programs. Eventually “Silly Ken’s Dungeon Adventure” was born. This was a choose your own adventure program (all text-based), but it also included my first look into assembly language programming. By that time I had a 5 1/4″ floppy disk drive and was a regular reader of an enthusiast magazine that once a month came with a floppy disk full of goodies. One month’s disk included a sample app that showed how to display a bouncing graphical bar with text using assembly language and raster programming. Again, I didn’t understand any of it, but with enough trial and error and perseverance I managed to change the colour of the bar and the text displayed on it.

From the Commodore 64 I progressed to an Amiga. What a machine the Amiga was. The games were amazing, but I also progressed to some serious tinkering. I paid a lot of money for a 2MB memory upgrade and a 20MB hard disk drive. By now I was pretty proficient in BASIC programming and could write small programs in Amiga BASIC, and I had started scripting in Amiga DOS. Combining these skills with some friends’ DPaint magic and Music Tracker skills resulted in various “demo disks” that showcased our talents. They didn’t get very far as the only people who ever saw them were ourselves, but still we felt good about them.

The Commodore Amiga

Amiga Workbench Startup Screen

By the time I got to my A-levels I knew I wanted to work as a programmer, but wanted some more formal training in this craft. My A-level Computing course was actually a huge disappointment as it didn’t offer any teaching in the art of programming. It covered a lot of hardware basics and system design, but nothing programming related, even though a major component of the course was writing your own large program. I decided to write my own play-by-mail game based around another system I had some detailed instructions for. This was a pretty big challenge and one I didn’t quite complete by the time deadline day came round, but as I’d enjoyed writing it so much, had a clear vision of where I wanted to get to and really felt a strong ownership of the software, I carried on improving it during the summer. Before the end of the summer and my disappearing off to University I had a working system that I was play testing with 6 friends. They really enjoyed it, and again, whilst the program didn’t actually do much behind the scenes, the power of suggestion and their imagination meant they perceived the software as being more than it was.

This was also my first pragmatic approach to software deadlines. I realised that what I needed for a complete system would take weeks of solid effort, but I didn’t want to wait that long. So, I prioritised the most important features to get started and added new features later. I realised that it would take a while before any of the players would get to the more advanced parts of the game, so I simply skipped those parts and wrote them later. This was very successful and is an approach I still use today – it is important to get the most used features in front of users as soon as possible; other features can wait until version 2.

I continued using my Amiga as my main programming machine right through into my 2nd year of University. It really was a great machine: it read DOS disks, and had a C and C++ compiler as well as Prolog and other lesser-known language interpreters that I used on my Computer Science course. And did I mention the games were amazing!?  Unfortunately, one evening I’d gone out and left my TV on standby. An electrical fault caused the TV to explode and, with my Amiga positioned right in front, it also went up in smoke. Since then I’ve been a happy PC owner, but sometimes I still miss the old Commodore machines.

My First PC

Posted in software development, Uncategorized | 4 Comments

Planning Genius or Stupidity?

Here’s a little story of a project I was recently involved in. I always enjoy the chance to watch and learn from other people’s work, so with an external Project Manager I was looking forward to this one.

Things started well with all parties coming together to form the project plan. One thing that stood out was that the plan didn’t include any testing to show that the implementation had been successful. Whilst this concern was raised, nothing changed. Otherwise, the plan looked reasonable, with various checkpoints and milestones along the way.

Everyone signed up and off we went, full of optimism and ready for the challenge.

Pretty soon we hit problems: certain tasks were delayed and unexpected items cropped up that needed to be dealt with. Nothing too unusual there; however, instead of re-planning or executing contingency plans, we simply proceeded as if nothing had changed and pushed on as best we could.

I was amazed that the original plan still stood, but even more amazed that the go-live date of the project was still achieved. Was the PM a genius for getting it this far or was he stupid to ignore the warnings along the way? I’d have been pushing hard to act upon the various setbacks we’d had on the project. Questions such as “Can we have more time?” and “What items can we cut from the first phase?” would have been asked. My approach would have been to get the bad news out early and reset expectations – better to give all the stakeholders as much warning as possible and the opportunity to adjust their own plans. But maybe that wouldn’t have been right; after all, this project still achieved its go-live date without having to change plans or cut items.

Well, having spent the past few weeks living through the consequences of this false go-live, I’m pretty sure the PM was not a genius!

All sorts of senior people got dragged into a project that they didn’t need to be involved in. If the original plan had considered how the end results would be tested, and had assessed the risk of needing some time to adjust the solution, then additional time for user acceptance testing and rework would have been added and the eventual timescales would have been about the same. However, instead of creating the extra stress and bad feelings, everyone would have been pleased to see the project on plan!

Here’s where I think the project went wrong, and I think this is a common mistake. A project plan is your best estimate of reality. It is there to help you make sensible decisions that you otherwise couldn’t make unless you knew the future. A project will always take as long as it takes. Always. That’s worth repeating: a project will take as long as it takes. You might not like that fact, but it is a fact nonetheless. Your plan has little influence on the time a project takes, so you are best getting the plan as accurate as possible instead of as short as possible. Of course you can change the parameters of a project: deliver less, lower the quality, assign more people. But these are so much harder to do once you have published the plan and formed a commitment to the stakeholders. Your skills as a PM are often judged on how closely you can make reality fit your plan, but really all you’re actually judging is how well you produced the plan in the first place!

The warning signs were there throughout the project – tasks were 90% done, but not complete. Everyone reported only what they thought the PM wanted to hear, not the truth. Once the project became live, there was nowhere left to hide and so it wasn’t pleasant viewing.

So here are the lessons learned:

1. When starting a project consider the end goals and how you will check if they have been achieved or not.

2. Look at the risks to achieving those goals and make contingency plans. In this case, running a backup system whilst the new one was being tested and reworked would have been a good plan. Alternatively, scheduling representative groups of users to trial the system in phases would have been preferable to a big bang where the whole project was either a total success or a total failure.

3. Seek the truth on a project no matter how unpalatable that might seem. Request honesty when dealing with progress and risks, and reward that honesty instead of instilling a fear of failure.

Posted in estimates, information, listening, planning, process, Project Management | 2 Comments

The Talent Code


I’ve recently finished reading The Talent Code by Daniel Coyle.

Here’s my summary of the key points and how I think we could use these in software development.

I’m a definite subscriber to the idea that the best software developers are 10x more productive than the worst.  I’ve seen this in practice with the people I’ve worked with, and there are numerous studies that have produced the same results.  The interesting thing about The Talent Code is that it offers an explanation as to how some people can be so much more productive than others.

I’m a big fan of programmers who started programming early and the idea put forward by Daniel Coyle that it takes 10,000 hours of deep practice to become world class would support that reasoning.  Note that 10,000 hours on its own is not enough, it has to be deep practice, and to maintain that practice requires ignition (a spark to light the desire to succeed) and master coaching.

Rather than just accepting that some people have a talent for programming, Coyle provides a blueprint for gaining that talent that doesn’t require an innate ability to start with.

Some of the current practices that I see within the development community can be directly linked to the talent code.  Coding dojos and coding katas both promote the idea of deep practice.  Not just slinging together the quickest solution to a problem to get the code out the door and the boss off your back, but really thinking about the problem and pushing your brain to grow more talent.

Sharing the experience, and being inspired by great speakers is a way to ignite the desire and maintain the will to keep learning and practicing.

Agile practices such as pair programming offer the opportunity to examine the work of your pair and really think about the problem, what you are producing together, and ultimately learn from the experience.  In short, they offer the chance to grow myelin, the secret ingredient to talent.

Retrospectives also invite learning and continual improvement.  Rather than repeating the same mistakes, retrospectives offer the chance to suggest change, try new things, practice better and therefore grow more talent.

So whatever you are doing and whatever your skill level, if you want to improve, you should be doing something you enjoy, you should do it a lot, but most importantly you should really think about what you are doing.

Posted in peopleware, reading, software development | 2 Comments