Spaghetti or lasagna? What about your code? According to Conway's law, your business and your code have more in common than you might think. Break bad dependencies.
There is a simple heuristic you can use to determine the top-priority activity to engage in at any given moment. It comes out of the “lean manufacturing” camp. It applies to a business as a whole, or to a specific product and its backlog. Your developers typically apply it when improving software performance. Now you can use it in the context of your product development process.
Your biggest priority at any given moment is clearing your biggest bottleneck. This gives the largest non-linear jump forwards in system productivity, because of the Herbie problem. That includes business productivity (read: profit). Cycle time goes down. You reduce “friction” around production.
Once you clear a bottleneck, you create another one (a relatively smaller one) elsewhere. This is the nature of this game. Then clearing that bottleneck will give you the highest possible non-linear improvement in the output of the business as a whole. In that context, if you aren’t releasing your software to production automagically with every check-in, you have bottlenecks to clear.
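A minimal sketch of that heuristic, in Python: a serial pipeline can only move as fast as its slowest stage, so improving anything other than the bottleneck changes nothing. The stage names and rates here are invented for illustration.

```python
# Throughput of a serial pipeline is capped by its slowest stage.
stages = {"design": 10, "build": 4, "test": 7, "release": 9}  # items/day

def throughput(stages):
    # The system moves at the rate of the slowest stage.
    return min(stages.values())

def biggest_bottleneck(stages):
    # The stage with the lowest rate is the one worth fixing first.
    return min(stages, key=stages.get)

print(throughput(stages))          # 4 -- capped by "build"
print(biggest_bottleneck(stages))  # build

# Doubling a non-bottleneck stage does nothing for the system:
stages["test"] = 14
print(throughput(stages))          # still 4

# Clearing the bottleneck lifts the whole system -- and a new,
# relatively smaller bottleneck appears elsewhere:
stages["build"] = 12
print(throughput(stages))          # 9 -- now capped by "release"
```

Note how fixing “build” immediately makes “release” the new constraint: clearing bottlenecks is a repeating game, not a one-off task.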
The end game of clearing bottlenecks is simple. You become a “pull-based” organization. You can respond immediately to customer requests, if you want to, if you need to, or if it tickles your fancy. That’s a pretty valuable place to be.
If you’re thinking of creating a new product, you don’t want to wait a week. Especially in the early days, when you need to make up your mind about what the product is.
The fastest way to speed up testing? Delegate tests and run them in parallel. The financial value of small, fine-grained tests you can run quickly is immense. You can benchmark product ideas against each other and complete hypothesis tests much faster.
When organizing your own workflow, modularity increases your ability to scale testing. Total elapsed time is much lower, because you can do many tasks in parallel and then combine the results at the end. This includes testing activity.
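That pattern, run independent tests in parallel, combine at the end, can be sketched in a few lines of Python. The scoring function here is a stand-in: in reality each test might be a survey, an ad experiment, or a landing-page trial.

```python
from concurrent.futures import ThreadPoolExecutor

def run_test(idea):
    # Placeholder experiment: score each idea. In practice this
    # would call out to a real test (ads, surveys, prototypes).
    return idea, len(idea)

ideas = ["landing page A", "landing page B", "pricing test"]

# Each test is independent, so they can all run at the same time.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_test, ideas))

# Single combine step at the end: pick the winner.
best = max(results, key=lambda r: r[1])
print(best)
```

Because no test depends on another, total elapsed time approaches the duration of the single slowest test rather than the sum of all of them.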
You iterate out of that initial fuzzy zone and into working on a product that sells based on a clear promise. Your product doesn’t need to be finished. Instead, the product idea must be clear, easy to understand, and attractive to your target audience.
For example, for years big banks and trading firms tried to create software for algorithmic trading. Traders used computers to execute simple strategies much faster than a person could. A computer can snatch and resell a product in milliseconds if it identifies a price difference across two markets.
The principle is the same as using an eBay sniper. If you could have computers
1. identify opportunities
2. patiently wait for the appropriate moment,
3. buy and sell at exactly the best time,
the computers would execute transactions on better terms. A computer’s response time is far shorter than a human’s. Electrical impulses sent from eye, to brain, to finger take ages, relatively speaking. That speed advantage made a big difference.
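The three steps above can be sketched as a toy arbitrage check: watch two markets, and when the same product is cheaper on one than the other by more than the transaction cost, buy low and sell high. The prices and fee are invented for illustration.

```python
def find_arbitrage(price_a, price_b, fee=0.01):
    """Return (buy_market, sell_market, profit) or None."""
    spread = price_b - price_a
    if spread > fee:
        # Cheaper on market A: buy there, sell on B.
        return ("A", "B", spread - fee)
    if -spread > fee:
        # Cheaper on market B: buy there, sell on A.
        return ("B", "A", -spread - fee)
    # No opportunity worth the fee: wait patiently.
    return None

print(find_arbitrage(100.00, 100.05))   # buy on A, sell on B
print(find_arbitrage(100.00, 100.005))  # None -- spread below fees
```

A human running this loop by hand would be hopelessly slow; a machine runs it every few microseconds, which is exactly why the speed advantage compounds.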
A few years ago, banks started moving away from big cloud arrays of central processing units (CPUs) running standard code. Instead, they started using hardware from high-performance computing. Graphics hardware known to gamers, like graphics processing units (GPUs), found new purpose.
The main difference between CPUs and GPUs? GPUs enforce modularity and parallelism at a low level. Data is passed in, and the whole chip can be used simultaneously to process it. There are no serial bottlenecks, which often arise because CPUs are general-purpose.
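The data-parallel pattern GPUs enforce looks like this: one simple operation applied independently to every element, with no shared state and no dependency between elements, followed by a single combine step. This pure-Python sketch runs serially, but because each element is independent, a GPU could map each one to its own thread.

```python
data = list(range(8))

def kernel(x):
    # The same instruction for every element: no branching on
    # neighbours, no shared state, so elements can be processed
    # in any order -- or all at once.
    return x * x

# Serial here; on a GPU, one thread per element. Result is
# identical either way, which is the whole point.
results = [kernel(x) for x in data]
total = sum(results)  # single combine step at the end
print(total)  # 140
```

Any code that fits this shape parallelizes trivially; any code with serial dependencies between elements does not, no matter how fast the hardware.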
The difference in performance was staggering. JP Morgan claimed they had reduced the time required for overnight risk calculations to one minute, with the same level of accuracy. While building the FPGA arrays requires more effort up front, the benefit is clear.
Competitively, JP Morgan had much better information to act on. Its calculations refresh every minute, while its competitors only reconcile their overall risk once a day. Everyone else just “whips their horses” harder.
Low-level hardware modularity meant that JPM executed the same instructions much faster. It created the right environment: geeks run hundreds of similar calculations in parallel, then combine the results as one final step.
Academics have replicated these results in studies. Compared to a standard CPU, simple arithmetic ran 147x faster on modular hardware. It just happens that the arithmetic calculates the price of a financial option. You pass in standardized data. Any part of the chip can run the calculation. Nothing changes while it all runs. Only at the end are the results summed and the data updated.
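That option-pricing arithmetic is a textbook example of the pattern: in a Monte Carlo pricer, every simulated path runs the same calculation independently, and only a final sum combines them. Here is a plain-Python sketch of a Monte Carlo European call (illustrative parameters, not any bank’s actual model); on a GPU, each path would get its own thread.

```python
import math
import random

def price_call(spot, strike, rate, vol, t, n_paths=100_000, seed=42):
    """Monte Carlo price of a European call option."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol**2) * t
    payoffs = []
    for _ in range(n_paths):  # each path is independent --
        z = rng.gauss(0, 1)   # on a GPU, one thread per path
        terminal = spot * math.exp(drift + vol * math.sqrt(t) * z)
        payoffs.append(max(terminal - strike, 0.0))
    # Single combine step: average the payoffs and discount.
    return math.exp(-rate * t) * sum(payoffs) / n_paths

# Prints an estimate near the Black-Scholes value (~10.45 for
# these parameters).
print(price_call(100, 100, 0.05, 0.2, 1.0))
```

Nothing in the loop depends on any other iteration, which is exactly the property that lets the whole chip work at once.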
You have a bunch of ideas, but don’t know where to start. If you can figure out how to sort out the best ones quickly and test many ideas in parallel, you can get clarity very, very, very fast.
You can use the same approach for individual benefits or even features. Hey, drill down as much as you want. The same logic applies. As the Beastie Boys say, “If ya got the skills, you pay the bills.” Easy.
If you want to find out more about how to do this, take a look at Clear Strategy Now