## Getting in the Zone

Let’s start with power. Here I’m using the Andy Coggan method, recommended by Joe Friel and the default option on TrainingPeaks.

| Power Zone | Zone Description | FTP % |
|---|---|---|
| 1. Recovery | Active recovery | less than 55 |
| 2. Aerobic | Aerobic or extensive endurance | 55–75 |
| 3. Tempo | Intensive endurance | 75–90 |
| 4. Threshold | Lactate threshold | 90–105 |
| 5. VO2 Max | Muscular endurance or aerobic capacity | 105–120 |
| 6. Anaerobic | Anaerobic capacity | 120–150 |
| 7. Power | Neuromuscular power | greater than 150 |
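These percentages are easy to turn into absolute watt ranges once you know your FTP. A minimal sketch in Python (the 250 W FTP is just an illustrative number, not a recommendation):

```python
# Coggan power zones as percentages of FTP, taken from the table above.
ZONES = {
    "1. Recovery": (0, 55),
    "2. Aerobic": (55, 75),
    "3. Tempo": (75, 90),
    "4. Threshold": (90, 105),
    "5. VO2 Max": (105, 120),
    "6. Anaerobic": (120, 150),
    "7. Power": (150, None),  # open-ended: anything above 150% of FTP
}

def power_zones(ftp_watts):
    """Convert each zone's FTP percentages into a watt range."""
    ranges = {}
    for zone, (low, high) in ZONES.items():
        low_w = round(ftp_watts * low / 100)
        high_w = round(ftp_watts * high / 100) if high is not None else None
        ranges[zone] = (low_w, high_w)
    return ranges

for zone, (low_w, high_w) in power_zones(250).items():
    label = f"{low_w}-{high_w} W" if high_w is not None else f"over {low_w} W"
    print(f"{zone}: {label}")
```

The same function works for heart rate zones by feeding it FTHR and the FTHR percentages instead.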

This would be great if it were used everywhere, but with power meters still rather pricey we need to be able to translate these into heart rate zones. Here is how I think the power zones relate to Joe Friel’s cycling heart rate zones.

| Power Zone | Heart Rate Zone | FTHR % |
|---|---|---|
| 1. Recovery | 1 | less than 68 |
| 2. Aerobic | 2 | 68–83 |
| 3. Tempo | 3 | 83–94 |
| 4. Threshold | 4 | 94–105 |
| 5. VO2 Max | 5a | greater than 105 |
| 6. Anaerobic | 5b | n/a |
| 7. Power | 5c | n/a |

In the version of The Mountain Biker’s Training Bible I have, Joe uses Critical Power as a guide to which zone to train in. These are pretty hard to relate back to the Coggan zones, but here is my best guess. I’ve also included some details on a typical ride, which I found in another of his books.

| Power Zone | Critical Power | Typical Ride |
|---|---|---|
| 1. Recovery | CP180 | n/a |
| 2. Aerobic | CP180 | 5h ride |
| 3. Tempo | CP180 | 3h ride |
| 4. Threshold | CP30-60 | 8-30 min intervals |
| 5. VO2 Max | CP12-60 | 3-8 min intervals |
| 6. Anaerobic | CP1-6 | 30 sec – 3 min intervals |
| 7. Power | CP.2 | less than 30 sec intervals |

And finally I’ll try to relate these to the Borg Rating of Perceived Exertion (RPE) scale, which runs from 6 to 20. There is also a 1 to 10 version of this, used in the older Friel books. The 1-10 scale is quite hard to relate, though, as it is very vague around the 6 value.

| Power Zone | RPE (6-20) | RPE (1-10) | RPE Description |
|---|---|---|---|
| 1 | 6-8 | 0 | No exertion at all to extremely light |
| 2 | 9-11 | 1-4 | Very light to light |
| 3 | 12-14 | 4-5.5 | Somewhat hard to hard |
| 4 | 15 | 6 | Hard (heavy) |
| 5 | 16 | 6.5 | Harder |
| 6 | 17-18 | 7-8 | Very hard |
| 7 | 19-20 | 9-10 | Extremely hard to maximal exertion |

## The Corner

My eyes look past it. But I see it clearly.

I take the less travelled line in. Fingers itching to dab the brakes. The last chance to scrub some speed before I’m committed. This time I hold my nerve.

My body coils ready. The bike leans. I feel the gyroscopic tug of the big wheels. Stabilising across the stutter.

Now in the moment. I’ve started to shift my weight to the outside pedal. I sense the knobbly edges bite.

Both the acceleration and the centripetal forces work to create my own artificial gravity. Wrapping me and the bike. The world shifts. What was a berm becomes a bomb hole.

The legs push deep into the thin line. Confident everything is balanced. My whole body squeezes the bike against the ground. Like a bar of soap squeezed too hard, I get pushed out of the apex.

I’m aware of the noise of the wheels. The click of the rear hub. The feel of the air being pushed aside. Rocks and debris settling back down behind me.

But this corner isn’t a singularity. There are many. Those behind like a hand on my back continue to push me forward. Those ahead exert their own pull. Willing me on. Each merging into the next.

## Secrets of the Solar System

Horizon, 2014-2015: 8. Secrets of the Solar System.

I had a long trip home last night, all the way from a sunny Luton back to Chippenham. Thankfully I got home in time for bedtime with the kids, so we went on iPlayer to find something to watch together. Horizon’s “Secrets of the Solar System” caught everyone’s interest and we settled down to watch.

Things started off well enough. The production quality was good. It captured the kids’ attention from the start with a clear, simple narrative and great special effects. I was also learning new things. The idea of Jupiter and Saturn forming at the point where water turns to ice is very compelling. It also explains why they grew so fast.

However it wasn’t long before I was starting to shout at the TV. First with the programme’s central point: that the old clockwork view of the solar system needs replacing with a model where the planets move around. During this whole discussion there was no mention of the three-body problem.

This idea basically says that if you have two objects then their orbit around each other is stable. However, add just one more object into the mix and things get chaotic and unpredictable. The more objects you have, the more chaos ensues.

Once you’ve got an understanding of the three-body problem, the question becomes: how is our solar system so stable? When you look at the chaotic orbits of just three interacting objects, any thought that the early solar system came into being in the same stable configuration it is in today is surely madness. Why didn’t the Horizon team tackle this subject? I’ve seen Brian Cox tackle it before and it isn’t hard to visualize.

Unfortunately things got worse. Next we had the Hot Jupiter discussion. At no point did Horizon mention sampling bias. We see Hot Jupiter type systems out there simply because these massive gas giants sitting close to their suns are easy to see. Do these solar systems look weird? You bet. What conclusion can we draw from that? At the extremes of possible solar system formation there are some pretty weird configurations.
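Sampling bias is easy to demonstrate with a toy simulation. Assume (purely illustrative numbers, not real astronomy) that gas giants occur at all orbital radii, but our instruments spot close-in planets far more easily:

```python
import random

random.seed(1)

# Simulate 100,000 gas giants with orbital radii uniform between 0.05 and 10 AU.
planets = [random.uniform(0.05, 10.0) for _ in range(100_000)]

def detected(radius_au):
    """Illustrative detection model: close-in planets are far easier to spot."""
    if radius_au <= 0.5:
        return True
    return random.random() < 0.5 / radius_au

observed = [r for r in planets if detected(r)]

true_median = sorted(planets)[len(planets) // 2]
obs_median = sorted(observed)[len(observed) // 2]
print(f"true median orbit: {true_median:.2f} AU")
print(f"observed median orbit: {obs_median:.2f} AU")
```

Even though the underlying population is spread evenly out to 10 AU, the catalogue of "observed" planets is dominated by Hot Jupiter style close orbits, which is exactly the trap of reasoning from what we happen to see.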

As my 10 year old said, it all seems pretty random. Given the chaotic orbits you can expect in any system with more than two bodies, it seems impossible to predict what the final stable configuration is going to look like. Once our ability to see other systems improves, perhaps we’ll find each configuration is unique. Just like our fingerprints.

So that takes us back to the original question: why is our solar system so stable? Well, that one again is simple. Because we’re here. Life needs a stable system, so any system with life in it to ask the question will be a stable one. There is no mystery in that.

## Random, Predictable or Chaotic?

There is a lot of confusion about complexity. Chaos about chaos. Sorry, that’s not helping, is it? You see the point?

No. OK here goes.

Let us start simply. There are random systems: systems where the next thing that is going to happen can’t be predicted. Mathematicians call these systems stochastic. Some people like to use the term non-deterministic. If you want to work with these kinds of systems you use something called probability theory. The classic example of a stochastic system is the coin flip: the probability of heads or tails is 50%.

There are also non-random systems. These systems are deterministic: the future state of the system is not random. A deterministic system will always produce the same behaviour given the same starting position. Newton’s laws of motion are classically deterministic.

Now, the confusion chaos theory brings is that it is possible to have a system that is deterministic in theory but in practice behaves like a non-deterministic one. This is best illustrated by the butterfly effect, where very small changes in the initial conditions of a system have a significant effect on the outcome.

It turns out even the coin-toss is actually a deterministic system that behaves randomly.
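The simplest place to see this is the logistic map, a one-line formula that is fully deterministic yet behaves chaotically. A sketch, tracking two starting points that differ by one part in a billion:

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate x -> r * x * (1 - x). Deterministic, but chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.200000000)
b = logistic_map(0.200000001)  # differs by one part in a billion

# The tiny difference in initial conditions grows until the two
# trajectories bear no resemblance to each other.
for step in (0, 10, 30, 50):
    print(f"step {step:2}: {a[step]:.6f} vs {b[step]:.6f}")
```

Run it and the two sequences track each other at first, then diverge completely: a deterministic rule producing output that looks, for all practical purposes, random.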

OK see why I was confused?

Now add linear and non-linear systems into the mix. Linear systems are simple: the output of these systems is proportional to the input. You turn a tap a little bit, some water comes out. You turn it more, more water. Non-linear systems don’t work like this; the link between input and output is more complicated. These non-linear functions are very hard to work with mathematically.

So what happens? Well it just so happens that most non-linear systems can be approximated with a linear model of the system. That means you get nice simple maths to work with. Great.

However, the problem is that chaotic behaviour (the stuff that makes a deterministic system look non-deterministic) is neatly hidden when using a linear model. Take a software-engineering example: with a linear model of a team, adding more people will increase the amount of work the team can do in a linear way. In practice we have known since at least 1975, when The Mythical Man-Month was published, that this is not true. Counter-intuitively, adding more people to a software-development team can actually slow the team down.
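Brooks’s argument can be made concrete with a little arithmetic: the number of communication paths between n people grows as n(n-1)/2, i.e. quadratically, while the hands available grow only linearly. A quick sketch:

```python
def communication_paths(n):
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:2} people -> {communication_paths(n):3} paths")
# 3 people share 3 paths, while 20 people share 190: more than 60x the
# coordination overhead for less than 7x the people.
```

The linear model counts only the people; the non-linear reality is dominated by the links between them.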

So what does it take for a system to move from something we can model usefully in a deterministic way to one that must be tamed using probability? It was Newton who first noticed the problem way back in 1687. His mathematics worked brilliantly for two objects but failed once he tried to model three (the three-body problem).

It wasn’t until 1887 that Poincaré was able to understand what was going on with the three-body problem. Two hundred years. It takes only three interdependent objects to create chaotic behaviour that appears in practice to be random. This is why, both in software engineering and in project management, people have a principle of removing dependencies wherever possible.

## Careful what you measure

While measuring one side of each rectangle in a set on average gives you a good estimate of the total area, this only works if the side lengths follow a normal distribution. If they follow a fat-tailed distribution it will not work so well.
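A quick simulation makes the point. Assume (illustrative setup) square-ish rectangles whose side lengths are drawn either from a normal distribution or from a fat-tailed Pareto one, and compare the estimate count × (mean side)² against the true total area:

```python
import random

def area_estimate_error(sides):
    """Compare n * (mean side)^2 against the true total area of squares."""
    n = len(sides)
    mean = sum(sides) / n
    estimate = n * mean * mean
    true_total = sum(s * s for s in sides)
    return estimate / true_total  # 1.0 would be a perfect estimate

random.seed(42)
n = 100_000
# Side lengths from a well-behaved normal distribution (clipped positive)...
normal_sides = [max(0.01, random.gauss(10, 1)) for _ in range(n)]
# ...versus a fat-tailed Pareto distribution on the same rough scale.
pareto_sides = [10 * random.paretovariate(1.5) for _ in range(n)]

print(f"normal side lengths: {area_estimate_error(normal_sides):.2f}")
print(f"pareto side lengths: {area_estimate_error(pareto_sides):.2f}")
```

With normally distributed sides the one-dimensional proxy is nearly perfect; with fat-tailed sides the total area is dominated by a handful of giants the average never sees, and the estimate falls badly short.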

So if you’re using a proxy measure to estimate the value of the work you do (e.g. hours, cost, story points) be warned: you need to consider more dimensions to get an accurate picture.

## A Brief History of Value

> Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

There is much talk of value in modern software development. Lean and Agile methods require that the whole team understands the value it is delivering. However, in order to deliver value we must first understand what is meant by value.

This post will take a quick trip through the history of the theory of value and attempt to demonstrate the ideas from a software development perspective.

### Value in Exchange and Value in Use

Aristotle, the Greek philosopher, began by distinguishing between two types of value:

- “value in exchange” – the market rate for the item at a point in time.
- “value in use” – the worth of the item while it is being used.

Let us apply this to software. Consider a simple ecommerce website. The “value in exchange” is the current market rate for delivering such a website based on a certain set of features and the level of quality desired by the customer. The customer will generally discover this “value in exchange” by going through a tender process. The “value in use” in this case will be based on the profit generated by the ecommerce website.

### Labour Theory of Value

It was Karl Marx, building on the work of Adam Smith and David Ricardo, who postulated that ultimately value is based on the amount of labour required to produce the item. This is essentially a cost-based approach to value.

For software this is a classic time and materials basis for providing value to a customer.

### Marginal Theory of Value

This theory came about in the mid-to-late nineteenth century. The central idea is that of marginal utility, which is best explained with an example. Consider a newspaper. You purchase today’s Times. If you purchase a second copy of the Times, it will have zero marginal value for you. You now buy the Guardian. Reading the Guardian is not as useful as reading the Times was: many of the stories are repeated. However, it has some marginal value, with a handful of exclusive stories and a different perspective on the rest of the news. Each successive newspaper you buy on the same day delivers diminishing marginal utility.
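The newspaper example can be sketched with numbers. Assume (purely illustrative, not real economics) that each successive paper delivers half the utility of the previous one:

```python
def marginal_utility(first_value, k, decay=0.5):
    """Utility added by the k-th newspaper (1-indexed), halving each time."""
    return first_value * decay ** (k - 1)

total = 0.0
for k in range(1, 6):
    gain = marginal_utility(10.0, k)
    total += gain
    print(f"paper {k}: marginal utility {gain:.3f}, running total {total:.3f}")
```

The first paper delivers almost all the value; by the fifth, each new purchase barely moves the total. The same curve is why early features in a product matter so much more than late ones.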

This concept is critical for development teams delivering value in the long term. It explains why you should concentrate on a “minimum viable product”, which will release the most marginal utility for the least amount of work.

It also explains why it is hard to maintain pace in a team. Even if the team keeps technical debt low, over time the marginal value of each new feature diminishes. Innovation helps solve this problem but is out of scope for this post.

So, in summary, looking back at the history of economic theories of value, there are three ways of looking at value:

1. Market Rate – The value in exchange based on what other people are charging for the work you do.
2. Time and Materials – The cost based approach which focuses on the labour and other incidentals.
3. Utility – The marginal utility of the work being done.

Clearly all these perspectives are interlinked, for example a market rate can be driven down by cheap offshore labour costs. So use all three approaches when considering the value your team delivers to your customers.


## Motivation: Ego vs. Task Based

It is possible to split goals into two categories: task oriented and ego oriented.

Task-oriented goals focus on mastery of skills. They are more likely to encourage learning even if the goal is not achieved.

Ego-oriented goals focus on achievements. They tend to reinforce some aspect of self-image, boosting the ego, and tend to make people more anxious about failure.

In sport psychology both goal types are used, with task-oriented goals dominating in the off-season when athletes are preparing for competition, while ego-oriented goals dominate during the competitive season.

Websites like Strava focus on ego-oriented goals, with the aim being to beat the times of your friends, or simply of others riding the same segments as you.

Last year I moved away from competing in events like La Marmotte to focus on improving my mountain bike skills. A shift from ego to task based goals. I stopped making every ride about beating the clock, so I could spend more time exploring. I found new routes even on my well-ridden ride to work.


## The case for PORV (Plain Old Razor Views)

At the turn of the century there was a battle over where and how to store your application’s business logic. Entity Beans became the bloody loser. Plain Old Java Objects came away victorious.

The benefits worked both ways. It became easy to change your infrastructure without affecting the business logic. Or, in faster-moving businesses, developers could evolve the business logic without concerning themselves with the infrastructure supporting the application.

When I look at ASP.NET MVC I see Razor templates littered with details about the infrastructure. Surely the responsibility of the View is to tie together the HTML, styles, scripts and images to create the visual aspects of the user experience? Developers want to be concerned with browser and device compatibility. Beyond the basic model interface they should know nothing about how the routing is implemented, how the controllers are structured, which actions to call etc.

The case for PORVs is clear. You can test all the visual aspects of your application without a database or more complex content management system. You can work directly on the views without the need to build/deploy to a central application framework. Your visual tests don’t break every time the database is updated. You can implement a separate deployment process taking visual changes into production using a light-weight fit-for-purpose process. You can use the same visual code across multiple applications.

The main villain of the piece is the static HtmlHelper classes. If the PORV idea is to work then this approach needs to change. Perhaps any infrastructure specific logic can be shifted into a View Model or more open Helper interfaces can be used with DI making it easy to switch the infrastructure service? I certainly don’t have all the answers. I spend too little time coding these days to explore these ideas fully. All I can do is put the idea out there. So please feel free to comment.

One final note. Martin Fowler and co. recognized the need for a fancy name. So is it Plain Old Razor Views, Plain Old Razor Templates or something else?
