Automate creating a NuGet package from your Visual Studio project file

Dependencies lead to complexity. Complexity makes things unpredictable. When you change software, unpredictable things happen. Rarely are those things good. That makes people fear change, which makes it hard to harness change for the competitive advantage of the customer.

NuGet manages dependencies. This helps reduce complexity. Which makes it easier to predict what happens when I change software. Which means I don’t fear change. Which makes it easier for me to harness change for the competitive advantage of the customer.

This is a post about using NuGet to create packages. It is pretty simple to start with a GUI tool to build a package, but if you make a change to a file the change won't get picked up by the tool. You'll need to remove the old file and add it again. That gets dull pretty quickly. What is needed is some kind of automation.

NuGet supplies a command-line utility to help. There is even a bootstrap version that updates itself every time you use it. After I downloaded this stand-alone executable I created a new ASP.NET MVC project in Visual Studio. For the purpose of this exercise I don't think it matters what kind of project you choose. In a command prompt, from your project folder, type the following.

nuget spec

This will create a .nuspec XML document in the root folder. This file holds the additional metadata you'll need later for creating the package. By default the file looks like this. The areas that need updating are marked pretty clearly. It is worth noting the $author$ replacement token uses the company name from the project's AssemblyInfo file.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2014</copyright>
    <tags>Tag1 Tag2</tags>
  </metadata>
</package>

Now you are ready to automate the build. Run the following command. This will create a nupkg file based on the name of your project and version number.

nuget pack

You can exclude files from the package using the -Exclude option. That should be enough to get you started; after that, use the following links for more information.

Posted in Sitecore


What is the purpose of architecture in software development? Where does the role of the architect stop and the developer begin? In an uncertain domain (and who doesn’t work in one of those these days) what is the value in spending time upfront designing a grand architecture?

Before attempting to answer these questions, let’s go back to the beginning. The earliest reference to what makes good architecture goes back to Roman times. Vitruvius believed in the three principles of firmitas, utilitas and venustas. Here are common translations.

firmitas – firmness, durability, strength, truth, constancy, stability, endurance
utilitas – useful, utility, expediency, advantage
venustas – loveliness, comeliness, charm, grace, beauty, elegance, attractiveness

How does this translate for modern software architecture?

Do not mistake the concept of firmitas with rigid, fixed, unchanging or constant. A good architecture is strong and robust. A great architecture survives the ever-changing environment in which it finds itself. For traditional architects, that means the ability to adapt to changing regulations, fashion and use of the building. In software, change arguably comes faster: new technologies, rapidly evolving customer needs and shifts in overall business focus.

The firmitas principle raises a simple question. What is the Software Architecture? What is it that can remain unchanged when everything is changing around it? Architecture isn't the code or the implementation details. Those need to change too often. Architecture then becomes about the boundaries. There is a nice analogy between walls, doors, stairs etc. and the contracts that exist in a service-orientated architecture. Allowing rooms to be closed off for redecoration without impacting other areas of the building is important.

firmitas also means a great architecture needs to be honest. It can’t just look good on paper. It is firmly grounded in reality. The only true test of the firmitas principle is how well the architecture lasts in the long-term.

An architecture needs to be useful. The second principle looks to ensure your architecture delivers value. Useful to the people using the finished product, be it software or a building. Also useful to the builders/developers. It is clear that a traditional architecture needs to be useful for the people using the building. With software it is not so straightforward.

Who is it that the Software architecture needs to be useful for? One aspect of utilitas is about delivering competitive advantage to the organisation paying the bills. However it can be argued the users of the software should be unaware of a software architecture. There is of course the design of the user experience but I assume that is outside the scope of Software Architecture. So that really leaves the developers who work with the software day to day.

The utilitas principle changes the relationship between architect and developer. Rather than the architect being the design authority, instead their focus becomes one of attending to the needs of the developer. Their architecture may well be beautiful and be able to stand the test of time, but if it provides no utility to the developers who must work with it day to day it is worthless.

We have used the first two principles to narrow the focus of Software Architecture down to defining the boundaries, and in so doing making the lives of developers easier. Finally we come to the aesthetics of the solution. While the need to focus on the charm of a building is clear, perhaps it is too easy to ignore this aspect when considering the virtual world of software.

Given we’re asking developers to spend most of their working lives working within these architectures, it is only fair I believe to ensure they feel like nice places to work. The effect of an ugly, repulsive architecture on a team is clear. Fear and loathing lead to low morale. So while beauty is in the eye of the beholder, in our case the developers again become the judge of the aesthetic qualities of the chosen Software Architecture.


Posted in Uncategorized

Not what it is rather what it is not

I first heard the term “via negativa” in Taleb’s Antifragile. I thought it was time to dig a little deeper into the idea.

The term itself is theological in origin. In essence we have two ways of knowing God. Either “via positiva”, where we describe all the positive attributes of God, or “via negativa”, where we talk about all the attributes that God is not. In the “via negativa” approach we accept that we do not know God.

My interest in the term is based on ideas in general rather than the specific idea of God. In theory at least “via negativa” allows us to develop strategies to deal with ideas that cannot be understood (either by a description in words or a mathematical model). If you have read my other posts then you can see how this relates to finding ways to create meaningful experiences.

I need something to work on. I’d like to avoid Theology. Something simpler to get started. OK I’ll start with the experience of “flow” when riding my mountain bike.

I don’t need to think logically what to do next. I don’t need to consider my body position. There is no me and the bike. I have no concept of time. My day to day worries disappear.

Does this work well? I think so. What is interesting is that I've said nothing directly about the experience of “getting in the zone”, but it still allows me to talk about it indirectly. A useful tool when the usual “via positiva” approach isn't working.

Taleb takes the concept further and uses “via negativa” as a strategy for what to do.

The entire idea of *via negativa* is that *omission* [avoidance of harm, removal of drugs, corn syrup, cigarettes, gluten, carbs (by fasting), gym instructors, tail risks, etc.] does not have side effects and branching chains of unintended consequences -hence robust.

It is interesting. Back to my mountain biking example. One thing you learn early on is not to look at the rocks or other obstacles. A great example of “via negativa” in action.

Taleb’s next point is interesting.

But big corporations [evil pharma, pepsi] and consultants cannot make money from removing; they only benefit from adding.

When I think about software development in this context you can see how backlogs, feature requests etc. work to increase complexity and ultimately kill the experience of using the software. It would be interesting to build a backlog of things to “take out” of the software.

There are also clear links with complex systems thinking. A “via negativa” approach allows things to emerge. Like a gardener pruning and weeding, the elements of your solution that are discordant with your desired outcome are removed. However I think we need to be careful with that interpretation. “Via negativa” isn't about doing something negative; it's about what you don't do.

Posted in Antifragile, Complexity

Methods for Measuring Meaning

I’m doing a lot of head scratching at the moment. I’m convinced that brands that are able to create, or co-create, meaningful experiences with their customers will win their trust and with it build long-term non-fragile relationships. I’m also convinced that creating these experiences isn’t something that can be done by going into a room and designing from scratch. These social systems are inherently complex, and as with other complex systems it is about designing an environment from which these great experiences will evolve and emerge.

In the most abstract sense we need to be able to consider the perception of the brand from at least one customer's perspective. For that given customer there needs to be a mechanism by which we can judge the quality of the experiences that the customer has with the brand, both in the moment and over the lifetime of that customer's relationship with the brand: what emotions the customer experiences as it happens and how the memories of the interaction influence future behaviour.

Let's consider a broad range of possible methods for getting the answers we need. My belief is that organisations that get locked into one method risk developing an entrained perspective of the customer experience. In other words, while optimisation of the experience may occur, the optimisation risks being centred around a local maximum. The Lean Startup, for example, uses its “pivot” to shift away from these local maxima when they occur.
Applied Research is a practical approach to scientific investigation conducted to answer specific questions. Assume this is focused on unique questions specific to one brand and one segment of customers.

Basic Research is more general; while still scientific in nature, this research focuses on the development of new theories that apply across all brands. In fact, brands trying to do basic research can quickly come unstuck (see the Facebook example).

Correlational Research allows for the investigation of relationships rather than cause and effect. Given the interdependence of factors involved in the customer experience, this type of research is often useful. However, when used with Big Data, care needs to be taken with false positives where the correlation is coincidental.

Descriptive Research provides an accurate portrayal of the characteristics of a particular individual, situation or group. These studies are a means of discovering new meaning, describing what exists, determining the frequency with which something occurs, and categorising information.

Ethnographic Research is the investigation of a culture through an in-depth study of the members of that culture; it involves the systematic collection, description, and analysis of data for the development of theories of cultural behaviour. Depending on the context, this kind of research may be relevant to understanding the social aspects of the customer experience.

Experimental Research is objective, systematic and controlled investigation for the purpose of predicting and controlling phenomena and examining probability and causality among selected variables. When we talk of experimentation this is the classic scientific approach; however, removing the interdependence of variables to gain insight into direct cause and effect is very hard with anything but the simplest customer experience.

Exploratory Research covers studies that are merely formative, for the purpose of gaining new insights, discovering new ideas, and increasing knowledge of phenomena. Often informal in nature, this kind of research is useful for framing a problem area.

Grounded Theory Method turns traditional methods on their head. Rather than formulating a hypothesis first, this method begins by collecting data, before allowing a theory to emerge from the data. Perhaps a model would be a better term than theory; either way, this method certainly has application to customer experience.

Historical Research involves the analysis of events that occurred in the remote or recent past. It is certainly possible to gain insight by analysing historical customer experience data.

Phenomenological Research is an inductive, descriptive research approach developed from phenomenological philosophy; its aim is to describe an experience as it is actually lived by the person. With dubious links to spiritual science, these methods are often viewed as unscientific, and being disciplined about this method requires some level of training.

Qualitative Research is good for dealing with phenomena that are difficult or impossible to quantify mathematically, such as beliefs, meanings, attributes, and symbols. It is well disposed to handling the meaning behind a good customer experience.

Quantitative Research involves formal, objective information about the world, with mathematical quantification; it can be used to describe and test relationships and to examine cause and effect. It is unlikely to provide insight into any but the simplest customer experiences.

Posted in Uncategorized

Ethical & Emotional Experiments

This was going to be a more philosophical piece about how to design a good experiment when the outcome you are trying to measure is an emotional response. To be honest I doubt I would have done more than touch briefly on the ethical dimension. However events have taken over, and Facebook are under attack for publishing the results of one such experiment they carried out in 2012.

The New York Times “Facebook Tinkers With Users’ Emotions in News Feed Experiment, Stirring Outcry”

I don’t think anyone is going to argue the case for experiments which risk doing harm to customers or take advantage of vulnerable groups. Customers also need to feel that their privacy is being protected at all times.

It is clear brands need to develop trust with their customers. If we are to perform experiments then customers must be happy to be involved. People like to feel in control. The anger demonstrated by Facebook users is, I suspect, based on that feeling of a lack of control. Whether we need to go through a process of informed consent or not I'm not sure, but anyone who is involved in an experiment needs to feel in control of the process.

In the physical sciences an experiment will vary one factor at a time. A theory will be created beforehand. The effect of the variation is measured, and if the results don't match the theory, the theory has been falsified. Notice theories are never proven correct. Experimenters need to be cautious of false positives, both statistical and those due to confirmation bias. Good experiments need to be repeatable by other teams.

However we are talking here about building an understanding of how customers will respond to your brand experience. This is about experimentation on people. In the world of clinical trials the gold standard for experimental design is known as the Randomized Controlled Trial (RCT). Here different treatments go head-to-head. Often the trial will be blind, so the participants and the experimenters don't know which treatment is being used in each case. When it is not possible to carry out an RCT, a less rigorous observational study offers a good compromise in terms of quality of results vs. cost.

We can now put two strategies in this context of experimental design. The first is A/B testing, or the more complex multivariate testing. These tests work well for changes in UI within an ecommerce environment where the impact on sales can be measured. The other is the cohort-based testing favoured by the Lean Startup community. This approach allows a much richer change in factors and also allows us to measure the effect over time: for example, to see the effect on a customer who first searches for a product before returning a few days later to make a purchase, as is common when selling flights.
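
One way cohort-based tests are often implemented is to assign each user to a cohort deterministically by hashing their user id, so a returning visitor lands in the same group days later. A minimal sketch (the function name and cohort labels here are my own, purely illustrative):

```python
import hashlib

# Illustrative sketch: deterministic cohort assignment by hashing a user id.
# The same user always falls into the same bucket, so their behaviour can be
# tracked across visits (e.g. search today, purchase in a few days).
def assign_cohort(user_id: str, cohorts: list[str]) -> str:
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return cohorts[int(digest, 16) % len(cohorts)]

cohort = assign_cohort("user-42", ["control", "variant"])
# Deterministic: a returning visitor stays in their original cohort.
assert cohort == assign_cohort("user-42", ["control", "variant"])
```

Because the assignment depends only on the id, no per-user state needs storing, which is one reason this pattern is popular for long-running cohort measurements.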

While these approaches are very useful in the right circumstances, both are susceptible to false-positive errors, and neither helps us measure the quality of the customer experience directly. We get just a proxy measure such as purchase behaviour, which further compounds the false-positive error rate. I suspect this is the case with the Facebook incident. They claim some link between users clicking on likes or uploading messages containing certain words, and the users' emotional state.

The question I’m not going to answer here is are there other options when it comes to the design of experiments?


Posted in Agile

Lean – Shoulders of Giants or House of Cards?

I’ll start by coming clean. I’m more guilty than most. I read the Womack and Jones books on Lean Thinking. Goldratt’s The Goal. The Poppendiecks’ lean software books. David ‘Kanban’ Anderson. I loved them all. I ran into the office looking for “waste” everywhere. Show me the next constraint. How to get flow? To all those I inflicted this bullshit on, I’m sorry. Really sorry.

The problem is that what people do in manufacturing and what people do in software development are pretty much polar opposites. Even that doesn't really cover it. Opposites, after all, have something in common: some axis along which the two extremes can be placed. Not apples and pears, then; more apples vs. beach towels.

I'm guessing at this point I risk losing you. What is this guy ranting on about? To borrow a phrase: it's the work, stupid!

Manufacturing is about the production of finished goods. Usually at scale. These are physical objects. Of course the lean guru will talk about one-piece flow. Different products from the same production line. Different models of car. Different vacuum cleaners. Never a truck, followed by a smart phone, followed by a beach towel.

Lean is a reaction against the status quo. A fight against the efficiency driven, economy of scale, batch style of thinking that characterises the manufacturing industry. It is reductive. Their work item is a finished good, bound by the laws of physics. Inventory piles up. Failure gets binned. Every type of Muda is visible. Takt time can be measured. Each is independent of the others. You can touch, smell, feel, see each finished work item.

Therein lies the rub. The key assumption that sits behind all of this is deeply flawed: that a request for work entering a software development team is equivalent to a request from a customer for a finished good entering a factory. Clearly, if this assertion holds, the payoff is huge. Suddenly the software development community can tap into a wealth of hard-won knowledge about optimising the flow of work within manufacturing. Books can be written. Conferences booked. Careers made.

Mixing my metaphors for a moment. What we thought was the shoulder of a giant, turns out to be a house of cards.

Software development isn’t constrained by physical laws. You can’t pick it up in your hands and weigh it. You can’t see it. I won’t get started on what it does to statistical analysis. Each work item has a mass of dependencies on all others. Even the idea you can reduce software development to an atomic unit like a user story is deeply flawed. Software is inherently holistic in nature.

The only real connection I can now see to manufacturing is that the software we write has more in common with the factory than with the finished goods produced. The software is the robots that produce something called a user experience, with one customer getting a series of interactions that make up our “product”. Perhaps, but I'm clutching at straws. Either way, this looks very unlike anything we see today.

So how bad is it? For now we have a lot of very clever people keeping the assumption in place. The focus is on making our work items fit the manufacturing model. That way we get at least some value from the model. But it will get harder and harder to maintain the fallacy. Everyone can see the cracks appearing. My advice is simple: look at the work. Love it and learn to see it. Challenge every assumption about it. After all, if you run your software development teams with a manufacturing mindset you'll end up with a factory feel to the work. Perhaps we can do a whole lot better than that?

Posted in Uncategorized

The dangers of ROI based on low conversion rates

You have a large ecommerce website. You want to make small incremental improvements to the performance of the website. You can measure the impact via an increase in profits. Everything sounds pretty simple. Just run small experiments on everything from the user experience to pricing to pay-per-click ads. When you see something working, do more of it. If things aren't working, try something else.

This is age-old marketing know-how. I’ve seen this approach being used in direct-marketing since the start of my career. This is the beauty of digital. We can measure everything. Not like stodgy old media. But are these assumptions true?

Let's consider a simple model. The experiment could be anything from a new online ad campaign, an A/B test around button positioning or a good old-fashioned bit of discounting. For the purpose of this discussion it doesn't matter. We have a large customer base. We measure success based on influencing the customers' behaviour. We can expect a very low conversion rate. We also have a low total cost for the experiment.

We begin with a big cohort of customers. We then split these into those who we were able to positively influence, and those who we didn't influence (they were never going to buy, or were going to buy anyway). In each group we then consider the accuracy of our measurements: are the results we measure true or false?

This gets confusing really quickly, so please stay with me.

When calculating our ROI we need a count of all the positives. This count is made up of two types: the true positives (people we correctly measure as being influenced by our actions) and the false positives (people who weren't influenced, but who, because of inaccuracy in the measurement methods, we think were).

Let's assume we have a cohort of 100,000 customers and a 1% measurement error rate (in both directions). Let's also assume the true influence rate is 5%.

  • True Positives = 100,000 x 5% x 99% = 4950
  • False Negatives = 100,000 x 5% x 1% = 50
  • True Negatives = 100,000 x 95% x 99% = 94050
  • False Positives = 100,000 x 95% x 1% = 950

So our test results give the following.

  • Positives = 5,900 (vs. 5,000 truly influenced: an 18% overstatement)
  • Negatives = 94,100 (vs. 95,000: roughly a 1% understatement, as expected)

This is pretty worrying. We could easily be making a decision based on an ROI of 20% when, with a measurement error rate of just 1%, the results are actually close to break-even.
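
The arithmetic above can be sketched in a few lines (the numbers are the illustrative ones from the example, not real data):

```python
# Confusion-matrix arithmetic for the illustrative example above.
cohort = 100_000
influence_rate = 0.05    # 5% of customers truly influenced
error_rate = 0.01        # 1% measurement error, in both directions

true_positives = cohort * influence_rate * (1 - error_rate)        # 4,950
false_negatives = cohort * influence_rate * error_rate             # 50
true_negatives = cohort * (1 - influence_rate) * (1 - error_rate)  # 94,050
false_positives = cohort * (1 - influence_rate) * error_rate       # 950

measured_positives = true_positives + false_positives              # 5,900
truly_influenced = cohort * influence_rate                         # 5,000

# Measured positives overstate the true influence by 900/5,000 = 18%.
overstatement = (measured_positives - truly_influenced) / truly_influenced
print(f"measured positives: {measured_positives:,.0f}")
print(f"overstatement of true influence: {overstatement:.0%}")
```

The key point: even a 1% error rate inflates the positive count by 18%, because the 95,000 uninfluenced customers form a much bigger pool for errors to leak from.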

False-Positive Paradox

Let’s consider some real world examples and some possible strategies for avoiding this effect (known as the False-Positive Paradox).

The first is a pay-per-click campaign. Here we just pay for clicks. Tracking purchases is pretty straightforward; most analytics tools give you revenue figures. However, it is going to be pretty hard to measure definite cause and effect unless we adopt a more scientific approach. Ideally we would have a pre-defined cohort of users to whom we show the advert; then we can measure real influence by comparing users who find the site organically vs. those clicking on the ad. Given most reporting tools don't do this, I'd argue the error rate here is much higher than our illustration of 1%. Ideally use cohorts; if not, ensure your ROI barrier is raised high enough to lift you out of danger.

Next let us consider A/B testing of a new design. In this case we are running an experiment using a typical JavaScript-based tool. We are looking for justification for making some change to the platform, so it is the cost of the proposed changes we need to consider. In this example we can expect a slightly more scientific approach from the start. We are running the test in parallel, which will remove a lot of noise from the results (for example, a sale starting during the test period will have the same effect on both groups). However, unlike the campaign example, here our measurement is not based on absolute sales. We're looking at a shift in buying patterns: the customers who wouldn't buy something with the old design (A) but will now buy something with the new design (B). So unless you have a landslide victory with the new design, take care.

The final example is implementing personalisation logic. In this case we are segmenting our customer base and showing different content to a certain group. Again, if carried out using A/B logic the results are more scientific. However, analysis of these kinds of rules will generally only show the sales figures of the segmented group of users and any uplift seen in this group against the norm. If the rate of ‘influence’ is low then we can expect errors. To avoid this, personalisation rules should again aim for much higher influence rates. In a word: keep it simple; creating multiple highly targeted rules based on non-cohort-based analysis may be unwise.
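
To see why low influence rates are so dangerous, here is a small sketch (my own illustration, extending the earlier numbers) of how the false-positive share of measured positives changes with the true influence rate, at a fixed 1% measurement error:

```python
# Share of measured positives that are false, for a given true influence
# rate and a fixed measurement error rate (applied in both directions).
def false_positive_share(influence_rate: float, error_rate: float = 0.01) -> float:
    true_pos = influence_rate * (1 - error_rate)
    false_pos = (1 - influence_rate) * error_rate
    return false_pos / (true_pos + false_pos)

for p in (0.001, 0.01, 0.05, 0.20):
    share = false_positive_share(p)
    print(f"influence rate {p:6.1%} -> {share:5.1%} of measured positives are false")
```

At a 0.1% influence rate roughly nine out of ten "influenced" customers are measurement artefacts; at 1% it is half; only well above the error rate does the real signal dominate. Hence the advice above to raise the ROI barrier and keep rules simple.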


Posted in Complexity

The Roll the Dice Once Fallacy

It was Stephen Hawking who noted,

The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. … The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.

What concerns me is the claims people make based on these facts. Whether it is intelligent design, the hard Anthropic Principle or any claim about the likelihood of life as we see it coming into being.

All suffer from a very simple logical flaw known as selection bias. When any discussion about our place in the universe starts using extremely large or small numbers to make a point, you see this bias in action. The truth is we have only one data sample with which to make such claims. In that sample, intelligent life emerged on one planet, with one species skilled enough to make suitable observations. We simply know nothing about the situations where life didn't emerge, or where it emerged differently. Any argument that uses probability with only one data point is flawed.

Imagine trying to make claims about the behaviour of a simple six-sided dice when you have been able to throw it just once. For argument's sake, on that one throw you get a 6. We'd be told the chances of rolling a six are small, just 1 in 6. The more we examined the dice, the clearer it would become that the only outcome we observe is a 6. We'd analyse the video of the dice bouncing and tumbling across the table. The improbable events that led to our seeing a six would be discussed at length. External forces would be called into question. Some would say the dice was loaded.

Of course eventually there would be calls to roll the dice again, but with life we can't do that. We can't change the starting conditions and see what happens. We can speculate, of course, but just as when we try to model the tumbling dice, the smallest changes in parameters will have a large effect on the outcome.
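
The one-roll problem can be made concrete with a quick likelihood comparison (the loaded-dice probability of 1/2 is my own assumption, purely for illustration):

```python
# Likelihood of the observed data (a single 6) under two hypotheses.
fair_p6 = 1 / 6      # a fair dice
loaded_p6 = 1 / 2    # a hypothetically loaded dice (assumed, for illustration)

# With one roll, the likelihood ratio is only about 3: far too weak to
# tell "fair" and "loaded" apart with any confidence.
one_roll = loaded_p6 / fair_p6
print(round(one_roll))

# If we could roll twenty times and saw twenty 6s, the evidence would be
# overwhelming. But, as with life, we only ever get the one roll.
twenty_rolls = (loaded_p6 / fair_p6) ** 20
print(f"{twenty_rolls:.2e}")
```

A single data point barely shifts the odds between competing explanations, which is exactly the flaw in any probabilistic argument about our one observed universe.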

So when you see an argument based on probability and our place in the universe call bullshit. We have only been able to roll the dice once.

Posted in Complexity