a very common tragedy (chapter 17)

This chapter is posted in its entirety from the below book, available now.

“For that which is common to the greatest number has the least care bestowed upon it. Everyone thinks chiefly of his own, hardly at all of the common interest; and only when he is himself concerned as an individual. For besides other considerations, everybody is more inclined to neglect something which he expects another to fulfil…”

– Aristotle, The Politics, 4th century BC

The Tragedy of the Commons is the famous title of a 1968 essay by ecologist Garrett Hardin. It has become the metaphoric label for a concept dating back to at least Aristotle in the West, with a significant treatment by Oxford economist William Forster Lloyd in an 1833 pamphlet. The term relates to how people are motivated to behave when sharing a common resource: basically, our near-term self-interest can take precedence over the common good, including our own longer-term interests, if we mistakenly believe someone else will deal with the issues.

The tragedy of the commons gets quite detailed and complex in modern academic studies spanning economics, ecology, psychology, politics, and science. It is generally considered a subset of game theory, but for our discussion we will keep the concept simple. Where it differs from other game-theory setups, like the prisoner’s dilemma, is that the positive effects are specific to the decision maker while the negative effects are dispersed among a wider population, meaning the decision maker gets a big upside and a relatively small downside. The tragedy part of the name comes from the fact that this only works as long as most people don’t do it, yet it is often obvious to every individual that they are better off doing it, and therefore in the end everyone gets a large downside.

Hardin indicated the countermeasure to the tragedy of the commons was for societies or other collectives to make some type of social contract, such as agreements, legal contracts, or legislation, with a modern example being fishing quotas to limit the depletion of fishing areas. However, Elinor Ostrom shared the Nobel Prize in Economics for her extensive work on how various societies have managed this problem for hundreds of years with a wide variety of countermeasures, not just the one suggested by Hardin.

Although the tragedy of the commons is best known in economics, ecology, and other sciences, I think we can observe such behaviour in technology strategy and architecture; it is just harder to notice it for what it is. We can also look at some principles from Elinor Ostrom on how collectives successfully manage this problem, as these might help us. But first, let’s work through a typical scenario so we are clear on the concept.

Counting Cows

“Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.”

― Garrett Hardin, The Tragedy of the Commons

Examples of the tragedy of the commons normally go something like this. Consider 15 farmers grazing cows on common land, where the land can optimally support 150 dairy cows. So, it seems obvious that each farmer should keep 10 cows, assuming they want to play nicely. However, one farmer, let’s call him Barry, realises that if he adds an additional cow the total number of cows on the land only goes from 150 to 151, which is barely less efficient for all the farmers, but a big relative gain of 10% more milk production for him. So Barry adds a cow, and then another, giving him 12 and an overall total of 152. Barry gets 20% higher milk production, with little personal cost to him and a small, shared cost for the other 14 farmers. This is smart. Barry is smart.

But so is Steve, one of the other farmers, who comes to the same conclusion; it is an idea he’ll bet on with 10 extra cows and a promising young yearling. The same conclusion, without the yearling, is reached by farmers Bernard, Peter, Keng, Mehdi, Andy, Kane, Prabhat, Nikhil, Nick, Nathan, Jono, Adam, and Hieu. Suddenly, with everyone making the smartest individual decision, there are over 200 cows on land that can only support 150, and the small individual cost envisioned by each person becomes a catastrophic cost to all. Hence the tragedy of the commons.
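If it helps to see the payoff asymmetry in numbers, here is a minimal sketch in Python. It is not from the chapter; the linear yield penalty and the specific figures are my own assumptions, purely to illustrate why each extra cow looks rational to its owner while being ruinous collectively.

```python
# A toy model of the grazing commons: 15 farmers on land that supports 150 cows
# at full yield. Once the commons is over capacity, every cow's yield degrades.
# The penalty rate is an arbitrary assumption, purely for illustration.

CAPACITY = 150
PENALTY = 0.01  # fraction of per-cow yield lost for each cow over capacity (assumed)

def yield_per_cow(total_cows: int) -> float:
    """Milk per cow, degrading linearly once the commons is over capacity."""
    overage = max(0, total_cows - CAPACITY)
    return max(0.0, 1.0 - PENALTY * overage)

def farmer_milk(own_cows: int, total_cows: int) -> float:
    """Total milk one farmer gets, given their own herd and the overall herd."""
    return own_cows * yield_per_cow(total_cows)

# Everyone cooperates: 10 cows each, 150 in total.
print(farmer_milk(10, 150))   # 10.0 units of milk per farmer

# Barry defects alone with 2 extra cows: a big gain for him, a tiny loss for the rest.
print(farmer_milk(12, 152))   # Barry: ~11.8
print(farmer_milk(10, 152))   # everyone else: 9.8

# Everyone defects and the herd balloons past 200 cows: worse for all.
print(farmer_milk(14, 210))   # ~5.6 each, well below the 10.0 from cooperating
```

The exact numbers don’t matter; the shape of the incentive does. Barry’s defection is always locally rational, yet the all-defect outcome is worse for everyone than cooperating.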

There is a lot of debate about the reality of this scenario, given that with a little bit of communication and some social rules it could easily be avoided. For example, when Barry added extra cows, Steve could have named and shamed him on the Farmbook social media platform.

But for our discussion we can leave the many debates on this concept behind and just consider where we might see similar behaviour in technology. If you have a small piece of land with large cows then coordinating might be easy, but in the complex world of technology it is often hard to even know that a tragedy of the commons is happening.

Shared resources and BAU

One area where we might see this happening is with shared resource models of various types. I tend to look at what behaviour is being incentivised, either explicitly or implicitly.

Perhaps your company has some type of recharge model, where resources working on initiatives are recharged to the requesting business unit. I’m not a fan of it, as the company ends up doing business with itself; it is a bad game. But sometimes that is what you must deal with, and to be honest you need some type of limiter and prioritisation approach, or the tragedy just gets worse. Often a company will realise it is impractical to charge for every small change, so perhaps there is a common pool of time for business-as-usual (BAU) changes under a certain complexity or cost. The common pool often also covers basic technology upkeep tasks like software upgrades and delivery improvements like DevOps.

However, people are smart and realise the best move in this bad game is to chop their changes up into small requests that count as BAU and are therefore free, meaning the cost is hidden and shared by all: a relatively big upside and a very small downside for each requestor. Of course, if everyone makes this smart move, then the whole BAU resource gets overwhelmed, and worse, there may be no time left for upkeep tasks. We shouldn’t confuse this with a sensible product approach using a Kanban, where work is pulled through by teams in small units; that is a good approach, and different from the BAU scenario above.
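To make that arithmetic concrete, here is a small, hedged sketch in Python. The pool size, free threshold, and request sizes are entirely made-up numbers, not taken from any real recharge model; the point is only to show how splitting work into under-the-threshold requests quietly exhausts a shared pool.

```python
# A toy illustration of a shared BAU pool being consumed when teams split large
# changes into "free" small requests. Pool size, free threshold, and request
# sizes are made-up numbers for illustration only.

BAU_POOL_HOURS = 400          # shared monthly pool (assumed)
FREE_THRESHOLD_HOURS = 16     # requests under this size are not recharged (assumed)
UPKEEP_HOURS_NEEDED = 120     # upgrades, patching, delivery improvements (assumed)

def submit_as_bau(change_hours: int) -> list[int]:
    """Chop one large change into requests that each slip under the free threshold."""
    chunks = []
    remaining = change_hours
    while remaining > 0:
        chunk = min(FREE_THRESHOLD_HOURS - 1, remaining)
        chunks.append(chunk)
        remaining -= chunk
    return chunks

# Ten business units each quietly push an 80-hour change through as "free" BAU.
requests = [chunk for _ in range(10) for chunk in submit_as_bau(80)]

consumed = sum(requests)
left_for_upkeep = BAU_POOL_HOURS - consumed
print(f"BAU requests consume {consumed}h of a {BAU_POOL_HOURS}h pool")
print(f"Hours left for upkeep: {left_for_upkeep}h (needed: {UPKEEP_HOURS_NEEDED}h)")
```

Every individual request looks harmless, but in aggregate the pool is consumed twice over, and the upkeep work it was meant to protect never happens.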

Technical debt

“The rational man finds that his share of the cost of the wastes he discharges into the commons is less than the cost of purifying his wastes before releasing them. Since this is true for everyone, we are locked into a system of ‘fouling our own nest’ so long as we behave only as independent, rational, free-enterprisers”

(Hardin 1968: 5).

Importantly, the tragedy of the commons is not only relevant to resource usage, but also to pollution of an environment. For example, if you discarded your personal wastewater into a local stream, it probably wouldn’t cause an issue, but if the whole town made the same decision it would, which is why there are laws prohibiting such behaviour.

For strategy and architecture, particularly architecture, this is a good analogy for technical debt (tech-debt). As architects we tend to talk a lot about tech-debt, though I’d question whether we really have a clear and coherent understanding of what it is. Not the tech part; it is the debt part. At best it is mostly a doubtful debt, but if your starting position is a doubtful debt then it is probably just a bad loan. And if you don’t have a choice about making the loan, and it won’t be paid back, then I think that might just be a robbery. It is a thought-provoking analogy. There are times when people personally profit by incurring tech-debt that allows them to meet incentive targets. Incentives can lead to undesirable and unforeseen outcomes. Be careful how you raise this issue. Often people are behaving in a logical manner given the situation. It might be, for them, the best move in a bad game.

Anyway, let’s just go with the idea that we want to minimise technical debt. I think we can agree we don’t want to pollute our system with bad architecture. However, we do often get pressured by business stakeholders and project managers to accommodate tech-debt. Now, I often argue that at times it makes sense to accommodate tech-debt, and if it does make sense for the business, then is it really tech-debt? But for this discussion, we are talking about the many situations where individuals are incentivised to get their tech-debt accepted so they can get their initiatives in cheaper or quicker. The accumulated effect of everyone doing this does indeed pollute the system, and does lead to system instability, cost, and lack of agility.

Better countermeasures

Elinor Ostrom shared the Nobel Prize in Economics for her extensive work on how various societies have managed this problem for hundreds of years using a variety of countermeasures. Ostrom looked at what these independent societies did and defined eight principles that describe the characteristics of these various management practices.

Some of these principles will feel familiar to us, but some probably won’t; either way, their nuances can be unpacked and we can refine our approach.

Principle one certainly should resonate, and that is to define clear group boundaries. Now there are many levels at which this can apply, but think of it in terms of resource allocation and environment management (including the architecture of the environment, system, or ecosystem). For architecture, delivery of true microservices mapped to a product ethos seems to fit well. Ideas like team topologies and clear architectural domain boundaries can be useful. These boundaries must include consideration for business stakeholders, funding, and prioritisation.

Principle two is easier if we get the first principle right, as this principle is to match rules governing use of common goods to local needs and conditions. This is basically ensuring that the rules match the intent of the boundaries for resource allocation and environment management.

Principle three is to ensure that those affected by the rules in principle two can participate in modifying the rules. Again, with microservices and a product ethos we talk about self-governing teams, but it is hard. The wording here is that people can participate, which doesn’t mean we need a consensus, as that is just not realistic.

Principle four requires high-level executive support, as it is to make sure the rule-making rights of community members are respected by outside authorities. That means these rules need to be part of a wider technology strategy and governance approach. This is because we can’t just go it alone within our team, as the corporate pressure will build from outside.

Principle five requires us to develop a system, carried out by community members, for monitoring members’ behaviour. This can be done with governance, but it goes further, as it is also a culture of holding ourselves and each other to account. It is important to find a way to not allow this to create technology versus business friction, or architecture versus delivery friction. This is where cross-functional teams can really help.

Principle six might not be easy to apply, as it prescribes using graduated sanctions for rule violators. The key here is to tone down the word sanctions. That is probably not what we need in most corporations, although in some regulated industries there can be very real sanctions for certain technology missteps, so this can flow down. But generally, we’d be using softer approaches like nudging, mapping to performance measures, and other methods to encourage appropriate behaviour.

Principle seven will be applicable, as it is to provide accessible, low-cost means for dispute resolution. In this regard, low-cost is in terms of time, effort, and political repercussions. This is probably going to look something like an architectural review board, but it is important to minimise how often this body, or person, needs to make a disputed final call. Disputes can be viewed as coaching opportunities, because to the extent possible you want teams to self-govern within guardrails of appropriate decision making.

Principle eight rounds this out to an enterprise scale by building responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system. This is certainly a key area for enterprise architecture to add value, which of course needs to be linked to a clear strategy.

No magic

The eight principles above can help avoid the tragedy of the commons in terms of behaviour in decision making; however, they don’t magically make decisions easy or make trade-offs evaporate. The eight principles help bring the hard part of a decision into the open, but it is still hard. If you have a compliance change to a legacy system and, to meet the timeline, you have to add more technical debt, then that is still likely to happen because it might still be the only feasible trade-off. No magic.

Kind regards

Michael D. Stark
