Human Risk Newsletter

Launch Edition 

Estimated reading time: 7 minutes
Welcome to the launch edition of the Human Risk Newsletter, featuring Behavioural Science (BeSci) inspired content chosen to help you better understand and mitigate Human Risk: a term I've coined for "the risk of people doing things they shouldn't or not doing things they should". 
Coming up in this edition:
 
  • Human Risk in action, courtesy of British Airways;
  • why rules can sometimes make things less safe than rule makers might intend;
  • a BeSci intervention from the World Wildlife Fund that encourages us to use less paper;
  • some academic research that highlights a simple way we can improve our chances of sticking to commitments; and
  • a new book on the Science of Storytelling that reveals much about how we process information.

Human Risk in action

To bring Human Risk to life, I'll feature examples of it in action and highlight lessons we can learn from it. 
A British Airways (BA) passenger flight that landed in the wrong country illustrates how a small human error can have a significant impact.
A few weeks ago, I spotted a BBC News headline reporting that a BA flight bound for Düsseldorf had landed in Edinburgh by mistake.
As I often fly with BA, this intrigued me. It's not an error you'd expect an airline of its size and reputation to make.
Though perhaps I shouldn't have been surprised. As a Forbes article explains, it happens more often than you might think, usually because pilots accidentally fly to a nearby airport. 

In this case, however, the two destinations are in different countries and in entirely different directions: one in Germany, the other in Scotland.
BA stressed that passenger safety wasn't compromised and that the error was the fault of WDL Aviation ("WDL"), a third-party supplier from whom BA had "wet leased" a plane. 

Wet leasing is when all aspects of a flight, including crew and logistics, are outsourced to a third party (like WDL). Dry leasing, on the other hand, is when the borrower handles operations themselves.


The WDL crew thought they were flying to Edinburgh and filed their flight plan accordingly. And if you're wondering why the error wasn't picked up by Air Traffic Control, their tweeted explanation was straightforward: controllers work to the flight plan the airline files, so as far as they were concerned, the aircraft was going exactly where it was supposed to.
Which explains why the first either the passengers or the crew knew of the error was when they arrived in Edinburgh; a moment that was recorded and shared on social media. The footage is worth watching, particularly for the moment when the announcer somewhat naively asks the passengers not to film proceedings.
So what did happen? A travel journalist investigated and discovered that the faulty flight plan the crew had been following was identical to the one they'd correctly flown the previous day, which suggests that the cause was basic human error: uploading an old plan.

There are lots of Human Risk lessons here, but I'll focus on two:

Firstly, the "scalability" of the error: given the "back-to-back" nature of short-haul flights, it didn't just impact one flight but had a knock-on effect for the rest of the day. That's a lot of delay compensation they'll have to pay out.

Secondly, people are less likely to speak up if they presume someone else has responsibility for the activity, particularly in a controlled environment. Flying from England to Germany involves crossing the North Sea, a well-known body of water that is visible from the plane. Flying to Scotland does not. Whether any passenger noticed something wasn't right is debatable. But it's highly likely that even if they had, they wouldn't have thought to challenge the crew. After all, the crew are unlikely to get it wrong, and isn't that what you're paying the airline to do?!

Bi-weekly bias

To mitigate Human Risk, we need to understand the processes behind human decision-making. So, in each newsletter, I'll highlight an aspect of how we think: a cognitive bias, a heuristic, or something else I think you'll find interesting.
The Fence Paradox illustrates how rules designed to mitigate risk can actually have the opposite effect.
Humans are often bad at assessing risk, which is why regulators and authorities often put protections in place for us. The Fence Paradox shows why this isn't always a good idea. I'll let Dr Pasquale Cirillo, the author of a paper that explores it, explain:

“When we see fences, we feel safe, and we tend to forget the risks fences are supposed to take care of. We start being less rational, if not irrational. Nothing bad can happen if there is a fence! But it is exactly when we start feeling very safe that actually, we are not.

“If we do not accept the human nature of risk, if we do not understand that risk management is meant to limit unnecessary risks, to mitigate the unavoidable ones, but not to prevent all risks, we are just preparing the next tragedy, the next crisis, the next state of exception.”


Unusually for an academic paper, he's illustrated his words with some rather nice cartoons. The story they tell runs like this:

  • People (and organisations) have different risk appetites; some will be willing to take more risk than others.
  • Some may take too much risk.
  • To prevent the downside of excessive risk-taking, authorities build fences.
  • However, fences make people underestimate risk and modify their perception of it. They feel safe; often safer than they should.
  • As a result, if these fences break, the failure can affect a much larger population, and far more severely, than if the fences had never existed.
In other words, far from helping to mitigate risk, some forms of regulation can actually have the opposite effect. Cirillo's research focuses on banking and the impact regulation has on that sector. However, it has much broader implications.

As BeSci guru Rory Sutherland explains in his blog:

"When rules are imposed centrally, everyone outsources judgment to the central authority, losing the skill, experience and safety-in-diversity that arises from individuals exercising instinctive judgment, learning from small individual mistakes rather than big collective ones".


Speed limits are a good example: when driving, we tend to judge the appropriateness of our speed by reference to them, rather than to the circumstances in which we're driving. 

Readers with risk management responsibilities might want to think about whether any fences they've put up actually increase rather than decrease Human Risk. However, before taking any down, I also recommend reading this article on Chesterton's Fence: the idea that we shouldn't dismantle a fence unless we fully understand why it was put up in the first place.

A BeSci Intervention

BeSci techniques are in use all around us, whether we know it or not. In this section, I'll showcase an attempt to influence decision-making that I've come across; either one that I really like...or one that I really don't.
A World Wildlife Fund (WWF) campaign to persuade people to use fewer hand towels made use of some clever visualisation to illustrate the impact on the environment.
Most people like to think of themselves as environmentally friendly. But the effect of our (in)actions isn't always apparent to us. Recognising this dynamic, the WWF's advertising agency came up with a simple idea to highlight the direct impact of paper towel use on the environment.
In their words:

“To make people realize that saving the planet starts with them saving paper, we took a standard paper dispenser and made a simple modification with green foil and the silhouette of South America...(showing) the survival of the forest is directly connected to what people consume.”

I don't know whether they actually made this dispenser, or if it was just a visualisation. But either way, it's a very effective way of getting the message across.

Something that made me think

To find inspiration and further my understanding, I'm an avid reader of the latest research on cognitive matters. I'll filter the smooth from the rough, so you can spend your time reading about the absolute best. 
The way we phrase things impacts our perception of them. Very small changes can make a big difference. 
We all need to be able to say 'no' to things from time to time. Imagine, for example, that you're trying to lose weight, but you're also a big fan of chocolate cake, and you're offered a tempting slice.
You know you shouldn't, so you say "I can't eat cake". But what if, instead of saying "I can't", you said "I don't"?

That's what some research called “I Don’t” versus “I Can’t”: When Empowered Refusal Motivates Goal-Directed Behavior set out to explore.


You'll be able to guess, from the fact that I'm featuring it, what the researchers discovered. In a series of experiments, they established that saying "I don't" empowered participants far more than saying "I can't".

This appears to be because "don't" evokes an intrinsic motivation, whereas "can't" evokes an extrinsic one. We're most easily persuaded by ourselves, so the former is much more powerful than the latter. Saying "don't" rather than "can't" makes us feel more in control of the reason for saying "no".

If any readers find this works well for them (or really doesn't), do let me know using the link at the end.

Something for the weekend

I'll close each newsletter with a recommendation of something BeSci-related that has inspired me and which I think is worthy of your valuable time. It might be a TED Talk, a podcast, a TV show, a book or even a movie.
My recommendation this time is a new book called The Science of Storytelling.
Storytelling is an essential part of what makes us human. We use stories to teach children about the world, construct our relationships, keep order in our law courts and interpret events in the media. It's also a potent BeSci tool.
So I was intrigued to read The Science of Storytelling: Why Stories Make Us Human and How To Tell Them Better by the aptly named Will Storr.   

The publisher's summary rightly describes it as "a scalpel-sharp, thought-provoking book that demonstrates how master storytellers manipulate and compel us, leading us on a journey from the scriptures to Mr Men, from prize-winning literature to box set TV".
By better understanding how stories are structured, we can start to look at both the tales we tell and those we're told in a different light.

You can hear an excerpt from the audiobook or, if you prefer your content in video format, watch the TEDx talk he gave last year on the same subject.

Until the next time

That's it for this first edition. 

If you've enjoyed reading this half as much as I have researching and writing it, then do let me know. Above all, please feel free to share it with friends and colleagues.  

Had this forwarded to you? Click on the button below to subscribe yourself:

Subscribe
As well as feedback, I would love to hear about any BeSci interventions you've come across; good or bad. For more BeSci content, use the links below to visit the Human Risk website or follow me on Twitter.  

I'll be back with the next newsletter in two weeks.

Christian
Feedback
www.human-risk.com
@humanriskblog
Copyright © 2019 Human Risk, All rights reserved.

