🤔 🖥 WeWork Edition 
Welcome back to your Human Risk Newsletter filled with more lovingly-curated Behavioural Science (BeSci) inspired content. 
First time reading this? Welcome! I recommend you begin by reading the Intro to Human Risk.

Missed a previous edition? Find it in the Newsletter archive.
Not subscribed yet? Be the first to see future newsletters by clicking the button below:
Subscribe me!
Coming up in this edition
1. WeWork gives us a prime example of Human Risk in action;

2. An art project explores the dangers posed by the ways in which Artificial Intelligence is influenced by Cognitive Biases;
3. A BeSci Intervention to support dementia sufferers;

4. Research into Codes of Conduct is Something that made me think; and

5. My Something for the weekend recommendation is a podcast that explores risk.

Human Risk in action

Recent news about WeWork, the office space provider, illustrates Human Risk on a number of levels.
By now, you've probably heard of WeWork (or "The We Company" as the parent company is called).  "We" rents out shared working space and was due to float on the NY Stock Exchange, but has had to delay those plans.

My interest in the company from a Human Risk perspective was sparked by this paragraph in a WSJ article entitled This Is Not The Way Everybody Behaves:
Since then, we've seen investors force out the CEO and Founder following the disclosure of unusual business arrangements that included renting his own buildings to The We Company.
To catch up on the whole story, I highly recommend this Business Insider article. You might think you know all the details, but I'm pretty sure there'll be something you weren't aware of that still manages to surprise you.  

WeWork is symptomatic of the phenomenon of "Unicorns": start-up companies that are (over?) valued in excess of $1bn.  If, like me, you're sceptical that these companies can all be worth that much, then meet Scott Galloway, a digital economy specialist who refers to WeWork as WeWTF and records highly entertaining and insightful videos like this: 
[Reader warning: contains NSFW language]
To help navigate the Human Risks of the Unicorn world, I also recommend reading his No Mercy/No Malice blog.
Other recent displays of second-order thinking that made it onto my Human Risk radar, but didn't displace WeWork as the headline story, include:

the CEO who was charged with allegedly fabricating the existence of a CFO in his company;

the rogue oil trader who racked up losses of $320m; and

illustrating unparalleled idiocy, some “tired and emotional” passengers who missed their flight and tried to chase a plane down the runway.

Back to the top

Bi-Weekly Cognitive Bias

An online art project illustrates the impact human Cognitive Biases have on Artificial Intelligence.

Cognitive Biases are generally associated with Humans rather than Machines. But as Excavating.AI, a recent art project, illustrates, they're arguably more relevant when it comes to Artificial Intelligence (AI).

That's because AI learns from human data sets. If you've ever done one of these reCAPTCHA tests, you're helping to teach the AI that powers Google's self-driving cars:

To help AI cope with the challenges of identifying and categorising people, Princeton and Stanford Universities have built ImageNet, a database of pictures tagged by people.
How the AI learns from this can produce unexpected and undesirable results. That led some researchers to create ImageNet Roulette, an art project that allowed people to upload photographs and see what the ImageNet-trained AI saw in them.

Here's what it made of photos of a recent G7 leaders' meeting and a famous scene from the White House Situation Room:

As these examples illustrate, and as this Guardian article nicely explains, the consequences of AI learning from biased human data are potentially severe. Having made the point they set out to prove with ImageNet Roulette, the researchers have taken the site offline, but you can read more about the Excavating.AI project here.

Readers who live in or are visiting Milan or London between now and February can also see exhibits from the project at the Fondazione Prada Osservatorio's Training Humans or the Barbican's From Apple to Anomaly.

Back to the top

A BeSci Intervention

A BeSci intervention is helping to spread awareness of dementia and change perceptions of those suffering from it.

Major factors in prejudice are ignorance and fear of the unknown. So I loved this BeSci intervention in Japan that seeks to change people's perceptions of dementia.

The Restaurant of Mistaken Orders seeks to spread awareness of the condition and “to make society just that little bit more open-minded and relaxed”.  Here's how it works:

They've also produced a short film of the restaurant in action:

Back to the top

Something that made me think

New research explores how the language used in Codes of Conduct can have a significant impact on whether people comply with them. 

If you work for a company, then you're likely to be subject to a Code of Conduct: a set of rules and principles designed to guide your behaviour. 

While companies typically expend considerable effort on the content of their Code, they often place far less emphasis on the language they use to write it. Some fascinating new research from Maryam Kouchaki, Francesca Gino and Yuval Feldman suggests that they really should.

The research identified that the target audience's perception of the group or organisation they are part of changes according to the language used.

Codes that used impersonal language ("employees/members") resulted in lower levels of dishonesty amongst their target audience than Codes containing personal, communal language ("we"). 

In very simple terms, the "warmer" the group or organisation is perceived to be, the greater the likelihood that transgressions will be forgiven.

Guess which one WeWork uses?

When you've read the highly readable research, do also check out these excellent books by two of the report's authors:
Back to the top

Something for the weekend

A podcast that explores risk looks at Human Risk in its latest episode.
I've been a longtime listener to The All Things Risk podcast, so I was delighted when the host, Ben Cattaneo, asked me to come and talk about Human Risk. 
But don't think I'm just recommending the podcast because I'm on Episode 115 (though clearly that's a good reason to listen!). 

Ben consistently manages to assemble a diverse and insightful guest list to talk about...well, all things risk. It's always worth a listen. You'll find the All Things Risk podcast wherever you normally get yours.
Readers who enjoyed that episode can also listen to my appearances on the Behavioral Grooves and Risktory podcasts.  And look out for the Human Risk podcast, coming soon...
Back to the top

In case you missed it...

In a recent newsletter, I featured the story of Boeing’s 737 Max plane. Last week, the NY Times published more details in this fascinating in-depth piece:

They weren't the only ones to look at the story again. Also worth your time are:

Plus, just when you thought things couldn't get worse for Boeing, here's a story about a new problem with another of their planes.

And in case you're wondering, here's the Boeing Code of Conduct.
Friend of the newsletter Tom Hardin (aka Tipper X) has recently launched a series of one-minute video insights into conduct, ethics and compliance topics. They're highly relevant to Human Risk.

If you're not familiar with Tom's story, then I recommend visiting his website. Then catch his first video covering the concept of "Reduction Words" by following him on Twitter or LinkedIn.
Back to the top

Coming soon...

Final call for October’s Risk Awareness Week: 5 days of free webinars covering a range of risk management topics. Featuring over 30 speakers, including:
Once registered, you can attend sessions live or watch a playback at a time to suit you. 

That's it for this time. Don't forget that you can get regular BeSci updates by following @humanriskblog on Twitter and/or connecting with me on LinkedIn.


Forward to a Friend
The newsletter is brought to you by Human Risk, a Training & Consulting Firm that specialises in the deployment of BeSci in the fields of Risk, Compliance, Conduct and Culture.
Back to the top
Copyright © 2019 Human Risk, All rights reserved.

To manage your Human Risk newsletter subscription use these links:
update my preferences or unsubscribe


Human Risk · Sutherland Street · London, SW1V 4LA · United Kingdom
