How To Handle Risk In Tech

24-11-2023
#tech

What a year to be an individual named Sam in technology.

Sam Altman, the AI messiah and future favourite human of the robotic overlords, suffered the indignity of the world's most public sacking from tech darling OpenAI, followed by an even more public return to the helm of one of the most important companies in human history.

Coming seemingly out of nowhere, Sam’s departure (using the word departure here feels strange, like saying that guy in the helicopter scene in Scarface ‘departed’ the chopper) spurred a flurry of speculation as to exactly why Altman was given the heave-ho, with the public only being offered the tantalising titbit that he hadn’t been “consistently candid” with the board.

It’s now become sort of clear (there is still a lot of mystery behind the exact machinations of what happened) that the non-profit board believed Sam was taking too many risks in the pursuit of profitability, and that this rankled them enough to remove him, especially as they believe we are closing in on AGI: a machine that is smarter than a human being.

This got me thinking.

Here we have one of the most cutting-edge companies in the world removing their visionary founder for being too innovative and risky. It’s like an F1 team removing a driver for taking a turn too quickly and replacing them with my nan in her Vauxhall Corsa.

But maybe, just maybe, they were right. 

AI is such a revolutionary technology that perhaps we should be slower with its journey. We literally just came off a blockbuster movie specifically about the folly of moving too quickly with impactful tech, one where we made Cillian Murphy look like he’d found the family cat's collar in the stew for four hours, and it’s now been revealed that the mysterious Q* project within OpenAI is worrying some of their top researchers.

So how do we balance this? How do we move forward into the brave new world without turning it into the Brave New World? Let’s have a look at risk in technology and the role it plays, when it’s acceptable and when it isn’t.

What is risk in information technology?

Financial risk

The most garden-variety risk in technology is financial risk. When you are ploughing money (your own, your investors', and your great aunt's rainy day fund) into an unproven idea, there’s a tonne of financial risk involved.

The VC game is literally built on risk and reward, and with every startup that goes kerplunk, it takes a chunk of change with it.

Investor and founder money is usually the most at risk, but in some special cases customer money can be at risk too, like in the FTX saga, where retail investors and lunch-pail Louies lost a whopping $8 billion to what was complete fraud. Sidenote: nice to not have to use the word “allegedly” there anymore, win for word count!

Physical risk

Not all tech innovation is purely software; a lot of innovation is in the physical hardware space, like self-driving cars and autonomous warehouse robots.

With these comes a level of physical risk, as was evidenced by the Cruise robotaxi incident that happened a few weeks ago. After miscalculating how to respond to a collision, a Cruise robotaxi dragged an injured woman 20 feet under the car, eventually pinning her down.

Physical risk is an incredibly important risk to manage, especially with self-driving cars, as there is an issue of non-consent amongst bystanders (which we will get to in a bit).

Mental risk

Tech platforms are communication platforms, and with communication, there is always a risk of mental harm befalling users. 

Sounds verbose, but one glance at the impact social media apps have on young people's mental health, or the effect the rise of gambling streams has on the behaviour of impressionable people, shows you that when tech is pushing new boundaries in communication, the mental impact on its users needs to be considered.

Societal risk

This one only applies to the really big fish, the OpenAIs and the Facebooks. 

When you are completely forging a new path, you can be so disruptive you actually alter the fabric of society with your invention.

Facebook did that in the mid-2000s, eventually ballooning to a market cap of over a trillion dollars, and OpenAI is doing it for the 2020s. With great power comes great responsibility, and reckless wielding of this power could be the reason that Sam Altman was removed from OpenAI.

This is possibly the biggest risk category for tech companies, and often the most overlooked as it isn’t quite as quantifiable as a rogue warehouse machine hitting a human coworker with a 2X4. 

But make no mistake, founders on a mission to change the world should be evaluating how they are going to make sure the “change” isn’t us all being turned into gumbo soup by Spot the robot dog.

When is risk acceptable in information technology?

Now that we’ve outlined the types of risk that are prevalent in producing new technology, we need to look at when risk is acceptable in this process.

If we avoided all risk, we wouldn’t have any innovation, so it’s important for companies at the bleeding edge to know when risk is acceptable, and I have four criteria for “acceptable” tech risk.

Obtain consent

This is possibly the biggest factor when considering what risk is acceptable for a start-up or technological innovation.

Consider FTX. 

We all know that investing money into speculative assets like cryptocurrency is like ordering the dish at your local takeout that has the three chillies next to it. 

We consent to this risk, and FTX shouldn’t and can’t be held liable if our funds take a nosedive.

However, FTX was also taking customer deposits and gambling them on questionable investments. Customers didn’t consent to this risk at all, and that’s where it breaches the ethical line.

When building your technological innovation, it should be of paramount importance to consider the level of consent from consumers and stakeholders, and I don’t just mean burying some dodgy stuff in your terms of service and hoping for the best.

Take time to consider what your consumers have signed up for, and if you find your product is infringing on consumer blindspots, then it’s time to rein it in.

Manageable

That brings me onto my second point, and another crucial factor to consider when evaluating technological risk.

Asking yourself whether your team has the ability to manage the potential risks of your tech is a hard thing to do, and you have to be brutally honest, but it’s important.

Take, for example, social media sites. 

A risk with these is negatively impacting users' mental health or, as we are seeing with Elon’s X, promoting hate speech.

For the leadership of these sites, a key ingredient in working out whether this risk is acceptable is whether you can mitigate it without damaging the entire value proposition of your site. For example, if you remove the addictive nature of your recommendation engine, or start banning people for hate speech, does your platform still make any sense as a viable comms medium?

If the answer is no, then the risk may not be worth it.

Do you know all of the risks?

I’m going to do a first for my website and quote Donald Rumsfeld when he said:  

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.”

Awareness of risks is incredibly important, and can often boil down to your team composition. 

If your team is an echo chamber, you can miss incredibly important risks that may be apparent to others in the know. 

This is why I’m not as harsh on the OpenAI board as the rest of the tech world is. Their actions were rash and ill-thought-out, but it’s important to have dissenting opinions, especially ones not motivated by money, when tackling risk in tech.

An example of this comes from the Elizabeth Holmes saga, where she somehow surrounded herself with VCs and army generals, and didn’t listen to many of the actual blood scientists (phlebotomist just doesn’t sound as cool, sorry) who would have informed her of the risks of trying to invent science out of thin air whilst doing a Steve Jobs cosplay.

If your team are suspiciously quiet about risks and everything is Cocoa Puffs and rainbows, it may be that you aren’t able to foresee risk, and need to pump the brakes a bit and bring in fresh blood (pun intended).

Reversible

“No harm, no foul.”

Not just what I say when I let off a flare during my weekly five-a-side game to distract the opposition.

If you can feasibly reverse the harm caused by your risk, then there is an argument to be made that the risk is agreeable.

However, a crucial caveat is that you can’t mix and match your risk types for this one.

For example, if you are investing consensually with customer deposits, it can be considered a reversible risk as long as you have the liquidity to repay those customers if it all goes haywire.

However, if you are a robotics company, and the risk is that your robot may accidentally injure someone if you release it too soon, you can’t reverse this financially. 

This is why it’s important to bucket the types of risk you are engaging in, so you can better prioritise fixes and mitigations.
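
To make that bucketing idea a bit more concrete, here’s a rough sketch of what it could look like in code. Everything in it — the categories, the yes/no criteria, the weights — is my own illustrative invention rather than an established framework, but it shows how the four criteria above could be turned into something you can actually sort a risk register by.

```python
# Purely illustrative: bucket risks by type, score them against the four
# criteria (consent, manageability, awareness, reversibility), and sort so
# the scariest ones float to the top of the backlog.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    category: str          # "financial", "physical", "mental" or "societal"
    consented: bool        # have the people exposed actually agreed to this?
    manageable: bool       # can we mitigate it without killing the product?
    well_understood: bool  # a known known, or a lurking unknown unknown?
    reversible: bool       # can the harm realistically be undone?

def priority(risk: Risk) -> int:
    """Higher score = needs attention sooner. The weights are arbitrary."""
    score = 0
    if not risk.consented:
        score += 4  # non-consensual risk breaches the ethical line
    if not risk.reversible:
        score += 3  # physical and societal harms usually land here
    if not risk.well_understood:
        score += 2
    if not risk.manageable:
        score += 1
    return score

risks = [
    Risk("Robot injures a warehouse worker", "physical",
         consented=False, manageable=True, well_understood=True, reversible=False),
    Risk("Investing customer deposits goes sour", "financial",
         consented=True, manageable=True, well_understood=True, reversible=True),
]

for r in sorted(risks, key=priority, reverse=True):
    print(f"[{priority(r)}] {r.category}: {r.description}")
```

The point isn’t the numbers; it’s that writing risks down in buckets forces you to answer the four questions for each one, and the non-consensual, irreversible ones rise straight to the top.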

How do we build cutting-edge tech without taking unnecessary risk?

Now we have a bit of a framework for analysing risk, which is awesome, but we need a way to use this framework to make actual business decisions.

How do we begin to evaluate risk successfully, and push boundaries as safely as possible?

The answer, I think, lies in three parts.

Diverse opinions

I am a huge proponent of diversity in tech.

Not just on the grounds that, you know, it’s morally right, but because I believe the more diverse the conversation participants are, the more complete and sturdy the resolution that comes out of the conversation will be.

Let me give you an example.

Say you run a company that sells robotics to warehouse operators.

You hold an in-house meeting to discuss risk, composed of yourself, the COO and your buddy Dave, who shouldn’t really be there but he has a boat and it’s coming up to summer.

Being from robotics backgrounds, you figure that the greatest risk is the robots malfunctioning in some way and causing physical harm.

This could well be true, but consider the outcomes if, instead of just an in-house committee, you invited workers from one of your clients’ warehouses to discuss safety implications.

You could then realise that, due to union obligations, the warehouses you sell to are in danger of triggering a workers’ strike, crippling revenue and resulting in tonnes of bad press.

This could cause you to develop a new service, training workers who may have parts of their jobs automated by the robots to be able to fix the robots or work on their installation, providing a safe (and profitable) way to mitigate this risk.

This is a narrow example, but it highlights how broad risk discussion groups need to be to cover bases and make sure you aren’t blindsided by problems.

Inform regulation

I typed the word regulation, and fifteen of the tech heads in the coworking space I’m in spun around like the Invasion of the Body Snatchers.

Within tech, regulation is often seen as stuffy bureaucrats halting innovation to implement wildly out of touch limits on promising tech, and often, this can be the case.

Though a large part of this, I believe, is due to the fact that tech so vehemently resists regulation in the first place.

I think there’s a middle ground.

Industry leaders should be incentivised to collaborate on building guidelines for what unethical and risky practice looks like, but the key is that everyone (not just one side) has to cooperate. If you just get one leader, they’re most likely going to say they’re completely swell and their main competition is the risky one.

Also, shaping regulation collaboratively with lawmakers could save firms a lot of money, with the latest estimates suggesting AI organisations spent $957 million lobbying the government to prevent regulation.

By encouraging cooperation, with the tech industry helping to inform regulation rather than resisting it, we can more clearly outline the risks of technology.

Greater emphasis on risk evaluation

When I tried to defeat my crippling Biscoff spread addiction, the court-mandated psychotherapist I was assigned after being discovered like a wild raccoon in a supermarket at 4am imparted upon me a nugget of wisdom.

The first step in quitting is wanting to quit.

Risk evaluation should become commonplace in technological circles. And I’m not suggesting that your second hire should be an energy vampire compliance lawyer. Instead, entrepreneurs should take greater pride in safe products with an appropriate level of risk. I get that doesn’t sound sexy to a lot of entrepreneurially minded folk, but it didn’t to the guy who made the Titan submersible either, and look how that ended up.

To conclude (I know my year 9 English teacher is screaming right now at the fact that I just used that phrase), I believe risk is inherent in technological progress, and it shouldn’t be something that is shied away from or avoided. Instead, it should be quantified and evaluated early, so we can keep seeing breakthroughs in a way that is safe and beneficial for all!
