Why Are We Humanising Machine Intelligence?


It’s rich that the man who once tried to build a water park in his parents’ back garden with nothing but a hose and a dream is talking about intelligence, I know.

But stick with me.

Months and months ago, you may have seen the story of Blake Lemoine, a Google engineer who was fired after claiming one of their AI chatbots was sentient, saying it had the same intelligence as an eight-year-old human being, mainly because it kept saying it wanted to build a waterpark in its parents’ garden with nothing but a hose and a dream for some reason.

At the time this was all over the news, with people worrying about the dawn of Skynet.

I always saw it differently, because I’m the tech equivalent of a pick-me girl, which I am now realising neatly explains my entire website.

Can an AI be as smart as a human?

The phrase “same intelligence as an eight-year-old” really stuck in whatever the British equivalent of my craw is.

How can thousands of lines of code and a UI possibly be compared to human intelligence?

You see this phenomenon everywhere with AI.

AI “passing” the bar exam. AI “apologising” for mistakes. AI “gaslighting” people. All of these things are acutely human behaviours that we have trained a machine to mimic. 

A machine has no use for an MBA exam. A machine has no use for friends. A machine has no use for vocal tics. 

These are all completely superfluous characteristics that we ascribe to the AI, or measure it by, not because the AI wants or needs them, but because we want the AI to have them.

For example, an examination like an MBA exam is a purely human concept of memory and recall, based on human mental limits, and focused towards demonstrating and rating skills humans deem necessary for certain vocations.

This is completely unsuited for an artificial intelligence with all of humanity’s stored knowledge at its fingertips, in the same way a footrace is a bad way to judge the speed of an eagle.

Given that AI isn’t queuing up to take exams or an IQ test on its own, that must mean this activity is driven by us.

We are obsessed with using our own intelligence level as the benchmark for all other forms of intelligence. 

In a testament to human arrogance, whenever we perceive something as “intelligent” we make our own intelligence the metric to measure it by, e.g. “it’s twice as smart as a human”, “it’s as smart as a human eight-year-old”, “it’s definitely way smarter than Matt, I caught him trying to drink printer ink the other day”.

We do this with animals as well, the most egregious being my favourite animal, the octopus.

We constantly compare octopus intelligence to human intelligence, but octopuses have three hearts, no skeletal structure, live their entire lives underwater and can change colour.

How on earth are you going to compare that to a human in any meaningful way?

Your 401k and modest yet performative stock portfolio mean absolutely nothing to a creature that flies around its domain in three dimensions, changes colour and genuinely could be from Mars.

An IQ test is wildly inapplicable because octopuses have a completely different lifestyle, genetic structure, and set of aims and wants, and they experience whatever it is we call “life” in a way that is incomparable to ours.

It should be obvious by now, because I have explained it so deftly, that we use human tests and measurements against any “other” intelligence. But the really scary stuff is when you consider: why?

Why do we compare AI intelligence to our own?

I think it’s because we put ourselves at the top of the pyramid (I actually wrote “pyramind” by accident here at first, and almost convinced myself to try and coin a phrase, but decided to put it in brackets instead, the coward’s compromise) and we are worried about losing the top spot.

Being “smarter” than everything we know of, and self-appointed champions of planet Earth, means we justify some pretty egregious behaviour in the name of our superiority. And because of this association between intelligence and cruelty, we see higher intelligence as a threat, so we constantly compare and contrast it against our own to see how much danger we are in.

There’s a great theory that our terror of aliens is a projection of the things we know we have done to each other whenever one group of people was more technologically advanced than another. I think this extends to artificial intelligence as well.

We sit alone atop a pyramid we control, wearing our makeshift crown, obsessively checking the intellect of things that do not respect or care about our stupid tests and made up titles and monikers.

When it comes to AI, we are better off acknowledging what it is, a very advanced content creation machine built on our collective work, than trying to draw human-like meaning from the ramblings of a spicy autocorrect app.

Yes, it is “smarter” than you and me in a lot of ways.

But there are things that you have within your realm of experience that cannot be mimicked.

Experiencing joy. Connecting with other people. Going up to bat at a cricket game and overhearing someone tell the fielders to move back. These are all things that AI could never mimic or experience.

So hug your friends, drink a beer, write a stupid blog. It’s what makes you human, and that can’t be taken away by an AI or a hyper-sentient octopus with an MBA diploma that is making its way through your walls right now to get you.

Sweet dreams!
