As is the case with most complex computing, the theory precedes the practical.
All the way back in the 1940s, American neurophysiologist Warren McCulloch and self-taught logician Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity", where they outlined how to achieve massive computing power by connecting a series of artificial "neurons".
This built off the groundbreaking and underappreciated work of Alan Turing but was the first time that a true artificial neural network was outlined, and kicked off the path to creating a machine that could "think" like a human could.
Every idea needs branding, and it's often taken for granted that we have unified a single discipline under the umbrella of "Artificial Intelligence".
Recognised as one of the founders of AI research, John McCarthy was preparing for the seminal event, the Dartmouth Conference, when he coined the term "artificial intelligence" in one of his proposals.
This definitive name and the Dartmouth Conference as a whole (a 6-8 week brainstorming session on "thinking machines") kicked off the field of AI, fully putting the works of McCulloch, Pitts, Descartes, Turing and a host of history's greatest thinkers into overdrive, and setting the scene for one of humanity's greatest achievements.
Humans are obsessed with talking.
So it makes sense that one of the first big pushes in the fledgling field of AI was to build a robot that could talk back to us.
And so ELIZA was born.
Joseph Weizenbaum built a program at MIT that was capable of listening to inputs from a human and simulating a response back to them using pattern matching.
I use the word "simulating" there as ELIZA actually had no way to contextualise or "understand" anything that was being said, and had no "database" of information that it was referencing. In this way ELIZA wasn't really AI at all, but it was such a huge step in the field of computing, and in the act of creating a "talking machine", that it recently won a legacy Peabody Award.
Instead of a real attempt to create a thinking machine, ELIZA was built to be a parody of conversation between man and machine, and to highlight the superficial nature of talking to computers. ELIZA analysed inputs for keywords, assigned values to those keywords, and then generated an output based on those values, which computationally isn't that hard.
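To make that concrete, here is a minimal sketch of that kind of keyword-and-template matching. This is purely illustrative Python, not Weizenbaum's actual program (which was written in MAD-SLIP and used much richer, ranked scripts), and the rules below are invented for the example:

```python
import random
import re

# Toy ELIZA-style rules: a keyword pattern mapped to canned response
# templates. {0} is filled with whatever the pattern captured, so part
# of the user's own input gets echoed straight back at them.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r"(.*)", ["Please, go on.", "I see. What does that suggest to you?"]),  # catch-all
]

def eliza_reply(user_input: str) -> str:
    """Return a canned response based on the first matching keyword pattern."""
    text = user_input.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please, go on."

print(eliza_reply("I am worried about thinking machines"))
# e.g. "Why do you think you are worried about thinking machines?"
```

No understanding, no database, just string matching, which is exactly why Weizenbaum found the reaction to it so alarming.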
The human response to ELIZA, however, was incredible.
Despite ELIZA's creator asserting the opposite, people ascribed intention and intelligence to ELIZA, with Weizenbaum's own secretary asking him to leave the room so she and ELIZA could have a private conversation.
"I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
If you fancy it and promise you won't fall in love with it, the source code for ELIZA was actually found and replicated, so you can now chat to an ELIZA clone!
As much as we like to talk, we also like to not have to do things for ourselves.
This is why a large part of AI is attempting to automate tasks we lowly humans usually have to do, and physical tasks are no different. The first breakthrough for this came in 1970, when Shakey was created/born (depending on your philosophical views on the soul).
Created/born at the Stanford Research Institute, Shakey was a physical robot that was able to reason about its own actions, and had the ability to break goals down into smaller, manageable tasks, much like a human would.
Previous machines had to be explicitly instructed on each step to complete a task, but not Shakey: Shakey could knock down blocks all by itself, just like a real person!
This was a monumental step in AI's history.
A core tenet of the discipline is creating a machine that is able to problem-solve and reach a solution by itself, rather than having the route to that solution hard-coded in.
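For a rough feel of what "breaking goals down" can look like, here is a toy backward-chaining planner in Python. It is only loosely in the spirit of the STRIPS-style planning developed for Shakey at SRI; the actions, facts and block world below are invented for this sketch:

```python
# Toy actions in a Shakey-ish block world (invented for this example):
# each action lists the facts it needs and the facts it makes true.
ACTIONS = {
    "go_to_block":  {"needs": set(),            "adds": {"at_block"}},
    "push_block":   {"needs": {"at_block"},     "adds": {"block_pushed"}},
    "topple_block": {"needs": {"block_pushed"}, "adds": {"block_down"}},
}

def plan(goal: str, state: set, actions: dict) -> list:
    """Work backwards from a goal, recursively planning for any unmet preconditions."""
    if goal in state:
        return []  # already true, nothing to do
    for name, action in actions.items():
        if goal in action["adds"]:
            steps = []
            for precondition in action["needs"]:
                steps += plan(precondition, state, actions)
            state |= action["adds"]  # the action's effects now hold
            return steps + [name]
    raise ValueError(f"no action achieves {goal!r}")

print(plan("block_down", set(), ACTIONS))
# ['go_to_block', 'push_block', 'topple_block']
```

The point is not the (very silly) block world; it is that the sequence of steps is worked out by the program from the goal, rather than spelled out in advance by a human.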
It is not enough for us to simply make robots, we must play games with them.
Chess has historically been seen as the domain of the ultra intelligent (which explains why I am terrible at it), so it's no wonder that a large amount of computing power (and resources) went into building an AI that could play chess better than any human alive.
Given its well-defined move set and logical strategy, chess is the perfect domain for a computer to learn, but progress on beating human players was relatively slow.
In 1956, a computer beat a novice chess player with simplified rules.
In 1967, a computer beat an MIT professor.
In 1981, a computer beat a chess master in tournament conditions and became the first artificial mind with a chess rating.
In 1989, a chess machine called Deep Thought finally beat a chess grandmaster, although Garry Kasparov mounted a valiant comeback for the humans, beating the machine pretty comfortably. The machines clearly remembered that because in 1997, Deep Blue put the debate to bed by beating Grandmaster Kasparov 3½-2½ in official match play.
We now had machines that could best us in intellectual strategy, which is not, I repeat not, a harrowing foreshadowing to the end of our species.
Ever since Shakey was knocking blocks off of tables for some reason, we as a species have had a bold dream. And that dream is "god, wouldn't it be great if a robot could drive me places so I could sleep or eat chicken nuggets in the car instead of avoiding pedestrians".
Self-driving cars are a staple of almost all science fiction, and as life tends to imitate art, it's no wonder that billions of pounds (Bitcoin for the robots reading this, idk I'm trying, please don't kill me) have been poured into making cars that can drive themselves.
It should go without saying that this endeavour is obviously higher stakes than teaching an AI a game.
As far as I am aware, if you mess up a chess AI, no one gets run over, although I haven't tried all possible moves so I can't say for certain.
Researchers in Germany in 2010 said screw it, let's have a go anyway, and the Institute of Control Engineering of the Technische Universität Braunschweig created a car called Leonie that drove through Braunschweig's streets without incident.
Since then, the driverless car market has ballooned to be worth billions, and there has been a bit of a concerning software race between companies to create the first "true" driverless cars, which has led to a fair few accidents and mishaps along the way.
"Mishaps" here refers to the high speed smashing of steel, so a rather morbid milestone worth noting here is the first death attributed to an AI decision, which occurred in 2018 when Elaine Herzberg was killed by a self-driving car owned by Uber.
It's important to remember that companies rushing to push AI software out is all well and good when it's trying to beat a person at chess, or add a new social media feature, but when it is something as important as driving and safety, irresponsible business decisions have devastating impacts.
As we hand over more and more control to the thinking machines, more stories like this have popped up and will continue to pop up, and it's important that AI is implemented in the safest way with the largest benefit for all, rather than as a quick buzz for shareholders.
Rather morbid end to that section, let's go back to the lower stakes domain of games.
Chess is all well and good, but what about something a little more complex? Considering chess has around 10 to the power of 123 possible games, the 2,500-year-old Chinese game of Go is vastly more complex, with around 10 to the power of 360.
Perfect for a challenge.
It was only natural that after laying waste to the game of chess, AI (and the researchers behind it) would set their sights on Go, and in 2016, AlphaGo beat a 9 dan professional Go player 4-1 in official play, and was awarded an honorary 9 dan of its own in recognition of its play.
By this point, we had built machines that were strategically far more capable than us, but as long as no one teaches them Risk, we should be fine, right? Hopefully.
We have all seen some version of science fiction where a dude will say "computer, render an eighteenth century ye olde tavern" and somehow, the computer does. We've all seen that exact thing right?
Using AI to generate imagery and video is such a crucial part of human progress because, for some reason, we seem to want to replace all creation of art, you know, the stuff that makes life fun, so that we can have more time to work and do taxes.
OpenAI (the company that made ChatGPT) made one of the first and biggest commercially available image generation applications, cleverly called DALL-E, which allowed users to enter a text prompt and have DALL-E generate an image of that prompt. It wasn't exactly perfect, as it couldn't do hands or teeth for some reason, but this step to create imagery from words is a huge leap for humanity.
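For what "text prompt in, image out" looks like in practice today, here is a hedged sketch using OpenAI's Python SDK. The model name, parameters and exact call shape are assumptions based on the current library, and certainly differ from what shipped when DALL-E first launched, so treat it as illustrative rather than gospel:

```python
# Illustrative only: assumes the modern `openai` Python package and an
# OPENAI_API_KEY set in the environment. Model name and parameters are
# assumptions and may not match the original DALL-E release.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="an eighteenth century ye olde tavern, warm candlelight, oil painting",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # link to the generated image (hands not guaranteed)
```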
It is however, as I previously alluded to, a very strange use of tech.
Why would we want to use AI to automate the creation of art rather than some of the more mundane and frankly boring aspects of life? I don't really know, but I suspect it has something to do with the fact that humans yearn to create art, so we transcribe that over to our machines as well, being more interested in machines that can create or mimic our interests than in ones that do our mundane tasks for us.
And there we have it! A breakneck run-through of gadgets and widgets and doodads that comprise the history of AI!
ChatGPT is by far one of the most impressive feats of engineering in the AI space, but it is so important to remember that developments like this don't just occur overnight.
Behind every instant success, there are decades of innovation and toil that the success is built on.
We tend to individualise achievement as a species, and whilst the team that created ChatGPT obviously deserve the praise (and giant bags of cash) that they are getting, it is a species-wide effort that creates things like this, and I think that's pretty wonderful.