News & Events

Chat-CHS: At the dawn of AI, how do we really define intelligence?

Within a week of its launch, Chat-GPT had over a million registered users, and it is currently estimated that over 96 million people visit its website every month. For the uninitiated, Chat-GPT is an artificial intelligence language model that is most commonly accessed as a chatbot. It currently has the internet ablaze, with people envisioning it changing multiple areas of work and the way we live our lives. Every day someone new is trying to push the limits of its capabilities, from programmers writing code with it to users asking it to explain difficult concepts, and even chefs using it to design new dishes. Is this something that will change the way we live? And do we need our students to understand this technology in more detail so they are able to make the best use of it as it grows in prominence and capability?

The history of chatbots dates back to the 1960s and the development of the first chatbot, ELIZA. ELIZA was a computer program created by Joseph Weizenbaum, a computer scientist at the Massachusetts Institute of Technology (MIT). It simulated a conversation between a human and a computer using a simple pattern-matching technique: it parsed user input for keywords, then applied a set of pre-programmed rules to transform those keywords into responses that mimicked a therapist’s probing questions. For example, if a user said “I am feeling sad today,” ELIZA might respond with “Why do you feel sad?”. ELIZA was not designed to provide meaningful or accurate therapy, but rather to demonstrate both the limitations of language processing and the potential for computers to mimic human conversation. Despite its simple programming, ELIZA fooled many users into thinking they were speaking with a real human therapist. However, it was obvious that there was no real “intelligence” in this approach, but how could we measure this in a scientific way?
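The keyword-and-rule approach described above can be sketched in a few lines of Python, the same language our students will be using. The patterns and canned responses below are illustrative assumptions for this article, not Weizenbaum’s original script:

```python
import re

# Illustrative ELIZA-style rules (not the original program): each pattern
# captures part of the user's input, and the response template reuses it.
RULES = [
    (re.compile(r"\bi am feeling (\w+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return the first matching rule's response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I am feeling sad today"))  # Why do you feel sad?
```

Note how little “intelligence” is involved: the program never understands the sentence, it simply reflects a captured keyword back at the user, which is exactly why ELIZA’s apparent humanity surprised so many people.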

In 1950, Alan Turing, the famous British mathematician, proposed a test to determine a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. The test involves a human evaluator who holds a natural-language conversation with two entities: another human and a machine. The evaluator does not know which of the two they are speaking with, and must determine which is the machine based solely on the conversation. If the machine convinces the evaluator that it is human, it is said to have passed the Turing test. The test has been the subject of much debate and criticism over the years, with some arguing that it sets an impossibly high bar for machines, while others argue that it fails to capture the full range of human intelligence. Based on the Turing test, the Loebner Prize is an annual competition in artificial intelligence that tests the ability of a computer program to exhibit human-like intelligence in conversation. Since the prize’s inception in 1991, no chatbot has won the grand prize of a gold medal and $100,000; however, several have come close, and in some years a cash prize has been awarded to the best-performing chatbot. It is not clear whether Chat-GPT, or a version of it, will be entered into this competition, as its goal is not necessarily to prove intelligence but to provide a robust, usable natural language model.

Next term some of our students will be working on creating their own chatbots using the skills they have gained in the Python programming language. We will look back at some chatbots from the Loebner Prize, identify the techniques they used to try to win, and hopefully incorporate some of these into our own designs. If we are able to simulate human-level conversation (which remains to be seen), then move over, Chat-GPT, and make way for… Chat-CHS!

Note: Can you tell which parts of this article were written by me and which were written by Chat-GPT?