By: Bernd van Ruremonde
Introduction
Mary Shelley’s Frankenstein is a novel known by many and widely adopted into popular culture. The tale follows Victor Frankenstein, a man captivated by the force of life. He aims to recreate this force by forming his own sentient being, sewing together dead body parts and imbuing them with lightning, whereafter his creation springs to life. While the story is fictional, many aspects of it are edging closer to reality, with scientists on a quest to build an artificial consciousness. Fundamentally, these scientists are attempting the same thing as Victor: creating intelligent life. Science, however, is focused on the ‘intelligent’ part.
The internet has been full of it lately: generative artificial intelligence, often simply referred to as AI. The rise of OpenAI’s ChatGPT has turned a large part of the world upside down, with policymakers scrambling to draft new laws around generative AI. And yet, ‘AI’ is increasingly becoming a buzzword, with old technologies getting the label slapped on just to sound more modern. So what really is AI? And why are so many people afraid of it?
Background
Firstly, there is an important distinction to be made: AI can be classified as ‘strong’ or ‘weak’ (Searle, 1980). This distinction is not made on the basis of computing power, but on the fundamental idea behind the system. ‘Strong’ AI has the goal of understanding the human mind as a computational device, and possibly recreating it. ‘Weak’ AI, on the other hand, is simply the practice of developing intelligent machinery or software that solves problems (Liu, 2021). While the difference might seem slight, the two approach the field from opposite directions.
Alan Turing, famous for his work in cracking the Nazis’ Enigma cipher and for his great contributions to the field of computer science, was one of the first people to approach AI from the strong side. While most researchers of his time focused on building rigid but powerful machines, Turing proposed the concept of a ‘Child Machine’ - an unorganized, simple system of connections between simple nodes that would learn and develop intelligence like a human child (Sterrett, 2012). This concept would later develop into what we now know as neural networks.
Turing (1950) also devised a thought experiment: the Imitation Game, now better known as the Turing test. Its purpose was to determine whether a machine could be considered intelligent. The test involves three agents: a human evaluator and two conversation partners, one of which is a computer program. Both partners talk to the evaluator, and if the evaluator cannot reliably tell which one is the human and which one is the machine, the machine ‘passes’ the game and is considered intelligent.
Nowadays, the Turing test isn’t seen as very reliable. ChatGPT, for example, has demonstrated the ability to pass it (Biever, 2023), yet experts agree that the chatbot is neither intelligent nor conscious (He et al., 2023). Turing’s ‘Child Machine’, however, did catch on: neural networks are fundamentally the same idea.
When did the neuron turn into a node?
Before we delve into this, let me quickly give you some definitions. A neuron is a biological nerve cell that generates, transmits, or receives information in the form of electrical or chemical signals, found in the central and peripheral nervous systems. A node is an artificial interpretation of a neuron, usually represented by a simple mathematical formula: a weighted sum of the node’s inputs is passed through an activation function (classically a sigmoid, for those interested) that decides how strongly the node ‘fires’. A neural network is a combination of a large number of such nodes, which work together to accomplish a task prescribed by the designer.
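To make this concrete, here is a minimal sketch of a single node in Python. The particular inputs, weights, and bias are made up for illustration; they are not taken from any real model.

```python
import math

def sigmoid(x: float) -> float:
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def node(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial node: a weighted sum of the inputs, passed through a sigmoid."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# A node with two inputs; the numbers are illustrative.
print(node([0.5, -1.0], [0.8, 0.2], bias=0.1))  # ≈ 0.574
```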
The first idea of a neural network came in 1958 from psychologist Frank Rosenblatt. He wanted to know how information is acquired and stored, and how it affects behavior. With these questions in mind, Rosenblatt (1958) proposed the Perceptron: simple nodes linked together in order to ‘sense’. The original perceptron was rudimentary, a proof of concept, but it quickly evolved into the multi-layer perceptron (MLP). In an MLP, many nodes sit side by side to form a layer, and these layers are in turn stacked to form the network; the multi-layer perceptron is what we usually mean by a neural network. Each node in one layer is connected to each node in the layer before and the layer after, and the strength (or, more formally, the ‘weight’) of each connection depends on the information that has passed through the system. Automatically adjusting these weights based on labeled input is what we would call learning, but it is formally called training (hence “training a machine learning model”); a minimal sketch of such a training loop follows below.
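Here is a tiny Rosenblatt-style perceptron being trained in Python. The task (learning the logical AND function), the learning rate, and the number of passes are illustrative choices, not details from Rosenblatt’s paper.

```python
import numpy as np

# Labeled data: the four input pairs of the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # desired outputs

weights = np.zeros(2)  # one weight per input connection
bias = 0.0
lr = 0.1               # learning rate: how far each mistake moves the weights

for epoch in range(20):  # several passes over the labeled data
    for inputs, target in zip(X, y):
        prediction = int(inputs @ weights + bias > 0)  # step activation
        error = target - prediction
        # "Training": strengthen or weaken connections after each mistake.
        weights += lr * error * inputs
        bias += lr * error

print(weights, bias)  # the trained perceptron now computes AND correctly
```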
Even though these perceptrons have significantly improved over time, they still do not come close to the complexity of a biological neuron. In fact, Beniaguev et al. (2021) found that to accurately capture the full complexity of a single biological neuron, they needed a deep neural network (DNN) of five to eight layers, each consisting of 128 nodes. That’s between 640 and 1,024 nodes just to mimic the complexity of one neuron! Now consider that the human brain has, on average, 86 billion neurons. So, the next time you wonder why we haven’t built a full digital reconstruction of the human brain, remember that it would take somewhere between 55 and 88 trillion artificial nodes.
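The back-of-the-envelope arithmetic, in Python:

```python
depth_range = (5, 8)     # network depths reported by Beniaguev et al. (2021)
nodes_per_layer = 128
neurons_in_brain = 86e9  # average human brain

low, high = (d * nodes_per_layer * neurons_in_brain for d in depth_range)
print(f"{low:,.0f} to {high:,.0f} nodes")  # 55,040,000,000,000 to 88,064,000,000,000
```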
Conclusion
Both our knowledge of the human brain and our computing capabilities have improved greatly over the past decades. We are currently in an AI boom, and in the near future we will likely look back at where we are now the same way we look back at Turing and Rosenblatt. While a true artificial intelligence might still feel far away, the systems we have now come closer to it than anything before. And though the foundations are still the same, the multi-layer perceptron has evolved far beyond the original perceptron Rosenblatt proposed in 1958.
Bibliography & Further Reading
Beniaguev, D., Segev, I., & London, M. (2021). Single cortical neurons as deep artificial neural networks. Neuron, 109(17), 2727-2739.e3. https://doi.org/10.1016/j.neuron.2021.07.002
Biever, C. (2023). ChatGPT broke the Turing test — the race is on for new ways to assess AI. Nature, 619(7971), 686–689. https://doi.org/10.1038/d41586-023-02361-7
He, Q., Geng, H., Yang, Y., & Zhao, J. (2023). Does ChatGPT have consciousness? Brain‐X, 1(4). https://doi.org/10.1002/brx2.51
Liu, B. (2021). “Weak AI” is likely to never become “strong AI”, so what is its greatest value for us? ResearchGate. https://www.researchgate.net/publication/350484538_Weak_AI_is_Likely_to_Never_Become_Strong_AI_So_What_is_its_Greatest_Value_for_us
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408. https://doi.org/10.1037/h0042519
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://rintintin.colorado.edu/~vancecd/phil201/Searle.pdf
Sterrett, S. G. (2012). Bringing up Turing’s ‘Child-Machine.’ In Lecture notes in computer science (pp. 703–713). https://doi.org/10.1007/978-3-642-30870-3_71
Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/lix.236.433