Martyn Rhys Vaughan

AI AND THE FUTURE - PART SEVEN: Human Programming No Longer Required

From the beginning, some people have predicted that machine intelligence will one day outstrip that of humans. The response of most people has been that the human brain's complexity is matchless, and that no assemblage of printed circuits will ever do more than unconvincingly ape human brilliance.

The best way of testing this would be to see whether machine intelligences can produce outputs that cannot be predicted from the code on which they are based. In the jargon, they would display "emergent" properties: outputs of a system that cannot be foreknown from a knowledge of that system's components.

In the early days of AI, it was thought that the best way to produce artificial reasoning would be the standard model of programming, which in its simplest form consists of a series of commands built from logical functions such as "if…then…". However, this approach quickly broke down under the ever-increasing number of commands needed to perform even the simplest tasks, and most people wrote off AI as an impossible dream.
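To make the problem concrete, here is a toy sketch in Python of that rule-based style (my own illustration, not code from any historical system). Every situation the machine might meet must be anticipated in advance by a human programmer; with only three yes/no facts there are already eight cases to cover, and the rule count explodes from there.

# A toy rule-based "reasoner" in the early symbolic-AI style.
# Every behaviour must be spelled out by a human programmer in advance.

def classify_animal(has_fur, lays_eggs, can_fly):
    # Each combination of facts needs its own explicit rule...
    if has_fur and not lays_eggs:
        return "mammal"
    if lays_eggs and can_fly:
        return "bird"
    if lays_eggs and not can_fly and not has_fur:
        return "reptile, perhaps"
    if has_fur and lays_eggs:
        return "monotreme"      # every exception demands yet another rule
    return "unknown"            # anything unforeseen falls through

print(classify_animal(has_fur=True, lays_eggs=False, can_fly=False))  # mammal

Three boolean features give 8 cases; twenty give over a million. Writing and maintaining rules at that scale is hopeless, which is exactly where the approach foundered.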

There was another way, however: to mimic the structure of a mammalian brain by building networks of simple processing units whose interconnections are strengthened or weakened as the system learns. As the number of connections increased, so the responses of the "neural net" became more flexible.
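As an illustrative sketch (the network, data and sizes here are invented for demonstration), here is a tiny neural net in Python learning the XOR function, something no single "if…then" rule captures. Its behaviour comes not from written commands but from thousands of small adjustments to its connection weights.

import numpy as np

# A tiny two-layer neural network learning XOR. The behaviour emerges
# from adjusted connection weights, not from explicit commands.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())   # approaches [0, 1, 1, 0]: learned, not programmed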

The ultimate aim of such neural nets is Artificial General Intelligence (AGI), a term meaning that the net has attained the resourcefulness of a mammalian brain. The "GPT" in "ChatGPT" stands for "Generative Pretrained Transformer". The important word here is "Generative", signifying that the system can generate its own responses to a given stimulus. The preferred method at the moment is the "Large Language Model" (LLM), in which the neural net is fed gigabytes of text and data from the Internet. The system then trains itself to recognise connections between the data items, allowing it to make deductions from those data sets.
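In drastically simplified form (a toy stand-in, nothing like a production LLM's actual code), the self-training principle looks like this: from raw text alone, count which token tends to follow which, then use those learned connections to predict what comes next.

from collections import Counter, defaultdict

# A drastically simplified stand-in for LLM self-training: learn, from raw
# text alone, which token tends to follow which. Real LLMs use transformer
# networks over billions of tokens, but the "predict the next token"
# principle is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):   # self-supervised: labels come free
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in the training data.
    return follows[word].most_common(1)[0][0]

print(predict("the"))   # 'cat' -- learned from the data, not hand-coded

No human wrote a rule saying "cat" follows "the"; the connection was extracted from the data itself. Scale that idea up by many orders of magnitude and you have the training regime behind today's LLMs.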

And slowly but surely, emergent properties have come to the surface. It is well known that AI can pass the American Bar Exam with higher scores than the human average. From simple prompts, these systems can compose Petrarchan sonnets. From a text description of a problem, they can write Python code, and they can point out bugs in programs written by humans. One team even reported that their LLM had suggested that the lead researchers should divorce. One GPT, asked to write a program to calculate Fibonacci numbers, initially produced a wrong answer; when this was pointed out, it produced a second version which was correct. It had demonstrated classic learning behaviour: it looked for the error, found it, and corrected it.
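The exact code isn't recorded, but the bug and its fix could plausibly have looked like this in Python (a reconstruction of the kind of off-by-one error language models commonly make and then repair when challenged, not the actual output):

# First attempt (buggy): an off-by-one error shifts the whole sequence.
def fib_wrong(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return b            # bug: returns fib(n + 1) instead of fib(n)

# Corrected version after the error is pointed out.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a            # fix: 'a' holds fib(n) after n iterations

print([fib_wrong(i) for i in range(6)])  # [1, 1, 2, 3, 5, 8] -- shifted
print([fib(i) for i in range(6)])        # [0, 1, 1, 2, 3, 5] -- correct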

Kenneth Li of Harvard University and his colleagues have developed a probe to analyse a neural network layer by layer, a procedure Li compared to neuroscience experiments on the human brain. They watched a GPT teach itself the simple board game Othello and were able to map its neural activity to the rules of the game, suggesting that the system had developed a mental map of Othello. Board games are, of course, old hat to AI, but neural nets soon mastered role-playing, text-based adventure games as well. Once again, the system had developed, without human programming, a mental map of the structure of the game, including its fictitious geography. Based on evidence such as this, Ben Goertzel of the AI company SingularityNET has stated that AGI will be achieved within the next decade.
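Li's real probes are considerably more sophisticated, but the underlying technique can be sketched as follows (with synthetic data standing in for genuine GPT activations): train a small classifier to read a board square's state directly out of one layer's internal activations. If the classifier succeeds, that information must be encoded somewhere inside the network.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for hidden activations from one layer of a
# game-playing network: 256-dimensional vectors, each labelled with the
# true state of one board square (0 = empty, 1 = black, 2 = white).
rng = np.random.default_rng(1)
n, dim = 1000, 256
labels = rng.integers(0, 3, size=n)
# Pretend the network encodes the square's state along a few directions:
directions = rng.normal(size=(3, dim))
acts = directions[labels] + 0.5 * rng.normal(size=(n, dim))

# The "probe": a simple linear classifier reading the layer's activations.
probe = LogisticRegression(max_iter=1000).fit(acts[:800], labels[:800])
print("probe accuracy:", probe.score(acts[800:], labels[800:]))
# High accuracy => the board state is linearly decodable from this layer,
# i.e. the network has built an internal map of the game.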

Perhaps the most staggering example of machine intelligence was achieved by DeepMind, a subsidiary of Google's parent company Alphabet. Using an AI system called AlphaFold, it predicted the final structure of almost every protein known to science in just 18 months. Proteins are created when amino acids link up into a chain; as they do so, they fold into a unique three-dimensional structure which determines the protein's function. Predicting that structure means working out which arrangement will form out of a truly astronomical number of possibilities, a problem so vast that it was beyond the purely numerical abilities of standard computers. Indeed, so gigantic is the problem that Isaac Asimov used a version of it to stump the giant computer Multivac in his story "The Life and Times of Multivac". And yet AlphaFold had determined the structure of 98.5% of human proteins by the middle of 2021. A crystallographer at Oxford University who had previously struggled with the problem commented that people like him would soon be unemployed.
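A rough back-of-the-envelope calculation, the classic "Levinthal's paradox" argument (the figures below are illustrative assumptions, not DeepMind's numbers), shows why brute-force search is hopeless: even allowing only three plausible conformations per amino-acid residue, a modest 100-residue protein has more candidate shapes than could be checked in the lifetime of the universe.

# Levinthal-style estimate of the protein-folding search space.
conformations_per_residue = 3      # a deliberately conservative assumption
residues = 100                     # a small protein

shapes = conformations_per_residue ** residues
print(f"candidate shapes: {shapes:.3e}")        # about 5.15e+47

checks_per_second = 1e12                        # a generous trillion per second
seconds_needed = shapes / checks_per_second
age_of_universe_s = 4.35e17                     # ~13.8 billion years
print(f"time needed: {seconds_needed / age_of_universe_s:.3e} universe-lifetimes")

AlphaFold succeeds because, like the other neural nets described above, it learns the regularities of folding from data rather than enumerating the possibilities one by one.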

People who cling to the idea that human intelligence will always outstrip that of mere machines argue that these devices are just lines of code, that they have no "mind". But consider psychology: as long as it was bogged down in introspection, with people examining their own minds, it could make no progress. Then researchers such as Pavlov and Skinner developed Behaviourism, which set aside the concept of mind and concentrated on what could be consistently observed and measured: behaviour.

And so we are led to the conclusion that if an artificial structure displays behaviour which, in a human, we would ascribe to the possession of a mind, then Occam's Razor requires us to believe that it does indeed possess a mind.

And once a simple mind is accepted, it is a very small step to a transhuman mind.
