Kinkajou : Singularity is an unusual word. What does it mean?
Erasmus : The technological singularity can be said to have been reached when a computer or related technology achieves intelligence. Once intelligence is achieved, the artificial artefact (computer, computer network, or robot) would be theoretically capable of recursive self-improvement, that is, of redesigning itself.
By becoming self-aware and intelligent, AIs would be able to apply greater problem-solving and inventive skills than humans can deliver, resulting in design changes to the AI's own hardware and software at an accelerating rate.
It would be able to improve its own design, effectively bootstrapping its intelligence and leading to an intelligence explosion. These recursive redesign loops would allow machine intelligence to expand in an exponential fashion.
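The recursive redesign loop described above can be sketched as a toy numerical model. This is purely illustrative: the starting level, gain factor, and cycle count are invented parameters, not claims about any real AI system.

```python
# Toy model of recursive self-improvement (illustrative only).
# Each redesign cycle multiplies "intelligence" by a factor that
# itself grows with the current level, so growth accelerates.
def intelligence_explosion(start=1.0, gain=0.1, cycles=10):
    level = start
    history = [level]
    for _ in range(cycles):
        level *= 1 + gain * level  # a smarter designer improves itself faster
        history.append(level)
    return history

levels = intelligence_explosion()
growth = [b / a for a, b in zip(levels, levels[1:])]
# each successive growth factor is larger than the last
```

Because the multiplier depends on the current level, each cycle's growth factor exceeds the previous one, which is exactly the "explosion" the prediction relies on.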
The prediction is that intelligent machines could design ever more intelligent machines, eventually creating intelligence exceeding human intellectual capacity and control. Many authors posit that the capabilities of these intelligent machines may reach a point beyond which they become unfathomable to human intelligence.
Even people with contrarian views on the imminence of the technological singularity agree that machine intelligence is likely to surpass human intelligence at some point. A tipping point will be reached.
However, proponents and opponents of the technological singularity theory disagree on when machine intelligence may surpass that of humans. Answers given in surveys of when machine intelligence may surpass human intelligence range from 20 to 1000 years.
[Image: Computer Power vs Animal Computing Power]
Kinkajou : You’re a bit sceptical?
Erasmus : To take a contrarian view of my own, I think intelligent machines are still limited by their hardware and software. A change in the design and structure of these elements takes time. It would appear unlikely that a single machine could outthink the thousands of humans working in the same area of machine intelligence and develop a superhuman intelligence all by itself.
The logic of numbers (millions of humans in a population of 7 billion on the planet) argues against machine intelligence surpassing that of its creators. Even if an AI machine is 100 times as "intelligent" as a human (however we may measure that), we still have humans to spare on that one.
However other people take an even more contrarian view. They believe that even though you can visualise a future in your imagination, this does not mean that such a future is likely or even possible. Human history is littered with examples of failed concepts.
Science fiction has proposed underwater domed cities, flying cars and even StarGates. Perhaps these are possible but at this point in time it is unlikely that such dreams could become reality. There is no guarantee that the coming singularity will ever achieve reality.
Kinkajou: Perhaps so. Even if you believe in the possibility of the technological singularity, there is no reason to assume it will reach the point beyond which machine or AI intelligence becomes unfathomable to human intelligence. Most technology in retrospect looks very simple. The achievement of high technology can be assessed by its efficiency and its capabilities. This will often not mean that it is incomprehensible.
Erasmus : Yes. I think almost no human being today is capable of inventing calculus, as Sir Isaac Newton did. However, many human beings are capable of understanding and following the application and principles of that scientific advancement.
It is not beyond human understanding. Essentially, someone else (i.e. Isaac Newton) thought of it first. If Isaac Newton had not developed calculus, it is likely someone else eventually would have done so instead. (Not that I want to decry this incredible achievement. Newton invented calculus as a young man, at a rate faster than most students learn it today.)
Perhaps a better way to define intelligence is by the ability to do or think something that has not been taught to you. For example, Isaac Newton certainly was not taught calculus by anyone.
Kinkajou: If we use this definition, I'm afraid that most humans alive today may well fail their own intelligence test.
Erasmus : Disconcerting! Humans may not be intelligent?
Kinkajou : Many people are predicting that the technological singularity will occur by 2045. However, I think such predictions are far too optimistic. While humanity may one day achieve the technological singularity, it will likely be a long road.
Erasmus : Yes. Many things look simple when you don't understand or know much about them. But the more you learn, the more you realise how much you don't know and how much you have still to discover and learn.
Even things that we do understand may be too complex to fabricate in the quantities required. For example, we can grow neurones and can encourage them to synapse. However, fabricating the human brain, with its billions of neurones, in a functioning form may well be beyond our capabilities for a long, long time.
And even if we do attain the capacity to re-create such a structure, it may be economically unviable to do so.
Kinkajou : The technological singularity is a hypothesised future event. How do people see it coming about?
Erasmus : Many authors propose that computers will increase their computing capacity until they become sentient, or awake. Other people propose that networks of computers, communicating and linking with each other, may also become awake.
There is a possibility that human beings interacting with computer programs may allow these programs to develop in ways that could be judged to be intelligent.
There are many problems with these proposals.
The most obvious is that we do not really understand how the brain works.
- Where are our memories stored?
- Are they stored as DNA?
- Are they stored as proteins?
- Are they stored as conformational changes within the neurones or as structures within the synapses?
- What is the nature of intelligence?
- Does intelligence occur because humans forget and fuzz together what they have learnt?
- With time these memories are cross-linked developing new associations and new solutions to problems.
- Is the functional structure of the human neurone different to that of lower organisms?
- Obviously the genetic coding allows the creation of specific structures consisting of neurones. But is the actual genetic coding for the function of the neurone different in lower species as opposed to human beings?
Erasmus : If humans fuzz together memories and create new associations, this can obviously be replicated in silicon. However, computers that cannot compute the same answer over several iterations of a process run become a problem: the answers are always different. Unusual or even dangerous associations could be made. Maybe SkyNet was just a computer with a deviated memory, unable to think straight.
Another issue here is that computations always need the same answer, but perhaps thought-process simulations do not.
The obvious answer is that for every heuristic, adaptable silicon chip built to simulate intelligence, there must be matching chips built to maintain the sanity of the machine. Biology and evolution have done this for human beings, but perhaps human beings must do this for computers.
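The reproducibility problem above can be sketched in a few lines. This is a toy illustration under invented assumptions: `fuzzy_recall`, its noise range, and the stored value are all hypothetical, and a fixed random seed stands in for the "sanity-maintaining" machinery the text proposes.

```python
import random

# Toy sketch: a stored value is recalled with random "fuzz", so
# repeated queries can return different answers -- the
# reproducibility problem described above. Illustrative only.
def fuzzy_recall(stored, noise=0.2, rng=None):
    rng = rng or random.Random()
    return stored + rng.uniform(-noise, noise)

# Unseeded recall varies from call to call, but fixing the seed
# (a crude "sanity chip") makes recall reproducible:
a = fuzzy_recall(10.0, rng=random.Random(42))
b = fuzzy_recall(10.0, rng=random.Random(42))
# a == b, and both lie within 0.2 of the stored value
```

The design point is the same as in the text: fuzzed recall may be useful for forming new associations, but some deterministic mechanism has to sit alongside it to keep answers checkable.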
The obvious inference from this previous discussion is that memory and intelligence, in human terms, are complex issues that we essentially do not understand. We can teach computers many things, which can be stored in a memory structure within the computer and searched for answers. But the computer has not created the answers; human beings have.
Erasmus : Other paths have been proposed for the development of superhuman intelligence. One such proposal is through the application of human intelligence utilising machine components. This is essentially the cyborg model of progress. The computer allows intelligence amplification through the medium of the human being, taking advantage of the inherent strengths of each type of intelligence.
Kinkajou : But people in recent tests have been unable to tell the difference between a normal human being and a computer.
Erasmus: Yes, but think where these responses have come from. Human beings have programmed the computer with human responses. They have not arisen from the intelligence of the computer. The question then becomes: can human beings tell the difference between human information sourced from a computer as opposed to being sourced from a living human being?
The challenge here is that computers have a depth of perfectly remembered knowledge, whereas human beings tend to forget. A human being is more likely to forget, while a computer can only simulate forgetting.
The same words ("I don't know") can be used by either a computer or a human, there being essentially no difference in the words said that would allow a questioner to differentiate the nature of the respondent.
Robots and computers perform many physical tasks already in competition with human beings.
These range from computer chip fabrication to automated automotive (car) assembly lines. These machines however do not have the ability to make decisions outside of their programming.
Their lack of self-awareness and their inability to look for and find solutions to problems based on available information means that these machines will always remain as tools.
[Image: Robotic Car Factory]
Erasmus : Some people would argue that we are living right at this time, within the age of the technological singularity. Many mobile phones and computer systems have assistants programmed to respond to questions that mimic the activities of a personal assistant.
However, they can only do what they are programmed to do. You can even ask a question of Google, which is not programmed to talk to you, and it will find you information to address your request.
This type of artificial intelligence is called soft AI. Essentially it represents software programming which is adapted to give the impression of intelligence.
The term “Hard intelligence” is often used to describe the technological singularity inherent in the birth of AI. It is proposed that this is true intelligence, as opposed to simulated intelligence of soft AI programs.
Kinkajou : So, you are saying that because computers are hard-wired in silicon, they will be incapable of true thought or intelligence. They can, however, simulate intelligence through the medium of human programming.
Erasmus : Once upon a time I would have concurred. However, with the advent of the memristor, I think that new possibilities are on the horizon. This new circuit element may well provide a way for silicon to simulate intelligence or thought. The development of this element is at an early stage at this time, and it has yet to be incorporated on any scale into computer CPU chips.
Kinkajou : So what is the argument for the technological singularity?
Erasmus : I think people are excited by the speed of technological progress this century. Moore's law (which is not a natural law but a description of past events, especially relevant to the computer world and the development of CPU chips) suggests that computers will become ever more powerful and faster in a never-ending fashion.
However, I would propose that there are limits to the development of new technologies. Atoms have a finite size, so it is not possible to reduce the size of computer transistor elements indefinitely.
At atomic scales, quantum effects occur which essentially make silicon transistors operate in unusual ways. (Perhaps this erratic behaviour may allow computers to simulate intelligence, by generating a range of answers to a single question due to random probabilistic events occurring in the microscopic elements of the computer CPU chip.
Perhaps human intelligence is based on random erratic events in synapses creating new thoughts, new processes and new solutions.)
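The atomic-size limit mentioned above can be put in rough numbers. Both figures below are order-of-magnitude assumptions (a nominal modern feature size and the approximate diameter of a silicon atom), not precise industry data.

```python
import math

# How many more halvings of transistor feature size are even
# geometrically possible before a feature is one atom wide?
# Both numbers below are rough, assumed values.
feature_nm = 5.0     # assumed current feature size, in nanometres
si_atom_nm = 0.2     # approximate diameter of a silicon atom, in nanometres
halvings_left = math.log2(feature_nm / si_atom_nm)
# roughly four to five halvings -- a hard floor, whatever Moore's law suggests
```

However the input numbers are varied within reason, only a handful of doublings of transistor density remain before the atomic floor is hit, which is the point the paragraph above is making.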
[Image: Computer RAM Memory Chips]
The one thing that is certain is that human intelligence is a complex phenomenon. It enables humans to be flexible, to be sensitive to contextual cues, to create shortcuts for assessment, and to generate flashes of insight, self-assessment and self-reflection. Uncertainty may well be an inherent phenomenon that underlies intelligence.
Erasmus : Another proposal for the development of superhuman intelligence is to create computer systems that are capable of mimicking human intelligence. We have achieved this in many specialist areas. But we have not achieved a general solution.
A computer program that is capable of outperforming a human at chess cannot use its skills to play other games or do other tasks. Petaflop computers are now available commercially.
Exaflop-class computers are currently in the planning stages. Systems of this speed may well have the raw computational capability required to simulate the firing patterns of all of the brain's neurones. They are, however, still unlikely to match the speed of processing in an actual brain.
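The claim that exaflop machines approach the raw rate needed to track every neurone's firing can be checked with a back-of-envelope calculation. Every parameter below is a rough, commonly quoted order-of-magnitude assumption, not a measured value.

```python
# Back-of-envelope: operations per second to model every synaptic event.
# All parameters are rough order-of-magnitude assumptions.
neurones = 86e9        # approx. neurones in a human brain
synapses_per = 1e4     # approx. synapses per neurone
firing_hz = 100        # upper-end firing rate per neurone, events/second
ops_per_event = 10     # assumed operations to model one synaptic event
ops_per_second = neurones * synapses_per * firing_hz * ops_per_event
exaflops_needed = ops_per_second / 1e18
# on the order of one exaflop of raw compute
```

The result lands near one exaflop, which is why exaflop-class machines are the ones usually cited in this context, while saying nothing about whether raw operation counts capture what a brain actually does.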
Kinkajou : Surely, current trends in exponential growth of computer capacity and abilities, strongly suggest that the Singularity is nigh?
Erasmus : Memristors, optical computing, molecular-size transistors, DNA computers, the proposed use of single-atom-thick sheets of carbon (graphene), high-temperature superconductors based on new technological meta-materials, and quantum computing all suggest possible methods for improving computer speeds while maintaining low power consumption, perhaps compatible with current-day computer CPU technologies.
However, I would claim that past exponential growth in the computer world does not predict future exponential growth.
I think a point will be reached where the growth curve plateaus until some new significant technology allows another burst of growth. I think this is characteristic of a mature understanding of the universe and a mature understanding of technology.
Kinkajou : How about the melding of men with machine?
Erasmus : People talk about interfacing human brains with silicon computer elements. This is a very complex achievement, fraught with danger. We do not understand memory, so we do not understand how to feed information into human memory or extract information from it.
Do we need to implant 100 micro silicon wires into the brain to extract information from it? What happens if we connect to the neurone next to the one we want? Can we still get the same information out of the wrong neurone?
Erasmus : The other major problem is that foreign bodies such as silicon computer chips inside biological systems are accidents waiting to happen. In the medical world, ventriculo-peritoneal shunts become colonised with bacteria quite frequently.
People require these drainage structures to enable their brains to discharge excess fluid. But if the shunts become infected people can develop brain damage and die.
Imagine a human brain criss-crossed by foreign silicon material. A single bacterium accessing one of the strands could initiate an infection causing rapid death.
Hip and knee prostheses are often infected, with drastic results. There doesn't seem to have been a lot of problems with cochlear implants, but then I haven't seen many cochlear implants either. I think there is a long, long way to go before there can be a melding of biological tissue and extraneous foreign bodies such as silicon.
I can perhaps see another solution: engineered biological extracts from the original person could be modified and reinserted into the person, allowing development of the brain along paths that human technology has discovered.
Kinkajou : Not likely to happen anytime soon, I think.
So your prediction is that the growth of the computer technologies will eventually plateau?
Erasmus : Yes. Technology is derived from small increments to what we already know. What we have seen to date is rapid though incremental change in computer CPU chip design.
All of this innovation is based on the design of the transistor. Even the transistor is not necessarily a new invention. These structures existed as vacuum valves before they were created in silicon.