Is AI the Future of Everything? A Fireside Chat with One of the Fathers of the Internet, Vint Cerf

Shalini Urs

Not a single day passes without AI grabbing headlines, and no wonder: AI (artificial intelligence) is coming at humanity fast and furious. AI marks “a new epoch” because it signals the end of the Enlightenment ethos that placed humans, having displaced God, at the center of all that is knowable, putting machines with superior intelligence in their stead (Kissinger et al., 2021).

“AI has hacked the operating system of human civilization,” argues historian, philosopher, and author Yuval Noah Harari. 

From World Brain to Internet to AI: The Quest for Connectivity

In a series of talks and essays in 1937, H. G. Wells—called the “father of science fiction,” and also a social reformer, evolutionary biologist, and historian—proselytized for what he called a “World Brain,” manifested in a World Encyclopedia: a repository of scientifically established knowledge that would spread Enlightenment worldwide and lead to world peace. In the wake of the First World War, Wells believed that people needed to become more educated and conversant with the events and knowledge surrounding them. To that end, he offered the idea of the World Brain, a knowledge system that all humans could access.

“These innovators, who may be dreamers today, but who hope to become very active organizers tomorrow, project a unified, if not a centralized, world organ to ‘pull the mind of the world together,’ which will be not so much a rival to the universities, as a supplementary and co-ordinating addition to their educational activities—on a planetary scale.”—H.G. Wells

Wells’ ideas drew many toward this ideal from the 1930s to the 1960s and beyond. For example, the World Congress of Universal Documentation, held in August 1937 in Paris, France, discussed methods for implementing Wells’s World Brain. Arthur C. Clarke, in his 1962 book Profiles of the Future, predicted that the construction of what Wells called the World Brain would take place in two stages: first, the World Library, a universal encyclopedia accessible to everyone from home via computer terminals; and second, a superintelligent computer with which humans could interact to solve the world’s problems together. Brian R. Gaines, in turn, saw the World Wide Web as an extension of Wells’ “World Brain” that individuals can access using personal computers.

As Bruce Sterling writes in his foreword to the 2021 edition of Wells’ work published by MIT:

“The World Brain did not happen; the Internet did.”

And yes, Wells dreamed of the World Brain as a technical system of networked knowledge.

In this episode of InfoFire, listen to Dr. Vinton Cerf, one of the fathers of the Internet, who takes us through the journey of the Internet and the wonder and the worry of our times: AI. According to Cerf, it is all about connectivity. I feel compelled to recall and quote:

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair.”—Charles Dickens, A Tale of Two Cities (1859)

Evolution of the Internet: Major Milestones

Vint Cerf gives a capsule history of the Internet and the Internet Society:

    • The 1977 demonstration of three different packet-switched networks—the mobile packet radio net, the packet satellite net, and the original ARPANET—showed that TCP/IP was doing what it was supposed to do.

    • In 1983, the Internet was turned on for the academic community, sponsored and supported by DARPA.

    • NSFNET, the Energy Sciences Network, and the NASA Science Internet ran on the TCP/IP protocol (1986 to 1990).

    • Optical fiber networking arrived in the early 1980s.

    • In December 1991, Sir Tim Berners-Lee at CERN released the World Wide Web.

    • Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications (NCSA), University of Illinois Urbana-Champaign, developed and released the Mosaic web browser in 1993.

    • The next milestone was the commercialization of the Internet, with Netscape Communications opening its doors in 1994, followed by the dot-com boom.

    • That came a cropper in April 2000, when many of those startup companies ran out of capital and had no revenue stream to keep going. And so the dot-com bust happened.

    • In 2007, the iPhone arrived and the mobile internet revolution began. Mobile telephony and the Internet have been tightly intertwined ever since.

    • The launch of low Earth orbit satellites and satellite communications is leading to inter-satellite connectivity, which can offer lower latency than undersea cable connectivity because light travels faster in a vacuum than in fiber.

    • Two milestones have yet to happen: quantum computing and the interplanetary Internet.

And, of course, the introduction of the desktop, the laptop, the iPad, and then the mobile phone falls somewhere among all these milestones. Along come the machine learning and artificial intelligence (AI) breakthroughs that bring us to the current AI epoch.

—”Prime force is connectivity.”—

According to Cerf, the primary force that brought visionaries and experts behind the emergence of the Internet and other technologies was “connectivity.”

Singularity

Depending on the context, the term singularity has many different meanings. James Clerk Maxwell was the first to use the term in its most general sense, in 1873, referring to contexts in which arbitrarily small changes, often unpredictably, may lead to arbitrarily large effects. In the natural sciences, singularity describes dynamical and social systems in which a small change may have an enormous impact. In technology, singularity describes a hypothetical future in which technology growth becomes uncontrollable and irreversible, with intelligent and powerful technologies radically and unpredictably transforming our reality. John von Neumann is said to have been the first to discuss the concept of technological singularity, in the mid-20th century (Shanahan, 2015). Since then, many authors have either echoed this viewpoint or adapted it (Chalmers, 2016). The technological singularity is a hypothetical point at which the development of artificial general intelligence will make human civilization obsolete (Eden et al., 2013). The Singularity, a 2012 documentary film about the technological singularity, included interviews with commentators on the subject and has been called “a large-scale achievement in its documentation of futurist and counter-futurist ideas.”

Ray Kurzweil, currently a director of engineering at Google, is one of the commentators in this documentary. He is a well-known advocate for the singularity and transhumanist movements, has authored books on the singularity, and has publicly shared his optimistic outlook on life-extension technologies and the future of nanotechnology, robotics, and biotechnology. His book The Singularity Is Near: When Humans Transcend Biology (2005) builds on the ideas introduced in his previous books, The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999). In the book, Kurzweil embraces the term “singularity,” which was popularized by Vernor Vinge in his 1993 essay “The Coming Technological Singularity.” Kurzweil also directed the film adaptation of the book, which mixes documentary interviews with a science-fiction story involving his robotic avatar Ramona’s transformation into an artificial general intelligence.

Vint Cerf on Singularity

—”I do not agree with Ray on Singularity.”—

Ray is a good friend and colleague at Google, and we have had many conversations on this topic. Ray has been very good at predicting several things related to the rapid growth of computational capability, and he often notes that the number of computers on the Internet is becoming similar to the number of neurons in the brain. Does that mean the singularity is near? I am afraid I have to disagree, because the connectivity of the brain is even more important than the number of neurons in it. So again, connectivity becomes the critical value, and I have yet to see evidence of that.

One cannot say singularity is near merely based on the number of available computing sites. But, of course, the machine-learning world makes it increasingly look as if there is a singularity. However, I want to emphasize that what we are seeing here is the simulation of human discourse, and I want to emphasize simulation.

Although it sometimes looks as if these machine learning (ML) systems are reasoning and somehow have models of the way the world works, I think those models are only dimly reflected in the statistics of the text ingested into the system while training the machine learning models, the large language models (LLMs). Thus we see a multitude of seemingly intelligent conversations. Nevertheless, it is thin and not based on fundamental models of the world’s workings.

The ML systems are not perceiving the real world. They are only getting a reflection of the real world as it shows up in human discourse—in texts humans have created.

Thus, even though it is very tempting to imagine that this indicates singularity is near, it is not. So, I am still skeptical at this point. However, these systems have enormous power as tools, and humans can employ them in useful ways.

Thus, even if the singularity does not happen, these kinds of tools can be extremely helpful in discovering information we might not otherwise find. Hence the similarity between the search engines that Google and others operate and the LLMs’ ability to capture information.

Artificial Super Intelligence: Risks and Rewards

Touted as the future of AI, Artificial Super Intelligence (ASI) is expected to perform extraordinarily and surpass humans at everything—arts, decision-making, and emotional relationships. In 1998, philosopher Nick Bostrom at the University of Oxford defined “superintelligence” as an intellect much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. In his more recent book, Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller, Bostrom, known for his work on existential risk, writes lucidly about this profoundly important existential question, taking us through a fascinating terrain of topics and contemplations about the human condition, the future of intelligent life, and a reconceptualization of the essential task of our time.

—”Are we there yet?”—

I do not think so, says Vint Cerf, if by superintelligence we imagine something comparable to human intellect. Humans have the ability to formulate models based on very small amounts of input. He elaborates, taking the table as an example. A table is a very simple concept: a horizontal surface perpendicular to the gravitational field. Of course, most people never think of it in exactly those terms, but even a two-year-old can figure out what a table can do. They know that if they have an object with a flat bottom, it stays on the table and does not roll, and they soon discover that other things can be tables. A box can be a table; a chair can be a table; even your lap can be a table. This concept is rapidly understood by a two-year-old. It is not a hundred percent clear that we have artificial intelligence that recognizes that kind of fact and can generalize from it.

It does not matter what the shape of the table is; it does not matter what its color is; it does not matter what material it was made of. All that matters is that it is a flat surface perpendicular to the gravitational field. So, we are still some distance away from that kind of understanding.

However, AI has shown remarkable narrow ability: the ability to work out things that even humans do not yet fully understand. Taking the example of Google DeepMind’s success in solving the protein folding problem, Vint elaborates on this narrow kind of success.

Protein Folding Problem and AI: AlphaFold Is a Game Changer

Proteins—large complex molecules made up of chains of amino acids that support practically all human functions—are the building blocks of life. What a protein does largely depends on its unique 3D structure. Figuring out what shapes proteins fold into is known as the “protein folding problem,” which has been a grand challenge in biology for the past 50 years. The “protein folding problem” consists of three closely related puzzles: (a) What is the folding code? (b) What is the folding mechanism? (c) Can we predict the native structure of a protein from its amino acid sequence?

The thermodynamic hypothesis of Nobel Laureate Christian Anfinsen is considered an important milestone in protein science (Dill et al., 2008). Anfinsen won the Nobel Prize for his work on the relationship between an amino acid sequence and its biologically active three-dimensional (3D) conformation. In his Nobel lecture (Anfinsen, 1973), Anfinsen said:

“Empirical considerations of a large amount of data now available on correlations between sequence and three-dimensional structure (48), together with increasing sophistication in the theoretical treatment of the energetics of polypeptide chain folding (49), are beginning to make more realistic the idea of the a priori prediction of protein conformation. It is certain that major advances in the understanding of the cellular organization and the causes and control of abnormalities in such organization will occur when we can predict, in advance, the three-dimensional phenotypic consequences of a genetic message.”

For decades, researchers deciphered proteins’ 3D structures using experimental techniques such as X-ray crystallography or cryo-electron microscopy (cryo-EM). However, such methods can take months or years and do not always work. As a result, structures have been solved for only about 170,000 of the more than 200 million proteins discovered across life forms. AI has now solved one of biology’s grand challenges: predicting how proteins curl up from a linear chain of amino acids into the 3D shapes that allow them to carry out life’s tasks (Baek et al., 2021). Predicting a protein’s three-dimensional native structure from its amino acid sequence has been a significant problem for computational biology: solving it accelerates drug discovery by replacing slow, expensive structural biology experiments with faster, cheaper computer simulations, and it allows protein function to be annotated directly from genome sequences. As Ornes (2022) states, today researchers turn to deep learning to decode protein structures. John Moult initiated the biennial CASP (Critical Assessment of Techniques for Protein Structure Prediction) competition in 1994, at which groups vie to predict the 3D structure of proteins. At CASP 14 in 2020, DeepMind’s AlphaFold 2.0 bested all other groups and matched experimental findings on a measure of accuracy.

This has been widely lauded as a game changer. Some say it will change everything and transform biology (Callaway, 2022). Scientists, especially molecular and structural biologists, expect the DeepMind method to have far-reaching effects, among them dramatically speeding the creation of new medications (Service, 2020). Some proclaim it the most important achievement of AI to date.

Nobel Laureate Venki Ramakrishnan, a structural biologist at the Medical Research Council Laboratory of Molecular Biology, calls the result “a stunning advance on the protein folding problem.”

Dame Janet Thornton, the group leader and senior scientist at the European Molecular Biology Laboratory’s European Bioinformatics Institute, said: “AlphaFold protein structure predictions are already being used in a myriad of ways. I expect that this latest update will trigger an avalanche of new and exciting discoveries in the months and years ahead, and this is all thanks to the fact that the data are available openly for all to use.”

AlphaFold 2.0 is the first computational method to predict protein structures with near-experimental accuracy and has been called the gold standard for this type of technology. The original AlphaFold combined local physics and pattern recognition and would often overestimate the effect of interactions between nearby residues (Al-Janabi, 2022). AlphaFold 2.0, an attention-based neural network architecture combined with a deep learning framework, instead relied exclusively on pattern recognition (Jumper et al., 2021).
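
The predicted structures Thornton refers to are openly downloadable from the AlphaFold Protein Structure Database. As a minimal sketch, assuming the database’s public REST endpoint (https://alphafold.ebi.ac.uk/api/prediction/<UniProt accession>) and its pdbUrl metadata field (both worth checking against the current documentation, as the API may change), one could fetch a predicted structure like this:

```python
# Hedged sketch: fetch an AlphaFold-predicted structure for one protein.
# The endpoint and the "pdbUrl" field are assumptions about the current
# AlphaFold DB API; treat this as illustrative, not a stable interface.
import requests

def fetch_predicted_structure(uniprot_accession: str) -> str:
    """Return PDB-format text of the AlphaFold prediction for one protein."""
    meta_url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    meta = requests.get(meta_url, timeout=30)
    meta.raise_for_status()
    pdb_url = meta.json()[0]["pdbUrl"]  # assumed field name in the response
    pdb = requests.get(pdb_url, timeout=30)
    pdb.raise_for_status()
    return pdb.text

# P69905 is the UniProt accession for human hemoglobin subunit alpha.
structure = fetch_predicted_structure("P69905")
print(structure.splitlines()[0])  # first record of the PDB file
```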

—”AlphaFold is Narrow Superintelligence.”—

Vint Cerf elaborates on AlphaFold 2.0 and cites it as something close to superintelligence, but in a narrow and specific domain. He shared his thoughts on narrow AI and how DeepMind predicted the folding of some 200 million proteins. To put it in his own words:

Recently they trained one of their neural networks to learn how to predict the folding of all the various proteins that might be made from DNA. And that was 200 million proteins. They practiced with human beings playing it like a puzzle, and they managed to train an algorithm on some 80,000 known proteins. Then they applied that algorithm to all the known DNA in the genetic sequence and produced the folding of the 200 million proteins. It is so important because we would never have done that at a human scale without the assistance of artificial intelligence, i.e., neural networks. And the second reason it is important is that, having done that, we now know what those proteins are and how they are shaped. Moreover, that means we know something about how they might interact with other proteins and other parts of the body. And that might lead to medical breakthroughs. It might lead to treatments of various kinds.

We could only do that with the capacity of these large language models, these neural networks. So these are tools, and they are augmenting human capability, something that Douglas Engelbart articulated passionately in the 1950s and 1960s, along with J. C. R. Licklider at MIT.

Thus, those two giants in our field foresaw the possibility that computers could augment human intellect. And they have, and they do, and they will. Whether that turns into superintelligence is still an open debate, but it will enable humans to do things they could not otherwise do.

BCI and AI: Turning Science Fiction into Realities

Huge technological advancements have narrowed and even blurred the borders between humans and machines. Brain-computer interfaces (BCIs) have demonstrated remarkable prospects as real-time bidirectional links between living brains and actuators. AI methodologies have made great strides, advancing the analysis and decoding of neural activity and thus turbocharging the field. Over the past decade, many BCI applications with AI assistance have emerged. Kawala-Sterniuk et al. (2021) present an exhaustive summary of the relevant aspects of BCIs and the milestones of their nearly 50-year history, acknowledging pioneers and highlighting the technological and methodological advances that are transforming the field from something available to a few and understandable by very few into a possible breathtaking change for many.

Zhang et al. (2020) review the current state of AI as applied to BCIs and describe advances in BCI applications, their challenges, and where they could be headed. Their review surveys AI-based BCI applications: early applications beginning with cursor control for paralyzed patients, followed by neuroprosthetics and limb rehabilitation, somatosensation, auditory sensation, speech synthesizers, and optical prosthetics. As they note, one of the biggest advantages machine learning brings to BCI is the ability to modulate training parameters in real time or near-real time and to adjust them in response to active real-time feedback.

BCIs are devices that establish a direct communication channel between a brain and an external device: they record from the nervous system, provide input directly to the nervous system, or do both. The BCI creates a communication pathway between the brain and an external device, such as a computer or a robotic arm, through which information is exchanged in both directions (“Brain-computer interface,” 2023). A BCI lets people interact with technology or their environment using only their brain activity, without physical input devices like a keyboard or a mouse. Instead, a BCI uses sensors, such as electroencephalography (EEG), magnetoencephalography (MEG), or magnetic resonance imaging (MRI), or invasive techniques, such as implantable electrodes. The brain signals are then processed and interpreted by a computer algorithm, which translates them into commands that can control an external device or provide feedback to the user. Smart BCIs, including motor and sensory BCIs, have helped paralyzed patients, expanded the athletic ability of ordinary people, and accelerated the evolution of robots and neurophysiological discoveries.
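
To make the “processed and interpreted by a computer algorithm” step concrete, here is a minimal, illustrative Python sketch of a decoder. The synthetic EEG, the 250 Hz sampling rate, the 8-12 Hz mu band, and the fixed threshold are all stand-in assumptions; a real decoder is calibrated per user and per device.

```python
# Minimal sketch of the BCI decode step: bandpower features from
# (synthetic) single-channel EEG mapped to a discrete command.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def bandpower(eeg: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power of one EEG channel in the [lo, hi] Hz band."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def decode_command(eeg: np.ndarray) -> str:
    """Toy decoder: mu-rhythm (8-12 Hz) power drops during motor imagery
    (desynchronization), which we map to a 'move' command. The 0.01
    threshold is an assumption; real systems calibrate it per user."""
    return "move" if bandpower(eeg, 8.0, 12.0) < 0.01 else "rest"

# Two seconds of synthetic EEG: a strong 10 Hz mu rhythm means "rest".
t = np.arange(0, 2, 1 / FS)
resting = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(decode_command(resting))                        # "rest"
print(decode_command(0.3 * np.random.randn(t.size)))  # "move"
```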

BCIs are categorized into invasive and non-invasive systems. The brain signals (i.e., the overall electrophysiological activity of the brain’s nerve cells) are obtained either from the surface of the scalp or directly from the cortical surface. Most non-invasive BCI systems are based on EEG data, while invasive BCIs are mainly based on signals recorded directly from the brain through electrocorticography; the electrocorticogram (ECoG) is such an invasive recording of the brain’s electrical activity. Invasive BCIs are more accurate but carry greater risks and are only suitable for certain medical conditions. Non-invasive BCIs are less accurate but are safer and more practical for everyday use.

BCIs can potentially assist, augment, or repair human cognitive or sensory-motor functions. Among these are motor control, helping persons with paralysis or motor impairments; sensory perception, helping people who have lost their sense of touch, sight, or hearing; communication, helping individuals with communication impairments; and cognitive processing, enhancing processes such as attention, memory, and learning by stimulating specific brain regions.

The recent advances in neurotechnology and AI have further bolstered the possibilities of BCI. The field of BCI is poised to advance from the traditional goal of controlling prosthetic devices using brain signals to combining neural decoding and encoding within a single neuroprosthetic device acting as a “co-processor” for the brain based on artificial neural networks and deep learning (Rao, 2019). The BCIs aided by AI offer wide-ranging applications, from rehabilitation after brain injury to reanimating paralyzed limbs and enhancing memory. The brain signals in BCI communication have been advanced from sensation and perception to higher-level cognition activities. Gao et al. (2021) review various BCI paradigms and present an evolutionary model of generalized BCI technology comprising three stages: interface, interaction, and intelligence (I3). 

Cochlear Implants: The Oldest and Most Successful Afferent Interface

A cochlear implant is the most common and oldest way to use a BCI (Peters et al., 2010). Sensory BCIs, such as cochlear implants, have already had notable clinical success, and motor BCIs have shown great promise in helping patients with severe motor deficits (Wander & Rao, 2014).

A cochlear implant system consists of the following four major components: (1) a microphone that picks up an input speech signal, (2) a signal processor that converts this signal into electrical signals, (3) a transmission system that transmits the electrical signals to implanted electrodes in the cochlea, and (4) an array of electrodes that are surgically inserted into the cochlea. Via the array of electrodes, auditory nerve fibers at different locations in the cochlea get stimulated depending on the signal frequency. A signal processor is used for bandpass filtering the input speech signal into several (12–22) frequency bands. The processor converts the signals from each band into electrical signals and delivers them to the array of electrodes.
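
As a toy illustration of stages (1) and (2) above, the sketch below bandpass-filters a synthetic microphone signal into a small number of bands and derives a per-band envelope level, the analog of what is delivered to the electrode array. The 8-band count, the 200 Hz to 7 kHz band edges, and the 16 kHz sampling rate are illustrative assumptions, not clinical parameters.

```python
# Hedged sketch of a cochlear-implant-style filter bank: split speech
# into frequency bands and compute each band's envelope (loudness contour).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16_000                  # assumed microphone sampling rate, Hz
N_BANDS = 8                  # real processors use 12-22 bands
EDGES = np.logspace(np.log10(200), np.log10(7000), N_BANDS + 1)

def electrode_levels(speech: np.ndarray) -> np.ndarray:
    """One stimulation level per band: mean envelope of the band signal."""
    levels = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfiltfilt(sos, speech)      # stage (2): bandpass filter
        envelope = np.abs(hilbert(band))     # slowly varying loudness
        levels.append(envelope.mean())
    return np.asarray(levels)

# A toy "vowel": two formant-like tones at 500 Hz and 1500 Hz.
t = np.arange(0, 0.1, 1 / FS)
vowel = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
print(electrode_levels(vowel).round(3))  # energy peaks in two of the bands
```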

Speaking from personal experience, Vint Cerf, whose wife has two cochlear implants, explains the working of the cochlear implant, a sensory neural device whose speech processor takes in sound from a microphone. First, it does a Fourier transform to figure out which frequencies are present in the speech utterance or other sound input. Then it figures out which electrodes implanted in the cochlea should be stimulated to persuade the brain that it is hearing. To do this, you must understand what the signal input produces in the normal human auditory system and what nerve signals the brain detects when sounds of certain frequencies and amplitudes arrive.

So this mechanical system effectively convinces the brain that it has a sensory neural capability that it no longer has from biology. Cerf mentioned that in his wife’s case, spinal meningitis destroyed the stereocilia (the mechanosensing organelles of hair cells) inside the cochlea, which normally move in synchrony with the incoming frequencies: the long hairs with the low frequencies, and the short hairs with the higher ones. As they move, they generate electrical signals that the brain interprets as sound. The implant is an artificial construct that reproduces those signals. That the cochlear implant is now standard therapy, as opposed to the experimental device it was 25 years ago, is an example of the success of this type of BCI.

—”Sensory neural implants for sight and sound are feasible.”—

Vint Cerf believes that sensory neural implants will be increasingly feasible for sight and sound, and we are starting to see some efforts at haptic interfaces. Sensory-motor capability is also becoming increasingly feasible: someone who has lost a limb but retains normal neural capability can get assistance from a BCI, notes Cerf. The brain’s thinking can stimulate the sensory-motor neurons, and that stimulus is detected and reproduced in an artificial hand, arm, leg, and what have you.

BCI and Rehabilitation Medicine

BCI is currently influencing physical medicine and rehabilitation. BCI systems used for motor control record neural activity associated with thoughts, perceptions, and motor intent; decode brain signals into commands for output devices; and perform the user’s intended action through an output device. BCI systems used for sensory augmentation transduce environmental stimuli into neural signals interpretable by the central nervous system and thus have the potential to reduce disability by facilitating a user’s interaction with the environment. Investigational BCI systems are being used in the rehabilitation setting as neuroprostheses to replace lost function and as potential plasticity-enhancing therapy tools to accelerate neuro recovery. Populations benefitting from the motor and somatosensory BCI systems include those with spinal cord injury, motor neuron disease, limb amputation, and stroke. Bockbrader et al. (2018) discuss the basic components of BCI for rehabilitation, including recording systems and locations, signal processing and translation algorithms, and external devices controlled through BCI commands. 

BCI systems need to deliver naturalistic and functional grasp speed, force, and dexterity to be useful in daily life. Bockbrader (2019) notes that in clinical trials, individuals with paralysis have achieved the most dexterous grasp control using robotic neuroprosthetics or neuromuscular stimulation orthotics controlled by intracortical BCI systems. The next steps are in progress, with the development of portable components and decoding algorithm optimization to simplify setup and calibration.

BCI: From Sensation and Perception to Cognition

Since antiquity, enhancing cognitive abilities has been a dream and mission for people and society. The development of languages to articulate thoughts, the invention of writing to augment and archive our memories, printing technology to replicate records of human thoughts and expressions, and the computer and communication technologies of the 20th century to accelerate information processing and human communication are all humanity’s efforts and triumphs in this direction. In the domain of BCI, cognitive enhancement, or the augmentation of brain functions, has attracted attention and become a trending topic. Efforts are underway to use AI to achieve cognitive augmentation: the enhancement of cognitive functions such as attention, memory, and learning. The last couple of years have seen many studies on boosting cognitive functions, with biochemical, physical, and behavioral strategies all being explored. While the field of cognitive augmentation using BCIs is still in its early stages, these studies demonstrate the potential of BCIs to enhance cognitive functions and induce neuroplasticity. Possible enhancements include:

    • Attention: BCIs have been used to enhance attention in healthy individuals and individuals with attention deficit hyperactivity disorder (ADHD). Hill et al. (2021) showed that transcranial alternating current stimulation (tACS) combined with EEG-based neurofeedback training improved attention and cognitive control in healthy adults.

    • Memory: BCIs have been used to improve memory performance in healthy individuals and individuals with memory impairments. Suthana et al. (2020) showed that deep brain stimulation (DBS) of the fornix, a brain region involved in memory processing, improved memory performance in patients with Alzheimer’s disease.

    • Learning: BCIs have been used to facilitate learning in healthy individuals and individuals with learning disabilities. Sánchez-Kuhn et al. (2021) showed that a closed-loop BCI system that used real-time feedback of EEG signals during a motor learning task improved learning and retention in healthy adults.

    • Brain plasticity: BCIs have been used to induce neuroplasticity, which refers to the brain’s ability to change and adapt in response to experience. For example, Pahor et al. (2020) showed that a BCI that combined motor imagery training with tACS increased motor cortex excitability and induced lasting changes in brain connectivity in healthy adults.

Jangwan et al. (2022) review the most common neuroscientific methods for monitoring and manipulating brain activity, essential for human cognitive enhancement, to draw attention to this emerging revolutionary technology, its challenges, and limitations, including ethical issues.

Vint Cerf is skeptical about cognitive implants because we do not understand “thought.”

Sensory-motor and sensory-neural signals are directly understandable to us: we can measure them; we can detect them. However, thought is different. It is probably distributed in the connections. It could be groups of neurons that form a thought, and to the best of our current ability, we do not know how even to model this. So we are still far from being able to detect, understand, or signal thought. We have yet to learn enough to do that. And the connectivity that would be required for such an implant is extreme, especially since thought is not concentrated in one small part of the brain.

Furthermore, we are very, very far from being able to do anything like a memory implant. Suppose you needed a collection of connectors scattered throughout the brain to deliver this sort of artificial cognitive stimulus: that is very invasive, and there is no one place where you can put those interfaces, as opposed to seeing, which has an optic nerve, or hearing, which has an auditory nerve.

AI: The Wonder and the Worry

The incredible advancements in technology have brought hope and, along with it, despair about the future. AI could take us to a utopian wonderland or bring us to the brink of dystopia. Some of the incredible advancements in BCI could usher in the wonder of sound and light for the profoundly deaf or blind, and protein folding predictions could yield game-changing breakthroughs in drug discovery. The Internet has already transformed our world in many ways, creating a flat world, giving voice to everyone, and thus ushering in a more egalitarian world by democratizing innovation and entrepreneurship, as Ajit Balakrishnan, an Indian Internet industry pioneer and author of the book Wave Rider, noted in a previous episode of InfoFire. As Hilbert (2022) notes, digital technology, including its omnipresent connectedness and powerful artificial intelligence, is the most recent long wave of humanity’s socioeconomic evolution.

In response to a question on the power of AI for good, especially for bringing healthcare to all, Vint Cerf said there are two things: Remote Medicine and Remote Surgery.

—”Remote medicine, yes. Remote surgery, no.”—

Remote medicine has been a dream for many. In countries like India in particular, it is important because in rural parts of the country people often do not have access to doctors and hospitals. For remote medicine to work, of course, we need connectivity, which we talked about earlier: we need more internet capability everywhere. We also need the ability to do medical sensing locally. Mobile phones are starting to show a certain capacity for being remote medical sensors. For example, we can sense breathing; we can use cameras with suitable attachments to examine the eye, look at lesions, and so on. Other kinds of sensors can measure blood pressure, pulse rates, and so on. Developing more and more devices that can do local sensing will be very useful for remote diagnosis, and it does not necessarily require any artificial intelligence, although in many cases the ability to accurately sense something does require the application of neural networks to filter the noise out of the signal and detect what you are looking for.

A combination of artificial intelligence, machine learning, and remote sensing could substantially improve healthcare. Though it has been attempted, I would not argue that we should do remote surgery using robotic equipment. I would not want to be the first person to test remote surgery over the Internet, out of concern that the surgery would go awry if the Internet breaks. And that would be scary. However, remote diagnosis is entirely feasible, so I am looking forward to that happening. But it can only happen if, for example, we get reliable sources of electricity to run our laptops, desktops, mobiles, and so on, and an Internet that is highly reliable and has adequate speed and latency for all these kinds of interactive things to work.
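
Cerf’s point about sensing plus noise filtering can be made concrete with a small sketch: estimating pulse rate from a noisy, photoplethysmogram-like trace of the kind a phone camera might capture. A classical bandpass filter stands in here for the neural-network denoising he mentions; the 30 Hz frame rate, the 0.7-3.5 Hz pass band (42-210 bpm), and the synthetic signal are all illustrative assumptions.

```python
# Hedged sketch: recover a pulse rate from a noisy PPG-like trace by
# filtering out-of-band noise, then reading off the dominant frequency.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 30  # camera frames per second (assumed)

def pulse_rate_bpm(ppg: np.ndarray) -> float:
    sos = butter(2, [0.7, 3.5], btype="bandpass", fs=FS, output="sos")
    clean = sosfiltfilt(sos, ppg)              # suppress drift and jitter
    spectrum = np.abs(np.fft.rfft(clean))
    freqs = np.fft.rfftfreq(clean.size, 1 / FS)
    return 60.0 * freqs[np.argmax(spectrum)]   # dominant frequency in bpm

# Ten seconds of synthetic signal: a 1.2 Hz pulse (72 bpm) buried in noise.
t = np.arange(0, 10, 1 / FS)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.random.randn(t.size)
print(round(pulse_rate_bpm(ppg)))  # ~72
```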

AI and the Future of Human Agency

While society and most of us have embraced the new all-pervasive digital life on the Internet, gently and subtly guided by AI, whether through the autocomplete feature of Google and other search engines or through virtual assistants such as Siri and Alexa, the fear of losing human agency has been a constant worry. Recently, the Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on the “future of human agency”; 540 technology innovators, developers, business and policy leaders, researchers, academics, and activists responded. They are split about how much control people will retain over essential decision-making as digital systems and AI spread, while agreeing that powerful corporate and government authorities will expand the role of AI in people’s daily lives in useful ways. Nevertheless, many worry that these systems will diminish individuals’ ability to control their choices. At the same time, experts on both sides of the issue agree that the current moment is a turning point that will determine a great deal about the authority, autonomy, and agency of humans as digital technology spreads into more aspects of daily life.

—”It is a choice we can make and not an inevitable problem.”—

When asked for his views on the “loss of human agency,” Vint Cerf is optimistic that while it is of concern, it is a choice we can make and not an inevitable problem.

Large Language Models and Salad Shooters

Speaking about the risks of LLMs, Vint Cerf shared his experience of getting ChatGPT to write his obituary and how inaccurate it was. Given that obituaries are boilerplate texts and that there is a huge amount of information about me, I was expecting better accuracy, said Vint.

—”LLMs are like salad shooters.”—

So, even when you train these LLMs on factual information, they chop it all up into tokens (short phrases of text) and glue tokens together one after the other, determining at each step what the high-probability next token is going to be. The LLMs then produce text based on all those high-probability choices. So I would say LLMs are like salad shooters, chopping up and spraying into the salad bowl: you put in the truth, and it chops it up and shoots. The problem is that the facts get conflated with each other in the wrong way, and the system does not know that. Thus, even when you train on facts, it may produce non-facts and misinformation. So this system is okay for entertainment but not for other things where it causes something to happen, either by giving advice or by taking a specific action. We should be concerned about that.
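
The salad-shooter effect can be reproduced at toy scale. The sketch below is a deliberately tiny stand-in for an LLM (a bigram next-token model over two invented true sentences, not a real language model); it shows how sampling a likely next token can splice two facts into one fluent falsehood:

```python
# Toy "salad shooter": a bigram next-token model trained on two true
# sentences can recombine their tokens into a fluent non-fact.
import random
from collections import defaultdict

corpus = [
    "vint cerf co-designed the tcp/ip protocols .",
    "tim berners-lee invented the world wide web .",
]

# "Chop it all up into tokens" and count what follows what.
nxt = defaultdict(list)
for sentence in corpus:
    tokens = sentence.split()
    for a, b in zip(tokens, tokens[1:]):
        nxt[a].append(b)

def generate(start: str, max_len: int = 12) -> str:
    out = [start]
    while out[-1] in nxt and len(out) < max_len:
        out.append(random.choice(nxt[out[-1]]))  # sample a likely next token
    return " ".join(out)

# The two sentences share the token "the", so generation can hop between
# them, e.g. producing "vint cerf co-designed the world wide web ."
random.seed(0)
print(generate("vint"))
```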

—”Like in the art world, adopt provenance to overcome misinformation/disinformation.”—

One way to overcome this kind of misinformation is to have provenance, as in the art world, says Cerf. We must find a way of speaking to the provenance of the output from these LLMs.
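
What “speaking to the provenance of the output” might look like in practice: one minimal sketch, assuming the model operator signs each output with an Ed25519 key (here via the Python cryptography package) and publishes the verification key. Standards efforts such as C2PA pursue the same goal for media at scale.

```python
# Hedged sketch of output provenance: sign generated text so a reader
# can verify who produced it and that it was not altered in transit.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # held by the model operator
public_key = signing_key.public_key()       # published for verifiers

output = b"LLM-generated paragraph goes here."
signature = signing_key.sign(output)

# Any recipient with the public key can check origin and integrity.
try:
    public_key.verify(signature, output)
    print("provenance verified")
except InvalidSignature:
    print("tampered or unattributed output")
```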

AI and Existential Crisis: Accountability, Guardrails, and Regulations

—”It is all about accountability, liability, and regulations.”—

Citing the AI guardrails mantra that Google adopts, Vint says that holding companies accountable for their use of these technologies, especially in high-risk areas, and ensuring they follow Responsible AI principles is vital. Acknowledging that the urge to win the AI race could lead to unbridled competition, Vint believes that regulating bad outcomes and ensuring safety is the best way forward. Legislation that incentivizes companies to constrain their use of these technologies and ensure safe engagement is possible, as is done in some parts of the world. He added that it is not enough to have legislation that simply says we do not want algorithms that cause harm; it is important to work out the frameworks and details of implementing the idea of Responsible AI.

In a recent survey (https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/), experts assigned a median probability of 5% to 10% to AI posing an existential risk. Specifically discussing the worry about democracy in crisis and existential concerns, Vint believes that provenance, increased authentication, and accountability and liability through a legislative framework could ensure our technologies are safe. However, a healthy skepticism about the validity of some of these algorithmic outcomes is necessary, and it is also important to have implementable legislation developed through engagement and partnership with all stakeholders.

Finally, it boils down to trust. Responding to a question on injecting trust into the Internet, he says we can safeguard it through enhanced authentication.

Future Technologies Waiting to Happen

Moving beyond the Internet and AI, Vint Cerf spoke about two milestones yet to happen, quantum computing and the interplanetary Internet, which will transform everything. In the context of the interplanetary Internet, Vint Cerf raised the issue of commercializing space travel and its attendant challenges. He says that if you are going to commercialize space travel and space use, you have to consider the possibility of ownership; however, the 1967 Outer Space Treaty needs to change, as it is not sustainable.

In conclusion, Vint Cerf believes that we are living in exciting times. Future scientists and researchers need to focus primarily on astrophysics and computation because we know less about the universe than we thought we knew in 1900, and the future will be computational everything.

As Stephen Wolfram said at the Information Universe 2022 conference: “Making Everything Computational Including the Universe.”

Listen to Vint Cerf and go on an exciting journey into the future of everything.

References

Al-Janabi, A. (2022). Has DeepMind’s AlphaFold solved the protein folding problem? BioTechniques, 72(3), 73–76. https://doi.org/10.2144/btn-2022-0007

Anfinsen, C. (1973). Principles that govern the folding of protein chains. Science, 181(4096), 223–230.

Baek, M., et al. (2021). Accurate prediction of protein structures and interactions using a three-track neural network. Science, 373(6557), 871–876.

Bockbrader, M. (2019). Upper limb sensorimotor restoration through brain–computer interface technology in tetraparesis. Current Opinion in Biomedical Engineering, 11, 85-101. https://doi.org/10.1016/j.cobme.2019.09.002

Bockbrader, M. A., Francisco, G., Lee, R., Olson, J., Solinsky, R., & Boninger, M. L. (2018). Brain-computer interfaces in rehabilitation medicine. PM&R, 10(9), S233–S243. https://doi.org/10.1016/j.pmrj.2018.05.028

Bostrom N. (2014). Superintelligence: paths dangers strategies (First). Oxford University Press.

Brain–computer interface. (2023, April 13). In Wikipedia.

Callaway, E. (2022). What’s next for the AI protein-folding revolution? Nature, 604, 234–238.

Cerf, V., & Kahn, R. (1974). A protocol for packet network intercommunication. IEEE Transactions on Communications, 22(5), 637–648.

Chalmers, D. J. (2016). The singularity: A philosophical analysis. Science fiction and philosophy: From time travel to superintelligence, 171-224.

Clarke A. C. (1973). Profiles of the future; an inquiry into the limits of the possible (Rev.). Harper & Row.

Dill, K. A., Ozkan, S. B., Shell, M. S., & Weikl, T. R. (2008). The protein folding problem. Annual Review of Biophysics, 37, 289–316. https://doi.org/10.1146/annurev.biophys.37.092707.153558

Dickens, C. (1859). A Tale of Two Cities: I-II (Vol. 1). B. Tauchnitz.

Eden, A. H., Steinhart, E., Pearce, D., & Moor, J. H. (2013). Singularity Hypotheses: An Overview: Introduction to: Singularity Hypotheses: A Scientific and Philosophical Assessment. Singularity Hypotheses: A scientific and philosophical assessment, 1-12.

Gao, X., Wang, Y., Chen, X., & Gao, S. (2021). Interface, interaction, and intelligence in generalized brain–computer interfaces. Trends in Cognitive Sciences, 25(8), 671-684. https://doi.org/10.1016/j.tics.2021.04.003

Hilbert, M. (2022). Digital technology and social change: The digital transformation of society from a historical perspective. Dialogues in Clinical Neuroscience.

Hill, N. J., Martin, T. J., Marzano, F., & Holmes, P. (2021). Combining tACS and neurofeedback improves cognitive control and attentional processing. Brain stimulation, 14(1), 79-87.

Hochberg, L. R., Serruya, M. D., Friehs, G. M., Mukand, J. A., Saleh, M., Caplan, A. H., Branner, A., Chen, D., Penn, R. D., & Donoghue, J. P. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099), 164-171. https://doi.org/10.1038/nature04970

Jangwan, N. S., Ashraf, G. M., Ram, V., Singh, V., Alghamdi, B. S., Abuzenadah, A. M., & Singh, M. F. (2022). Brain augmentation and neuroscience technologies: Current applications, challenges, ethics, and future prospects. Frontiers in Systems Neuroscience, 16. https://doi.org/10.3389/fnsys.2022.1000495

Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., … & Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583–589.

Kawala-Sterniuk, A., Browarska, N., Al-Bakri, A., Pelc, M., Zygarlicki, J., Sidikova, M., Martinek, R., & Gorzelanczyk, E. J. (2021). Summary of over Fifty Years with Brain-Computer Interfaces—A Review. Brain Sciences, 11(1). https://doi.org/10.3390/brainsci11010043

Kissinger, H. A., Schmidt, E., & Huttenlocher, D. (2021). The age of AI: and our human future. Hachette UK.

Kurzweil, R. (1990). The age of intelligent machines. MIT Press.

Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. Viking.

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.

Ornes, S. (2022). Researchers turn to deep learning to decode protein structures. Proceedings of the National Academy of Sciences, 119(10), e2202107119.

Pahor, A., Jaušovec, N., & Galičič, M. (2020). Motor cortex excitability and connectivity change in response to motor imagery and action observation combined with transcranial alternating current stimulation (tACS): a single-blind randomized controlled trial. Frontiers in Neuroscience, 14, 889.

Peters, B. R., Wyss, J., & Manrique, M. (2010). Worldwide trends in bilateral cochlear implantation. The laryngoscope, 120(S2), S17-S44.

Rao, R. P. (2019). Towards neural co-processors for the brain: Combining decoding and encoding in brain–computer interfaces. Current Opinion in Neurobiology, 55, 142–151. https://doi.org/10.1016/j.conb.2019.03.008

Sánchez-Kuhn, A., Lameira, A. P., Ruiz-López, M., Soriano-Ruiz, J. L., & Gomez-Pilar, J. (2021). Brain-computer interface based on EEG signals and closed-loop sensory feedback improves motor learning and retention. Sensors, 21(6), 1969

Senior, A. W., Evans, R., Jumper, J., Kirkpatrick, J., Sifre, L., Green, T., … & Hassabis, D. (2020). Improved protein structure prediction using potentials from deep learning. Nature, 577(7792), 706–710.

Shanahan, M. (2015). The technological singularity. MIT Press.

Suthana, N. A., Ramirez, E., Mosher, C., Cahan, M., Knowlton, B., & Engel, J. Jr. (2020). Deep brain stimulation of the fornix improves episodic memory in Alzheimer’s disease. Annals of Neurology, 87(1), 67-79.

Wander, J. D., & Rao, R. P. N. (2014). Brain-computer interfaces: A powerful tool for scientific inquiry. Current Opinion in Neurobiology, 25, 70–76. https://doi.org/10.1016/j.conb.2013.11.013

Wells, H. G. (2021). World brain (B. Sterling & J. M. Reagle, Eds.). MIT Press. (Original work published 1938)

Yuan, H., & He, B. (2014). Brain-Computer Interfaces Using Sensorimotor Rhythms: Current State and Future Perspectives. IEEE Transactions on bio-medical engineering, 61(5), 1425. https://doi.org/10.1109/TBME.2014.2312397

Zhang, X., Ma, Z., Zheng, H., Li, T., Chen, K., Wang, X., Liu, C., Xu, L., Wu, X., Lin, D., & Lin, H. (2020). The combination of brain-computer interfaces and artificial intelligence: Applications and challenges. Annals of Translational Medicine, 8(11). https://doi.org/10.21037/atm.2019.11.109

Cite this article in APA as: Urs, S. (2023, May 11). Is AI the future of everything? A fireside chat with one of the fathers of the Internet, Vint Cerf. Information Matters, Vol. 3, Issue 5. https://informationmatters.org/2023/05/is-ai-the-future-of-everything-a-fireside-chat-with-one-of-the-fathers-of-the-internet-vint-cerf

Shalini Urs

Dr. Shalini Urs is an information scientist with a 360-degree view of information and has researched issues ranging from the theoretical foundations of information sciences to Informatics. She is an institution builder whose brainchild is the MYRA School of Business (www.myra.ac.in), founded in 2012. She also founded the International School of Information Management (www.isim.ac.in), the first Information School in India, as an autonomous constituent unit of the University of Mysore in 2005 with grants from the Ford Foundation and Informatics India Limited. She is currently involved with Gooru India Foundation as a Board member (https://gooru.org/about/team) and is actively involved in implementing Gooru’s Learning Navigator platform across schools. She is professor emerita at the Department of Library and Information Science of the University of Mysore, India. She conceptualized and developed the Vidyanidhi Digital Library and eScholarship portal in 2000 with funding from the Government of India, which became a national initiative with further funding from the Ford Foundation in 2002.