
Artificial intelligence might eventually write this article

I hope my headline is an overstatement, purely for the sake of my own job security. In this week’s Vergecast artificial intelligence episode, we explore the world of large language models and how they might be used to produce AI-generated text in the future. Maybe it’ll give writers ideas for the next major franchise series, or write full blog posts, or, at the very least, fill up websites with copy that’s too arduous for humans to produce.

Among the people we speak to is Nick Walton, the cofounder and CEO of Latitude, which makes AI Dungeon, a game that builds its plot around whatever you type into it. (That’s how Walton ended up in a band of traveling goblins — you’ll just have to listen to understand how that makes sense!) We also chat with Samanyou Garg, founder of Writesonic, a company that offers various AI-powered writing tools. It can even have AI write a blog post — I’m shaking! But really.

Anyway, toward the end of the episode, I chat with James Vincent, The Verge’s AI and machine learning senior reporter, who calms me down and helps me understand what the future of text-generation AI might be. He’s great. Check out the episode above, and make sure you subscribe to the Vergecast feed for one more episode of this AI miniseries, as well as the regular show. See you there!


DeepMind creates ‘transformative’ map of human proteins drawn by artificial intelligence

AI research lab DeepMind has created the most comprehensive map of human proteins to date using artificial intelligence. The company, a subsidiary of Google-parent Alphabet, is releasing the data for free, with some scientists comparing the potential impact of the work to that of the Human Genome Project, an international effort to map every human gene.

Proteins are long, complex molecules that perform numerous tasks in the body, from building tissue to fighting disease. Their purpose is dictated by their structure, which folds like origami into complex and irregular shapes. Understanding how a protein folds helps explain its function, which in turn helps scientists with a range of tasks — from pursuing fundamental research on how the body works, to designing new medicines and treatments.

Previously, determining the structure of a protein relied on expensive and time-consuming experiments. But last year DeepMind showed it can produce accurate predictions of a protein’s structure using AI software called AlphaFold. Now, the company is releasing hundreds of thousands of predictions made by the program to the public.

“I see this as the culmination of the entire 10-year-plus lifetime of DeepMind,” company CEO and co-founder Demis Hassabis told The Verge. “From the beginning, this is what we set out to do: to make breakthroughs in AI, test that on games like Go and Atari, [and] apply that to real-world problems, to see if we can accelerate scientific breakthroughs and use those to benefit humanity.”

Two examples of protein structures predicted by AlphaFold (in blue) compared with experimental results (in green), at 90.7 GDT accuracy on the left and 93.3 GDT accuracy on the right.
Image: DeepMind

There are currently around 180,000 protein structures available in the public domain, each produced by experimental methods and accessible through the Protein Data Bank. DeepMind is releasing predictions for the structure of some 350,000 proteins across 20 different organisms, including animals like mice and fruit flies, and bacteria like E. coli. (There is some overlap between DeepMind’s data and pre-existing protein structures, but exactly how much is difficult to quantify because of the nature of the models.) Most significantly, the release includes predictions for 98 percent of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. It isn’t the first public dataset of human proteins, but it is the most comprehensive and accurate.

If they want, scientists can download the entire human proteome for themselves, says AlphaFold’s technical lead John Jumper. “There is a HumanProteome.zip effectively, I think it’s about 50 gigabytes in size,” Jumper tells The Verge. “You can put it on a flash drive if you want, though it wouldn’t do you much good without a computer for analysis!”

After launching this first tranche of data, DeepMind plans to keep adding to the store of proteins, which will be maintained by Europe’s flagship life sciences lab, the European Molecular Biology Laboratory (EMBL). By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works,” according to Edith Heard, director general of the EMBL.

The data will be free in perpetuity for both scientific and commercial researchers, says Hassabis. “Anyone can use it for anything,” the DeepMind CEO noted at a press briefing. “They just need to credit the people involved in the citation.”

Understanding a protein’s structure is useful for scientists across a range of fields. The information can help design new medicines, synthesize novel enzymes that break down waste materials, and create crops that are resistant to viruses or extreme weather. Already, DeepMind’s protein predictions are being used for medical research, including studying the workings of SARS-CoV-2, the virus that causes COVID-19.

New data will speed these efforts, but scientists note it will still take a lot of time to turn this information into real-world results. “I don’t think it’s going to be something that changes the way patients are treated within the year, but it will definitely have a huge impact for the scientific community,” Marcelo C. Sousa, a professor at the University of Colorado’s biochemistry department, told The Verge.

Scientists will have to get used to having such information at their fingertips, says DeepMind senior research scientist Kathryn Tunyasuvunakool. “As a biologist, I can confirm we have no playbook for looking at even 20,000 structures, so this [amount of data] is hugely unexpected,” Tunyasuvunakool told The Verge. “To be analyzing hundreds of thousands of structures — it’s crazy.”

Notably, though, DeepMind’s software produces predictions of protein structures rather than experimentally determined models, which means that in some cases further work will be needed to verify the structure. DeepMind says it spent a lot of time building accuracy metrics into its AlphaFold software, which ranks how confident it is for each prediction.
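
For researchers who want to gauge that confidence programmatically, the scores are easy to get at: AlphaFold’s database files store the per-residue confidence measure, called pLDDT, in the B-factor column of each structure file. Here is a minimal Python sketch using the open-source Biopython library; the file name is just a placeholder:

    # Minimal sketch: summarize per-residue confidence (pLDDT) in an
    # AlphaFold prediction. AlphaFold DB files carry pLDDT (0-100) in the
    # B-factor column. The file name below is a placeholder.
    # Requires: pip install biopython
    from Bio.PDB import PDBParser

    parser = PDBParser(QUIET=True)
    structure = parser.get_structure("prediction", "AF-P12345-F1-model.pdb")

    # Every atom in a residue carries the same pLDDT, so read one atom each.
    plddt = [res["CA"].get_bfactor() for res in structure.get_residues()
             if "CA" in res]

    print(f"{len(plddt)} residues, mean pLDDT {sum(plddt) / len(plddt):.1f}")
    print(f"residues with pLDDT > 90 (high confidence): {sum(p > 90 for p in plddt)}")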

Example protein structures predicted by AlphaFold.
Image: DeepMind

Predictions of protein structures are still hugely useful, though. Determining a protein’s structure through experimental methods is expensive, time-consuming, and relies on a lot of trial and error. That means even a low-confidence prediction can save scientists years of work by pointing them in the right direction for research.

Helen Walden, a professor of structural biology at the University of Glasgow, tells The Verge that DeepMind’s data will “significantly ease” research bottlenecks, but that “the laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain.

Sousa, who has previously used data from AlphaFold in his work, says for scientists the impact will be felt immediately. “In our collaboration we had with DeepMind, we had a dataset with a protein sample we’d had for 10 years, and we’d never got to the point of developing a model that fit,” he says. “DeepMind agreed to provide us with a structure, and they were able to solve the problem in 15 minutes after we’d been sitting on it for 10 years.”

Why protein folding is so difficult

Proteins are constructed from chains of amino acids, which come in 20 different varieties in the human body. Any individual protein can comprise hundreds of amino acids, each of which can fold and twist in different directions, so a molecule’s final structure has an incredibly large number of possible configurations. One estimate is that the typical protein can be folded in 10^300 ways — that’s a 1 followed by 300 zeroes.
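
To make that scale concrete, here is a quick back-of-the-envelope sketch in Python. The per-residue numbers are illustrative assumptions chosen to reproduce the 10^300 order of magnitude, not figures from DeepMind:

    # Back-of-the-envelope check of the 10^300 figure. The per-residue
    # numbers are illustrative assumptions, chosen to match the estimate.
    residues = 300
    conformations_per_residue = 10
    total = conformations_per_residue ** residues  # 10**300 configurations

    # Even sampling a trillion conformations per second since the Big Bang
    # (~4.3e17 seconds) explores a vanishing fraction of that space:
    sampled = 10**12 * int(4.3e17)
    print(f"fraction explored: {sampled / total:.0e}")  # ~4e-271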

Because proteins are too small to examine with microscopes, scientists have had to indirectly determine their structure using expensive and complicated methods like nuclear magnetic resonance and X-ray crystallography. The idea of determining the structure of a protein simply by reading a list of its constituent amino acids has been long theorized but difficult to achieve, leading many to describe it as a “grand challenge” of biology.

In recent years, though, computational methods — particularly those using artificial intelligence — have suggested such analysis is possible. With these techniques, AI systems are trained on datasets of known protein structures and use this information to create their own predictions.

DeepMind’s AlphaFold software has significantly increased the accuracy of computational protein-folding, as shown by its performance in the CASP competition.
Image: DeepMind

Many groups have been working on this problem for years, but DeepMind’s deep bench of AI talent and access to computing resources allowed it to accelerate progress dramatically. Last year, the company competed in an international protein-folding competition known as CASP and blew away the competition. Its results were so accurate that computational biologist John Moult, one of CASP’s co-founders, said that “in some sense the problem [of protein folding] is solved.”

DeepMind’s AlphaFold program has been upgraded since last year’s CASP competition and is now 16 times faster. “We can fold an average protein in a matter of minutes, most cases seconds,” says Hassabis. The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.

Liam McGuffin, a professor at Reading University who developed some of the UK’s leading protein-folding software, praised the technical brilliance of AlphaFold, but also noted that the program’s success relied on decades of prior research and public data. “DeepMind has vast resources to keep this database up to date and they are better placed to do this than any single academic group,” McGuffin told The Verge. “I think academics would have got there in the end, but it would have been slower because we’re not as well resourced.”

Why does DeepMind care?

Many scientists The Verge spoke to noted the generosity of DeepMind in releasing this data for free. After all, the lab is owned by Google-parent Alphabet, which has been pouring huge amounts of resources into commercial healthcare projects. DeepMind itself loses a lot of money each year, and there have been numerous reports of tensions between the company and its parent firm over issues like research autonomy and commercial viability.

Hassabis, though, tells The Verge that the company always planned to make this information freely available, and that doing so is a fulfillment of DeepMind’s founding ethos. He stresses that DeepMind’s work is used in lots of places at Google — “almost anything you use, there’s some of our technology that’s part of that under the hood” — but that the company’s primary goal has always been fundamental research.

“The agreement when we got acquired is that we are here primarily to advance the state of AGI and AI technologies and then use that to accelerate scientific breakthroughs,” says Hassabis. “[Alphabet] has plenty of divisions focused on making money,” he adds, noting that DeepMind’s focus on research “brings all sorts of benefits, in terms of prestige and goodwill for the scientific community. There’s many ways value can be attained.”

Hassabis predicts that AlphaFold is a sign of things to come — a project that shows the huge potential of artificial intelligence to handle messy problems like human biology.

“I think we’re at a really exciting moment,” he says. “In the next decade, we, and others in the AI field, are hoping to produce amazing breakthroughs that will genuinely accelerate solutions to the really big problems we have here on Earth.”


Symbl.ai, provider of conversational intelligence APIs and tools, gets $17M

Symbl.ai, the Seattle-based company that provides APIs developers can use to build apps capable of understanding natural human conversations, has raised $17 million in series A funding.

Symbl.ai says its APIs unlock machine learning algorithms from a variety of industry partners and sources. The company helps developers ingest conversation data from voice, email, chat, and social, and provides understanding and context around the language, without the need for upfront training data, wake words, or custom classifiers.

VentureBeat connected with cofounder and CEO Surbhi Rathore for a broader perspective on what this funding means for Symbl.ai and the emerging industry that research firm Gartner has called communication platform-as-a-service (CPaaS).

Conversational intelligence APIs

While a ton of vendors have jumped in to serve developers with APIs for conversational AI, Symbl.ai says it seeks to differentiate itself by providing an end-to-end conversation analytics platform.

Rathore told VentureBeat, “There are other API-first products that provide NLP services, custom classifiers, speech recognition — all of which are parts of the wider conversation intelligence capabilities. Symbl.ai stitches these pieces and more with connected context without dependency on huge data sets and machine learning knowledge.”

Many conversational AI vendors focus on specific industries. In the sales industry, for example, companies like Gong.io and Chorus.ai are gaining momentum. But more end-user customers are seeking to build conversational AI experiences across various functional areas, including webinars, customer experience, team collaboration, and more.

Symbl.ai says it enables businesses to compete with these brands and build differentiated, AI-driven experiences in their products cost effectively and with a shorter time from lab to market. Symbl.ai’s offerings include Transcription Plus, Conversation Analytics, Conversation Topics, Contextual Insights, Customer Tracker, and Summarization.
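
As a rough sketch of what conversation intelligence looks like from a developer’s seat, the Python snippet below submits a short conversation and fetches derived insights. The endpoint, authentication scheme, and field names are hypothetical placeholders for illustration, not Symbl.ai’s actual API:

    # Hypothetical conversation-intelligence API call; endpoint, auth, and
    # response fields are placeholders, not Symbl.ai's real interface.
    import requests

    API_BASE = "https://api.example-conversation-ai.com/v1"  # placeholder
    HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

    # Submit a short text conversation for analysis.
    resp = requests.post(f"{API_BASE}/conversations", headers=HEADERS, json={
        "type": "text",
        "messages": [
            {"speaker": "agent", "text": "I'll send the pricing doc today."},
            {"speaker": "customer", "text": "Great, let's demo on Friday."},
        ],
    })
    resp.raise_for_status()
    conversation_id = resp.json()["id"]

    # Pull derived artifacts such as topics, action items, and a summary.
    for artifact in ("topics", "action-items", "summary"):
        r = requests.get(f"{API_BASE}/conversations/{conversation_id}/{artifact}",
                         headers=HEADERS)
        print(artifact, "->", r.json())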

Turning conversation into data

Recent projections from Mordor Intelligence show the CPaaS industry growing from $4.54 billion in 2020 to $26.03 billion by 2026, a compound annual growth rate (CAGR) of 34.3%. By 2023, 90% of global enterprises (up from 20% in 2020) are expected to leverage API-enabled CPaaS offerings as a strategic IT skill set to bolster their digital competitiveness. These figures offer critical insight into today’s unanticipated adoption rate of digital communications. Even with this recent growth, however, Symbl.ai claims current solutions do not enable businesses to unlock the potential of communication content and analyze conversation data at scale.

“Businesses are looking to differentiate beyond ‘just’ enabling communication so as to meet their customer expectations,” the company’s press release stated. The company says it will continue to ease developers’ access to communication data and help product managers to iterate early on the new user experiences for their product — eliminating the need for building ML models from the ground up.

“We are bullish on empowering builders with the right toolkit to unlock the value in conversation data across all digital channels. Conversations are by far the most unused data source, and there is so much potential in this data. Tools need to ease the pain point of capturing, managing, understanding, and acting on this data — and that’s exactly what we are focused on. With this financing, we will further speed up our product roadmap to scale awareness, adoption, and growth of conversation intelligence driven experiences for developers,” Rathore said.

Speaking on what this means for the CPaaS industry, she added, “CPaaS platforms must enable developers to get access to the raw streams easily — so that experiences built go beyond just enabling voice and video. Symbl.ai is excited to partner with leading CPaaS platforms to collectively enable their adopters to extend communication experiences and use machine learning to understand the content of that communication, take actions on it, and learn from the best via plug and play APIs.”

Building the technology

Symbl.ai says a major industry trend is transcription, which has become a mainstream use case for live captioning. “This is critical for products to build accessible and inclusive experiences — voice analytics that were primarily restricted to the contact center industry are finding their way into other verticals like recruitment, edtech, marketing, and more. Products and businesses are starting to leverage insights from text, chat, email, calls, and video from customer interactions to personalize the user experience,” Rathore said.

“The trend of digital conversations, which has increased over the past year, is here to stay. Surbhi, [CTO] Toshish [Jawale], and their team have built a special engine to deliver this data in a developer-friendly way, and we are excited to partner with them in their next leg of growth,” said Ray Lane, managing partner at Great Point Ventures.

According to Rathore, “Traditional NLP and NLU techniques fall short on conversations.” Symbl.ai’s technology combines several deep learning and machine learning models to convert unstructured conversations from raw speech or text into structured format that can be programmatically accessed and leveraged to build intelligent experiences in applications in a plug-and-play manner.

“Our system contains more than 50 different models built in-house working together to solve three layers of problems. The first layer models the foundational characteristics of conversations, language, and speech, leveraging various existing languages and speech modeling techniques. The second layer is for contextual correlation modeling between different concepts and their relationships. The third layer performs reasoning tasks to draw conclusions or eliminate the improbabilities. These layers are specifically required for understanding human conversations, enabling our system to model several dimensions of human conversation that help us to draw inferences and conclusions for actionable insights, irrespective of the domain of conversation,” Rathore said.

Funding expectations

This investment round was led by Great Point Ventures, with additional participation from Gutbrain Ventures, PBJ Capital, Crosscut Ventures, and Flying Fish Ventures.

Rathore said this additional capital is expected to accelerate the development of its end-to-end conversational intelligence platform, foster team expansion at the company, and meet the growing demand for the technology.

The company says its customers include Rev.ai, Airmeet, Intermedia, Remo, SpectrumVoip, Intuit, Bandwidth, and Hubilo. “Symbl.ai enables developers to not spend months but just days to integrate, saving time and money with the most accurate and scalable conversation intelligence stack,” the press release stated.

Against the backdrop of this funding, Lane of Great Point Ventures will join Symbl.ai’s board of directors. The round will also go toward recruiting top talent.


Data intelligence provider Alation acquires AI insights company Lyngo Analytics

Enterprise data intelligence solution provider Alation Inc. has announced the acquisition of Lyngo Analytics, a data insights and AI vendor. The deal will enable Alation to scale its data intelligence offerings, help companies drive their data culture, and elevate the business user experience.

Lyngo Analytics’ natural language interface enables users to ask simple, business-focused questions to uncover data and insights. Its AI and machine-learning technology will be integrated into Alation’s platform, further strengthening its intelligent and user-friendly machine-learning data catalog. This will also enable Alation to convert natural language questions into SQL, providing greater support and functionality for non-technical users.

“Alation created the first machine learning data catalog, and we’re known for providing the most user-friendly interface on the market,” said Raj Gossain, Chief Product Officer, Alation. “With this acquisition, we’re building on the best. We’re doubling down on key aspects of the platform that will help drive data culture and spur innovation and growth. Jennifer and Joachim developed a unique solution for a complex data and analytics issue, and I’m excited to welcome them to the Alation team.”

Easier access to data intelligence

Integrating Lyngo Analytics’ interface with Alation’s platform will make it easier for business users to identify and implement data-driven insights across an enterprise’s complement of data sources. Users can ask questions in natural language and receive data insights without needing SQL expertise or a data analyst’s assistance, letting every employee in an organization take control of their own data and analytics and help drive the company’s data culture.
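
The underlying idea, mapping business vocabulary onto catalog columns and emitting SQL, can be illustrated with a toy Python sketch. This is not Lyngo’s or Alation’s actual implementation (production systems rely on trained machine learning models); it only shows the input/output contract such a feature provides:

    # Toy natural-language-to-SQL translation; illustrative only.
    import re

    # Hypothetical mapping of business vocabulary to catalog columns.
    SYNONYMS = {"revenue": "sales_amount", "sales": "sales_amount",
                "region": "sales_region"}

    def to_sql(question: str) -> str:
        """Translate a narrow class of questions into SQL."""
        m = re.match(r"total (\w+) by (\w+)", question.lower())
        if m is None:
            raise ValueError("question not understood")
        measure = SYNONYMS.get(m.group(1), m.group(1))
        dimension = SYNONYMS.get(m.group(2), m.group(2))
        return f"SELECT {dimension}, SUM({measure}) FROM sales GROUP BY {dimension};"

    print(to_sql("Total revenue by region"))
    # SELECT sales_region, SUM(sales_amount) FROM sales GROUP BY sales_region;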

Continued growth

Alation’s recent acquisition follows its June 21 announcement of a $110 million Series D funding round and a $1.2 billion market valuation. The company’s nearly 300 customers include Cisco, Exelon, GE Aviation, Munich Re, NASDAQ, and Pfizer. Alation was recently named a leader in The Forrester Wave™: Data Governance Solutions, Q3 2021 report, and Snowflake’s Data Governance Partner of the Year.

Alation’s enterprise data intelligence solutions include data search and discovery, data governance, data stewardship, analytics, and digital transformation. In addition to its data catalog offerings, Alation’s Behavioral Analysis Engine, inbuilt collaboration capabilities, and open interfaces support the combination of machine learning with human insight to provide solutions in data and metadata management.


To create AGI, we need a new theory of intelligence

This article is part of “the philosophy of artificial intelligence,” a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

For decades, scientists have tried to create computational imitations of the brain. And for decades, the holy grail of artificial general intelligence, computers that can think and act like humans, has continued to elude scientists and researchers.

Why do we continue to replicate some aspects of intelligence but fail to generate systems that can generalize their skills like humans and animals? One computer scientist who has been working on AI for three decades believes that to get past the hurdles of narrow AI, we must look at intelligence from a different and more fundamental perspective.

In a paper presented at the Brain-Inspired Cognitive Architectures for Artificial Intelligence conference (BICA*AI), Sathyanaraya Raghavachary, associate professor of computer science at the University of Southern California, discusses “considered response,” a theory that can generalize to all forms of intelligent life that have evolved and thrived on our planet.

Titled “Intelligence—consider this and respond!” the paper sheds light on the possible causes of the troubles that have haunted the AI community for decades and draws important conclusions, including the consideration of embodiment as a prerequisite for AGI.

Structures and phenomena

“Structures, from the microscopic to human level to cosmic level, organic and inorganic, exhibit (‘respond with’) phenomena on account of their spatial and temporal arrangements, under conditions external to the structures,” Raghavachary writes in his paper.

This is a general rule that applies to all sorts of phenomena we see in the world, from ice molecules becoming liquid in response to heat, to sand dunes forming in response to wind, to the solar system’s arrangement.

Raghavachary calls this “sphenomics,” a term he coined to differentiate from phenomenology, phenomenality, and phenomenalism.

“Everything in the universe, at every scale from subatomic to galactic, can be viewed as physical structures giving rise to appropriate phenomena, in other words, S->P,” Raghavachary told TechTalks.

Biological structures can be viewed in the same way, Raghavachary believes. In his paper, he notes that the natural world comprises a variety of organisms that respond to their environment. These responses can be seen in simple things such as the survival mechanisms of bacteria, as well as more complex phenomena such as the collective behavior exhibited by bees, ants, and fish as well as the intelligence of humans.

“Viewed this way, life processes, of which I consider biological intelligence — and where applicable, even consciousness — occur solely as a result of underlying physical structures,” Raghavachary said. “Life interacting with environment (which includes other life, groups…) also occurs as a result of structures (e.g., brains, snake fangs, sticky pollen…) exhibiting phenomena. The phenomena are the structures’ responses.”

Intelligence as considered response


In inanimate objects, the structures and phenomena are not explicitly evolved or designed to support processes we would call “life” (e.g., a cave producing howling noises as the wind blows by). Conversely, life processes are based on structures that consider and produce response phenomena.

However different these life forms might be, their intelligence shares a common underlying principle, Raghavachary says, one that is “simple, elegant, and extremely widely applicable, and is likely tied to evolution.”

In this respect, Raghavachary proposes in his paper that “intelligence is a biological phenomenon tied to evolutionary adaptation, meant to aid an agent survive and reproduce in its environment by interacting with it appropriately — it is one of considered response.”

The considered response theory is different from traditional definitions of intelligence and AI, which focus on high-level computational processing such as reasoning, planning, goal-seeking, and problem-solving in general. Raghavachary says that the problem with the usual AI branches — symbolic, connectionist, goal-driven — is not that they are computational but that they are digital.

“Digital computation of intelligence has — pardon the pun — no analog in the natural world,” Raghavachary said. “Digital computations are always going to be an indirect, inadequate substitute for mimicking biological intelligence — because they are not part of the S->P chains that underlie natural intelligence.”

There’s no doubt that the digital computation of intelligence has yielded impressive results, including the variety of deep neural network architectures that are powering applications from computer vision to natural language processing. But despite the similarity of their results to what we perceive in humans, what they are doing is different from what the brain does, Raghavachary says.

The “considered response” theory zooms back and casts a wider net that encompasses all forms of intelligence, including those that don’t fit the problem-solving paradigm.

“I view intelligence as considered response in that sense, emanating from physical structures in our bodies and brains. CR naturally fits within the S->P paradigm,” Raghavachary said.

Developing a theory of intelligence around the S->P principle can help overcome many of the hurdles that have frustrated the AI community for decades, Raghavachary believes. One of these hurdles is simulating the real world, a hot area of research in robotics and self-driving cars.

“Structure->phenomena are computation-free, and can interact with each other with arbitrary complexity,” Raghavachary says. “Simulating such complexity in a VR simulation is simply untenable. Simulation of S->P in a machine will always remain exactly that, a simulation.”

Embodied artificial intelligence


A lot of work in the AI field is what is known as “brain in a vat” solutions. In such approaches, the AI software component is separated from the hardware that interacts with the world. For example, deep learning models can be trained on millions of images to detect and classify objects. While those images have been collected from the real world, the deep learning model has not directly experienced them.

While such approaches can help solve specific problems, they will not move us toward artificial general intelligence, Raghavachary believes.

In his paper, he notes that there is not a single example of “brain in a vat” in nature’s diverse array of intelligent lifeforms. And thus, the considered response theory of intelligence suggests that artificial general intelligence requires agents that can have a direct embodied experience of the world.

“Brains are always housed in bodies, in exchange for which they help nurture and protect the body in numerous ways (depending on the complexity of the organism),” he writes.

Bodies provide brains with several advantages, including situatedness, sense of self, agency, free will, and more advanced concepts such as theory of mind (the ability to predict the experience of another agent based on your own) and model-free learning (the ability to experience first and reason later).

“A human AGI without a body is bound to be, for all practical purposes, a disembodied ‘zombie’ of sorts, lacking genuine understanding of the world (with its myriad forms, natural phenomena, beauty, etc.) including its human inhabitants, their motivations, habits, customs, behavior, etc.; the agent would need to fake all these,” Raghavachary writes.

Accordingly, an embodied AGI system would need a body that matches its brain, and both need to be designed for the specific kind of environment it will be working in.

“We, made of matter and structures, directly interact with structures, whose phenomena we ‘experience.’ Experience cannot be digitally computed — it needs to be actively acquired via a body,” Raghavachary said. “To me, there is simply no substitute for direct experience.”

In a nutshell, the considered response theory suggests that suitable pairings of synthetic brains and bodies that directly engage with the world should be considered life-like, and appropriately intelligent, and — depending on the functions enabled in the hardware — possibly conscious.

This means that you can create any kind of robot and make it intelligent by equipping it with a brain that matches its body and sensory experience.

“Such agents do not need to be anthropomorphic — they could have unusual designs, structures and functions that would produce intelligent behavior alien to our own (e.g., an octopus-like design, with brain functions distributed throughout the body),” Raghavachary said. “That said, the most relatable human-level AI would likely be best housed in a human-like agent.”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Artificial intelligence vs. neurophysiology: Why the difference matters

On the temple of Apollo in Delphi (Greece), it was written: “Nosce te ipsum” (Know thyself). These words are important to remember for everyone who wants to create artificial intelligence.

I continue my series of articles about the nature of human intelligence and the future of artificial intelligence systems. This article is a continuation of the article titled “Symbiosis Instead of Evolution — A New Idea about the Nature of Human Intelligence.”

In the previous article, after analyzing the minimum response time to a simple incoming signal, we found that the human brain may well turn out to be a binary system, consisting of two functional schemes of response to excitation: reflex and intellectual.

In this article, we will talk about the first, the reflex part. Together, we will try to find out how similar a reflex response scheme really is to an algorithm and how this might affect the future of artificial intelligence.

Similar does not mean exactly the same

What is the difference?

In popular science films, a nerve impulse is presented as a kind of signal that travels through nerve cells like wires. We perceive this as a biological analogy for an electrical impulse.

In fact, this is not the case at all. A nerve impulse is a sharp movement of sodium and potassium ions across the outer membrane of a neuron through voltage-gated ion channels. The process can be compared to the successive toppling of a row of cards or dominoes. After each nerve impulse, the neuron must return the ions to their original positions; in our example, this is like setting up the row of cards or dominoes again.

A nerve impulse is hard work. In its deep physical essence, a nerve impulse is closer to mechanical work than to the electrical signal many imagine it to be.

This severely limits the rate of signal transmission in biological tissue. A signal travels along non-myelinated, small-diameter fibers at only about one meter per second, the pace of a slow walk. In larger myelinated fibers, the speed rises to 45-60 kilometers per hour. Only in some large fibers with a thick myelin sheath and nodes of Ranvier does the speed reach 200-300 kilometers per hour.

On average, nerve impulses in our nervous system move about 3 million times slower than electrical signals in computer systems. Besides being slow, a nerve impulse also makes constant stops at synapses, the junctions between neurons, where the signal must cross the synaptic cleft before continuing on its way. We can say that the nerve impulse is a rather slow journey with transfers.
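
That “3 million times” figure is easy to sanity-check with rough numbers. The Python sketch below assumes electrical signals propagate at about two-thirds the speed of light in a conductor; that propagation speed is my assumption, while the fiber speeds come from the article:

    # Rough check of the speed gap. Electrical propagation speed is an
    # assumed ~2/3 of the speed of light; nerve speeds are from the article.
    electrical = 2 / 3 * 3.0e8  # m/s, assumed for metal interconnect

    nerve_speeds = {
        "unmyelinated fiber": 1.0,            # m/s, a slow walk
        "myelinated fiber": 60 / 3.6,         # 60 km/h in m/s
        "thick myelinated fiber": 300 / 3.6,  # 300 km/h in m/s
    }

    for fiber, v in nerve_speeds.items():
        print(f"{fiber}: electrical signals are ~{electrical / v:,.0f}x faster")
    # Ratios run from about 2.4 million to 200 million depending on fiber
    # type, bracketing the article's average figure of 3 million.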

All this suggests that the nerve impulse itself is already the result of serious effort, which simply must arrive somewhere at the end of the path.

The computer algorithm is of a completely different nature


The algorithms that work in computers are powered by sequences of voltage drops, or machine code consisting of ones and zeros.

Beyond speed and physical nature, there is a long list of important differences between reflex and algorithm. A nerve impulse, or reflex, is an inevitable response, while an algorithm is a set of rules or instructions designed to solve a specific problem.

In other words, the reflex can be wrong but cannot be silent; the algorithm, on the contrary, rarely makes mistakes but may give no answer at all if the instructions it contains cannot be executed.

The reflex knows the answer even before the task, and the algorithm learns the answer only after completing all the necessary steps.

A simple example

Imagine a simple problem: find the value of X in the formula 1 + X + 3 = 6. The algorithm will proceed step by step: first 6 - 1 = 5, then 5 - 3 = 2, so X = 2. The reflex will immediately answer X = 2. True, this will happen only if the reflex has already encountered such a situation and has empirically found out that answers 1 and 3 are incorrect.

But what if the situation changes and the question becomes harder: 1 + X + Y = 6? Faced with this, the algorithm will remain silent and give no answer; there is simply not enough initial data. The problem has several correct answers, and the algorithm cannot determine which one is intended.

For the reflex, nothing has changed: it will simply answer X = 2 and Y = 3 if it has already met such a task before. If not, it will still answer, but most likely with an error.
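
The contrast between the two schemes is easy to express in code. In the toy Python sketch below (an illustration, not the author’s model), the “algorithm” computes an answer and goes silent when the problem is underdetermined, while the “reflex” is a memory lookup that always answers:

    # Toy contrast between the two response schemes described above.
    def algorithm(total, knowns, num_unknowns):
        """Solve total = sum(knowns) + unknown; silent if underdetermined."""
        if num_unknowns != 1:
            return None  # several answers possible: the algorithm says nothing
        return total - sum(knowns)

    reflex_memory = {"1 + X + 3 = 6": "X = 2"}  # remembered past successes

    def reflex(task):
        """Never silent: replays a remembered answer, or guesses."""
        return reflex_memory.get(task, "X = 1")  # the guess may be wrong

    print(algorithm(6, [1, 3], 1))   # 2: computed step by step
    print(algorithm(6, [1], 2))      # None: 1 + X + Y = 6 is underdetermined
    print(reflex("1 + X + 3 = 6"))   # "X = 2": remembered
    print(reflex("1 + X + Y = 6"))   # "X = 1": answers anyway, likely wrong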

Why is it like this?

The answer lies in the energy cost of the nerve impulse. Signal movement in the human nervous system is a very energy-intensive process: the neuron must first build up a membrane potential (up to 90 mV) across its surface and then sharply collapse it, generating a wave of depolarization. During a nerve impulse, ions rush across the membrane, after which the nerve cell must pump the sodium and potassium ions back to their original positions. For this, special molecular pumps (sodium-potassium adenosine triphosphatases) must work.

As a result, nervous tissue is the most energy-consuming structure in our body. The human brain weighs on average 1.4 kilograms, about 2% of body weight, yet consumes about 20% of all the energy available to the body. In some children aged 4-6, the brain’s energy consumption reaches 60% of the energy available to the body!
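
Spelling out that disproportion with a quick calculation: the sketch below assumes a 70-kilogram adult with a roughly 100-watt resting metabolic rate (both assumptions on my part), while the 2% and 20% shares come from the article:

    # Brain's share of mass vs. share of energy. Body mass and resting
    # power are assumed; the 2% and 20% figures are from the article.
    body_mass, brain_mass = 70.0, 1.4  # kg
    resting_power = 100.0              # W for the whole body at rest

    mass_share = brain_mass / body_mass        # ~0.02, i.e. 2%
    energy_share = 0.20                        # 20% of available energy
    brain_power = energy_share * resting_power # ~20 W

    print(f"mass share: {mass_share:.0%}, brain power: ~{brain_power:.0f} W")
    print(f"per kilogram, the brain uses {energy_share / mass_share:.0f}x "
          f"the body-average energy")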

All this forces nature to save the resources of the nervous system as much as possible.

To solve a single simple functional task, a nervous system needs about 100 compactly located neurons. Sea anemones (a class of coral polyps) have such a simple nervous system of roughly 100 neurons, which lets them reproduce the original orientation of the body after being moved from one place to another.

More difficult tasks, more neurons


Additional tasks and functions require an increase in the power of the nervous system, which inevitably leads to an increase in the grouping of involved neurons. As a result, hundreds and thousands of voracious nerve cells are needed.

But nature knows how to find solutions even where it seems nothing more can be invented. If the work of the nervous system is so expensive, then the correct answer simply does not have to be bought at such a high price.

It is just cheaper to be wrong.

On the other hand, a mistake costs nature nothing. If an organism errs often, it simply dies, and one that gives correct answers takes its place, even if those answers are the result of a fluke. Figuratively speaking, everything is simple in nature: only those who have given the correct answer live.

This suggests that the work of the nervous system is only superficially similar to the algorithm. In fact, there is no computation at the heart of its work, but a reflex or simple repetition of stereotyped decisions based on memory.

The nature and nervous system of any living organism on our planet simply juggle pre-written cheat sheets in the form of various memory mechanisms, yet outwardly this looks like computational activity.

In other words, trying to beat the reflex with a computational algorithm is like trying to play fair against a card sharper.

This tactic, combined with synaptic plasticity, gives the biological nervous system tremendous efficiency.

In living nature, the brain is an extremely expensive commodity. Therefore, its operation is based on a simple but cheap reflex, and not an accurate but expensive algorithm. With this method, a small number of neurons solve very complex problems associated, for example, with orientation.

The secret is that the biological nervous system does not actually calculate anything; it just remembers the correct answer. Over billions of years of evolution and the span of an individual life, a universal set of previously successful solutions has been built up. And where a solution is missing, it is not scary to be wrong. This allows even small and primitive nervous systems to simultaneously respond to stimuli and maintain automatic functions such as muscle tone, breathing, digestion, and blood circulation.

Algorithms lose before the competition starts


All this suggests that in trying to create AI based on existing computational algorithms, we will fundamentally lose to nature, even in simple non-intellectual activities such as movement. Our electronic devices will be accurate, but very energy-intensive and, as a result, thoroughly inefficient.

We can already see this in self-driving cars. One of the unexpected problems faced by developers of autonomous control systems is related to energy consumption. Experimental self-driving cars need special high-performance electric generators to power electronic control systems.

Nature, meanwhile, contains amazingly simple nervous systems that cope perfectly with the task of dynamic maneuvering. In nurse sharks, for example (which weigh up to 110 kilograms and can attack humans), the brain weighs only 8 grams, and the entire nervous system, together with all the fibers of the peripheral section, weighs a little more than 250 grams.

The main conclusion

The first thing we need to create real artificial intelligence is electronic systems that work on the principles of a biological reflex arc, i.e., biological algorithms with zero discreteness.

It is interesting that the structural block diagrams of biological algorithms existed at the end of the last century, but due to the obscure zero discreteness, they have remained exotic. The only exception was evolutionary algorithms, which formed the basis for evolutionary modeling in the field of computational intelligence.

Biology teaches us that in real life it is not the one who makes mistakes that loses, but the one who does not save resources.

There is no need to be afraid of mistakes. In fact, you need to be afraid of accurate answers paid for by high energy consumption.

But this is only part of the problem, the solution of which will make it possible to create relatively simple artificial systems capable of controlling movement and fine motor skills.

To develop real artificial intelligence applicable in real life, we will have to figure out how the second, the intellectual scheme of the human brain, works.

Dr. Oleksandr Kostikov is a medical doctor by education based in Canada. He is working on a new theoretical concept about the nature of intelligence that also aims to create a completely new and unusual type of artificial intelligence.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Equipping AI with emotional intelligence can improve outcomes

There is a significant gap between an organization’s ambitions for using artificial intelligence (AI) and the reality of how those projects turn out, Intel chief data scientist Dr. Melvin Greer said in a conversation with VentureBeat founder and CEO Matt Marshall at last week’s Transform 2021 virtual conference.

One of the key gap areas is emotional intelligence and mindfulness. The pandemic highlighted this: with people juggling home and work responsibilities, their ability to stay focused and mindful could be compromised, Greer said. That becomes a problem when AI is used in a cyberattack, such as when someone turns a chatbot or another adversarial machine learning technique against us.

“Our ability to get to the heart of what we’re trying to achieve can be compromised when we are not in an emotional state and mindful and present,” Greer said.

Align AI with cloud projects

In a recent Harvard Business Review survey of 3,000 executives in 14 industry sectors, just 20% of respondents said they have actually implemented AI as part of their core business.

To bridge the gap between ambition and reality in AI, Greer said, it is “absolutely critical” that organizations align AI with their cloud computing and cybersecurity initiatives. When organizations align those ongoing digital transformation efforts with their AI work, the combination becomes a force multiplier. These initiatives don’t require the same skills, move at the same pace, or achieve the same goals, but they do fit together: cloud computing, as the place where much of the data lives, can be a catalyst for AI, while cybersecurity matters because the data, data models, and algorithms all need to be protected.

“What we are seeing is that there is an inflection point, and what it requires us to do is to think more clearly around all the other initiatives that are going on in our digital transformation or artificial intelligence projects,” he added.

Quantum vs. neuromorphic

Enterprise leaders have to stay up to date with trends because the field is evolving rapidly, but some of the emerging trends are still years away from practical use. Quantum computing and neuromorphic computing are two very exciting research areas, Greer said, but neither is at a point of having commercial applications yet. In 2017, Intel formed its neuromorphic research community with about 100 universities and 50 industry partners. Researchers get access to hardware and computing platforms, along with a software development kit specifically designed as a software optimization mechanism, Greer said.

“We will see commercial applications and neuromorphic brain-inspired computing much sooner than we will with quantum,” Greer predicted, but noted that was still five to 10 years out.

In the past few years, Intel has made itself a data-centric organization that treats AI as a core competency. But while many companies have been working on developing AI for different uses, Greer reiterated that a significant gap remains between the ambitions organizations set for AI and the insights their data and programs actually deliver.

Growing AI capabilities

Greer noted that while investments in AI initiatives have tripled since 2016, many of those are driven by the fear of missing out, rather than successes in the development and deployment of AI. The enthusiasm, investment, and activity around AI aside, organizations need a pragmatic approach, Greer said.

One thing to consider is that in some cases, AI is not a suitable option, he said. It is important to be “absolutely crystal clear” about the problem to solve before trying to figure out whether to run deep learning applications.

Understanding the workforce — which means having diverse teams in the development and distribution of AI capabilities — is the most critical, Greer said. The lack of diverse talent “requires us to pretend everybody is representative of the very homogeneous people that make up the talent pool,” he said.

Having a data strategy

Another gap enterprises often overlook is the amount of data they have and what they can do with it. Many enterprises don’t have the access to manage the data they need to become successful. Greer estimated that 85% of a data scientist’s job is making data available, manageable, and governable so it can be used. Data needs to be classified, managed, and labeled at the point it is being created. Considering that data is being created at 3.7 terabytes per person every day, it isn’t easy to go back and clean data later. Before an organization can develop an AI strategy, it has to first create a data strategy.

“We’re still very much in a situation where if we have really bad data, we will simply do stupid things faster with machines, and we will train them to do things which are inherently erroneous or biased,” Greer said.

It is imperative that researchers, scientists, and developers take a human-centric approach to data and AI systems. Intel has published its ethical principles, or human rights policy, around how AI should be used, and is engaged with non-governmental and international organizations on how to use AI for good, Greer said.

“Because no, data is not oil. And data is not fuel. Data is people,” Greer said.


Tableau gets new AI-powered business intelligence features

Tableau today unveiled new AI- and analytics-powered additions to its platform designed to, in the company’s words, “empower more people with the right technology to make smarter and faster decisions.” Among the highlights are an input experience designed to guide users on how to ask questions of data and an updated tool, Explain Data, that shows key drivers behind a specific data point.

“Tableau’s mission has always been to help people to see and understand data. We started out by introducing the self-service revolution in business intelligence, and with each product release we want to make it even easier and more intuitive for people to solve problems with data and for organizations to become data-driven. Tableau has been accelerating its pace of innovation and democratizing analytics,” Francois Ajenstat, chief product officer at Tableau, told VentureBeat via email.

One of the expanded capabilities in Tableau, Ask Data, lets users answer business questions with natural language, autocorrect, and synonym recognition. Ask Data integrates directly with business intelligence dashboards and can be embedded in portals or apps, and as of this week, it allows analysts to curate natural language experiences as a single source of truth: Lenses. Lenses can be set up for specific use cases, letting different teams query the same data source in the context of their own business. For example, while a single column might be known by one team as “sales,” by another as “revenue,” and by a third as “invoices,” Lenses enables each team to get results familiar and relevant to their work.

By contrast, Explain Data, which is now available to all licensed Tableau users, runs statistical models and checks potential explanations behind the value of a specific data point. According to Tableau, Explain Data can reduce the risk of error from “dirty data” or selection bias by searching for explanations in the entire data source, in addition to what’s shown in visualizations.

Salesforce features

Beyond the Ask Data and Explain Data enhancements, Tableau is introducing Einstein Discovery for Reports, a feature powered by Salesforce’s Einstein technology that augments Salesforce customer relationship management (CRM) workflows with big data insights. It delivers these insights within Salesforce reports, automatically analyzing data and providing a link to Einstein Discovery for further data exploration and optional machine learning model deployment.

A complementary new Tableau feature, Ask Data for Salesforce, lets Salesforce customers ask questions of CRM data using natural language, yielding answers in the form of dashboards, insight, and business recommendations. Dashboard recommendations and semantic search are available in the Salesforce Summer 2021 release, with natural language capabilities scheduled to come in a pilot this fall.

Fast-growing mobile traffic, cloud computing, and the rapid development of technologies including AI and the internet of things are contributing to the increasing volume and complexity of datasets. According to Statista, the global big data and business analytics market was valued at $168.8 billion in 2018 and is forecast to grow to $274.3 billion by 2022.

“The technology helps address the growing disconnect between business leaders expecting a data-driven organization, and employees who either aren’t comfortable questioning metrics or leveraging data analysis to drive actions. This helps organizations create strong data cultures where more people have analytics tools and insights they wouldn’t find on their own,” Ajenstat said. “This benefits us in so many ways, including increased collaboration, data exploration and innovation, and measurable value, across every industry and around the world. With these innovations, we believe that customers will be able to empower more people to make better decisions, faster.”


A beginner’s guide to global artificial intelligence policy

Welcome to Neural’s beginner’s guide to AI. This long-running series should provide you with a very basic understanding of what AI is, what it can do, and how it works.

In addition to the article you’re currently reading, the guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, artificial general intelligence, the difference between video game AI and real AI, the difference between human and machine intelligence, and ethics.

In this edition of the guide, we’ll take a glance at global AI policy.

The US, China, Russia, and Europe each approach artificial intelligence development and regulation differently. In the coming years it will be important for everyone to understand what those differences can mean for our safety and privacy.

Yesterday

Artificial intelligence has traditionally been lumped in with other technologies when it comes to policy and regulation.

That worked well in the days when algorithm-based tech was mostly used for data processing and crunching numbers. But the deep learning explosion that began around 2014 changed everything.

In the years since, we’ve seen the inception and mass adoption of privacy-smashing technologies such as virtual assistants, facial recognition, and online trackers.

Just a decade ago our biggest privacy concerns, as citizens, involved worrying about the government tracking us through our cell phone signals or snooping on our email.

Today, we know that AI trackers are following our every move online. Cameras record everything we do in public, even in our own neighborhoods, and there were at least 40 million smart speakers sold in Q4 of 2020 alone.

Today

Regulators and government entities around the world are trying to catch up to the technology and implement policies that make sense for their particular brand of governance.

In the US, there’s little in the way of regulation. In fact, the US government is heavily invested in many AI technologies the global community considers problematic. It develops lethal autonomous weapons (LAWS), its policies allow law enforcement officers to use facial recognition and internet crawlers without oversight, and there are no rules or laws prohibiting “snake oil” predictive AI services.

In Russia, the official policy is one of democratizing AI research by pooling data. A preview of the nation’s first AI policy draft indicates Russia plans to develop tools that allow its citizens to control and anonymize their own data.

However, the Russian government has also been connected to adversarial AI operations targeting governments and civilians around the globe. It’s difficult to discern what rules Russia’s private sector will face when it comes to privacy and AI.

And, to the best of our knowledge, there’s no declassified data on Russia’s military policies when it comes to the use of AI. The best we can do is speculate based on past reports and statements made by the country’s current leader, Vladimir Putin.

Putin, speaking to Russian students in 2017, said “whoever becomes the leader in this sphere will become the ruler of the world.”

China, on the other hand, has been relatively transparent about its AI programs. In 2017, China released the world’s first robust AI policy plan, one that incorporates modern deep learning technologies and anticipates future machine learning advances.

The PRC intends to be the global leader in AI technology by 2030. Its program for achieving this goal includes massive investments from the private sector, academia, and the government.

US military leaders believe China’s military policies concerning AI are aimed at the development of LAWS that don’t require a human in the loop.

Europe’s vision for AI policy is a bit different. Where the US, China, and Russia appear focused on the military and global competitive-financial aspects of AI, the EU is defining and crafting policies that put privacy and citizen safety at the forefront.

In this respect, the EU currently seeks to limit facial recognition and other data-gathering technologies and to ensure citizens are explicitly informed when a product or service records their information.

The future

Predicting the future of AI policy is a tricky matter. Not only do we have to take into account how each nation currently approaches development and regulation, but we have to try to imagine how AI technology itself will advance in each country.

Let’s start with the EU:

  1. Some experts feel the human-centric approach to AI policy that Europe is taking is the example the rest of the world should follow. When it comes to AI tech, privacy is analogous to safety.
  2. But other experts fear the EU is leaving itself wide open to exploitation by adversaries with no regard for obeying its regulations.

In Russia, of course, things are different:

  1. Russia’s focus on becoming a world leader in AI doesn’t go through big tech or academia, but through the advancement of military technologies – arguably, the only relevant domain in which it’s globally competitive.
  2. Iron-fisted rule stifles private-sector development, so it would make sense for Russia to keep extremely lax privacy laws in place concerning how the private sector handles the general public. And there’s no reason to believe the Russian government will enact any official policy protecting citizen privacy.

Moving to China, the future’s a bit easier to predict:

  1. China’s all-in on surveillance. Every aspect of citizens’ daily lives is affected by intrusive AI systems, including a social credit scoring system, ubiquitous facial and emotional recognition, and comprehensive digital monitoring.
  2. There’s little reason to believe China will change its privacy laws, stop engaging in government-sponsored AI IP theft, or cease its reported production of LAWS technology.

And that just brings us to the US:

  1. Due to a lack of clear policy, the US exists somewhere between China and Russia when it comes to unilateral AI regulation. Unless the long-threatened big tech breakup happens, we can assume Facebook, Google, Amazon, and Microsoft will continue to dictate US policy with their wallets.
  2. AI regulation is a completely partisan issue being handled by a divided US Congress. Until that partisanship eases, we can expect US AI policy beyond the private sector to begin and end with lobbyists and the defense budget.

At the end of the day, it’s impossible to make strong predictions because politicians around the globe are still generally ignorant when it comes to the reality of modern AI and the most likely scenarios for the future.

Technology policy is often a reactive discipline: countries tend to regulate things only after they’ve proven problematic. And we don’t know what major events or breakthroughs could prompt radical policy change for any given nation.

In 2021, the field of artificial intelligence is at an inflection point. We’re between eurekas, waiting on autonomy to come of age, and hoping that our world leaders can come to a safe accord concerning LAWS and international privacy regulations.

Repost: Original Source and Author Link

Categories
AI

Evolution, rewards, and artificial intelligence

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go so far as to suggest that with a well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions ranging from full support of the idea to outright rejection. Both sides make valid claims, but the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I’ll try to explain in simple terms where the line between theory and practice lies.

Natural selection

In their paper, the DeepMind scientists present the following hypothesis: “Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.”

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I’m not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that can’t are eliminated.

According to Dawkins, “In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple — that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature.”

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike in the digital world, copying in organic life is not exact. Offspring therefore often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the basis for developing new organs (e.g., lungs, kidneys, eyes) or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism’s survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn’t, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.
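
To see how little machinery this loop requires, here is a toy simulation, entirely my own illustration rather than anything from Dawkins or the DeepMind paper, in which a population of bit strings evolves using nothing but imperfect copying and nonrandom survival. All names and numbers are invented for the example.

    import random

    # Toy model of mutation and natural selection (invented for illustration).
    # A genome is a list of bits; fitness counts how many genes are "adapted".
    GENOME_LENGTH = 20
    POP_SIZE = 50
    MUTATION_RATE = 0.02

    def fitness(genome):
        # Survival odds: how well the genome fits the (toy) environment.
        return sum(genome)

    def mutate(genome):
        # Imperfect copying: each gene has a small chance of flipping.
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POP_SIZE)]

    for generation in range(200):
        # Nonrandom death: the fitter half survives and reproduces with mutation.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

    print(fitness(population[0]))  # climbs toward 20 over the generations

No individual in this simulation knows what the environment rewards; selection and mutation do all the work.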

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind’s scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

Reinforcement learning and artificial general intelligence

In their paper, DeepMind’s scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
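
As a concrete sketch of that loop, here is tabular Q-learning, one of the simplest reinforcement learning algorithms, on an invented five-state corridor where only the rightmost state pays a reward. This is my own toy example, not DeepMind’s code; every name and number is made up for illustration.

    import random

    # Tabular Q-learning on a toy 5-state corridor (invented example).
    # The agent starts on the left; only reaching the rightmost state pays off.
    N_STATES = 5
    ACTIONS = (-1, +1)                 # step left or right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(500):
        state = 0
        while state != N_STATES - 1:
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)  # explore
            else:
                # Exploit learned values, breaking ties randomly.
                action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Nudge the estimate toward reward plus discounted future value.
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # After training, the greedy policy points right in every non-terminal state.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

Across episodes, the reward at the corridor’s end propagates backward through the table until the greedy policy reliably walks right. That, in miniature, is behavior adjusted to maximize cumulative reward.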

According to the DeepMind scientists, “A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour.”

In an online debate in December, computer scientist Richard Sutton, one of the paper’s co-authors, said, “Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal.”

DeepMind has plenty of experience to back up this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress on some of the most complex problems in science.

The scientists further wrote in their paper, “According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment [emphasis mine].”

This is where the hypothesis parts ways with practice. The keyword here is “complex.” The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And those environments still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, the researchers had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in a wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can’t offer “theoretical guarantee on the sample efficiency of reinforcement learning agents.”)
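
To make that reward-design pitfall concrete, here is a hedged sketch, with invented names and numbers, of a sparse reward next to a shaped one. The shaped version gives the agent more signal to learn from, but a careless bonus can create exactly the kind of wrong local optimum mentioned above.

    GOAL = 10  # invented task: reach position 10 on a number line

    def sparse_reward(position):
        # Faithful to the task, but offers no signal until the agent succeeds.
        return 1.0 if position == GOAL else 0.0

    def shaped_reward(position):
        # Denser signal: a bonus for merely being near the goal. Easier to
        # learn from, but exploitable: an agent can loiter next to the goal
        # and farm the bonus indefinitely instead of ever finishing.
        proximity_bonus = 0.1 / (1 + abs(position - GOAL))
        return sparse_reward(position) + proximity_bonus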

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First, you would need a simulation of the world. But at what level of detail? My guess is that anything short of quantum scale would be inaccurate, and we don’t have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let’s say we did have the compute power to create such a simulation. We could start around 4 billion years ago, when the first lifeforms emerged. But we would need an exact representation of the state of Earth at that moment, and we still don’t have a definite theory on that.

An alternative would be to take a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the training time, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you’ll need to run the simulation. The further you move forward, the more complex your initial state will be. And since evolution has created all sorts of intelligent and non-intelligent lifeforms, reproducing the exact steps that led to human intelligence, without any guidance and through reward alone, is a hard bet.

Above: A robot working in a kitchen. Image credit: Depositphotos

Many will say that you don’t need an exact simulation of the world, and that you only need to approximate the problem space in which your reinforcement learning agent will operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: “In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal.”

This statement is true, but it downplays the complexities of the environment. Kitchens were created by humans. The shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that wants to work in such an environment would need to develop sensorimotor skills similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then, there would be incongruencies between the robot and the humans who will be using the kitchens. Many scenarios that would be easy for a human to handle (walking over an overturned chair, for example) would become prohibitive for the robot.

Also, other skills, such as language, would require even more shared infrastructure between the robot and the humans in its environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutor’s mental state. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of “cleanliness” as a reward is complicated because it is so tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for “cleanliness” would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would go a long way toward making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.
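
A minimal sketch of what such a shortcut-laden reward might look like follows; every field name and weight below is invented for illustration, and none of it comes from the paper.

    def kitchen_reward(state, human_rating):
        # Headline objective: the fraction of surfaces that are clean.
        cleanliness = state["clean_surfaces"] / state["total_surfaces"]
        # Hand-crafted subgoals: a hierarchy the designer, not the agent, invented.
        subgoals = 0.2 * state["utensils_stored"] + 0.2 * state["floor_clear"]
        # Prior knowledge: encode what "clean" must NOT mean (no discarding food).
        safety_penalty = -1.0 * state["food_discarded"]
        # Human feedback steering the agent directly.
        feedback = 0.5 * human_rating
        return cleanliness + subgoals + safety_penalty + feedback

    example_state = {"clean_surfaces": 8, "total_surfaces": 10,
                     "utensils_stored": 1, "floor_clear": 1,
                     "food_discarded": 0}
    print(kitchen_reward(example_state, human_rating=0.8))  # 1.6

Every term after the first smuggles human knowledge into the objective, which is precisely the departure from the reward-only recipe described above.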

In theory, reward only is enough for any kind of intelligence. But in practice, there’s a tradeoff between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Repost: Original Source and Author Link