
Artificial intelligence might eventually write this article

I hope my headline is an overstatement, purely for job-security reasons, but in this week’s Vergecast artificial intelligence episode, we explore the world of large language models and how they might be used to produce AI-generated text in the future. Maybe that text will give writers ideas for the next major franchise series, or fill out full blog posts, or, at the very least, populate websites with copy that’s too tedious for humans to write.

Among the people we speak to is Nick Walton, the cofounder and CEO of Latitude, maker of AI Dungeon, a game that builds its plot around whatever you type into it. (That’s how Walton ended up in a band of traveling goblins; you’ll just have to listen to understand how that makes sense!) We also chat with Samanyou Garg, founder of Writesonic, a company that offers various AI-powered writing tools. It can even have the AI write an entire blog post. I’m shaking! But really.

Anyway, toward the end of the episode, I chat with James Vincent, The Verge’s AI and machine learning senior reporter, who calms me down and helps me understand what the future of text-generation AI might be. He’s great. Check out the episode above, and make sure you subscribe to the Vergecast feed for one more episode of this AI miniseries, as well as the regular show. See you there!


DeepMind creates ‘transformative’ map of human proteins drawn by artificial intelligence

AI research lab DeepMind has created the most comprehensive map of human proteins to date using artificial intelligence. The company, a subsidiary of Google-parent Alphabet, is releasing the data for free, with some scientists comparing the potential impact of the work to that of the Human Genome Project, an international effort to map every human gene.

Proteins are long, complex molecules that perform numerous tasks in the body, from building tissue to fighting disease. Their purpose is dictated by their structure, which folds like origami into complex and irregular shapes. Understanding how a protein folds helps explain its function, which in turn helps scientists with a range of tasks — from pursuing fundamental research on how the body works, to designing new medicines and treatments.

Previously, determining the structure of a protein relied on expensive and time-consuming experiments. But last year DeepMind showed it can produce accurate predictions of a protein’s structure using AI software called AlphaFold. Now, the company is releasing hundreds of thousands of predictions made by the program to the public.

“I see this as the culmination of the entire 10-year-plus lifetime of DeepMind,” company CEO and co-founder Demis Hassabis told The Verge. “From the beginning, this is what we set out to do: to make breakthroughs in AI, test that on games like Go and Atari, [and] apply that to real-world problems, to see if we can accelerate scientific breakthroughs and use those to benefit humanity.”

Two examples of protein structures predicted by AlphaFold (in blue), with 90.7 and 93.3 GDT accuracy, compared with experimental results (in green).
Image: DeepMind

There are currently around 180,000 protein structures available in the public domain, each produced by experimental methods and accessible through the Protein Data Bank. DeepMind is releasing predictions for the structure of some 350,000 proteins across 20 different organisms, including animals like mice and fruit flies, and bacteria like E. coli. (There is some overlap between DeepMind’s data and pre-existing protein structures, but exactly how much is difficult to quantify because of the nature of the models.) Most significantly, the release includes predictions for 98 percent of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. It isn’t the first public dataset of human proteins, but it is the most comprehensive and accurate.

If they want, scientists can download the entire human proteome for themselves, says AlphaFold’s technical lead John Jumper. “There is a HumanProteome.zip effectively, I think it’s about 50 gigabytes in size,” Jumper tells The Verge. “You can put it on a flash drive if you want, though it wouldn’t do you much good without a computer for analysis!”

After launching this first tranche of data, DeepMind plans to keep adding to the store of proteins, which will be maintained by Europe’s flagship life sciences lab, the European Molecular Biology Laboratory (EMBL). By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works,” according to Edith Heard, director general of the EMBL.

The data will be free in perpetuity for both scientific and commercial researchers, says Hassabis. “Anyone can use it for anything,” the DeepMind CEO noted at a press briefing. “They just need to credit the people involved in the citation.”

Understanding a protein’s structure is useful for scientists across a range of fields. The information can help design new medicines, synthesize novel enzymes that break down waste materials, and create crops that are resistant to viruses or extreme weather. Already, DeepMind’s protein predictions are being used for medical research, including studying the workings of SARS-CoV-2, the virus that causes COVID-19.

New data will speed these efforts, but scientists note it will still take a lot of time to turn this information into real-world results. “I don’t think it’s going to be something that changes the way patients are treated within the year, but it will definitely have a huge impact for the scientific community,” Marcelo C. Sousa, a professor at the University of Colorado’s biochemistry department, told The Verge.

Scientists will have to get used to having such information at their fingertips, says DeepMind senior research scientist Kathryn Tunyasuvunakool. “As a biologist, I can confirm we have no playbook for looking at even 20,000 structures, so this [amount of data] is hugely unexpected,” Tunyasuvunakool told The Verge. “To be analyzing hundreds of thousands of structures — it’s crazy.”

Notably, though, DeepMind’s software produces predictions of protein structures rather than experimentally determined models, which means that in some cases further work will be needed to verify the structure. DeepMind says it spent a lot of time building accuracy metrics into its AlphaFold software, which ranks how confident it is for each prediction.

Example protein structures predicted by AlphaFold.
Image: DeepMind

Predictions of protein structures are still hugely useful, though. Determining a protein’s structure through experimental methods is expensive, time-consuming, and relies on a lot of trial and error. That means even a low-confidence prediction can save scientists years of work by pointing them in the right direction for research.
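For readers who want a sense of how that triage might look in practice, here is a minimal, hypothetical sketch. AlphaFold reports a per-residue confidence score called pLDDT on a 0-100 scale; the input format, the threshold values, and the summarize_confidence function below are illustrative assumptions, not anything DeepMind ships.

```python
# Minimal sketch (not DeepMind's code): triaging a predicted structure by
# per-residue confidence. AlphaFold reports a 0-100 pLDDT score per residue;
# the list-of-floats input and the thresholds here are illustrative only.

def summarize_confidence(plddt_scores, high=90.0, low=50.0):
    """Bucket per-residue pLDDT scores into rough confidence bands."""
    n = len(plddt_scores)
    confident = sum(1 for s in plddt_scores if s >= high)
    unreliable = sum(1 for s in plddt_scores if s < low)
    return {
        "residues": n,
        "high_confidence_fraction": confident / n,
        "low_confidence_fraction": unreliable / n,
    }

# Hypothetical scores for a short protein fragment.
example_scores = [96.2, 94.8, 91.0, 88.5, 72.3, 60.1, 48.7, 35.9]
print(summarize_confidence(example_scores))
```

A researcher might then focus experimental follow-up on the regions flagged as low confidence while trusting the high-confidence regions enough to plan around them.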

Helen Walden, a professor of structural biology at the University of Glasgow, tells The Verge that DeepMind’s data will “significantly ease” research bottlenecks, but that “the laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain.

Sousa, who has previously used data from AlphaFold in his work, says for scientists the impact will be felt immediately. “In our collaboration we had with DeepMind, we had a dataset with a protein sample we’d had for 10 years, and we’d never got to the point of developing a model that fit,” he says. “DeepMind agreed to provide us with a structure, and they were able to solve the problem in 15 minutes after we’d been sitting on it for 10 years.”

Why protein folding is so difficult

Proteins are constructed from chains of amino acids, which come in 20 different varieties in the human body. Because an individual protein can be made up of hundreds of amino acids, each of which can fold and twist in different directions, a molecule’s final structure has an astronomically large number of possible configurations. One estimate is that a typical protein can be folded in 10^300 ways, a 1 followed by 300 zeroes.
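As a rough illustration of where a number that size comes from, here is a back-of-the-envelope count; the per-residue figure is an assumption chosen purely to reproduce the article’s order of magnitude, not a measured value.

```python
# Back-of-the-envelope sketch of the combinatorial explosion behind the
# "10^300 ways to fold" estimate. The per-residue count is assumed for
# illustration: ~300 residues with ~10 plausible conformations each.

residues = 300            # length of a typical protein (illustrative)
conformations_each = 10   # plausible local conformations per residue (assumed)

total_configurations = conformations_each ** residues
print(f"~{total_configurations:.0e} possible configurations")  # ~1e+300
```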

Because proteins are too small to examine with microscopes, scientists have had to indirectly determine their structure using expensive and complicated methods like nuclear magnetic resonance and X-ray crystallography. The idea of determining the structure of a protein simply by reading a list of its constituent amino acids has been long theorized but difficult to achieve, leading many to describe it as a “grand challenge” of biology.

In recent years, though, computational methods — particularly those using artificial intelligence — have suggested such analysis is possible. With these techniques, AI systems are trained on datasets of known protein structures and use this information to create their own predictions.

DeepMind’s AlphaFold software has significantly increased the accuracy of computational protein-folding, as shown by its performance in the CASP competition.
Image: DeepMind

Many groups have been working on this problem for years, but DeepMind’s deep bench of AI talent and access to computing resources allowed it to accelerate progress dramatically. Last year, the company competed in an international protein-folding competition known as CASP and blew away the competition. Its results were so accurate that computational biologist John Moult, one of CASP’s co-founders, said that “in some sense the problem [of protein folding] is solved.”

DeepMind’s AlphaFold program has been upgraded since last year’s CASP competition and is now 16 times faster. “We can fold an average protein in a matter of minutes, most cases seconds,” says Hassabis. The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.

Liam McGuffin, a professor at Reading University who developed some of the UK’s leading protein-folding software, praised the technical brilliance of AlphaFold, but also noted that the program’s success relied on decades of prior research and public data. “DeepMind has vast resources to keep this database up to date and they are better placed to do this than any single academic group,” McGuffin told The Verge. “I think academics would have got there in the end, but it would have been slower because we’re not as well resourced.”

Why does DeepMind care?

Many scientists The Verge spoke to noted the generosity of DeepMind in releasing this data for free. After all, the lab is owned by Google-parent Alphabet, which has been pouring huge amounts of resources into commercial healthcare projects. DeepMind itself loses a lot of money each year, and there have been numerous reports of tensions between the company and its parent firm over issues like research autonomy and commercial viability.

Hassabis, though, tells The Verge that the company always planned to make this information freely available, and that doing so is a fulfillment of DeepMind’s founding ethos. He stresses that DeepMind’s work is used in lots of places at Google — “almost anything you use, there’s some of our technology that’s part of that under the hood” — but that the company’s primary goal has always been fundamental research.

“The agreement when we got acquired is that we are here primarily to advance the state of AGI and AI technologies and then use that to accelerate scientific breakthroughs,” says Hassabis. “[Alphabet] has plenty of divisions focused on making money,” he adds, noting that DeepMind’s focus on research “brings all sorts of benefits, in terms of prestige and goodwill for the scientific community. There’s many ways value can be attained.”

Hassabis predicts that AlphaFold is a sign of things to come — a project that shows the huge potential of artificial intelligence to handle messy problems like human biology.

“I think we’re at a really exciting moment,” he says. “In the next decade, we, and others in the AI field, are hoping to produce amazing breakthroughs that will genuinely accelerate solutions to the really big problems we have here on Earth.”


Artificial intelligence vs. neurophysiology: Why the difference matters

On the temple of Apollo in Delphi (Greece), it was written: “Cognosce te ipsum” (Know thyself). These words are worth remembering for everyone who wants to create artificial intelligence.

I continue my series of articles about the nature of human intelligence and the future of artificial intelligence systems. This article is a continuation of the article titled “Symbiosis Instead of Evolution — A New Idea about the Nature of Human Intelligence.”

In the previous article, after analyzing the minimum response time to a simple incoming signal, we found that the human brain may well turn out to be a binary system, consisting of two functional schemes for responding to excitation: a reflex scheme and an intellectual one.

In this article, we will talk about the first, the reflex part. Together, we will try to find out how similar a reflex response scheme really is to an algorithm and how this might affect the future of artificial intelligence.

Similar does not mean exactly the same

What is the difference?

In popular science films, a nerve impulse is presented as a kind of signal that travels through nerve cells like wires. We perceive this as a biological analogy for an electrical impulse.

In fact, this is not the case at all. A nerve impulse is a sharp movement of sodium and potassium ions across the outer membrane of a neuron through voltage-gated ion channels. The process can be compared to the successive toppling of a row of cards or dominoes. After each nerve impulse, the neuron must move the ions back to their original positions; in our analogy, that is like standing the whole row of cards or dominoes back up again.

A nerve impulse is hard work. In its deep physical essence, a nerve impulse is closer to mechanical work than to the electrical signal many imagine it to be.

This severely limits the rate of signal transmission in biological tissue. Along non-myelinated, small-diameter fibers, the signal travels at only about one meter per second, the pace of a slow walk. For larger myelinated fibers, the speed increases to 45-60 kilometers per hour. Only in some large fibers with a thick myelin sheath and nodes of Ranvier does the speed reach 200-300 kilometers per hour.

On average, nerve impulses in our nervous system move about 3 million times slower than electrical signals in computer systems. Besides being slow, the nerve impulse also makes constant stops at synapses, the junctions between neurons. To continue on its path, the signal must cross the synaptic cleft. You could say the nerve impulse is a rather slow journey with transfers.

All this suggests that the nerve impulse is itself the result of serious effort, an effort that had better accomplish something by the time the signal arrives at the end of its path.

The computer algorithm is of a completely different nature


The algorithms that run on computers are driven by sequences of voltage levels: machine code made up of symbolic ones and zeros.

Beyond speed and physical nature, there is a long list of important differences between a reflex and an algorithm. A nerve impulse, or reflex, is an inevitable response, while an algorithm is a set of rules, a list of instructions designed to solve a specific problem.

In other words, a reflex can be wrong but cannot stay silent, whereas an algorithm, as a rule, does not make mistakes but may give no answer at all if the instructions it contains cannot be executed.

The reflex knows the answer even before the task, and the algorithm learns the answer only after completing all the necessary steps.

A simple example

Imagine a simple problem: find the value of X in the formula 1 + X + 3 = 6. The algorithm proceeds step by step: first 6 - 1 = 5, then 5 - 3 = 2, so X = 2. The reflex will immediately answer X = 2, but only if it has already encountered this situation and has found out empirically that answers like 1 and 3 are incorrect.

But what if the situation changes and the question becomes harder: 1 + X + Y = 6? Faced with this, the algorithm will remain silent and give no answer. There is simply not enough initial data; the problem has several correct answers, and the algorithm cannot work out which one is intended.

For the reflex, nothing has changed. It will simply answer X = 2 and Y = 3, provided it has met such a task before. If not, it will still answer, but most likely with an error.
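To make the contrast concrete, here is a minimal, hypothetical sketch, not the author’s model: an “algorithm” that computes the answer and stays silent when the problem is under-determined, versus a “reflex” that always answers with whatever its memory associates with the stimulus.

```python
# Illustrative sketch of the contrast described above (all names are invented):
# the "algorithm" computes and refuses under-determined problems, while the
# "reflex" simply replays whatever answer memory associates with the stimulus.

def algorithm_solve(total, known_terms, unknowns):
    """Solve total = sum(known_terms) + unknown for exactly one unknown."""
    if len(unknowns) != 1:
        return None  # not enough information: the algorithm stays silent
    return total - sum(known_terms)

class Reflex:
    def __init__(self):
        self.memory = {}  # stimulus -> remembered answer

    def respond(self, stimulus, default=0):
        # Always answers, even for a stimulus it has never seen before.
        return self.memory.get(stimulus, default)

    def reinforce(self, stimulus, answer):
        self.memory[stimulus] = answer

# "1 + X + 3 = 6": the algorithm computes X = 2 step by step.
print(algorithm_solve(6, [1, 3], ["X"]))      # 2

# "1 + X + Y = 6": two unknowns, so the algorithm gives no answer.
print(algorithm_solve(6, [1], ["X", "Y"]))    # None

# The reflex answers regardless, correctly only if it has seen the case before.
reflex = Reflex()
reflex.reinforce("1 + X + 3 = 6", 2)
print(reflex.respond("1 + X + 3 = 6"))        # 2 (remembered)
print(reflex.respond("1 + X + Y = 6"))        # 0 (a guess, most likely wrong)
```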

Why is it like this?

The answer lies in the energy cost of the nerve impulse. Moving a signal through the human nervous system is a very energy-intensive process: the neuron must first build up a membrane potential (up to 90 mV) across its surface and then sharply collapse it, generating a wave of depolarization. During a nerve impulse, ions rush across the membrane, after which the nerve cell must pump the sodium and potassium ions back to their original positions. This is the job of special molecular pumps, the sodium-potassium adenosine triphosphatases.

As a result, nervous tissue is the most energy-consuming structure in our body. The human brain weighs on average 1.4 kilograms, about 2% of body weight, yet consumes about 20% of all the energy available to the body. In some children aged 4-6, the brain’s energy consumption reaches 60% of the energy available to the body!

All this forces nature to save the resources of the nervous system as much as possible.

To solve a single simple functional task, the nervous system needs roughly 100 compactly located neurons. Sea anemones (a group of coral polyps) have just such a simple nervous system of about 100 neurons, and with it they can reproduce the original orientation of their body after being moved from one place to another.

More difficult tasks, more neurons


Additional tasks and functions require a more powerful nervous system, which inevitably means recruiting larger groups of neurons. As a result, hundreds and thousands of voracious nerve cells are needed.

But nature knows how to find solutions even when it seems nothing more can be invented. If the work of the nervous system is so expensive, then the correct answer is simply not worth obtaining at such a high price.

It is just cheaper to be wrong.

On the other hand, a mistake costs nature nothing. If an organism is often mistaken, it simply dies, and one that gives correct answers takes its place, even if those answers are the result of a fluke. Figuratively speaking, everything is simple in nature: only those who have given the correct answer live.

This suggests that the work of the nervous system is only superficially similar to an algorithm. At its heart there is no computation, but a reflex: the simple repetition of stereotyped decisions drawn from memory.

Nature and the nervous system of any living organism on our planet simply juggle pre-written cheat sheets, in the form of various memory mechanisms, but outwardly this looks like computational activity.

In other words, trying to beat the reflex with a computational algorithm is like trying to play fair against a card sharp.

This tactic, combined with synaptic plasticity, gives the biological nervous system tremendous efficiency.

In living nature, the brain is an extremely expensive commodity. Its operation is therefore based on a simple but cheap reflex, not an accurate but expensive algorithm. In this way, a small number of neurons can solve very complex problems, such as those involved in orientation. The secret is that the biological nervous system does not actually calculate anything; it just remembers the correct answer. Over billions of years of evolution, and over the course of an individual’s own life, a universal set of previously successful solutions has been built up. And when no remembered solution fits, it is not scary to be wrong. This allows even small and primitive nervous systems to respond to stimuli while simultaneously maintaining automatic functions such as muscle tone, breathing, digestion, and blood circulation.

Algorithms lose before the competition starts


All this suggests that if we try to create AI based on existing computational algorithms, we will fundamentally lose to nature, even in simple, non-intellectual activities such as movement. Our electronic devices will be accurate, but very energy-intensive and, as a result, thoroughly inefficient.

We can already see this in self-driving cars. One of the unexpected problems faced by developers of autonomous control systems is related to energy consumption. Experimental self-driving cars need special high-performance electric generators to power electronic control systems.

Nature, meanwhile, contains amazingly simple nervous systems that cope perfectly well with dynamic maneuvering. A nurse shark, for example, can weigh up to 110 kilograms and is capable of attacking a human, yet its brain weighs only 8 grams, and its entire nervous system, including all the fibers of the peripheral section, weighs little more than 250 grams.

The main conclusion

The first thing we need to create real artificial intelligence is electronic systems that work on the principles of a biological reflex arc, i.e., biological algorithms with zero discreteness.

Interestingly, structural block diagrams of biological algorithms already existed at the end of the last century, but because the notion of zero discreteness remained obscure, they have stayed an exotic curiosity. The one exception was evolutionary algorithms, which became the basis for evolutionary modeling in the field of computational intelligence.

Biology teaches us that in real life it is not the one who makes mistakes that loses, but the one who does not save resources.

There is no need to be afraid of mistakes. In fact, you need to be afraid of accurate answers paid for by high energy consumption.

But this is only part of the problem; solving it would make it possible to create relatively simple artificial systems capable of controlling movement and fine motor skills.

To develop real artificial intelligence applicable in real life, we will have to figure out how the second, the intellectual scheme of the human brain, works.

Dr. Oleksandr Kostikov is a medical doctor by education based in Canada. He is working on a new theoretical concept about the nature of intelligence that also aims to create a completely new and unusual type of artificial intelligence.

This story originally appeared on Bdtechtalks.com. Copyright 2021


A beginner’s guide to global artificial intelligence policy

Welcome to Neural’s beginner’s guide to AI. This long-running series should provide you with a very basic understanding of what AI is, what it can do, and how it works.

In addition to the article you’re currently reading, the guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, artificial general intelligence, the difference between video game AI and real AI, the difference between human and machine intelligence, and ethics.

In this edition of the guide, we’ll take a glance at global AI policy.

The US, China, Russia, and Europe each approach artificial intelligence development and regulation differently. In the coming years it will be important for everyone to understand what those differences can mean for our safety and privacy.

Yesterday

Artificial intelligence has traditionally been swept in with other technologies when it comes to policy and regulation.

That worked well in the days when algorithm-based tech was mostly used for data processing and crunching numbers. But the deep learning explosion that began around 2014 changed everything.

In the years since, we’ve seen the inception and mass adoption of privacy-smashing technologies such as virtual assistants, facial recognition, and online trackers.

Just a decade ago our biggest privacy concerns, as citizens, involved worrying about the government tracking us through our cell phone signals or snooping on our email.

Today, we know that AI trackers are following our every move online. Cameras record everything we do in public, even in our own neighborhoods, and there were at least 40 million smart speakers sold in Q4 of 2020 alone.

Today

Regulators and government entities around the world are trying to catch up to the technology and implement policies that make sense for their particular brand of governance.

In the US, there’s little in the way of regulation. In fact the US government is highly invested in many AI technologies the global community considers problematic. It develops lethal autonomous weapons (LAWS), its policies allow law enforcement officers to use facial recognition and internet crawlers without oversight, and there are no rules or laws prohibiting “snake oil” predictive AI services.

In Russia, the official policy is one of democratizing AI research by pooling data. A preview of the nation’s first AI policy draft indicates Russia plans to develop tools that allow its citizens to control and anonymize their own data.

However, the Russian government has also been connected to adversarial AI ops targeting governments and civilians around the globe. It’s difficult to discern what rules Russia’s private sector will face when it comes to privacy and AI.

And, to the best of our knowledge, there’s no declassified data on Russia’s military policies when it comes to the use of AI. The best we can do is speculate based on past reports and statements made by the country’s current leader, Vladimir Putin.

Putin, speaking to Russian students in 2017, said “whoever becomes the leader in this sphere will become the ruler of the world.”

China, on the other hand, has been relatively transparent about its AI programs. In 2017, China released the world’s first robust AI policy plan incorporating modern deep learning technologies and anticipating future machine learning tech.

The PRC intends to be the global leader in AI technology by 2030. Its program to achieve this goal includes massive investments from the private sector, academia, and the government.

US military leaders believe China’s military policies concerning AI are aimed at the development of LAWS that don’t require a human in the loop.

Europe’s vision for AI policy is a bit different. Where the US, China, and Russia appear focused on the military and global competitive-financial aspects of AI, the EU is defining and crafting policies that put privacy and citizen safety at the forefront.

In this respect, the EU currently seeks to limit facial recognition and other data-gathering technologies and to ensure citizens are explicitly informed when a product or service records their information.

The future

Predicting the future of AI policy is a tricky matter. Not only do we have to take into account how each nation currently approaches development and regulation, but we have to try to imagine how AI technology itself will advance in each country.

Let’s start with the EU:

  1. Some experts feel the human-centric approach to AI policy that Europe is taking is the example the rest of the world should follow. When it comes to AI tech, privacy is analogous to safety.
  2. But other experts fear the EU is leaving itself wide open to exploitation by adversaries with no regard for obeying its regulations.

In Russia, of course, things are different:

  1. Russia’s focus on becoming a world leader in AI doesn’t go through big tech or academia, but through the advancement of military technologies – arguably, the only relevant domain it’s globally competitive in.
  2. Iron-fisted rule stifles private-sector development, so it would make sense if Russia kept extremely lax privacy laws in place concerning how the private sector handles the general public. And there’s no reason to believe the Russian government will enact any official policy protecting citizen privacy.

Moving to China, the future’s a bit easier to predict:

  1. China’s all-in on surveillance. Every aspect of Chinese life, for citizens, is affected by intrusive AI systems including a social credit scoring system, ubiquitous facial and emotional recognition, and complete digital monitoring.
  2. There’s little reason to believe China will change its privacy laws, stop engaging in government-sponsored AI IP theft, or cease its reported production of LAWS technology.

And that just brings us to the US:

  1. Due to a lack of clear policy, the US exists somewhere between China and Russia when it comes to unilateral AI regulation. Unless the long-threatened big tech breakup happens, we can assume Facebook, Google, Amazon, and Microsoft will continue to dictate US policy with their wallets.
  2. AI regulation is a completely partisan issue being handled by a US congress that’s divided. Until such a time as the partisanship clears up somewhat, we can expect US AI policy beyond the private sector to begin and end with lobbyists and the defense budget.

At the end of the day, it’s impossible to make strong predictions because politicians around the globe are still generally ignorant when it comes to the reality of modern AI and the most-likely scenarios for the future.

Technology policy is often a reactionary discipline: countries tend to regulate things only after they’ve proven problematic. And, we don’t know what major events or breakthroughs could prompt radical policy change for any given nation.

In 2021, the field of artificial intelligence is at an inflection point. We’re between eurekas, waiting on autonomy to come of age, and hoping that our world leaders can come to a safe accord concerning LAWS and international privacy regulations.


New type of artificial skin can form bruises that heal on their own

Researchers have created an artificial skin that can ‘bruise’ upon impact much like actual skin, helping reveal when a robot or prosthetic has potentially been damaged. The fake bruises are intended to function as a type of warning sign that the artificial limb or structure may need to be evaluated to ensure it doesn’t continue to unintentionally strike an object.

Bruising as an alert

Artificial skin is a material that resembles actual skin; it is commonly used for prosthetics and increasingly with robots. The skin, depending on its design, may be equipped with sensors that provide a degree of sensing capabilities, such as the ability to detect when the limb is in contact with a surface.

Going forward, these artificial skin materials may also feature a ‘bruising’ function that results in discoloration where the surface strikes an object. Unlike a person who may, for example, hit their leg against a post, a robot can’t report when one of its limbs has been struck, potentially resulting in damage that could go undetected until it gets worse.

Beyond ‘e-skin’

The bruisable artificial skin was developed by researchers in China and recently detailed by the American Chemical Society. The material is a conductive hydrogel that detects forces using ionic signals, and it exceeds many electronic skins (“e-skins”) on measures such as biocompatibility and stretchiness.

According to the paper detailing the artificial skin, the bruising function is made possible by using a molecule called spiropyran that transitions from a pale yellow to a blue-like color when subjected to mechanical stress. As with actual bruises, this discoloration slowly returns to its original color after several hours.

The skin you know

Testing performed with this ionic hydrogel material (“I-skin”) found that it behaves much like human skin: it can be stretched, for example, without bruising, but will show the discoloration if subjected to potentially damaging force, such as when repeatedly smacked or aggressively pinched.

Though you won’t yet find this material in use on prosthetic devices and robots, the development paves the way for a lifelike artificial skin that may one day behave much like the real thing. It’s unclear whether the researchers plan to integrate sensing capabilities into the material that might enable robots to also detect when they’re touched.


Evolution, rewards, and artificial intelligence

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions ranging from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I’ll try to disambiguate in simple terms where the line between theory and practice stands.

Natural selection

In their paper, the DeepMind scientists present the following hypothesis: “Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.”

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I’m not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand the challenges posed by the environment (weather, scarcity of food, etc.) and by other lifeforms (predators, viruses, etc.) survive, reproduce, and pass on their genes to the next generation. Those that can’t are eliminated.

According to Dawkins, “In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple — that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature.”

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not an exact thing. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism’s survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn’t, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind’s scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

Reinforcement learning and artificial general intelligence


In their paper, DeepMind’s scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.
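As a concrete illustration of that loop, here is a minimal tabular Q-learning sketch on a toy “corridor” environment. The environment, the hyperparameters, and the code itself are illustrative assumptions; they are not drawn from DeepMind’s systems, which operate at a vastly larger scale.

```python
import random

# Minimal tabular Q-learning sketch of the loop described above: act, observe
# a reward, and nudge value estimates so that future actions earn more reward.
# The toy corridor environment and the hyperparameters are illustrative only.

N_STATES = 5          # corridor cells 0..4; reaching cell 4 yields reward 1
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: move toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy steps right (+1) from every cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

The same act-reward-update loop, scaled up with neural networks in place of the table, is what powers the game-playing agents mentioned below.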

According to the DeepMind scientists, “A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour.”

In an online debate in December, computer scientist Richard Sutton, one of the paper’s co-authors, said, “Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal.”

DeepMind has a lot of experience to prove this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress in some of the most complex problems of science.

The scientists further wrote in their paper, “According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment [emphasis mine].”

This is where the hypothesis separates from practice. The keyword here is “complex.” The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, the researchers had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can’t offer “theoretical guarantee on the sample efficiency of reinforcement learning agents.”)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First, you would need a simulation of the world. But at what level of detail would you simulate it? My guess is that anything short of quantum scale would be inaccurate. And we don’t have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let’s say we did have the compute power to create such a simulation. We could start around 4 billion years ago, when the first lifeforms emerged. We would need an exact representation of the state of Earth at the time, the initial state of the environment, and we still don’t have a definite theory on that.

An alternative would be to take a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the training time, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated; they evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically face two key problems: compute power and initial state. The further you go back in time, the more compute power you’ll need to run the simulation; the further forward you move, the more complex your initial state becomes. And since evolution has created all sorts of intelligent and non-intelligent lifeforms, betting that we could reproduce the exact steps that led to human intelligence without any guidance, through reward alone, is a long shot.


Many will say that you don’t need an exact simulation of the world, only an approximation of the problem space in which your reinforcement learning agent will operate.

For example, in their paper, the scientists mention the example of a house-cleaning robot: “In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal.”

This statement is true, but downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that would want to work in such an environment would need to develop sensorimotor skills that are similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then, there would be incongruencies between the robot and the humans who will be using the kitchens. Many scenarios that would be easy to handle for a human (walking over an overturned chair) would become prohibitive for the robot.

Other skills, such as language, would require infrastructure that is even more closely matched between the robot and the humans sharing its environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs; we fill in the gaps with our intuitive and conscious knowledge of our interlocutor’s mental state. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of “cleanliness” as a reward is very complicated, because it is tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?
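A toy sketch makes that pitfall concrete; the item names, the penalty weight, and the two reward functions below are illustrative assumptions rather than anything proposed in the paper.

```python
# Toy sketch of the reward-design pitfall described above (illustrative only):
# a naive "cleanliness" reward is maximized by removing everything, including
# the food people actually want to keep.

kitchen = {"dirty_plate": False, "crumbs": False,
           "fresh_bread": True, "fruit_bowl": True}  # item -> humans want it kept?

def naive_reward(items_remaining):
    # "Cleanliness" defined as simply having fewer things out.
    return -len(items_remaining)

def better_reward(items_remaining, wanted=kitchen):
    # Still rewards tidiness, but penalizes discarding items people want.
    removed_wanted = sum(1 for item, keep in wanted.items()
                         if keep and item not in items_remaining)
    return -len(items_remaining) - 10 * removed_wanted

remove_everything = []                            # empty the kitchen entirely
remove_only_mess = ["fresh_bread", "fruit_bowl"]  # keep the wanted items

print(naive_reward(remove_everything), naive_reward(remove_only_mess))    # 0 -2
print(better_reward(remove_everything), better_reward(remove_only_mess))  # -20 -2
```

Under the naive reward, emptying the kitchen scores best; only after encoding what humans actually want does the sensible behavior win, and encoding that knowledge is exactly the hard part.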

A robot that has been optimized for “cleanliness” would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward only is enough for any kind of intelligence. But in practice, there’s a tradeoff between environment complexity, reward design, and agent design.

In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Artificial intelligence and the McData-fueled future of capitalism

Ba da ba ba bah, McDonald’s is capturing and storing biometric data on its customers without their knowledge or consent.

Per a report from The Register, McDonald’s may be facing a class action lawsuit after an Illinois customer sued the mega-corporation for allegedly violating the state’s Biometric Information Privacy Act (BIPA):

(The plaintiff) sued McDonald’s … on behalf of himself and all other affected residents of Illinois. He claimed the fast-chow biz has broken BIPA by not obtaining written consent from its customers to collect and process their voice data.

Illinois has some of the stiffest biometric privacy laws in the US.

The lawsuit apparently stems from the company’s use of automated drive-thru order takers in the form of chatbots.

Drive-thru customers were subjected to experimental natural language processing (NLP) AI in the state, in at least 10 of the company’s locations. While it’s unclear exactly what AI systems McDonald’s was using during the trial, it stands to reason the company would need to collect and store user data in order to train its AI.

The big picture

It’s hard to spot precedent in the wild, but there’s no denying the world sits on the rocky precipice of embracing autonomy. This could very well be the legal catalyst that kicks off the big business vs. big government debate over how we’re going to transition to the next technological paradigm for capitalism.

From a purely business-oriented POV, McDonald’s might not be in as bad a position as it appears. What’s an eight-figure lawsuit to a company worth nearly $200 billion?

McDonald’s has been dabbling in AI systems for years now, and there’s an argument to be made that it’s poised to lead the charge when it comes to autonomous systems.

The perfect storm

Autonomous robotics technology is nothing new. Today it powers automotive factories and the garment manufacturing industry.

And that makes it easy for us to imagine other industries, such as fast food, adopting a similar approach. We’ve certainly heard a lot about burger-flipping robots and the end of entry-level jobs for the past decade.

The majority of discourse on automation focuses on the one-for-one human costs of replacement. We often envision the debate being about whether the efficiency and corporate labor cost reductions are worth the potential mass displacement of human workers.

But what if we stop thinking about McDonald’s like a greasy spoon and start thinking of it like Facebook, Google, or Microsoft?

The mainstream may recognize those as a social network, a search giant, and an OS developer, respectively, but the truth of the matter is that each one is an AI-first company. And with each passing year, AI endeavors make up a greater portion of their profits and net worth.


If McDonald’s were to convert its global market position as a restaurateur into a horizontal entry into the technology sector… interesting things could happen.

McDonald’s, but as an AI company

Strip away the what and how of where McDonald’s exists as a global corporation and you can compare it to other big tech businesses. The most apt comparison might be Facebook.

McDonald’s serves approximately one percent of the global population on a daily basis. Facebook, by contrast, reaches approximately 25% of the population. The biggest difference between the two, arguably, is that consumers typically have to pay to use the former’s services while Facebook monetizes its customers.

Let’s imagine a new McDonald’s where the food no longer costs money. Like Facebook, all you’d have to do is sign up and create a profile. Then, you could either go to a McDonald’s location to pick up food or request a delivery.

Every few orders, however, you might be asked to do something simple, such as filling out a series of questionnaires similar to those “I’m not a robot” CAPTCHAs where you click on the traffic lights or bicycles.

You might be tasked with ordering via voice or handwriting, so the system can capture your biometric data.

Most of the time, however, you’d just get free food for signing up and agreeing to McDonald’s terms and conditions.

Behold: Hypercapitalism

If this sounds a bit like socialism or communism, just remember: there’s no such thing as a free lunch. Whatever data McDonald’s could gather would be worth a fortune. It’s already a globally recognized brand with more than 38,000 locations in 100 countries.

The reason why so many big tech companies have pivoted to AI is because it’s a trillionaire’s market. Anyone can gather data, but only a few organizations have the money and infrastructure to gather data from billions of people at a time – and even fewer can ensure they’ll keep coming back for more no matter what.

There’s nothing stopping McDonald’s from using its burgers and nuggets to achieve the same goals as Facebook does with Candy Crush and conservative conspiracy theories.

The picture starts to come into focus when you consider that Facebook was founded in 2004 and is worth $280 billion, while the first McDonald’s opened in 1955 and the company is only worth $170 billion.

Could McDonald’s turn feeding the hungry into the next big global data-gathering endeavor? What would you do for a “free” cheeseburger?




Artificial intelligence research continues to grow as China overtakes US in AI journal citations

The artificial intelligence boom isn’t slowing yet, with new figures showing a 34.5 percent increase in the publication of AI research from 2019 to 2020. That’s a higher percentage growth than 2018 to 2019 when the volume of publications increased by 19.6 percent.

China continues to be a growing force in AI R&D, overtaking the US for overall journal citations in artificial intelligence research last year. The country already publishes more AI papers than any other country, but the United States still has more cited papers at AI conferences — one indicator of the novelty and significance of the underlying research.

These figures come from the fourth annual AI Index, a collection of statistics, benchmarks, and milestones meant to gauge global progress in artificial intelligence. The report is collated with the help of Stanford University, and you can read all 222 pages here.

In many ways, the report confirms trends identified in past years: the sheer volume of AI research is growing across a number of metrics, China continues to be increasingly influential, and investors are pumping yet more money into AI firms.

However, the details reveal subtleties about the AI scene. For example, while private investment in AI increased 9.3 percent in 2020 (compared with a 5.7 percent increase from 2018 to 2019), the number of newly funded AI companies decreased for the third year in a row. There are several ways to interpret this, but it suggests investors expect the winner-takes-all dynamic that has defined the tech industry, in which digital economies of scale tend to reward a few dominant players, to be replicated in the AI world.

The report’s section on technical advances also confirms the major trends in AI capabilities, the biggest of which is the industrialization of computer vision. This field has seen incredible progress during the AI boom, with services like object and facial recognition now commonplace. Similarly, generative technologies, which can create video, images, and audio, continue to increase in quality and availability. As the report notes, this trend “promises to generate a tremendous range of downstream applications of AI for both socially useful and less useful purposes.” Useful applications include cheaper computer-generated media, while malicious outcomes include misinformation and AI revenge porn.

One area of AI research that seems like it’s just beginning to come into its own is biotech. The drug discovery and design sector received the most private investment of any sector in 2020 ($13.8 billion, 4.5 times more than in 2019), and experts canvassed for AI Index’s report cited DeepMind’s AlphaFold program, which uses machine learning to fold proteins, as one of the most significant breakthroughs in AI in 2020. (The other frequently cited breakthrough last year was OpenAI’s text-generation program GPT-3.)

One area where the AI Index report struggles to gauge progress, though, is ethics. This is a wide-ranging area, spanning everything from the politics of facial recognition to algorithmic bias, and discussion of these topics is increasingly prominent. In 2020, stories like Google’s firing of researcher Timnit Gebru and IBM’s exit from the facial recognition business drove discussions of how AI technology should be applied. But while companies are happy to pay lip service to ethical principles, the report notes that most of these “commitments” are non-binding and lack institutional frameworks. As has been noted in the past, AI ethics for many companies is simply a way to slow-roll criticism.

Repost: Original Source and Author Link

Categories
AI

Ex-Google engineer Anthony Levandowski has closed his artificial intelligence church

Anthony Levandowski, a former Google engineer convicted of stealing self-driving car secrets, is closing his artificial intelligence-focused church Way of the Future (WOTF), TechCrunch reports. It’s a somewhat inauspicious end to the church’s goal of pursuing “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI),” in preparation for the Singularity. Now humanity will be left to deal with rampant AI without the religion’s guidance.

Levandowski started the process of shuttering WOTF in June 2020, according to documents filed in the state of California found by TechCrunch. WOTF never had regular meetings or a physical church building. The church’s funds, totaling $175,172, have been donated to the NAACP Legal Defense Fund.

Owning and operating an AI church is really just one piece of the larger Levandowski story, which The Verge has covered as part of the legal saga between Uber and Google’s Waymo. Levandowski worked on autonomous vehicles at Google, then started his own self-driving trucking company Otto, which he later sold to Uber. At some point in the journey from Google to Uber, Levandowski took some internal documents from the search giant, leading Waymo to sue Uber in 2017, with the companies settling in 2018. Levandowski was convicted of trade secret theft in 2020.

Levandowski was sentenced to a reduced term of 18 months in prison, to be served once the pandemic had subsided, but was pardoned by President Donald Trump shortly before he left office. With Way of the Future a thing of the past, no prison sentence to serve, and another autonomous driving company to oversee, Levandowski can apparently wait for AI to surpass humans in relative peace.

Repost: Original Source and Author Link

Categories
AI

National Security Commission on Artificial Intelligence issues report on how to maintain U.S. dominance

The National Security Commission on Artificial Intelligence today released its final report, with dozens of recommendations for President Joe Biden, Congress, and business and government leaders. China, the group said, is the first competitor since the end of World War II to threaten U.S. technological dominance and, with it, American economic and military power.

The commissioners call for a $40 billion investment to expand and democratize AI research and development, which they describe as a “modest down payment for future breakthroughs,” and encourage policymakers to approach investment in innovation with the kind of ambition that led to the building of the interstate highway system in the 1950s.

The report recommends several changes that could shape business, tech, and national security. For example, amid a global shortage of semiconductors, the report calls for the United States to stay “two generations ahead” of China in semiconductor manufacturing and suggests a hefty tax credit for semiconductor manufacturers. President Biden pledged support for $32 billion to address a global chip shortage and last week signed an executive order to investigate supply chain issues.

“I really hope that Congress deeply considers the report and its recommendations,” AWS CEO Andy Jassy said today as part of a meeting held to approve the report. “I think there’s meaningful urgency to get moving on these needs, and it’s important to realize that you can’t just flip a switch and have these capabilities in place. It takes steady, committed hard work over a long period of time to bring these capabilities to fruition.”

Commissioners who helped compile the report include Oracle CEO Safra Catz, Microsoft chief scientist Eric Horvitz, Google Cloud AI chief Andrew Moore, and Jassy, who takes over as CEO of Amazon later this year. Publication of the final report is the last act of the temporary commission Congress formed in 2018 to advise federal policy.

The 756-page report calling for the United States to be AI-ready by 2025 was approved by commissioners in a vote today. Moore and Horvitz abstained from chapters 2 and 11 of the report due to perceived conflicts of interest.

“I think it bears repeating that to win in AI, we need more money, more talent, stronger leadership, and collectively we as a commission believe this is a national security priority, and that the steps outlined in the report represent not just our consensus, but a distillation of hundreds and hundreds of experts in technology and policy and ethics, and so I encourage the public and everyone to follow our recommendations,” commission chair and former Google CEO Eric Schmidt said.

Now, Schmidt and other commissioners said, begins the work of selling these ideas to key decision makers in power.

Key recommendations in the report include:

  • The intelligence community should seek to fully automate many tasks by 2030.
  • In line with earlier recommendations, the final report calls for the creation of a Digital Corps for hiring temporary or short-term tech talent, and a Digital Service Academy, an accredited university intended to produce government tech talent. The report calls the failure to recognize the need for a government technical workforce shortsighted and a national security risk.
  • Increase access to open source software for federal government employees in agencies like the Pentagon. The report refers to TensorFlow and PyTorch as “must-have tools in any AI developer’s arsenal.”
  • Private industry should form an organization, funded with $1 billion over the next five years, to launch efforts to address inequality.
  • Identify service members with computational thinking skills.
  • Establish responsible AI leads in each national security agency and branch of the armed forces.
  • The report also calls for the U.S. State Department to increase its presence in technology hubs in the U.S. and around the world.
  • Triple the number of national AI research institutes. The first institutes were introduced in August 2020.
  • Set policy for agencies critical to national security to allow people to report irresponsible AI deployments.
  • Double AI research and development spending each year until 2026, when levels would reach $32 billion (the implied doubling schedule is sketched just after this list).
  • The report also calls immigration a “national security imperative” and says that immigration policy could slow progress for China. Commissioners recommend doubling the number of employment-based green cards, creating visas for entrepreneurs and the makers of emerging and disruptive technology, and giving green cards to every AI PhD graduate from an accredited U.S. university. Leadership in 5G telecommunications and robotics are also referred to as national security imperatives in the report.
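
The doubling recommendation above implies a simple compounding schedule. As a minimal sketch of that trajectory, assuming a purely illustrative $1 billion baseline for fiscal 2021 (the report’s actual starting figure isn’t quoted here):

    # Illustrative doubling schedule; the $1B fiscal 2021 baseline is an assumption,
    # not a figure taken from the report.
    budget = 1.0  # billions of dollars
    for year in range(2022, 2027):
        budget *= 2
        print(f"FY{year}: ${budget:g}B")
    # Doubling every year from $1B lands at $32B in fiscal 2026,
    # the level the recommendation targets.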

Government beyond defense

Within government, the report goes beyond the Pentagon, extending recommendations to Congress on border security and to federal agencies like the FBI and the Department of Homeland Security. For example, it criticizes the lack of transparency around AI systems used by federal agencies as a potential threat to civil liberties, and calls for Congress to amend impact assessment and disclosure reporting requirements to include civil rights and civil liberties reports for new AI systems or major updates to existing ones.

“For the United States, as for other democratic countries, official use of AI must comport with principles of limited government and individual liberty. These principles do not uphold themselves. In a democratic society, any empowerment of the state must be accompanied by wise restraints to make that power legitimate in the eyes of its citizens,” the report reads.

A statement from ACLU senior staff attorney Patrick Toomey, whose work focuses on national security, said the report acknowledges some dangers of AI in its recommendations, but “it should have gone further and insisted that the government establish critical civil rights protections now, before these systems are widely deployed by intelligence agencies and the military. Congress and the executive branch must prioritize these safeguards, and not wait until after dangerous systems have already become entrenched.”

The report argues that the consolidation of the AI industry threatens U.S. technological competitiveness in a number of important ways, exacerbating trends like brain drain and stifled competition.

China and U.S. foreign policy

China’s increased funding and investment in its bid to be an AI leader by 2025 mean more of the report is dedicated to China than to any other foreign nation. The report concludes that the United States could lose military technical superiority to China within the next decade.

“We have every reason to think that the competition with China will increase,” Schmidt said during the NSCAI meeting today.

To ward off rising models of techno-authoritarian governance like the kind practiced in China, the report calls for the United States to establish an Emerging Technology Coalition with allies. It also calls for high-level, ongoing diplomatic dialogue with China about the challenges that emerging technologies like AI present, in order to find areas of cooperation on global problems like climate change. That body could also act as a forum for sharing concerns or grievances about practices inconsistent with American values. Bilateral talks between the United States and China were previously recommended by AI policy expert and former White House economist R. David Edelman.

In defense, commissioners do not support a treaty for the global prohibition of AI-enabled autonomous weaponry since it is “not currently in the interest of U.S. or international security,” and because the report concludes that China and Russia would ignore any such commitment. Instead, the report calls for developing standards for autonomous weaponry.

In other matters related to foreign policy and international affairs, the commission calls for an international agreement never to automate the use of nuclear weapons, and for the United States to seek similar commitments from Russia and China.

The need for leadership is stressed throughout the report. For President Biden, it recommends an executive order aimed at protecting intellectual property and the creation of a Technology Competitiveness Council, in part to deal with intellectual property issues and to establish national plans.

Oracle CEO Safra Catz called collaboration within and among the Department of Defense, the broader U.S. government, and allies critical, and said that leadership in government is needed. “There’s so many important steps that have to be taken now.”

“It is our great hope that like-minded democratic nations work together to make sure that technologies around AI do not leak into adversarial hands that will give them an advantage over our systems and that we will unite together in the safe and responsible deployment of this kind of technology in military systems,” commissioner and In-Q-Tel founder Gilman Louie said today.

The United States has been working with international groups like the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI), while last year defense and diplomatic officials from the United States and allied nations met to discuss the ethical use of AI in warfare.

Repost: Original Source and Author Link