
The real opportunity in creative AI: Deepening human creativity

We are on the cusp of a paradigm shift brought by generative AI — but it isn’t about making creativity “quick and easy.” Generative technology opens new heights of human expression and helps creators find their authentic voices.

How we create is changing. The blog you read earlier today may have been made with generative AI. Within 10 years, most creative content will be produced with generative technologies. 

The idea of using AI systems to create content is anything but new. By the 1980s David Cope had already created EMI (Experiments in Musical Intelligence), which composed music in the style of Bach and Vivaldi, and Harold Cohen was showcasing artwork created by his system, AARON. 

What is new is the long-overdue release of this technology from the ivory towers of academia and into the consumer market. Perhaps the single largest push in the industry came from OpenAI, with a B2B approach that shares its large, multi-purpose generative models with a wide range of startups. 

Some of the most popular include Copy.AI and Jasper, which focus on simplifying the writing of marketing copy, but GPT-3 is also powering stories, news articles (don’t worry, not this one!), and dialog systems, to name a few. The impending release of DALL-E 2, a powerful text-to-image generator, will further expand opportunities for business-savvy entrepreneurs who would like to join the revolution without needing to develop their own technical IP.  

While OpenAI is supplying firms with versatile technology, startups are out there providing value to consumers. What is the role of generative AI in our everyday lives? What impact will it have on humanity at large and specifically our creative expression? What do consumers actually want from creative generative AI? 

Deepening human creativity

It is true that generative methods can be used to cut work opportunities for creative people, helping larger corporations make a few extra bucks. But what I argue here is that this is actually a relatively weak opportunity. Plenty of creatives already work for (more than) reasonable fees, and matching the quality of their work remains very challenging for autonomous AI systems. 

Generative AI has better uses than destroying jobs. Humans have a profound need to express themselves, and we will continue to express ourselves until the end of time. Consequently, the real opportunity in creative, generative technology is in deepening human creativity. In fact, the original purpose of the creators of generative AI, including David Cope and Harold Cohen back in the 1980s, was precisely to enrich their own creative expression. 

Since then, academics and technologically-savvy artists have been building systems to generate music, art and everything in between to enrich their own creative process. This has been how creative AI has been used since its inception, and this is where its true potential lies. 

I’ve been tackling the commercialization of generative AI since before the popularization of large generative models. My team and I have been crafting new experiences from scratch, from the development of the generative AI models to the smallest UI/UX decisions. This allowed us to gain profound insight into users’ needs in this emerging domain. 

A partner in creativity

Identifying the key user need — the desire for deeper self-expression — made all the difference. Users don’t want “quick and easy.” They want to go deeper within themselves, to find the right words to express how they feel. They want to be better artists, but only in a way that comes from finding new depths in their own unique creative expression.

Generative AI as a partner to help us find our authentic voice may seem like a contradiction. But other technological advancements — from electric pianos and Photoshop to digital audio workstations and Canva — have always been used to expand human expression. Generative AI systems are no different. In fact, by acting like a creative partner, generative AI is best positioned as a personalized guide to help us discover new layers of creative expression dormant within ourselves.  

Before we wrap up, a final word of advice. I love what OpenAI is doing for the industry. But I would also like to encourage startups in the space: don’t shy away from developing your own AI. OpenAI’s models are powerful — but they are also general-purpose, and not tailored to any specific application. For every unique domain, specialized generative technology can make a world of difference to the user’s experience. It will also make your startup that much more defensible against competition. 

Seize the opportunity

Technology has long been used to speed up tasks and eliminate labor. Perhaps we’ve forgotten that these were never the only aims of computing innovation. When it comes to generative AI, the opportunities are a lot greater. We have before us the chance to take human creativity to new heights. 

The marriage of generative AI with the thriving new creator ecosystem is paving the way to a new world, ripe with opportunity that can only be compared with the tech boom of the 1990s. The question isn’t whether generative AI will take over, but who will seize the opportunity early enough to shape the industry and reap the largest harvest. 

Maya Ackerman is the CEO and Co-Founder of Silicon Valley startup WaveAI.


Watch a swarm of drones autonomously track a human through a dense forest

Scientists from China’s Zhejiang University have unveiled a drone swarm capable of navigating through a dense bamboo forest without human guidance.

The group of 10 palm-sized drones communicate with one another to stay in formation, sharing data collected by on-board depth-sensing cameras to map their surroundings. This method means that if the path in front of one drone is blocked, it can use information collected by its neighbors to plot a new route. The researchers note that this technique can also be used by the swarm to track a human walking through the same environment. If one drone loses sight of the target, others are able to pick up the trail.

In the future, write the scientists in a paper published in the journal Science Robotics, drone swarms like this could be used for disaster relief and ecological surveys.

“In natural disasters like earthquakes and floods, a swarm of drones can search, guide, and deliver emergency supplies to trapped people,” they write. “For example, in wildfires, agile multicopters can quickly collect information from a close view of the front line without the risk of human injury.”

However, experts say the work also has clear military potential. A number of nations — most prominently the US, China, Russia, Israel, and the UK — are currently developing drone swarms that could be used in war. Militaries tend to invoke surveillance and reconnaissance as the most common applications for this work, but the same technology could undoubtedly be used to track and attack both combatants and civilians.

An illustration from the paper showing how multiple drones can be used to track a target even if the view from one drone is blocked.
Image: Science Robotics / Xin Zhou et al

Elke Schwarz, a senior lecturer at Queen Mary University of London whose specialisms include the use of drones in combat, says this research has clear military potential.

“The capability to navigate cluttered environments, for example, is desirable for a range of military purposes, including for urban warfare,” Schwarz tells The Verge. “As is the ability to ‘follow a human’ — here I can see how this converges with projects that seek to develop lethal drone capabilities that minimize risk to on-the-ground soldiers in urban environments.”

The recent war between Russia and Ukraine has shown how quickly drone technology can be adapted for the battlefield and what a devastating effect it can have. Both sides in the conflict are using cheap consumer drones for reconnaissance and, sometimes, offense. One method involves using drones to drop grenades onto opposing forces. A recent video showed Ukrainian troops using what appears to be a DJI Phantom 3 drone (price-tag: $500) to drop a grenade through the sunroof of a car supposedly driven by Russian soldiers.

What makes drone swarms potentially more dangerous than lone machines, though, is not just their numbers but their autonomy. No single human can simultaneously control a swarm of 10 drones, but if this task can be offloaded to algorithms then military planners are more likely to embrace the use of this sort of autonomous system in war.

Drones in the swarm are capable of navigating through gaps as small as 30 centimeters.
Image: Science Robotics / Xin Zhou et al

Currently, drone swarms are limited in their application. The most common real-world use case is creating elaborate light shows. But in these scenarios, drones are following preset trajectories in open spaces, using tracking technology like GPS to locate themselves.

The research from Zhejiang University advances on this by using only on-board sensors and algorithms to control the drones’ flight without prior mapping of their environment. “This is the first time there’s a swarm of drones successfully flying outside in an unstructured environment, in the wild,” Enrica Soria, a drone swarm researcher at the Swiss Federal Institute of Technology Lausanne, told AFP. Soria added that the work was “impressive.”

In their paper, the scientists note that approaches to drone swarms tend to follow one of two programming paradigms: either “bird” or “insect.” In an “insect” swarm, the focus is on fast, reactive movements that require less forward-planning while a “bird” swarm tries to direct drones along long, flowing paths (the latter being the researchers’ approach). Both methods have their trade-offs, as thinking like an insect requires less computing power, but planning like a bird is more energy efficient. But, as the computing capacity of hardware improves, programming bird-like behavior has become more attainable.

Schwarz notes that although the focus in such drone swarm research is often on these technological achievements, this can obscure the trickier questions of how such work should be deployed. She cites the observations of 20th century US mathematician Norbert Wiener, whose work laid the foundations for AI development.

Says Schwarz: “[Wiener] said — in the 1960s — that there is a disastrous focus on and obsession with ‘know-how’, which tends to eclipse the moral question we should be asking: what is it good for?”


A visit to the human factory

Will Jackson, CEO of robotics company Engineered Arts, says he isn’t sure what’s worse: the angry emails that accuse him of building machines that will one day overthrow humanity or the speculative ones enquiring if the sender can fuck the robots.

“Everybody wants to see a humanoid robot,” Jackson says. “They love to imagine all these things that are going to happen. Part of what we do is fulfilling that desire.” (Though not, he is careful to stress, the sex-robot stuff.)

Footage of Engineered Arts’ most recent creation, a gray-skinned bot named Ameca, went viral last December with clips showing an android with an exposed metal torso and eerily realistic facial expressions interacting with researchers. (“Android” being the correct term for a human-shaped robot, from the ancient Greek andro for “man” and eides for “form.”)

In one video, Ameca frowns as an off-screen employee reaches out to touch its nose before smoothly reaching up to stop his arm in a whir of electric motors. It’s an uncanny moment that sets off alarm bells for the viewer: the shock is that a robot would want to establish this boundary between it and us — a desire that is, ironically, very human.

One commenter on the video writes: “Got just a tad scared when it raised its hand to his arm. Thought it was just gonna snap it.” Says another, “I know this is scary, but I love this and I want more.”

It’s these emotions — curiosity, fear, excitement — that are Engineered Arts’ stock-in-trade. The company makes its money selling its robots for entertainment and education. They’re used by academics for research; by marketing teams for publicity stunts; and placed in museums, airports, and malls to welcome visitors. “Anywhere you’ve got a big crowd of people to interact with,” says Jackson.

The machines can run on autopilot, reacting to passersby with preset banter. Or they can be controlled remotely, with unseen handlers responding to queries from the crowd as in this video filmed at CES. In the near future, though, Engineered Arts wants to equip its robots with more sophisticated chatbot software that would let them respond fluidly to queries without any human guidance.

More than entertainers, though, these robots are heralds of the future. As technology improves and androids become more realistic, the question of how we relate to such machines is going to become more pressing. Are fucking and fighting the only two responses we can imagine?

A prototype robot used to test facial expressions.

Humanity’s interest in androids seems like a modern obsession, but this is far from the truth. We’ve been dreaming of artificial humans for thousands of years — from the singing, gold-forged Celedones of ancient Greek myth to the golem of Jewish folklore, molded from clay and animated by sacred words. The term “robot,” by comparison, is a more recent coinage, first appearing in 1920 in the play RUR, or Rossum’s Universal Robots. Here, machines are stand-ins for a newly brutalized working class (the term robot comes from the Slavonic robota, meaning “forced labor”) forced into mechanical postures and destined to revolt.

A diagram of Jacques de Vaucanson’s digesting duck, drawn on the mistaken assumption that the food was actually being digested.
Image: Public Domain

Before they were surrogates for class fear, though, automata in Europe were spectacles. Automata invented in the medieval era are still familiar today, like the jacquemarts, or “jacks of the clock” — human figures that strike bells in Europe’s grand astronomical clocks. Others were elaborate one-offs, like the mechanical lion gifted to Francis I of France in 1515. Designed by Leonardo da Vinci, the lion was reportedly capable of walking up to the king unaided before opening its chest to reveal a bouquet of flowers inside.

As clockwork improved, designs became more complex. The 18th-century engineer Jacques de Vaucanson put on theatrical shows featuring automata that could play the flute and tambourine. His most famous machine, though, imitated basic biology: it was a duck that appeared to eat, drink, and defecate — an achievement that led the philosopher Voltaire to praise Vaucanson as the “new Prometheus.”

As with the robots built by Engineered Arts, these automata inspired a range of reactions. Some people celebrated their artificiality, seeing the machines as proof of humanity’s technological achievements, while others ascribed spiritual properties to these machines, claiming they blurred the boundaries between artificial and biological life. Such theorizing was not trivial, either, inspiring thinkers like René Descartes to suggest that humans and animals were only another sort of advanced machine (though the latter category lacked soul or consciousness).

A desire to project agency and intelligence onto inanimate matter, though, is deeply human, says Beth Singler, a digital anthropologist at the University of Cambridge. “You don’t have to go as far as Ameca has with facial features before people start bringing animated entities into what I call their cosmology of potential beings,” she tells The Verge. “There’s this sense that what is around us could be intelligence, and different cultures react to that in different ways.”

Traditions like Shinto and Buddhism are more open about this impulse to ascribe soul to objects, says Singler, but the same instincts run deep in the West. “We like to think we’re immune to this because we had the Enlightenment and became very serious and rational,” she says. “But I don’t see that. When I see people’s interactions with animated technological entities — and that can be everything from a robot to a Roomba — I see that same animistic tendency.” In other words: we still want to believe.

Engineered Arts’ creations blend robotics and special effects.

Engineered Arts knows how to play upon such instincts. As Jackson explains, “It’s amazing the simple things you can do to make a machine look sentient.” In the company’s early days, for example, they hit upon a useful trick with speech recognition. Instead of programming a chatbot that analyzed what people were saying, his engineers coded a program that repeated the last thing the robot heard and swapped the words “you” and “I” in any sentence. “So you say to the robot ‘I love you,’ and it says back, ‘you love me,’” he says. “And you think ‘oh my god, it understands me,’ but no, all I did was swap two words around.”
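
The trick is simple enough to sketch in a few lines. Here is a minimal, hypothetical version in Python — not Engineered Arts’ actual code — just to show how little is needed to create the effect:

```python
# Minimal sketch of the "echo and swap" trick Jackson describes.
# Illustrative only; this is not Engineered Arts' software.
SWAPS = {"i": "you", "you": "me", "me": "you", "my": "your", "your": "my"}

def echo_reply(heard):
    """Repeat the sentence back, crudely swapping first and second person."""
    return " ".join(SWAPS.get(word, word) for word in heard.lower().split())

print(echo_reply("I love you"))  # -> "you love me"
```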

The company explores these questions from its headquarters in Falmouth in the UK. It’s an unassuming location for such sci-fi work: a fishing town with a population of a little over 20,000 on the southwestern tip of the country in the county of Cornwall. It’s a region with a distinct sense of local identity, where inhabitants are proud to have more in common with Celtic neighbors in Ireland and France than with the rest of England. Jackson himself is a local, Falmouth born and raised, and says he couldn’t have imagined settling elsewhere.

The sense of remoteness fits the work. The company’s headquarters, in a large industrial building on the edge of town, has the quiet and airy feel of an artisan’s workshop. On the day that I pay a visit, a storm is blowing into town, sending whistles through the various departments. There’s coding with its multi-monitor standing desks and mugs extolling the virtues of rock climbing; costuming with its rails of outfits and wigs; and engineering — the largest area — populated by huge machine tools that are noisily slicing up blocks of aluminum.

The decorative motif that unifies the spaces, though, is the body parts. Wherever you go in the building, there are mechanical limbs, silicone faces, and disembodied heads scattered on desks and shelves. Exploring the place feels like going behind the scenes at Westworld: it’s eerie to see the human form broken down into its constituent components, but you soon become accustomed to the sight. Before you know it, you’re pulling at mechanical hands and rubber faces with the curious innocence of a child.

Engineered Arts CEO Will Jackson shows off the detail in a rubber face.

For some, this is one of the dangers of creating realistic robots. As you get used to treating human-like automata as automata, you may slowly find yourself treating humans the same. It’s similar to the dilemma parents have with young children and Alexa. Should they be polite to the AI assistant because it encourages them to be polite to humans? Or is that the wrong way to treat a piece of software coded and controlled by a huge multinational corporation?

As I ponder this, Jackson and I walk past a desk laden with mechanical widgets undergoing stress tests. Pistons have been nailed to a wooden plank while, on a stand, tiny pulleys lift and lower a cup full of screws. And, true to Singler’s suggestion that humans will ascribe a bit of soul to just about anything that moves, I feel passing sympathy even for these tortured components.

“We’re testing those actuators for fingers,” Jackson says. “It’s all about longevity: how many times can you run that back and forth.” The goal is a million cycles, though the motors — found on a Chinese wholesale site — have only gone through a few hundred thousand so far. They were likely designed to open and close CD drives, he says, but if they prove reliable, they’ll have a new use opening and closing artificial hands.

Engineered Arts doesn’t build its robots entirely from scratch, but the company’s involvement in every part of their construction — from molding rubber faces to programming robot brains — makes its wares almost unique in the market. Probably only Disney’s Imagineering team, which builds animatronics for its theme parks, combines so much disparate expertise under a single roof, says Jackson. And Disney isn’t selling what it makes.

Engineered Arts’ first robot — RoboThespian — was much less realistic than its current models.

Since its founding in 2005, Engineered Arts has made a half dozen or so robots. But its latest model, Ameca, is undoubtedly the most sophisticated yet. After our initial tour, Jackson takes us to see one of three operational units. As he boots up the machine’s operating system on a laptop, the automaton comes to life. It scrunches its cheeks, raises its eyebrows, and then grimaces and blinks. It’s like watching a newborn baby cycle through facial expressions. There’s a sense that the hardware hasn’t yet been fully connected to the software.

It’s these facial expressions that encapsulate Engineered Arts’ ambitions. “The human face is this massive bandwidth communication tool,” says Jackson. “You have a physical interface that people recognize.” As a species, we’re hard-wired to identify faces, but Ameca is so lifelike that it takes barely any effort to project intelligence where there is none. As Jackson prompts the robot to trot out some pre-programmed phrases, I reach up to see what the face feels like — and hesitate. Jackson reassures me that it’s not dangerous, but my worry is that it would be disrespectful.

Engineered Arts deploys all sorts of methods to compound the impression of sentience. Jackson is particularly proud of the clavicle, which can move forward and back as well as pitch, roll, and yaw. All this helps convey subtle emotions like anticipation and apprehension. Microphones in the robot’s ears allow it to triangulate sound and turn to nearby noise while cameras in its eyeballs run a simple machine vision program to track hands and faces. The result is that if you move into Ameca’s presence or speak to it, it responds like a human would. It turns to look at you, and, naturally, you look back. It’s the start of a relationship.
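
That kind of hand-and-face tracking is well within reach of off-the-shelf tools. As a rough illustration only (Engineered Arts hasn’t said which vision stack Ameca runs, so this is an assumption), here is what a bare-bones webcam face tracker looks like using OpenCV’s bundled Haar cascade:

```python
# Rough sketch of webcam face tracking with OpenCV's bundled Haar cascade.
# Illustrative only: Engineered Arts hasn't said what vision software Ameca uses.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A robot would turn (x, y) into head and eye motor targets here;
        # this sketch just draws a box for visualization.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```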

This is why the company builds androids specifically, says Jackson: because we naturally respond to them like humans. The form just doesn’t make sense for any other task. “The only good reason to build a humanoid is to interact and be friendly with people,” he says. Robots should be built to carry out specific tasks as efficiently as possible, which is why “the best robot dishwasher is a square box — it’s not a humanoid wandering around your house, messing with your plates.”

There are just too many engineering challenges in replicating the efficiency and dexterity of the human body. Electric motors are far more bulky and power-hungry than organic muscle, while digital control systems still aren’t able to emulate our mobility, dexterity, and perception. In the field of robotics, this is known as Moravec’s paradox: the fact that it’s much easier to build an AI that can beat a chess grandmaster than a robot with the physical skills of a toddler.

One of Engineered Arts’ employees puts the finishing touch on an eyeball.

Despite this, advances in some areas of AI, like machine vision and natural language understanding, have rekindled old ambitions to construct the perfect human robot. When I ask Jackson what he thinks of Elon Musk’s plan to create an android worker for his factories, he’s incredulous. “When [Musk] jumped on the bandwagon with the Tesla Bot, we were absolutely rolling around in laughter,” he says. He suggests the tech CEO will certainly come up with something (“he’s got a budget and he can spot talent”). But there’s no way he’ll make a machine that can replace humans — something Musk has promised with absolute certainty.

If you want to see why Musk’s plans will fail, says Jackson, just look at Boston Dynamics. That’s a company that has been developing robots for decades, but its most advanced android — Atlas — is still restricted to demos and research. For now, humans are just so much better at being humans. “They self-repair, they self-replicate, and they run off a packet of cornflakes,” he says, speculating that Musk’s desire to create a perfectly pliant worker perhaps says more about his well-documented problems with human labor than his grasp of the possibilities of robotic engineering.

What Musk can do, though, is trigger people’s imaginations — just like Engineered Arts. That’s part of the reason why, when he brought out a dancing man in a spandex suit in lieu of his Tesla Bot last year, so many fans were willing to give him the benefit of the doubt: people want to believe in robots.

Engineered Arts is much more upfront about this sort of “trickery” (a term Jackson finds a little ungenerous). Unlike one of the company’s rivals, Hanson Robotics, the makers of the Sophia robot, the company doesn’t pretend its machines are conscious. When Sophia goes on late night talk shows and declares that it’s a friend to humanity or that it wants a child, experts spit feathers. “It’s obviously bullshit,” AI ethics researcher Joanna Bryson told me a few years ago after Sophia had been made a “citizen” of Saudi Arabia as a PR stunt. In interviews with Engineered Arts’ employees, though, they stress the reality of these machines: they’re advanced animatronics — not the first draft of the robot apocalypse.

You could argue that the company still contributes to these misconceptions by sharing clips of Ameca without full context, but Jackson’s response is that some people will always willfully misunderstand what they see. “If an actor plays a baddie in the film, people hiss at him when they see him in the street,” he says. “It’s an inability to distinguish between fantasy and reality.”

Ameca is Engineered Arts’ most recent — and most realistic — creation.

After spending time with Ameca, my own ability to distinguish fantasy and reality is, I think, intact. But there are certainly moments when the illusion is complete and convincing. Often, it’s just a single gesture — a sweep of the hands or a squint of the eye — but, just for a second, you can believe that this assemblage of motors and circuits standing in front of you is something more than the sum of its parts.

Looking over the history of automata, there’s one particular type of robot that Ameca reminds me of: the robotic saint. There are numerous examples of such religious automata from the late medieval era onwards, including life-size sculptures of Christ and the Virgin Mary that were equipped with articulated limbs and animated by puppetry or clockwork. These artifacts were often incorporated into religious ceremonies, engaging audiences with their miraculous attributes, and, though it may be odd to think of robots as miraculous agents, they are certainly superhuman: they do not die and cannot age. And in our current era of machine learning hype and mysticism — when tech bros start religions dedicated to AI gods and researchers speculate on Twitter as to whether neural nets are conscious — I think this tendency to turn the technological into something spiritual is stronger than ever.

Singler specializes in cultural reactions to AI and says this is a consistent theme in her studies. She notes how frequently AI stock images recall religious imagery like The Creation of Adam or how people talk about being “blessed by the algorithm” on social media, creating folk traditions on how to extract favorable results from these mysterious entities. “When it comes to AI it’s easy to see it as super-intelligence and almost fitting into that God-space very quickly,” she says.

In this light, Engineered Arts’ robots are not only devices for entertainment but also a tangible way to interact with this powerful new force in the world — a way for audiences to engage with anxieties about the future and technology. Jackson says that after people have gotten over the initial surprise of seeing a robot like Ameca, their next reaction is to critique. “When people see our robots [they] pick up on all the things that are wrong. ‘Oh that blink was wrong,’ they say. Or, ‘A real person would never have done that,’” he says. “They’re differentiating themselves from the machine. I think it’s reassuring: ‘I don’t need to worry, that machine’s not as good as me.’”

The next step for Ameca is a version that walks, says Jackson, and he shows me a prototype pair of metal legs, bending and flexing the knees. He says his work ultimately reminds him of the magnificence of nature. The more he tries to re-create the human body, the greater his sense of “awe and wonder” — and his realization of how far human ingenuity has to go to compete. “You look at biological systems and then you try and emulate it, and you end up thinking — and I’m not religious — but you end up thinking, ‘How the hell did this happen?’”

Photography by James Vincent / The Verge


DeepMind says its new AI coding engine is as good as an average human programmer

DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.

Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI — a program that can autonomously tackle coding challenges that are currently the domain of humans only. “In the longer-term, we’re excited by [AlphaCode’s] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software,” said Vinyals.

AlphaCode was tested against challenges curated by Codeforces, a competitive coding platform that shares weekly problems and issues rankings for coders similar to the Elo rating system used in chess. These challenges are different from the sort of tasks a coder might face while making, say, a commercial app. They’re more self-contained and require a wider knowledge of both algorithms and theoretical concepts in computer science. Think of them as very specialized puzzles that combine logic, maths, and coding expertise.

In one example challenge that AlphaCode was tested on, competitors are asked to find a way to convert one string of letters, s, into a second string, t, using a limited set of inputs. Competitors cannot, for example, simply type new letters; instead they have to use a “backspace” command that deletes letters from the string as it is being typed. You can read a full description of the challenge below:

An example challenge titled “Backspace” that was used to evaluate DeepMind’s program. The problem is of medium difficulty, with the left side showing the problem description, and the right side showing example test cases.
Image: DeepMind / Codeforces

Ten of these challenges were fed into AlphaCode in exactly the same format they’re given to humans. AlphaCode then generated a large number of possible answers and winnowed these down by running the code and checking the output just as a human competitor might. “The whole process is automatic, without human selection of the best samples,” Yujia Li and David Choi, co-leads of the AlphaCode paper, told The Verge over email.
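
That generate-then-filter loop is straightforward to sketch. The toy version below runs each candidate Python program against the published example test cases and keeps only the ones that pass — illustrative code, not DeepMind’s actual pipeline:

```python
# Toy sketch of AlphaCode-style filtering: run each generated candidate
# program against the problem's example test cases and keep those that pass.
# Hypothetical code, not DeepMind's actual pipeline.
import subprocess
import sys

def passes_examples(candidate_path, examples):
    """examples is a list of (stdin_text, expected_stdout) pairs."""
    for given_input, expected in examples:
        try:
            result = subprocess.run(
                [sys.executable, candidate_path],
                input=given_input, capture_output=True, text=True, timeout=5)
        except subprocess.TimeoutExpired:
            return False
        if result.stdout.strip() != expected.strip():
            return False
    return True

def filter_candidates(candidate_paths, examples):
    """Keep only the generated programs whose output matches every example."""
    return [path for path in candidate_paths if passes_examples(path, examples)]
```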

AlphaCode was tested on 10 challenges that had been tackled by 5,000 users on the Codeforces site. On average, it ranked within the top 54.3 percent of responses, and DeepMind estimates that this gives the system a Codeforces Elo of 1238, which places it within the top 28 percent of users who have competed on the site in the last six months.
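
For readers unfamiliar with Elo, the rating boils down to an expected-score formula. The sketch below uses the standard chess formulation, which Codeforces only approximates, so treat the numbers as a rough guide:

```python
# Standard Elo expected-score formula, for context on the "1238" figure.
# Codeforces uses its own rating variant, so this is only an approximation.
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 1238-rated entrant versus a 1600-rated regular:
print(round(expected_score(1238, 1600), 2))  # ~0.11
```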

“I can safely say the results of AlphaCode exceeded my expectations,” Codeforces founder Mike Mirzayanov said in a statement shared by DeepMind. “I was sceptical [sic] because even in simple competitive problems it is often required not only to implement the algorithm, but also (and this is the most difficult part) to invent it. AlphaCode managed to perform at the level of a promising new competitor.”

An example interface of AlphaCode tackling a coding challenge. The input is given as it is to humans on the left and the output generated on the right.
Image: DeepMind

DeepMind notes that AlphaCode’s current skill set is currently only applicable within the domain of competitive programming, but that its abilities open the door to future tools that make programming more accessible and, one day, fully automated.

Many other companies are working on similar applications. For example, Microsoft and the AI lab OpenAI have adapted the latter’s language-generating program GPT-3 to function as an autocomplete program that finishes strings of code. (Like GPT-3, AlphaCode is based on an AI architecture known as a transformer, which is particularly adept at parsing sequential text, both natural language and code.) For the end user, these systems work just like Gmail’s Smart Compose feature — suggesting ways to finish whatever you’re writing.

A lot of progress has been made developing AI coding systems in recent years, but these systems are far from ready to just take over the work of human programmers. The code they produce is often buggy, and because the systems are usually trained on libraries of public code, they sometimes reproduce material that is copyrighted.

In one study of an AI programming tool named Copilot developed by code repository GitHub, researchers found that around 40 percent of its output contained security vulnerabilities. Security analysts have even suggested that bad actors could intentionally write and share code with hidden backdoors online, which then might be used to train AI programs that would insert these errors into future programs.

Challenges like these mean that AI coding systems will likely be integrated slowly into the work of programmers — starting as assistants whose suggestions are treated with suspicion before they are trusted to carry out work on their own. In other words: they have an apprenticeship to carry out. But so far, these programs are learning fast.


I regret to inform you that Digital Human as a Service (DHaaS) is now an acronym

Science fiction movies have prepared us for the distinct possibility that artificial intelligence will walk among us someday. How soon? No one can say — but that isn’t stopping a raft of companies from trying to sell “digital humans” before that whole intelligence thing gets figured out. Ah, but what if you don’t want to buy a digital human because that sounds icky? Rent one, of course! That’s why we now have the regrettable acronym Digital Human as a Service (DHaaS).

The actual news here is that Japanese telecom giant KDDI has partnered with a firm named Mawari (which means something along the lines of “surroundings” in Japanese) to create a virtual assistant you can “see” through the window of your smartphone in augmented reality, one who might automatically pop up to give you directions and interact if you point your phone at a real-world location. (You’ll also see walking directions and indoor maps in the video, but those simply appear to be packaged together as part of the proof of concept.)

If you peek at the video atop this post, you can see it’s not that much more advanced than, say, Pokémon Go. But behind the scenes, the partners claim that KDDI’s 5G network, Amazon’s low-latency AWS Wavelength edge computing nodes, and a proprietary codec from Mawari combine to let “digital humans” stream to your phone in real time instead of running natively on your phone’s chip.

That “substantially lower[s] the heavy processing requirements of real-time digital humans, reducing cost, data size and battery consumption while unlocking scalability,” according to the press release. (It’s true that AR apps like Pokémon Go tend to chow down on battery, but it’s not just graphics to blame; some of that is running GPS, camera and cellular simultaneously.)

Who’s going to jump on board to actually populate the metaverse with experiences designed for KDDI and Mawari’s “digital humans” and pay monthly, quarterly or annually for the “service” part of the acronym? That’s always the question, but there’s no shortage of companies looking to lean into the buzzy metaverse these days. And if they can leverage their existing buzzwords like “5G”, “AI” and “Edge compute,” so much the better. It takes a lot of work to look like you’re paying attention to the future, and you never know if this is the moment someone actually manages to make fetch happen.


DeepMind creates ‘transformative’ map of human proteins drawn by artificial intelligence

AI research lab DeepMind has created the most comprehensive map of human proteins to date using artificial intelligence. The company, a subsidiary of Google-parent Alphabet, is releasing the data for free, with some scientists comparing the potential impact of the work to that of the Human Genome Project, an international effort to map every human gene.

Proteins are long, complex molecules that perform numerous tasks in the body, from building tissue to fighting disease. Their purpose is dictated by their structure, which folds like origami into complex and irregular shapes. Understanding how a protein folds helps explain its function, which in turn helps scientists with a range of tasks — from pursuing fundamental research on how the body works, to designing new medicines and treatments.

Previously, determining the structure of a protein relied on expensive and time-consuming experiments. But last year DeepMind showed it can produce accurate predictions of a protein’s structure using AI software called AlphaFold. Now, the company is releasing hundreds of thousands of predictions made by the program to the public.

“I see this as the culmination of the entire 10-year-plus lifetime of DeepMind,” company CEO and co-founder Demis Hassabis told The Verge. “From the beginning, this is what we set out to do: to make breakthroughs in AI, test that on games like Go and Atari, [and] apply that to real-world problems, to see if we can accelerate scientific breakthroughs and use those to benefit humanity.”

Two examples of protein structures predicted by AlphaFold (in blue) compared with experimental results (in green), with 90.7 GDT accuracy on the left and 93.3 GDT accuracy on the right.
Image: DeepMind

There are currently around 180,000 protein structures available in the public domain, each produced by experimental methods and accessible through the Protein Data Bank. DeepMind is releasing predictions for the structure of some 350,000 proteins across 20 different organisms, including animals like mice and fruit flies, and bacteria like E. coli. (There is some overlap between DeepMind’s data and pre-existing protein structures, but exactly how much is difficult to quantify because of the nature of the models.) Most significantly, the release includes predictions for 98 percent of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. It isn’t the first public dataset of human proteins, but it is the most comprehensive and accurate.

If they want, scientists can download the entire human proteome for themselves, says AlphaFold’s technical lead John Jumper. “There is a HumanProteome.zip effectively, I think it’s about 50 gigabytes in size,” Jumper tells The Verge. “You can put it on a flash drive if you want, though it wouldn’t do you much good without a computer for analysis!”

After launching this first tranche of data, DeepMind plans to keep adding to the store of proteins, which will be maintained by Europe’s flagship life sciences lab, the European Molecular Biology Laboratory (EMBL). By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works,” according to Edith Heard, director general of the EMBL.

The data will be free in perpetuity for both scientific and commercial researchers, says Hassabis. “Anyone can use it for anything,” the DeepMind CEO noted at a press briefing. “They just need to credit the people involved in the citation.”

Understanding a protein’s structure is useful for scientists across a range of fields. The information can help design new medicines, synthesize novel enzymes that break down waste materials, and create crops that are resistant to viruses or extreme weather. Already, DeepMind’s protein predictions are being used for medical research, including studying the workings of SARS-CoV-2, the virus that causes COVID-19.

New data will speed these efforts, but scientists note it will still take a lot of time to turn this information into real-world results. “I don’t think it’s going to be something that changes the way patients are treated within the year, but it will definitely have a huge impact for the scientific community,” Marcelo C. Sousa, a professor at the University of Colorado’s biochemistry department, told The Verge.

Scientists will have to get used to having such information at their fingertips, says DeepMind senior research scientist Kathryn Tunyasuvunakool. “As a biologist, I can confirm we have no playbook for looking at even 20,000 structures, so this [amount of data] is hugely unexpected,” Tunyasuvunakool told The Verge. “To be analyzing hundreds of thousands of structures — it’s crazy.”

Notably, though, DeepMind’s software produces predictions of protein structures rather than experimentally determined models, which means that in some cases further work will be needed to verify the structure. DeepMind says it spent a lot of time building accuracy metrics into its AlphaFold software, which ranks how confident it is for each prediction.
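
In practice, AlphaFold attaches a per-residue confidence score (pLDDT, on a 0–100 scale) to each prediction, which the released model files store in place of the usual B-factor value, so researchers can flag low-confidence regions with a few lines of Biopython. A minimal sketch, with a hypothetical file name:

```python
# Minimal sketch: read per-residue confidence (pLDDT) from an AlphaFold
# prediction file. AlphaFold writes pLDDT into the B-factor field of its
# PDB output; the file name below is hypothetical.
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("prediction", "AF-example-model.pdb")

plddt = [
    residue["CA"].get_bfactor()          # one score per residue (C-alpha atom)
    for residue in structure.get_residues()
    if "CA" in residue                   # skip waters/ligands without a C-alpha
]

confident = sum(score > 70 for score in plddt)
print(f"{confident} of {len(plddt)} residues predicted with pLDDT above 70")
```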

Example protein structures predicted by AlphaFold.
Image: DeepMind

Predictions of protein structures are still hugely useful, though. Determining a protein’s structure through experimental methods is expensive, time-consuming, and relies on a lot of trial and error. That means even a low-confidence prediction can save scientists years of work by pointing them in the right direction for research.

Helen Walden, a professor of structural biology at the University of Glasgow, tells The Verge that DeepMind’s data will “significantly ease” research bottlenecks, but that “the laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain.

Sousa, who has previously used data from AlphaFold in his work, says for scientists the impact will be felt immediately. “In our collaboration we had with DeepMind, we had a dataset with a protein sample we’d had for 10 years, and we’d never got to the point of developing a model that fit,” he says. “DeepMind agreed to provide us with a structure, and they were able to solve the problem in 15 minutes after we’d been sitting on it for 10 years.”

Why protein folding is so difficult

Proteins are constructed from chains of amino acids, which come in 20 different varieties in the human body. Because any individual protein can be composed of hundreds of individual amino acids, each of which can fold and twist in different directions, a molecule’s final structure has an incredibly large number of possible configurations. One estimate is that the typical protein can be folded in 10^300 ways — that’s a 1 followed by 300 zeroes.
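
To see how an estimate on that scale arises, consider a deliberately crude count (the numbers below are illustrative assumptions, not the source of the article’s figure): if each residue in a 300-amino-acid protein could independently sit in about 10 local conformations, the chain would have 10^300 possible shapes.

```python
# Deliberately crude illustration of how estimates like "10^300" arise.
# Assume each residue can sit in k local conformations, independently.
k_conformations = 10    # illustrative assumption
n_residues = 300        # a mid-sized protein
print(k_conformations ** n_residues)   # a 1 followed by 300 zeroes
```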

Because proteins are too small to examine with microscopes, scientists have had to indirectly determine their structure using expensive and complicated methods like nuclear magnetic resonance and X-ray crystallography. The idea of determining the structure of a protein simply by reading a list of its constituent amino acids has been long theorized but difficult to achieve, leading many to describe it as a “grand challenge” of biology.

In recent years, though, computational methods — particularly those using artificial intelligence — have suggested such analysis is possible. With these techniques, AI systems are trained on datasets of known protein structures and use this information to create their own predictions.

DeepMind’s AlphaFold software has significantly increased the accuracy of computational protein-folding, as shown by its performance in the CASP competition.
Image: DeepMind

Many groups have been working on this problem for years, but DeepMind’s deep bench of AI talent and access to computing resources allowed it to accelerate progress dramatically. Last year, the company competed in an international protein-folding competition known as CASP and blew away the competition. Its results were so accurate that computational biologist John Moult, one of CASP’s co-founders, said that “in some sense the problem [of protein folding] is solved.”

DeepMind’s AlphaFold program has been upgraded since last year’s CASP competition and is now 16 times faster. “We can fold an average protein in a matter of minutes, most cases seconds,” says Hassabis. The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.

Liam McGuffin, a professor at Reading University who developed some of the UK’s leading protein-folding software, praised the technical brilliance of AlphaFold, but also noted that the program’s success relied on decades of prior research and public data. “DeepMind has vast resources to keep this database up to date and they are better placed to do this than any single academic group,” McGuffin told The Verge. “I think academics would have got there in the end, but it would have been slower because we’re not as well resourced.”

Why does DeepMind care?

Many scientists The Verge spoke to noted the generosity of DeepMind in releasing this data for free. After all, the lab is owned by Google-parent Alphabet, which has been pouring huge amounts of resources into commercial healthcare projects. DeepMind itself loses a lot of money each year, and there have been numerous reports of tensions between the company and its parent firm over issues like research autonomy and commercial viability.

Hassabis, though, tells The Verge that the company always planned to make this information freely available, and that doing so is a fulfillment of DeepMind’s founding ethos. He stresses that DeepMind’s work is used in lots of places at Google — “almost anything you use, there’s some of our technology that’s part of that under the hood” — but that the company’s primary goal has always been fundamental research.

“The agreement when we got acquired is that we are here primarily to advance the state of AGI and AI technologies and then use that to accelerate scientific breakthroughs,” says Hassabis. “[Alphabet] has plenty of divisions focused on making money,” he adds, noting that DeepMind’s focus on research “brings all sorts of benefits, in terms of prestige and goodwill for the scientific community. There’s many ways value can be attained.”

Hassabis predicts that AlphaFold is a sign of things to come — a project that shows the huge potential of artificial intelligence to handle messy problems like human biology.

“I think we’re at a really exciting moment,” he says. “In the next decade, we, and others in the AI field, are hoping to produce amazing breakthroughs that will genuinely accelerate solutions to the really big problems we have here on Earth.”


AI-driven HR seeks to balance ‘human’ and ‘resources’

Human resources (HR) is an area that is ripe for automation, and in particular, the kind of automation made possible by artificial intelligence (AI). HR, after all, is a cost center at most organizations, which are always looking for ways to keep those costs as low as possible.

And yet, HR is rife with complex, time-consuming processes that, so far, have required the unique logic and intuitive thinking that only humans can provide.

A New World

But all that is changing with the newest generation of AI-driven HR platforms. Globality’s Sonia Mathai notes that across everything from hiring and onboarding to scheduling and benefits management, and all the way to termination and access control, AI is creating a new brand of HR that is leaner, more accurate, and less costly than traditional HR.

For one thing, she says, AI-driven HR is available 24/7, delivering user-friendly services via fully conversational chatbots that provide immediate responses to most questions with no wait-listing. At the same time, AI can provide a more personalized experience due to its access to real-time data. And as seen with AI in other business units, all of this allows human reps to shed the rote, repetitive aspects of the job to focus on more creative, strategic solutions to endemic issues.

HR is such an important function at most companies that AI should not be deployed there lightly or haphazardly, according to Thirdera CIO Jeff Gregory. In a recent interview with VentureBeat, he pointed out that HR acts as the “steward of a company” and maintains the pulse of employee health and development. So it must consistently present the right information even when employees do not ask the right questions. For this reason, AI must learn the ins and outs of HR processes and resource utilization just like any employee, which is why it is best to start small and then work up to more complicated and consequential functions.

Be careful that AI doesn’t get you into legal trouble as well, say Eric Dunleavy, director of litigation and employment services at DCI Consulting Group, and Michelle Duncan, an attorney with Jackson Lewis. It’s one thing to use AI to prescreen applications, evaluate interviews, and mine social media. It’s quite another to have it decide who gets hired or promoted, particularly given the numerous examples of AI showing bias with regard to race, gender, age, and other factors. In the end, it is up to the company to ensure that all employees, whether human or digital, abide by established laws like Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act.

Crunching Numbers

Perhaps the most profound impact AI will have on HR is in analytics, rather than hiring or employee self-service tools. At its heart, HR is a numbers game, according to Erik van Vulpen, founder of the AIHR Academy, and AI is a whiz with numbers. For instance, AI can delve deep into turnover data to divine why employees are leaving and what can be done to correct it. AI can also assess the impact of learning and development programs, or determine which new hires will become top performers. Ultimately, this will shift decision-making in traditional HR shops from a “gut feeling” approach to one that is more data-driven and quantifiable.
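
In practice, that kind of turnover analysis usually starts with a simple supervised model fitted to historical HR records. The sketch below is hypothetical (invented file and column names, scikit-learn for the model) and is only meant to show the shape of the approach:

```python
# Hypothetical sketch of data-driven turnover analysis; the CSV file and
# column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

records = pd.read_csv("hr_records.csv")      # one row per current or former employee
features = records[["tenure_months", "salary", "overtime_hours", "engagement_score"]]
left_company = records["left_company"]       # 1 if the employee quit, else 0

X_train, X_test, y_train, y_test = train_test_split(
    features, left_company, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The fitted coefficients hint at *why* people leave (e.g. heavy overtime,
# low engagement), which is the kind of question van Vulpen describes.
print(dict(zip(features.columns, model.coef_[0].round(3))))
```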

It’s been said that employees are the enterprise’s most valuable resource. In this case, organizations should proceed with caution when deciding how quickly and how thoroughly they want to integrate AI into their HR processes. People who take their jobs seriously might not maintain that attitude if they feel they cannot get a fair shake from an algorithm.

The best way to avoid this is to ensure that AI is trained to deliver positive outcomes, preferably ones that benefit the individual and the organization alike. If this is not possible, then there should be mechanisms in place, either human-driven or artificial, explaining why a given result has emerged and what the employee may do to alter it.

In the end, we all want to be treated fairly no matter who, or what, is making the decisions.


Intuit expanded its user base with AI assistants and virtual human experts

When it comes to filing taxes, some people prefer to handle it all by themselves. Other people prefer to let the experts take care of everything. For the people somewhere in the middle, Intuit has a service called TurboTax Live, which utilizes AI to match customers with experts who will help guide them through the process.

“There’s really room for the idea of ‘do it with me’ and … you need some help and you want some guidance,” Marianna Tessel, Intuit chief technology officer, said during a session at VentureBeat’s Transform 2021 summit.

There is more to the service beyond matching customers to experts based on scheduling. Intuit also factors in hundreds of attributes to find the right expert to address each customer’s unique needs. This application of AI allows the service to match customers with the best expert on hand within minutes via chat or video call.
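
Intuit hasn’t published how the matching works, but attribute-based matching of this kind is commonly framed as a similarity search over customer and expert profiles. A purely illustrative sketch, with invented attributes:

```python
# Purely illustrative sketch of attribute-based customer-expert matching.
# The attributes and scoring are invented; Intuit hasn't published its method.
import numpy as np

# Each vector: [speaks_spanish, self_employment_expertise, investments_expertise, experience]
experts = {
    "expert_a": np.array([1.0, 0.2, 0.9, 0.8]),
    "expert_b": np.array([0.0, 0.9, 0.1, 0.5]),
}
customer = np.array([1.0, 0.1, 0.8, 1.0])   # what this filer seems to need

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(experts, key=lambda name: cosine(experts[name], customer))
print("matched with:", best)  # expert_a, the Spanish-speaking investments specialist
```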

The result? Intuit’s user base has increased by 70% over the past year. Customer service wait times decreased by 15%. Additionally, Intuit anticipates its user base increasing by another 90% this year.

Growth through AI

In response to a question from VentureBeat CEO Matt Marshall on how much of their success can be attributed to AI, Tessel acknowledged that while there was “no question” that there was a boost from people working remotely due to the pandemic, Intuit believes that most of the growth has happened because of the high quality and intrinsic convenience of their service, bolstered by AI.

Intuit invests in AI across three distinct fields:

  1. Knowledge engineering, which translates tax compliance rules into code so computers can help customers understand what information is needed and what the next step is.
  2. Machine learning, used extensively to match customers with experts and to personalize products based on customer data.
  3. Natural language processing, so the AI can listen to customers’ spoken words and read written text, such as the information on a tax document.

Tessel says that using all these fields in combination is how their AI can read a tax document, identify what type of document it is, and figure out what to do with the information on it.

When asked about lessons learned, Tessel emphasized the positive impact of engineering hygiene, asking the right questions when the numbers don’t look great and conducting root cause analyses. She also emphasized that while the migration to the cloud was difficult, not having to worry about managing infrastructure was a big boost for the company.

For Intuit, AI “is a machine and human collaboration, a lot more than we expected,” Tessel said.

Categories
Tech News

The human genome is (almost) complete — here’s what’s left to do

The release of the draft human genome sequence in 2001 was a seismic moment in our understanding of the human genome and paved the way for advances in our understanding of the genomic basis of human biology and disease.

But sections were left unsequenced, and some sequence information was incorrect. Now, two decades later, we have a much more complete version, published as a preprint (which is yet to undergo peer review) by an international consortium of researchers.

Technological limitations meant the original draft human genome sequence covered just the “euchromatic” portion of the genome — the 92% of our genome where most genes are found, and which is most active in making gene products such as RNA and proteins.

The newly updated sequence fills in most of the remaining gaps, providing all 3.055 billion base pairs (“letters”) of our DNA code. This data has been made publicly available, in the hope that other researchers will use it to further their research.

Why did it take 20 years?

Much of the newly sequenced material is the “heterochromatic” part of the genome, which is more “tightly packed” than the euchromatic genome and contains many highly repetitive sequences that are very challenging to read accurately.

These regions were once thought not to contain any important genetic information, but they are now known to contain genes involved in fundamentally important processes such as the formation of organs during embryonic development. Among the 200 million newly sequenced base pairs are an estimated 115 genes predicted to be involved in producing proteins.

Two key factors made the completion of the human genome possible:

1. Choosing a very special cell type

The newly published genome sequence was created using human cells derived from a very rare type of tissue called a complete hydatidiform mole, which occurs when a fertilized egg loses all the genetic material contributed to it by the mother.

Most cells contain two copies of each chromosome, one from each parent, with each parent’s copy contributing a different DNA sequence. A cell from a complete hydatidiform mole has two copies of the father’s chromosomes only, and the genetic sequence of each pair of chromosomes is identical. This makes the full genome sequence much easier to piece together.

2. Advances in sequencing technology

After decades of glacial progress, the Human Genome Project achieved its 2001 breakthrough by pioneering a method called “shotgun sequencing”, which involved breaking the genome into very small fragments of about 200 base pairs, cloning them inside bacteria, deciphering their sequences, and then piecing them back together like a giant jigsaw.

This was the main reason the original draft covered only the euchromatic regions of the genome — only these regions could be reliably sequenced using this method.
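To see why repetitive DNA defeats this approach, consider a toy greedy assembler. This is purely illustrative and far simpler than real assembly software: it repeatedly merges the pair of fragments with the largest overlap, and when fragments are shorter than a repeated stretch of sequence, the overlaps become ambiguous and there is no way to tell which merge is correct.

    # Toy greedy overlap assembler, purely illustrative.
    def overlap(a: str, b: str) -> int:
        """Length of the longest suffix of a that is a prefix of b."""
        for size in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:size]):
                return size
        return 0

    def assemble(fragments: list) -> str:
        frags = list(fragments)
        while len(frags) > 1:
            # Find the pair with the largest overlap and merge it.
            a, b, size = max(((a, b, overlap(a, b))
                              for a in frags for b in frags if a is not b),
                             key=lambda t: t[2])
            frags.remove(a)
            frags.remove(b)
            frags.append(a + b[size:])
        return frags[0]

    reads = ["ATTAGACC", "GACCTGCA", "TGCAAGGT"]
    print(assemble(reads))  # ATTAGACCTGCAAGGT -- unique overlaps, easy to stitch
    # With highly repetitive reads (e.g. many copies of "TATATA"), the overlaps
    # look identical, so short fragments cannot resolve the true order or copy number.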

The latest sequence was deduced using two complementary new DNA-sequencing technologies. One, developed by PacBio, allows longer DNA fragments to be sequenced with very high accuracy. The second, developed by Oxford Nanopore, produces ultra-long stretches of continuous DNA sequence. These technologies yield jigsaw pieces that are thousands or even millions of base pairs long, making the puzzle far easier to assemble.

The new information has the potential to advance our understanding of human biology, including how chromosomes function and maintain their structure. It will also improve our understanding of genetic conditions, such as Down syndrome, that have an underlying chromosomal abnormality.

Is the genome now completely sequenced?

Well, no. An obvious omission is the Y chromosome, because the complete hydatidiform mole cells used to compile this sequence contained two identical copies of the X chromosome. However, this work is underway, and the researchers anticipate their method can accurately sequence the Y chromosome too, despite its highly repetitive sequences.

Even though sequencing the (almost) complete genome of a human cell is an extremely impressive landmark, it is just one of several crucial steps towards fully understanding humans’ genetic diversity.

The next job will be to study the genomes of diverse populations (the complete hydatidiform mole cells were of European origin). Once the new technology has matured enough to routinely sequence many different human genomes from different populations, the work will have a far greater impact on our understanding of human history, biology, and health.

Both care and technological development are needed to ensure this research reflects the full diversity of the human genome, so that discoveries are not limited to specific populations and health disparities are not made worse.

Article by Melissa Southey, Chair of Precision Medicine, Monash University, and Tu Nguyen-Dumont, Senior Research Fellow, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Categories
Tech News

This AI robot mimics human expressions to build trust with users

Scientists at Columbia University have developed a robot that mimics the facial expressions of humans to gain their trust.

Named Eva, the droid uses deep learning to analyze human facial gestures captured by a camera. Cables and motors then pull on different points of the robot’s soft skin to mimic the expressions of nearby people in real-time.

The effect is pretty creepy, but the researchers say that giving androids this ability can facilitate more natural and engaging human-robot interactions.

Eva produces different expressions by combining one or more of six basic emotions: anger, disgust, fear, joy, sadness, and surprise. Per the study paper:

For example, while joy would correspond to one facial expression, the combination of joy and surprise would result in happily surprised, which would correspond to a separate facial expression.

The team trained the robot to generate these expressions by filming it making a series of random faces. Eva’s neural networks then learned to match the humanoid’s gestures to those of human faces captured on its video camera.
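The paper’s training setup is more involved, but the basic idea — learn a mapping from observed facial landmarks to the robot’s motor commands, using data the robot gathered from its own random faces — can be sketched roughly as follows. The layer sizes, landmark count, and motor count below are invented for illustration, and the random tensors stand in for real recorded data:

    # Rough sketch of the "learn to imitate expressions" idea (not the authors' code).
    import torch
    import torch.nn as nn

    N_LANDMARKS = 68 * 2   # x,y coordinates of facial landmarks (assumed)
    N_MOTORS = 12          # number of cable motors (assumed)

    model = nn.Sequential(
        nn.Linear(N_LANDMARKS, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, N_MOTORS), nn.Sigmoid(),  # motor positions normalized to [0, 1]
    )

    # Self-collected training data: random motor commands and the landmark
    # positions the robot observed on its own face (random stand-ins here).
    motor_cmds = torch.rand(5000, N_MOTORS)
    landmarks = torch.rand(5000, N_LANDMARKS)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(landmarks), motor_cmds)  # landmarks -> motor commands
        loss.backward()
        opt.step()

    # At run time: feed landmarks detected on a *human* face to get motor commands
    # that reproduce a similar expression on the robot's own face.
    human_landmarks = torch.rand(1, N_LANDMARKS)
    print(model(human_landmarks))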

Credit: Creative Machines Lab/Columbia Engineering