Categories
AI

How enterprises can get from siloed data to machine learning innovation


Presented by Wizeline

Any enterprise can unlock AI — but only if leaders know how to actually leverage their data, define problems and iterate. In this VB On-Demand event, join industry experts as they dig into how enterprises can turn data into company-wide AI and machine learning solutions.

Watch free on demand!


Companies are foundering in the quest to realize AI objectives, not to mention a return on their investment in AI, and it comes down to the right data and the right expertise, says Paula Martinez, CEO and co-founder of Marvik, a machine learning consultancy.

First, there’s the expense and effort of uncovering good quality data from the mountain that’s always growing, and labeling it properly to actually put analytics and machine learning developments into production. And then, going from proof of concept to a production-ready solution with quality standards that can be launched at scale is another enormous obstacle — and a large part of that is finding a team with the right skills to carry out the task successfully. Finally, some companies face problems just in narrowing down a business problem.

“Launching a proof of concept with cloud services or other developers can be fast and easy to do,” Martinez says. “But when the problems are complex and very specific to an organization or an industry, requiring very specific knowledge and experience, the solution must be fine-tuned for that particular case — and that takes a specialized skill set.”

The marriage of data and expertise

Getting to 95% accuracy might take days or weeks, but getting to 99% accuracy can take months, as well as another degree of expertise and knowledge. Without that expertise, the work of collecting and organizing the right data is usually overlooked. And without the necessary resources allocated to finding and preparing this data for machine learning models, that information stays siloed.

“Working with large organizations, we usually find ourselves working with information that is fragmented in different systems, in different business units of the company,” Martinez says.

Today, many companies are making efforts to consolidate and leverage that data with data lakes and warehouses, but it’s a complex and time-consuming task to reorganize and implement any of these solutions, and figure out how to take advantage of this information.

“It makes sense to invest money and time to improve how we handle data inside companies,” she says. “We need to plan for that, and decide on a road map for a successful AI investment overall.”

How companies can launch AI experiments

There’s no recipe for un-siloing and actually leveraging data — it’s a case-by-case situation. But in terms of methodology, Martinez recommends that clients start any AI project with a minimum viable product (MVP), to determine if this is the path toward adding business value to the company.

Everything from the definition and design of the solution, to the creation of an action plan, to the implementation of the AI solution itself requires planning for the right technology stack, too. That planning must take into account the nature of the data and how the company envisions implementing the solution. For example, will it run in the cloud, on edge devices, on mobile or in other applications? Does it need to run in real time, or at intervals? There are many variables to take into account when defining the technology stack, and many new technologies coming onto the market.

“An advantage we have is that we get to test new tools as they come to the market, and we can find out which ones work and which ones don’t,” she says. “We get access to so many organizations, and we get to see the pros and the cons of every cloud tool and technology.”

Her biggest piece of advice when planning and while implementing is to not be afraid to experiment — which just means forging ahead even if you’re not certain of the results. AI is about to explode, so companies need to start combining technology with business understanding and data right now, to ensure they’re prepared to face a far more competitive future.

“The good news, I think, is that external help is available to speed up the process,” she says. “We often see companies not innovating because they’re afraid, or they feel it’s outside of their reach, even though they have the correct data. Don’t be afraid to innovate. Just make sure you reach out to the right team to do it.”

To learn more about finding the data you need and tapping the right resources to experiment, iterate and innovate in a demanding market, don’t miss this VB On Demand event.


Start streaming now!

Agenda

  • How enterprises are leveraging AI and machine learning, NLP, RPA and more
  • Defining and implementing an enterprise data strategy
  • Breaking down silos, assembling the right teams and increasing collaboration
  • Identifying data and AI efforts across the company
  • The implications of relying on legacy stacks and how to get buy-in for change

Presenters

  • Paula Martinez, CEO and Co-Founder, Marvik
  • Hayde Martinez, Data Technology Program Lead, Wizeline
  • Victor Dey, Tech Editor, VentureBeat (moderator)


Categories
AI

10 years later, deep learning ‘revolution’ rages on, say AI pioneers Hinton, LeCun and Li



Artificial intelligence (AI) pioneer Geoffrey Hinton, one of the trailblazers of the deep learning “revolution” that began a decade ago, says that the rapid progress in AI will continue to accelerate.

In an interview before the 10-year anniversary of key neural network research that led to a major AI breakthrough in 2012, Hinton and other leading AI luminaries fired back at some critics who say deep learning has “hit a wall.” 

“We’re going to see big advances in robotics — dexterous, agile, more compliant robots that do things more efficiently and gently like we do,” Hinton said.

Other AI pathbreakers, including Yann LeCun, head of AI and chief scientist at Meta, and Stanford University professor Fei-Fei Li, agree with Hinton that the results from the groundbreaking 2012 research on the ImageNet database — which built on previous work to unlock significant advancements in computer vision specifically and deep learning overall — pushed deep learning into the mainstream and sparked momentum that will be hard to stop.


In an interview with VentureBeat, LeCun said that obstacles are being cleared at an incredible and accelerating speed. “The progress over just the last four or five years has been astonishing,” he added.

And Li, who in 2006 invented ImageNet, a large-scale dataset of human-annotated photos for developing computer vision algorithms, told VentureBeat that the evolution of deep learning since 2012 has been “a phenomenal revolution that I could not have dreamed of.” 

Success tends to draw critics, however. And there are strong voices who call out the limitations of deep learning and say its success is extremely narrow in scope. They also maintain that the hype neural nets have generated is just that, and that the technology is not close to being the fundamental breakthrough some supporters say it is: the groundwork that will eventually get us to the anticipated “artificial general intelligence” (AGI), where AI is truly human-like in its reasoning power.

Looking back on a booming AI decade

Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, wrote this past March about deep learning “hitting a wall” and says that while there has certainly been progress, “we are fairly stuck on common sense knowledge and reasoning about the physical world.” 

And Emily Bender, professor of computational linguistics at the University of Washington and a regular critic of what she calls the “deep learning bubble,” said she doesn’t think that today’s natural language processing (NLP) and computer vision models add up to “substantial steps” toward “what other people mean by AI and AGI.” 

Regardless, what the critics can’t take away is that huge progress has already been made in some key applications like computer vision and language that have set thousands of companies off on a scramble to harness the power of deep learning, power that has already yielded impressive results in recommendation engines, translation software, chatbots and much more. 

However, there are also serious deep learning debates that can’t be ignored. There are essential issues to be addressed around AI ethics and bias, for example, as well as questions about how AI regulation can protect the public from being discriminated against in areas such as employment, medical care and surveillance. 

In 2022, as we look back on a booming AI decade, VentureBeat wanted to know the following: What lessons can we learn from the past decade of deep learning progress? And what does the future hold for this revolutionary technology that’s changing the world, for better or worse?

Geoffrey Hinton

AI pioneers knew a revolution was coming

Hinton says he always knew the deep learning “revolution” was coming. 

“A bunch of us were convinced this had to be the future [of artificial intelligence],” said Hinton, whose 1986 paper popularized the backpropagation algorithm for training multilayer neural networks. “We managed to show that what we had believed all along was correct.” 

LeCun, who pioneered the use of backpropagation and convolutional neural networks in 1989, agrees. “I had very little doubt that eventually, techniques similar to the ones we had developed in the 80s and 90s” would be adopted, he said. 

What Hinton and LeCun, among others, believed was a contrarian view that deep learning architectures such as multilayered neural networks could be applied to fields such as computer vision, speech recognition, NLP and machine translation to produce results as good or better than those of human experts. Pushing back against critics who often refused to even consider their research, they maintained that algorithmic techniques such as backpropagation and convolutional neural networks were key to jumpstarting AI progress, which had stalled since a series of setbacks in the 1980s and 1990s. 

Meanwhile, Li, who is also codirector of the Stanford Institute for Human-Centered AI and former chief scientist of AI and machine learning at Google, had also been confident that her hypothesis — that with the right algorithms, the ImageNet database held the key to advancing computer vision and deep learning research — was correct. 

“It was a very out-of-the-box way of thinking about machine learning and a high-risk move,” she said, but “we believed scientifically that our hypothesis was right.” 

However, all of these theories, developed over several decades of AI research, didn’t fully prove themselves until the autumn of 2012. That was when a breakthrough occurred that many say sparked a new deep learning revolution.

In October 2012, Alex Krizhevsky and Ilya Sutskever, along with Hinton as their Ph.D. advisor, entered the ImageNet competition, which was founded by Li to evaluate algorithms designed for large-scale object detection and image classification. The trio won with their paper ImageNet Classification with Deep Convolutional Neural Networks, which used the ImageNet database to create a pioneering neural network known as AlexNet. It proved to be far more accurate at classifying different images than anything that had come before. 

The paper, which wowed the AI research community, built on earlier breakthroughs and, thanks to the ImageNet dataset and more powerful GPU hardware, directly led to the next decade’s major AI success stories — everything from Google Photos, Google Translate and Uber to Alexa, DALL-E and AlphaFold.

Since then, investment in AI has grown exponentially: Global startup funding for AI grew from $670 million in 2011 to $36 billion in 2020, and then doubled again to $77 billion in 2021.

The year neural nets went mainstream

After the 2012 ImageNet competition, media outlets quickly picked up on the deep learning trend. A New York Times article the following month, Scientists See Promise in Deep-Learning Programs [subscription required], said: “Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.” What is new, the article continued, “is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just ‘neural nets’ for their resemblance to the neural connections in the brain.” 

AlexNet was not alone in making big deep learning news that year: In June 2012, researchers at Google’s X lab built a neural network made up of 16,000 computer processors with one billion connections that, over time, began to identify “cat-like” features until it could recognize cat videos on YouTube with a high degree of accuracy. At the same time, Jeffrey Dean and Andrew Ng were doing breakthrough work on large-scale image recognition at Google Brain. And at 2012’s IEEE Conference on Computer Vision and Pattern Recognition, researchers Dan Ciregan et al. significantly improved upon the best performance for convolutional neural networks on multiple image databases. 

All told, by 2013, “pretty much all the computer vision research had switched to neural nets,” said Hinton, who since then has divided his time between Google Research and the University of Toronto. It was a nearly total AI change of heart from as recently as 2007, he added, when “it wasn’t appropriate to have two papers on deep learning at a conference.” 

Fei-Fei Li

A decade of deep learning progress

Li said her intimate involvement in the deep learning breakthroughs – she personally announced the ImageNet competition winner at the 2012 conference in Florence, Italy – means it comes as no surprise to her that people recognize the importance of that moment.

“[ImageNet] was a vision started back in 2006 that hardly anybody supported,” said Li. But, she added, it “really paid off in such a historical, momentous way.” 

Since 2012, the progress in deep learning has been both strikingly fast and impressively deep. 

“There are obstacles that are being cleared at an incredible speed,” said LeCun, citing progress in natural language understanding, translation, text generation and image synthesis.

Some areas have even progressed more quickly than expected. For Hinton, that includes using neural networks in machine translation, which saw great strides in 2014. “I thought that would be many more years,” he said. And Li admitted that advances in computer vision  — such as DALL-E — “have moved faster than I thought.” 

Dismissing deep learning critics 

However, not everyone agrees that deep learning progress has been jaw-dropping. In November 2012, Marcus wrote an article for the New Yorker [subscription required] in which he said, “To paraphrase an old parable, Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.”

Today, Marcus says he doesn’t think deep learning has brought AI any closer to the “moon” — the moon being artificial general intelligence, or human-level AI  —  than it was a decade ago.

“Of course there’s been progress, but in order to get to the moon, you would have to solve causal understanding and natural language understanding and reasoning,” he said. “There’s not been a lot of progress on those things.” 

Marcus said he believes that hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning, are the way forward to overcome the limits of neural networks.

For their part, both Hinton and LeCun dismiss Marcus’ criticisms.

“[Deep learning] hasn’t hit a wall – if you look at the progress recently, it’s been amazing,” said Hinton, though he has acknowledged in the past that deep learning is limited in the scope of problems it can solve. 

There are “no walls being hit,” added LeCun. “I think there are obstacles to clear and solutions to those obstacles that are not entirely known,” he said. “But I don’t see progress slowing down at all … progress is accelerating, if anything.” 

Still, Bender isn’t convinced. “To the extent that they’re talking about simply progress towards classifying images according to labels provided in benchmarks like ImageNet, it seems like 2012 had some qualitative breakthroughs,” she told VentureBeat by email. “If they are talking about anything grander than that, it’s all hype.”

Issues of AI bias and ethics loom large

In other ways, Bender also maintains that the field of AI and deep learning has gone too far. “I do think that the ability (compute power + effective algorithms) to process very large datasets into systems that can generate synthetic text and images has led to us getting way out over our skis in several ways,” she said. For example, “we seem to be stuck in a cycle of people ‘discovering’ that models are biased and proposing trying to debias them, despite well-established results that there is no such thing as a fully debiased dataset or model.” 

In addition, she said that she would “like to see the field be held to real standards of accountability, both for empirical claims made actually being tested and for product safety – for that to happen, we will need the public at large to understand what is at stake as well as how to see through AI hype claims and we will need effective regulation.” 

However, LeCun pointed out that “these are complicated, important questions that people tend to simplify,” and a lot of people “have assumptions of ill intent.” Most companies, he maintained, “actually want to do the right thing.” 

In addition, he complained about those not involved in the science, technology and research of AI.

“You have a whole ecosystem of people kind of shooting from the bleachers,” he said, “and basically are just attracting attention.”  

Deep learning debates will certainly continue

As fierce as these debates can seem, Li emphasizes that they are what science is all about. “Science is not the truth, science is a journey to seek the truth,” she said. “It’s the journey to discover and to improve — so the debates, the criticisms, the celebration is all part of it.” 

Yet, some of the debates and criticism strike her as “a bit contrived,” with extremes on either side, whether it’s saying AI is all wrong or that AGI is around the corner. “I think it’s a relatively popularized version of a deeper, much more subtle, more nuanced, more multidimensional scientific debate,” she said. 

Certainly, Li pointed out, there have been disappointments in AI progress over the past decade – and not always about technology. “I think the most disappointing thing is back in 2014 when, together with my former student, I cofounded AI4ALL and started to bring young women, students of color and students from underserved communities into the world of AI,” she said. “We wanted to see a future that is much more diverse in the AI world.”

While it has only been eight years, she insisted the change is still too slow. “I would love to see faster, deeper changes and I don’t see enough effort in helping the pipeline, especially in the middle and high school age group,” she said. “We have already lost so many talented students.” 

Yann LeCun

The future of AI and deep learning 

LeCun admits that some AI challenges to which people have devoted a huge amount of resources have not been solved, such as autonomous driving. 

“I would say that other people underestimated the complexity of it,” he said, adding that he doesn’t put himself in that category. “I knew it was hard and would take a long time,” he claimed. “I disagree with some people who say that we basically have it all figured out … [that] it’s just a matter of making those models bigger.” 

In fact, LeCun recently published a blueprint for creating “autonomous machine intelligence” that also shows how he thinks current approaches to AI will not get us to human-level AI. 

But he also still sees vast potential for the future of deep learning: What he is most personally excited about and actively working on, he says, is getting machines to learn more efficiently — more like animals and humans. 

“The big question for me is what is the underlying principle on which animal learning is based — that’s one reason I’ve been advocating for things like self-supervised learning,” he said. “That progress would allow us to build things that are currently completely out of reach, like intelligent systems that can help us in our daily lives as if they were human assistants, which is something that we’re going to need because we’re all going to wear augmented reality glasses and we’re going to have to interact with them.”

Hinton agrees that there is much more deep learning progress on the way. In addition to advances in robotics, he also believes there will be another breakthrough in the basic computational infrastructure for neural nets, because “currently it’s just digital computing done with accelerators that are very good at doing matrix multipliers.” For backpropagation, he said, analog signals need to be converted to digital. 

“I think we will find alternatives to backpropagation that work in analog hardware,” he said. “I’m pretty convinced that in the longer run we’ll have almost all the computation done in analog.” 

Li says that what is most important for the future of deep learning is communication and education. “[At Stanford HAI], we actually spend an excessive amount of effort to educate business leaders, government, policymakers, media and reporters and journalists and just society at large, and create symposiums, conferences, workshops, issuing policy briefs, industry briefs,” she said.  

With technology that is so new, she added, “I’m personally very concerned that the lack of background knowledge doesn’t help in transmitting a more nuanced and more thoughtful description of what this time is about.” 

How 10 years of deep learning will be remembered

For Hinton, the past decade has offered deep learning success “beyond my wildest dreams.” 

But he emphasizes that while deep learning has made huge gains, it should also be remembered as an era of computer hardware advances. “It’s all on the back of the progress in computer hardware,” he said.

Critics like Marcus say that while some progress has been made with deep learning, “I think it might be seen in hindsight as a bit of a misadventure,” he said. “I think people in 2050 will look at the systems from 2022 and be like, yeah, they were brave, but they didn’t really work.” 

But Li hopes that the last decade will be remembered as the beginning of a “great digital revolution that is making all humans, not just a few humans, or segments of humans, live and work better.” 

As a scientist, she added, “I will never want to think that today’s deep learning is the end of AI exploration.” And societally, she said she wants to see AI as “an incredible technological tool that’s being developed and used in the most human-centered way – it’s imperative that we recognize the profound impact of this tool and we embrace the human-centered framework of thinking and designing and deploying AI.” 

After all, she pointed out: “How we’re going to be remembered depends on what we’re doing now.”  





Categories
AI

Is Intel Labs’ brain-inspired AI approach the future of robot learning? 



Can computer systems develop to the point where they can think creatively, identify people or items they have never seen before, and adjust accordingly — all while working more efficiently, with less power? Intel Labs is betting on it, with a new hardware and software approach using neuromorphic computing, which, according to a recent blog post, “uses new algorithmic approaches that emulate how the human brain interacts with the world to deliver capabilities closer to human cognition.” 

While this may sound futuristic, Intel’s neuromorphic computing research is already fostering interesting use cases, including how to add new voice interaction commands to Mercedes-Benz vehicles; create a robotic hand that delivers medications to patients; or develop chips that recognize hazardous chemicals.

A new approach in the face of capacity limits

Machine learning-driven systems, such as autonomous cars, robotics, drones and other self-sufficient technologies, have relied on ever-smaller, more powerful, energy-efficient processing chips. Traditional semiconductors, however, are now reaching their miniaturization and power-capacity limits, compelling experts to believe that a new approach to semiconductor design is required.

One intriguing option that has piqued tech companies’ curiosity is neuromorphic computing. According to Gartner, traditional computing technologies based on legacy semiconductor architecture will reach a digital wall by 2025. This will force a shift to new paradigms such as neuromorphic computing, which mimics the physics of the human brain and nervous system by utilizing spiking neural networks (SNNs) – that is, networks in which the spikes from individual electronic neurons activate other neurons in a cascading chain.
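To make the spiking idea concrete, here is a minimal, illustrative leaky integrate-and-fire neuron in Python; the constants and inputs are invented for demonstration and are not tied to any particular neuromorphic chip. Real SNNs wire many such neurons together so that spikes cascade through the network.

```python
def simulate_lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential integrates
    incoming current, leaks over time, and emits a spike (1) whenever it
    crosses the threshold, after which it resets."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:
            spikes.append(1)                     # spike fires
            potential = 0.0                      # potential resets
        else:
            spikes.append(0)
    return spikes

# A burst of strong input produces spikes; weak input does not.
print(simulate_lif_neuron([0.3, 0.4, 0.5, 0.0, 0.6, 0.6, 0.1]))
```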


Neuromorphic computing will enable fast vision and motion planning at low power, Yulia Sandamirskaya, a research scientist at Intel Labs in Munich, told VentureBeat via email. “These are the key bottlenecks to enable safe and agile robots, capable to direct their actions at objects in dynamic real-world environments.”

In addition, neuromorphic computing “expands the space of neural network-based algorithms,” she explained. By co-locating memory and compute in one chip, it allows for energy-efficient processing of signals and enables on-chip continual, lifelong learning.

One size does not fit all in AI computing

As the AI space becomes increasingly complex, a one-size-fits-all solution cannot optimally address the unique constraints of each environment across the spectrum of AI computing.

“Neuromorphic computing could offer a compelling alternative to traditional AI accelerators by significantly improving power and data efficiency for more complex AI use cases, spanning data centers to extreme edge applications,” Sandamirskaya said.

Neuromorphic computing is quite similar to how the brain transmits and receives signals from biological neurons that spark or identify movements and sensations in our bodies. However, compared to traditional approaches, where systems orchestrate computation in strict binary terms, neuromorphic chips compute more flexibly and broadly. In addition, by constantly re-mapping neural networks, the SNNs replicate natural learning, allowing the neuromorphic architecture to make decisions in response to learned patterns over time.

These asynchronous, event-based SNNs enable neuromorphic computers to achieve orders of magnitude power and performance advantages over traditional designs. Sandamirskaya explained that neuromorphic computing will be especially advantageous for applications that must operate under power and latency constraints and adapt in real time to unforeseen circumstances. 

A study by Emergen Research predicts that the worldwide neuromorphic processing industry will reach $11.29 billion by 2027.

Intel’s real-time learning solution


One particular challenge is that intelligent robots require object recognition to substantially comprehend working environments. Intel Labs’ new neuromorphic computing approach to neural network-based object learning — in partnership with the Italian Institute of Technology and the Technical University of Munich — is aimed at future applications like robotic assistants interacting with unconstrained environments, including those used in logistics, healthcare, or elderly care. 

In a simulated setup, a robot actively senses objects by moving its eyes, implemented as an event-based camera or dynamic vision sensor. The events collected are used to drive a spiking neural network (SNN) on Intel’s neuromorphic research chip, called Loihi. If an object or view is new to the model, its SNN representation is learned or modified; if the object is known, the network recognizes it and provides feedback to the user. This neuromorphic computing technology allows robots to continuously learn about every nuance in their environment.
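The learn-or-recognize loop described above can be sketched with a conventional stand-in for the on-chip network. In this illustrative snippet (not Intel’s Loihi software stack), normalized feature vectors play the role of spike-encoded object views, stored prototypes play the role of learned SNN representations, and the similarity threshold is an arbitrary choice.

```python
import numpy as np

class ContinualPrototypeLearner:
    """Conventional stand-in for the on-chip learning loop: feature vectors
    play the role of spike-encoded views, prototypes the role of learned
    SNN representations."""
    def __init__(self, threshold=0.8, lr=0.2):
        self.prototypes = {}          # label -> unit-length feature vector
        self.threshold = threshold    # similarity below this means "new to the model"
        self.lr = lr

    def observe(self, label, features):
        features = features / np.linalg.norm(features)
        best_label, best_sim = None, -1.0
        for known, proto in self.prototypes.items():
            sim = float(features @ proto)
            if sim > best_sim:
                best_label, best_sim = known, sim
        if best_sim < self.threshold:
            # New object or view: learn (or refine) its representation.
            proto = self.prototypes.get(label, features)
            proto = (1 - self.lr) * proto + self.lr * features
            self.prototypes[label] = proto / np.linalg.norm(proto)
            return f"learned new view of '{label}'"
        return f"recognized '{best_label}' (similarity {best_sim:.2f})"

learner = ContinualPrototypeLearner()
rng = np.random.default_rng(0)
mug = rng.normal(size=8)
print(learner.observe("mug", mug))                               # first view: learned
print(learner.observe("mug", mug + 0.05 * rng.normal(size=8)))   # similar view: recognized
```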

Intel and its collaborators successfully demonstrated continual interactive learning on the Loihi neuromorphic research chip, measuring roughly 175 times lower energy consumption to learn a new object instance, with similar or better speed and accuracy, compared to conventional methods running on a central processing unit (CPU).

Computation is more energy-efficient

Sandamirskaya said computation is more energy efficient because it uses clockless, asynchronous circuits that naturally exploit sparse, event-driven analysis. 

“Loihi is the most versatile neuromorphic computing platform that can be used to explore many different types of novel bio-inspired neural-network algorithms,” she said, ranging from deep learning to attractor networks, optimization and search algorithms, sparse coding, and symbolic vector architectures.

Loihi’s power efficiency also shows promise for making assistive technologies more valuable and effective in real-world situations. Since Loihi is up to 1,000 times more energy efficient than general-purpose processors, a Loihi-based device could require less frequent charging, making it ideal for use in daily life.

Intel Labs’ work contributes to neuronal network-based machine learning for robots with a small power footprint and interactive learning capability. According to Intel, such research is a crucial step in improving the capabilities of future assistive or manufacturing robots.

“On-chip learning will enable ongoing self-calibration of future robotic systems, which will be soft and thus less rigid and stable, as well as fast learning on the job or in an interactive training session with the user,” Sandamirskaya said. 

Intel Labs: The future is bright for neuromorphic computing

Neuromorphic computing isn’t yet available as a commercially viable technology.

While Sandamirskaya says the neuromorphic computing movement is “gaining steam at an amazing pace,” commercial applications will require improvement of neuromorphic hardware in response to application and algorithmic research — as well as the development of a common cross-platform software framework and deep collaborations across industry, academia and governments. 

Still, she is hopeful about the future of neuromorphic computing.

“We’re incredibly excited to see how neuromorphic computing could offer a compelling alternative to traditional AI accelerators,” she said, “by significantly improving power and data efficiency for more complex AI use cases spanning data center to extreme edge applications.”



Categories
AI

How machine learning helps the New York Times power its paywall



Every organization applying artificial intelligence (AI) and machine learning (ML) to their business is looking to use these powerful technologies to tackle thorny problems. For the New York Times, one of the biggest challenges is striking a balance between meeting its latest target of 15 million digital subscribers by 2027 while also getting more people to read articles online. 

These days, the multimedia giant is digging into that complex cause-and-effect relationship using a causal machine learning model, called the Dynamic Meter, which is all about making its paywall smarter. According to Chris Wiggins, chief data scientist at the New York Times, the company has spent the past three or four years working to understand its user journey scientifically, and the workings of the paywall in particular.

Back in 2011, when the Times began focusing on digital subscriptions, “metered” access was designed so that non-subscribers could read the same fixed number of articles every month before hitting a paywall requiring a subscription. That allowed the company to gain subscribers while also allowing readers to explore a range of offerings before committing to a subscription. 

Machine learning for better decision-making

Now, however, the Dynamic Meter can set personalized meter limits. By powering the model with data-driven user insights, the causal machine learning model can be prescriptive, determining the right number of free articles each user should get so that they become interested enough in the New York Times to subscribe and continue reading.


According to a blog post written by Rohit Supekar, a data scientist on the New York Times’ algorithmic targeting team, at the top of the site’s subscription funnel are unregistered users. At a specific meter limit, they are shown a registration wall that blocks access and asks them to create an account. This allows them access to more free content, and a registration ID allows the company to better understand their activity. Once registered users reach another meter limit, they are served a paywall with a subscription offer. The Dynamic Meter model learns from all of this registered user data and determines the appropriate meter limit to optimize for specific key performance indicators (KPIs). 
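As a rough illustration of that funnel (not the Times’ actual code), the gating decision might look like the sketch below; the registration and paywall limits are invented placeholders, and the per-user limit is the value the Dynamic Meter would learn to set.

```python
def paywall_action(user, articles_read_this_month,
                   registration_limit=3, default_paywall_limit=10):
    """Illustrative metered-access logic: unregistered readers hit a registration
    wall at one limit; registered readers hit the paywall at a personalized limit."""
    if user.get("subscriber"):
        return "serve_article"
    if not user.get("registered"):
        if articles_read_this_month >= registration_limit:
            return "show_registration_wall"
        return "serve_article"
    # Registered user: the Dynamic Meter would set this limit per user.
    meter_limit = user.get("meter_limit", default_paywall_limit)
    if articles_read_this_month >= meter_limit:
        return "show_paywall"
    return "serve_article"

print(paywall_action({"registered": True, "meter_limit": 5},
                     articles_read_this_month=5))   # -> show_paywall
```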

The idea, said Wiggins, is to form a long-term relationship with readers. “It’s a much slower problem in which people engage over the span of weeks or months,” he said. “Then, at some point, you ask them to become a subscriber and see whether or not you did a good job.” 

Causal AI helps understand what would have happened

The most difficult challenge in building the causal machine learning model was in setting up the robust data pipeline to understand the user activity for over 130 million registered users on the New York Times’ site, said Supekar.

The key technical advancement powering the Dynamic Meter is causal AI, a machine learning approach for building models that can predict what would have happened under a different decision.

“We’re really trying to understand the cause and effect,” he explained.

If a particular user was given a different number of free articles, what would be the likelihood that they would subscribe or the likelihood that they would read a certain number of articles? This is a complicated question, he explained, because in reality, they can only observe one of these outcomes. 

“If we give somebody 100 free articles, we have to guess what would have happened if they were given 50 articles,” he said. “These sorts of questions fall in the realm of causal AI.”

Supekar’s blog post explained how the causal machine learning model learns by performing a randomized control trial, in which certain groups of people are given different numbers of free articles and the model learns from the resulting data. As the meter limit for registered users increases, engagement, measured by the average number of page views, gets larger. But it also leads to a reduction in subscription conversions, because fewer users encounter the paywall. The Dynamic Meter has to optimize for and balance the trade-off between conversion and engagement.

“For a specific user who got 100 free articles, we can determine what would have happened if they got 50 because we can compare them with other registered users who were given 50 articles,” said Supekar. This is an example of why causal AI has become popular, because “There are a lot of business decisions, which have a lot of revenue impact in our case, where we would like to understand the relationship between what happened and what would have happened,” he explained. “That’s where causal AI has really picked up steam.” 
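A minimal sketch of that comparison, assuming a hypothetical randomized-trial log with invented numbers: because meter limits are randomly assigned, per-arm averages estimate what would have happened under each limit, which is the conversion-versus-engagement trade-off described above.

```python
import pandas as pd

# Hypothetical randomized-trial log: each registered user was randomly
# assigned a monthly meter limit of 50 or 100 free articles.
trial = pd.DataFrame({
    "meter_limit": [50, 50, 50, 100, 100, 100],
    "page_views":  [12, 30, 22, 45, 60, 38],
    "converted":   [1, 0, 1, 0, 1, 0],
})

# Randomized assignment means per-arm averages estimate the causal effect
# of each limit on engagement (page views) and conversion.
arm_stats = trial.groupby("meter_limit").agg(
    avg_page_views=("page_views", "mean"),
    conversion_rate=("converted", "mean"),
)
print(arm_stats)

# Estimated effect of raising the limit from 50 to 100 free articles:
print(arm_stats.loc[100] - arm_stats.loc[50])
```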

Machine learning requires understanding and ethics

Wiggins added that with so many organizations bringing AI into their businesses for automated decision-making, they really want to understand what is going to happen. 

“It’s different from machine learning in the service of insights, where you do a classification problem once and maybe you study that as a model, but you don’t actually put the ML into production to make decisions for you,” he said. Instead, for a business that wants AI to really make decisions, they want to have an understanding of what’s going on. “You don’t want it to be a blackbox model,” he pointed out.

Supekar added that his team is conscious of algorithmic ethics when it comes to the Dynamic Meter model. “Our exclusive first-party data is only about the engagement people have with the Times content, and we don’t include any demographic or psychographic features,” he said. 

The future of the New York Times paywall

As for the future of the New York Times’ paywall, Supekar said he is excited about exploring the science around the negative aspects of introducing paywalls in the media business.

“We do know if you show paywalls we get a lot of subscribers, but we are also interested in knowing how a paywall affects some readers’ habits and the likelihood they would want to return in the future, even months or years down the line,” he said. “We want to maintain a healthy audience so they can potentially become subscribers, but also serve our product mission to increase readership.” 

The subscription business model has these kinds of inherent challenges, added Wiggins.

“You don’t have those challenges if your business model is about clicks,” he said. “We think about how our design choices now impact whether someone will continue to be a subscriber in three months, or three years. It’s a complex science.” 



Categories
Game

Ubisoft’s Rocksmith+ guitar learning service arrives on PC next week

Ubisoft’s Rocksmith+ subscription service will arrive on September 6th, the publisher has announced. Following a nearly year-long delay, the guitar learning platform will be available exclusively on PC. With 5,000 songs available at launch, including tunes from Alicia Keys, The Clash and Santana, Ubisoft claims Rocksmith+ will feature the “largest catalog of official songs ever offered in a music learning service.” Additionally, the company has pledged to add “millions” more tracks in the future.

If you played a previous Rocksmith release or participated in the closed beta, you can take advantage of loyalty pricing. Subscribe for three months upfront and you’ll receive one month free. You can also prepay for a year of service and Ubisoft will give you an additional three months for free. One-, three- and 12-month subscriptions are priced at $15, $40 and $100 per respective billing period.

You will need a way to connect your electric, acoustic or bass guitar to your computer. Your first option is to download the Rocksmith Plus Connect app on an iOS or Android device. It will use your phone’s built-in microphone to detect your playing. Alternatively, you can use Ubisoft’s Rocksmith Real Tone Cable to connect your instrument. The advantage offered by the latter option is that you can add effects to your playing. Ubisoft notes it’s also possible to use a third-party audio interface, but not every single one will work and the company won’t offer you technical support in that case.

Notably, Ubisoft makes no mention of the previously announced PlayStation and Xbox versions of Rocksmith+. It does note, however, that the mobile release will arrive this fall.



Categories
AI

Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm



Today’s demand for real-time data analytics at the edge marks the dawn of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is, in turn, fueling a massive AI chip market, as companies look to provide ML models at the edge that have less latency and more power efficiency. 

Conventional edge ML platforms consume a lot of power, limiting the operational efficiency of smart devices, which live on the edge. Those devices are also hardware-centric, limiting their computational capability and making them incapable of handling varying AI workloads. They leverage power-inefficient GPU- or CPU-based architectures and are also not optimized for embedded edge applications that have latency requirements. 

Even though industry behemoths like Nvidia and Qualcomm offer a wide range of solutions, they mostly use a combination of GPU- or data center-based architectures and scale them to the embedded edge as opposed to creating a purpose-built solution from scratch. Also, most of these solutions are set up for larger customers, making them extremely expensive for smaller companies.

In essence, the $1 trillion global embedded-edge market is reliant on legacy technology that limits the pace of innovation.


A new machine learning solution for the edge

ML company Sima AI seeks to address these shortcomings with its machine learning-system-on-chip (MLSoC) platform that enables ML deployment and scaling at the edge. The California-based company, founded in 2018, announced today that it has begun shipping the MLSoC platform for customers, with an initial focus of helping solve computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the government sector.

The platform uses a software-hardware codesign approach that emphasizes software capabilities to create edge-ML solutions that consume minimal power and can handle varying ML workloads. 

Built on 16nm technology, the MLSoC’s processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces and system management — all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller or as an ML-offload accelerator for processors, ASICs and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. This software architecture enables Sima AI to support a wide range of frameworks (e.g., TensorFlow, PyTorch, ONNX) and compile more than 120 networks.

The MLSoC promise – a software-first approach

Many ML startups have focused on building pure ML accelerators rather than an SoC that includes a computer-vision processor, application processors, CODECs and external memory interfaces, which together enable the MLSoC to be used as a standalone solution that does not need to connect to a host processor. Other solutions usually lack network flexibility, performance per watt and push-button efficiency – all of which are required to make ML effortless for the embedded edge.

Sima AI’s MLSoC platform differs from other existing solutions as it solves all these areas at the same time with its software-first approach. 

The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network, and sensor with any resolution. “Our ML compiler leverages the open-source Tensor Virtual Machine (TVM) framework as the front-end, and thus supports the industry’s widest range of ML models and ML frameworks for computer vision,” Krishna Rangasayee, CEO and founder of Sima AI, told VentureBeat in an email interview. 
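For reference, the open-source TVM front-end flow the quote refers to looks roughly like the sketch below, which imports an ONNX model into the Relay IR and compiles it for a generic CPU target. The model path and input shape are placeholders, and the llvm target stands in for a vendor back-end; this is not Sima AI’s proprietary toolchain.

```python
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Placeholder model and input shape; any TensorFlow/PyTorch model exported
# to ONNX could be imported the same way.
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Convert the framework graph into TVM's Relay intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a generic CPU target; a vendor back-end would target its own
# accelerator here instead of "llvm".
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

device = tvm.cpu()
runtime = graph_executor.GraphModule(lib["default"](device))
```

Using an open front-end like this is what lets a single compiler stack accept models from many frameworks before handing them to a hardware-specific back-end.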

From a performance point of view, Sima AI’s MLSoC platform claims to deliver 10x better performance in key figures of merit such as FPS/W and latency than alternatives. 

The company’s hardware architecture optimizes data movement and maximizes hardware performance by precisely scheduling all computation and data movement ahead of time, including internal and external memory to minimize wait times. 

Achieving scalability and push-button results

Sima AI offers APIs to generate highly optimized MLSoC code blocks that are automatically scheduled on the heterogeneous compute subsystems. The company has created a suite of specialized and generalized optimization and scheduling algorithms for the back-end compiler that automatically convert the ML network into highly optimized assembly codes that run on the machine learning-accelerator (MLA) block. 

For Rangasayee, the next phase of Sima AI’s growth is focused on revenue and scaling their engineering and business teams globally. As things stand, Sima AI has raised $150 million in funding from top-tier VCs such as Fidelity and Dell Technologies Capital. With the goal of transforming the embedded-edge market, the company has also announced partnerships with key industry players like TSMC, Synopsys, Arm, Allegro, GUC and Arteris. 



Categories
AI

How self-supervised learning may boost medical AI progress



Self-supervised learning has been a fast-rising trend in artificial intelligence (AI) over the past couple of years, as researchers seek to take advantage of large-scale unannotated data to develop better machine learning models. 

In 2020, Yann LeCun, Meta’s chief AI scientist, said supervised learning, which entails training an AI model on a labeled data set, would play a diminishing role as self-supervised learning came into wider use.

“Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode,” he told a virtual session audience during the International Conference on Learning Representations (ICLR) 2020. And in a 2021 Meta blog post, LeCun explained that self-supervised learning “obtains supervisory signals from the data itself, often leveraging the underlying structure in the data.” Because of that, it can make use of a “variety of supervisory signals across co-occurring modalities (e.g., video and audio) and across large datasets — all without relying on labels.”

Growing use of self-supervised learning in medicine

Those advantages have led to notable growth in the use of self-supervised learning in healthcare and medicine, thanks to the vast amount of unstructured data available in that industry – including electronic health records and datasets of medical images, bioelectrical signals, and sequences and structures of genes and proteins. Previously, the development of medical applications of machine learning had required manual annotation of data, often by medical experts.


This was a bottleneck to progress, said Pranav Rajpurkar, assistant professor of biomedical informatics at Harvard Medical School. Rajpurkar leads a research lab focused on deep learning for label-efficient medical image interpretation, clinician-AI collaboration design, and open benchmark curation. 

“We’ve seen a lot of exciting advancements with our stable data sets,” he told VentureBeat.

But a “paradigm shift” was necessary to go from 100 algorithms that do very specific medical tasks to the thousands needed, without going through a laborious, intensive process. That is where self-supervised learning, with its ability to predict any unobserved or hidden part of an input from any observed or unhidden part of an input, has been a game-changer.
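In miniature, that idea of predicting a hidden part of an input from the observed part can be illustrated in a few lines of Python. The toy helper below turns unlabeled sequences (think windows of a bioelectrical signal) into input/target pairs without any human labels; it is a schematic of the principle, not a method from the paper.

```python
import numpy as np

def make_masked_prediction_pairs(sequences, mask_value=0.0, seed=0):
    """Self-supervision in miniature: hide one element of each unlabeled
    sequence and use the hidden value itself as the training target."""
    rng = np.random.default_rng(seed)
    inputs, targets = [], []
    for seq in sequences:
        seq = np.asarray(seq, dtype=float)
        pos = rng.integers(len(seq))
        masked = seq.copy()
        masked[pos] = mask_value        # the "observed" part, with a gap
        inputs.append(masked)
        targets.append(seq[pos])        # the "hidden" part becomes the label
    return np.stack(inputs), np.array(targets)

# Unlabeled signals become a supervised dataset a model can be pretrained on.
X, y = make_masked_prediction_pairs([[0.1, 0.5, 0.9, 0.5], [0.2, 0.6, 1.0, 0.6]])
print(X)
print(y)
```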

Highlighting self-supervised learning

In a recent review paper in Nature Biomedical Engineering, Rajpurkar, along with cardiologist, scientist and author Eric Topol and student researcher Rayan Krishnan, highlighted self-supervised methods and models used in medicine and healthcare, as well as promising applications of self-supervised learning for the development of models leveraging multimodal datasets, and the challenges in collecting unbiased data for their training.

The paper, Rajpurkar said, was aimed at “communicating the opportunities and challenges that underlie this shift in paradigm we’re going to see over the upcoming years in many applications of AI, most certainly including medicine.”

With self-supervised learning, Rajpurkar explained that he, “… can learn about a certain data source, whether that’s a medical image or signal, by using unlabeled data. That allows me a great starting point to do any task I care about within medicine and beyond without actually collecting large stable datasets.”

Big achievements unlocked

In 2019 and 2020, Rajpurkar’s lab saw some of the first big achievements that self-supervised learning was unlocking for interpreting medical images, including chest X-rays. 

“With a few modifications to algorithms that helped us understand natural images, we reduced the number of chest X-rays that had to be seen with a particular disease before we could start to do well at identifying that disease,” he said. 

Rajpurkar and his colleagues applied similar principles to electrocardiograms.

“We showed that with some ways of applying self-supervised learning, in combination with a bit of physiological insights in the algorithm, we were able to leverage a lot of unlabeled data,” he said.

Since then, he has also applied self-supervised learning to lung and heart sound data.

“What’s been very exciting about deep learning as a whole, but especially in the recent year or two, is that we’ve been able to transfer our methods really well across modalities,” Rajpurkar said. 

Self-supervised learning across modalities

For example, another soon-to-be-published paper showed that even with zero annotated examples of diseases on chest X-rays, Rajpurkar’s team was able to detect diseases on chest X-rays and classify them nearly at the level of radiologists across a variety of pathologies.

“We basically learned from images paired with radiology reports that were dictated at the time of their interpretation, and combined these two modalities to create a model that could be applied in a zero-shot way – meaning labeled samples were not necessary to be able to classify different diseases,” he said. 
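Schematically, the zero-shot step embeds the image and a text prompt for each candidate finding into a shared space and picks the closest prompt. The sketch below uses random vectors as stand-ins for real image and report encoders trained contrastively on paired data; it illustrates the mechanism, not the team’s actual model.

```python
import numpy as np

def zero_shot_classify(image_embedding, label_prompts, text_encoder):
    """Score an image against text prompts (e.g., 'chest x-ray showing pneumonia')
    and pick the closest one -- no labeled image examples of the disease needed."""
    scores = {}
    for label, prompt in label_prompts.items():
        text_emb = text_encoder(prompt)
        cosine = image_embedding @ text_emb / (
            np.linalg.norm(image_embedding) * np.linalg.norm(text_emb))
        scores[label] = float(cosine)
    return max(scores, key=scores.get), scores

# Stand-in encoders: a real system would use encoders trained contrastively
# on paired images and radiology reports.
rng = np.random.default_rng(0)
fake_text_encoder = lambda prompt: rng.normal(size=64)
image_emb = rng.normal(size=64)

label, scores = zero_shot_classify(
    image_emb,
    {"pneumonia": "chest x-ray showing pneumonia",
     "no finding": "normal chest x-ray"},
    fake_text_encoder,
)
print(label, scores)
```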

Whether you’re working with proteins, images or text, the process draws on the same set of frameworks, methods and terminology, in a way that is more unified than it was even two or three years ago.

“That’s exciting for the field because it means that a set of advances on a general set of tools helps everybody working across and on these very specific modalities,” he said. 

In medical image interpretation, which has been Rajpurkar’s research focus for many years, this is “absolutely revolutionary,” he said. “Rather than thinking of solving problems one at a time and iterat[ing] this process 1,000 times, I can solve a much larger set of problems all at once.”

Momentum to apply methods

These possibilities have created momentum towards developing and applying self-supervised learning methods in medicine and healthcare, and likely for other industries that also have the ability to collect data at scale, said Rajpurkar, especially those industries that don’t have the sensitivity associated with medical data. 

Going forward, he adds that he is interested in getting closer to solving the full swath of potential tasks that a medical expert does.

“The goal has always been to enable intelligent systems that can increase the accessibility of medicine and healthcare to a large audience,” he said, adding that what excites him is building solutions that don’t just solve one narrow problem: “We’re working toward a world with models that combine different signals so physicians or patients are able to make intelligent decisions about diagnoses and treatments.” 



Categories
AI

Solve the problem of unstructured data with machine learning



We’re in the midst of a data revolution. The volume of digital data created within the next five years will total twice the amount produced so far — and unstructured data will define this new era of digital experiences. 

Unstructured data — information that doesn’t follow conventional models or fit into structured database formats — represents more than 80% of all new enterprise data. To prepare for this shift, companies are finding innovative ways to manage, analyze and maximize the use of data in everything from business analytics to artificial intelligence (AI). But decision-makers are also running into an age-old problem: How do you maintain and improve the quality of massive, unwieldy datasets?

With machine learning (ML), that’s how. Advancements in ML technology now enable organizations to efficiently process unstructured data and improve quality assurance efforts. With a data revolution happening all around us, where does your company fall? Are you saddled with valuable, yet unmanageable datasets — or are you using data to propel your business into the future?

Unstructured data requires more than a copy and paste

There’s no disputing the value of accurate, timely and consistent data for modern enterprises — it’s as vital as cloud computing and digital apps. Despite this reality, however, poor data quality still costs companies an average of $13 million annually.


To navigate data issues, you may apply statistical methods to measure data shapes, which enables your data teams to track variability, weed out outliers, and reel in data drift. Statistics-based controls remain valuable to judge data quality and determine how and when you should turn to datasets before making critical decisions. While effective, this statistical approach is typically reserved for structured datasets, which lend themselves to objective, quantitative measurements.
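For structured columns, those statistics-based controls can be as simple as the sketch below: a z-score rule to flag outliers and a two-sample Kolmogorov-Smirnov test to flag drift between a reference batch and a new batch. The thresholds and data are illustrative.

```python
import numpy as np
from scipy import stats

def flag_outliers(values, z_threshold=3.0):
    """Flag points more than `z_threshold` standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    z = np.abs((values - values.mean()) / values.std(ddof=1))
    return values[z > z_threshold]

def drift_detected(reference, current, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: a small p-value suggests the new
    batch's distribution has drifted away from the reference batch."""
    result = stats.ks_2samp(reference, current)
    return result.pvalue < alpha

rng = np.random.default_rng(0)
reference = rng.normal(100, 10, size=1000)        # last month's metric
current = rng.normal(112, 10, size=1000)          # this month's metric, shifted
print(flag_outliers(np.append(reference, 180)))   # flags the injected 180 (plus any natural 3-sigma points)
print(drift_detected(reference, current))         # -> True
```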

But what about data that doesn’t fit neatly into Microsoft Excel or Google Sheets, including: 

  • Internet of things (IoT): Sensor data, ticker data and log data 
  • Multimedia: Photos, audio and videos
  • Rich media: Geospatial data, satellite imagery, weather data and surveillance data
  • Documents: Word processing documents, spreadsheets, presentations, emails and communications data

When these types of unstructured data are at play, it’s easy for incomplete or inaccurate information to slip into models. When errors go unnoticed, data issues accumulate and wreak havoc on everything from quarterly reports to forecasting projections. A simple copy and paste approach from structured data to unstructured data isn’t enough — and can actually make matters much worse for your business. 

The common adage “garbage in, garbage out” is highly applicable to unstructured datasets. Maybe it’s time to trash your current data approach.

The do’s and don’ts of applying ML to data quality assurance

When considering solutions for unstructured data, ML should be at the top of your list. That’s because ML can analyze massive datasets and quickly find patterns among the clutter — and with the right training, ML models can learn to interpret, organize and classify unstructured data types in any number of forms. 

For example, an ML model can learn to recommend rules for data profiling, cleansing and standardization — making efforts more efficient and precise in industries like healthcare and insurance. Likewise, ML programs can identify and classify text data by topic or sentiment in unstructured feeds, such as those on social media or within email records.
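
As a minimal sketch of the kind of text classification described above, the snippet below runs an off-the-shelf Hugging Face sentiment pipeline over a few hypothetical social-media posts. A production system would pin a specific model and add its own labels, batching and error handling.

```python
from transformers import pipeline  # pip install transformers

# Off-the-shelf sentiment classifier; defaults to a general-purpose English model.
classifier = pipeline("sentiment-analysis")

posts = [  # hypothetical unstructured feed entries
    "Loving the new release, setup took five minutes!",
    "Support still hasn't answered my ticket from last week.",
]

for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```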

As you improve your data quality efforts through ML, keep in mind a few key do’s and don’ts: 

  • Do automate: Manual data operations like data decoupling and correction are tedious and time-consuming. They’re also increasingly outdated tasks given today’s automation capabilities, which can take on mundane, routine operations and free up your data team to focus on more important, productive efforts. Incorporate automation as part of your data pipeline — just make sure you have standardized operating procedures and governance models in place to encourage streamlined and predictable processes around any automated activities. 
  • Don’t ignore human oversight: The intricate nature of data, structured or unstructured, will always require a level of expertise and context that only humans can provide. While ML and other digital solutions certainly aid your data team, don’t rely on technology alone. Instead, empower your team to leverage technology while maintaining regular oversight of individual data processes. This balance helps catch data errors that slip past your technology measures; from there, you can retrain your models based on those discrepancies.
  • Do detect root causes: When anomalies or other data errors pop up, it’s often not a singular event. Ignoring deeper problems with collecting and analyzing data puts your business at risk of pervasive quality issues across your entire data pipeline. Even the best ML programs won’t be able to solve errors generated upstream — again, selective human intervention shores up your overall data processes and prevents major errors.
  • Don’t assume quality: To analyze data quality over the long term, find a way to measure unstructured data qualitatively rather than making assumptions about data shapes. You can create and test “what-if” scenarios to develop your own measurement approach, intended outputs and parameters. Running experiments with your data provides a definitive way to gauge its quality and performance, and you can automate the measurement of your data quality itself (a minimal sketch of such an automated check follows this list). This step ensures quality controls are always on and act as a fundamental feature of your data ingest pipeline, never an afterthought.
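
As a toy illustration of automating the measurement of unstructured data quality, the sketch below computes a few simple health metrics for a batch of free-text documents and fails the batch when thresholds are exceeded. The metrics and limits are hypothetical placeholders for whatever your own “what-if” experiments settle on.

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    empty_rate: float      # share of documents with no usable text
    duplicate_rate: float  # share of exact (normalized) duplicates
    passed: bool

def check_batch(documents: list[str],
                max_empty_rate: float = 0.05,
                max_duplicate_rate: float = 0.10) -> QualityReport:
    """Compute simple quality metrics for a batch of free-text documents."""
    total = len(documents) or 1
    empties = sum(1 for doc in documents if not doc.strip())
    duplicates = total - len({doc.strip().lower() for doc in documents})
    return QualityReport(
        empty_rate=empties / total,
        duplicate_rate=duplicates / total,
        passed=(empties / total <= max_empty_rate
                and duplicates / total <= max_duplicate_rate),
    )

# Hypothetical ingest gate: refuse to load a batch that fails the checks.
batch = ["Claim approved after review.", "", "Claim approved after review."]
print(check_batch(batch))
```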

Your unstructured data is a treasure trove for new opportunities and insights. Yet only 18% of organizations currently take advantage of their unstructured data — and data quality is one of the top factors holding more businesses back. 

As unstructured data becomes more prevalent and more pertinent to everyday business decisions and operations, ML-based quality controls provide much-needed assurance that your data is relevant, accurate, and useful. And when you aren’t hung up on data quality, you can focus on using data to drive your business forward.

Just think about the possibilities that arise when you get your data under control — or better yet, let ML take care of the work for you.

Edgar Honing is senior solutions architect at AHEAD.

Improving AI-assisted conversation with zero-shot learning

Zero-shot learning is a relatively new technique in machine learning (ML) that’s already having a major impact. With this method, ML systems such as neural networks require zero (or very few) “shots,” or labeled examples, to arrive at the “correct” answer. It has primarily gained ground in fields such as image classification, object detection and Natural Language Processing (NLP), addressing the twin challenges in ML of having “too much data” as well as “not enough data.”

But the potential for zero-shot learning extends well beyond static visual or linguistic tasks. Many other use cases are emerging across almost every industry, helping to spur a reimagining of the way humans approach that most human of activities: conversation.

How does zero-shot learning work? 

Zero-shot learning allows models to recognize things they haven’t been introduced to before. Compared with the traditional method of sourcing and labeling huge datasets, which are then used to train supervised models, zero-shot learning can appear little short of magical. The model does not need to be shown what something is in order to learn to recognize it. Whether you’re training it to identify a cat or a carcinoma, the model uses auxiliary information associated with the data, such as textual descriptions of the classes, to interpret and deduce.
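
To make the idea concrete, here is a minimal sketch using the Hugging Face zero-shot-classification pipeline, where the candidate label names act as the auxiliary information the model reasons over. The text and labels are hypothetical, and the pipeline is just one common way to run zero-shot inference.

```python
from transformers import pipeline  # pip install transformers

# An NLI model repurposed for zero-shot classification over arbitrary labels.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The scan shows a small irregular mass in the upper left lobe."
candidate_labels = ["radiology finding", "billing question", "appointment request"]

result = classifier(text, candidate_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:>20}: {score:.2f}")
```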

Integrating zero-shot learning into ML workflows holds many advantages for developers across a wide range of fields. First, it dramatically speeds up ML projects because it cuts down on the most labor-intensive phases: data preparation and the creation of custom supervised models.

Second, once developers have learned the basics of zero-shot learning, what they can achieve expands radically. Increasingly, developers find that once that modest initial knowledge gap is bridged, zero-shot learning techniques let them dream much bigger about what they can accomplish with ML.

Finally, the technique is very useful when models need to tread a fine line between being general enough to understand a broad range of situations and being able to pinpoint meaning or relevant information within that broad context. What’s more, this process can take place in real time.

How zero-shot learning improves conversation intelligence

The ability to pick out the right meaning from a broad spectrum in real time means zero-shot learning is transforming the art of conversation. Specifically, pioneering businesses have found ways to apply zero-shot learning to improve outcomes in high-value interactions, typically in customer support and sales. In these scenarios, humans assisted by AI are coached to respond better to information that the customer is providing, to close deals faster and ultimately deliver higher customer satisfaction. 

Generating sales opportunities

Conversational AI developed using zero-shot learning is already being deployed to recognize upselling opportunities, such as every time a prospect or customer talks about pricing. There are hundreds of different ways the topic could present itself — for example, “I’m tight on budget,” “How much does that cost?,” “I don’t have that budget,” “The price is too high.” Unlike traditional supervised models, in which data scientists need to gather data, train the system, then test, evaluate and benchmark it, the machine can use zero-shot learning to very quickly begin to train itself.

Going beyond simply identifying particular topics, trackers in real-time streams can make recommendations in response to particular situations. During a call with a customer service or sales agent in a financial services company, for example, if a tracker detects a person is in financial difficulty, it can offer an appropriate response to this information (a loan, for instance). 
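
A rough sketch of how such a tracker might work appears below: each utterance in a live transcript is scored against candidate intent labels with a zero-shot classifier, and a hypothetical playbook maps the winning intent to a suggested response. The labels, threshold and recommendations are illustrative assumptions, not a description of any particular vendor’s system.

```python
from typing import Optional

from transformers import pipeline  # pip install transformers

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical mapping from detected intent to a suggested agent response.
PLAYBOOK = {
    "pricing concern": "Offer the discounted annual plan.",
    "financial difficulty": "Mention the hardship loan program.",
    "cancellation risk": "Escalate to a retention specialist.",
}

def recommend(utterance: str, threshold: float = 0.6) -> Optional[str]:
    """Return a suggested response when an intent is detected with enough confidence."""
    result = classifier(utterance, list(PLAYBOOK))
    top_label, top_score = result["labels"][0], result["scores"][0]
    return PLAYBOOK[top_label] if top_score >= threshold else None

print(recommend("Honestly, money is really tight for us this quarter."))
```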

Developing AI-assisted human interactions

Coaching and training are among the most promising applications for zero-shot learning in such conversation-based scenarios. In these cases, the AI works alongside humans, assisting them to better fulfill their role.

There are two main ways this works. After a customer-agent call is over, the system can generate a report summarizing the interaction, rating how it was conducted according to pre-agreed Key Performance Indicators (KPIs) and providing recommendations. The other approach is for the system to respond in real time during the call with targeted recommendations based on context, effectively training agents on the optimal way to handle calls. 
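
As a rough sketch of the post-call report idea, the snippet below summarizes a finished transcript with an off-the-shelf summarization pipeline and computes one trivial KPI, the agent’s share of the talking. The transcript format and KPI are hypothetical; a real system would score against whatever KPIs were pre-agreed.

```python
from transformers import pipeline  # pip install transformers

summarizer = pipeline("summarization")

# Hypothetical transcript: (speaker, utterance) pairs from a finished call.
transcript = [
    ("agent", "Thanks for calling, how can I help today?"),
    ("customer", "My invoice looks higher than last month and I don't know why."),
    ("agent", "I see a plan upgrade on the account; I can walk you through it or revert it."),
    ("customer", "Please revert it, I never asked for an upgrade."),
]

full_text = " ".join(utterance for _, utterance in transcript)
summary = summarizer(full_text, max_length=40, min_length=10, do_sample=False)

agent_words = sum(len(u.split()) for speaker, u in transcript if speaker == "agent")
total_words = sum(len(u.split()) for _, u in transcript)

print("Summary:", summary[0]["summary_text"])
print(f"Agent talk share (KPI): {agent_words / total_words:.0%}")
```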

On-the-job training with zero-shot learning

In this way, zero-shot learning systems address an essential, perennial challenge for sales teams, which until now have relied on laborious, expensive training supplemented by sales scripts meant to coach staff on the best way to identify and respond to customer needs.

Training represents a hefty investment for businesses, especially in high-churn sales environments. Sales staff turnover has recently been running around 10 percentage points higher. Industry studies suggest that even among the biggest companies, sales reps tend to stay in the job only 18 months before churning. It is a worrying trend, especially when you consider that it takes an average of three months to train them initially. Zero-shot inference systems don’t just help with initial training. Arguably their most powerful feature is their ability to provide on-the-job recommendations that help the sales rep — and the company — succeed.

Beyond training to career coaching

This ability to improve output and performance through AI-assisted coaching does not just benefit companies, it can be tailored to accelerate an employee’s personal career trajectory. Consider a scenario in which a zero-shot learning system works with an employee to help them attain their personal 360 targets. A goal like “convert X% more leads” becomes more attainable when assisted by an ML model primed to spot and develop opportunities the employee alone might miss. 

Turning conversations into insights

Zero-shot learning is a relatively new technique, and we are only just beginning to understand its full breadth of applications. Because the approach is particularly suited to situations where models need to pinpoint meaning within a broad context, conversational intelligence is rapidly emerging as a leading application area. For data scientists, developers and time-sensitive, cost-conscious business leaders alike, the appeal is that conversational intelligence systems require no specialist model training, accelerating processes and cutting lead times.

Although conversational intelligence applications are thriving alongside the better-known image detection and Natural Language Processing (NLP) use cases, the reality is that we have barely scratched the surface of what zero-shot learning can achieve.

For example, my company is working with clients to radically improve conversational AI’s capabilities, not only for coaching and training but also for improving productivity by compressing and contextualizing business information, strengthening compliance, clamping down on harassment and profanity, and increasing engagement in virtual events, all through the use of zero-shot learning models.

Toshish Jawale is CTO of Symbl.ai.

Ray, the machine learning tech behind OpenAI, levels up to Ray 2.0

Over the last two years, one of the most common ways for organizations to scale and run increasingly large and complex artificial intelligence (AI) workloads has been with the open-source Ray framework, used by companies from OpenAI to Shopify and Instacart. 

Ray enables machine learning (ML) models to scale across hardware resources and can also be used to support MLOps workflows across different ML tools. Ray 1.0 came out in September 2020 and has had a series of iterations over the last two years.
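
For readers new to Ray, here is a minimal sketch of its core programming model: a plain Python function becomes a set of parallel tasks via the @ray.remote decorator. The function and numbers are placeholders, unrelated to any workload described in this article.

```python
import ray  # pip install ray

ray.init()  # start (or connect to) a local Ray cluster

@ray.remote
def score_batch(batch_id: int) -> float:
    """Stand-in for an expensive step, such as scoring one shard of data."""
    return batch_id * 0.1

# Launch the tasks in parallel across available workers, then gather results.
futures = [score_batch.remote(i) for i in range(8)]
print(ray.get(futures))
```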

Today, the next major milestone arrived with the general availability of Ray 2.0, announced at the Ray Summit in San Francisco. Ray 2.0 extends the technology with the new Ray AI Runtime (AIR), intended to serve as a runtime layer for executing ML services. Ray 2.0 also includes capabilities designed to help simplify building and managing AI workloads.

Alongside the new release, Anyscale, which is the lead commercial backer of Ray, announced a new enterprise platform for running Ray. Anyscale also announced a new $99 million round of funding co-led by existing investors Addition and Intel Capital with participation from Foundation Capital. 

“Ray started as a small project at UC Berkeley and it has grown far beyond what we imagined at the outset,” said Robert Nishihara, cofounder and CEO at Anyscale, during his keynote at the Ray Summit.

OpenAI’s GPT-3 was trained on Ray

It’s hard to overstate the foundational importance and reach of Ray in the AI space today.

Nishihara went through a laundry list of big names in the IT industry that are using Ray during his keynote. Among the companies he mentioned is ecommerce platform vendor Shopify, which uses Ray to help scale its ML platform that makes use of TensorFlow and PyTorch. Grocery delivery service Instacart is another Ray user, benefitting from the technology to help train thousands of ML models. Nishihara noted that Amazon is also a Ray user across multiple types of workloads.

Ray is also a foundational element for OpenAI, which is one of the leading AI innovators, and is the group behind the GPT-3 Large Language Model and DALL-E image generation technology.

“We’re using Ray to train our largest models,” Greg Brockman, CTO and cofounder of OpenAI, said at the Ray Summit. “So, it has been very helpful for us in terms of just being able to scale up to a pretty unprecedented scale.”

Brockman commented that he sees Ray as a developer-friendly tool and the fact that it is a third-party tool that OpenAI doesn’t have to maintain is helpful, too.

“When something goes wrong, we can complain on GitHub and get an engineer to go work on it, so it reduces some of the burden of building and maintaining infrastructure,” Brockman said.

More machine learning goodness comes built into Ray 2.0

For Ray 2.0, a primary goal for Nishihara was to make it simpler for more users to benefit from the technology, while providing performance optimizations that help users big and small.

Nishihara commented that a common pain point in AI is that organizations can get tied into a particular framework for a certain workload, but realize over time that they also want to use other frameworks. For example, an organization might start out using only TensorFlow, but realize it also wants to use PyTorch and Hugging Face in the same ML workload. With the Ray AI Runtime (AIR) in Ray 2.0, it will now be easier for users to unify ML workloads across multiple tools.
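
As a very rough sketch of what training through AIR can look like in Ray 2.0 (exact APIs may differ between releases), the example below wraps a trivial PyTorch loop in a TorchTrainer and scales it across two workers. The model, data and worker count are placeholders.

```python
import torch
from ray.air import session
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker():
    # Trivial stand-in for a real training loop on real data.
    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for epoch in range(3):
        features, targets = torch.randn(32, 4), torch.randn(32, 1)
        loss = torch.nn.functional.mse_loss(model(features), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        session.report({"epoch": epoch, "loss": loss.item()})

# AIR handles distributing the same loop across the requested workers.
trainer = TorchTrainer(
    train_loop_per_worker=train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2),
)
result = trainer.fit()
print(result.metrics)
```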

Model deployment is another common pain point that Ray 2.0 is looking to help solve, with the Ray Serve deployment graph capability.

“It’s one thing to deploy a handful of machine learning models. It’s another thing entirely to deploy several hundred machine learning models, especially when those models may depend on each other and have different dependencies,” Nishihara said. “As part of Ray 2.0, we’re announcing Ray Serve deployment graphs, which solve this problem and provide a simple Python interface for scalable model composition.”
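
Deployment graphs build on Ray Serve’s basic Python interface. As a point of reference only, a minimal single-deployment sketch (not a full composition graph) might look like the following; the model logic and route are illustrative.

```python
from ray import serve
from starlette.requests import Request

@serve.deployment
class SentimentModel:
    async def __call__(self, request: Request) -> dict:
        text = (await request.json())["text"]
        # Placeholder scoring logic; a real deployment would call an ML model.
        return {"positive": "great" in text.lower()}

# Bind the deployment and run it; Serve exposes it over HTTP on the cluster.
serve.run(SentimentModel.bind())
```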

Looking forward, Nishihara’s goal with Ray is to help enable a broader use of AI by making it easier to develop and manage ML workloads.

“We’d like to get to the point where any developer or any organization can succeed with AI and get value from AI,” Nishihara said.
