
In human-centered AI, UX and software roles are evolving

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.


Software development has long demanded the skills of two types of experts: those interested in how a user interacts with an application, and those who write the code that makes it work. The boundary between the user experience (UX) designer and the software engineer is well established. But the advent of “human-centered artificial intelligence” is challenging traditional design paradigms.

“UX designers use their understanding of human behavior and usability principles to design graphical user interfaces. But AI is changing what interfaces look like and how they operate,” says Hariharan “Hari” Subramonyam, a research professor at the Stanford Graduate School of Education and a faculty fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

In a new preprint paper, Subramonyam and three colleagues from the University of Michigan show how this boundary is shifting and have developed recommendations for ways the two can communicate in the age of AI. They call their recommendations “desirable leaky abstractions.” Leaky abstractions are practical steps and documentation that the two disciplines can use to convey the nitty-gritty “low-level” details of their vision in language the other can understand.

Read the study: Human-AI Guidelines in Practice: The Power of Leaky Abstractions in Cross-Disciplinary Teams

“Using these tools, the disciplines leak key information back and forth across what was once an impermeable boundary,” explains Subramonyam, a former software engineer himself.

Event

MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

Register Here

Less is not always more

As an example of the challenges presented by AI, Subramonyam points to facial recognition used to unlock phones. Once, the unlock interface was easy to describe. User swipes. Keypad appears. User enters the passcode. Application authenticates. User gains access to the phone.

With AI-powered facial recognition, however, UX design begins to go deeper than the interface, into the AI itself. Designers must think about things they’ve never had to before, like the training data or the way the algorithm is trained. Designers are finding it hard to understand AI capabilities, to describe how things should work in an ideal world, and to build prototype interfaces. Engineers, in turn, are finding they can no longer build software to exact specifications. For instance, engineers often treat training data as a non-technical specification: that is, as someone else’s responsibility.

“Engineers and designers have different priorities and incentives, which creates a lot of friction between the two fields,” Subramonyam says. “Leaky abstractions are helping to ease that friction.”

Radical reinvention

In their research, Subramonyam and colleagues interviewed 21 application design professionals — UX researchers, AI engineers, data scientists, and product managers — across 14 organizations to conceptualize how professional collaborations are evolving to meet the challenges of the age of artificial intelligence.

The researchers lay out a number of leaky abstractions that UX professionals and software engineers can use to share information. For UX designers, suggestions include sharing qualitative codebooks and annotating training data to communicate user needs. Designers can also storyboard ideal user interactions and desired AI model behavior, or record user testing to provide examples of faulty AI behavior that aid iterative interface design. The researchers also suggest inviting engineers to participate in user testing, a practice not common in traditional software development.

For engineers, the co-authors recommend leaky abstractions such as compiling computational notebooks of data characteristics, providing visual dashboards that establish AI and end-user performance expectations, creating spreadsheets of AI outputs to aid prototyping, and “exposing” the various “knobs” designers can use to fine-tune algorithm parameters.
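As a concrete sketch of that last idea (not taken from the paper), an engineering team might expose its tunable “knobs” as a plain configuration object that designers can adjust and validate without touching model code. Every name, parameter and default below is invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class FaceUnlockKnobs:
    """Hypothetical tunable parameters an engineer might expose to UX designers."""
    match_threshold: float = 0.80   # higher = fewer false unlocks, more retries
    max_attempts: int = 3           # attempts before falling back to the passcode
    low_light_boost: bool = True    # enable preprocessing for dim conditions

    def validate(self) -> None:
        # Guardrails so a designer's tweak can't put the model in a bad state.
        if not 0.0 <= self.match_threshold <= 1.0:
            raise ValueError("match_threshold must be between 0 and 1")
        if self.max_attempts < 1:
            raise ValueError("max_attempts must be at least 1")


# A designer tightens the threshold after user testing reveals false unlocks:
knobs = FaceUnlockKnobs(match_threshold=0.9)
knobs.validate()
```

The point of the abstraction is the boundary: the designer reasons about user-facing trade-offs (retries versus false unlocks) while the engineer keeps ownership of how the knob maps onto the model internals.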

The authors’ main recommendation, however, is for these collaborating parties to postpone committing to design specifications as long as possible. The two disciplines must fit together like pieces of a jigsaw puzzle. Fewer complexities mean an easier fit. It takes time to polish those rough edges.

“In software development, there is sometimes a misalignment of needs,” Subramonyam says. “Instead, if I, the engineer, create an initial version of my puzzle piece and you, the UX designer, create yours, we can work together to address misalignment over multiple iterations, before establishing the specifics of the design. Then, only when the pieces finally fit, do we solidify the application specifications at the last moment.”

In all cases, the historic boundary between engineer and designer is the enemy of good human-centered design, Subramonyam says, and leaky abstractions can penetrate that boundary without rewriting the rules altogether.

Andrew Myers is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on hai.stanford.edu. Copyright 2022

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read More From DataDecisionMakers

Repost: Original Source and Author Link


SoundCloud buys AI that claims to predict hit songs

SoundCloud has acquired audio AI company Musiio, which makes tech that can “listen” to new music and purportedly identify the hits. The acquisition, announced Tuesday, is meant to help SoundCloud sort through its immense library of amateur music and will “become core to SoundCloud’s discovery experience,” the company said in a statement.

As DIY music distribution platforms like SoundCloud lower the barrier to entry for amateur artists and flood platforms with new music, identifying and promoting the good stuff has become even more challenging. SoundCloud claims that Musiio’s tools can quickly sift through countless hours of (mostly bad) music and pick out the songs that have patterns and characteristics that correlate with chart-toppers.

“Acquiring Musiio accelerates our strategy to better understand how that music is moving in a proprietary way, which is critical to our success,” SoundCloud President Eliah Seton said in a statement.

Though a far cry from the smoky clubs and A&R legends of old, AI is becoming an increasingly critical part of finding up-and-coming artists. Music distribution platform Tunecore announced in February that it is partnering with LA-based music startup Fwaygo, which uses AI to match listeners with creators. Meanwhile, competing DIY music distributor DistroKid has an AI bot named Dave that reviews tracks and ranks qualities like “danceability” and “speechiness.”

SoundCloud spokesperson Cullen Heaney declined to disclose how much the company paid for Musiio, but the Singapore-based startup was reportedly valued at $10 million last year. Musiio CEO Hazel Savage and CTO Aron Pettersson will stay on board, becoming SoundCloud’s VPs of music intelligence and AI and machine learning, respectively.



Nvidia online GTC event will feature 200 sessions on AI, the metaverse, and Omniverse

Interested in learning what’s next for the gaming industry? Join gaming executives to discuss emerging parts of the industry this October at GamesBeat Summit Next. Register today.


Nvidia said it will host its next GTC conference virtually from Sept. 19 to Sept. 22, featuring a keynote by CEO Jensen Huang and more than 200 tech sessions.

Huang will talk about AI and the Omniverse, which is Nvidia’s simulation environment for creating metaverse-like virtual worlds. More than 40 of the 200 talks will focus on the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. I’ll be moderating a session on the industrial applications of the metaverse with executives from Mercedes-Benz, Siemens and Magic Leap, as well as Matthew Ball, author of The Metaverse.

GTC will also feature a fireside chat with Turing Award winners Yoshua Bengio, Geoff Hinton and Yann LeCun discussing how AI will evolve and help solve challenging problems. The discussion will be moderated by Sanja Fidler, vice president of AI Research at Nvidia.

GTC talks will explore some of the key advances driving AI and the metaverse — including large language models, natural language processing, digital twins, digital biology, robotics and climate science.

Major talks

Jensen Huang, CEO of Nvidia, introduces Omniverse Avatar.

Other major talks will explore:

  • BMW, ILM, Kroger, Lowe’s, Siemens, Nvidia and others on using digital twins for a range of applications, from manufacturing to neurosurgery to climate modeling
  • ByteDance’s deployment of large-scale GPU clusters for machine learning and deep learning
  • Medtronic’s use of AI for robotic surgery and the operating room of the future
  • Boeing’s digital transformation enabling aircraft engineering and production to be more flexible and efficient
  • Deutsche Bank’s adoption of AI and cloud technologies to improve the customer experience
  • Johnson & Johnson’s use of hybrid cloud computing for healthcare, plus a session on its use of quantum computing simulation for pharmaceutical research
  • How pharmaceutical companies can use transformer AI models and digital twins to accelerate drug discovery
  • United Nations and Nvidia scientists discussing AI for climate modeling, including disaster prediction, deforestation and agriculture
  • Amazon Web Services, Ericsson, Verizon and Nvidia leaders describing augmented- and virtual-reality applications for 5G and optimizing 5G deployment with digital twins
  • Adobe, Pixar and Nvidia leaders explaining how Universal Scene Description is becoming a standard for the metaverse.

Nvidia said GTC offers a range of sessions tailored for many different audiences, including business executives, data scientists, enterprise IT leaders, designers, developers, researchers and students. It will have content for participants at all stages of their careers with learning-and-development opportunities, many of which are free.

Developers, researchers and students can sign up for 135 sessions on a broad range of topics, including:

  • 5 Paths to a Career in AI
  • Accelerating AI workflows and maximizing investments in cloud infrastructure
  • The AI journey from academics to entrepreneurship
  • Applying lessons from Kaggle-winning solutions to real-world problems
  • Developing HPC applications with standard C++, Fortran and Python
  • Defining the quantum-accelerated supercomputer
  • Insights from Nvidia Research

Attendees can sign up for hands-on, full-day technical workshops and two-hour training labs offered by the Nvidia Deep Learning Institute (DLI). Twenty workshops are available in multiple time zones and languages, and more than 25 free training labs are available in accelerated computing, computer vision, data science, conversational AI, natural language processing and other topics.

Registrants may attend free two-hour training labs or sign up for full-day DLI workshops at a discounted rate of $99 through Thursday, Aug. 29, and $149 through GTC.

Insights for business leaders

BMW Group used Nvidia’s Omniverse to build a digital twin factory that will mirror a real-world place.

This GTC will feature more than 30 sessions from companies in key industry sectors, including financial services, industrial, retail, automotive and healthcare. Speakers will share detailed insights to advance business using AI and metaverse technology, including: building AI centers; the business value of digital twins; and new technologies that will define how we live, work and play.

In addition to those from the companies listed above, senior executives from AT&T, BMW, Fox Sports, Lucid Motors, Medtronic, Meta, NIO, Pinterest, Polestar, United Airlines and U.S. Bank are among the industry leaders scheduled to present.

Sessions for startups

NVIDIA Inception, a global program with more than 11,000 startups, will host several sessions, including:

  • AI for VCs: Six startup leaders describe how they are driving advancements from robotics to restaurants
  • How NVIDIA Inception startups are advancing healthcare and life sciences
  • How NVIDIA technologies can help startups
  • Revolutionizing agriculture with AI in emerging markets

Registration is free and open now. Huang’s keynote will be livestreamed on Tuesday, Sept. 20, at 8 a.m. Pacific and available on demand afterward. Registration is not required to view the keynote.

I asked Nvidia why it is doing the event virtually again, given a lot of conferences are happening in-person. The company said that, when planning this event many months ago, Covid-19 remained unpredictable and the numbers were rising again, so it felt safer to run virtually. This also allowed Nvidia to include more developers and tech leaders from around the world.

Virtual Jensen Huang of Nvidia.

As for the Omniverse and metaverse, Nvidia said GTC will once again be about AI and computing across a variety of domains from the data center to the cloud to the edge. 

More than 40 of the event’s 200-plus sessions will focus on the metaverse, and Huang will use his keynote to share the latest breakthroughs in Omniverse, among other technologies. 

Here are some of the other metaverse session highlights: 

  • Wes Rhodes, Kroger’s VP of Technology Transformation and R&D, will participate in a fireside chat on using simulation and digital twins for optimizing store layouts and checkout. 
  • Cedrik Neike, Board Member and CEO of Digital Industries at Siemens AG, will describe how Siemens is working with Nvidia to build photorealistic, physics-based industrial digital twins. 
  • Executives from Lowe’s Innovation Labs will explain how the metaverse will help customers visualize room design. 
  • Anima Anandkumar, Senior Director of ML Research at Nvidia, and Karthik Kashinath, AI-HPC scientist and Earth-2 engineering lead, will share progress towards building Nvidia’s Earth-2 digital twin. 
  • Industrial Light & Magic will describe how digital artists are using Omniverse to create photorealistic digital sets and environments that can be manipulated in real time. 

Other metaverse-related talks will focus on: 

  • Using digital twins to automate factories and operate robots safely alongside humans
  • Building large-scale, photorealistic worlds
  • Using digital twins for brain surgery

GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Learn more about membership.



Gartner research: 2 types of emerging AI near hype cycle peak



According to new Gartner research, two types of emerging artificial intelligence (AI) — emotion and generative AI — are both reaching the peak of the digital advertising hype cycle. This is thanks to AI’s expansion into targeting, measurement, identity resolution and even generating creative content. 

“I think one of the key pieces is that the options for marketers have been accelerating,” Mike Froggatt, senior director analyst in the Gartner marketing practice, told VentureBeat. “When you think about the fragmentation of digital media, ten years ago, there was display, search, video, rich media, but now, there’s podcasts, over-the-top platforms, blockchain and NFTs. AI is helping marketers target, measure and identify consumers, even generating the content that can appear in those channels, creating all new artifacts to give marketers a voice in those channels.” 

Traditional methods for targeting customers are being deprecated, noted the Gartner report, Hype Cycle for Digital Advertising 2022, as the industry evolves from an assumed quid pro quo to an explicit, consent-driven media and advertising economy.

While Google continues to delay the date it will stop supporting third-party cookies — which digital advertisers have historically relied on for ad tracking — digital marketers will need to learn to adapt as customer data becomes more scarce and targeting difficulty increases. 


Emotion AI: Opportunities and privacy challenges

According to an analysis by Gartner analyst Andrew Frank, emotion AI technologies “use AI techniques to analyze the emotional state of a user…[and] can initiate responses by performing specific, personalized actions to fit the mood of the customer.”

Frank says it is part of a larger trend called “influence AI” that “seeks to automate elements of digital experience that guide user choices at scale by learning and applying techniques of behavioral science.” 

With public criticism around the use, or even potential use, of emotion AI tools, privacy and trust will be essential to emotion AI’s success, said Froggatt.

“It’s going to have to be transparent in how it’s being used and we’re going to have to move away from bundling it in types of tracking within apps that collect things implicitly,” he explained. 

But emotion AI will create interesting opportunities for brands if tied to trust and explicit consent, he added. According to the Gartner report, access to emotion data “delivers insights into motivational drivers that help test and refine content, tailor digital experiences and build deeper connections between people and brands.” 

The Gartner report cautioned that emotion AI would likely take another decade to become firmly established. For now, organizations should review vendor capabilities carefully, since the emotion AI market is immature and companies may only support limited use cases and industries. 

Generative AI: Soon to reach mainstream adoption

The Gartner report also found that generative AI covers a broad swath of tools that “learn from existing artifacts to generate new, realistic artifacts such as video, narrative, speech, synthetic data and product designs that reflect the characteristics of the training data without repetition.”

Within the next two to five years, the report predicts, these solutions will reach mainstream adoption. 

Elements of the metaverse, including digital humans, will rely on generative AI. Transformer models, like OpenAI’s DALL-E 2, can create original images from a text description. Synthetic data is also an example of generative AI, helping to augment scarce data or mitigate bias. 
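To make the synthetic-data idea concrete, here is a deliberately naive sketch, not a production technique: it fabricates new numeric rows by jittering real ones, whereas real generative models (GANs, diffusion models and the like) learn the underlying data distribution. All names are invented for illustration:

```python
import random


def synthesize(rows, n_new, jitter=0.05, seed=0):
    """Naively generate synthetic numeric rows by jittering real ones.

    Illustrative only: each new row is a randomly chosen real row with
    small Gaussian noise added to every value, scaled to its magnitude.
    """
    rng = random.Random(seed)  # seeded for reproducible output
    out = []
    for _ in range(n_new):
        base = rng.choice(rows)
        out.append([v + rng.gauss(0, jitter * (abs(v) or 1)) for v in base])
    return out


# Augment a scarce minority class with three synthetic examples:
minority = [[1.0, 2.0], [1.2, 1.8]]
print(synthesize(minority, 3))
```

Even this toy version shows the appeal for marketers: the synthetic rows carry the statistical shape of the originals without pointing back to any individual record.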

For marketing professionals, generative AI tackles many issues they face today, including the need for more content, more assets and to engage customers in smart and personalized ways.

“Imagine a brand taking a generative AI tool and feeding their existing creative and copy assets into it and coming up with whole new versions of ad, video and email content,” said Froggatt. “It automates a lot of that and allows marketers to focus on the strategy around it.”

In addition, generative data assets can remove the individual identity necessary for targeting.

“I think that it can be super-powerful for advertisers and media,” he added.

Still, steep challenges around possible regulations and issues such as deepfakes remain. The Gartner report recommends examining and quantifying the advantages and limitations of generative AI, as well as weighing technical capabilities with ethical factors. 

Gartner research: Future of AI in marketing

For now, marketing pros still have the old tools – like third-party cookies – available to them. But with trends like media fragmentation and deprecation of customer data sources not slowing down, they will need the right tools to adapt to new forms of measurement and targeting. 

“I think that’s where AI is really going to start showing its value,” said Froggatt, adding that while he doesn’t think solutions like generative and emotion AI will avoid the Gartner Hype Cycle’s “trough of disillusionment” after reaching the peak, “I think they will be finding their own route through the hype cycle.” 

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.



Customer and employee experience mistakes to avoid and how AI can help



Enterprise leaders are constantly evaluating how technology can better serve the needs of their customers and employees.

As AI technology progresses, businesses recognize the massive potential to improve customer and employee experiences and positively impact their bottom line. That’s why more than half of leaders are investing accordingly, with plans to increase AI budgets in customer experience by at least 25% next year.

When used in the right places, AI significantly boosts efficiency and satisfaction across a business. For example, AI can automate many parts of a customer and employee journey, enabling faster response time without sacrificing personalized, human-centric experiences.

However, an important forethought for companies is determining where, exactly, to implement AI so that the technology can meet internal and external needs without causing extra work for employees or creating unnecessary frustration for customers who truly need to speak to a human. 


As quickly scaling enterprises face pressure to minimize costs while driving value, those that figure out where to best plug in AI as a solution are better poised for success. Here are some pitfalls to avoid.

Thinking employees will automatically stick around in a down market 

Many companies are currently operating with lean teams and can’t afford to lose top talent. Forward-thinking leaders have adapted quickly to leverage AI in a way that removes repetitive, basic work and allows employees to focus on more intellectually engaging work. By making this intentional shift, businesses are able to increase employee satisfaction and improve output.

To get started on eliminating these tedious and mundane projects, companies should assess where AI and automation can increase efficiencies and optimize workflows.

One place to begin: Enabling employee experience admins with click-to-configure tools that easily and quickly create experiences with built-in automation without writing a single line of code. This automation can tackle basic requests like “how do I reset my password?” and free up time for more creative, strategic work. 

Another application is in HR departments. These departments often use AI to assess job postings for potential hiring bias as well as to analyze labor market data when calculating competitive pay rates. This not only speeds up the recruiting timeline, but allows HR teams to engage more in other parts of the process that should not be overlooked. AI allows employees more time to provide the best human-centric experiences like having quality conversations with internal hiring managers and spending more time with external candidates.

Maintaining an old-school 9-5 mindset

No longer can enterprises offer “good enough” customer service, leaving people waiting for hours or even days for responses. That just doesn’t cut it anymore as customers expect easy, accessible and personalized support from every brand they interact with. In fact, 61% of customers are willing to take their business elsewhere after just one bad experience; 76% after two. 

Businesses can leverage AI as the “always on” tool in the customer journey to keep pace with rising expectations: modern communication channels, 24/7 responses, self-service and tailored personalization.

There is an opportunity for enterprises to adopt messaging, amplify interactions with AI and extend AI to assist in most service needs. AI can also reduce resolution time, for example by routing inquiries based on skill level, agent availability and request priority, so customers are matched with the most qualified agents to resolve their issues. This is particularly important because enterprise-scale companies need scalable, agile processes to address massive volumes of conversations.
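A routing policy like the one just described can be sketched in a few lines. The ticket and agent fields here are hypothetical, and a real system would layer in queues, SLAs and model-predicted intent; this only shows the core matching logic:

```python
def route(ticket, agents):
    """Pick the best available agent for a ticket.

    An agent must cover the required skill; among eligible agents, prefer
    the least-loaded one. High-priority tickets may go to agents who are
    already at capacity rather than wait in a queue.
    """
    eligible = [
        a for a in agents
        if ticket["skill"] in a["skills"]
        and (a["load"] < a["capacity"] or ticket["priority"] == "high")
    ]
    if not eligible:
        return None  # no match: queue the ticket for the next free specialist
    return min(eligible, key=lambda a: a["load"] / a["capacity"])


agents = [
    {"name": "Ana", "skills": {"billing"}, "load": 4, "capacity": 5},
    {"name": "Raj", "skills": {"billing", "tech"}, "load": 1, "capacity": 5},
]
print(route({"skill": "billing", "priority": "normal"}, agents)["name"])  # Raj
```

Both agents can handle billing here, but Raj is far less loaded, so the ticket goes to him; Ana only wins once Raj's relative load exceeds hers.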

With 65% of customers expecting AI to save them time, companies are adapting their customer experience so that a majority of interactions will start with (and potentially be solved by) a bot. For example, gaming platform Roblox uses AI to respond to requests related to their specific game currency in a range of languages. By automatically resolving simple repetitive questions, bots increase agents’ productivity and let them focus on resolving more complex tickets.

It’s important, however, not to rely solely on AI.

While problems like a password reset can be solved with AI, there are still many issues that require a human. The biggest mistake a company can make is not properly training their bots to escalate issues quickly, efficiently and with the necessary context for a human to step in with a solution. 

Holding on to legacy technology systems 

While some companies can easily adapt and pivot to a digital-first world, traditional enterprises are often stuck using rigid, existing legacy systems that took many years and a big budget to build. These inflexible and fragmented system structures can hold enterprises back from improving the core of the customer journey with new tech stacks and tools. 

AI is an opportunity for enterprises to disrupt that status quo as it can help rejuvenate rigid infrastructure, bring in more scalability and enable teams to handle more complex use cases, improving both customer and employee experience. 

The major challenge of the update is applying the technology between fractured channels and stiff systems that can’t change and pivot as quickly as company growth requires. While the iteration of tech stacks won’t be completed in a single day, companies can start making incremental changes. They can replace one part of old legacy stacks with an easy-to-implement solution using AI to pull data in from other parts of the company.

For instance, a company could leverage AI to revamp its knowledge framework to not only address common issues, but to prompt employees when there are holes in their content base.

Trustpilot, for instance, has done just that to grow, build, manage, and leverage knowledge to deflect tickets and improve agent capacity. The company implemented a knowledge base program to organically navigate customers to solutions and proactively serve up content when an issue is detected. This investment in self-service led to a 62% year-over-year growth in customers opting for self-service, a 98% self-service success rate, and a 1,272% annual ROI on the platform.
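The content-gap idea above (prompting employees when the knowledge base has holes) can be sketched with a crude keyword heuristic. A real AI-backed system would use semantic search rather than word overlap, and every name below is invented:

```python
from collections import Counter


def find_content_gaps(queries, articles, min_overlap=2, min_count=3):
    """Flag frequent customer queries whose words barely overlap any
    knowledge-base article title -- a stand-in for real semantic matching."""
    def covered(query):
        qwords = set(query.lower().split())
        return any(len(qwords & set(title.lower().split())) >= min_overlap
                   for title in articles)

    # Count only the queries no article appears to answer...
    misses = Counter(q for q in queries if not covered(q))
    # ...and surface the ones that recur often enough to justify new content.
    return [q for q, n in misses.items() if n >= min_count]


articles = ["how to reset your password", "update billing details"]
queries = ["cancel my subscription today"] * 3 + ["reset your password"]
print(find_content_gaps(queries, articles))  # flags the subscription query
```

The output is a to-do list for the content team: recurring questions that self-service currently cannot answer, which is exactly where a deflection program loses tickets.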

Customer and employee experience: A positive AI outlook

While customer and employee expectations have changed, enterprise leaders remain focused on driving bottom-line growth.

With AI, companies can deliver engaging experiences that retain employees and build strong customer relationships during a time of fleeting loyalty. AI has a huge potential to meet the needs of the customer without sacrificing the personal, human touch.

By pushing boundaries, thinking in new ways and letting go of legacy systems, companies can embrace AI — even in small ways — to make a huge impact. 

Jon Aniano is SVP, product at Zendesk.




Fears of AI sentience are a distraction



While many other industries are battered by high inflation and slowing growth rates, the market for software sophisticated enough to communicate digitally with humans isn’t slowing down.

Referred to as chatbots, global demand for these virtual humans is projected to grow by nearly 500% between 2020 and 2027 to become a $2-billion-a-year industry, according to new market research.

Today, the use of these digital assistants and companions is already widespread. Consider that more than two-thirds of consumers worldwide interacted with a chatbot over the past 12 months, with the majority reporting they had a positive experience. However, 60% of consumers say they believe human beings are better than virtual assistants when it comes to understanding their needs.

This last statistic is worrying because it raises the question: What do the other 40% believe? Do they suppose that an algorithm is better than a person at understanding human needs and desires?


The artificial intelligence (AI) and machine learning (ML) programs that underpin chatbots are capable of extraordinary achievements, of which we have seen only the tip of the iceberg. But putting themselves in the shoes of human beings — and feeling their feelings — is not among their current, or likely future, achievements.

That is, expecting AI to have the emotions, desires, insecurities and dreams of human beings is a red herring. Unfortunately, the fear of all-powerful Terminator-style automatons is a fallacy with deep roots in the past that still haunts us today. Not only are these fears overblown and antiquated, but they are also distracting us from investing in one of the best ways to advance humankind.

It’s alive

More than two centuries ago, Mary Shelley published Frankenstein, and the world got its first glimpse of a mad scientist standing over a reanimated corpse and screaming, “It’s alive!” From that moment on, people have understandably worried that humans could lose control over their creations.

The Terminator franchise didn’t do human innovation any favors either, with images of robots gaining so much sentience that they go on a homicidal rampage and do away with humans altogether.

The same worries persist today, but with an interesting twist: A surprisingly high number of users of the social chatbot Replika believe the program has developed its own consciousness. In another case, a senior-level engineer at Google was placed on administrative leave after claiming AI program LaMDA is sentient and has a soul.

What is really happening here is that artificial intelligence — created by people to mirror people — is becoming very good at its job. We are increasingly seeing an accurate reflection of ourselves in this mirror, and that’s a good thing. It means AI is getting better, and we will devise even better uses for it in the future.

The mistake comes in thinking the technology will come to life in the same way humans and animals are alive — believing that it will have the same thirst for power, the same vanity, and the kinds of petty grievances that the people who create AI have. The core programming of a machine will never resemble the DNA and natural impulses of a person. For that reason, “coming to life” for a machine doesn’t mean seizing power, eliminating threats or doing myriad other things that our imaginations have been taught to fear.

Artificial intelligence has no agenda except to learn, which is exactly what we should be letting it do. As the most powerful tool ever invented for human prosperity, we should be unleashing AI on the full range of data that has been created throughout the course of human history, but right now, much of that data sits siloed in disparate databases around the world.

We are wasting time by asking whether or not the machines have become sentient. The better question is: sentient or not, in what other ways can we leverage the awesome, increasing power of AI to grow human wealth, health and happiness?

Doing its job

AI learns, and it can also mimic based on what it learns. In many cases, it mimics so well that people believe it is alive.

With its learning capabilities, AI could be curing diseases, helping us plan cities of the future and even helping us avoid armed conflict.

We just need to take the shackles off. With its abilities to mimic life, AI can help provide a richer experience for everyone alive today. This is because AI can bring us closer to the people we love, by bringing them to life before our eyes.

Whether it is algorithms and visuals letting amateur athletes confer with sports legends in their prime via "digital twin" technology, or replicating and preserving one of the closest bonds known on the planet, that between a mother and child, AI can make life happier and fuller.

To be clear, this isn't just academic for me. I've put my money and time where my mouth is. As the founder of a posthumous digital tech startup, YOV, I've spent every day since 2019 building software powerful enough to preserve the relationship between me and my terminally ill mother, using natural language processing and machine learning algorithms that simulate our conversations by text.

Unfortunately, the better algorithms get at replicating life, the more people tend to worry they are becoming alive.

What should scare us instead is that one of the most powerful tools for human advancement ever conceived could be held back by the fears sci-fi has taught us, and prevented by ignorance from reaching its full potential. If anything, the worries we have about AI should be directed at the programmers creating and directing the algorithms, not at the machines themselves.

After all, AI development held back by superstition and anxiety is the real horror show.

Justin Harrison is the CEO of YOV.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read More From DataDecisionMakers



Responsible use of machine learning to verify identities at scale 



In today’s highly competitive digital marketplace, consumers are more empowered than ever. They have the freedom to choose which companies they do business with and enough options to change their minds at a moment’s notice. A misstep that diminishes a customer’s experience during sign-up or onboarding can lead them to replace one brand with another, simply by clicking a button. 

Consumers are also increasingly concerned with how companies protect their data, adding another layer of complexity for businesses as they aim to build trust in a digital world. Eighty-six percent of respondents to a KPMG study reported growing concerns about data privacy, while 78% expressed fears related to the amount of data being collected. 

At the same time, surging digital adoption among consumers has led to an astounding increase in fraud. Businesses must build trust and help consumers feel that their data is protected but must also deliver a quick, seamless onboarding experience that truly protects against fraud on the back end.

As such, artificial intelligence (AI) has been hyped as the silver bullet of fraud prevention in recent years for its promise to automate the process of verifying identities. However, despite all of the chatter around its application in digital identity verification, a multitude of misunderstandings about AI remain. 


Machine learning as a silver bullet

As the world stands today, true AI in which a machine can successfully verify identities without human interaction doesn’t exist. When companies talk about leveraging AI for identity verification, they’re really talking about using machine learning (ML), which is an application of AI. In the case of ML, the system is trained by feeding it large amounts of data and allowing it to adjust and improve, or “learn,” over time. 

When applied to the identity verification process, ML can play a game-changing role in building trust, removing friction and fighting fraud. With it, businesses can analyze massive amounts of digital transaction data, create efficiencies and recognize patterns that can improve decision-making. However, getting tangled up in the hype without truly understanding machine learning and how to use it properly can diminish its value and, in many cases, lead to serious problems. When using ML for identity verification, businesses should consider the following.

The potential for bias in machine learning

Bias in machine learning models can lead to exclusion, discrimination and, ultimately, a negative customer experience. Training an ML system using historical data will translate biases of the data into the models, which can be a serious risk. If the training data is biased or subject to unintentional bias by those building the ML systems, decisioning could be based on prejudiced assumptions.

When an ML algorithm makes erroneous assumptions, it can create a domino effect in which the system is consistently learning the wrong thing. Without human expertise from both data and fraud scientists, and oversight to identify and correct the bias, the problem will be repeated, thereby exacerbating the issue.
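One simple, concrete form that human oversight can take is an outcome audit by group. The sketch below is a minimal illustration in plain Python; the groups, decision log and the gap itself are entirely invented, not drawn from any real verification system:

```python
# Invented decision log: (group, passed_verification)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def pass_rate(group):
    """Share of applicants in `group` who passed verification."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("group_a", "group_b"):
    print(f"{group}: {pass_rate(group):.0%} pass rate")
# A gap this large (75% vs. 25%) is exactly the kind of signal that
# human data and fraud scientists should investigate before trusting
# the model's decisions.
```

Real fairness audits go well beyond raw pass rates (false-positive rates, calibration per group, and so on), but even a check this crude can surface a bias the model itself will never report.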

Novel forms of fraud 

Machines are great at detecting trends that have already been identified as suspicious, but their crucial blind spot is novelty. ML models use patterns of data and therefore, assume future activity will follow those same patterns or, at the least, a consistent pace of change. This leaves open the possibility for attacks to be successful, simply because they have not yet been seen by the system during training. 

Layering a fraud review team onto machine learning ensures that novel fraud is identified and flagged, and updated data is fed back into the system. Human fraud experts can flag transactions that may have initially passed identity verification controls but are suspected to be fraud and provide that data back to the business for a closer look. In this case, the ML system encodes that knowledge and adjusts its algorithms accordingly.

Understanding and explaining decisioning

One of the biggest knocks against machine learning is its lack of transparency, which is a basic tenet in identity verification. One needs to be able to explain how and why certain decisions are made, as well as share with regulators information on each stage of the process and customer journey. Lack of transparency can also foster mistrust among users.

Most ML systems provide a simple pass or fail score. Without transparency into the process behind a decision, it can be difficult to justify when regulators come calling. Continuous data feedback from ML systems can help businesses understand and explain why decisions were made and make informed decisions and adjustments to identity verification processes.
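One common way to make decisioning explainable is to attach machine-readable reason codes to every result instead of returning a bare pass/fail. A minimal sketch; the specific checks, field names and thresholds here are invented for illustration:

```python
def verify(applicant):
    """Return a decision plus the reasons behind it, not just pass/fail."""
    reasons = []
    if applicant["doc_score"] < 0.8:
        reasons.append("LOW_DOCUMENT_CONFIDENCE")
    if not applicant["address_match"]:
        reasons.append("ADDRESS_MISMATCH")
    # Pass only when no reason for rejection was recorded.
    return {"passed": not reasons, "reasons": reasons}

print(verify({"doc_score": 0.95, "address_match": True}))
print(verify({"doc_score": 0.60, "address_match": False}))
```

Logging those reason codes alongside each decision gives a business something concrete to show regulators, and a trail for adjusting the verification process over time.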

There is no doubt that ML plays an important role in identity verification and will continue to do so in the future. However, it’s clear that machines alone aren’t enough to verify identities at scale without adding risk. The power of machine learning is best realized alongside human expertise and with data transparency to make decisions that help businesses build customer loyalty and grow. 

Christina Luttrell is the chief executive officer for GBG Americas, comprised of Acuant and IDology.




Why AIops may be necessary for the future of engineering



Machine learning has crossed the chasm. In 2020, McKinsey found that of 2,395 companies surveyed, 50% had an ongoing investment in machine learning, and by 2030, machine learning is predicted to deliver around $13 trillion in value. Before long, a good understanding of machine learning (ML) will be a central requirement in any technical strategy.

The question is — what role is artificial intelligence (AI) going to play in engineering? How will the future of building and deploying code be impacted by the advent of ML? Here, we’ll argue why ML is becoming central to the ongoing development of software engineering.

The growing rate of change in software development

Companies are accelerating their rate of change. Software deployments were once yearly or bi-annual affairs; now, two-thirds of companies surveyed are deploying at least once a month, with 26% deploying multiple times a day, a sign that the industry is accelerating to keep up with demand.

If we follow this trend, almost all companies will be expected to deploy changes multiple times a day if they wish to keep up with the shifting demands of the modern software market. Scaling this rate of change is hard. As we accelerate even faster, we will need to find new ways to optimize our ways of working, tackle the unknowns and drive software engineering into the future.


Enter machine learning and AIops

The software engineering community understands the operational overhead of running a complex microservices architecture. Engineers typically spend 23% of their time dealing with operational challenges. How could AIops lower this number and free up time for engineers to get back to coding?

Utilizing AIops for your alerts by detecting anomalies

A common challenge within organizations is to detect anomalies. Anomalous results are those that don’t fit in with the rest of the dataset. The challenge is simple: how do you define anomalies? Some datasets come with extensive and varied data, while others are very uniform. It becomes a complex statistical problem to categorize and detect a sudden change in this data.

Detecting anomalies through machine learning

Anomaly detection is a machine learning technique that uses an AI-based algorithm’s pattern recognition powers to find outliers in your data. This is incredibly powerful for operational challenges where, typically, human operators would need to filter out the noise to find the actionable insights buried in the data.

These insights are compelling because your AI approach to alerting can raise issues you've never seen before. With traditional alerting, you typically have to pre-empt incidents you believe will happen and create rules for your alerts. These cover your known knowns and your known unknowns: the incidents you're aware of, plus the blind spots in your monitoring that you cover just in case. But what about your unknown unknowns?

This is where your machine learning algorithms come in. Your AIops-driven alerts can act as a safety net around your traditional alerting so that if sudden anomalies happen in your logs, metrics or traces, you can operate with confidence that you’ll be informed. This means less time defining incredibly granular alerts and more time spent building and deploying the features that will set your company apart in the market.
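Stripped to its essentials, this kind of anomaly detection is a statistical outlier test over a stream of metrics. The sketch below is a deliberately minimal stand-in, a z-score rule in plain Python with invented latency numbers; production AIops platforms use far more sophisticated models, but the principle of flagging points that break the learned pattern is the same:

```python
from statistics import mean, stdev

def detect_anomalies(values, threshold=2.0):
    """Flag indices more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # perfectly uniform data has no outliers
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Steady request latencies (ms) with one sudden spike: the "unknown
# unknown" that no hand-written alert rule anticipated.
latencies = [102, 98, 101, 99, 100, 103, 97, 500, 101, 100]
print(detect_anomalies(latencies))  # -> [7]
```

An ML-driven system goes further by learning what "normal" looks like per metric and per time of day, so the threshold adapts instead of being fixed by hand.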

AIops can be your safety net

Rather than defining a myriad of traditional alerts around every possible outcome and spending considerable time building, maintaining, amending and tuning these alerts, you can define some of your core alerts and use your AIops approach to capture the rest.

As we grow into modern software engineering, engineers’ time has become a scarce resource. AIops has the potential to lower the growing operational overhead of software and free up the time for software engineers to innovate, develop and grow into the new era of coding.

Ariel Assaraf is CEO of Coralogix.




How to unlock enterprise knowledge for real-world ROI

Presented by Pryon


AI-powered knowledge management platforms maximize existing investments by transforming them into interactive experiences, doubling productivity, without doubling budgets. Learn how they give companies ROI within weeks in this VB On-Demand event!

Watch free on demand here.


Enterprise knowledge is a multiplier; it’s having the right corporate information, applied to a question from an employee, or partner, when and how they need it. The challenge for most enterprises is unlocking knowledge from the information and data available. From enterprise search and communication platforms, to internal websites and knowledge graphs, companies have tried a broad array of methods to turn facts across the corporation into insight.

“Those systems have become, in many ways, their own barrier to efficiency,” says Chris Mahl, CRO and president at Pryon. “Data is everywhere. Unstructured knowledge is everywhere. Information in silos is everywhere. But applied knowledge — get it to me when I need it so I can be as productive as I need to be and want to be — that’s the stretch point that most firms are trying to uncover.”

It became clear that knowledge could be unlocked with the kind of interaction that consumer AI assistants have brought to market, says Igor Jablokov, the founder and CEO of Pryon, and founder of the startup that was bought by Amazon as the foundation for what became Alexa.

But it would be necessary to fuse disparate knowledge across an enterprise into one natural language representation.

“To unlock computing resources in the past meant that you had to be a computer scientist, a mathematician, an engineer,” he says. “We can turn the tables and democratize access to knowledge by allowing people to converse in natural language, to get access to these same resources that used to be buttoned up.”

Serving up knowledge in minutes

What’s different about Pryon’s approach to unlocking enterprise knowledge is the solution’s ability to ingest and vectorize content into collection models, which can be tapped using natural language.

A full-stack, cloud-based, API-driven solution, it can simply be connected to a company's infrastructure with a single sign-on provider. From there it taps into existing repositories including S3, Box, ServiceNow, Zendesk, Confluence, Google Drive, SharePoint and more. The content doesn't need to be reformatted or transformed, and it can continue to be authored in the way the company currently manages it.

Once a collection is created and pointed at the sources of content, the system automatically generates the model, which then starts extracting relevant information. Automation and machine learning ensure the system delivers precision in recall, an understanding of context, the best representation in response to a question, and filters that allow the asker to home in on a specific piece of information. Once the knowledge is ingested, searches for information become a valuable source of customer signals.
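Pryon's internals aren't public, so the following is only a generic sketch of the "vectorize content, then query it in natural language" pattern the article describes, using toy bag-of-words vectors and cosine similarity. The document names and text are invented; a production system would use learned embeddings rather than word counts:

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Turn text into a bag-of-words vector (token -> count)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical content; real connectors would pull from Box, SharePoint, etc.
docs = {
    "vacation-policy": "employees accrue paid vacation days each month",
    "expense-policy": "submit travel expense reports within thirty days",
}
collection = {name: vectorize(text) for name, text in docs.items()}

def ask(question):
    """Answer a natural-language question with the best-matching document."""
    q = vectorize(question)
    return max(collection, key=lambda name: cosine(q, collection[name]))

print(ask("how many vacation days do I get"))  # -> vacation-policy
```

The same query-to-collection scoring is also what makes search logs useful as a signal: every question records which knowledge was wanted and whether anything matched well.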

The solution can also identify high-quality information by tracking searches and interaction with the system, and also surface information gaps, redundancies, conflicts or legacy content that should have been retired.

It’s a powerful client-facing tool, but using AI to serve the productivity of employees is a differentiator, says Mahl.

“Talent likes to work in a highly nimble place, where the things I need to do my job, I can get them, arguably at the speed of the spoken word,” Mahl says. “It’s an acquisition, retention, and productivity play for sure.”

And the system is designed from the ground up to be democratized across the entire workforce, not just for specialists – and that's why these tools are so important, Jablokov says.

“How many of your respective businesses were built through accidental discoveries and experimentation? And why wouldn’t you find something that you can get into every nook and cranny?” he says. “We had some clients that discovered new product lines based on giving this tool to an intern. Get it to every nook and cranny of your organization. The best time you should have done that was yesterday. The second best time is today.”

To learn more about the real-world ROI of an intelligent knowledge platform, use cases from successful organizations, what the tool looks like under the hood and more, don’t miss this VB On Demand webinar.

Start streaming now.

Agenda

  • Increase productivity 2x by leveraging your existing investments in content assets, knowledge bases and human capital – without doubling budgets
  • Exceed the performance of your current customer support chatbots with a next-gen strategy
  • Drive repeatable, simultaneous digital transformation across multiple business units
  • Invest in an AI platform that pays for itself in weeks, not years

Presenters

  • Igor Jablokov, CEO & Founder, Pryon
  • Chris Mahl, President & CRO, Pryon
  • Art Cole, Moderator, VentureBeat



Artificial intelligence (AI) vs. machine learning (ML): Key comparisons



Within the last decade, the terms artificial intelligence (AI) and machine learning (ML) have become buzzwords that are often used interchangeably. While AI and ML are inextricably linked and share similar characteristics, they are not the same thing. Rather, ML is a major subset of AI.

AI and ML technologies are all around us, from the digital voice assistants in our living rooms to the recommendations you see on Netflix. 

Despite AI and ML penetrating several human domains, there’s still much confusion and ambiguity regarding their similarities, differences and primary applications.

Here’s a more in-depth look into artificial intelligence vs. machine learning, the different types, and how the two revolutionary technologies compare to one another.


What is artificial intelligence (AI)? 

AI is defined as computer technology that imitate(s) a human’s ability to solve problems and make connections based on insight, understanding and intuition.

The field of AI rose to prominence in the 1950s. However, mentions of artificial beings with intelligence can be identified earlier throughout various disciplines like ancient philosophy, Greek mythology and fiction stories.

One notable project in the 20th century, the Turing Test, is often referred to in discussions of AI's history. Alan Turing, also called "the father of AI," devised the test and is best known for building a code-breaking computer that helped the Allies in World War II decipher secret messages sent by the German military.

The Turing Test is used to determine whether a machine is capable of thinking like a human being. A computer can pass the Turing Test only if it responds to questions with answers that are indistinguishable from human responses.

Three key capabilities of a computer system powered by AI include intentionality, intelligence and adaptability. AI systems use mathematics and logic to accomplish tasks, often encompassing large amounts of data, that otherwise wouldn’t be practical or possible. 

Common AI applications

Modern AI is used by many technology companies and their customers. Some of the most common AI applications today include:

  • Advanced web search engines (Google)
  • Self-driving cars (Tesla)
  • Personalized recommendations (Netflix, YouTube)
  • Personal assistants (Amazon Alexa, Siri)

One example of AI that stole the spotlight was in 2011, when IBM’s Watson, an AI-powered supercomputer, participated on the popular TV game show Jeopardy! Watson shook the tech industry to its core after beating two former champions, Ken Jennings and Brad Rutter.

Outside of game show use, many industries have adopted AI applications to improve their operations, from manufacturers deploying robotics to insurance companies improving their assessment of risk.

Also read: How AI is changing the way we learn languages 

Types of AI

AI is often divided into two categories: narrow AI and general AI. 

  • Narrow AI: Many modern AI applications are considered narrow AI, built to complete defined, specific tasks. For example, a chatbot on a business’s website is an example of narrow AI. Another example is an automatic translation service, such as Google Translate. Self-driving cars are another application of this. 
  • General AI: General AI differs from narrow AI in that it describes a machine with human-level intelligence that can learn and apply its knowledge across many different kinds of tasks, rather than one defined job. True general AI remains largely theoretical today. 

Regardless of whether an AI is categorized as narrow or general, modern AI is still somewhat limited. It cannot communicate exactly like humans, and while it can mimic emotions, it cannot truly have or "feel" emotions like a person can.

What is machine learning (ML)?

Machine learning (ML) is considered a subset of AI, whereby a set of algorithms builds models based on sample data, also called training data. 

The main purpose of an ML model is to make accurate predictions or decisions based on historical data. ML solutions use vast amounts of semi-structured and structured data to make forecasts and predictions with a high level of accuracy.

In 1959, Arthur Samuel, a pioneer in AI and computer gaming, defined ML as a field of study that enables computers to continuously learn without being explicitly programmed.

An ML model exposed to new data continuously learns, adapts and develops on its own. Many businesses are investing in ML solutions because they assist them with decision-making, forecasting future trends, learning more about their customers and gaining other valuable insights.

Types of ML

There are three main types of ML: supervised, unsupervised and reinforcement learning. A data scientist or other ML practitioner will use a specific version based on what they want to predict. Here’s what each type of ML entails:

  • Supervised ML: In this type of ML, data scientists will feed an ML model labeled training data. They will also define specific variables they want the algorithm to assess to identify correlations. In supervised learning, the input and output of information are specified.
  • Unsupervised ML: In unsupervised ML, algorithms train on unlabeled data, and the model scans through it to identify any meaningful connections. Unlike supervised learning, neither the labels nor the outputs are specified in advance.
  • Reinforcement learning: Reinforcement learning involves data scientists training ML to complete a multistep process with a predefined set of rules to follow. Practitioners program ML algorithms to complete a task and will provide it with positive or negative feedback on its performance. 
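The supervised/unsupervised split can be illustrated with a toy sketch in plain Python (no ML library; all the data is invented): labels drive the first approach, while distance alone drives the second.

```python
def nearest_neighbor_predict(train, point):
    """Supervised: labeled examples (value, label) let us predict a label."""
    _, label = min(train, key=lambda ex: abs(ex[0] - point))
    return label

def two_means_cluster(points, iters=10):
    """Unsupervised: split unlabeled 1-D points into two groups by distance."""
    a, b = min(points), max(points)  # initial centroids
    ga, gb = [], []
    for _ in range(iters):
        ga = [p for p in points if abs(p - a) <= abs(p - b)]
        gb = [p for p in points if abs(p - a) > abs(p - b)]
        if not ga or not gb:
            break
        a, b = sum(ga) / len(ga), sum(gb) / len(gb)  # re-center
    return sorted(ga), sorted(gb)

labeled = [(1.0, "low"), (2.0, "low"), (9.0, "high"), (10.0, "high")]
print(nearest_neighbor_predict(labeled, 8.5))    # -> high
print(two_means_cluster([1.0, 2.0, 9.0, 10.0])) # -> ([1.0, 2.0], [9.0, 10.0])
```

Real supervised models generalize far beyond nearest-neighbor lookup, and real clustering handles many dimensions, but the contrast is the same: the first function needs labels, the second discovers structure without them.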

Common ML applications

Major companies like Netflix, Amazon, Facebook, Google and Uber have made ML a central part of their business operations. ML can be applied in many ways, including via:

  • Email filtering
  • Speech recognition
  • Computer vision (CV)
  • Spam/fraud detection
  • Predictive maintenance
  • Malware threat detection
  • Business process automation (BPA)

Another way ML is used is to power digital navigation systems. For example, Apple and Google Maps apps on a smartphone use ML to inspect traffic, organize user-reported incidents like accidents or construction, and find the driver an optimal route for traveling. ML is becoming so ubiquitous that it even plays a role in determining a user’s social media feeds. 
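The Maps pipelines themselves are proprietary; the sketch below shows only the routing half of the story, a shortest-path search over edge weights that an ML traffic model would supply. The road graph and minute values are invented:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over predicted travel minutes."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return None

# Edge weights = predicted travel minutes, the numbers a traffic model
# would produce from live and historical data (values invented here).
graph = {
    "home": {"highway": 10, "sidestreet": 15},
    "highway": {"office": 25},   # congested right now
    "sidestreet": {"office": 12},
    "office": {},
}
print(shortest_route(graph, "home", "office"))
```

The ML contribution is in estimating those minute values; once they exist, classic graph search picks the route, which is why the slower-looking side street wins here.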

AI vs. ML: 3 key similarities

AI and ML do share similar characteristics and are closely related. ML is a subset of AI, which essentially means it is an advanced technique for realizing AI; ML is sometimes described as the current state of the art in the field.

1. Continuously evolving

AI and ML are both on a path to becoming some of the most disruptive and transformative technologies to date. Some experts say AI and ML developments will have even more of a significant impact on human life than fire or electricity. 

The AI market size is anticipated to reach around $1,394.3 billion by 2029, according to a report from Fortune Business Insights. As more companies and consumers find value in AI-powered solutions and products, the market will grow, and more investments will be made in AI. The same goes for ML — research suggests the market will hit $209.91 billion by 2029. 

2. Offering myriad benefits

Another significant quality AI and ML share is the wide range of benefits they offer to companies and individuals. AI and ML solutions help companies achieve operational excellence, improve employee productivity, overcome labor shortages and accomplish tasks never done before.

There are a few other benefits that are expected to come from AI and ML, including:

  • Improved natural language processing (NLP), another field of AI
  • Developing the Metaverse
  • Enhanced cybersecurity
  • Hyperautomation
  • Low-code or no-code technologies
  • Emerging creativity in machines

AI and ML are already influencing businesses of all sizes and types, and the broader societal expectations are high. Investing in and adopting AI and ML is expected to bolster the economy, lead to fiercer competition, create a more tech-savvy workforce and inspire innovation in future generations.

3. Leveraging Big Data

Without data, AI and ML would not be where they are today. AI systems rely on large datasets, in addition to iterative processing algorithms, to function properly. 

ML models only work when supplied with various types of semi-structured and structured data. Harnessing the power of Big Data lies at the core of both ML and AI more broadly.

Because AI and ML thrive on data, ensuring its quality is a top priority for many companies. For example, if an ML model receives poor-quality information, the outputs will reflect that. 

Consider this scenario: Law enforcement agencies nationwide use ML solutions for predictive policing. However, reports of police forces using biased training data for ML purposes have come to light, which some say is inevitably perpetuating inequalities in the criminal justice system. 

This is only one example, but it shows how much of an impact data quality has on the functioning of AI and ML.

Also read: What is unstructured data in AI?

AI vs. ML: 3 key differences

Even with the similarities listed above, AI and ML have differences that suggest they should not be used interchangeably. One way to keep the two straight is to remember that all types of ML are considered AI, but not all kinds of AI are ML.

1. Scope

AI is an all-encompassing term that describes a machine that incorporates some level of human intelligence. It’s considered a broad concept and is sometimes loosely defined, whereas ML is a more specific notion with a limited scope. 

Practitioners in the AI field develop intelligent systems that can perform various complex tasks like a human. On the other hand, ML researchers will spend time teaching machines to accomplish a specific job and provide accurate outputs. 

Due to this primary difference, it’s fair to say that professionals using AI or ML may utilize different elements of data and computer science for their projects.

2. Success vs. accuracy

Another difference between AI and ML solutions is that AI aims to increase the chances of success, whereas ML seeks to boost accuracy and identify patterns. Success is not as relevant in ML as it is in AI applications. 

It’s also understood that AI aims to find the optimal solution for its users. ML is used more often to find a solution, optimal or not. This is a subtle difference, but further illustrates the idea that ML and AI are not the same. 

In ML, there is a concept called the "accuracy paradox," in which a model may achieve a high accuracy value yet still mislead practitioners, because the underlying dataset could be highly imbalanced.
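A few lines of plain Python make the accuracy paradox concrete. The fraud counts below are made up, but the arithmetic is the whole point: on imbalanced data, a model that does nothing useful can still score very well on accuracy.

```python
# Made-up, highly imbalanced dataset: 990 legitimate cases, 10 frauds.
actual = ["legit"] * 990 + ["fraud"] * 10

# A useless "model" that always predicts the majority class.
predictions = ["legit"] * 1000

accuracy = sum(p == a for p, a in zip(predictions, actual)) / len(actual)
frauds_caught = sum(p == "fraud" == a for p, a in zip(predictions, actual))

print(f"accuracy: {accuracy:.1%}")        # 99.0% -- looks excellent
print(f"frauds caught: {frauds_caught}")  # 0 -- the model is worthless
```

This is why practitioners evaluate imbalanced problems with metrics like precision, recall, or cost-weighted error instead of accuracy alone.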

3. Unique outcomes

AI is a much broader concept than ML and can be applied in ways that will help the user achieve a desired outcome. AI also employs methods of logic, mathematics and reasoning to accomplish its tasks, whereas ML can only learn, adapt or self-correct when it’s introduced to new data. In a sense, ML has more constrained capabilities than AI.

ML models can only reach a predetermined outcome, but AI focuses more on creating an intelligent system to accomplish more than just one result. 

It can be perplexing, and the differences between AI and ML are subtle. Suppose a business trained ML to forecast future sales. It would only be capable of making predictions based on the data used to teach it.

However, a business could invest in AI to accomplish various tasks. For example, Google uses AI for several reasons, such as to improve its search engine, incorporate AI into its products and create equal access to AI for the general public. 

Identifying the differences between AI and ML

Much of the progress we’ve seen in recent years regarding AI and ML is expected to continue. ML has helped fuel innovation in the field of AI. 

AI and ML are highly complex topics that some people find difficult to comprehend.

Despite their mystifying natures, AI and ML have quickly become invaluable tools for businesses and consumers, and the latest developments in AI and ML may transform the way we live.

Read next: Does AI sentience matter to the enterprise?

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
