
Specialization is key in an exploding AI chatbot market

Amid an exploding market for AI chatbots, companies that target their virtual assistants to specialized enterprise sectors may get a firmer foothold than general chatbots, according to Gartner analysts. That’s not news to Zowie, which claims to be the only AI-powered chatbot technology specifically built for ecommerce companies that use customer support to drive sales – no small feat in an industry in which customer service teams answer tens of thousands of repetitive questions daily. Today, the company announced it has secured $14 million in series A funding led by Tiger Global Management, bringing its total to $20 million.

Chatbots — sometimes referred to as virtual assistants — are built on conversational AI platforms (CAIP) that combine speech-based technology, natural language processing and machine learning tools to develop applications for use across many verticals. 

The CAIP market is extremely diverse, both in vendor strategies and in the enterprise needs that vendors target, says Magnus Revang, a Gartner research vice president covering AI, chatbots and virtual assistants.

The CAIP market comprises about 2,000 vendors worldwide. As a result, companies looking to implement AI chatbot technology often struggle with vendor selection, according to a 2020 Gartner report coauthored by Revang, “Making Sense of the Chatbot and Conversational AI Platform Market.”

The report points out that in a market where no one CAIP vendor is vastly ahead of the pack, companies will need to select the provider best suited to their current short- and midterm needs.

The secret sauce in the AI chatbot market

That is Zowie’s secret sauce: specialization. Specifically, the company focuses on the needs of ecommerce providers, Maya Schaefer, Zowie’s chief executive officer and cofounder, told VentureBeat. The platform enables brands to improve their customer relationships and start generating revenue from customer service. 

Plenty of other CAIPs provide services for companies that sell products, but their solutions also target other verticals, such as banking, telecom and insurance. Examples include Boost AI, Solvvy and Ada. Ada, for instance, can also be geared for use in the financial technology and software-as-a-service industries to answer questions about a nonfunctioning system.

Zowie is built on the company’s automation technology, ‘Zowie X1’, which analyzes the meaning and sentiment of incoming messages to surface repetitive questions and trends.

Zowie claims to automate 70% of the inquiries ecommerce brands typically receive, such as “where’s my package?” or “how can I change my shipping address?” The solution also includes a suite of tools that lets agents provide personalized care and product recommendations, Schaefer said.

For example, if a customer says, “I would like help choosing new shoes,” the system hands the request to a live product expert.

Before implementation, the platform analyzes a customer’s historical chats, frequently asked questions and knowledge base to automatically learn which questions to automate. It uses AI capabilities to analyze new questions and conversations, delivering more automation. 
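
To make that idea concrete, here is a minimal, purely illustrative sketch of how historical support questions could be grouped into repetitive clusters with off-the-shelf tools. The sample questions and the TF-IDF-plus-k-means approach are assumptions for illustration only, not Zowie’s actual pipeline.

# Illustrative sketch only: one generic way to surface repetitive question
# clusters from historical support chats. This is not Zowie's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from collections import Counter

historical_questions = [
    "Where is my package?",
    "Where's my order, it hasn't arrived",
    "How can I change my shipping address?",
    "Can I update the delivery address on my order?",
    "Do you have these shoes in size 9?",
]

# Turn free-text questions into vectors, then group similar ones together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(historical_questions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# The largest clusters are the best candidates for automation.
for cluster_id, count in Counter(labels).most_common():
    print(f"cluster {cluster_id}: {count} similar questions")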

By analyzing patterns, the AI chatbot can tell when something new or unusual is happening and alerts the customer service team, Schaefer said.

Live agents may also have difficulty upselling customers, so Zowie gives them unique insights about each customer – what product they are looking at, whether they have ordered before and, if so, what they ordered – and offers a direct product catalog integration that lets agents send product suggestions in the form of a carousel, she added.

Time to optimization

In 2019, Schaefer and cofounder Matt Ciolek developed the all-in-one, modular CAIP designed for building highly customizable chatbots. Schaefer estimates that within six weeks of implementation, Zowie answers about 92% of customer inquiries such as “where’s my package?” and “what are the store hours?”

“Managers and agents don’t have to think about how to improve customer experience — our system will detect new trends and propose how to optimize the system automatically,” she said. “In a way, we automate the automation.”




Adobe has built a deepfake tool, but it doesn’t know what to do with it

Deepfakes have made a huge impact on the world of image, audio, and video editing, so why isn’t Adobe, corporate behemoth of the content world, getting more involved? Well, the short answer is that it is — but slowly and carefully. At the company’s annual Max conference today, it unveiled a prototype tool named Project Morpheus that demonstrates both the potential and problems of integrating deepfake techniques into its products.

Project Morpheus is basically a video version of the company’s Neural Filters, introduced in Photoshop last year. These filters use machine learning to adjust a subject’s appearance, tweaking things like their age, hair color, and facial expression (to change a look of surprise into one of anger, for example). Morpheus brings all those same adjustments to video content while adding a few new filters, like the ability to change facial hair and glasses. Think of it as a character creation screen for humans.

The results are definitely not flawless and are very limited in scope in relation to the wider world of deepfakes. You can only make small, pre-ordained tweaks to the appearance of people facing the camera, and can’t do things like face swaps, for example. But the quality will improve fast, and while the feature is just a prototype for now with no guarantee it will appear in Adobe software, it’s clearly something the company is investigating seriously.

What Project Morpheus also is, though, is a deepfake tool — which is potentially a problem. A big one. Because deepfakes and all that’s associated with them — from nonconsensual pornography to political propaganda — aren’t exactly good for business.

Now, given the looseness with which we define deepfakes these days, Adobe has arguably been making such tools for years. These include the aforementioned Neural Filters, as well as more functional tools like AI-assisted masking and segmentation. But Project Morpheus is obviously much more deepfakey than the company’s earlier efforts. It’s all about editing video footage of humans — in ways that many will likely find uncanny or manipulative.

Changing someone’s facial expression in a video, for example, might be used by a director to punch up a bad take, but it could also be used to create political propaganda — e.g. making a jailed dissident appear relaxed in court footage when they’re really being starved to death. It’s what policy wonks refer to as a “dual-use technology,” which is a snappy way of saying that the tech is “sometimes maybe good, sometimes maybe shit.”

This, no doubt, is why Adobe didn’t once use the word “deepfake” to describe the technology in any of the briefing materials it sent to The Verge. And when we asked why this was, the company didn’t answer directly but instead gave a long answer about how seriously it takes the threats posed by deepfakes and what it’s doing about them.

Adobe’s efforts in these areas seem involved and sincere (they’re mostly focused on content authentication schemes), but they don’t mitigate a commercial problem facing the company: that the same deepfake tools that would be most useful to its customer base are those that are also potentially most destructive.

Take, for example, the ability to paste someone’s face onto someone else’s body — arguably the ur-deepfake application that started all this bother. You might want such a face swap for legitimate reasons, like licensing Bruce Willis’ likeness for a series of mobile ads in Russia. But you might also be creating nonconsensual pornography to harass, intimidate, or blackmail someone (by far the most common malicious application of this technology).

Regardless of your intent, if you want to create this sort of deepfake, you have plenty of options, none of which come from Adobe. You can hire a boutique deepfake content studio, wrangle with some open-source software, or, if you don’t mind your face swaps being limited to preapproved memes and gifs, you can download an app. What you can’t do is fire up Adobe Premiere or After Effects. So will that change in the future?

It’s impossible to say for sure, but I think it’s definitely a possibility. After all, Adobe survived the advent of “Photoshopped” becoming shorthand for digitally edited images in general, and often with negative connotations. And for better or worse, deepfakes are slowly losing their own negative associations as they’re adopted in more mainstream projects. Project Morpheus is a deepfake tool with some serious guardrails (you can only make prescribed changes and there’s no face-swapping, for example), but it shows that Adobe is determined to explore this territory, presumably while gauging reactions from the industry and public.

It’s fitting that as “deepfake” has replaced “Photoshopped” as the go-to accusation of fakery in the public sphere, Adobe is perhaps feeling left out. Project Morpheus suggests it may well catch up soon.



Imec launches sustainable semiconductor program

Imec, a semiconductor research accelerator in Leuven, Belgium, announced the first fruits of the Sustainable Semiconductor Technologies and Systems (SSTS) research program aimed at reducing the semiconductor industry’s carbon footprint. Early participants include large system companies such as Apple and Microsoft and semiconductor suppliers such as ASM, ASML, Kurita, Screen and Tokyo Electron. 

Today, the semiconductor industry plays a small role in global carbon emissions. But the industry is facing unprecedented demand for new chips. A big concern is that growing demand could increase the semiconductor industry’s relative proportion of carbon emissions, particularly as other industries reduce their carbon impact. 

Semiconductor manufacturing requires large amounts of energy and water and can create hazardous waste. In the short run, the SSTS effort will help companies understand and optimize their systems to minimize carbon emissions. Eventually, better metrics could also help mitigate other problems caused by semiconductor production.

A growing concern in the semiconductor industry

Researchers have shown that nearly 75% of a mobile device’s carbon footprint comes from its fabrication and half of this comes from the chips. The SSTS team has already built a virtual fab to simulate the environmental impact of future logic and memory devices. 

The SSTS team is also starting to enrich and simulate data and to check the representativeness of the models it uses. Lars-Åke Ragnarsson, program director at SSTS, told VentureBeat that the early efforts are prototypes of simulated fabs rather than true digital twins. However, the hope is that future efforts will allow enterprises to weave these models into digital twins to help optimize ongoing manufacturing operations.

He stressed the importance of sharing emissions data across the entire manufacturing lifecycle and every step in the supply chain. By developing common metrics, everyone will get a better idea of which actions improve sustainability and which undermine it. Today, for example, some companies quantify carbon footprint on a per-chip basis, while others do it per dollar of product. Unified standards will provide an apples-to-apples comparison of the merits of various adjustments.
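
As a toy illustration of why a unified metric matters, consider two fabs reporting in different units. The figures below are hypothetical and exist only to show the conversion onto a common basis.

# Hypothetical figures, purely to illustrate the apples-to-apples problem.
emissions_per_chip_a = 2.0       # fab A reports kg CO2e per chip
emissions_per_dollar_b = 0.05    # fab B reports kg CO2e per dollar of product
avg_selling_price_b = 30.0       # assumed average $ per chip for fab B

# Put fab B on the same per-chip basis before comparing.
emissions_per_chip_b = emissions_per_dollar_b * avg_selling_price_b
print(emissions_per_chip_a, emissions_per_chip_b)  # 2.0 vs 1.5 kg CO2e per chip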

Sources of carbon emissions include electricity consumption, direct emissions and water usage. The group is already showing the potential impact of higher-efficiency pumps and hydrogen recovery. Down the road, it hopes to identify high-impact processes for logic and memory technologies where manufacturers should focus their efforts. It also wants to develop guidelines for manufacturers, fabless companies, equipment providers and material providers that could help minimize carbon impact.

Early start

The project is still in its infancy. Currently, chip manufacturing accounts for only about 0.1% of global carbon emissions. But the concern is that, without a new approach, that share could triple by 2030 and have an outsized impact on carbon emissions as other industries reduce their carbon footprints. “The goal should be to stay at 0.1%,” Ragnarsson said.

They plan to add other environmental aspects, such as material costs, down the road. Other tools will make it easier to explore how new manufacturing and computing techniques such as molecular computing might impact the climate footprint. They also plan to develop tools to help share these findings and broader lifecycle data with other stakeholders such as politicians and citizens. 

“We need to work together as a team,” Ragnarsson said. “This is not competitive. We need to work together to make this happen.”




Making this album with AI ‘felt like wandering in an enormous labyrinth’

The scare is over and the fun can begin. That’s how I tend to think of creative endeavors involving artificial intelligence these days. We’ve moved past, I think, hyperbolic claims about AI making human art redundant and can now enjoy all the possibilities this technology affords. In that light, Shadow Planet — a new album made as a three-way collaboration between two humans and AI — shows exactly what sort of fun can be had.

Shadow Planet is the creation of writer Robin Sloan, musician Jesse Solomon Clark, and Jukebox, a machine learning music program made by OpenAI. After an off-the-cuff Instagram conversation between Sloan and Clark about starting a band (named The Cotton Modules), the two began exchanging tapes of music. A seasoned composer, Clark sent seeds of songs to Sloan, who fed them into Jukebox, which is trained on a huge dataset of 1.2 million songs and tries to autocomplete any audio it hears. The AI program, steered by Sloan, then built on Clark’s ideas, which Sloan sent back to him to develop further.

The end result of this three-way trade is Shadow Planet, an atmospheric album in which snippets of folk songs and electronic hooks emerge like moss-covered logs from a fuzzy bog of ambient loops and disintegrating samples. It is a complete album in and of itself: a pocket musical universe to explore.

As Sloan explained to me in an interview over email, the sound of Shadow Planet is in many ways a result of the limitations of Jukebox, which only outputs mono audio at 44.1kHz. “Making this album, I learned that this kind of AI model is absolutely an ‘instrument’ you need to learn to play,” he told me. “It’s basically a tuba! A very… strange… and powerful… tuba…”

It’s this sort of emergent creativity, when machines and humans respond to limitations and advantages in one another’s programming, that makes AI art so interesting. Think about how the evolution of the harpsichord into the piano affected styles of music, for example, and how the ability of the latter to play loudly or softly (rather than the single fixed dynamic of the harpsichord) engendered new musical genres. This, I think, is what’s happening now with a whole range of AI models that are shaping creative output.

You can read my interview with Sloan below, and find out why working with machine learning felt to him “like wandering in an enormous labyrinth.” And you can listen to Shadow Planet on Spotify, Apple Music, iTunes, Bandcamp, or on Sloan and Clark’s website.

This interview has been lightly edited for clarity.

Hey Robin, thanks for taking the time to speak to me about this album. First of all, tell me a little bit please about what material Jesse was sending you to start this collaboration? Was it original songs?

Yes! Jesse is a composer for commercials, films, and physical installations — he wrote the generative soundtrack that runs inside the visitor center at Amazon’s Spheres in Seattle. So he’s well-accustomed to sitting down and producing a bunch of musical options. Each tape I received from him had around a dozen small “songlets” on it, some only 20-30 seconds long, others a few minutes, all different, all separated by a bit of silence. So, my first task was always to listen through, decide what I liked best, and copy that to the computer.

And then you fed those into an AI system. Can you tell me a little bit about that program? What was it and how does it work?

I used OpenAI’s Jukebox model, which they trained on ~1.2 million songs, 600K of them in English; it operates on raw audio samples. That’s a huge part of the appeal for me; I find the MIDI-centric AI systems too… polite? They respect the grid too much! The sample-based systems (which I’ve used before, in different incarnations, including to make music for the audiobook of my last novel) are crunchier and more volatile, so I like them better.

To sample the Jukebox model, I used my own customized code. The technique OpenAI describes in their publication is very much like, “Hey, Jukebox, play me a song that sounds like The Beatles,” but I wanted to be able to “weird it up,” so my sampling code allows me to specify many different artists and genres and interpolate between them, even if they don’t have anything in common.

And that’s all just the setup. The sampling process itself is interactive. I’d always start with a “seed” from one of Jesse’s tapes, which would give the model a direction, a vibe to follow. In essence, I’d say to the model: “I’d like something that’s a mix of genre X and Y, sort of like artists A and B, but also, it’s got to follow this introduction: <Jesse’s music plays>”

I’d also, in some cases, specify lyrics. Then, I would go about eight to 10 seconds at a time, generating three options at each step — the computer churns for five to 10 minutes, FUN — then play them back, select one, and continue ahead… or sometimes reject all three and start over. In the end, I’d have a sample between 60-90 seconds long, and I’d print that to tape.

It was, to be honest, an extremely slow and annoying process, but the results were so interesting and evocative that I was always motivated to keep going!
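
For readers curious how that loop might look in practice, here is a schematic sketch of the interactive sampling Sloan describes: grow a sample roughly eight to 10 seconds at a time, generate three continuations per step, and let a person keep one or reject them all. The helper functions are stand-in stubs invented for illustration, not OpenAI’s Jukebox code or Sloan’s actual sampling script.

# Schematic sketch of the human-in-the-loop sampling described above.
# The "model" and the "listener" below are stand-in stubs, not real Jukebox code.
import random

def model_continue(sample, seconds):
    # Stub: a real implementation would call custom Jukebox sampling code
    # conditioned on genres, artists and optional lyrics.
    return sample + [f"{seconds}s of generated audio"]

def human_pick(options):
    # Stub: in practice a person listens to all three and chooses, or rejects all.
    return random.choice(options + [None])

def curate_sample(seed, target_steps=8):
    sample = list(seed)
    steps = 0
    while steps < target_steps:               # roughly 60-90 seconds in total
        options = [model_continue(sample, 9) for _ in range(3)]
        choice = human_pick(options)
        if choice is None:
            continue                          # reject all three and try again
        sample = choice
        steps += 1
    return sample

print(curate_sample(["seed phrase from a tape"]))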

What did Jesse think about the material you were sending him?

He underscores that working with the material was often VERY difficult. Weird instruments would rise up out of nowhere, or the key would change in a strange way, etc. But I think that was part of the fun, too, and the reason to do this project at all: each sample I sent him was a puzzle to solve.

Ultimately, his work was both responsive — “how do I support this sample, help it shine” — and transformative — “what kind of song should this be?” That’s evident on all the songs, but a clear example is “Magnet Train,” where Jesse took pains to showcase and support the vocal performance (weird and dorky and great) and then extended it with elements that suggest “train-ness” — the chugging percussion, etc.

And how exactly did you hone in on this particular sound, do you think? What pushed you in this direction?

Oh, it was definitely the grain of the medium. Early on, I told Jesse that although the model could produce sound at 44.1kHz, it was only in mono. His response was: “Cool! Let’s use mono cassettes then.” And the music he sent back to me was mono, as well. In his final production pass, he added a bit of stereo width, just so the songs weren’t all totally locked in the center, but it’s a pretty “narrow” album generally, and that’s totally because of the AI’s limitation, which we decided to embrace and extend rather than fight. Same goes for the lo-fi, grainy, “radio tuned to a ghost channel” sound — totally an artifact of the way the model produces music, which we amplified further by bouncing the music to tape so many times.

So, in the finished songs that we’re hearing, what proportion of the music is made by AI and what by human? Is it even possible to make that distinction?

It really does vary widely from song to song, and the truth is, in some cases, we lost track! I’d start with a phrase from Jesse, put it through my sampling process, send it back to him, he’d add a layer or extend it, send it back to me, I’d put it BACK through the sampling process… what’s the human / AI breakdown there? It’s all sort of mixed and layered together.

There’s one division that’s clear: anytime you hear anything that sounds like a human voice, whether it’s enunciating lyrics clearly or sort of ooh-ing and ahh-ing, that voice is generated by the AI.

Making this album, I learned that this kind of AI model is absolutely an “instrument” you need to learn to play. And I’ve come to believe that analogy is a lot more useful and generative than like, “AI co-composer” or “automatic AI artist” or whatever other analogy you might have heard or can imagine. It’s basically a tuba! A very… strange… and powerful… tuba…

Haha, right! I’ve spoken to quite a few artists who use machine learning models to make songs or books, and they often talk about the dynamic between them and the AI — whether it was pushing them in a given direction, for example. Did it feel like this for you at all, when you were exploring what music Jukebox could give you?

I love this question, and here’s why: previously, I have been pretty skeptical / critical of the “big [AI] models trained on everything,” even as they’ve risen to prominence. This is a class that includes GPT-3, Jukebox, CLIP, VQGAN, etc. It’s very clear that this approach produces powerful results, but I always thought it was more creatively interesting to take responsibility for your own dataset, understand its composition as a key creative decision, etc. And I still think that’s true, to a degree…

BUT!

The experience of using Jukebox really turned me around on this. For me, it has felt like wandering in an enormous labyrinth or a dead city: huge, full of alleys and arcades. Even now, having used it for so long, I have no idea what’s still waiting in there, what can be found and carried out. Obviously, I’m betraying the fact that I’ve played too many RPGs here… but truly! That’s the feeling, and it’s VERY fun.

With that in mind, then, what do you think making this album with Jesse taught you about the future of AI and creativity? What do you think these systems will be doing in the future?

AI techniques can do a whole bunch of different things for different kinds of artists, of course, but regarding this specific category of generative models that can produce new music and new sounds, it seems TOTALLY clear to me that these are on the path to becoming a new kind of synthesizer or electric guitar. I think the story will be broadly similar — they’ll go from research project to novelty (which is where we’re at now) to tools for nascent virtuosos (it’s exciting to think about getting to that point!) to commonplace participants in any / every studio.



AI in robotics: Problems and solutions

Robotics is a diverse industry with many variables. Its future is filled with uncertainty: nobody can predict which way it will develop or which directions will lead a few years from now. Robotics is also a growing sector of more than 500 companies working on products that can be divided into four categories:

  • Conventional industrial robots,
  • Stationary professional services (such as medical and agricultural applications),
  • Mobile professional services (construction and underwater activities),
  • Automated guided vehicles (AGVs) for carrying small and large loads in logistics or assembly lines.

According to data from the International Federation of Robotics, 3 million industrial robots are operating worldwide – a number that increased by 10% in 2021. The global robotics market is estimated at $55.8 billion and is expected to grow to $91.8 billion by 2026 with a 10.5% annual growth rate.

Biggest industry challenges

The field of robotics faces numerous issues stemming from its hardware and software capabilities. Most challenges involve enabling technologies such as artificial intelligence (AI), perception and power sources. From manufacturing procedures to human-robot collaboration, several factors are slowing the development pace of the robotics industry.

Let’s look at the significant problems facing robotics:

Intelligence

Real-world environments can be challenging for robots to comprehend and act on appropriately. Robots are still no match for human thinking, so robotic solutions are not entirely dependable.

Navigation

There has been considerable progress in robots’ ability to perceive and navigate their environments – self-driving vehicles, for example. Navigation solutions will continue to evolve, but future robots need to be able to work in environments that are unmapped and not fully understood.

Autonomy

Full autonomy is impractical and too distant for now. However, we can reason about energy autonomy. Our brains require lots of energy to function; without evolutionary mechanisms that optimize these processes, they would not have reached the current levels of human intelligence. The same applies to robotics: the more power a robot requires, the less autonomous it can be.

New materials

Elaborate hardware is crucial to today’s robots. Massive work is still needed on artificial muscles, soft robotics and other components that will help build efficient machines.

The above challenges are not unique; they are to be expected for any developing technology. The potential value of robotics is immense, attracting tremendous investment focused on removing existing issues. Among the solutions is pairing robotics with artificial intelligence.

Robotics and AI

Robots have the potential to replace about 800 million jobs globally in the future, making about 30% of all positions irrelevant. Unsurprisingly, only 7% of businesses do not currently employ AI-based technology but are looking into it. However, we need to be careful when discussing robots and AI, as these terms are often assumed to be identical, which has never been the case.

Artificial intelligence, by definition, is about enabling machines to perform complex tasks autonomously. Tools based on AI can solve complicated problems by analyzing large quantities of information and finding dependencies not visible to humans. We at ENOT.ai have featured six cases in which improvements in navigation, recognition, and energy consumption ranged from 48% to 800% after applying AI.

While robotics is also connected to automation, it combines other fields as well – mechanical engineering, computer science, and AI. AI-driven robots use machine learning algorithms to perform functions autonomously. AI robots can be described as intelligent automation applications in which robotics provides the body while AI supplies the brain.

AI applications for robotics

The cooperation between robotics and AI is naturally meant to serve people. Numerous valuable applications have been developed so far, starting with household use. AI-powered vacuum cleaners, for example, have become a part of everyday life for many people.

However, much more elaborate applications are being developed for industrial use. Let’s go over a few of them:

  • Agriculture. As in healthcare and other fields, robotics in agriculture will mitigate the impact of labor shortages while improving sustainability. Applications such as Agrobot enable precision weeding, pruning, and harvesting. Powered by sophisticated software, these tools let farmers analyze distances, surfaces, volumes, and many other variables.
  • Aerospace. While NASA is looking to improve its Mars rovers’ AI and working on an automated satellite repair robot, other companies want to enhance space exploration through robotics and AI. Airbus’ CIMON, for example, was developed to assist astronauts with their daily tasks and reduce stress via speech recognition, while operating as an early-warning system to detect issues.
  • Autonomous driving. After Tesla, self-driving cars no longer surprise anyone. Nowadays, there are two critical cases: self-driving robo-taxis and autonomous commercial trucking. In the short term, advanced driver-assistance systems (ADAS) technology will be essential as the market prepares for complete autonomy and seeks to profit from the technology’s capabilities.

With advances in artificial intelligence coming on in leaps and bounds every year, it’s certainly possible that the line between robotics and artificial intelligence will become more blurred over the coming decades, resulting in a rocketing increase in valuable applications.

Main market tendency

The competitive field of artificial intelligence in robotics is becoming more fragmented as the market grows, providing clear opportunities to robot vendors. Companies are moving quickly to secure a first-mover advantage and seize the opportunities created by these technologies. Vendors also view expansion through product innovation and global reach as a path toward gaining maximum market share.

However, there is a clear need to increase the number of market players. The potential of robotics to take over routine human work promises to be highly consequential, freeing people’s time for creativity. Therefore, we need many more players to speed up the process.

Future of AI in robotics

Artificial intelligence and robotics have already become a concrete target for business investment. This technology alliance will undoubtedly change the world, and we can expect to see it happen in the coming decade. AI allows robotic automation to improve and to perform complicated operations without a hint of error: a straightforward path to excellence. Both industries are a driving force of the future, and we will see many astounding technological inventions based on AI in the next decade.

Sergey Alyamkin, Ph.D., is CEO and founder of ENOT.




The future of AI is a conversation with a computer

How would an AI writing program start an article on the future of AI writing? Well, there’s one easy way to find out: I used the best known of these tools, OpenAI’s GPT-3, to do the job for me.

Using GPT-3 is disarmingly simple. You have a text box to type into and a menu on the side to adjust parameters, like the “temperature” of the response (which essentially equates to randomness). You type, hit enter, and GPT-3 completes what you’ve written, be it poetry, fiction, or code. I tried inputting a simple headline and a few sentences about the topic, and GPT-3 began to fill in the details. It told me that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”
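
For context, here is roughly what the same interaction looked like through OpenAI’s completions API at the time, via the openai Python package. The prompt, engine name, and parameter values are illustrative rather than the exact settings used for this article.

# A minimal sketch of calling GPT-3's completions endpoint circa 2021.
# Prompt and settings are illustrative; "temperature" controls randomness.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",                 # the original GPT-3 base model
    prompt="The future of AI writing is",
    max_tokens=100,
    temperature=0.8,                  # higher values give more varied output
)
print(response["choices"][0]["text"])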

So far, so good, I thought. I hit enter again, and the program added a quote from Google’s head of AI, Jeff Dean, then referenced an experimental piece of software from the 1960s before promising that an “AI Revolution” was coming that would reap immense rewards across the fields of science, technology, and medicine.

Fine, I thought. Then I thought a little more and did some googling. I soon discovered that the quote from Dean was made up, that the experimental software never existed, and while the promise of an “AI Revolution” was all well and good, it wasn’t any different from the vague nonsense found in hype-filled press releases. Really, what was most revealing about the future of AI was not what GPT-3 said but how it said it. The medium is the message, as Marshall McLuhan pointed out many years ago. And here, the medium included plausible fabrications; endless output; and, crucially, an opportunity to respond to the robot writer.

If we’re looking ahead at the next 10 years of AI development, trying to predict how we will interact with increasingly intelligent software, it helps to consider those tools that can talk back. AI writing models may only be digital parrots, able to copy form without understanding meaning, but they still create a dialogue with the user. This is something that often seems missing from the introduction of AI systems like facial recognition algorithms (which are imposed upon us) or self-driving cars (where the public becomes the test subject in a dangerous experiment). With AI writing tools, there is the possibility for a conversation.


If you use Gmail or Google Docs, then you’ve probably already encountered this technology. In Google’s products, AI editors lurk in the blank space in front of your cursor, manifesting textual specters that suggest how to finish a sentence or reply to an email. Often, their prompts are just simple platitudes — ”Thanks!”, “Great idea!”, “Let’s talk next week!” — but sometimes these tools seem to be taking a stronger editorial line, pushing your response in a certain direction. Such suggestions are intended to be helpful, of course, but they seem to provoke annoyance as frequently as gratitude.

To understand how AI systems learn to generate such suggestions, imagine being given two lists of words. One starts off “eggs, flour, spatula,” and the other goes “paint, crayons, scissors.” If you had to add the items “milk” and “glitter” to these lists, which would you choose and with how much confidence? And what if that word was “brush” instead? Does that belong in the kitchen, where it might apply an egg wash, or is it more firmly located in the world of arts-and-crafts? Quantifying this sort of context is how AI writing tools learn to make their suggestions. They mine vast amounts of text data to create statistical maps of the relationships between words, and use this information to complete what you write. When you start typing, they start predicting which words should come next.
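
Here is a toy, deliberately simplified illustration of that counting idea, scoring how well a candidate word fits each list. The lists extend the example above with a few extra items so the counts are non-trivial, and real systems learn these statistics with large neural networks rather than literal lookup tables.

# Toy illustration of context statistics: score how strongly a candidate word
# co-occurs with each list. Production language models do not work this literally.
from collections import Counter

contexts = {
    "kitchen": "eggs flour spatula milk butter whisk brush".split(),
    "crafts":  "paint crayons scissors glitter glue brush".split(),
}

counts = {name: Counter(words) for name, words in contexts.items()}

def score(word):
    # Relative frequency of the word in each context (with add-one smoothing).
    return {name: (c[word] + 1) / (sum(c.values()) + 1) for name, c in counts.items()}

print(score("glitter"))  # clearly higher for "crafts"
print(score("brush"))    # ambiguous: it appears in both lists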

Features like Gmail’s Smart Reply are only the most obvious example of how these systems — often known as large language models — are working their way into the written world. AI chatbots designed for companionship have become increasingly popular, with some, like Microsoft’s Chinese Xiaoice, attracting tens of millions of users. Choose-your-own-adventure-style text games with AI dungeon masters are attracting users by letting people tell stories collaboratively with computers. And a host of startups offer multipurpose AI text tools that summarize, rephrase, expand, and alter users’ input with varying degrees of competence. They can help you to write fiction or school essays, say their creators, or they might just fill the web with endless spam.

The ability of the underlying software to actually understand language is a topic of hot debate. (One that tends to arrive, time and time again, at the same question: what do we mean by “understand” anyway?). But their fluency across genres is undeniable. For those enamored with this technology, scale is key to their success. It’s by making these models and their training data bigger and bigger that they’ve been able to improve so quickly. Take, for example, the training data used to create GPT-3. The exact size of the input is difficult to calculate, but one estimate suggests that the entirety of Wikipedia in English (3.9 billion words and more than 6 million articles) makes up only 0.6 percent of the total.
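
Taking that estimate at face value, the implied size of the full training text follows from simple arithmetic:

# If English Wikipedia's ~3.9 billion words are only 0.6% of the training text,
# the whole corpus works out to roughly 650 billion words.
wikipedia_words = 3.9e9
share_of_total = 0.006
print(wikipedia_words / share_of_total)  # ~6.5e11, i.e. about 650 billion words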

Relying on scale to build these systems has benefits and drawbacks. From an engineering perspective, it allows for fast improvements in quality: just add more data and compute to reap fast rewards. The size of large language models is generally measured in their number of connections, or parameters, and by this metric, these systems have increased in complexity extremely quickly. GPT-2, released in 2019, had 1.5 billion parameters, while its 2020 successor, GPT-3, had more than 100 times that — some 175 billion parameters. Earlier this year, Google announced it had trained a language model with 1.6 trillion parameters.

The difference in quality as systems get larger is notable, but it’s unclear how much longer these scaling efforts will reap rewards in quality. Boosters think that the sky’s the limit — that these systems will keep on getting smarter and smarter, and that they may even be the first step toward creating a general-purpose artificial intelligence or AGI. But skeptics suggest that the AI field in general is starting to reap diminishing returns as it scales ever up.

A reliance on scale, though, is inextricably linked to the statistical approach that creates uncertainty in these models’ output. These systems have no centralized store of accepted “truths”; no embodied understanding of “what the world is like for humans” and, hence, no way to distinguish fact from fiction or to exercise common sense.

Quiz them on simple trivia, like capital cities or the birthdays of US presidents, and they are right most of the time. But to these systems, truth is simply a statistical feature of their training data. They answer questions correctly because the text they’ve been fed has presented them with the correct information with enough frequency. This means that if you push them on any given topic or stray from the most obvious fields, they will lie thoughtlessly, making up quotes, dates, biographical details, and anything else you want to hear. The same probabilistic approach also means they can stumble over common sense questions. Start quizzing them with slightly fantastical queries, and they will confidently assert, for example, that a pencil is heavier than a toaster or that a blade of grass only has one eye. Such answers reveal the gulf between statistical and embodied intelligence.

To get a better understanding of these AI language models, I’ve been playing with a variety for the past few weeks; from tools sold to copywriters to flexible, multipurpose systems like GPT-3. The experience has been dizzying. Often, I’m amazed by the fluency, insight, and creativity of these systems. As part of a project for The Verge’s 10-year anniversary, for example, I used GPT-3 to write technopagan spells for a zine, feeding it a prompt (below in bold) which it completed with a four-step ritual (of which I’m showing only the first step):

Technopaganism is a new sort of magic that blends traditional witchcraft with the power of being online all the time. What follows is a technopagan spell designed to protect your phone from spam and improve the quality of messages you receive. Step 1) Take your cell phone and go outside. Close your eyes, hold the device in front of you, and intone: “I offer you, O Spirit of Technopagan Magic, my eternal and unending electronic connection to the net. And in return I ask you for magick to combine my will with the will of the system — make me impervious to the garbage that flows through my inbox. Make all messages flow together, and I will choose among them what is right for me to see. Grant me the Finger of the Internet, and I will strike at spam, and cast out the spammers back to the dark places where they come from.”

But other times, I’m stunned by how limited these programs are. Something that’s often overlooked is just how much human curation is needed to shape their output. The text above was not the first response I received from GPT-3, and I had to go through several iterations to generate a response that was both cogent and funny. It helped, of course, that the task I’d set GPT-3 was an imaginative and open-ended one: it played into the program’s strengths (and I think GPT-3’s success in such tasks has led some users and observers to exaggerate the intelligence of these systems). Other times, though, the software produced nonsensical content even within the fanciful framing I’d given it. Another “spell” it generated in response to the same prompt was much less focused, adding fictitious social media handles, tech headlines, and non-existent URLs to the spell’s instructions:

Perform a ritual to seal the crown circle into the device. May the gods help us all, the Apple iPhone X reference for is broken! Destroy the holy symbol! —tianong xiong (@tianongxiong) September 13, 2018 To confirm the rumor: Facebook Messenger’s search bar does have a search bar hidden right next to it pic.twitter.com/DghBL5DmhU — Warl Darkh (@dkpt) September 12, 2018

You can argue that this is just creativity of a different sort, and that of course a proper technopagan spell would include URLs. But it’s also obvious the machine has gone off-piste.

Despite such weaknesses, there’s already talk of AI systems taking over writers’ jobs. Naturally, I wondered if a computer could write articles for The Verge (and not just this one). I played around with different models, inputting opening paragraphs into these systems and asking for story ideas. Here is some more from GPT-3 on large language models:

“The human brain is just a really fancy computer,” said Jeff Dean, a Google hardware engineer and AI expert, at a company event in 2016. “It’s a computer made of meat.”

To understand GPT-3, you first have to know a little about how the brain learns. The brain isn’t born knowing how to speak or understand speech. It must first acquire this ability through exposure to the outside world, which is why children tend to learn language so quickly, and why being raised in a household where a language isn’t spoken presents a major obstacle.

As neuroscientist Mark Changizi has shown, the process of learning a language is a massive project. “Language is one of the most complex things that brains can do,” he writes, “and it emerges from the brain’s more basic skills, like vision, hearing, and motor control.”

But how does the brain acquire all this knowledge? The short answer is: via autocomplete.

All these points make sense if you’re not concentrating too hard, but they don’t flow from sentence to sentence. They never follow an argument or build to a conclusion. And again, fabrication is a problem. Both Jeff Dean and Mark Changizi are real people who have been more or less correctly identified (though Dean is now head of AI at Google, and Changizi is a cognitive scientist rather than a neuroscientist). But neither man ever uttered the words that GPT-3 attributed to them, as far as I can tell. Yet despite these problems, there’s also a lot to be impressed by. For example, using “autocomplete” as a metaphor to describe AI language models is both accurate and easy to understand. I’ve done it myself! But is this because it’s simply a common metaphor that others have deployed before? Is it right then to say GPT-3 is “intelligent” to use this phrase or is it just subtly plagiarizing others? (Hell, I ask the same questions about my own writing.)

Where AI language models seem best suited is in creating text that is rote, not bespoke, as with Gmail’s suggested replies. In the case of journalism, automated systems have already been integrated into newsrooms to write “fill in the blanks” stories about earthquakes, sporting events, and the like. And with the rise of large AI language models, the span of content that can be addressed in this way is expanding.

Samanyou Garg is the founder of an AI writing startup named Writesonic, and says his service is used mostly by e-commerce firms. “It really helps [with] product descriptions at scale,” says Garg. “Some of the companies who approach us have like 10 million products on their website, and it’s not possible for a human to write that many.” Fabian Langer, founder of a similar firm named AI Writer, tells The Verge that his tools are often used to pad out “SEO farms” — sites that exist purely to catch Google searches and that create revenue by redirecting visitors to ads or affiliates. “Mostly, it’s people in the content marketing industry who have company blogs to fill, who need to create content,” said Langer. “And to be honest, for these [SEO] farms, I do not expect that people really read it. As soon as you get the click, you can show your advertisement, and that’s good enough.”

It’s this sort of writing that AI will take over first, and which I’ve started to think of as “low-attention” text — a description that applies to both the effort needed to create and read it. Low-attention text is not writing that makes huge demands on our intelligence, but is mostly functional, conveying information quickly or simply filling space. It also constitutes a greater portion of the written world than you might think, including not only marketing blogs but work interactions and idle chit-chat. That’s why Gmail and Google Docs are incorporating AI language models’ suggestions: they’re picking low-hanging fruit.

A big question, though, is what effect will these AI writing systems have on human writing and, by extension, our culture? The more I’ve thought about the output of large language models, the more it reminds me of geofoam. This is a building material made from expanded polystyrene that is cheap to produce, easy to handle, and packed into the voids left over by construction projects. It is incredibly useful but somewhat controversial, due to its uncanny appearance as giant polystyrene blocks. To some, geofoam is an environmentally-sound material that fulfills a specific purpose. To others, it’s a horrific symbol of our exploitative relationship with the Earth. Geofoam is made by pumping oil out of the ground, refining it into cheap matter, and stuffing it back into the empty spaces progress leaves behind. Large language models work in a similar way: processing the archaeological strata of digital text into synthetic speech to fill our low-attention voids.

For those who worry that much of the internet is already “fake” — sustained by botnets, traffic farms, and automatically generated content — this will simply mark the continuation of an existing trend. But just as with geofoam, the choice to use this filler on a wide scale will have structural effects. There is ample evidence, for example, that large language models encode and amplify social biases, producing text that is racist and sexist, or that repeats harmful stereotypes. The corporations in control of these models pay lip service to these problems but don’t think they present serious problems. (Google famously fired two of its AI researchers after they published a detailed paper describing these issues.) And as we offload more of the cognitive burden of writing onto machines, making our low-attention text no-attention text, it seems plausible that we, in turn, will be shaped by the output of these models. Google already uses its AI autocomplete tools to suggest gender-neutral language (replacing “chairman” with “chair,” for example), and regardless of your opinion on the politics of this sort of nudge, it’s worth discussing what the end-point of these systems might be.

In other words: what happens when AI systems trained on our writing start training us?


Despite the problems and limitations of large language models, they’re already being embraced for many tasks. Google is making language models central to its various search products; Microsoft is using them to build automated coding software, and the popularity of apps like Xiaoice and AI Dungeon suggests that the free-flowing nature of AI writing programs is no hindrance to their adoption.

Like many other AI systems, large language models have serious limitations when compared with their hype-filled presentations. And some predict this widespread gap between promise and performance means we’re heading into another period of AI disillusionment. As the roboticist Rodney Brooks put it: “just about every successful deployment [of AI] has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.” But AI writing tools can, to an extent, avoid these problems: if they make a mistake, no one gets hurt, and their collaborative nature means human curation is often baked in.

What’s interesting is considering how the particular characteristics of these tools can be used to our advantage, showing how we might interact with machine learning systems, not in a purely functional fashion but as something exploratory and collaborative. Perhaps the most interesting single use of large language models to date is a book named Pharmako-AI: a text written by artist and coder K Allado-McDowell as an extended dialogue with GPT-3.

To create Pharmako-AI, Allado-McDowell wrote and GPT-3 responded. “I would write into a text field, I would write a prompt, sometimes that would be several paragraphs, sometimes it would be very short, and then I would generate some text from the prompt,” Allado-McDowell told The Verge. “I would edit the output as it was coming out, and if I wasn’t interested in what it was saying, I would cut that part and regenerate, so I compared it to pruning a plant.”

The resulting text is esoteric and obscure, discussing everything from the roots of language itself to the concept of “hyper-dimensionality.” It is also brilliant and illuminating, showing how writing alongside machines can shape thought and expression. At different points, Allado-McDowell compares the experience of writing using GPT-3 to taking mushrooms and communing with gods. They write: “A deity that rules communication is an incorporeal linguistic power. A modern conception of such might read: a force of language from outside of materiality.” That force, Allado-McDowell suggests, might well be a useful way to think about artificial intelligence. The result of communing with it is a sort of “emergence,” they told me, an experience of “being part of a larger ecosystem than just the individual human or the machine.”

This, I think, is why AI writing is so much more exciting than many other applications of artificial intelligence: because it offers the chance for communication and collaboration. The urge to speak to something greater than ourselves is evident in how these programs are being embraced by early adopters. A number of individuals have used GPT-3 to talk to dead loved ones, for example, turning its statistical intelligence into an algorithmic ouija board. Though such experiments also reveal the limitations. In one of these cases, OpenAI shut down a chatbot shaped to resemble a developer’s dead fiancée because the program didn’t conform to the company’s terms of service. That’s another, less promising reality of these systems: the vast majority are owned and operated by corporations with their own interests, and they will shape their programs (and, in turn, their users) as they see fit.

Despite this, I’m hopeful, or at least curious, about the future of AI writing. It will be a conversation with our machines; one that is diffuse and subtle, taking place across multiple platforms, where AI programs linger on the fringes of language. These programs will be unseen editors to news stories and blog posts, they will suggest comments in emails and documents, and they will be interlocutors that we even talk to directly. It’s impossible that this exchange will only be good for us, or that the deployment of these systems will come without problems and challenges. But it will, at least, be a dialogue.





Why AI and autonomous response are crucial for cybersecurity (VB On-Demand)

Presented by Darktrace


Today, cybersecurity is in a state of continuous growth and improvement. In this on-demand webinar, learn how two organizations use a continuous AI feedback loop to identify vulnerabilities, harden defenses and improve the outcomes of their cybersecurity programs.

Watch free on-demand here.


The security risk landscape is in tremendous flux, and the traditional on-premises approach to cybersecurity is no longer enough. Remote work has become the norm, and outside the office walls, employees are letting down their personal security defenses. Cyber risks introduced by the supply chain via third parties are still a major vulnerability, so organizations need to think about not only their defenses but those of their suppliers to protect their priority assets and information from infiltration and exploitation.

And that’s not all. The ongoing Russia-Ukraine conflict has provided more opportunities for attackers, and social engineering attacks have ramped up tenfold and become increasingly sophisticated and targeted. Both play into the fears and uncertainties of the general population. Many security industry experts have warned about future threat actors leveraging AI to launch cyber-attacks, using intelligence to optimize routes and hasten their attacks throughout an organization’s digital infrastructure.

“In the modern security climate, organizations must accept that it is highly likely that attackers could breach their perimeter defenses,” says Steve Lorimer, group privacy and information security officer at Hexagon. “Organizations must focus on improving their security posture and preventing business disruption, so-called cyber resilience. You don’t have to win every battle, but you must win the important ones.”

Information security officers (ISOs) need to look for cybersecurity options that alleviate some resource challenges, add value to their team, and reduce response time. Self-learning AI trains itself using unlabeled data. Autonomous response is a technology that calculates the best action to take to contain in-progress attacks at machine speed, preventing attacks from spreading throughout the business and interrupting crucial operations. Both are becoming essential for a security program to address these challenges.

Why self-learning AI is essential in the new cybersecurity landscape

Attackers are constantly innovating, transforming old attack patterns into new ones. Self-learning AI can detect when something in an organization’s digital infrastructure changes, identify behaviors or patterns that haven’t been seen previously, and act to quarantine the potential threat before it can escalate into a full-blown crisis, disrupting business. 

“It’s about building layers at the end of the day,” Lorimer adds. “AI will always be a supporting element, not a replacement for human teams and knowledge. AI can empower human teams and decrease the burden. But we can never entirely rely on machines; you need the human element to make gut feeling decisions and emotional reactions to influence more significant business decisions.”

The advantages of autonomous response

Often, cyber attacks start slowly; many take months to move between reconnaissance and penetration, but the most important components of an attack happen very quickly. Autonomous response unlocks the ability to react at machine speed to identify and contain threats in that short window.

The second key advantage of autonomous response is that it enables “always-on” defense. Even with the best intentions in the world, security teams will always be constrained by resources. There aren’t enough people to defend everything all the time. Organizations need a layer that can augment the human team, providing them time to think and respond with crucial human context, like business and strategy acumen. Autonomous response capabilities allow the AI to make decisions instantaneously. These micro-decisions give human teams enough time to make those macro-decisions.

Leveling up: Leveraging attack path modeling

Once an organization has matured its thinking to the point of assuming breach, the next question is understanding how attackers traverse the network, Lorimer says. AI can now help businesses better understand their own systems and identify the highest-risk paths an attacker might take to reach their crown jewels: the most important information and assets.

This attack simulation lets them harden defenses around their most vulnerable areas, Lorimer says. Self-learning AI represents a paradigm shift: instead of building up defenses based on historical attack data, organizations can defend against novel threats.

Attack path modeling (APM) is a revolutionary technology because it maps paths that security teams may have little visibility into, or may never have considered vulnerable. The network is never static; a large, modern, innovative enterprise changes constantly. APM can therefore run continuously and alert teams to new attack paths created by a new third-party integration or a new device joining the digital infrastructure.

“This continuous, AI-based approach allows organizations to harden their defenses continually, rather than relying on biannual, or even more infrequent, red teaming exercises,” Lorimer says. “APM enables organizations to remediate vulnerabilities in the network proactively.”
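
One way to picture APM is as a graph problem: assets are nodes, possible lateral movements are edges, and the “cheapest” route to a crown-jewel asset is the one to harden first. The sketch below uses the networkx library with a made-up topology and difficulty weights to show the idea; a real APM product would build and re-score such a graph continuously from live infrastructure data.

```python
# Illustrative attack path modeling as a graph problem.
# Topology and edge weights are hypothetical.
import networkx as nx

g = nx.DiGraph()
# (source, target, difficulty): lower difficulty means an easier hop for an attacker
edges = [
    ("internet", "vpn_gateway", 3), ("internet", "phished_laptop", 1),
    ("phished_laptop", "file_server", 2), ("vpn_gateway", "file_server", 4),
    ("file_server", "erp_system", 2), ("phished_laptop", "erp_system", 6),
]
for src, dst, difficulty in edges:
    g.add_edge(src, dst, weight=difficulty)

# The lowest-cost route to the crown-jewel asset is the one to harden first.
path = nx.shortest_path(g, "internet", "erp_system", weight="weight")
cost = nx.shortest_path_length(g, "internet", "erp_system", weight="weight")
print("Highest-risk path:", " -> ".join(path), "| total difficulty:", cost)
# Re-running this whenever a device or integration is added surfaces new paths.
```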

Choosing a cybersecurity solution

When choosing a cybersecurity solution, there are a few things ISOs need to look for, Lorimer says. First, the solution should augment the human teams without creating substantial additional work. The technologies should be able to increase the value that an organization delivers.

ISOs should also look to eliminate significant overlaps, and fill gaps, in their existing security stacks. Today’s solutions can replace much of the existing stack with better, faster, more automated, technology-led approaches.

Beyond the technology itself, ISOs must seek out a vendor that adds human expertise and contextual analysis on top.

“For example, Darktrace’s Security Operations Center (SOC) and Ask the Expert services allow our team at Hexagon to glean insights from their global fleet, partner community, and entire customer base,” Lorimer says. “Darktrace works with companies across all different industries and geographies, and that context allows us to understand threats and trends that may not have immediately impacted us yet.” 

Hexagon operates in two key industry sectors, manufacturing and software engineering, so each facet of the business faces different, specific threats from different threat actors. Darktrace’s SOC offers insights from broader industry experts and analysts based on their wealth of knowledge.

But even with the best tools, you can’t solve every problem. You need to focus on solving the issues that will genuinely affect your ability to deliver to your customers and, thus, your bottom line. You should establish controls that can help manage and reduce that risk.

“It’s all about getting in front of issues before they can escalate and mapping out potential consequences,” Lorimer says. “It all comes down to understanding risk for your organization.”

For more insight into the current threat landscape and to learn more about how AI can transform your cybersecurity program, don’t miss this VB On-Demand event!

Watch free on-demand here.

You’ll learn about:

  • Protecting and securing citizens, nations, facilities, and data with autonomous decision making
  • Applying continuous AI feedback systems to improve outcomes and harden security systems
  • Simulating real-world scenarios to understand attack paths adversaries may leverage against critical assets
  • Fusing the physical and digital worlds to create intelligent security for infrastructure

Presenters:

  • Nicole Eagan, Chief Strategy Officer and AI Officer, Darktrace
  • Norbert Hanke, Executive Vice President, Hexagon
  • Mike Beck, Global CISO, Darktrace
  • Steve Lorimer, Group Privacy & Information Security Officer, Hexagon
  • Chris Preimesberger, Moderator, Contributing Writer, VentureBeat

Repost: Original Source and Author Link

Categories
AI

Microsoft is giving businesses access to OpenAI’s powerful AI language model GPT-3

It’s the AI system once deemed too dangerous to release to the public by its creators. Now, Microsoft is making an upgraded version of the program, OpenAI’s autocomplete software GPT-3, available to business customers as part of its suite of Azure cloud tools.

GPT-3 is the best known example of a new generation of AI language models. These systems primarily work as autocomplete tools: feed them a snippet of text, whether an email or a poem, and the AI will do its best to continue what’s been written. Their ability to parse language, however, also allows them to take on other tasks like summarizing documents, analyzing the sentiment of text, and generating ideas for projects and stories — jobs with which Microsoft says its new Azure OpenAI Service will help customers.

Here’s an example scenario from Microsoft:

“A sports franchise could build an app for fans that offers reasoning of commentary and a summary of game highlights, lowlights and analysis in real time. Their marketing team could then use GPT-3’s capability to produce original content and help them brainstorm ideas for social media or blog posts and engage with fans more quickly.”

GPT-3 is already being used for this sort of work via an API sold by OpenAI. Startups like Copy.ai promise that their GPT-derived tools will help users spruce up work emails and pitch decks, while more exotic applications include using GPT-3 to power a choose-your-own-adventure text game and chatbots pretending to be fictional TikTok influencers.
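
For a sense of what that prompt-completion pattern looks like in practice, here is a minimal sketch using the legacy openai Python client of the period (Azure OpenAI Service exposes a similar completions interface behind Azure’s access controls). The engine name, prompt and parameters are illustrative assumptions.

```python
# Minimal sketch of the prompt-completion pattern: feed the model text,
# it continues it. Engine name, prompt and settings are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

match_report = "The home side trailed 2-0 at halftime before scoring three unanswered goals..."
response = openai.Completion.create(
    engine="davinci",  # assumed engine name
    prompt=f"Summarize the following game recap in two sentences:\n\n{match_report}\n\nSummary:",
    max_tokens=80,
    temperature=0.3,
)
print(response.choices[0].text.strip())  # the model continues the prompt with a summary
```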

While OpenAI will continue selling its own API for GPT-3 to provide customers with the latest upgrades, Microsoft’s repackaging of the system will be aimed at larger businesses that want more support and safety. That means its service will offer tools like “access management, private networking, data handling protections [and] scaling capacity.”

It’s not clear how much this might cannibalize OpenAI’s business, but the two companies already have a tight partnership. In 2019, Microsoft invested $1 billion in OpenAI and became its sole cloud provider (a vital relationship in the compute-intensive world of AI research). Then, in September 2020, Microsoft bought an exclusive license to directly integrate GPT-3 into its own products. So far, these efforts have focused on GPT-3’s code-generating capacities, with Microsoft using the system to build autocomplete features into its suite of PowerApps applications and its Visual Studio Code editor.

These limited applications make sense given the huge problems associated with large AI language models like GPT-3. First: a lot of what these systems generate is rubbish, and requires human curation and oversight to sort the good from the bad. Second: these models have also been shown time and time again to incorporate biases found in their training data, from sexism to Islamophobia. They are more likely to associate Muslims with violence, for example, and hew to outdated gender stereotypes. In other words: if you start playing around with these models in an unfiltered format, they’ll soon say something nasty.

Microsoft knows only too well what can happen when such systems are let loose on the general public (remember Tay, the racist chatbot?). So, it’s trying to avoid these problems with GPT-3 by introducing various safeguards. These include granting access to use the tool by invitation only; vetting customers’ use cases; and providing “filtering and monitoring tools to help prevent inappropriate outputs or unintended uses of the service.”

However, it’s not clear if these restrictions will be enough. For example, when asked by The Verge how exactly the company’s filtering tools work, or whether there was any proof that they could reduce inappropriate outputs from GPT-3, the company dodged the question.

Emily Bender, a professor of computational linguistics at the University of Washington who’s written extensively on large language models, says Microsoft’s reassurances are lacking in substance. “As noted in [Microsoft’s] press release, GPT-3’s training data potentially includes ‘everything from vulgar language to racial stereotypes to personally identifying information,’” Bender told The Verge over email. “I would not want to be the person or company accountable for what it might say based on that training data.”

Bender notes that Microsoft’s introduction of GPT-3 fails to meet the company’s own AI ethics guidelines, which include a principle of transparency — meaning AI systems should be accountable and understandable. Despite this, says Bender, the exact composition of GPT-3’s training data is a mystery and Microsoft is claiming that the system “understands” language — a framing that is strongly disputed by many experts. “It is concerning to me that Microsoft is leaning in to this kind of AI hype in order to sell this product,” said Bender.

But although Microsoft’s GPT-3 filters may be unproven, it can avoid a lot of trouble by simply selecting its customers carefully. Large language models are certainly useful as long as their output is checked by humans (though this requirement does negate some of the promised gains in efficiency). As Bender notes, if Azure OpenAI Service is just helping to write “communication aimed at business executives,” it’s not too problematic.

“I would honestly be more concerned about language generated for a video game character,” she says, as this implementation would likely run without human oversight. “I would strongly recommend that anyone using this service avoid ever using it in public-facing ways without extensive testing ahead of time and humans in the loop.”

Repost: Original Source and Author Link

Categories
AI

Report: Sustainability is a top 10 priority for CEOs this year

We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 – 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!


A recent survey of CEOs and senior executives by Gartner, Inc. revealed significant shifts in their thinking regarding people, purpose, prices and productivity in 2022, specifically on matters of sustainability, workforce issues and inflation.

Among the key findings, artificial intelligence (AI) is now reported as the most impactful new technology among CEOs for the third year in a row. Conversely, 63% of CEOs see the metaverse as either not applicable or very unlikely to be a key technology for their business.

Additionally, for the first time in the history of the survey, CEOs placed environmental sustainability in their top 10 strategic business priorities, coming in at 8th place. Nearly three-quarters of CEOs agreed that increasing environmental, social and governance (ESG) efforts attracts investors toward their companies. Sustainability also appears as a competitive differentiator for CEOs in 2022 and 2023, on the same level as brand trust among respondents.

CEOs’ top 10 strategic business priority areas for 2022-2023, in order: growth; technology-related; workforce; corporate; financial; products and services; customer; environmental sustainability; cost; sales.

Workforce issues, such as talent retention, moved up in priority for CEOs for the second year in a row, only slightly behind technology-related issues such as digitalization and cybersecurity, and significantly ahead of financial issues such as profitability and cash flow.

Regarding inflation, 62% of CEOs see general price inflation as a persistent or long-term issue. Their top response to inflation is to raise prices (51% of respondents), rather than responding with productivity and efficiency (22% of respondents).

Overall, CEO perspectives shifted significantly in 2022, impacted by the ongoing effects of the pandemic and more recently the Russian invasion of Ukraine. However, CEOs’ digital business ambition continues to rise, unabated by the pandemic and related crises.

The annual Gartner 2022 CEO and Senior Business Executive Survey was conducted from July 2021 through December 2021 among more than 400 CEOs and other senior business executives in North America, EMEA and APAC, across different industries, revenue bands and company sizes.

Read the full report by Gartner.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.

Repost: Original Source and Author Link

Categories
AI

Facebook is shutting down its Face Recognition tagging program

Meta (formerly known as Facebook) is discontinuing Facebook’s Face Recognition feature following a lengthy privacy battle. Meta says the change will roll out in the coming weeks. As part of it, the company will stop using facial recognition algorithms to tag people in photographs and videos, and it will delete the facial recognition templates that it uses for identification.

Meta artificial intelligence VP Jerome Pesenti calls the change part of a “company-wide move to limit the use of facial recognition in our products.” The move follows a lawsuit that accused Facebook’s tagging tech of violating Illinois’ biometric privacy law, leading to a $650 million settlement in February. Facebook previously restricted facial recognition to an opt-in feature in 2019.

“Looking ahead, we still see facial recognition technology as a powerful tool,” writes Pesenti in a blog post, citing possibilities like face-based identity verification. “But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole.” Pesenti notes that regulators haven’t settled on comprehensive privacy regulation for facial recognition. “Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

Pesenti says more than one-third of Facebook’s daily active users had opted into Face Recognition scanning, and over a billion face recognition profiles will be deleted as part of the upcoming change. Once it takes effect, Facebook’s automated alt-text system for blind users will no longer name people when it analyzes and summarizes media, and the platform will no longer suggest people to tag in photographs or automatically notify users when they appear in photos and videos posted by others.

Facebook’s decision won’t stop independent companies like Clearview AI — which built huge image databases by scraping photos from social networks, including Facebook — from using facial recognition algorithms trained with that data. US law enforcement agencies (alongside other government divisions) work with Clearview AI and other companies for facial recognition-powered surveillance. State or national privacy laws would be needed to restrict the technology’s use more broadly.

By shutting down a feature it’s used for years, Meta is hoping to bolster user confidence in its privacy protections as it prepares a rollout of potentially privacy-compromising virtual and augmented reality technology. The company launched a pair of camera-equipped smart glasses in partnership with Ray-Ban earlier this year, and it’s gradually launching 3D virtual worlds on its Meta VR headset platform. All these efforts will require a level of trust from users and regulators, and giving up Facebook auto-tagging — especially after a legal challenge to the program — is a straightforward way to bolster it.

Repost: Original Source and Author Link