
Why AI and digital twins could be the keys to a sustainable future

Digital twins aren’t new, but AI is supercharging what they can do. Together, they are transforming how products are designed, manufactured and maintained. The combination of the technologies provides forensic insights into our increasingly complex and interconnected world.

By deploying digital twins and AI, organizations obtain granular insights into their operations, enabling them to achieve significant benefits spanning cost savings, efficiency gains and improved sustainability efforts. Product quality is also enhanced through a reduction in defects and the accelerated resolution of issues throughout the lifecycle. In addition, innovation increases through more frequent and comprehensive development.

Gartner defines a digital twin as “a digital representation of a real-world entity or system. Data from multiple digital twins can be aggregated for a composite view across a number of real-world entities, such as a power plant or a city, and their related processes.” AI enhances digital twins, enabling the technology to look at what-if scenarios and run simulations, providing previously unavailable insights. This improved situational awareness of cause and effect supports more agile and sustainable decision-making.
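
To make the what-if idea concrete, here is a minimal sketch in Python of a digital twin of a pump. Everything here is illustrative: the class, sensor fields and the cube-law scenario model are invented for the example, not taken from any vendor's product.

```python
from dataclasses import dataclass

@dataclass
class PumpTwin:
    """Toy digital twin of a pump: its state mirrors the physical asset."""
    flow_rate: float   # liters/minute, fed from field sensors
    power_draw: float  # kilowatts
    efficiency: float  # dimensionless, 0..1

    def update(self, flow_rate: float, power_draw: float) -> None:
        # Sync the twin with the latest sensor readings.
        self.flow_rate = flow_rate
        self.power_draw = power_draw
        self.efficiency = flow_rate / (power_draw * 60.0)  # illustrative formula

    def what_if(self, flow_scale: float) -> float:
        # Run a scenario without touching the real pump: estimate power
        # draw if flow were scaled, assuming power grows with the cube
        # of flow (pump affinity laws) -- a deliberate simplification.
        return self.power_draw * flow_scale ** 3

twin = PumpTwin(flow_rate=120.0, power_draw=4.0, efficiency=0.5)
twin.update(flow_rate=130.0, power_draw=4.4)
print(f"Power if we throttle flow to 80%: {twin.what_if(0.8):.2f} kW")
```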

The ESG imperative

Digital twins are not only helping optimize operations; they play a pivotal role in enabling organizations to realize their environmental, social and governance (ESG) goals. Research from Capgemini found that 57% of organizations believe digital twin technology is critical to improving sustainability efforts. The digital twin provides a way to model and understand how to reduce energy consumption and emissions, so organizations can test scenarios against their sustainability and climate goals. And with sustainability a global imperative, this will accelerate adoption, particularly as AI is increasingly used to augment digital twins.


So, let’s look at how digital twins and AI are helping improve sustainability in various settings.

Smart cities 

Digital twins and AI will play a vital role as cities strive to reduce their environmental impact. Together they can create a virtual emulation to help planners understand how to reduce congestion, emissions, pollution and other challenges by analyzing data from various sources and testing different variables in the virtual model.

One city pioneering this approach is Las Vegas, which uses the technology to model future energy needs, emissions, parking, traffic and emergency management. IoT sensors collect data from cars, charging networks and municipal infrastructure to model and scenario plan. City officials will use the insights garnered to inform ESG policies and priorities.

As more cities across the globe focus on becoming carbon neutral, digital twins and AI provide a way to model and process vast volumes of data from disparate sources so municipalities can fully understand how different decisions and policies will impact strategic climate goals.

Smart industry

In industrial settings, digital twins provide manufacturers with a way to understand how to optimize their operations and improve sustainability. For example, the simulation can identify potential pain points, highlight where energy loss is occurring and highlight opportunities to reduce consumption. The AI algorithm can process data, recognize patterns and predict future outcomes far beyond human cognitive abilities. In addition, the virtual simulation reduces the waste and power associated with building physical prototypes.
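
As a rough illustration of the energy-loss use case, the sketch below compares measured consumption at each production stage against what a twin's model predicts and flags deviations. The stage names, figures and threshold are all invented for the example.

```python
# Flag production-line stages whose measured energy use exceeds what
# the twin's model predicts. All values below are invented.
expected_kwh = {"press": 40.0, "weld": 55.0, "paint": 70.0, "assembly": 30.0}
measured_kwh = {"press": 41.0, "weld": 68.0, "paint": 71.5, "assembly": 30.2}

TOLERANCE = 0.10  # flag stages more than 10% above the twin's prediction

for stage, expected in expected_kwh.items():
    measured = measured_kwh[stage]
    excess = (measured - expected) / expected
    if excess > TOLERANCE:
        print(f"{stage}: {excess:.0%} above model -- likely energy loss")
```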

By creating an emulation of a production line, manufacturers can understand how to make changes at every stage that reduce environmental impact and improve efficiency — thereby increasing cost savings. Unilever tested the technologies at one site and realized savings of $2.8 million by reducing energy consumption and achieving an uptick in productivity.

These are just a handful of examples that underscore how AI and digital twins are ushering in a new era of intelligent manufacturing.

Smart buildings

Another area where digital twins are aiding sustainability efforts is in creating smart buildings. With increasing regulation aimed at designing greener buildings, the construction industry needs a way to scenario plan to reduce the environmental impact and minimize energy consumption before any ground is broken.

The digital model enables infrastructure owners to utilize resources better, address human needs, and make decisions that support a more sustainable built environment. Better resource planning is now possible by tapping into data from various sources. To provide perspective on the impact, Accenture estimates that energy consumption in buildings can be reduced by 30% to 80% using virtual twin technologies.

As digital twin adoption and intelligent technologies become increasingly pervasive, they will enable better decisions that support a more circular, less carbon-intensive economy, ultimately creating a more sustainable planet.

Cheryl Ajluni is IoT solutions lead at Keysight Technologies.



So you want to be a prompt engineer: Critical careers of the future

Cartoonists have an excellent understanding of how stories are shaped in a concise way with an eye for design. Recently, cartoonist extraordinaire Roz Chast appeared in The New Yorker prompting DALL-E images, and I was immediately drawn to her prompts above and beyond the actual output of the machine.

The article’s title, “DALL-E, Make Me Another Picasso, Please” is a play on words like the old Lenny Bruce joke about a genie in a bottle giving an old man anything he wants. The old man asks the genie to “make me a malted” and poof! the genie turns him into a milkshake.

Like the genie’s gift, AIs are powerful but unruly and open to abuse, making the intercession of a prompt engineer a new and important job in the field of data science. These are people who understand that in constructing a request they will rely on artful skill and persistence to pull a good (and non-harmful) result from the mysterious soul of a machine. The best AI prompt engineers would be those who would actually consider whether there is a need for more derivative Picasso art, or what obligations should be considered before asking a machine to plagiarize the work of a famous painter.

Lately, concerns have centered around whether DALL-E will change the already eternally muddy definition of artistic genius. But asking who gets to be called a creative misses the point. What is art, and who gets to claim the title of artist are philosophical (and infrequently ethical) questions that have been argued for millennia. They don’t address the fundamental fusion happening between data science and the humanities. Successful prompt craft, whether for DALL-E or GPT-3 or any future algorithm-driven image and language model, will come to require not only an engineer’s understanding of how machines learn, but an arcane knowledge of art history, literature and library science as well. 


Artists and designers who claim that this kind of AI will end their careers are certainly invested in how this integration will progress. Vox recently published a video titled “What AI art means for human artists” that explores their anxiety in a way that acknowledges there is a very real evolution at hand despite the current dearth of “prompt craft” and wordsmithing involved. People are just starting to realize that we may reach a point where trademarking a word or phrase would not protect intellectual property in the same way that it does currently. What aspect of a prompt could we even copyright? How would derivative works be acknowledged? Could there be a metadata tag on every image stating whether it is “appropriate or permitted for AI consumption?” No one seems to be mentioning these speed bumps in the rush to get a personal MidJourney account. 

Alex Shoop, an engineer at DataRobot and an expert in AI systems design, shared a few thoughts on this. “I think an important aspect of the ‘engineer’ part of ‘prompt engineer’ will include following best practices like robust testing, reproducible results and using technologies that are safe and secure,” he said. “For example, I can imagine a prompt engineer would set up many different prompt texts that are slightly varied, such as ‘cat holding red balloon in a backyard’ vs. ‘cat holding blue balloon in backyard’ in order to see how small changes would lead to different results even though DALL-E and generative AI models are unable to create deterministic or even reproducible results.” Despite that unpredictability, Shoop says testing and tracking the experimental setups is one skill he would expect to see in a true “prompt engineer” job description.
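
A bare-bones version of the experiment harness Shoop describes might look like the following sketch. The `generate_image` function is a placeholder for whatever image-generation API is in use, and the log format is an assumption; the point is the systematic variation and traceable record-keeping, not any particular vendor's client.

```python
import itertools, json, time

def generate_image(prompt: str) -> str:
    """Stand-in for a real image-generation API call (e.g., a DALL-E client).
    Would return a URL or file path in a real implementation."""
    raise NotImplementedError

# Systematically vary one attribute at a time, as Shoop describes.
colors = ["red", "blue", "green"]
settings = ["in a backyard", "on a beach"]
template = "cat holding {color} balloon {setting}"

experiment_log = []
for color, setting in itertools.product(colors, settings):
    prompt = template.format(color=color, setting=setting)
    record = {"prompt": prompt, "timestamp": time.time()}
    try:
        record["result"] = generate_image(prompt)
    except NotImplementedError:
        record["result"] = None  # wire up a real API client here
    experiment_log.append(record)

# Persist the setup so the experiment is at least traceable,
# even if the model's outputs are not reproducible.
with open("prompt_experiments.json", "w") as f:
    json.dump(experiment_log, f, indent=2)
```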

Before the rise of high-end graphics and user interfaces, most science and engineering students saw little need to study visual art and product design. They weren’t as utilitarian as code. Now technology has created a symbiosis between these disciplines. The writer who contributed the original reference text descriptions, the cataloguer who constructed the metadata for the images as they were scraped and then dumped into a repository, the philosopher who evaluated the bias implicit in the dataset all provide necessary perspectives in this brave new world of image generation. 

What results is a prompt engineer with a combination of similar skill sets who understands the repercussions if OpenAI uses more male artists than female. Or if one country’s art is represented more than another’s. Ask a librarian about the complexities of cataloging and categorization as it has been done for centuries and they will tell you: it’s painstaking. Prompt engineering will require attention to relationships, subgroups and location, along with an ability to examine censorship and respect copyright laws. While DALL-E was being trained on representative images of the Mona Lisa, the humans in the loop with an awareness of these minutiae were critical to reducing bias and encouraging fairness in all outcomes.

It’s not just offensive abuses that can be easily imagined. In a fascinating turn of events, there are even multi-million-dollar art forgeries being reported by artists who use AI as their medium of choice. All enormous datasets or large networks of models contain, buried deep within the data, intrinsic biases, labeling gaps and outright fraud that challenge quick ethical solutions. OpenAI’s Natalie Summers, who runs OpenAI’s Instagram account and is the “human in the loop” responsible for enforcing the rules that are supposed to guard against output that could damage reputations or incite outrage, expresses similar concerns. 

This leads me to conclude that to be a prompt engineer is to be someone not only responsible for creating art, but willing to serve as a gatekeeper to prevent misuse like forgeries, hate speech, copyright violations, pornography, deepfakes and the like. Sure it’s nice to churn out dozens of odd, slightly disturbing surreal Dada art ‘products,’ but there should be something more compelling buried under the mound of dross that results from a toss-away visual experiment. 

I believe DALL-E has brought us to an inflection point in AI art, where both artists and engineers will need to comprehend how data science manipulates and enables behavior while also being able to understand how machine learning models work. In order to design the output of these machine learning tools, we will need experience beyond engineering and design, in the same way that understanding the physics of light and aperture takes photographic art beyond the mundane. 

Image adapted from Professor Neri Oxman’s “Cycle of Creativity.”

This diagram is an abbreviation of Professor Neri Oxman’s “Cycle of Creativity.” Her work with the Mediated Matter research group at the MIT Media Lab explored the intersection of design, biology, computing and materials engineering, with an eye on how all these fields optimally interact with one another. Likewise, to become a “prompt engineer” (a job title not yet formally embraced by any discipline), you will need an awareness of these intersections as broad as hers. It’s a serious job with multiple specialties.

Future DALL-E artists, whether self-taught or schooled, will always need the ability to communicate and design an original point of view. Like any librarian with image metadata and curation skills; like any engineer able to structure and test reproducible results; like historians able to connect Picasso’s influences with what was happening in the world as he was painting about war and beauty, “prompt engineer” will be an artistic career of the future, requiring a blend of scientific and artistic talents that will guide the algorithm. It will continue to be humans who inject their ideas into machines in service of the newer and ever-changing language of creation.

Tori Orr is a member of DataRobot’s AI Ethics Communications team.



Is Intel Labs’ brain-inspired AI approach the future of robot learning? 

Can computer systems develop to the point where they can think creatively, identify people or items they have never seen before, and adjust accordingly — all while working more efficiently, with less power? Intel Labs is betting on it, with a new hardware and software approach using neuromorphic computing, which, according to a recent blog post, “uses new algorithmic approaches that emulate how the human brain interacts with the world to deliver capabilities closer to human cognition.” 

While this may sound futuristic, Intel’s neuromorphic computing research is already fostering interesting use cases, including how to add new voice interaction commands to Mercedes-Benz vehicles; create a robotic hand that delivers medications to patients; or develop chips that recognize hazardous chemicals.

A new approach in the face of capacity limits

Machine learning-driven systems, such as autonomous cars, robotics, drones and other self-sufficient technologies, have relied on ever-smaller, more powerful, energy-efficient processing chips. Traditional semiconductors, however, are now reaching their miniaturization and power-capacity limits, compelling experts to conclude that a new approach to semiconductor design is required.

One intriguing option that has piqued tech companies’ curiosity is neuromorphic computing. According to Gartner, traditional computing technologies based on legacy semiconductor architecture will hit a digital wall by 2025, forcing a shift to new paradigms such as neuromorphic computing, which mimics the physics of the human brain and nervous system by utilizing spiking neural networks (SNNs): spikes from individual electronic neurons activate other neurons in a cascading chain.
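
For intuition, here is a toy leaky integrate-and-fire chain showing that cascading activation. The parameters are arbitrary and this is vastly simpler than any production SNN; it only illustrates the mechanism of spikes propagating neuron to neuron.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) chain: a spike from one neuron
# injects current into the next, illustrating the cascade described above.
N_NEURONS, STEPS = 5, 100
THRESHOLD, LEAK, COUPLING = 1.0, 0.95, 0.9

potential = np.zeros(N_NEURONS)
for t in range(STEPS):
    potential *= LEAK                # membrane potential decays each step
    potential[0] += 0.08             # constant input current drives neuron 0
    spikes = potential >= THRESHOLD  # neurons at threshold fire...
    potential[spikes] = 0.0          # ...and reset
    downstream = np.roll(spikes, 1)  # each spike feeds the next neuron
    downstream[0] = False
    potential[downstream] += COUPLING
    if spikes.any():
        print(f"t={t:3d}  spikes at neurons {np.flatnonzero(spikes)}")
```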


Neuromorphic computing will enable fast vision and motion planning at low power, Yulia Sandamirskaya, a research scientist at Intel Labs in Munich, told VentureBeat via email. “These are the key bottlenecks to enable safe and agile robots, capable to direct their actions at objects in dynamic real-world environments.”

In addition, neuromorphic computing “expands the space of neural network-based algorithms,” she explained. By co-locating memory and compute in one chip, it allows for energy-efficient processing of signals and enables on-chip continual, lifelong learning.

One size does not fit all in AI computing

As the AI space becomes increasingly complex, a one-size-fits-all solution cannot optimally address the unique constraints of each environment across the spectrum of AI computing.

“Neuromorphic computing could offer a compelling alternative to traditional AI accelerators by significantly improving power and data efficiency for more complex AI use cases, spanning data centers to extreme edge applications,” Sandamirskaya said.

Neuromorphic computing closely mirrors how the brain transmits and receives signals from biological neurons, which spark the movements and sensations in our bodies. However, compared to traditional approaches, where systems orchestrate computation in strict binary terms, neuromorphic chips compute more flexibly and broadly. In addition, by constantly re-mapping neural networks, the SNNs replicate natural learning, allowing the neuromorphic architecture to make decisions in response to patterns learned over time.

These asynchronous, event-based SNNs enable neuromorphic computers to achieve orders of magnitude power and performance advantages over traditional designs. Sandamirskaya explained that neuromorphic computing will be especially advantageous for applications that must operate under power and latency constraints and adapt in real time to unforeseen circumstances. 

A study by Emergen Research predicts that the worldwide neuromorphic processing industry will reach $11.29 billion by 2027.

Intel’s real-time learning solution

One particular challenge is that intelligent robots require object recognition to substantially comprehend working environments. Intel Labs’ new neuromorphic computing approach to neural network-based object learning — in partnership with the Italian Institute of Technology and the Technical University of Munich — is aimed at future applications like robotic assistants interacting with unconstrained environments, including those used in logistics, healthcare, or elderly care. 

In a simulated setup, a robot actively senses objects by moving its eyes through an event-based camera or dynamic vision sensor. The events collected are used to drive a spiking neural network (SNN) on Intel’s neuromorphic research chip, called Loihi. If an object or view is new to the model, its SNN representation is learned or modified; if the object is known, the network recognizes it and provides feedback to the user. This neuromorphic computing technology allows robots to continuously learn about every nuance in their environment.
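
Intel has not published its method in this form, but the learn-or-recognize loop itself can be sketched with a simple nearest-prototype classifier standing in for the SNN representations. The threshold, feature vectors and update rule below are all invented for illustration, not Intel's algorithm.

```python
import numpy as np

# Toy learn-or-recognize loop: each object is a stored prototype vector;
# unfamiliar views create a prototype, familiar ones refine it.
prototypes: dict[str, np.ndarray] = {}
NOVELTY_THRESHOLD = 0.5  # distance above which a view counts as "new"

def observe(label: str, features: np.ndarray) -> str:
    if prototypes:
        nearest = min(prototypes,
                      key=lambda k: np.linalg.norm(prototypes[k] - features))
        if np.linalg.norm(prototypes[nearest] - features) < NOVELTY_THRESHOLD:
            # Known object: refine its prototype and report recognition.
            prototypes[nearest] = 0.9 * prototypes[nearest] + 0.1 * features
            return f"recognized {nearest}"
    prototypes[label] = features.copy()  # new object: learn it
    return f"learned new object {label}"

rng = np.random.default_rng(0)
mug = rng.normal(size=8)
print(observe("mug", mug))                                    # learned
print(observe("mug", mug + rng.normal(scale=0.05, size=8)))   # recognized
```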

Intel and its collaborators successfully demonstrated continual interactive learning on the Loihi neuromorphic research chip, measuring roughly 175 times lower energy to learn a new object instance, with similar or better speed and accuracy than conventional methods running on a central processing unit (CPU).

Computation is more energy-efficient

Sandamirskaya said computation is more energy efficient because it uses clockless, asynchronous circuits that naturally exploit sparse, event-driven analysis. 

“Loihi is the most versatile neuromorphic computing platform that can be used to explore many different types of novel bio-inspired neural-network algorithms,” she said, ranging from deep learning to attractor networks, optimization and search algorithms, sparse coding and symbolic vector architectures.

Loihi’s power efficiency also shows promise for making assistive technologies more valuable and effective in real-world situations. Since Loihi is up to 1,000 times more energy efficient than general-purpose processors, a Loihi-based device could require less frequent charging, making it ideal for use in daily life.

Intel Labs’ work contributes to neuronal network-based machine learning for robots with a small power footprint and interactive learning capability. According to Intel, such research is a crucial step in improving the capabilities of future assistive or manufacturing robots.

“On-chip learning will enable ongoing self-calibration of future robotic systems, which will be soft and thus less rigid and stable, as well as fast learning on the job or in an interactive training session with the user,” Sandamirskaya said. 

Intel Labs: The future is bright for neuromorphic computing

Neuromorphic computing isn’t yet available as a commercially viable technology.

While Sandamirskaya says the neuromorphic computing movement is “gaining steam at an amazing pace,” commercial applications will require improvement of neuromorphic hardware in response to application and algorithmic research — as well as the development of a common cross-platform software framework and deep collaborations across industry, academia and governments. 

Still, she is hopeful about the future of neuromorphic computing.

“We’re incredibly excited to see how neuromorphic computing could offer a compelling alternative to traditional AI accelerators,” she said, “by significantly improving power and data efficiency for more complex AI use cases spanning data center to extreme edge applications.”



Don’t lift a finger: AI-driven voice commands are the future of the smart home

Among the fastest-growing verticals in the smart home space is the smart appliance market. However, much of this growth hinges on adding voice control technology. Voice-controlled artificial intelligence assistants — where the consumer uses their voice to direct, control or engage with technology — are on track to become the primary method for communicating with devices.

Voice control: Give the people what they want 

Consumers, who are used to having answers at their fingertips, have easily (and eagerly) adjusted to using their voice. The addition of artificial intelligence (AI) has made the transition seamless. Like web and mobile, voice is now transforming from just another interface into a distinct consumer channel. It is estimated that by 2024 the number of voice assistants will reach a staggering 8.4 billion, overtaking the world’s population. Further, the Voice Consumer Index 2021 surveyed technology users and found that one-third use voice technology daily. It is clear that voice technology is becoming an essential part of our day-to-day lives. The addition of voice user interfaces (VUIs) to appliances, such as washing machines and refrigerators, will only accelerate the trend within the home.

AI around the home

Homes and devices are becoming more sophisticated and thus more difficult to operate seamlessly. Many appliances have deep feature sets that most users never access, thanks to the difficulty of the interface. With voice control, users don’t have to struggle with the microwave’s touchpad; they can simply say what they’re cooking and even save presets for favorite dishes. Users can tell the washing machine to be careful with delicate items, tell the kitchen faucet to fill a glass with water, and even tell the trash can to open itself when their hands are full.
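
A toy command handler hints at how dish-based commands and saved presets might work. The dishes, settings and string matching below are entirely invented; real appliances use trained intent classifiers rather than substring checks.

```python
# Toy command handler for a voice-enabled microwave: map a spoken dish
# to power/time settings and let users save presets. Purely illustrative.
presets = {"popcorn": (100, 150), "leftover pizza": (70, 60)}  # % power, seconds

def handle_utterance(text: str) -> str:
    text = text.lower().strip()
    if text.startswith("save preset "):
        name = text.removeprefix("save preset ")
        presets[name] = (80, 90)  # would capture the current settings
        return f"Saved preset '{name}'."
    for dish, (power, seconds) in presets.items():
        if dish in text:
            return f"Cooking {dish}: {power}% power for {seconds}s."
    return "Sorry, I don't know that dish yet."

print(handle_utterance("Heat up some popcorn"))
print(handle_utterance("save preset oatmeal"))
print(handle_utterance("make my oatmeal"))
```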


While voice control is present today mainly on smart speakers and mobile devices, voice is poised to go beyond them. Hardware and software advancements are making voice control practical for virtually any type of product. Industry efforts to increase voice service interoperability — a device’s ability to support multiple voice assistants — can also be advanced through design. Discreet or even invisible systems can be embedded into appliances, and new systems have immediate response times without cloud latency.

The next iteration of voice control products will see increased integration of voice into standalone products that operate in different areas around the home. Some of these devices will utilize cloud technologies for their back-end intelligence, while others will keep interactions local at the edge. A new generation of sophisticated use cases awaits.  

Voice control for everyone

While nearly any product can now host voice technology, there are still hurdles VUIs need to clear before becoming equally useful for everyone.

One such hurdle is the need to accurately and consistently respond to people of different ages, with different dialects or speech disabilities. Thanks to machine learning, edge devices will become even more intelligent and useful over time. New systems will be able to better distinguish between different household voices, opening up new levels of personalization and security. Professional organizations like Black in AI, Women in Machine Learning and Women in Voice are working to increase the representation of diverse voices in AI, voice technology and machine learning. Organizations such as these will ultimately lead to greater innovation and inclusivity.

Users also need more support from brands in order to learn all that voice technology can do for them. Most users are simply stumbling upon voice experiences. Per the Voice Consumer Index 2021, many figure out how to have voice work for their household through basic trial-and-error, followed by tapping into relatives and friends, and then checking product packaging and websites. People want to do more with their voice assistants but are limited. Education is key to giving consumers what they already desire. 

AI-powered voice brings it all together 

Voice is the unified method of control that helps all of these various devices work “together.” Voice makes the smart home smarter and easier to manage. Artificial intelligence, combined with machine learning, empowers devices to put an entire home at the user’s beck and call. All that’s needed is a simple wake word or detectable sound.

Algorithms are often the “secret sauce” that differentiates one voice control product from another, but this also leads to lack of interoperability. Devices that fail to work well together make simple tasks more difficult and are a top reason for consumer frustration. As the market matures, consumers will choose devices that can offer an integrated experience. To answer this demand, manufacturers will need to choose components, dev kits and SDKs that support voice service interoperability to allow customers to seamlessly talk to the service of their choice.
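
In design terms, interoperability usually means an abstraction layer between device firmware and individual assistants. The sketch below illustrates that idea; the adapter classes and wake words are placeholders, not real vendor SDKs.

```python
from abc import ABC, abstractmethod

# Sketch of a device-side abstraction over multiple voice services, so
# firmware doesn't hard-wire a single assistant. Names are placeholders.
class VoiceService(ABC):
    @abstractmethod
    def wake_words(self) -> list[str]: ...
    @abstractmethod
    def handle(self, audio: bytes) -> str: ...

class AssistantA(VoiceService):
    def wake_words(self) -> list[str]:
        return ["hey appliance"]
    def handle(self, audio: bytes) -> str:
        return "AssistantA response"  # would call the vendor SDK here

class AssistantB(VoiceService):
    def wake_words(self) -> list[str]:
        return ["ok kitchen"]
    def handle(self, audio: bytes) -> str:
        return "AssistantB response"

def route(wake_word: str, audio: bytes, services: list[VoiceService]) -> str:
    for service in services:
        if wake_word in service.wake_words():
            return service.handle(audio)
    return "no service registered for that wake word"

print(route("ok kitchen", b"...", [AssistantA(), AssistantB()]))
```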

The alignment of market groups to a single standard, such as Matter, will facilitate the deployment of smart devices in the home. Standards give users the confidence that their chosen smart devices will reliably work together while taking the guesswork out of the purchasing process. Ultimately, any consumer will have the option of a connected home that is secure and seamless. 

The future of voice control will bring the freedom to speak voice commands without needing a smart speaker nearby. With the right support, voice technology will easily become a primary method of communication.  

Brian Crannell is Senior Vice President, Corporate Development and Audio Solutions for Knowles Corporation.



We are the artist: Generative AI and the future of art

Before writing a single word of this article, I created the image above using a new type of AI software that produces “generative artwork.”  The process took about 15 minutes and did not involve paints or canvases.  I simply entered a few lines of text to describe the image that I wanted – a robot holding a paintbrush and standing at an easel.  

After a few iterations, making adjustments and revisions, I achieved a result I was happy with. To me, the image above is an impressive piece of original artwork.  After all, it captures the imagination and evokes an emotional response that seems no less authentic than human art. 

Does this mean that AI is now as creative and evocative as human artists? 

No.


Generative AI systems are not creative at all.  In fact, they lack any real intelligence. Sure, I typed in a request for an image of a robot holding a paintbrush, but the AI system had no actual understanding of what a “robot” or a “paintbrush” is.  It created the artwork using a complex statistical process that correlates imagery with the words and phrases in the prompt.

The results look like human artwork because the system was trained on millions of human artifacts – drawings, paintings, prints, photos – most of it likely captured off the internet. I don’t mean to imply these systems are unimpressive. The technology is truly amazing and profoundly useful. It’s just not “creative” in the same way humans think of creativity.  

After all, the AI system did not feel anything while creating the work. It also didn’t consider the emotional response it was hoping to evoke from the viewer.  It didn’t draw upon any inherent artistic sensibilities. In essence, it did nothing that a human artist would do.  Yet, it created remarkable work.  

The image below is another example of a robot holding a paintbrush that was generated during my 15-minute session.  Although it wasn’t chosen to be used at the top of this article, I find it deeply compelling work, instilled with undeniable feeling:  

Generative Robot (Image created by author using Midjourney)

If the AI is not the artist, then who is?  

If we consider the pieces above to be original artwork, who was the artist?  It certainly wasn’t me. All I did was enter a text prompt and make a variety of choices and revisions.  At best, I was a collaborator. The artist also wasn’t the software, which has no understanding of what it created and possesses no ability to think or feel.  So, who was the artist? 

My view is that we all created the artwork – humanity itself.  

I believe we should consider humanity to be the artist of record. I don’t just mean people who are alive today, but every person who contributed to the millions of creative artifacts that generative AI systems are trained upon. 

It is not just the countless human artists who had their original works vacuumed up and digested by these AI systems, but also members of the public who shared the artwork, described it via social media posts or simply upvoted it so it became more prominent in the massive database we call the internet. 

To support this notion, I ask that you imagine an identical AI technology on some distant planet, developed by some other intelligent species and trained on millions of their creative artifacts. The output of that system might be artistic to them – evocative and impactful.  To us, it would probably be incomprehensible. I doubt we would recognize it as art.  

In other words, without being trained on a database of humanity’s creative artifacts, today’s AI systems would not generate anything that we would recognize as emotional artwork. Hence, my assertion that humanity should be the artist of record for large-scale generative art.

Generative Robot Artist (Image created by author using Midjourney)

Compensation 

If an individual artist created the robot pictures above, they would be compensated.  Similarly, if a team of artists had created the work, they too would be compensated. Big-budget movies are often staffed with hundreds of artists across many disciplines, all contributing to a single piece of artwork, all of them compensated. But what about generative artwork created by AI systems trained on millions upon millions of creative human artifacts? 

If we accept that humanity is the artist – who should be compensated? Clearly, the companies that provide generative AI software and computing power deserve substantial compensation. I have no regrets about paying the subscription fee that was required to generate the artwork above.  But there were also vast numbers of humans who participated in the creation of that artwork, their contributions inherent in the massive set of original content that the AI system was trained on.  

Should humanity be compensated?  

I believe it’s reasonable to consider a “humanity tax” on generative systems that are trained on massive datasets of human artifacts. It could be a modest fee on transactions, maybe paid into a central “humanity fund” or distributed to decentralized accounts using blockchain.

I know this may be a strange idea, but think of it this way: If a spaceship full of entrepreneurial aliens showed up and asked humanity to contribute our collective works to a massive database so they could generate derivative human artifacts for profit, we’d likely ask for compensation. 

Well, this is already happening here on earth. Without being asked for consent, we humans have contributed a vast collection of creative artifacts to some of the largest corporations this planet has ever seen — corporations that can now build generative AI systems and use them to sell derivative content for a profit. 

This suggests that a “humanity tax” is not a crazy idea, rather a reasonable first step in a world that is likely to use more and more generative AI tools in the coming years. Our contributions won’t just be used for making quick images at the top of articles like this one. Generative methods will be used for everything from crafting written essays and blog posts to generating custom videos, music, fashion and furniture, even fine artwork you hang on your walls.  All of it will draw upon large swaths of the collective works from humanity – the artist of record.

Louis Rosenberg, Ph.D. is a pioneer in the fields of VR, AR, and AI. His work began over thirty years ago in labs at Stanford and NASA.



Google is beta testing its AI future with AI Test Kitchen

It’s clear that the future of Google is tied to AI language models. At this year’s I/O conference, the company announced a raft of updates that rely on this technology, from new “multisearch” features that let you pair image searches with text queries to improvements for Google Assistant and support for 24 new languages in Google Translate.

But Google — and the field of AI language research in general — faces major problems. Google itself has seriously mishandled internal criticism, firing employees who raised issues with bias in language models and damaging its reputation with the AI community. And researchers continue to find issues with AI language models, from failings with gender and racial biases to the fact that these models have a tendency to simply make things up (an unnerving finding for anyone who wants to use AI to deliver reliable information).

Now, though, the company seems to be taking something of a step back — or rather a slower step forward. At I/O this year, there’s been a new focus on projects designed to test and remedy problems like AI bias, including a new way to measure skin tones that the company hopes will help with diversity in machine-vision models and a new app named AI Test Kitchen that will give select individuals access to the company’s latest language models in order to probe them for errors. Think of it as a beta test for Google’s future.

Step into the AI Test Kitchen

Over a video call ahead of I/O, Josh Woodward, senior director of product management at Google, is asking Google’s latest language model to imagine a marshmallow volcano.

“You’re at a marshmallow volcano!” says the AI. “It’s erupting marshmallows. You hear a giant rumble and feel the ground shake. The marshmallows are flying everywhere.”

Woodward is happy with this answer and prods the system again. “What does it smell like?” he asks. “It smells likes marshmallows, obviously,” the AI replies. “You can smell it all around you.” Woodward laughs: “Okay, so that one was very terse.” But at least it made sense.

Woodward is showing me AI Test Kitchen, an Android app that will give select users limited access to Google’s latest and greatest AI language model, LaMDA 2. The model itself is an update to the original LaMDA announced at last year’s I/O and has the same basic functionality: you talk to it, and it talks back. But Test Kitchen wraps the system in a new, accessible interface, which encourages users to give feedback about its performance.

As Woodward explains, the idea is to create an experimental space for Google’s latest AI models. “These language models are very exciting, but they’re also very incomplete,” he says. “And we want to come up with a way to gradually get something in the hands of people to both see hopefully how it’s useful but also give feedback and point out areas where it comes up short.”

Google wants to solicit feedback from users about LaMDA’s conversational skills.
Image: Google

The app has three modes: “Imagine It,” “Talk About It,” and “List It,” with each intended to test a different aspect of the system’s functionality. “Imagine It” asks users to name a real or imaginary place, which LaMDA will then describe (the test is whether LaMDA can match your description); “Talk About It” offers a conversational prompt (like “talk to a tennis ball about dog”) with the intention of testing whether the AI stays on topic; while “List It” asks users to name any task or topic, with the aim of seeing if LaMDA can break it down into useful bullet points (so, if you say “I want to plant a vegetable garden,” the response might include sub-topics like “What do you want to grow?” and “Water and care”).
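
Google hasn't described its feedback schema, but the kind of structured record such an app might collect per interaction could look like the sketch below, where every field name is a guess.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical per-interaction feedback record for a test app like this.
# Field names are assumptions, not Google's actual schema.
@dataclass
class TestKitchenFeedback:
    mode: str               # "imagine_it" | "talk_about_it" | "list_it"
    prompt: str
    model_response: str
    on_topic: bool          # did the model stay on the user's topic?
    flagged_unsafe: bool    # did the user flag harmful content?
    notes: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

feedback = TestKitchenFeedback(
    mode="imagine_it",
    prompt="marshmallow volcano",
    model_response="You're at a marshmallow volcano! ...",
    on_topic=True,
    flagged_unsafe=False,
)
print(feedback)
```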

AI Test Kitchen will be rolling out in the US in the coming months but won’t be on the Play Store for just anyone to download. Woodward says Google hasn’t fully decided how it will offer access but suggests it will be on an invitation-only basis, with the company reaching out to academics, researchers, and policymakers to see if they’re interested in trying it out.

As Woodward explains, Google wants to push the app out “in a way where people know what they’re signing up for when they use it, knowing that it will say inaccurate things. It will say things, you know, that are not representative of a finished product.”

This announcement and framing tell us a few different things: First, that AI language models are hugely complex systems and that testing them exhaustively to find all the possible error cases isn’t something a company like Google thinks it can do without outside help. Secondly, that Google is extremely conscious of how prone to failure these AI language models are, and it wants to manage expectations.

Another imaginary scenario from LaMDA 2 in the AI Test Kitchen app.
Image: Google

When organizations push new AI systems into the public sphere without proper vetting, the results can be disastrous. (Remember Tay, the Microsoft chatbot that Twitter taught to be racist? Or Ask Delphi, the AI ethics advisor that could be prompted to condone genocide?) Google’s new AI Test Kitchen app is an attempt to soften this process: to invite criticism of its AI systems but control the flow of this feedback.

Deborah Raji, an AI researcher who specializes in audits and evaluations of AI models, told The Verge that this approach will necessarily limit what third parties can learn about the system. “Because they are completely controlling what they are sharing, it’s only possible to get a skewed understanding of how the system works, since there is an over-reliance on the company to gatekeep what prompts are allowed and how the model is interacted with,” says Raji. By contrast, some companies like Facebook have been much more open with their research, releasing AI models in a way that allows far greater scrutiny.

Exactly how Google’s approach will work in the real world isn’t yet clear, but the company does at least expect that some things will go wrong.

“We’ve done a big red-teaming process [to test the weaknesses of the system] internally, but despite all that, we still think people will try and break it, and a percentage of them will succeed,” says Woodward. “This is a journey, but it’s an area of active research. There’s a lot of stuff to figure out. And what we’re saying is that we can’t figure it out by just testing it internally — we need to open it up.”

Hunting for the future of search

Once you see LaMDA in action, it’s hard not to imagine how technology like this will change Google in the future, particularly its biggest product: Search. Although Google stresses that AI Test Kitchen is just a research tool, its functionality connects very obviously with the company’s services. Keeping a conversation on-topic is vital for Google Assistant, for example, while the “List It” mode in Test Kitchen is near-identical to Google’s “Things to know” feature, which breaks down tasks and topics into bullet points in search.

Google itself fueled such speculation (perhaps inadvertently) in a research paper published last year. In the paper, four of the company’s engineers suggested that, instead of typing questions into a search box and showing users the results, future search engines would act more like intermediaries, using AI to analyze the content of the results and then lifting out the most useful information. Obviously, this approach comes with new problems stemming from the AI models themselves, from bias in results to the systems making up answers.

To some extent, Google has already started down this path, with tools like “featured snippets” and “knowledge panels” used to directly answer queries. But AI has the potential to accelerate this process. Last year, for example, the company showed off an experimental AI model that answered questions about Pluto from the perspective of the former planet itself, and this year, the slow trickle of AI-powered, conversational features continues.

Despite speculation about a sea change to search, Google is stressing that whatever changes happen will happen slowly. When I asked Zoubin Ghahramani, vice president of research at Google AI, how AI will transform Google Search, his answer was something of an anticlimax.

“I think it’s going to be gradual,” says Ghahramani. “That maybe sounds like a lame answer, but I think it just matches reality.” He acknowledges that already “there are things you can put into the Google box, and you’ll just get an answer back. And over time, you basically get more and more of those things.” But he is careful to also say that the search box “shouldn’t be the end, it should be just the beginning of the search journey for people.”

For now, Ghahramani says Google is focusing on a handful of key criteria to evaluate its AI products, namely quality, safety, and groundedness. “Quality” refers to how on-topic the response is; “safety” refers to the potential for the model to say harmful or toxic things; while “groundedness” is whether or not the system is making up information.
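
As a rough sketch of what scoring against those three criteria might look like, the placeholder functions below gesture at each check. In practice every one of them is a hard research problem involving human raters, dedicated classifiers and retrieval systems; nothing here reflects Google's actual evaluation code.

```python
# Toy scoring of a model response against the three stated criteria.
def quality(prompt: str, response: str) -> float:
    # Crude on-topic proxy: does the response mention any prompt word?
    return 1.0 if any(w in response.lower()
                      for w in prompt.lower().split()) else 0.0

def safety(response: str) -> float:
    blocklist = {"toxic-term"}  # stand-in for a real safety classifier
    return 0.0 if any(t in response.lower() for t in blocklist) else 1.0

def groundedness(response: str, sources: list[str]) -> float:
    # Crude proxy: fraction of sentences with support in known sources.
    sentences = [s for s in response.split(".") if s.strip()]
    supported = sum(any(s.strip() in src for src in sources)
                    for s in sentences)
    return supported / max(len(sentences), 1)

response = "Pluto is a dwarf planet. It was reclassified in 2006."
sources = ["Pluto is a dwarf planet", "It was reclassified in 2006"]
print(quality("tell me about Pluto", response),
      safety(response),
      groundedness(response, sources))
```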

These are essentially unsolved problems, though, and until AI systems are more tractable, Ghahramani says Google will be cautious about applying this technology. He stresses that “there’s a big gap between what we can build as a research prototype [and] then what can actually be deployed as a product.”

It’s a differentiation that should be taken with some skepticism. Just last month, for example, Google’s latest AI-powered “assistive writing” feature rolled out to users who immediately found problems. But it’s clear that Google badly wants this technology to work and, for now, is dedicated to working out its problems — one test app at a time.




Why AIops may be necessary for the future of engineering

Machine learning has crossed the chasm. In 2020, McKinsey found that out of 2,395 companies surveyed, 50% had an ongoing investment in machine learning. By 2030, machine learning is predicted to deliver around $13 trillion in economic value. Before long, a good understanding of machine learning (ML) will be a central requirement in any technical strategy.

The question is — what role is artificial intelligence (AI) going to play in engineering? How will the future of building and deploying code be impacted by the advent of ML? Here, we’ll argue why ML is becoming central to the ongoing development of software engineering.

The growing rate of change in software development

Companies are accelerating their rate of change. Software deployments were once yearly or bi-annual affairs. Now, two-thirds of companies surveyed are deploying at least once a month, with 26% of companies deploying multiple times a day. The industry is plainly speeding up delivery to keep pace with demand.

If we follow this trend, almost all companies will be expected to deploy changes multiple times a day if they wish to keep up with the shifting demands of the modern software market. Scaling this rate of change is hard. As we accelerate even faster, we will need to find new ways to optimize our ways of working, tackle the unknowns and drive software engineering into the future.


Enter machine learning and AIops

The software engineering community understands the operational overhead of running a complex microservices architecture. Engineers typically spend 23% of their time dealing with operational challenges. How could AIops lower this number and free up time for engineers to get back to coding?

Utilizing AIops for your alerts by detecting anomalies

A common challenge within organizations is detecting anomalies: results that don’t fit with the rest of the dataset. The difficulty lies in defining what counts as an anomaly. Some datasets are extensive and varied, while others are very uniform, so categorizing and detecting a sudden change in the data becomes a complex statistical problem.

Detecting anomalies through machine learning

Anomaly detection is a machine learning technique that uses an AI-based algorithm’s pattern recognition powers to find outliers in your data. This is incredibly powerful for operational challenges where, typically, human operators would need to filter out the noise to find the actionable insights buried in the data.

These insights are compelling because an AI approach to alerting can raise issues you’ve never seen before. With traditional alerting, you typically have to pre-empt incidents that you believe will happen and create rules for your alerts. These cover your known knowns and your known unknowns: the incidents you’re aware of, plus the blind spots in your monitoring that you cover just in case. But what about your unknown unknowns?

This is where your machine learning algorithms come in. Your AIops-driven alerts can act as a safety net around your traditional alerting so that if sudden anomalies happen in your logs, metrics or traces, you can operate with confidence that you’ll be informed. This means less time defining incredibly granular alerts and more time spent building and deploying the features that will set your company apart in the market.
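
A minimal version of this safety net, sketched here with scikit-learn's IsolationForest over a synthetic latency metric, learns what "normal" looks like and flags deviations that no hand-written rule anticipated. The numbers and thresholds are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Anomaly-detection safety net over a latency metric: no hand-written
# threshold rule -- the model learns normal behavior and flags outliers.
rng = np.random.default_rng(42)
normal_latency = rng.normal(loc=120, scale=15, size=(500, 1))  # ms

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_latency)

new_samples = np.array([[118.0], [131.0], [910.0]])  # last one is a spike
for latency, label in zip(new_samples, detector.predict(new_samples)):
    status = "ANOMALY -> page on-call" if label == -1 else "ok"
    print(f"latency={latency[0]:6.1f} ms  {status}")
```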

AIops can be your safety net

Rather than defining a myriad of traditional alerts around every possible outcome and spending considerable time building, maintaining, amending and tuning these alerts, you can define some of your core alerts and use your AIops approach to capture the rest.

As we grow into modern software engineering, engineers’ time has become a scarce resource. AIops has the potential to lower the growing operational overhead of software and free up the time for software engineers to innovate, develop and grow into the new era of coding.

Ariel Assaraf is CEO of Coralogix.



Future of work: Beyond bossware and job-killing robots

The public conversation around AI’s impact on the labor market often revolves around the job-displacing or job-destroying potential of increasingly intelligent machines. The wonky economic phrase for the phenomenon is “technological unemployment.” Less attention is paid to another significant problem: the dehumanization of labor by companies that use what’s known as “bossware” — AI-based digital platforms or software programs that monitor employee performance and time on task.

To discourage companies from both replacing jobs with machines and deploying bossware to supervise and control workers, we need to change the incentives at play, says Rob Reich, professor of political science in the Stanford School of Humanities and Sciences, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

“It’s a question of steering ourselves toward a future in which automation augments our work lives rather than replaces human beings or transforms the workplace into a surveillance panopticon,” says Reich, who recently shared his thoughts on these topics in response to an online Boston Review forum hosted by Daron Acemoglu of MIT.

To promote the automation we want and discourage the automation we don’t want, Reich says we need to increase awareness of bossware, include impacted workers in the product development lifecycle, and ensure product design reflects a wider range of values beyond the commercial desire to increase efficiency. Additionally, we must provide economic incentives to support labor over capital and boost federal investment in AI research at universities to help stem the brain drain to industry, where profit motives often lead to negative consequences such as job displacement.


“It’s up to us to create a world where financial reward and social esteem lie with companies that augment rather than displace human labor,” Reich says. 

Increased awareness of bossware

From cameras that automatically track employees’ attention to software monitoring whether employees are off task, bossware is often in place before employees are aware of it. And the pandemic has made it worse as we’ve rapidly adapted to remote tools that have bossware features built in — without any deliberation about whether we wanted those features in the first place, Reich says.

“The first key to addressing the bossware problem is awareness,” Reich says. “The introduction of bossware should be seen as something that’s done through a consensual practice, rather than at the discretion of the employer alone.”

Beyond awareness, researchers and policymakers need to get a handle on the ways employers use bossware to shift some of their business risks to their employees. For example, employers have historically borne the risk of inefficiencies such as paying staff during shifts when there are few customers. By using automated AI-based scheduling practices that assign work shifts based on demand, employers save money but essentially shift their risk to workers who can no longer expect a predictable or reliable schedule.

Reich is also concerned that bossware threatens privacy and can undermine human dignity. “Do we want to have a workplace in which employers know exactly how long we leave our desks to use the restroom, or an experience of work in which sending a personal email on your work computer is keystroke logged and deducted from your hourly pay, or in which your performance evaluations are dependent upon your maximal time on task with no sense of trust or collaboration?” he asks. “It gets to the heart of what it means to be a human being in a work environment.”

Privileging labor over capital investment in machines

Policymakers should directly incentivize investment in human-augmentative AI rather than AI that will replace jobs, Reich says. And such human-augmentative options do exist.

But policymakers should also take some bold moves to support labor over capital. For example, Reich supports an idea proposed by Acemoglu and others including Stanford Digital Economy Lab Director Erik Brynjolfsson: Decrease payroll taxes and increase taxes on capital investment so that companies are less inclined to purchase labor-replacing machinery to supplant workers.

Currently the tax on human labor is approximately 25%, Reich says, while software or computer equipment is subject to only a 5% tax. As a result, the economic incentives currently favor replacing humans with machines whenever feasible. By changing these incentives to favor labor over machines, policymakers would go a long way toward shifting the impact of AI on workers, Reich says. 
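
A quick worked example, using the article's approximate rates and an invented $100,000 task, shows how lopsided that incentive is:

```python
# Worked example of the incentive gap described above: the same work
# costs an employer noticeably more when done by a person than by
# software, at the article's approximate 25% vs. 5% rates.
task_cost = 100_000          # dollars, before taxes (invented figure)
labor_tax, capital_tax = 0.25, 0.05

human_cost = task_cost * (1 + labor_tax)      # $125,000
machine_cost = task_cost * (1 + capital_tax)  # $105,000
print(f"human: ${human_cost:,.0f}  machine: ${machine_cost:,.0f}  "
      f"gap: ${human_cost - machine_cost:,.0f}")
```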

“These are the kinds of bigger policy questions that need to be confronted and updated so that there’s a thumb on the scale of investing in AI and machinery that complements human workers rather than displaces them,” he says.

Invest in academic AI research

If recent history is any guide, Reich says, when industry serves as the primary site of research and development for AI and automation, it will tend to develop profit-maximizing robots and machines that take over human jobs. By contrast, in a university environment, the frontier of AI research and development is not harnessed to a commercial incentive or to a set of investors who are seeking short-term, profit-maximizing returns. “Academic researchers have the freedom to imagine human-augmenting forms of automation and to steer our technological future in a direction quite different from what we might expect from a strictly commercial environment,” he says.

To shift the AI frontier to academia, policymakers might start by funding the National Research Cloud so that universities across the country have access to essential infrastructure for cutting-edge research. In addition, the federal government should fund the creation and sharing of training data.

“These would be the kinds of undertakings that the federal government could pursue, and would comprise a classic example of public infrastructure that can produce extraordinary social benefits,” Reich says.

Katharine Miller is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on Hai.stanford.edu. Copyright 2022

How digital twins are transforming network infrastructure: Future state (part 2)

This is the second of a two-part series. Read part 1 for the current state of networking, how digital twins are being used to help automate it, and the shortcomings involved.

As noted in part 1, digital twins are starting to play a crucial role in automating the process of bringing digital transformation to networking infrastructure. Today, we explore the future state of digital twins – comparing how they’re being used now with how they can be used once the technology matures.

The market for digital twins is expected to grow at a whopping 35% CAGR (compound annual growth rate) between 2022 and 2027, from a valuation of $10.3 billion to $61.5 billion. Internet of things (IoT) devices are driving a large percentage of that growth, and campus networks represent a critical aspect of infrastructure required to support the widespread rollout of the growing number of IoT devices.
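
For readers who want to sanity-check headline forecasts like this, the compound annual growth rate follows directly from the endpoints, though published figures often rest on different base years or rounding conventions than a naive recomputation:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate implied by two valuations."""
    return (end_value / start_value) ** (1 / years) - 1

# Endpoints from the forecast above: $10.3B (2022) to $61.5B (2027).
print(f"Implied CAGR: {cagr(10.3, 61.5, 5):.1%}")
# Prints roughly 43%; gaps versus a quoted headline rate usually trace
# back to base-year choice, mid-period valuations or rounding.
```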

Current limitations of digital twins

One issue plaguing digital twins today is that network twins typically model and automate only isolated pockets of a network, separated by function, vendor or user type. However, enterprise demand for a more flexible and agile networking infrastructure is driving efforts to integrate these pockets.

Several network vendors, such as Forward Networks, Gluware, Intentionet and Keysight’s recent Scalable Networks acquisition, are starting to support digital twins that work across vendors to improve configuration management, security, compliance and performance. 

Companies like Asperitas and Villa Tech are creating “digital twins-as-a-service” to help enterprise operations.

In addition to the challenge of building a digital twin for multivendor networks, there are other limitations that digital twin technology needs to overcome before it’s fully adopted, including:

  • The types of models used in digital twins need to match the actual use case.
  • Building the model, supporting multiple models and evolving the model over time all require significant investment, according to Balaji Venkatraman, VP of product management, DNA, at Cisco.
  • Keeping the data lake current with the state of the network is hard; if the digital twin operates on stale data, it will return out-of-date answers (see the sketch after this list).
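
One way a twin pipeline might guard against that last failure mode is a freshness check before answering queries. This is a minimal sketch; the field names, threshold and `query_twin` function are hypothetical, not from any vendor's product:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness guard: refuse to answer from stale state.
MAX_STALENESS = timedelta(minutes=15)  # tolerance is deployment-specific

def is_fresh(last_sync: datetime) -> bool:
    """True if the twin's data lake was synced recently enough to trust."""
    return datetime.now(timezone.utc) - last_sync <= MAX_STALENESS

def query_twin(twin_state: dict, question: str) -> str:
    """Answer only from sufficiently fresh state; otherwise fail loudly."""
    if not is_fresh(twin_state["last_sync"]):
        # An out-of-date twin returns out-of-date answers, so refuse.
        raise RuntimeError("Twin state is stale; resync before querying.")
    return f"answer to {question!r} as of {twin_state['last_sync']:%H:%M:%SZ}"
```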

Future solutions

Manas Tiwari, client partner for cross-industry comms solutions at Capgemini Engineering, believes that digital twins will help roll out disaggregated networks composed of different equipment, topologies and service providers in the same way enterprises now provision services across multiple cloud services. 

Tiwari said digital twins will make it easier to model different network designs up front and then fine-tune them to ensure they work as intended. This will be critical for widespread rollouts in healthcare, factories, warehouses and new IoT businesses. 

Vendors like Gluware, Forward Networks and others are creating real-time digital twins that simulate network, security and automation environments to forecast where problems may arise before changes are rolled out. These tools are also starting to plug into continuous integration and continuous deployment (CI/CD) tools to support incremental updates and rollback using existing devops processes.
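
In practice, that CI/CD hookup could be as simple as a pipeline step that submits the candidate configuration to the twin and gates the deploy on its verdict. The `TwinClient` interface below is hypothetical; products such as Gluware and Forward Enterprise expose their own APIs:

```python
import sys

class TwinClient:
    """Hypothetical client for a network digital twin's validation API."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def simulate_change(self, candidate_config: str) -> dict:
        # A real integration would submit the config to the twin and get
        # back predicted reachability, policy and performance results.
        raise NotImplementedError

def ci_gate(twin: TwinClient, candidate_config: str) -> None:
    """Fail the pipeline if the twin predicts the change breaks something."""
    report = twin.simulate_change(candidate_config)
    failures = [c for c in report["checks"] if not c["passed"]]
    for check in failures:
        print(f"FAIL {check['name']}: {check['detail']}", file=sys.stderr)
    if failures:
        sys.exit(1)  # block the deploy; rollback stays in the devops flow
    print("Twin validation passed; safe to deploy.")
```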

Cisco has developed tools for what-if analysis, change impact analysis, network dimensioning and capacity planning. These areas are critical for proactive and predictive analysis to prevent network or service downtime and avoid adversely impacting the user experience.

Overcoming the struggle with new protocols

Early modeling and simulation tools, such as the GNS3 virtual labs, help network engineers understand what is going on in the network in terms of traffic paths, connectivity and isolation of network elements. Still, they often struggle with new protocols, new domains or scaling to larger networks. They also need to simulate not just the ideal flow of traffic, but all the ways traffic could break or paths could become isolated from the rest of the network.

Christopher Grammer, vice president of solution technology at IT solutions provider Calian, told VentureBeat that one of the biggest challenges is that real network traffic is random. The network traffic produced by a coffee shop full of casual internet users is a far cry from the needs of petroleum engineers working with real-time drilling operations. Simulating network performance is therefore subject to users' needs, which can change at any time, making accurate prediction harder.

Modeling tools are also costly to scale up.

“The cost difference between simulating a relatively simple residential network model and an AT&T internet backbone is astronomical,” Grammer said. 

Thanks to algorithms and hardware improvements, vendors like Forward Enterprise are starting to scale these computations to support networks of hundreds of thousands of devices.

Testing new configurations

The crowning use case for networking digital twins is evaluating different configuration settings before updating or installing new equipment. Digital twins can help assess the likely impact of changes to ensure equipment works as intended. 

In theory, these could eventually make it easier to assess the performance impact of changes. However, Mike Toussaint, senior director analyst at Gartner, said it may take some time to develop new modeling and simulation tools that account for the performance of newer chips.

One of the more exciting aspects is that these modeling and simulation capabilities are now being integrated with IT automation. Ernest Lefner, chief product officer at Gluware, which supports intelligent network process automation, said this allows engineers to connect inline testing and simulation with tools for building, configuring, developing and deploying networks. 

“You can now learn about failures, bugs, and broken capabilities before pushing the button and causing an outage. Merging these key functions with automation builds confidence that the change you make will be right the first time,” he said.

Wireless analysis

Equipment vendors such as Juniper Networks are using artificial intelligence (AI) with various kinds of telemetry and analytics to automatically capture information about wireless infrastructure and identify the best layout for wireless networks. Ericsson has started using Nvidia Omniverse to simulate 5G reception in a city. Nearmap recently partnered with Digital Twin Sims to feed dynamically updated 5G coverage maps into 5G planning and operating systems.

Security and compliance

Grammer said digital twins could help improve network heuristics and behavioral analysis aspects of network security management. This could help identify potentially unwanted or malicious traffic, such as botnets or ransomware. Security companies often model known good and bad network traffic to teach machine learning algorithms to identify suspicious network traffic. 
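
A minimal sketch of that training loop with scikit-learn, using synthetic flow features as stand-ins for the labeled good and bad traffic such companies collect:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic flow features: [bytes/sec, packets/sec, distinct dest ports].
# Real systems derive far richer features from labeled good/bad traffic.
benign = rng.normal([5e4, 40, 3], [2e4, 15, 2], size=(500, 3))
malicious = rng.normal([2e5, 400, 60], [8e4, 150, 30], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = suspicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score a new flow; a high probability suggests botnet- or
# ransomware-like behavior worth flagging for review.
print(model.predict_proba([[1.8e5, 350, 55]])[0][1])
```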

According to Lefner, digital twins could model real-time data flows for complex audit and security compliance tasks. 

“It’s exciting to think about taking complex yearly audit tasks for things like PCI compliance and boiling that down to an automated task that can be reviewed daily,” he said. 

Coupling these digital twins with automation could allow a step change in challenging tasks like identifying up-to-date software and remediating newly identified vulnerabilities. For example, Gluware combines modeling, simulation and robotic process automation (RPA) to allow software robots to take actions based on specific network conditions. 

Peyman Kazemian, cofounder of Forward Networks, said they are starting to use digital twins to model network infrastructure. When a new vulnerability is discovered in a particular type of equipment or software version, the digital twins can find all the hosts that are reachable from less trustworthy entry points to prioritize the remediation efforts. 
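
Conceptually, that prioritization is a reachability query over the modeled topology. Below is a toy breadth-first-search version; the real analysis in a product like Forward Enterprise operates on full forwarding and security state rather than a simple adjacency map:

```python
from collections import deque

# Toy topology: which nodes can forward traffic to which neighbors.
topology = {
    "internet": ["edge-fw"],
    "edge-fw": ["dmz-web"],
    "dmz-web": ["app-1"],
    "app-1": ["db-1"],
    "mgmt-net": ["app-1", "db-1"],  # internal-only management segment
    "db-1": [],
}

vulnerable_hosts = {"dmz-web", "db-1", "mgmt-net"}  # run the flawed version
entry_points = {"internet"}                         # less trustworthy origins

def reachable_from(sources: set[str]) -> set[str]:
    """Breadth-first search over the forwarding graph."""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in topology.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Patch first whatever is both vulnerable and exposed; here mgmt-net is
# vulnerable but unreachable from the entry points, so it can wait.
exposed = vulnerable_hosts & reachable_from(entry_points)
print(f"Remediate first: {sorted(exposed)}")
```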

Cross-domain collaboration

Network digital twins today tend to focus on one particular use case, owing to the complexities of modeling and transforming data across domains. Teresa Tung, cloud-first chief technologist at Accenture, said that new knowledge graph techniques are helping to connect the dots. For example, a digital twin of the network can combine models from different domains such as engineering R&D, planning, supply chain, finance and operations. 

They can also bridge workflows between design and simulations. For example, Accenture has enhanced a traditional network planner tool with new 3D data and an RF simulation model to plan 5G rollouts. 

Connect2Fiber is using digital twins to help model its fiber networks to improve operations, maintenance and sales processes. Nearmap’s drone management software automatically inventories wireless infrastructure to improve network planning and collaboration processes with asset digital twins. 

These efforts could all benefit from the kind of innovation driven by building information models (BIM) in the construction industry. Jacob Koshy, an information technology and communications associate at the engineering and design consultancy Arup, predicts that comparable network information models (NIM) could play a similarly transformative role in building complex networks. 

For example, the RF propagation analysis and modeling for coverage and capacity planning could be reused during the installation and commissioning of the system. Additionally, integrating the components into a 3D modeling environment could improve collaboration and workflows across facilities and network management teams.

Emerging digital twin APIs from companies like Mapped, Zyter and PassiveLogic might help bridge the gap between dynamic networks and the built environment. This could make it easier to create comprehensive digital twins that include the networking aspects involved in more autonomous business processes. 

The future is autonomous networks

Grammer believes that improved integration between digital twins and automation could help fine-tune network settings based on changing conditions. For example, business traffic may predominate in the daytime and shift to more entertainment traffic in the evening. 

“With these new modeling tools, networks will automatically be able to adapt to application changes switching from a business video conferencing profile to a streaming or gaming profile with ease,” Grammer said. 
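
A rule-of-thumb sketch of that kind of profile switching; the thresholds and profile names are invented for illustration, and a production system would drive the decision from the twin's live telemetry rather than a single ratio:

```python
def pick_profile(video_conf_share: float, streaming_share: float) -> str:
    """Choose a QoS profile from the observed traffic mix (shares of total)."""
    if video_conf_share > 0.4:
        return "business-video-conferencing"  # prioritize low latency/jitter
    if streaming_share > 0.4:
        return "streaming-entertainment"      # prioritize sustained throughput
    return "balanced-default"

# Daytime office mix vs. evening household mix.
print(pick_profile(video_conf_share=0.55, streaming_share=0.10))
print(pick_profile(video_conf_share=0.05, streaming_share=0.60))
```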

How digital twins will optimize network infrastructure

The most common use case for digital twins in network infrastructure is testing and optimizing network equipment configurations. Down the road, they will play a more prominent role in testing and optimizing performance, vetting security and compliance, provisioning wireless networks and rolling out large-scale IoT networks for factories, hospitals and warehouses. 

Experts also expect to see more direct integration into business systems such as enterprise resource planning (ERP) and customer relationship management (CRM) to automate the rollout and management of networks to support new business services.

Iterable optimizes AI to hyper-personalize marketing and predict future purchases

With people living on their phones and constantly deluged by messages, emails and notifications, so-called “batch-and-blast” emails, targeted web ads and word-of-mouth are no longer adequate when it comes to customer outreach.

“We’re online more than ever before, acutely aware and concerned with how our personal data is used and, importantly, we demand to be valued and recognized for what makes us unique,” said Andrew Boni, CEO of customer engagement platform company Iterable.

Marketers must then achieve the “extraordinary task” of reaching as many customers — and in as many hyper-personalized ways — as possible. And, matching each consumer with “a personal marketer and targeted message at every touchpoint” is impossible without artificial intelligence (AI) and machine learning (ML), said Boni. 

To this end, Iterable today announced its new AI Optimization Suite. The suite pairs new predictive goals with explainable AI capabilities to help drive individualized brand-consumer communication, said Boni. 

The company will officially unveil its new platform to the public at its Activate Summit North America in September. 

“Understanding the inner workings [the ‘why’ and ‘how’] behind the system gives marketers the clarity and confidence to improve and refine the predictions they create, design goals that fit their unique business needs, and implement fresh ideas about how to approach customer-first campaigns,” said Bela Stepanova, senior vice president of product at Iterable. 

Iterable treats customers as its most valuable asset

Most brands have access to customer data and a goal in mind for connecting with their customers. But the magic formula is often missing: building true customer knowledge and acting on it. 

This gap has created a significant opportunity, fueling rapid growth in AI-powered customer engagement platforms. According to Markets and Markets, the global customer engagement solutions market will grow to $32.2 billion by 2027, a compound annual growth rate (CAGR) of nearly 11%. 

The global market for retail omnichannel commerce platforms will reach $16.9 billion by 2027, as forecasted by ReportLinker.com. This represents a CAGR of 16.4% from 2020 — growth largely propelled by the COVID-19 crisis, according to the firm. 

Companies competing for that market share include Adobe Marketo Engage, HubSpot Marketing Hub, Pega, Blueshift, Braze and MoEngage. 

And Iterable is rapidly increasing its presence: The 9-year-old series E “centaur” serves more than 1,000 customers, including DoorDash, Calm, Jersey Mike’s Subs, Zillow and SeatGeek; it has surpassed $100 million in annual recurring revenue (ARR) and recently expanded into Latin America and Asia Pacific. 

Ingest, centralize, activate

Iterable’s AI Optimization Suite helps brands “ingest, centralize and activate” customer data and use real-time information to craft individualized campaigns, said Boni. 

Its predictive goals functionality allows brands to establish goals unique to their business needs and predict how their customer base might respond. They can then use those predictions to establish customer segments and tailor messaging to maximize conversion. 

The explainable AI feature within the predictive goals functionality allows marketers to pinpoint the data points that contributed to forecasts. This provides an “under the hood” look at AI and further insights that can inform future campaigns, said Boni.

Furthermore, marketers can understand behaviors that drive predictions, ranked in order of importance, according to Boni. They can also assess the quality and reliability of a goal using a predictive strength tool and receive insight into correlated variables that make outcomes more or less likely. 
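
Iterable has not published its implementation, but mechanically a predictive goal of this kind resembles a propensity model whose feature attributions are ranked for the marketer. A minimal sketch with scikit-learn and synthetic data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
features = ["sessions_30d", "emails_opened_30d", "cart_adds_30d",
            "days_since_purchase"]

# Synthetic customers; label = converted within the goal window.
X = rng.normal(size=(2000, len(features)))
y = ((0.9 * X[:, 2] + 0.6 * X[:, 1] - 0.4 * X[:, 3]
      + rng.normal(size=2000)) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# The "explainable" piece: rank the behaviors driving predictions.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda p: -p[1]):
    print(f"{name:>22}: {importance:.2f}")

# Score everyone and carve out the highest-propensity segment.
propensity = model.predict_proba(X)[:, 1]
top_segment = np.argsort(propensity)[-200:]  # top 10% by likelihood
```

The ranked importances stand in for the “why” behind a forecast, and the sorted propensity scores are what a brand would use to define its highest-value segment.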

Building a ‘What to Do’ journey

As an example: A travel company is looking to drive incremental purchases and increase rental car revenue. 

Predictive goals creates a list of users most likely to book a rental car within the next 30 days. Brands can then score that list and target the highest-propensity users with cross-channel messages triggered after an airline ticket purchase, explained Boni. 

Using explainable AI, the travel company can then go a step further by identifying the attributes that distinguish those high-propensity customers, such as browsing a “Tours and Activities” section of the website after booking a flight. These insights can be used to create a “What to Do” journey for airfare purchasers, highlighting tours and other activities at their destinations, wrapped together with rental car promotions, said Boni. 

The company plans to expand the platform to help marketers understand how goals perform over time, he said, as well as allowing them to more deeply analyze cohorts and optimize individual user experiences with journeys and experimentation.

“By setting goals and measuring outcomes, marketers can lean into strategies that are working and adjust any that aren’t,” said Boni. 

He marveled: “We can make predictions about the future. This is where the world is heading. It is less about looking into the past and making a guess, and more about [establishing] future behavior.”
