Categories
AI

How enterprises can realize the full potential of their data for AI (VB On Demand)


Presented by Wizeline


Many enterprises face barriers to leveraging their data and making AI a company-wide reality. In this VB On-Demand event, industry experts dig into how enterprises can unlock the full potential of their data to tackle complex business problems and more.

Watch on demand now!


Across industries and regions, realizing the promise of AI can mean very different things for every enterprise — but for every business, it starts with unlocking the potential of the wealth of data it’s sitting on. According to Hayde Martinez, data technology program lead at Wizeline, the obstacles to unlocking data have less to do with actually implementing AI and more to do with the AI culture inside a company. That means companies are stalled at step zero — defining objectives and goals.

For a company just beginning to realize the benefits of data, AI efforts are usually an isolated undertaking, managed by an isolated team, with goals that aren’t aligned with the overall company vision. Larger companies further down the data and AI road also have to break down silos, so that all departments and teams are aligned and efforts aren’t duplicated or at cross purposes.

“In order to be aligned, you need to define that strategy, define priorities, define the needs of the business,” Martinez says. “Some of the biggest obstacles right now are just being sure of what you’re going to do and how you’re going to do it, rather than the implementation itself, as well as bringing everyone on board with AI efforts.”

The steps in the data process

Data has to go through a number of steps in order to be leveraged: extraction, cleansing, processing, building predictive models, running new experiments and, finally, creating data visualizations. But step zero is always defining the goals and objectives, which is what drives the whole process.
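
As a rough illustration of how those stages chain together, here is a minimal Python sketch; the file, column, and function names are invented for illustration and are not Wizeline’s actual tooling.

    # A minimal, hypothetical sketch of the data process described above.
    # Step zero -- defining goals and objectives -- happens before any code.
    import pandas as pd

    def extract(path: str) -> pd.DataFrame:
        # Extraction: pull raw records from a source system or file.
        return pd.read_csv(path)

    def cleanse(df: pd.DataFrame) -> pd.DataFrame:
        # Cleansing: drop duplicates and rows missing required fields.
        return df.drop_duplicates().dropna(subset=["customer_id", "amount"])

    def process(df: pd.DataFrame) -> pd.DataFrame:
        # Processing: derive the features a predictive model will use.
        df = df.copy()
        df["amount_sqrt"] = df["amount"].clip(lower=0) ** 0.5
        return df

    # Each stage feeds the next; modeling, experimentation, and
    # visualization follow the same pattern downstream.
    frame = process(cleanse(extract("transactions.csv")))
    print(frame.head())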

One of the first considerations is to start with a discovery workshop — soliciting input from all stakeholders who will use this information or are asking for predictive models, or anyone who has a weighted opinion on the business. To ensure that the project goes smoothly, don’t prioritize hard skills over soft skills. Stakeholders are often not data scientists or machine learning engineers; they might not even have a technical background.

“You have to be able, as a team or as an individual, to make others trust your data and your predictions,” she explains. “Even though your model was amazing and you used a state-of-the-art algorithm, if you’re not able to demonstrate that, your stakeholders will not see the benefit of the data, and that work can be thrown in the trash.”

Making sure that you clearly understand the objectives and goals is key here, as well as ongoing communication. Keep stakeholders in the loop and go back to them to reaffirm your direction, and ask questions to continue to adjust and refine. That helps ensure that when you deliver your predictive model or your AI promise, it will be strongly aligned to what they’re expecting.

Another consideration in the data process is iteration: trying new things and building from there, or taking a new tack if something doesn’t work, but never taking too long to decide what to do next.

“It’s called data science because it’s a science, and follows the scientific method,” Martinez says. “The scientific method is building hypotheses and proving them. If your hypothesis was not proven, then try another way to prove it. If then that’s not possible, then create another hypothesis. Just iterate.”

Common step zero mistakes

Often companies stepping into AI waters look first at similar companies to mimic their efforts, but that can actually slow down or even stop an AI project. Business problems are as unique as fingerprints, and there are myriad ways to tackle any one issue with machine learning.

Another common issue is going immediately to hiring a data scientist with the expectation that it’s one and done — that they’ll be able to not only handle the entire process from extracting data and cleaning data to defining objectives, graphic visualization, predictive models, and so on, but can immediately jump into making AI happen. That’s just not realistic.

First, a centralized data repository needs to be created, not only to make it easier to build predictive models but also to break down silos so that any team can access the data it needs.

Data scientists and data engineers also cannot work alone, separately from the rest of the company — the best way to take advantage of data is to be familiar with its business context, and the business itself.

“If you understand the business, then every decision, every change, every process, every modification of your data will be aligned,” she says. “This is multidisciplinary work. You need to involve strong business understanding along with UI/UX, legal, ethics and other disciplines. The more diverse, the more multidisciplinary the team is, the better the predictive model can be.”

To learn more about how enterprises can fully leverage their data to launch AI with real ROI, how to choose the right tools for every step of the data process and more, don’t miss this VB On Demand event.


Start streaming now!

Agenda

  • How enterprises are leveraging AI and machine learning, NLP, RPA and more
  • Defining and implementing an enterprise data strategy
  • Breaking down silos, assembling the right teams and increasing collaboration
  • Identifying data and AI efforts across the company
  • The implications of relying on legacy stacks and how to get buy-in for change

Presenters

  • Paula Martinez, CEO and Co-Founder, Marvik
  • Hayde Martinez, Data Technology Program Lead, Wizeline
  • Victor Dey, Tech Editor, VentureBeat (moderator)


Categories
AI

Start talking: The true potential of conversational AI in the enterprise



From a smart assistant that helps you increase your credit card limit, to an airline chatbot that tells you if you can change your flight, to Alexa who operates your household appliances on command, conversational AI is everywhere in daily life. And now it is making its way into the enterprise.

Best understood as a combination of AI technologies — natural language processing (NLP), speech recognition, and deep learning — conversational AI allows people and computers to have spoken or written conversations in everyday language in real time. And it is seeing strong demand, with one source projecting that the market will grow 20% year over year to $32 billion by 2030.

Broader AI scope

Organizations have been quick to adopt conversational AI in front-end applications — for example, to answer routine service queries, support live call center agents with alerts and actionable insights, and personalize customer experiences. Now, they are also discovering its potential for deployment within internal enterprise systems and processes.

Popular enterprise use cases for conversational AI include the IT helpdesk, where a bot can help employees resolve common problems with their laptops or business applications; human resource solutions for travel and expense reporting; and recruitment processes, where a chatbot guides candidates through the company’s website or social media channel, informs them of what documents they must submit, and even makes a preliminary selection of resumes.


While there is no denying that conversational AI offers attractive opportunities to innovate and differentiate, it presents some challenges, as well. Managing an enterprise conversational AI landscape with disparate technologies and solutions that do not communicate with each other is only one problem. Inadequate automation of repetitive processes across the conversational AI lifecycle and the lack of an integrated development approach can extend the implementation timeline. Last but by no means least, AI talent is in short supply. 

By adopting some thoughtful practices, enterprises can improve their conversational AI outcomes.

Five best practices for successful conversational AI

1. Do it with purpose

Conversational AI should be implemented with a specific purpose, and not just as a gimmick. Questions such as what kind of experience to provide to customers, employees, and partners, and how to align conversational AI with organizational goals, will help identify the right purpose. Also, the solution should address activities involving the processing of multiple data points — for example, answering questions about loan eligibility, which can add significant value to the customer experience — rather than working on tasks that can be accomplished with predefined shortcuts.

2. Mind your language

Taking a conversation-first approach is important for scaling technology across the enterprise. But since different people speak naturally in different ways, the understanding must extend not only to the words being used but also to the intent. If the NLP solution being used is not capable enough, it will create friction in the interaction.
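
As a concrete (and deliberately simplified) illustration of intent over wording, here is a minimal scikit-learn sketch; the training phrases and intent labels are invented, and production NLU systems are far more sophisticated.

    # Minimal intent-classification sketch: different phrasings, same intent.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    training_phrases = [
        "I want to change my flight", "can I rebook my trip",
        "move my reservation to Friday",
        "what's my account balance", "how much money do I have",
        "show me my balance",
    ]
    intents = ["change_flight"] * 3 + ["check_balance"] * 3

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(training_phrases, intents)

    # Different words, same underlying intent.
    print(model.predict(["could you switch my flight to another day"]))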

3. Do it yourself

Low-code/no-code platforms are giving rise to citizen developers, that is, business or non-technical employees who write software applications without the involvement of IT staff. Going forward, this could help to overcome the shortage of AI skills plaguing most enterprises. 

4. Personalize, extremely

Among the many features of conversational AI are contextual awareness and intent recognition. The technology can recall and translate massive amounts of information from past conversations in human-like fashion, and also understand what speakers are asking even when they don’t “follow the script.” These capabilities yield insights that enterprises can exploit to personalize everything to individual preferences, from products and services to offers and experiences.

5. Eye on the past and the future

Conversational AI should take an approach that relies on historical insights and continuous post-production evolution, using telemetry data on user demands to improve stickiness and adoption. Strategically speaking, organizations must incorporate good governance when automating the conversational AI lifecycle. This means that, irrespective of the technology being used, the underlying architecture must support plug-and-play components so the organization can benefit from new technologies as they emerge.

In short, to gain traction within the enterprise, conversational AI should enable intelligent, convenient, and informed decisions at any point in the user journey. A holistic and technology-agnostic approach, good governance, and internal lifecycle automation with supportive development operations are the key factors of success in conversational AI implementation.

Bali (Balakrishna) DR is senior vice president, service offering head — ECS, AI and Automation at Infosys.


Categories
Security

Google AI flagged parents’ accounts for potential abuse over nude photos of their sick kids

A concerned father says that after using his Android smartphone to take photos of an infection on his toddler’s groin, Google flagged the images as child sexual abuse material (CSAM), according to a report from The New York Times. The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), spurring a police investigation. The case highlights the complications of trying to tell the difference between potential abuse and an innocent photo once it becomes part of a user’s digital library, whether on their personal device or in cloud storage.

Concerns about the consequences of blurring the lines for what should be considered private were aired last year when Apple announced its Child Safety plan. As part of the plan, Apple would locally scan images on Apple devices before they’re uploaded to iCloud and then match the images with the NCMEC’s hashed database of known CSAM. If enough matches were found, a human moderator would then review the content and lock the user’s account if it contained CSAM.

The Electronic Frontier Foundation (EFF), a nonprofit digital rights group, slammed Apple’s plan, saying it could “open a backdoor to your private life” and that it represented “a decrease in privacy for all iCloud Photos users, not an improvement.”

Apple eventually put the stored-image scanning portion of the plan on hold, but with the launch of iOS 15.2, it proceeded with an optional feature for child accounts included in a family sharing plan. If parents opt in, then on a child’s account, the Messages app “analyzes image attachments and determines if a photo contains nudity, while maintaining the end-to-end encryption of the messages.” If it detects nudity, it blurs the image, displays a warning for the child, and presents them with resources intended to help with safety online.

The main incident highlighted by The New York Times took place in February 2021, when some doctor’s offices were still closed due to the COVID-19 pandemic. As noted by the Times, Mark (whose last name was not revealed) noticed swelling in his child’s genital region and, at the request of a nurse, sent images of the issue ahead of a video consultation. The doctor wound up prescribing antibiotics that cured the infection.

According to the NYT, Mark received a notification from Google just two days after taking the photos, stating that his accounts had been locked due to “harmful content” that was “a severe violation of Google’s policies and might be illegal.”

Like many internet companies, including Facebook, Twitter, and Reddit, Google has used hash matching with Microsoft’s PhotoDNA for scanning uploaded images to detect matches with known CSAM. In 2012, it led to the arrest of a man who was a registered sex offender and used Gmail to send images of a young girl.

In 2018, Google announced the launch of its Content Safety API AI toolkit that can “proactively identify never-before-seen CSAM imagery so it can be reviewed and, if confirmed as CSAM, removed and reported as quickly as possible.” It uses the tool for its own services and, along with a video-targeting CSAI Match hash matching solution developed by YouTube engineers, offers it for use by others as well.

From Google’s “Fighting abuse on our own platforms and services”:

We identify and report CSAM with trained specialist teams and cutting-edge technology, including machine learning classifiers and hash-matching technology, which creates a “hash”, or unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. When we find CSAM, we report it to the National Center for Missing and Exploited Children (NCMEC), which liaises with law enforcement agencies around the world.
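
For readers unfamiliar with hash matching, here is a minimal sketch of the general flow. It uses an ordinary SHA-256 digest purely for illustration; real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, and the hash values below are placeholders.

    # Simplified sketch of hash matching against a database of known hashes.
    import hashlib

    def file_hash(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical set of hashes of known, confirmed material (placeholders).
    known_hashes = {"9f2b...", "c41a..."}

    def flag_for_review(path: str) -> bool:
        # A match queues the file for human review; it is not proof by itself.
        return file_hash(path) in known_hashes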

A Google spokesperson told the Times that Google only scans users’ personal images when a user takes “affirmative action,” which can apparently include backing their pictures up to Google Photos. When Google flags exploitative images, the Times notes, it is required by federal law to report the potential offender to the CyberTipLine at the NCMEC. In 2021, Google reported 621,583 cases of CSAM to the NCMEC’s CyberTipLine, while the NCMEC alerted authorities to 4,260 potential victims, a list that the NYT says includes Mark’s son.

Mark ended up losing access to his emails, contacts, photos, and even his phone number, as he used Google Fi’s mobile service, the Times reports. He immediately appealed Google’s decision, but Google denied the request. The San Francisco Police Department, which has jurisdiction where Mark lives, opened an investigation into him in December 2021 and obtained all the information he had stored with Google. The investigator on the case ultimately found that the incident “did not meet the elements of a crime and that no crime occurred,” the NYT notes.

“Child sexual abuse material (CSAM) is abhorrent and we’re committed to preventing the spread of it on our platforms,” Google spokesperson Christa Muldoon said in an emailed statement to The Verge. “We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.”

While protecting children from abuse is undeniably important, critics argue that the practice of scanning a user’s photos unreasonably encroaches on their privacy. Jon Callas, a director of technology projects at the EFF, called Google’s practices “intrusive” in a statement to the NYT. “This is precisely the nightmare that we are all concerned about,” Callas told the NYT. “They’re going to scan my family album, and then I’m going to get into trouble.”


Categories
AI

The untapped potential of HPC + graph computing

In the past few years, AI has crossed the threshold from hype to reality. Today, with unstructured data growing by 23% annually in an average organization, the combination of knowledge graphs and high performance computing (HPC) is enabling organizations to exploit AI on massive datasets.

Full disclosure: Before I talk about how critical graph computing + HPC is going to be, I should tell you that I’m CEO of a graph computing, AI and analytics company, so I certainly have a vested interest and perspective here. But I’ll also tell you that our company is one of many in this space — DGraph, MemGraph, TigerGraph, Neo4j, Amazon Neptune, and Microsoft’s CosmosDB, for example, all use some form of HPC + graph computing. And there are many other graph companies and open-source graph options, including OrientDB, Titan, ArangoDB, Nebula Graph, and JanusGraph. So there’s a bigger movement here, and it’s one you’ll want to know about.

Knowledge graphs organize data from seemingly disparate sources to highlight relationships between entities. While knowledge graphs themselves are not new (Facebook, Amazon, and Google have invested a lot of money over the years in knowledge graphs that can understand user intents and preferences), their coupling with HPC gives organizations the ability to understand anomalies and other patterns in data at unparalleled rates of scale and speed.

There are two main reasons for this.

First, graphs can be very large: Data sizes of 10-100TB are not uncommon. Organizations today may have graphs with billions of nodes and hundreds of billions of edges. In addition, nodes and edges can have a lot of property data associated with them. Using HPC techniques, a knowledge graph can be sharded across the machines of a large cluster and processed in parallel.
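
A toy sketch of the sharding idea, with the partitioning scheme and cluster size invented for illustration (production graph engines use far more careful partitioners that minimize cross-machine edges):

    # Minimal sketch: shard a graph across a cluster by hashing node IDs.
    import zlib

    NUM_MACHINES = 16

    def shard_of(node_id: str) -> int:
        # Deterministic hash, so every machine agrees on node placement.
        return zlib.crc32(node_id.encode()) % NUM_MACHINES

    edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice")]

    # Route each edge to the shard that owns its source node; shards can
    # then process their partitions in parallel.
    by_shard: dict[int, list] = {}
    for src, dst in edges:
        by_shard.setdefault(shard_of(src), []).append((src, dst))
    print(by_shard)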

The second reason HPC techniques are essential for large-scale computing on graphs is the need for fast analytics and inference in many application domains. One of the earliest use cases I encountered was with the Defense Advanced Research Projects Agency (DARPA), which first used knowledge graphs enhanced by HPC for real-time intrusion detection in their computer networks. This application entailed constructing a particular kind of knowledge graph called an interaction graph, which was then analyzed using machine learning algorithms to identify anomalies. Given that cyberattacks can go undetected for months (hackers in the recent SolarWinds breach lurked for at least nine months), the need for suspicious patterns to be pinpointed immediately is evident.

Today, I’m seeing a number of other fast-growing use cases emerge that are highly relevant and compelling for data scientists, including the following.

Financial services — fraud, risk management and customer 360

Digital payments are gaining more and more traction — more than three-quarters of people in the US use some form of digital payments. However, the amount of fraudulent activity is growing as well. Last year the dollar amount of attempted fraud grew 35%. Many financial institutions still rely on rules-based systems, which fraudsters can bypass relatively easily. Even those institutions that do rely on AI techniques can typically analyze only the data collected in a short period of time due to the large number of transactions happening every day. Current mitigation measures therefore lack a global view of the data and fail to adequately address the growing financial fraud problem.

A high-performance graph computing platform can efficiently ingest data corresponding to billions of transactions through a cluster of machines, and then run a sophisticated pipeline of graph analytics such as centrality metrics and graph AI algorithms for tasks like clustering and node classification, often using Graph Neural Networks (GNN) to generate vector space representations for the entities in the graph. These enable the system to identify fraudulent behaviors and support anti-money-laundering efforts more robustly. GNN computations are very floating-point intensive and can be sped up by exploiting tensor computation accelerators.
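
In miniature, one stage of such a pipeline might look like the following networkx sketch, which scores accounts by centrality over a toy transaction graph; the account names are invented, and real platforms run this distributed over billions of edges before feeding results (plus GNN embeddings) into fraud models.

    # Toy sketch: centrality over a transaction graph as one pipeline stage.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("acct_a", "acct_b"), ("acct_b", "acct_c"),
        ("acct_d", "acct_b"), ("acct_e", "acct_b"),  # acct_b is a hub
    ])

    # Accounts that many others route money through score high and may
    # warrant review as possible mule or layering accounts.
    scores = nx.pagerank(G)
    print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])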

Second, HPC and knowledge graphs coupled with graph AI are essential for risk assessment and monitoring, which have become more challenging with the escalating size and complexity of interconnected global financial markets. Risk management systems built on traditional relational databases are inadequately equipped to identify hidden risks across a vast pool of transactions, accounts, and users because they often ignore relationships among entities. In contrast, a graph AI solution learns from the connectivity data and not only identifies risks more accurately but also explains why they are considered risks. It is essential that the solution leverage HPC to reveal the risks in a timely manner before they turn more serious.

Finally, a financial services organization can aggregate various customer touchpoints and integrate this into a consolidated, 360-degree view of the customer journey. With millions of disparate transactions and interactions by end users — and across different bank branches — financial services institutions can evolve their customer engagement strategies, better identify credit risk, personalize product offerings, and implement retention strategies.

Pharmaceutical industry — accelerating drug discovery and precision medicine

Between 2009 and 2018, U.S. biopharmaceutical companies spent about $1 billion to bring new drugs to market. A significant fraction of that money is wasted in exploring potential treatments in the laboratory that ultimately do not pan out. As a result, it can take 12 years or more to complete the drug discovery and development process. The COVID-19 pandemic, in particular, has thrust the importance of cost-effective and swift drug discovery into the spotlight.

A high-performance graph computing platform can enable researchers in bioinformatics and cheminformatics to store, query, mine, and develop AI models using heterogeneous data sources to reveal breakthrough insights faster. Timely and actionable insights can not only save money and resources but also save human lives.

Challenges in this data and AI-fueled drug discovery have centered on three main factors — the difficulty of ingesting and integrating complex networks of biological data, the struggle to contextualize relations within this data, and the complications in extracting insights across the sheer volume of data in a scalable way. As in the financial sector, HPC is essential to solving these problems in a reasonable time frame.

The main use cases under active investigation at all major pharmaceutical companies include drug hypothesis generation and precision medicine for cancer treatment, using heterogeneous data sources such as bioinformatics and cheminformatic knowledge graphs along with gene expression, imaging, patient clinical data, and epidemiological information to train graph AI models. While there are many algorithms to solve these problems, one popular approach is to use Graph Convolutional Networks (GCN) to embed the nodes in a high-dimensional space, and then use the geometry in that space to solve problems like link prediction and node classification.
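
To give a flavor of link prediction on a graph, here is a deliberately tiny sketch using a classical neighborhood-overlap heuristic in place of GCN embeddings; the drug and protein names are invented.

    # Link prediction in miniature. GCNs learn embeddings and predict links
    # from geometry in that space; the classical Jaccard heuristic below
    # captures a similar idea: entities with overlapping neighborhoods
    # are likely related.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("drug_x", "protein_1"), ("drug_x", "protein_2"),
        ("drug_y", "protein_1"), ("drug_y", "protein_2"),
        ("drug_y", "protein_3"),
    ])

    # drug_x and drug_y bind overlapping proteins, so drug_x may also
    # interact with protein_3: a candidate hypothesis to test.
    for u, v, score in nx.jaccard_coefficient(G, [("drug_x", "drug_y")]):
        print(f"{u} ~ {v}: similarity {score:.2f}")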

Another important aspect is the explainability of graph AI models. AI models cannot be treated as black boxes in the pharmaceutical industry, as actions can have dire consequences. Cutting-edge explainability methods such as GNNExplainer and Guided Gradient (GGD) methods are very compute-intensive and therefore require high-performance graph computing platforms.

The bottom line

Graph technologies are becoming more prevalent, and organizations and industries are learning how to make the most of them effectively. While there are several approaches to using knowledge graphs, pairing them with high performance computing is transforming this space and equipping data scientists with the tools to take full advantage of corporate data.

Keshav Pingali is CEO and co-founder of Katana Graph, a high-performance graph intelligence company. He holds the W.A. “Tex” Moncrief Chair of Computing at the University of Texas at Austin, is a Fellow of the ACM, IEEE, and AAAS, and is a Foreign Member of the Academia Europaea.


Categories
AI

Realizing IoT’s potential with AI and machine learning



The key to getting more value from industrial internet of things (IIoT) and IoT platforms is getting AI and machine learning (ML) workloads right. Despite the massive amount of IoT data captured, organizations are falling short of their enterprise performance management goals because AI and ML aren’t scaling for the real-time challenges organizations face. If you solve the challenge of AI and ML workload scaling right from the start, IIoT and IoT platforms can deliver on the promise of improving operational performance.

Overcoming IoT’s growth challenges

More organizations are pursuing edge AI-based initiatives to turn IoT’s real-time production and process monitoring data into results faster. Enterprises adopting IIoT and IoT face the challenge of moving massive amounts of integrated data to a datacenter or centralized cloud platform for analysis and for deriving recommendations using AI and ML models. The combination of higher costs for expanded datacenter or cloud storage, bandwidth limitations, and increased privacy requirements is making edge AI-based implementations one of the most common strategies for overcoming IoT’s growth challenges.

In order to use IIoT and IoT to improve operational performance, enterprises must face the following challenges:

  • IIoT and IoT endpoint devices need to progress beyond real-time monitoring to provide contextual intelligence as part of a network. The bottom line is that edge AI-based IIoT/IoT networks will be the de facto standard in industries that rely on supply chain visibility, velocity, and inventory turns within three years or less. Based on discussions VentureBeat has had with CIOs and IT leaders across financial services, logistics, and manufacturing, edge AI is the cornerstone of their IoT and IIoT deployment plans. Enterprise IT and operations teams want more contextually intelligent endpoints to improve end-to-end visibility across real-time IoT sensor-based networks. Build-out plans include having edge AI-based systems provide performance improvement recommendations in real time based on ML model outcomes.
  • AI and ML modeling must be core to an IIoT/IoT architecture, not an add-on. Attempting to bolt-on AI and ML modeling to any IIoT or IoT network delivers marginal results compared to when it’s designed into the core of the architecture. The goal is to support model processing in multiple stages of an IIoT/IoT architecture while reducing networking throughput and latency. Organizations that have accomplished this in their IIoT/IoT architectures say their endpoints are most secure. They can take a least-privileged access approach that’s part of their Zero Trust Security framework.
  • IIoT/IoT devices need to be adaptive enough in design to support algorithm upgrades. Propagating algorithms across an IIoT/IoT network down to the device level is essential for an entire network to achieve and maintain real-time synchronization. However, updating IIoT/IoT devices with algorithms is problematic, especially for legacy devices and the networks supporting them. It’s essential to overcome this challenge in any IIoT/IoT network because algorithms are core to edge AI succeeding as a strategy. Across manufacturing floors globally today, there are millions of programmable logic controllers (PLCs) in use, supporting control algorithms and ladder logic. Statistical process control (SPC) logic embedded in IIoT devices provides real-time process and product data integral to quality management succeeding (see the SPC sketch after this list). IIoT is actively being adopted for machine maintenance and monitoring, given how accurate sensors are at detecting sounds, vibrations, and other variations in the process performance of a given machine. Ultimately, the goal is to predict machine downtimes better and prolong the life of an asset. McKinsey’s study Smartening up with Artificial Intelligence (AI) – What’s in it for Germany and its Industrial Sector? found that IIoT-based data combined with AI and ML can increase machinery availability by more than 20%. The McKinsey study also found that inspection costs can be reduced by up to 25%, and annual maintenance costs reduced overall by up to 10%. The following graphic is from the study:

Above: Using IIoT sensors to monitor shock and vibration of production equipment is a leading use case that combines real-time monitoring and ML algorithms to extend the useful life of machinery while ensuring maintenance schedules are accurate.

  • IIoT/IoT platforms with a unique, differentiated market focus are gaining adoption the quickest. For a given IIoT/IoT platform to gain scale, each needs to specialize in a given vertical market and provide the applications and tools to measure, analyze, and run complex operations. An overhang of horizontally focused IoT platform providers relies on partners for the depth that vertical markets require, and the future of IIoT/IoT growth depends on meeting the nuanced needs of specific markets. It is a challenge for most IoT platform providers to accomplish greater market verticalization, as their platforms are built for broad, horizontal market needs. A notable exception is Honeywell Forge, with its deep expertise in buildings (commercial and retail), industrial manufacturing, life sciences, connected worker solutions, and enterprise performance management. Ivanti Wavelink’s acquisition of an IIoT platform from its technology and channel partner WIIO Group is more typical. The pace of such mergers, acquisitions, and joint ventures will increase in IIoT/IoT sensor technology, platforms, and systems, given the revenue gains and cost reductions companies are achieving across a broad spectrum of industries today.
  • Knowledge transfer must occur at scale. As workers retire while organizations abandon the traditional apprentice model, knowledge transfer becomes a strategic priority. The goal is to equip the latest generation of workers with mobile devices that are contextually intelligent enough to provide real-time data about current conditions while providing contextual intelligence and historical knowledge. Current and future maintenance workers who don’t have decades of experience and nuanced expertise in how to fix machinery will be able to rely on AI- and ML-based systems that index captured knowledge and can provide a response to their questions in seconds. Combining knowledge captured from retiring workers with AI and ML techniques to answer current and future workers’ questions is key. The goal is to contextualize the knowledge from workers who are retiring so workers on the front line can get the answers they need to operate, repair, and work on equipment and systems.
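
As referenced above, here is a minimal sketch of the kind of SPC check that can run on or near an IIoT device; the baseline readings and 3-sigma limits are invented for illustration.

    # Minimal statistical process control (SPC) sketch: flag sensor
    # readings outside 3-sigma control limits computed from in-control data.
    from statistics import mean, stdev

    baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
    center, sigma = mean(baseline), stdev(baseline)
    upper, lower = center + 3 * sigma, center - 3 * sigma

    def out_of_control(reading: float) -> bool:
        # A point beyond the control limits signals the process (or the
        # machine) needs attention before defects accumulate.
        return reading > upper or reading < lower

    print(out_of_control(10.05), out_of_control(11.7))  # False True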

How IIoT/IoT data can drive performance gains

A full 90% of enterprise decision-makers believe IoT is critical to their success, according to Microsoft’s IoT Signals Edition 2 study. Microsoft’s survey also found that 79% of enterprises adopting IoT see AI as either a core or a secondary component of their strategy. Prescriptive maintenance, improving user experiences, and predictive maintenance are the top three reasons enterprises are integrating AI into their IIoT/IoT plans and strategies.


Above: Microsoft’s IoT Signals Edition 2 Study explores AI, digital twins, edge computing, and IIoT/IoT technology adoption in the enterprise.

Based on an analysis of the use cases provided in the Microsoft IoT Signals Edition 2 study and conversations VentureBeat has had with manufacturing, supply chain, and logistics leaders, the following recommendations can improve IIoT/IoT performance:

  • Business cases that include revenue gains and cost reductions win most often. Manufacturing leaders looking to improve track-and-trace across their supply chains using IIoT discovered that cost reduction estimates weren’t enough to convince their boards to invest. When the business case showed how greater insight accelerated inventory turns, improved cash flow, freed up working capital, or attracted new customers, funding for pilots wasn’t met with as much resistance as when cost reduction alone was proposed. The more IIoT/IoT networks deliver the data platform needed to support real-time enterprise performance management reporting and analysis, the more likely they are to be approved.
  • Design IIoT/IoT architectures today for AI edge device expansion in the future. The future of IIoT/IoT networks will be dominated by endpoint devices capable of modifying algorithms while enforcing least privileged access. Sensors’ growing intelligence and real-time process monitoring improvements are making them a primary threat vector on networks. Designing in microsegmentation and enforcing least privileged access to the individual sensor is being achieved across smart manufacturing sites today.
  • Plan now for AI and ML models that can scale from operations to accounting and finance. The leader of a manufacturing IIoT project said that the ability to interpret, in real time, how what’s happening on the shop floor affects financials sold senior management and the board on the project. Knowing how trade-offs on suppliers, machinery selection, and crew assignments impact yield rates and productivity gains is key. A bonus is that everyone on the shop floor knows whether they hit their numbers for the day. Making immediate trade-offs on product quality analysis helps alleviate variances in actual costing on every project, thanks to IIoT data.
  • Design in support for training ML models at the device algorithm level from the start. The more independent a given device can be from a contextual intelligence standpoint, including fine-tuning its ML models, the more valuable the insights it will provide. The goal is to know how and where to course-correct in a given process based on analyzing data in real time. Device-level algorithms are showing potential to provide data curation and contextualization today. Autonomous vehicles’ sensors train ML models continually, using a wide spectrum of data, including radar, to interpret road conditions, obstacles, and the presence or absence of a driver. The following graphic from McKinsey’s study Smartening up with Artificial Intelligence (AI) – What’s in it for Germany and its Industrial Sector? explains how these principles apply to autonomous vehicles; a minimal sketch of device-level model updating follows the figure.

Above: Autonomous vehicles’ reliance on a wide spectrum of data and ML models to interpret and provide prescriptive guidance resembles companies’ challenges in keeping operations on track.
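
As referenced above, here is a minimal, hypothetical sketch of device-level model updating using scikit-learn’s incremental partial_fit API; the synthetic feature vectors and labels are invented, and real edge deployments would use frameworks suited to the device.

    # Sketch: an endpoint refines its model incrementally as new labeled
    # readings arrive, instead of waiting for a full datacenter retrain.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier()
    classes = np.array([0, 1])  # 0 = normal, 1 = fault

    # Initial batch of sensor feature vectors and labels.
    X0 = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]])
    y0 = np.array([0, 0, 1])
    model.partial_fit(X0, y0, classes=classes)

    # Later, on-device: fold in a new observation without a full retrain.
    model.partial_fit(np.array([[0.85, 0.2]]), np.array([1]))

    print(model.predict([[0.8, 0.15]]))  # likely [1]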

Real-time IoT data holds the insights needed by digital transformation initiatives to succeed. However, legacy technical architectures and platforms limit IoT data’s value by not scaling to support AI and ML modeling environments, workloads, and applications at scale. As a result, organizations accumulating massive amounts of IoT data, especially manufacturers, need an IoT platform purpose-built to support new digital business models.


Categories
AI

Capital One uses NLP to discuss potential fraud with customers over SMS



Capital One has a 99% success rate when it comes to understanding customer responses to an SMS fraud alert, according to Ken Dodelin, the company’s VP of mobile, web, conversational AI, and messaging products. Dodelin was speaking today about how the bank harnesses the power of personalization and automation in a conversation with VentureBeat senior reporter Sage Lazzaro at VentureBeat’s Transform 2021 virtual conference.

When Capital One notices an anomaly in a customer’s transactions, it reaches out over SMS and asks the customer to verify those transaction details. If the customer doesn’t recognize the transaction, Capital One can proceed to treat it as fraudulent.

By adding a third-party natural language processing/understanding solution, the AI assistant Eno is able to understand customers’ written responses, such as “that was me shopping in Philadelphia,” which are not easy for machines to parse, Dodelin said.
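
To illustrate the decision flow only (this is not Capital One’s actual NLU stack, which relies on a third-party natural language understanding service), here is a deliberately crude, hypothetical sketch:

    # Illustrative sketch only: classify a free-text SMS reply to a fraud
    # alert. Substring matching is crude; real NLU models handle far
    # messier language than these keyword patterns.
    NEGATIONS = ("no,", "not me", "wasn't me", "never", "don't recognize")

    def classify_reply(text: str) -> str:
        t = text.lower()
        if any(neg in t for neg in NEGATIONS):
            return "treat_as_fraud"      # customer denies the transaction
        return "verified_by_customer"    # e.g., "yes", "that was me"

    print(classify_reply("That was me shopping in Philadelphia"))
    print(classify_reply("No, I don't recognize that charge"))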

Capital One first considered using AI for customer service in 2016, when the company was among the early recipients of Amazon Echo devices. Back then, Amazon was searching for partners across industries to see how they might create a conversational experience. Capital One said it became the first bank to develop a skill — a special program that enabled customers to accomplish tasks on Amazon platforms. In the following years, Capital One started to incorporate natural language understanding into its SMS alerts, as well as its website and mobile apps.

The current AI assistant has evolved a lot from the initial version, Dodelin said. First, the assistant is available to chat with customers in more places: whether a customer is inquiring about the bank or a car loan, they have an opportunity to ask questions about their account. Second, the company hasn’t restricted chats to conversations initiated by the customer. It relies on its advanced data infrastructure to anticipate customer needs and reach out proactively, either through push notifications or email. Each interaction includes the important information and actions customers would expect from a human assistant.

One challenge Capital One had to address was what to do if the customer wanted something not included in the options displayed on the screen. “Now we have to not just design experiences for the things we expect them to get, but continuously learn about all the other things that are coming in and the different ways they are coming in,” Dodelin said.

Context matters when applying AI technologies to customer service. In many cases, scripts are relatively consistent, regardless of who the customer is or their specific circumstances. But when creating an experience, it is important to remember that customers are being contacted under very different circumstances. Levity may not be appropriate during a moment of emotional and financial stress, for example.

Capital One has continued to enhance the service so it will proactively anticipate where a customer might need help and respond in an appropriate tone, Dodelin said.

Another challenge is anticipating the breadth of questions customers have. Customers who encounter issues often lack an outlet to express their frustration beyond having a human assistant pick up the phone, he said. Learning more about those experiences helps the AI assistant provide better answers and lets the company adjust which options are included in the user interface.

“As we learn more, we got better and expanded the audience that [the AI assistant] is available to,” Dodelin said. Capital One did not make the service available to all customers but started with a small segment of its credit card business. Over time, the company has opened the service to more customers.

“It’s a lot of work done by some very talented people here at Capital One to try to make it successful in all these different circumstances,” Dodelin concluded.


Categories
AI

Capgemini on the potential of data ecosystems to drive business value



Companies can learn a lot from each other’s data. By sharing, exchanging, and collaborating on data in what Capgemini calls data ecosystems, organizations can generate more revenue, boost customer satisfaction, improve productivity, and reduce costs.

Capgemini’s chief data architect, Steve Jones, and its artificial intelligence and analytics group offer leader, Anne-Laure Thieullent, discussed the business impact of AI and data ecosystems with Maribel Lopez, founder and principal analyst at Lopez Research, during VentureBeat’s Transform 2021 virtual conference on Monday.

Mastering data

The Capgemini Research Institute released a report analyzing companies’ grasp of data. The report focused on two factors: data behaviors and technology. Data behaviors refers to how companies use technology to make decisions and how data literacy skills are developed across the business to democratize the use of data, Thieullent explained. The second factor looks at the technology companies use, or how far they have progressed in applying machine learning techniques to their forecasts and analytics.

Capgemini’s researchers found that only 16% of organizations could be classified as advanced along both factors. These so-called data masters are 22% more profitable than their peers, and they generated 70% more revenue on average per employee, the report found.

Taking data and AI systems seriously

When did companies begin to take data and AI seriously? Part of the interest came about because of the hype around AI, Thieullent said.

“With the fear that jobs like legal, marketing, HR, and finance are going to be disrupted by AI and automation, business leaders have started to really get into tech and understand how they could master their own destiny,” Thieullent said.

Many businesses have also become more interconnected. For example, retailers that want to understand their customers’ buying patterns can learn a lot by looking at the data collected by their logistic partners. In the pharmaceutical industry, companies can learn from each other’s research and development efforts.

“What if you could learn from drug discovery data from many pharma companies in order to bring to market safer drugs faster?” Thieullent asked.

Recognizing the value of data ecosystems

Thieullent defined a data ecosystem as “partnerships among organizations that allow them to share data and insights to create new value.” The value of ecosystem-based data sharing can reach more than 9% of an organization’s total annual revenue, she noted.

Companies that engage deeply in data ecosystems generate more sales from fixed assets and have twice the market capitalization, Thieullent said.

The benefits also go beyond finances: Capgemini found that data ecosystems can improve customer satisfaction by 15%, increase efficiency by 14%, and reduce annual costs by 11%.

One of the reasons companies began sharing data was because they saw what AI can do with it, Capgemini’s Jones said. For instance, forecasting the supply chain can involve data related to weather, logistics, and manufacturing. This kind of interconnectivity has pushed companies to look beyond the limitations of their own businesses.

“They’re recognizing that they can actually have business decisions and make optimizations which are driven by things that aren’t under their control, that aren’t what they know,” Jones said.

Starting internally

Companies should start from within, considering what success looks like for different teams and striking a balance. For the chief information officer, it’s about changing people’s mindsets to think of data as an asset, not just a question of what system or table the data comes from, Jones said.

Jones’ advice: “Stop sweating the data technology. Stop sweating the infrastructure technologies. Industrialization, standardization, and looking at how you govern data as an asset, not table storage mechanisms.”

Thieullent urged companies to start thinking about the data they don’t have and who they can work with to get it. “Think of what you could do if you had access to the data to better know your end customer and build new products,” she said.


Categories
AI

Oracle’s Autonomous Data Warehouse expansion offers potential upside for tech professionals



In March, Oracle announced an expansion to its Autonomous Data Warehouse that can bring the benefits of ADW — automating previously manual tasks — to large groups of new potential users. Oracle calls the expansion “the first self-driving database,” and its goal with the new features is to “completely transform cloud data warehousing from a complex ecosystem … that requires extensive expertise into an intuitive, point-and-click experience” that will enable all types of professionals, from engineers to analysts and data scientists to business users, to access, work with, and build business insights with data, all without the help of IT.

A serious bottleneck to data work delivering business value across industries is the amount of expertise required at many steps along the data pipeline. The democratization of data tooling is about increasing ROI when it comes to an organization’s data capabilities, as well as increasing the total addressable market for Oracle’s ADW. Oracle is also reducing the total cost of ownership with elastic scaling and auto-scaling for changing workloads. We spoke with George Lumpkin, Neil Mendelson, and William Endress from Oracle, who shared their time and perspective for this article.

The landscape: democratization of data tooling

There is a growing movement of data tooling democratization, and the space is getting increasingly crowded with tools such as AWS SageMaker Studio, DataRobot, Qlik, Tableau, and Looker. It is telling that Google has acquired Looker and Salesforce has acquired Tableau. On top of this, the three major cloud providers all offer drag-and-drop data tooling to various extents: AWS has an increasing number of GUI-based data transformation and machine learning tools; Microsoft Azure has a point-and-click visual interface for machine learning, “data preparation, feature engineering, training algorithms, and model evaluation”; and Google Cloud Platform has similar functionality as part of its Cloud AutoML offering.

In its announcement, Oracle frames the ADW enhancements as self-service tools for:

  • Analysts, including loading and transforming data, building business models, and extracting insights from data (note that ADW also provides some interesting third-party integrations, such as automatically building data models that can be consumed by Tableau or Qlik).
  • Data scientists (and “citizen data scientists”), including building and deploying machine learning models (in a video, Andrew Mendelsohn, executive VP of Oracle Database Server Technologies, describes how data scientists can “easily create models with AutoML” and “integrate ML models into apps via REST or SQL”).
  • Line-of-business (LoB) developers, including low-code app development and API-driven development.

Oracle Autonomous Data Warehouse competes with incumbent products including Amazon Redshift, Azure Synapse, Google BigQuery, and Snowflake. But Oracle does not necessarily see ADW as directly competitive, targeting existing on-premises customers in the short run but with an eye to self-service ones in the longer term. As Lumpkin explained, “Many of Oracle’s Autonomous Data Warehouse customers are existing on-prem users of Oracle who are looking to migrate to the cloud. However, we have also designed Autonomous Data Warehouse for the self-service market, with easy interfaces that allow sales and marketing operations teams to move their team’s workloads to the cloud.”

Oracle’s strategy highlights a tension in tech: Traditional CIOs with legions of database administrators (DBAs) are worried about the migration to the cloud. DBAs who have built entire careers around being an expert at patching and tuning databases may find themselves lacking work in a self-service world where cloud providers like Oracle are patching and tuning enterprise databases.

CIOs who measure their success based on headcount and on-premises spend might also be worried. As Mendelson put it: “70% of what the DBA used to do should be automated.” Given that Oracle’s legacy business is still catering towards DBAs and CIOs, how do they feel about potentially upsetting their traditional advocates? While they acknowledged that automation would reduce some of the tasks traditionally performed by DBAs, they were not worried about complete job redundancy. Lumpkin explained, “By lowering the total cost of ownership for analytics, the business will be demanding 5x the number of databases.” In other words, DBAs and CIOs will see the same transformation that accountants saw with the advent of the spreadsheet, and there should be plenty of higher-level strategic work for DBAs in the new era for Oracle cloud.

Of course, this isn’t to say there won’t be any changes. After all, change is inevitable as certain functions are automated away. DBAs need to refocus on their unique value add. “Some DBAs may have built their skill sets around patching Oracle databases,” explains Lumpkin. “That’s now automated because it was the same for every customer, and we could do it more consistently and reliably in the cloud. It was never adding value to the customer. What you want is your people doing work that is unique to your datasets and your organization.”

We did a deep dive into different parts of ADW tools. Here’s what we found.

Autonomous Data Warehouse setup

The automated provisioning and database setup tools were well done. The in-app screens and tutorials mostly matched one another, and we could get set up in about five minutes. That said, there were still some rather annoying steps. For example, the user needs to create both a “database user” and an “analytics user.” This makes a lot of sense on centrally administered databases serving an entire enterprise, but is overkill for a tool for a single analyst trying to get started (much less for a tutorial for an analyst tool). The vast majority of data scientists and data analysts do not want to be database administrators, and the tool could benefit from a mode that hides this detail from the end user. This is a shortcoming that Oracle understands. As Lumpkin explains, “We have been looking at how to simplify the create-user flow for new databases. There are competing best practices for security [separation of duties between multiple users] and fastest onboarding experiences [with only one user].” Overall, the documentation is very well done, and onboarding is straightforward but could be a bit smoother.

Data insights

The automated insights tool is also interesting and could prove powerful. The tool runs many queries against your dataset, generating predicted values for a target column, then highlights the unexpected results where predicted values deviate significantly from actual values. The algorithm appears to run multiple groupbys and identify groups with highly unexpected values. While this may create some risk of data dredging if used naively, it does provide quick speedups: a large fraction of data analysis comes from understanding unexpected results, and this feature can help with that.
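
Here is a rough pandas approximation of what the tool appears to be doing: grouping by each dimension and flagging groups whose mean target deviates strongly from the overall mean. The columns, data, and deviation threshold are invented for illustration.

    # Rough approximation of the insights tool's apparent approach.
    import pandas as pd

    df = pd.DataFrame({
        "region":  ["east", "east", "west", "west", "west"],
        "channel": ["web", "store", "web", "web", "store"],
        "sales":   [100.0, 110.0, 95.0, 105.0, 520.0],
    })

    overall = df["sales"].mean()
    for dim in ["region", "channel"]:
        stats = df.groupby(dim)["sales"].agg(["mean", "count"])
        # Flag groups whose mean is far from the overall mean.
        surprising = stats[(stats["mean"] - overall).abs() > 0.3 * overall]
        if not surprising.empty:
            print(f"Unexpected values by {dim}:")
            print(surprising)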

Business model

One of the pervasive challenges with data modeling is defining business logic on raw enterprise data. Typically, this logic might reside in the heads of individual business analysts, leading to the inconsistent application of business logic across reports by different analysts. Oracle’s Data Tools provide a “Business Model” centralizing business logic into the database, increasing consistency and improving performance via caching. The tool offers some excellent features, like automatically detecting schemas and finding the keys for table joins. However, some of these features may not be very robust. While the tool could identify many valuable potential table joins in the tutorial movie dataset, it could only find a small subset of the relationships in the publicly available MovieLens dataset. Nonetheless, this is a valuable tool for solving a critical enterprise problem.
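
For intuition, here is one hypothetical way such join-key detection can work: scoring same-typed column pairs across two tables by the overlap of their distinct values. The tables and column names are invented, and Oracle’s actual algorithm is not public.

    # Hypothetical sketch: propose join keys by value overlap between columns.
    import pandas as pd

    movies = pd.DataFrame({"movie_id": [1, 2, 3], "title": ["A", "B", "C"]})
    ratings = pd.DataFrame({"movie_id": [1, 2, 2, 3, 9], "stars": [5, 4, 3, 4, 2]})

    def overlap(a: pd.Series, b: pd.Series) -> float:
        # Fraction of the smaller column's distinct values found in the other.
        ua, ub = set(a.dropna().unique()), set(b.dropna().unique())
        return len(ua & ub) / min(len(ua), len(ub))

    candidates = [
        (ca, cb, overlap(movies[ca], ratings[cb]))
        for ca in movies.columns for cb in ratings.columns
        if movies[ca].dtype == ratings[cb].dtype
    ]
    print(max(candidates, key=lambda t: t[2]))  # ('movie_id', 'movie_id', 1.0)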

Data transform

The data transform tool provides a GUI for specifying functions to clean data. Cleaning data is the No. 1 job of a data scientist or data analyst, making this a critical feature. Unfortunately, the tool makes some questionable design choices, and they stem from the use of a GUI: rather than specifying the transformation as a CREATE TABLE query in SQL, you write code by awkwardly connecting functions with lines and clicking through menus to select options. While the end result is a CREATE TABLE query, this abandons the syntax data scientists and analysts are familiar with, makes the work less reproducible and less portable, and ultimately makes analysts and their queries more dependent on Oracle’s GUI. Data professionals eager to develop transferable skills and sidestep tool lock-in may wish to avoid this feature.
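
For contrast, here is what the text-based alternative looks like: the same kind of cleaning step expressed as a plain CREATE TABLE AS SELECT statement that can live in version control and run against any Oracle database. The table and column names below are hypothetical.

```python
import oracledb

# A hypothetical cleaning step written as plain SQL. Kept as text, it is
# reproducible, portable, and easy to diff and review.
CLEAN_MOVIES_SQL = """
CREATE TABLE movies_clean AS
SELECT movie_id,
       TRIM(title)                     AS title,
       COALESCE(genre, 'Unknown')      AS genre,
       CAST(release_year AS NUMBER(4)) AS release_year
FROM   movies_raw
WHERE  movie_id IS NOT NULL
"""

conn = oracledb.connect(user="analyst", password="<password>",
                        dsn="<adw-connection-string>")
with conn.cursor() as cur:
    cur.execute(CLEAN_MOVIES_SQL)
```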

To be clear, drag-and-drop features can be useful in a SQL integrated development environment (IDE). For example, Count.co, which offers a BI notebook for analysts, supports dragging and dropping table and field names into SQL queries. This nicely connects the data catalog to the SQL IDE and helps prevent misspelled table or field names, without abandoning the fundamental text-based query scripts we are used to. Overall, it felt much more natural as an interface.

Oracle Machine Learning

Oracle’s Machine Learning offering is growing and now includes ML notebooks, AutoML capabilities, and model deployment tools. One of the big challenges for Oracle and its competitors will be to demonstrate utility to data scientists and, more generally, people working in both ML and AI. While these new capabilities have come a long way, there’s still room for improvement. Making data scientists use Apache Zeppelin-based notebooks will likely hamper adoption when so many of us are Jupyter natives; so will preventing users from custom-installing Python packages, such as PyTorch and TensorFlow.

The problem Oracle is attempting to solve here is one of the biggest in the space: How do you get data scientists and machine learners to use enterprise data that sits in databases such as Oracle DBs? The ability to use familiar objects such as pandas data frames and APIs such as matplotlib and scikit-learn is a good step in the right direction, as is the decision to host notebooks. However, we need to see more: Data scientists often prototype code on their laptops in Jupyter Notebooks, VSCode, or PyCharm (among many other choices) with cutting-edge OSS package releases. When they move their code to production, they need enterprise tools that mimic their local workflows and allow them to utilize the full suite of OSS packages.
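
To make that gap concrete, below is the sort of laptop prototype (here on synthetic data) that production tooling needs to mirror. The specific model is beside the point; the free choice of standard OSS APIs is the point.

```python
# A typical local prototype: standard OSS packages, standard APIs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for enterprise data pulled from a warehouse.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```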

A representative of Oracle said that the ability to custom-install packages on Autonomous Database is a road map item for future releases. In the meantime, the inclusion of scikit-learn in OML4Py lets users work with familiar Python ML algorithms directly in notebooks or through embedded Python execution, where user-defined Python functions run in database-spawned and controlled Python engines. This supplements the scalable, parallelized, and distributed in-database algorithms and makes it possible to manipulate data in database tables and views using Python syntax. Overall, this is a step in the right direction.
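
Our reading of the documented OML4Py pattern is sketched below. Treat the function names and signatures as approximations drawn from Oracle’s docs rather than tested code, and the MOVIE_FEATURES table as hypothetical.

```python
import oml  # OML4Py, available inside Oracle Machine Learning notebooks

def train_in_db(dat):
    # Runs inside a database-spawned Python engine; `dat` arrives as a
    # pandas DataFrame materialized from the table referenced below.
    from sklearn.linear_model import LogisticRegression
    X = dat.drop(columns=["LABEL"])
    y = dat["LABEL"]
    return LogisticRegression(max_iter=1000).fit(X, y).score(X, y)

# Proxy object for a (hypothetical) database table; no rows are pulled
# to the client at this point.
features = oml.sync(table="MOVIE_FEATURES")

# Embedded Python execution: ship the function to the database engine.
score = oml.table_apply(data=features, func=train_in_db)
```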

Oracle Machine Learning’s documentation and example notebook library are extensive and valuable; they let us get up and running in a notebook in a matter of minutes, with intuitive SQL and Python examples of anomaly detection, classification, and clustering, among many others. This is welcome in a tooling landscape that all too often falls short on useful DevRel material. Learning new tooling is a serious bottleneck, and Oracle has removed a lot of friction here.

Oracle has also recognized that the MLOps space is heating up and that the ability to deploy and productionize machine learning models is now table stakes. To this end, OML4Py provides a REST API for embedded Python execution, as well as one that lets users store ML models and create scoring endpoints for them. It is welcome that this functionality supports not only classification and regression OML models but also Open Neural Network Exchange (ONNX) format models, which include TensorFlow models. Once again, the documentation here is extensive and very useful.
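
Scoring a deployed model then reduces to an authenticated HTTP call. The sketch below uses the requests library; the endpoint path, token, and payload shape are placeholders based on our reading of the docs, not Oracle’s exact contract.

```python
import requests

# Hypothetical endpoint and token; the real values come from the OML
# model deployment workflow described in Oracle's documentation.
ENDPOINT = "https://<oml-server>/omlmod/v1/deployment/<model-name>/score"
TOKEN = "<access-token>"

payload = {"inputRecords": [{"feature_1": 0.42, "feature_2": 1.7}]}
resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # predicted labels/scores for the submitted records
```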

Graph Analytics

Oracle’s Graph Analytics offers the ability to run graph queries on databases. It is unusual in allowing users to query their data warehouse data directly; by contrast, Neptune, AWS’s graph solution, requires loading data out of the Redshift data warehouse. Graph Analytics uses PGQL, an Oracle-supported language that queries graph data the way SQL queries structured tabular data. The language’s design hews close to SQL, and it is released under the open-source Apache 2.0 License. However, the main contributor is an Oracle employee, and Oracle is the only vendor supporting PGQL. The preferred mode of interacting with PGQL is through the company’s proprietary Graph Studio tool, which doesn’t promote reproducibility, advanced workflows, or interfacing with the rest of the development ecosystem. Lumpkin promised that REST APIs with Python and Java support would be coming soon.
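
To give a flavor of PGQL, here is a query against a hypothetical movie graph; note how closely the shape tracks SQL, with the graph pattern living in the FROM clause. Until the promised Python API arrives, strings like this are run through Graph Studio.

```python
# PGQL reads like SQL, with a graph pattern in the FROM clause.
# The graph name, labels, and properties below are hypothetical.
PGQL_QUERY = """
SELECT a.name, COUNT(*) AS films_together
FROM MATCH (a:Actor) -[:ACTED_IN]-> (m:Movie) <-[:ACTED_IN]- (b:Actor)
     ON movie_graph
WHERE b.name = 'Tom Hanks'
GROUP BY a.name
ORDER BY films_together DESC
LIMIT 10
"""
```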

Perhaps unsurprisingly, Oracle’s graph query language appears to be less popular than Cypher, the query language supported by neo4j, a rival graph database (the PGQL repository has 114 stars on GitHub, while neo4j has 8K+). A proposal to bring together PGQL, Cypher, and G-CORE has over 95% support across nearly 4K votes, has its own landing page, and is gaining traction internationally. The survey methodology may be questionable (the proposal is authored by the Neo4j team on a Neo4j website), but it’s understandable why graph database users would prefer a more widely adopted open standard. Hopefully, a common graph query standard will emerge to consolidate the competing languages and simplify graph querying for data scientists.

Final thoughts

Oracle is a large incumbent in an increasingly crowded space that’s moving rapidly. The company is playing catch-up with recent developments in open-source tooling, with the long tail of emerging data-tooling businesses, and with the ever-growing total addressable market of the space. We’re talking not only about well-seasoned data scientists and machine learning engineers, but also about the increasing number of data analysts and citizen data scientists.

For Oracle, best known for its database software, these recent moves are intended to extend its offerings into the data analytics, data science, machine learning, and AI spaces. In many ways, this is the data-tooling equivalent of Disney’s move into streaming with Disney+. For the most part, Oracle’s recent expansion of its Autonomous Data Warehouse delivers on its promise: to bring the benefits of ADW to large groups of new potential users. Lingering questions remain over whether these tools will meet all the needs of working data professionals, such as the ability to work with their open-source packages of choice. We urge Oracle to prioritize such developments on its road map, as access to open-source tooling is now table stakes for working data scientists.



Samsung Galaxy S22 leak tips iPhone 13-matching potential

The Samsung Galaxy S22 was leaked today in three sizes, each of them either relatively small or surprisingly large. Samsung would appear to be spreading out its options, according to the data shared. The Samsung Galaxy S21 had display sizes of 6.2 inches, 6.7 inches, and 6.8 inches. That wasn’t all that far off from the Samsung Galaxy S20, which came with display options of 6.2, 6.7, and 6.9 inches.

The latest leak this week, from MauriQHD, suggests there’ll still be three display sizes for the Samsung Galaxy S22. The largest display is tipped at 6.81 inches, splitting the gap between the largest sizes of the Galaxy S20 and Galaxy S21.

The medium Galaxy S22 (probably the Galaxy S22+) is rumored to roll with a 6.55-inch display. That’s 0.15 inch smaller than the standard “plus” devices in the Galaxy S line over the past couple of years, and it splits (on the high side) the gap between the small and medium Galaxy S display sizes of the last couple of generations.

The Samsung Galaxy S22, the smallest device in the Galaxy S22 lineup, was tipped this week to come with a 6.06-inch display. That’s approximately 0.14 inch smaller than the smallest main-lineup Galaxy S device released over the last couple of generations. It’s also quite possible we’ll see some advanced camera-hiding tech with this generation of devices.

The leak linked above also notes that the Galaxy S22 Ultra (or whatever the largest device ends up being called) will work with a low-temperature polycrystalline oxide (LTPO) display, likely similar to the panel Samsung Display was rumored earlier this year to be creating for the next major iPhone release. Can you imagine the newest Samsung Galaxy S and the newest iPhone shipping with the same sort of display tech in the same year?




Lip-syncing app Wombo shows the messy, meme-laden potential of deepfakes

You’ve probably already seen a Wombo video floating around your social media. Maybe it was Ryu from Street Fighter singing “Witch Doctor” or the last three heads of the US Federal Reserve miming in unison to Rick Astley’s “Never Gonna Give You Up.” Each clip features exaggerated facial expressions and uncanny, sometimes nightmarish animation. They’re stupid, fun, and offer a useful look at the current state of deepfakes.

It’s certainly getting quicker and easier to make AI-generated fakes, but the more convincing they are, the more work they take. The realistic Tom Cruise deepfakes that went viral on TikTok, for example, required an experienced VFX artist, a top-flight impersonator, and weeks of preparation to pull off. One-click fakes created with zero effort and expertise, by comparison, still look like those made by the Wombo app, and will for the immediate future. In the short term, at least, deepfakes are going to be obviously fabricated and instant meme-bait.

The Wombo app launched late last month from Canada after a short development process. “Back in August 2020 I had the idea for Wombo while smoking a joint with my roommate on the roof,” app creator and Wombo CEO Ben-Zion Benkhin tells The Verge. Releasing the product was “an enormous joy,” he says. “I’ve been following the AI space, following the meme space, following the deepfake space, and just saw the opportunity to do something cool.” In just a few weeks, Benkhin estimates the app has seen some 2 million downloads.

Wombo is free and easy to use. Just snap a picture of your face or upload an image from your camera roll, and push a button to have the image lip-sync to one of a handful of meme-adjacent songs. The app’s software will work its magic on anything that even vaguely resembles a face and many things that don’t. Although similar apps in the past have been dogged by privacy fears, Benkhin is adamant users’ data is safe. “We take privacy really seriously,” he says. “All the data gets deleted and we don’t share it or send it to anyone else.”

The app’s name comes from esports slang, specifically Super Smash Bros. Melee. “If a player lands like a crazy combination then the casters will start yelling ‘Wombo Combo! Wombo Combo!’” says Benkhin. True to these origins, Wombo has proved particularly popular with gamers who’ve used it to animate characters from titles like League of Legends, Fallout: New Vegas, and Dragon Age. “I did some digging into [the origins of the slang],” says Benkhin, “and apparently there was some pizza place that started all this, where they would put a shit-ton of toppings on all their pizza and call it a Wombo Combo.”

Benkhin says the app works by morphing faces using predefined choreography. He and his team shot the base video for each song in his studio (“which is really just a room in my apartment”) and then use these to animate each image. “We steal the motions from their face and apply it to your photo,” he says. The app is also an example of the fast-paced world of AI research, where new techniques can become consumer products in a matter of weeks. Benkhin notes that the software is built “on top of existing work” but with subsequent tweaks and improvements that make it “our own proprietary model.”

Currently, Wombo offers just 14 short clips of songs to lip-sync with, but Benkhin says he plans to expand these options soon. When asked whether the app has the proper licenses for the music it uses, he demurs but says the team is working on it.

As with TikTok, though, it seems the reach Wombo offers could help allay license-holders’ worries about rights. Wombo has already been approached by artists wanting to get their music on the app, says Benkhin, and this could plausibly become a revenue stream in addition to the current premium tier (which pays for priority processing and removes in-app ads). “It’s going to give [artists] a completely new way of engaging audiences,” he says. “It gives them this new viral marketing tool.”

Wombo is far from the first app to use machine learning to create quick and fun deepfakes. Others include ReFace and FaceApp. But it’s the latest example of what will be an ever-more prominent trend, as deepfake apps become the latest meme templates, allowing users to mash together favorite characters, trending songs, choreographed dances, public figures, and so much more. The future of deepfakes will definitely be memeified.


