InRule: 64% worry about job security while working with AI


Nearly two-thirds (64%) of enterprise decision-makers with responsibility for machine learning, application development, and decision management in their organizations are worried about job security, according to new research by business software company InRule.


Above: 64% of decision-makers consider job security as their biggest personal challenge with AI technologies.

Image Credit: InRule

There are many use cases for AI in the enterprise, from driving market and customer insights to testing new products and mitigating compliance and privacy risks, and many decision-makers report feeling overwhelmed by the options. At least one-third of decision-makers report having too many use cases across business functions like sales, marketing, and customer experience. More than half (53%) of respondents said customer experience was the top business function for AI — and that they have too many AI use cases in that area.

The problem of having too many use cases will only grow, as 67% of decision-makers said they expect their AI/ML usage to increase over the next year and a half.

Challenges with collaboration impede AI success. More than half (51%) of decision-makers say their organization has too much data, and 42% struggle to identify and gain access to the right data. Organizational silos exacerbate the inaccessibility of data, hindering collaboration between experts and data scientists.

AI operations are critical to gaining essential insights about customers and markets, but there are myths and misconceptions that may stifle AI projects before they can get off the ground, InRule’s study found. One such misconception is that AI projects can’t be done without enough data scientists, when the reality is that many AI and ML tools are available.

Another is that using AI can have unintended consequences that could harm the business. Sixty-four percent of decision-makers said it is “Important” or “Critical” for their organization to defend or prove the efficacy of its digital decisions. With the growing number of privacy regulations, enterprises have to be able to justify what they are doing with the data. Even so, 58% of decision-makers find defending or proving the efficacy of their digital decisions challenging. They are willing to share visual representations of their outcomes and inputs used, but less likely to show the code they used or the questions driving the decisions, the study found.

Part of that may be because many organizations don’t have the right tools, technology, processes, and culture to identify the right questions for digital decisioning, InRule found. More than half (57%) of decision-makers report not having the tools and technology in place to identify the right questions for their digital decisions, and 42% don’t have the right processes or a culture of collaboration, the study said.

The study, which consisted of three interviews and an online survey of 302 U.S.-based individuals, focused on decision-makers’ perceptions of AI. “AI is a critical source of industry competitiveness. The fastest path to AI solutions is to formulate and execute a strategy to scale AI use cases based on reality unencumbered by myths,” the report said.

Read the full report from InRule.




TikTok Tom Cruise deepfake creator: public shouldn’t worry about ‘one-click fakes’

When a series of spookily convincing Tom Cruise deepfakes went viral on TikTok, some suggested it was a chilling sign of things to come — a harbinger of an era in which AI will let anyone make fake videos of anyone else. The videos’ creator, though, Belgian VFX specialist Chris Ume, says this is far from the case. Speaking to The Verge about his viral clips, Ume stresses the amount of time and effort that went into making each deepfake, as well as the importance of working with a top-flight Tom Cruise impersonator, Miles Fisher.

“You can’t do it by just pressing a button,” says Ume. “That’s important, that’s a message I want to tell people.” Each clip took weeks of work, he says, using the open-source DeepFaceLab algorithm as well as established video editing tools. “By combining traditional CGI and VFX with deepfakes, it makes it better. I make sure you don’t see any of the glitches.”

Ume has been working with deepfakes for years, including creating the effects for the “Sassy Justice” series made by South Park’s Trey Parker and Matt Stone. He started working on Cruise when he saw a video by Fisher announcing a fictitious run for president by the Hollywood star. The pair then worked together on a follow-up and decided to put a series of “harmless” clips up on TikTok. Their account, @deeptomcruise, quickly racked up tens of thousands of followers and likes. Ume pulled the videos briefly but then restored them.

“It’s fulfilled its purpose,” he says of the account. “We had fun. I created awareness. I showed my skills. We made people smile. And that’s it, the project is done.” A spokesperson from TikTok told The Verge that the account was well within its rules for parody uses of deepfakes, and Ume notes that Cruise — the real Tom Cruise — has since made his own official account, perhaps as a result of seeing his AI doppelgänger go viral.

Deepfake technology has been developing for years now, and there’s no doubt that the results are getting more realistic and easier to make. Although there has been much speculation about the potential harm such technology could cause in politics, so far those effects have largely failed to materialize. Where the technology is definitely causing damage is in the creation of revenge porn and nonconsensual pornography targeting women. In those cases, the fake videos or images don’t have to be realistic to create tremendous damage. Simply threatening someone with the release of fake imagery, or creating rumors about the existence of such content, can be enough to ruin reputations and careers.

The Tom Cruise fakes, though, show a much more beneficial use of the technology: as another part of the CGI toolkit. Ume says there are so many uses for deepfakes, from dubbing actors in film and TV, to restoring old footage, to animating CGI characters. What he stresses, though, is the incompleteness of the technology operating by itself.

Creating the fakes required two months of training the base AI models (using a pair of NVIDIA RTX 8000 GPUs) on footage of Cruise, plus days of further processing for each clip. After that, Ume had to go through each video, frame by frame, making small adjustments to sell the overall effect: smoothing a line here and covering up a glitch there. “The most difficult thing is making it look alive,” he says. “You can see it in the eyes when it’s not right.”

Ume says a huge amount of credit goes to Fisher, a TV and film actor who captured the exaggerated mannerisms of Cruise, from his manic laugh to his intense delivery. “He’s a really talented actor,” says Ume. “I just do the visual stuff.” Even then, if you look closely, you can still see moments where the illusion fails, as in the clip below where Fisher’s eyes and mouth glitch for a second as he puts the sunglasses on.

Blink and you’ll miss it: look closely and you can see Fisher’s mouth and eye glitch. (GIF: The Verge)

Although Ume’s point is that his deepfakes take a lot of work and a professional impersonator, it’s also clear that the technology will improve over time. Exactly how easy it will be to make seamless fakes in the future is difficult to predict, and experts are busy developing tools that can automatically identify fakes or verify unedited footage.

Ume, though, says he isn’t too worried about the future. We’ve developed such technology before and society’s conception of truth has more or less survived. “It’s like Photoshop 20 years ago, people didn’t know what photo editing was, and now they know about these fakes,” he says. As deepfakes become more and more of a staple in TV and movies, people’s expectations will change, as they did for imagery in the age of Photoshop. One thing’s for certain, says Ume, and it’s that the genie can’t be put back in the bottle. “Deepfakes are here to stay,” he says. “Everyone believes in it.”

Update March 5th, 12:11PM ET: Updated to note that Ume and Fisher have now restored the videos to the @deeptomcruise TikTok account.



If this March Apple event leak is true, OnePlus has reason to worry

Now that March is here, we’re getting into spring reveal event territory, and today we may have learned the date for the next Apple event. Assuming today’s rumor turns out to be true, that event could be just a couple of weeks away. We’re also hearing about the devices Apple might announce during this event, so thanks to this leak, we could already have a very good idea of what to expect from Apple’s next event.

On Twitter today, YouTuber and noted leaker Jon Prosser suggested that Apple’s next event will happen on March 23rd. Previous leaks pointed to a March 16th date, so while the leaked information agrees that the event is happening at some point in March, Prosser’s leak pushes the date back a bit.

In a follow-up tweet, Prosser says a “reliable source” told him that AirTags, iPad Pro, AirPods, and Apple TV are all “ready.” We’re told to “take that however you like,” though the suggestion certainly seems to be that any or all of these products could be revealed during Apple’s event.

It’s worth pointing out that, should this date turn out to be correct, Apple won’t be the only company hosting a reveal event on March 23rd. OnePlus has also confirmed that it will be fully revealing the OnePlus 9 lineup on March 23rd, so if Apple is indeed plotting the same date for its own event and neither company reschedules, that will be a packed day in the world of consumer technology.

We’ll see what Apple announces, but if this event is happening at some point in March, then we should get official word of it soon. We’ll let you know when that official word comes down the pipeline, so stay tuned for more.
