
The future of AI is a conversation with a computer

How would an AI writing program start an article on the future of AI writing? Well, there’s one easy way to find out: I used the best known of these tools, OpenAI’s GPT-3, to do the job for me.

Using GPT-3 is disarmingly simple. You have a text box to type into and a menu on the side to adjust parameters, like the “temperature” of the response (which essentially equates to randomness). You type, hit enter, and GPT-3 completes what you’ve written, be it poetry, fiction, or code. I tried inputting a simple headline and a few sentences about the topic, and GPT-3 began to fill in the details. It told me that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”
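For a sense of what that text box maps to under the hood, here is a minimal sketch of driving a GPT-3-style model programmatically. It assumes the completions-era OpenAI Python client and an API key in the OPENAI_API_KEY environment variable; the prompt and parameter values are illustrative, not the ones I used.

```python
# Minimal sketch of a GPT-3 completion call (completions-era OpenAI Python
# client). The prompt and parameter values here are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",        # a base GPT-3 model
    prompt="The future of AI is a conversation with a computer.\n\n",
    max_tokens=150,          # how much text to generate
    temperature=0.8,         # higher values make the output more random
)

print(response.choices[0].text)
```

Raising the temperature toward 1 produces looser, more surprising completions; lowering it toward 0 makes the model repeat its most statistically likely phrasing.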

So far, so good, I thought. I hit enter again, and the program added a quote from Google’s head of AI, Jeff Dean, then referenced an experimental piece of software from the 1960s before promising that an “AI Revolution” was coming that would reap immense rewards across the fields of science, technology, and medicine.

Fine, I thought. Then I thought a little more and did some googling. I soon discovered that the quote from Dean was made up, that the experimental software never existed, and while the promise of an “AI Revolution” was all well and good, it wasn’t any different from the vague nonsense found in hype-filled press releases. Really, what was most revealing about the future of AI was not what GPT-3 said but how it said it. The medium is the message, as Marshall McLuhan pointed out many years ago. And here, the medium included plausible fabrications; endless output; and, crucially, an opportunity to respond to the robot writer.

If we’re looking ahead at the next 10 years of AI development, trying to predict how we will interact with increasingly intelligent software, it helps to consider those tools that can talk back. AI writing models may only be digital parrots, able to copy form without understanding meaning, but they still create a dialogue with the user. This is something that often seems missing from the introduction of AI systems like facial recognition algorithms (which are imposed upon us) or self-driving cars (where the public becomes the test subject in a dangerous experiment). With AI writing tools, there is the possibility for a conversation.


If you use Gmail or Google Docs, then you’ve probably already encountered this technology. In Google’s products, AI editors lurk in the blank space in front of your cursor, manifesting textual specters that suggest how to finish a sentence or reply to an email. Often, their prompts are just simple platitudes — “Thanks!”, “Great idea!”, “Let’s talk next week!” — but sometimes these tools seem to be taking a stronger editorial line, pushing your response in a certain direction. Such suggestions are intended to be helpful, of course, but they seem to provoke annoyance as frequently as gratitude.

To understand how AI systems learn to generate such suggestions, imagine being given two lists of words. One starts off “eggs, flour, spatula,” and the other goes “paint, crayons, scissors.” If you had to add the items “milk” and “glitter” to these lists, which would you choose and with how much confidence? And what if that word was “brush” instead? Does that belong in the kitchen, where it might apply an egg wash, or is it more firmly located in the world of arts-and-crafts? Quantifying this sort of context is how AI writing tools learn to make their suggestions. They mine vast amounts of text data to create statistical maps of the relationships between words, and use this information to complete what you write. When you start typing, they start predicting which words should come next.
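To make that intuition concrete, here is a toy sketch of the same statistical idea, using a made-up miniature corpus rather than anything approaching the scale of a real model: count which words follow which, then use those counts to score likely continuations.

```python
# Toy next-word prediction from co-occurrence counts. The "corpus" is invented
# and tiny; real language models learn from billions of words.
from collections import Counter, defaultdict

corpus = (
    "eggs flour spatula milk whisk . "
    "paint crayons scissors glitter glue . "
    "eggs milk flour butter . "
    "paint glitter scissors paper ."
).split()

# Map each word to a count of the words observed immediately after it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word, k=2):
    """Return the k continuations of `word` seen most often in the corpus."""
    return following[word].most_common(k)

print(predict_next("eggs"))   # kitchen words dominate: flour, milk
print(predict_next("paint"))  # arts-and-crafts words dominate: crayons, glitter
```

A large language model does essentially this with far more context, learned relationships between words rather than raw counts, and billions of adjustable parameters.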

Features like Gmail’s Smart Reply are only the most obvious example of how these systems — often known as large language models — are working their way into the written world. AI chatbots designed for companionship have become increasingly popular, with some, like Microsoft’s Chinese Xiaoice, attracting tens of millions of users. Choose-your-own-adventure-style text games with AI dungeon masters are attracting users by letting people tell stories collaboratively with computers. And a host of startups offer multipurpose AI text tools that summarize, rephrase, expand, and alter users’ input with varying degrees of competence. They can help you to write fiction or school essays, say their creators, or they might just fill the web with endless spam.

The ability of the underlying software to actually understand language is a topic of hot debate (one that tends to arrive, time and time again, at the same question: what do we mean by “understand” anyway?). But these models’ fluency across genres is undeniable. For those enamored with this technology, scale is key to their success. It’s by making these models and their training data bigger and bigger that they’ve been able to improve so quickly. Take, for example, the training data used to create GPT-3. The exact size of the input is difficult to calculate, but one estimate suggests that the entirety of Wikipedia in English (3.9 billion words and more than 6 million articles) makes up only 0.6 percent of the total.
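Taking those figures at face value, a quick back-of-the-envelope calculation shows what that 0.6 percent implies about the size of the full training corpus:

```python
# Back-of-the-envelope only, using the figures quoted above.
wikipedia_words = 3.9e9     # English Wikipedia, in words
wikipedia_share = 0.006     # roughly 0.6% of GPT-3's training text
estimated_total_words = wikipedia_words / wikipedia_share
print(f"{estimated_total_words:.2e}")   # ~6.5e11, i.e. on the order of 650 billion words
```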

Relying on scale to build these systems has benefits and drawbacks. From an engineering perspective, it allows for fast improvements in quality: just add more data and compute to reap fast rewards. The size of large language models is generally measured in their number of connections, or parameters, and by this metric, these systems have increased in complexity extremely quickly. GPT-2, released in 2019, had 1.5 billion parameters, while its 2020 successor, GPT-3, had more than 100 times that — some 175 billion parameters. Earlier this year, Google announced it had trained a language model with 1.6 trillion parameters.

The difference in quality as systems get larger is notable, but it’s unclear how much longer these scaling efforts will reap rewards. Boosters think the sky’s the limit — that these systems will keep on getting smarter and smarter, and that they may even be the first step toward creating a general-purpose artificial intelligence, or AGI. But skeptics suggest that the AI field in general is starting to see diminishing returns as it scales ever upward.

A reliance on scale, though, is inextricably linked to the statistical approach that creates uncertainty in these models’ output. These systems have no centralized store of accepted “truths”; no embodied understanding of “what the world is like for humans”; and, hence, no way to distinguish fact from fiction or to exercise common sense.

Quiz them on simple trivia, like capital cities or the birthdays of US presidents, and they are right most of the time. But to these systems, truth is simply a statistical feature of their training data. They answer questions correctly because the text they’ve been fed has presented them with the correct information with enough frequency. This means that if you push them on any given topic or stray from the most obvious fields, they will lie thoughtlessly, making up quotes, dates, biographical details, and anything else you want to hear. The same probabilistic approach also means they can stumble over common sense questions. Start quizzing them with slightly fantastical queries, and they will confidently assert, for example, that a pencil is heavier than a toaster or that a blade of grass only has one eye. Such answers reveal the gulf between statistical and embodied intelligence.

To get a better understanding of these AI language models, I’ve been playing with a variety of them for the past few weeks, from tools sold to copywriters to flexible, multipurpose systems like GPT-3. The experience has been dizzying. Often, I’m amazed by the fluency, insight, and creativity of these systems. As part of a project for The Verge’s 10-year anniversary, for example, I used GPT-3 to write technopagan spells for a zine, feeding it a prompt (the first two sentences below), which it completed with a four-step ritual (of which I’m showing only the first step):

Technopaganism is a new sort of magic that blends traditional witchcraft with the power of being online all the time. What follows is a technopagan spell designed to protect your phone from spam and improve the quality of messages you receive. Step 1) Take your cell phone and go outside. Close your eyes, hold the device in front of you, and intone: “I offer you, O Spirit of Technopagan Magic, my eternal and unending electronic connection to the net. And in return I ask you for magick to combine my will with the will of the system — make me impervious to the garbage that flows through my inbox. Make all messages flow together, and I will choose among them what is right for me to see. Grant me the Finger of the Internet, and I will strike at spam, and cast out the spammers back to the dark places where they come from.”

But other times, I’m stunned by how limited these programs are. Something that’s often overlooked is just how much human curation is needed to shape their output. The text above was not the first response I received from GPT-3, and I had to go through several iterations to generate a response that was both cogent and funny. It helped, of course, that the task I’d set GPT-3 was an imaginative and open-ended one: it played into the program’s strengths (and I think GPT-3’s success in such tasks has led some users and observers to exaggerate the intelligence of these systems). Other times, though, the software produced nonsensical content even within the fanciful framing I’d given it. Another “spell” it generated in response to the same prompt was much less focused, adding fictitious social media handles, tech headlines, and non-existent URLs to the spell’s instructions:

Perform a ritual to seal the crown circle into the device. May the gods help us all, the Apple iPhone X reference for is broken! Destroy the holy symbol! —tianong xiong (@tianongxiong) September 13, 2018 To confirm the rumor: Facebook Messenger’s search bar does have a search bar hidden right next to it pic.twitter.com/DghBL5DmhU — Warl Darkh (@dkpt) September 12, 2018

You can argue that this is just creativity of a different sort, and that of course a proper technopagan spell would include URLs. But it’s also obvious the machine has gone off-piste.

Despite such weaknesses, there’s already talk of AI systems taking over writers’ jobs. Naturally, I wondered if a computer could write articles for The Verge (and not just this one). I played around with different models, inputting opening paragraphs into these systems and asking for story ideas. Here is some more from GPT-3 on large language models:

“The human brain is just a really fancy computer,” said Jeff Dean, a Google hardware engineer and AI expert, at a company event in 2016. “It’s a computer made of meat.”

To understand GPT-3, you first have to know a little about how the brain learns. The brain isn’t born knowing how to speak or understand speech. It must first acquire this ability through exposure to the outside world, which is why children tend to learn language so quickly, and why being raised in a household where a language isn’t spoken presents a major obstacle.

As neuroscientist Mark Changizi has shown, the process of learning a language is a massive project. “Language is one of the most complex things that brains can do,” he writes, “and it emerges from the brain’s more basic skills, like vision, hearing, and motor control.”

But how does the brain acquire all this knowledge? The short answer is: via autocomplete.

All these points make sense if you’re not concentrating too hard, but they don’t flow from sentence to sentence. They never follow an argument or build to a conclusion. And again, fabrication is a problem. Both Jeff Dean and Mark Changizi are real people who have been more or less correctly identified (though Dean is now head of AI at Google, and Changizi is a cognitive scientist rather than a neuroscientist). But neither man ever uttered the words that GPT-3 attributed to them, as far as I can tell. Yet despite these problems, there’s also a lot to be impressed by. For example, using “autocomplete” as a metaphor to describe AI language models is both accurate and easy to understand. I’ve done it myself! But is this because it’s simply a common metaphor that others have deployed before? Is it right, then, to say GPT-3 is “intelligent” for using this phrase, or is it just subtly plagiarizing others? (Hell, I ask the same questions about my own writing.)

Where AI language models seem best suited is in creating text that is rote, not bespoke, as with Gmail’s suggested replies. In the case of journalism, automated systems have already been integrated into newsrooms to write “fill in the blanks” stories about earthquakes, sporting events, and the like. And with the rise of large AI language models, the span of content that can be addressed in this way is expanding.

Samanyou Garg, founder of an AI writing startup named Writesonic, says his service is used mostly by e-commerce firms. “It really helps [with] product descriptions at scale,” says Garg. “Some of the companies who approach us have like 10 million products on their website, and it’s not possible for a human to write that many.” Fabian Langer, founder of a similar firm named AI Writer, tells The Verge that his tools are often used to pad out “SEO farms” — sites that exist purely to catch Google searches and that create revenue by redirecting visitors to ads or affiliates. “Mostly, it’s people in the content marketing industry who have company blogs to fill, who need to create content,” said Langer. “And to be honest, for these [SEO] farms, I do not expect that people really read it. As soon as you get the click, you can show your advertisement, and that’s good enough.”

It’s this sort of writing that AI will take over first, and which I’ve started to think of as “low-attention” text — a description that applies to both the effort needed to create and read it. Low-attention text is not writing that makes huge demands on our intelligence, but is mostly functional, conveying information quickly or simply filling space. It also constitutes a greater portion of the written world than you might think, including not only marketing blogs but work interactions and idle chit-chat. That’s why Gmail and Google Docs are incorporating AI language models’ suggestions: they’re picking low-hanging fruit.

A big question, though, is what effect these AI writing systems will have on human writing and, by extension, our culture. The more I’ve thought about the output of large language models, the more it reminds me of geofoam. This is a building material made from expanded polystyrene that is cheap to produce, easy to handle, and packed into the voids left over by construction projects. It is incredibly useful but somewhat controversial, due to its uncanny appearance as giant polystyrene blocks. To some, geofoam is an environmentally-sound material that fulfills a specific purpose. To others, it’s a horrific symbol of our exploitative relationship with the Earth. Geofoam is made by pumping oil out of the ground, refining it into cheap matter, and stuffing it back into the empty spaces progress leaves behind. Large language models work in a similar way: processing the archaeological strata of digital text into synthetic speech to fill our low-attention voids.

For those who worry that much of the internet is already “fake” — sustained by botnets, traffic farms, and automatically generated content — this will simply mark the continuation of an existing trend. But just as with geofoam, the choice to use this filler on a wide scale will have structural effects. There is ample evidence, for example, that large language models encode and amplify social biases, producing text that is racist and sexist, or that repeats harmful stereotypes. The corporations in control of these models pay lip service to these problems but don’t treat them as serious. (Google famously fired two of its AI researchers after they published a detailed paper describing these issues.) And as we offload more of the cognitive burden of writing onto machines, making our low-attention text no-attention text, it seems plausible that we, in turn, will be shaped by the output of these models. Google already uses its AI autocomplete tools to suggest gender-neutral language (replacing “chairman” with “chair,” for example), and regardless of your opinion on the politics of this sort of nudge, it’s worth discussing what the end-point of these systems might be.

In other words: what happens when AI systems trained on our writing start training us?


Despite the problems and limitations of large language models, they’re already being embraced for many tasks. Google is making language models central to its various search products; Microsoft is using them to build automated coding software, and the popularity of apps like Xiaoice and AI Dungeon suggests that the free-flowing nature of AI writing programs is no hindrance to their adoption.

Like many other AI systems, large language models have serious limitations when compared with their hype-filled presentations. And some predict this widespread gap between promise and performance means we’re heading into another period of AI disillusionment. As the roboticist Rodney Brooks put it: “just about every successful deployment [of AI] has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.” But AI writing tools can, to an extent, avoid these problems: if they make a mistake, no one gets hurt, and their collaborative nature means human curation is often baked in.

What’s interesting is considering how the particular characteristics of these tools can be used to our advantage, showing how we might interact with machine learning systems, not in a purely functional fashion but as something exploratory and collaborative. Perhaps the most interesting single use of large language models to date is a book named Pharmako-AI: a text written by artist and coder K Allado-McDowell as an extended dialogue with GPT-3.

To create Pharmako-AI, Allado-McDowell wrote and GPT-3 responded. “I would write into a text field, I would write a prompt, sometimes that would be several paragraphs, sometimes it would be very short, and then I would generate some text from the prompt,” Allado-McDowell told The Verge. “I would edit the output as it was coming out, and if I wasn’t interested in what it was saying, I would cut that part and regenerate, so I compared it to pruning a plant.”

The resulting text is esoteric and obscure, discussing everything from the roots of language itself to the concept of “hyper-dimensionality.” It is also brilliant and illuminating, showing how writing alongside machines can shape thought and expression. At different points, Allado-McDowell compares the experience of writing using GPT-3 to taking mushrooms and communing with gods. They write: “A deity that rules communication is an incorporeal linguistic power. A modern conception of such might read: a force of language from outside of materiality.” That force, Allado-McDowell suggests, might well be a useful way to think about artificial intelligence. The result of communing with it is a sort of “emergence,” they told me, an experience of “being part of a larger ecosystem than just the individual human or the machine.”

This, I think, is why AI writing is so much more exciting than many other applications of artificial intelligence: because it offers the chance for communication and collaboration. The urge to speak to something greater than ourselves is evident in how these programs are being embraced by early adopters. A number of individuals have used GPT-3 to talk to dead loved ones, for example, turning its statistical intelligence into an algorithmic ouija board. Such experiments also reveal the limitations, though. In one of these cases, OpenAI shut down a chatbot shaped to resemble a developer’s dead fiancée because the program didn’t conform to the company’s terms of service. That’s another, less promising reality of these systems: the vast majority are owned and operated by corporations with their own interests, and they will shape their programs (and, in turn, their users) as they see fit.

Despite this, I’m hopeful, or at least curious, about the future of AI writing. It will be a conversation with our machines; one that is diffuse and subtle, taking place across multiple platforms, where AI programs linger on the fringes of language. These programs will be unseen editors to news stories and blog posts, they will suggest comments in emails and documents, and they will be interlocutors that we even talk to directly. This exchange won’t only be good for us, and the deployment of these systems will surely bring problems and challenges. But it will, at least, be a dialogue.





Is free-to-play the future of fighting games?

When it comes to video game genres, fighting games tend to be a step behind the curve. The fact that it took an entire pandemic for developers to finally look into better netcode for fighters speaks volumes.

At an even more basic level, the way fighting games operate has been called “archaic” by players who feel that the genre is in need of a switch-up to make them more accessible to modern, mainstream audiences. One of the fighting game community’s main talking points at the moment is free-to-play (FTP), a popular business model that fighting games have largely shied away from, even as games like shooters find success with the practice. If the genre wants to remain competitive in today’s modern landscape, it may need to make that pivot soon — but it’s not as simple as it sounds.

The barriers of the genre

A part of the perceived “need” for the change comes from the sheer number of fighting games out there right now and on the horizon. Maximilian “Dood” Christiansen, a high-profile fighting game content creator, has studied and enjoyed the genre for decades and shares this view.

Christiansen has previously noted that older fighting games were able to stay relevant due to a lack of options in the past, which meant one game could hold a player’s attention more easily. Today, there’s a larger variety of quality titles to choose from. However, each brings its own extra costs, from the price of the game itself to DLC to subscriptions that allow players to fight online. Fighting games are expensive enough as is, so your average player isn’t likely to invest in several games simply out of curiosity.

“Most fighting games are gonna result in the same problem until there’s big sales, expansions, or a reason to check out an old thing, because a new thing is really cool,” Christiansen states in a video titled “Please make fighting games free to play.” “But there’s always the barrier, right? You have to pull out your wallet and spend money. If you didn’t have to do that and shit was just free and you can just fire it up and try it, then of course you’re going to find people. That game will always have people playing.”

That paralysis of choice leads to another major problem that Christiansen also brings up: massive skill gaps. If you’ve ever gone online with older fighting games as a new player, there’s a 90% chance you’ll run into a veteran who can destroy you as they make a sandwich, brush their teeth, and update their LinkedIn account at the same time. For many, it can make games feel “unfair” for new players, creating a barrier to entry.

That’s not to say this issue wouldn’t be a thing with free-to-play fighters as well. However, the nature of this “try before you buy” model would lead to a broadly diverse and potentially ever-growing pool of players at different skill levels. This means more beginners can match up with other beginners instead of being pitted against veterans of the game.

The future of fighters?

A free-to-play model is a proposed solution to those hurdles. Players like Christiansen see going free-to-play as a way to not only keep the initial investing audience around, but to welcome in new players too. That could theoretically reduce the dramatic difference in skill level that new players experience.

Fighting games are expensive compared to a lot of modern multiplayer games (many of which are free-to-play in some way). There’s a chance a new player won’t enjoy the game, but not find out until spending hours learning. One might have the only character they enjoy locked behind DLC, meaning not just paying for the game, but additional content as well. All these barriers are pushed to the sidelines with the FTP model.

While not the most popular with the fighting game community, there are titles that have gone this route. Killer Instinct and the Smash Bros-esque platform fighter Brawlhalla embraced an FTP model. I personally have invested tons of hours into Killer Instinct and found that I love how it handles its model.

When I booted up Killer Instinct for the first time in years, I knew there were only a few characters I was interested in. After researching them, I bought those characters and quickly found my main at a much lower price than purchasing a full fighting game and its DLC. Killer Instinct even allows players to access non-purchased characters as training dummies, which is an ingenious move and something I wish all fighting games would do. It means I don’t have to waste money buying characters I don’t want to use, but need to practice against if I want to stand a chance online.

While it seems similar to standard fighting game DLC practices, it stands out as more honest by allowing curious players to test out two characters for free to learn and find out if they like the game’s engine and mechanics. There’s also a third free-to-use character that rotates throughout the roster on a weekly basis. Implementation such as this gives players the ability to try before they buy. Instead of purchasing an entire package, they can buy the pieces they want. It’s a smart formula for a genre where many only want to play as three or four different characters.

Good implementation of FTP in fighting games can be seen in Brawlhalla, which still boasts 10,000 players online daily, according to Steamcharts. With the upcoming Warner Bros. crossover fighter Multiversus being an FTP fighter with mainstream characters, it may surpass those numbers and set new records for the genre.

Free-to-play, but not flawless

It all sounds like a slam dunk in theory, but the reality isn’t so simple. Traditional fighting games like Killer Instinct, Fantasy Strike, and Dead or Alive 6 that have gone FTP still haven’t been able to hold on to a consistent player base despite the model. Brawlhalla has found more success, but it features a more casual playstyle akin to Super Smash Bros. rather than something “hardcore” like Street Fighter.

Free-to-play could introduce new issues for the genre too. Professional fighting game analyst, commentator, and former competitor Sajam talks about how games like Fortnite seemingly drop a never-ending well of skins, wraps, effects, emotes, and more to keep players opening their wallets and spending in the item shop. While not the case for every fighter, certain fighting game studios just don’t generate the revenue needed to keep up with such a model. The FTP model has also been poisoned by games that tried, and failed, to make the model work.

Wasn’t Tekken Revolution Free to Play In a screwed up way? We had stats that we can improve which changed the gameplay. We also had randomly occurring critical hits that we had no control over

— Rauschka (@Rauschka_tk) April 5, 2022

Games like Tekken Revolution and Dead or Alive 6: Core Fighters left sour tastes in the mouths of community members when they exposed how scummy the model can feel. Tekken Revolution featured pay-to-win stat boosts and only allowed players to fight in five matches at a time unless they bought a ticket to continue. A general lack of content and support for DoA:6 led to its quick death.

There’s also the fact that fighting games tend to be harder than other competitive games out there. They aren’t made to be easily digestible to a casual audience like Smash Bros. and its clones like Brawlhalla and Multiversus. Just because a game is free to pick up doesn’t mean many players will want to learn how to do a quarter-circle-forward to half-circle-back motion. The genre just isn’t as approachable as, say, a shooter — and that’s another hurdle many fans cite when writing off FTP.

While there are issues that can come with going free-to-play, it’s still an approach worth exploring. The positives outweigh the negatives and many community members are on board for a game like Street Fighter 6 taking a Fortnite-like approach. Will developers be willing to upend a classic genre so easily? If the long-archaic past and present of fighting games is anything to go by, probably not. But if a major release ever did, it’s possible that others could follow suit.

Many are looking to Riot’s upcoming fighting game, Project L, as the litmus test of the model in the genre thanks to the company’s past with FTP. With its reach, knowledge, and possible support from the fighting game community, it could easily be the game to revolutionize fighters going forward. However, there are a lot of “ifs” in that argument, so we’ll have to see if it can win over the skeptics.



Why AI is the future of fraud detection



The accelerated growth in ecommerce and online marketplaces has led to a surge in fraudulent behavior online, perpetrated by bots and bad actors alike. A strategic and effective approach to online fraud detection is needed to tackle increasingly sophisticated threats to online retailers.

These market shifts come at a time of significant regulatory change. Across the globe, new legislation is coming into force that alters the balance of responsibility in fraud prevention between users, brands, and the platforms that promote them digitally. For example, the EU Digital Services Act and US Shop Safe Act will require online platforms to take greater responsibility for the content on their websites, a responsibility that was traditionally the domain of brands and users to monitor and report.

Can AI find what’s hiding in your data?

In the search for security vulnerabilities, behavioral analytics software provider Pasabi has seen a sharp rise in interest in its AI analytics platform for online fraud detection, with a number of key wins including the online reviews platform Trustpilot. Pasabi maintains its AI models based on anonymised sets of data collected from multiple sources.

Using bespoke models and algorithms, as well as some open source and commercial technology such as TensorFlow and Neo4j, Pasabi’s platform is proving itself to be advantageous in the detection of patterns in both text and visual data. Customers provide their data to Pasabi for analysis to identify a range of illegal activities (illegal content, scams, and counterfeits, for example), upon which the customer can then act.

Chris Downie, Pasabi’s CEO, says: “Pasabi’s technology uses AI-driven, behavioral analytics to identify bad actors across a range of online infringements including counterfeit products, grey market goods, fake reviews, and illegal content. By looking for common behavioral patterns across our customers’ data and cross-referencing this with external data that we collect about the reputation of the sources (individuals and companies), the software is perfectly positioned to help online platforms, marketplaces, and brands tackle these threats.”

The proof is in the data

Pasabi shared with VentureBeat that its platform is built entirely in-house, with some external services, such as translation, used to enhance its data. Pasabi’s combination of customer (behavioral) and external (reputational) data is what the company says allows it to highlight the biggest threats to its customers.

In a Q&A, Pasabi told VentureBeat that its platform performs analysis on hundreds of data points, which are provided by customers and then combined with Pasabi’s own data collected from external sources. Offenders are then identified at scale, revealing patterns of behavior in the data and potentially uncovering networks working together to mislead consumers.

Anoop Joshi, senior director of legal at Trustpilot said, “Pasabi’s technology finds connections between individuals and businesses, highlighting suspicious behavior and content. For example, in the case of Trustpilot, this can help to detect when individuals are working together to write and sell fake reviews. The technology highlights the most prolific offenders, and enables us to use our investigation and enforcement resources more efficiently and effectively to maintain the integrity of the platform.”
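To illustrate the kind of link analysis being described here (a schematic sketch, not Pasabi’s or Trustpilot’s actual pipeline; the accounts, signals, and threshold below are invented), a graph connecting accounts that share behavioral signals can surface clusters worth investigating:

```python
# Schematic fraud-cluster sketch using networkx. All data below is made up.
import networkx as nx

# (account, shared signal) observations, e.g., a device fingerprint or IP range
observations = [
    ("acct_1", "device_A"), ("acct_2", "device_A"), ("acct_3", "device_A"),
    ("acct_2", "ip_17"),    ("acct_4", "ip_17"),
    ("acct_5", "device_B"),
]

graph = nx.Graph()
for account, signal in observations:
    graph.add_edge(account, signal)

# Connected components spanning several accounts suggest coordinated activity
# worth routing to human investigators.
for component in nx.connected_components(graph):
    accounts = sorted(node for node in component if node.startswith("acct_"))
    if len(accounts) >= 3:
        print("Possible coordinated cluster:", accounts)
```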

Relevant data is held on Google Cloud services, using logical tenant separation and VPCs. It is encrypted in transit and at rest, and stored only for as long as strictly necessary and solely for the purpose of identifying suspicious behavior.




Future Intel Laptops Could Mandate 8-Megapixel Webcams

Tired of a miserly low-resolution webcam on your Intel-powered laptop? That could soon be a thing of the past, if leaked specifications for Intel’s Evo 4.0 platform are anything to go by, as much better picture quality is apparently in the offing.

According to NotebookCheck, the fourth generation of Intel’s Evo platform — which could be introduced with the upcoming Raptor Lake series of processors pegged for the third quarter of 2022 — will mandate 8-megapixel cameras on all laptops running this spec. In other words, if laptop manufacturers want to work with Intel to be Evo-accredited, they will need to up their webcam game.


High resolution isn’t the only thing that could become a requirement. NotebookCheck claims other specs are likely to be part of the Evo 4.0 specification, including an 80-degree field of view, plus a passing grade on the VCX benchmark.

What is VCX, you ask? Well, Intel is now part of the VCX forum (short for Valued Camera eXperience), which scores laptop webcams based on certain benchmarks. These include texture loss, motion control, sharpness, dynamic range, the camera’s performance under various lighting conditions, and more. At the end, a final score is given. And it now seems that Intel will be expecting manufacturers’ webcams to hit a minimum score (as yet unknown) in order to pass muster.

Interestingly, NotebookCheck’s report says that any webcams placed below the user’s eye line will be awarded negative points in the VCX test. Someone better tell the Huawei MateBook X Pro.

With Intel’s Raptor Lake series set for later in 2022, could we see some of these webcam improvements in this year’s Alder Lake-based laptops? That’s certainly possible. Intel will allegedly have VCX benchmark scores ready by the first quarter of 2022, so we might see a few devices appear that meet these standards before Raptor Lake steps into the limelight. Just don’t bet the farm on it.

Alongside Intel, Microsoft has also reportedly begun enforcing minimum standards for its partner devices. Like Intel, the company wants manufacturers to hit certain specs for webcams, microphones, and speakers. With two giants of the industry pushing manufacturers to up their game, we could finally be able to bid flimsy webcams and crackly mics adieu.



Respawn pulls Titanfall from sale with vague promises for the future

Respawn Entertainment today announced that it has delisted Titanfall, effectively removing it from sale. Titanfall was the first game Respawn Entertainment made as a new studio back in 2014, and now it seems the title is being retired. It isn’t all bad news, though, as those who own the game will still be able to play it even after it disappears from storefronts.


Respawn pulls the game that started it all

Respawn announced its decision to pull Titanfall in a statement published to Twitter today. “We’ve made the decision to discontinue new sales of the original Titanfall game starting today and we’ll be removing the game from subscription services on March 1, 2022,” Respawn said. “We will, however, be keeping servers live for the dedicated fanbase still playing and those who own the game and are looking to drop into a match.”

Even though Titanfall has disappeared from storefronts and will vanish from subscription services next year, the servers are going to stay live so those who already own the title can continue playing it. While we’re sure most people who wanted to play Titanfall have already purchased the game, those who missed the chance to buy the digital version can always pick up used disc copies.

It’s a little bit strange to see Respawn pull Titanfall from storefronts while keeping the servers up and running. Still, if the game has a decent number of players routinely dropping into multiplayer, Respawn probably didn’t want to risk losing consumer goodwill by turning the servers off.

What’s next for the Titanfall series?

After announcing that Titanfall will be delisted, Respawn went on to assure fans that the game won’t be forgotten. “Rest assured, Titanfall is core to Respawn’s DNA and this incredible universe will continue,” the studio said. “Today in Titanfall 2 and Apex Legends, and in the future.”

That part is particularly interesting because it suggests there’s more Titanfall to come. Though the original Titanfall and its sequel have garnered a sizable fanbase, these days, Respawn’s attention is on Apex Legends – a free-to-play battle royale title set in the Titanfall universe that is fairly distinct from a gameplay perspective.

The success of Apex Legends (and the rise of the battle royale genre as an alternative to traditional FPS games) has prompted some Titanfall fans to assume the series is largely over and that we won’t see another Titanfall game in the future. Respawn’s statement today possibly suggests otherwise, though it’s too vague to say for sure.

Still, it’s always possible that Respawn is removing Titanfall from sale because it has something new with the franchise in the works. Until we get confirmation of any such plans, it’s probably safe to assume the company is simply removing the game to focus on Apex Legends. We’ll let you know if Respawn announces anything major in the future, so stay tuned for more.





Meta Envisions Haptic Gloves As the Future Of the Metaverse

The metaverse seems to be coming, as is the futuristic hardware that will increase immersion in virtual worlds. Meta, the company formerly known as Facebook, has shared how its efforts to usher in that new reality are focusing on how people will actually feel sensations in a virtual world.

The engineers at Meta have developed a number of early prototypes that tackle this goal and they include both haptic suits and gloves that could enable real-time sensations.

Meta’s Reality Labs was tasked with developing, and in many cases inventing, new technologies that would enable greater human-computer interaction. The company started by laying out a vision earlier this year about the future of augmented reality (AR) and virtual reality (VR) and how to best interact with virtual objects. This kind of research is crucial if we’re moving toward a future where a good chunk of our day is spent inside virtual 3D worlds.

Sean Keller, Reality Labs research director, said that they want to build something that feels just as natural in the AR/VR world as it does in the real world. The problem, he admits, is the technology isn’t yet advanced enough to feel natural and this experience probably won’t arrive for another 10 to 15 years.

According to Keller, we’d ideally use haptic gloves that are soft, lightweight, and able to accurately reproduce the correct pressure, texture, and vibration that corresponds with a virtual object. That requires hundreds of tiny actuators that can simulate physical sensations. Currently, the existing mechanical actuators are too bulky, expensive, and hot to realistically work well. Keller says that it requires softer, more pliable materials.

To solve this problem, the Reality Labs teams turned to research into prosthetic limbs, namely soft robotics and microfluidics. The researchers were able to create the world’s first high-speed microfluidic processor, which is able to control the air flow that moves tiny, soft actuators. The chip tells the valves in the actuators when to move and how far.


The research team was able to create prototype gloves, but the process requires them to be “made individually by skilled engineers and technicians who manufacture the subsystems and assemble the gloves largely by hand.” In order to build haptic gloves at scale for billions of people, new manufacturing processes would have to be invented. Not only do the gloves have to house all of the electronics and sensors, they also have to be slim, lightweight, and comfortable to wear for extended periods of time.

The Reality Labs materials group experimented with various polymers to turn them into fine fibers that could be woven into the gloves. To make it even more efficient, the team is trying to build multiple functions into the fibers including capacitance, conductivity, and sensing.

There have been other attempts at creating realistic haptic feedback. Researchers at the University of Chicago have been experimenting with “chemical haptics.” This involves using various chemicals to simulate different sensations. For example, capsaicin can be used to simulate heat or warmth while menthol does the opposite by simulating coolness.

Meta’s research into microfluidic processors and tiny sensors woven into gloves may be a bit more realistic than chemicals applied to the skin. It will definitely be interesting to see where Reality Labs takes its research as we move closer to the metaverse.



Deep tech, no-code tools will help future artists make better visual content

This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital.

Despite the hype, the “creator economy” is not new. It has existed for generations, primarily dealing with physical goods (pottery, jewelry, paintings, books, photos, videos, etc). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation where everyone and their mother are now creating, sharing, and participating online.

The vast majority of the content that is created and consumed on the internet is visual content. In our recent Insights report at LDV Capital, we found that by 2027, there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that will automate various aspects of content creation and remove the technical skill from digital creation. This article discusses the findings from our recent insights report.

Above: Group of superheroes on a dark background. Image Credit: ©LDV CAPITAL INSIGHTS 2021

We now live as much online as we do in person and as such, we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, video games, or anything else that is viewed on our screens, it is visual content.

Currently, it takes time, often years, of prior training to produce a single piece of quality and contextually-relevant visual content. Typically, it has also required deep technical expertise in order to produce content at the speed and quantities required today. But new platforms and tools powered by visual technologies are changing the paradigm.

Computer vision will aid livestreaming

Livestreaming is video that is recorded and broadcast in real time over the internet, and it is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreaming content daily, making it one of the most popular forms of online content.

Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory.

The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will aid livestream automation. They will enable streamers’ feeds to be analyzed in real time to add production elements that improve quality and cut back on the time and technical skills required of streamers today.

Synthetic visual content will be ubiquitous

A lot of the visual content we view today is already computer-generated imagery (CGI), visual effects (VFX), or altered by software (e.g., Photoshop). Whether it’s the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.

By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project synthetic visual content will be nearly 95% of the content we view. Synthetic media uses generative adversarial networks (GANs) to write text, make photos, create game scenarios, and more using simple prompts from humans such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.

Above: Left, a remedial drawing; right, a landscape image built by NVIDIA’s GauGAN from the drawing. Image Credit: ©LDV CAPITAL INSIGHTS 2021

In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, find locations and do a full photo or video shoot. Moreover, it will enable video to be programmable – as simple as making a slide deck.

Synthetic media that leverages GANs can also personalize content nearly instantly and, therefore, enable any video to speak directly to the viewer using their name, or write a video game in real-time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media.

Artificial intelligence will deliver motion capture to the masses

Animated video requires expertise, as well as more time and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). These formats will be an increasingly essential part of the content strategy for brands and businesses, deployed across image, video, and livestream channels as a mechanism for diversifying content.

Above: Graph displaying the motion capture landscape. Image Credit: ©LDV CAPITAL INSIGHTS 2021

The greatest hurdle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates 4 seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to record a physical pattern of an individual’s movements digitally for the purpose of animating them. An example would be recording Steph Curry’s jump shot for NBA 2K.

Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap – with little to no suits, sensors, or hardware. Facial motion capture has already come a long way, as evidenced in some of the incredible photo and video filters out there. As capabilities advance to full body capture, it will make MoCap easier, faster, budget-friendly, and more widely accessible for animated visual content creation for video production, virtual character live streaming, gaming, and more.
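As a sense of how accessible camera-based capture has become, here is a small sketch using Google’s MediaPipe Pose, one off-the-shelf option in this space. The image path is a placeholder, and production MoCap pipelines involve far more than a single frame of landmarks.

```python
# Camera-only pose estimation sketch with MediaPipe Pose and OpenCV.
# "jump_shot_frame.jpg" is a placeholder path.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

image = cv2.imread("jump_shot_frame.jpg")
with mp_pose.Pose(static_image_mode=True) as pose:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    # 33 body landmarks, each with normalized x, y and a relative depth z.
    for idx, landmark in enumerate(results.pose_landmarks.landmark):
        print(idx, round(landmark.x, 3), round(landmark.y, 3), round(landmark.z, 3))
```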

Nearly all content will be gamified

Gaming is a massive industry set to hit nearly $236 billion globally by 2027. That will expand and grow as more and more content introduces gamification to encourage interactivity with the content. Gamification is applying typical elements of game playing such as point scoring, interactivity, and competition to encourage engagement.

Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. Growth in the number of players, their diversity, and the hours spent playing online games will drive high demand for unique content.

AI and cloud infrastructure capabilities play a major role in aiding game developers to build tons of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a Service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces.

People will interact with many digital beings

We will have digital identities to produce, consume, and interact with content. In our physical lives, people have many aspects of their personality and represent themselves differently in different circumstances: the boardroom vs the bar, in groups vs alone, etc. Online, the old school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least 3 digital versions of themselves both photorealistic and fantastical to participate online.

Above: Five examples of digital identities. Image Credit: ©LDV CAPITAL INSIGHTS 2021

Digital identities (or avatars) require visual tech. Some will enable public anonymity of the individual, some will be pseudonyms and others will be directly tied to physical identity. A growing number of them will be powered by AI.

These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more.

Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article.

Machine learning will curate, authenticate, and moderate content

For creators to continuously produce the volumes of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content from long-form to short-form, from videos to blogs, or vice versa, social posts, and more. These systems will self-select content and format based on the performance of past publications using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.

In order to then filter through the massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through and present to us content personalized to our interests and aspirations. Eventually, we’ll see personalized synthetic video content replacing text-heavy newsletters, media, and emails.

Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to the creator both for rights management and management of deep fakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.

Detecting disturbing and dangerous content is also deeply important, and it is increasingly hard to do given the vast quantities of content published. AI and computer vision algorithms are necessary to automate this process by detecting hate speech, graphic pornography, and violent attacks, because doing it manually in real time is too difficult and not cost-effective. Multi-modal moderation that includes image recognition, as well as voice, text recognition, and more, will be required.
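The multi-modal idea can be sketched schematically: score each modality with its own model, then combine the scores into a routing decision. Everything below (the classifiers, their scores, and the threshold) is a placeholder standing in for real models.

```python
# Schematic multi-modal moderation. The per-modality "classifiers" are stubs.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    image_bytes: bytes
    audio_bytes: bytes

def score_text(text: str) -> float:      # stub for a hate-speech classifier
    return 0.1

def score_image(data: bytes) -> float:   # stub for an image-recognition model
    return 0.7

def score_audio(data: bytes) -> float:   # stub for a speech/audio model
    return 0.2

def needs_review(post: Post, threshold: float = 0.6) -> bool:
    """Flag the post for human review if any modality crosses the threshold."""
    scores = [
        score_text(post.text),
        score_image(post.image_bytes),
        score_audio(post.audio_bytes),
    ]
    return max(scores) >= threshold

print(needs_review(Post("hello", b"...", b"...")))   # True, driven by the image score
```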

Visual content tools are the greatest opportunity in the creator economy

The next five years will see individual creators who leverage visual tech tools to create visual content rival professional production teams in the quality and quantity of the content they produce. The greatest business opportunities today in the Creator Economy are the visual tech platforms and tools that will enable those creators to focus on the content and not on the technical creation.

Abigail Hunter-Syed is a Partner at LDV Capital investing in people building businesses powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has more than a ten-year track record of leading strategy, ops, and investments in companies across four continents and rarely says no to soft-serve ice cream.




Seeing into our future with Nvidia’s Earth-2



This article was contributed by Jensen Huang, Founder and CEO, NVIDIA

The earth is warming. The past seven years are on track to be the seven warmest on record. The emissions of greenhouse gases from human activities are responsible for approximately 1.1°C of average warming since the period 1850-1900.

What we’re experiencing is very different from the global average. We experience extreme weather — historic droughts, unprecedented heatwaves, intense hurricanes, violent storms, and catastrophic floods. Climate disasters are the new norm.

We need to confront climate change now. Yet, we won’t feel the impact of our efforts for decades. It’s hard to mobilize action for something so far in the future. But we must know our future today — see it and feel it — so we can act with urgency.

To make our future a reality today, simulation is the answer.

To develop the best strategies for mitigation and adaptation, we need climate models that can predict the climate in different regions of the globe over decades.

Unlike predicting the weather, which primarily models atmospheric physics, climate models are multidecade simulations that model the physics, chemistry, and biology of the atmosphere, waters, ice, land, and human activities.

Climate simulations are configured today at 10- to 100-kilometer resolutions.

But greater resolution is needed to model changes in the global water cycle — water movement from the ocean, sea ice, land surface, and groundwater through the atmosphere and clouds. Changes in this system lead to intensifying storms and droughts.

Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what’s currently available. It would take decades to achieve that through the ordinary course of computing advances, which accelerate 10x every five years.
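
A back-of-the-envelope check of that "decades" claim, assuming, as stated above, that computing advances deliver a 10x speedup every five years:

```python
# Years of ordinary computing advances needed to reach a target speedup,
# assuming 10x growth every five years (the figure quoted above).
import math

def years_to_speedup(target: float, growth: float = 10.0, period_years: float = 5.0) -> float:
    return period_years * math.log(target, growth)

print(years_to_speedup(1e6))  # ~30 years for a million-x speedup
print(years_to_speedup(1e9))  # ~45 years for a billion-x speedup
```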

For the first time, we have the technology to do ultra-high-resolution climate modeling, to jump to lightspeed, and predict changes in regional extreme weather decades out.

We can achieve million-x speedups by combining three technologies: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers, along with vast quantities of observed and model data to learn from.
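
Of those three, physics-informed neural networks are the least familiar, so here is a minimal sketch of the core idea in PyTorch: alongside fitting observed data, the network is penalized for violating a governing equation. The 1D heat equation stands in for the real physics; this is an illustration, not NVIDIA's actual climate models.

```python
# Minimal physics-informed neural network (PINN) residual, using the 1D heat
# equation u_t = alpha * u_xx as a stand-in PDE. Illustrative sketch only.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(x: torch.Tensor, t: torch.Tensor, alpha: float = 0.01) -> torch.Tensor:
    """PDE residual at sample points; training drives it toward zero."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx

# The physics loss, (pde_residual(x, t) ** 2).mean(), is added to the usual
# data-fit loss so the network learns from both observations and the equations.
```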

And with super-resolution techniques, we may have within our grasp the billion-x leap needed to do ultra-high-resolution climate modeling. Countries, cities, and towns can get early warnings to adapt and make infrastructures more resilient. And with more accurate predictions, people and nations will act with more urgency.

So we will dedicate ourselves and our significant resources to this effort, directing NVIDIA's scale and expertise in computational sciences to join with the world's climate science community.

NVIDIA this week revealed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Named Earth-2, or E-2, the system would create a digital twin of Earth in Omniverse.

The system would be the climate change counterpart to Cambridge-1, the world’s most powerful AI supercomputer for healthcare research. We unveiled Cambridge-1 earlier this year in the U.K. and it’s being used by a number of leading healthcare companies.

All the technologies we’ve invented up to this moment are needed to make Earth-2 possible. I can’t imagine a greater or more important use.

[Note: A version of this story originally ran on the NVIDIA blog.]

Jensen Huang founded NVIDIA in 1993 and has served since its inception as president, chief executive officer, and a member of the board of directors. In 2017, he was named Fortune’s Businessperson of the Year. In 2019, Harvard Business Review ranked him No. 1 on its list of the world’s 100 best-performing CEOs over the lifetime of their tenure. 

Categories
AI

Teleoperation and the future of safe driving

This post was written by Amit Rosenzweig, CEO of Ottopia.

Teleoperation: the technology that enables a human to remotely monitor, assist and even drive an autonomous vehicle.

Teleoperation is a seemingly simple capability, yet it involves numerous technologies and systems in order to be implemented safely. In the first article of this series, we established what teleoperation is and why it is critical for the future of autonomous vehicles (AVs). In the second article, we showed the legislative traction this technology has gained. In the third and fourth articles, we explained two of the many technical challenges that must be overcome to enable remote vehicle assistance and operation. In this article, we will explore how this is all achieved in the safest possible way.

More than a decade ago, the major AV companies made a promise. They claimed that autonomous vehicles would by now be completely self-sufficient and that human driving would be obsolete. As the years pass, we continue to see how elusive this goal is, and that a human will always need to be kept in the loop. The initial response to this was remote driving.

Remote Driving? Major danger

Teleoperation was originally a system that overrides the autonomy of a vehicle and allows a human to manually drive it remotely. Essentially it would replace all self-driving functions and safety systems with a remote driver. This would appear to make a degree of sense. Currently, the solution for unknown situations, aka edge cases, is to put a “safety driver” in the driver’s seat. This way, when the autonomy does not know what to do and gets stuck, the human can manually solve the problem by driving the car for just a few seconds. By enabling the human driver to be in a remote location, they can monitor and solve problems for multiple vehicles, thereby cutting down on driver costs.

Chances are that when people first envisioned remote driving, they assumed we would have perfect, fully immersive virtual reality with zero latency, as seen in a sci-fi movie like Black Panther. Unfortunately, remote driving has critical shortcomings. As it is, from the instant a driver recognizes an obstacle in the road until their foot hits the brake pedal (the brake reaction time), about 0.7 seconds elapse. This means that at a speed of only 30 mph, which translates to 44 feet per second, the vehicle covers more than 30 feet before braking even begins. And that is with the driver IN the vehicle, traveling at ONLY 30 mph, and assuming the car then stops on the spot.

[Figure 1: "Obstacles" can appear in almost every environment. Image credit: Ottopia]

For a remote driver, one must also factor in at least a few fractions of a second of network latency, plus the lack of haptic feedback. In other words, the effective reaction time is at least 0.8 seconds, meaning at least 35 feet are covered at 30 mph before the vehicle even begins to brake. And this still does not account for braking distance. Maybe this is why, in a different sci-fi movie, Guardians of the Galaxy 2, one can see how remote pilots are inferior to those onboard the ship.
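
To make those numbers concrete, here is a quick sketch of the arithmetic, using the reaction time and speed quoted above; the 0.1-second latency figure is an assumption for illustration.

```python
# Distance covered between spotting an obstacle and the brakes engaging,
# for an in-car driver versus a remote driver with added network latency.
MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.467 feet per second

def distance_before_braking(speed_mph: float, reaction_s: float = 0.7,
                            latency_s: float = 0.0) -> float:
    """Feet traveled during reaction time plus any teleoperation latency."""
    return speed_mph * MPH_TO_FPS * (reaction_s + latency_s)

print(distance_before_braking(30))                 # ~30.8 ft, driver in the car
print(distance_before_braking(30, latency_s=0.1))  # ~35.2 ft, remote driver
```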

Clearly, humans cannot be allowed to drive a vehicle from a remote location. At least not on their own.

Advanced Teleoperator Assistance System (ATAS®): the first transformation for teleoperation

Yes, originally the teleoperation system would shut off the autonomy stack and enable a person to drive the vehicle, but why? Why would you shut off this incredible piece of technology that already knows how to sense, react and respond in ways a person will never be able to do? This is why the second stage of teleoperation involved systems like ATAS® (an Ottopia registered trademark).

Like the more familiar ADAS (Advanced Driver Assistance System), the purpose of ATAS® is to work with the (remote) driver while leveraging the existing safety functions enabled by the vehicle’s autonomous capabilities. The main directive of an ATAS® is to prevent collisions. There are two main ways to do this, both made possible by the autonomy stack.

The first is collision warning. At every moment, the AV’s LiDAR, perception, and computation capabilities are identifying each and every object in its field of view. As the vehicle progresses, the system tracks its own speed and trajectory along with anything that may pose a safety hazard. A layer on the teleoperator’s display shows the vehicle’s heading and alerts the operator whenever there is reason to slow down, stop, or steer around a particular obstacle. This system helps compensate for the reactive shortcomings of a human driver while still letting them make the important decisions about how to get where they need to go.
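
A minimal sketch of what such a warning check might look like, assuming the autonomy stack already reports each object's distance along the planned path and its closing speed; this is an illustration, not Ottopia's ATAS® implementation.

```python
# Minimal remote collision warning: flag any object whose time-to-collision
# falls below a threshold. Object data is assumed to come from the AV's
# perception stack; the field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str
    distance_m: float         # distance ahead along the vehicle's path
    closing_speed_mps: float  # positive when the gap is shrinking

def collision_warnings(objects: list[TrackedObject],
                       ttc_threshold_s: float = 3.0) -> list[str]:
    """Return the labels of objects the teleoperator should be warned about."""
    warnings = []
    for obj in objects:
        if obj.closing_speed_mps <= 0:
            continue  # gap is stable or growing; no warning needed
        if obj.distance_m / obj.closing_speed_mps < ttc_threshold_s:
            warnings.append(obj.label)
    return warnings
```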

[Figure 2: Remote collision warning in action. Image credit: Ottopia]

The second is collision avoidance. The ultimate safety decision-making power does not and cannot lie with the human driver. Yes, the human is subject to what the autonomy decides is safest! This may seem backwards until you remember that the vehicle is in the moment. It has instant perception abilities. It sees the oncoming crash before any human ever could. Furthermore, even if the human driver could see the potential risk, it is possible they are distracted or blinded or otherwise incapable of recognizing the impending danger. That is why, only with regard to braking in safety situations, the vehicle and its corresponding autonomy system must make the decision to stop the vehicle and prevent a disaster.

Clearly, a remote driver must have a system like ATAS® in order to ensure the safety of those in an AV and those around it. However, there remains serious room for improvement.

Tele-assistance. The final form?

Tele-assistance, also known as remote vehicle assistance (RVA), high-level commands, or indirect control, is when the operator gives the AV an order without directly deciding how it completes that task. Tele-assistance reduces many of the risks involved in remote driving, even remote driving with ATAS®. It is also dramatically more efficient in terms of how many operators are needed.

This is how tele-assistance works: as in traditional teleoperation, an AV driving along encounters an event it does not know how to handle. It pulls over to the safest possible spot, stops, and triggers an alert for human intervention. The operator links in, observes the situation, and decides how best to remedy the problem. But instead of putting their hands on a steering wheel and feet on pedals, the operator chooses from a menu of commands to give the vehicle and guide it out of its predicament.

Examples of such commands include path choosing, where the operator selects one of a few offered choices for an optimal path forward; path drawing, where the operator draws a custom path for the AV to follow; and object override, where the operator confirms that a seeming obstacle (e.g., a small cardboard box in the middle of the lane) is not actually a problem and the vehicle can simply continue on its way.
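
Sketched as data, that command menu might look like the following; the message names and fields are hypothetical, not Ottopia's actual protocol.

```python
# Hypothetical tele-assistance command menu, modeled as plain message types
# an operator console could send to the vehicle.
from dataclasses import dataclass
from typing import Union

@dataclass
class ChoosePath:
    option_id: int                         # pick one of the paths the AV proposed

@dataclass
class DrawPath:
    waypoints: list[tuple[float, float]]   # custom path as (x, y) points

@dataclass
class OverrideObject:
    object_id: str                         # treat this detected "obstacle" as drivable

TeleAssistCommand = Union[ChoosePath, DrawPath, OverrideObject]
```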

[Figure 3: Tele-assistance in action. Image credit: Ottopia]

Traditional teleoperation created more problems than it solved. It is hubristic to claim that a human can remote-drive a normal-sized automobile or truck without any assistance or dedicated safety technology. Humans are still required to handle the situations autonomy cannot, but the solution is ideally tele-assistance and, at the very least, remote driving with a safety system like ATAS®.

When tele-assistance is coupled with maximized network connectivity and dynamic video compression, as described in the previous two articles, autonomous vehicles can be commercially deployed in the safest and most efficient manner.

Amit Rosenzweig is the CEO & Founder of Ottopia.

Categories
AI

Google’s future in enterprise hinges on strategic cybersecurity

Gaps in Google’s cybersecurity strategy make banks, financial institutions, and larger enterprises slow to adopt the Google Cloud Platform (GCP), with deals often going to Microsoft Azure and Amazon Web Services instead.

It also doesn’t help that GCP has long had the reputation of being more aligned with developers and their needs than with enterprise and commercial projects. But Google now has a timely opportunity to open its customer aperture with new security offerings designed to fill many of those gaps.

During last week’s Google Cloud Next virtual conference, Google executives leading the security business units announced an ambitious new series of cybersecurity initiatives precisely for this purpose. The most noteworthy announcements are the formation of the Google Cybersecurity Action Team, new zero-trust solutions for Google Workspace, and extending Work Safer with CrowdStrike and Palo Alto Networks partnerships.

The most valuable new announcements for enterprises are on the BeyondCorp Enterprise platform, however. BeyondCorp Enterprise is Google’s zero-trust platform that allows virtual workforces to access applications in the cloud or on-premises and work from anywhere without a traditional remote-access VPN. Google’s announced Work Safer initiative combines BeyondCorp Enterprise for zero-trust security and their Workspace collaboration platform.

Workspace now has 4.8 billion installations of 5,300 public applications across more than 3 billion users, making it an ideal platform to build and scale cybersecurity partnerships. Workspace also reflects the growing problem chief information security officers (CISOs) and CIOs have with protecting the exponentially increasing number of endpoints that dominate their virtual-first IT infrastructures.

Bringing order to cybersecurity chaos

With the latest series of cybersecurity strategies and product announcements, Google is attempting to sell CISOs on the idea of trusting Google for their complete security and public cloud tech stack. Unfortunately, that pitch doesn’t reflect the reality at many enterprises, where CISOs have lifted and shifted large numbers of legacy systems to the cloud.

Missing from the many announcements were new approaches to dealing with just how chaotic, lethal, and uncontrolled breaches and ransomware attacks have become. But Google’s announcement of Work Safer, a program that combines Workspace with Google cybersecurity services and new integrations to CrowdStrike and Palo Alto Networks, is a step in the right direction.

The Google Cybersecurity Action Team claimed in a media advisory it will be “the world’s premier security advisory team with the singular mission of supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses.”  But let’s get real: This is a professional services organization designed to drive high-margin engagement in enterprise accounts. Unfortunately, small and mid-tier enterprises won’t be able to afford engagements with the Cybersecurity Action Team, which means they’ll have to rely on system integrators or their own IT staff.

Why every cloud needs to be a trusted cloud

CISOs and CIOs tell VentureBeat that it’s a cloud-native world now, and that includes closing the security gaps in hybrid cloud configurations. Most enterprise tech stacks grew through mergers, acquisitions, and a decade or more of cybersecurity tech-buying decisions, and in many cases they are held together with custom integration code written and maintained by outside system integrators. New digital-first revenue streams are generated by applications running on these tech stacks, which adds to their complexity. In reality, every cloud now needs to be a trusted cloud.

Google’s series of announcements relating to integration, security monitoring, and operations are needed, but they are not enough. Historically, Google has lagged behind the market in security monitoring, prioritizing its own data loss prevention (DLP) APIs given their proven scalability in large enterprises. To Google’s credit, it has created a technology partnership with Cybereason, which will use Google’s cloud security analytics platform Chronicle to improve its extended detection and response (XDR) service and will help security and IT teams identify and prevent attacks using threat hunting and incident response logic.

Google now appears to have the components it previously lacked to offer a much-improved selection of security solutions to its customers. Creating Work Safer by bundling the BeyondCorp Enterprise Platform, Workspace, the suite of Google cybersecurity products, and new integrations with CrowdStrike and Palo Alto Networks will resonate the most with CISOs and CIOs.

Without a doubt, many will want a price break on BeyondCorp maintenance fees at a minimum. While BeyondCorp is generally attractive to large enterprises, it does not address the quickening pace of the arms race between bad actors and enterprises. Google also includes reCAPTCHA Enterprise and Chrome Enterprise for desktop management, both needed by all organizations to scale website protection and browser-level security across all devices.

It’s all about protecting threat surfaces

Enterprises operating in a cloud-native world mostly need to protect their threat surfaces. Google announced a new client connector for its BeyondCorp Enterprise platform that can be configured to protect Google-native as well as legacy applications, which are very important to older companies. The new connector also supports identity- and context-aware access to non-web applications running in both Google Cloud and non-Google Cloud environments. BeyondCorp Enterprise will also get a policy troubleshooter that gives admins greater flexibility to diagnose access failures, triage events, and unblock users.

Throughout Google Cloud Next, cybersecurity executives spoke of embedding security into the DevOps process and creating zero trust supply chains to protect new executable code from being breached. Achieving that ambitious goal for the company’s overall cybersecurity strategy requires zero trust to be embedded in every phase of a build cycle through deployment.

Cloud Build is designed to support builds, tests, and deployments on Google’s serverless CI/CD platform. It is SLSA Level 1 compliant, with scripted builds and support for available provenance. Google also launched a new build integrity feature in Cloud Build that automatically generates a verifiable build manifest, including a signed certificate describing the sources that went into the build, the hashes of the artifacts used, and other parameters. In addition, Binary Authorization is now integrated with Cloud Build to ensure that only trusted images make it to production.

These new announcements will protect software supply chains for large-scale enterprises already running a Google-dominated tech stack. It’s going to be a challenge for mid-tier and smaller organizations to get these systems running on their IT budgets and resources, however.

Bottom line: Cybersecurity strategy needs to work for everybody  

As Google’s cybersecurity strategy goes, so go sales of the Google Cloud Platform. Convincing enterprise CISOs and CIOs to replace or extend their tech stacks and make them Google-centric isn’t the answer. The answer is recognizing how chaotic, diverse, and unpredictable today’s cybersecurity threatscape is, and building more apps, platforms, and adaptive tools that learn fast and thwart breaches.

Getting integration right is just part of the challenge. The far more challenging aspect is how to close the widening cybersecurity gaps all organizations face — not only large-scale enterprises — without requiring a Google-dominated tech stack to achieve it.

 
