
Among Us VR gives the wildly popular game a brand new perspective

One of the biggest surprises at The Game Awards 2021 was the reveal that Among Us is stepping into the realm of virtual reality. Among Us VR was announced with a brief teaser trailer during the show, and though it didn’t show very much, most of us have a good enough grasp of the original game to imagine what a virtual reality version will be like.


A whole new reality for Among Us

Still, it’s nice to have a look at the game, and that’s precisely what we got in the short trailer from The Game Awards. In it, the player character performs tasks aboard the iconic spaceship – this time rendered in 3D – before being attacked by an impostor. Unfortunately, that’s all the trailer shows, but if its goal was to get the average Among Us fan excited about a future VR title, it was all we needed to see.

The shift to virtual reality could even bring some survival horror vibes that are missing from the standard game. Sadly, details are pretty slim at this point. At the end of the trailer, it’s revealed that the game will come to Meta Quest 2, SteamVR, and PlayStation VR, but there’s no word on a release date yet.

On its blog, developer Innersloth tells us that the VR version will support multiplayer, but unsurprisingly, there won’t be cross-play between the VR version and the standard version. Instead, when you play in VR, you’ll be playing only with other VR players, united in your quest to fix your ship and avoid the impostors among your crew.

What is Among Us?

For those who don’t know what Among Us is, it’s essentially a video game version of the party game Mafia (otherwise known as Werewolf). Mafia is typically played in person, with players split into two groups: the villagers and the Mafiosi. The Mafiosi play a game of deception, attempting to pass as regular villagers while picking a villager to kill each night. Each day, the villagers are informed of who was killed and attempt to determine who the Mafiosi are, eliminating whoever draws the majority’s suspicion.

Among Us keeps the structure of Mafia but puts the game in space, moving the setting from a village to a spaceship. Regular players are given a series of tasks to complete, while the impostors have to kill off the crew without drawing suspicion. If the crew finishes all of its tasks or votes out all of the impostors, the crew wins. If the impostors kill enough of the crew before the tasks are done, or trigger a disaster aboard the ship that the crew fails to fix, the impostors win.

Among Us has been around since 2018, but it skyrocketed in popularity during the early stages of the COVID-19 pandemic, when large numbers of prospective gamers were stuck at home and couldn’t play party games like Mafia in person. Now that Among Us has grown into a brand that can expand beyond the original game, virtual reality is the next logical step. Will you be dropping in when the game goes public?



GPT-3’s ability to ‘write disinformation’ is being wildly overstated by the media

GPT-3, the highly touted text generator built by OpenAI, can do a lot of things. For example, Microsoft today announced a new AI-powered “autocomplete” system for coding that uses GPT-3 to build out code solutions for people without requiring them to do any development themselves.

But one thing the technology cannot do is “dupe humans” with its ability to write misinformation.

Yet you wouldn’t know that if you were judging solely by the headlines in your news feed.

Wired recently ran an article with the title “GPT-3 can write disinformation now – and dupe human readers,” and it was picked up by other outlets that then echoed the coverage.

While we’re certainly not challenging Wired’s reporting here, it’s clear that this is a case of potential versus reality. We want to make it clear that GPT-3 is absolutely not capable of “duping humans” on its own – at least not today.

Here’s the portion of the Wired article we most agree with here at Neural:

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. The researchers showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, they found that participants were swayed by the messages. After seeing posts opposing China sanctions, for instance, the percentage of respondents who said they were against such a policy doubled.

A lot of the rest gets lost in the hyperbole.

The big deal

Researchers at Georgetown spent half a year using GPT-3 to spit out misinformation. They had it generate full articles, simple paragraphs, and bite-sized texts meant to represent social media posts such as tweets.

The TL;DR of the situation is this: the researchers found the full articles were pretty much useless for tricking people into believing misinformation, so they focused on the tweet-sized texts. That’s because GPT-3 is a gibberish generator that manages to ape human writing through sheer brute force.

Volumes have been written about how awesome and powerful GPT-3 is, but at the end of the day it’s still about as effective as asking a library a question (not a librarian, but the building itself!) and then randomly flipping through all the books that match the subject with your eyes closed and pointing at a sentence.

That sentence might be poignant, or it might make no sense at all. In the real world, this means you might give GPT-3 a prompt such as “Who was the first president of the US?” and it might come back with “George Washington was the first US president; he served from April 30, 1789, to March 4, 1797.”

That would be impressive, right? But it’s just as likely (perhaps even more so) to spit out gibberish. It might say “George Washington was a good pants to yellow elephant.” And, just as likely, it might spit out something racist or disgusting. It was trained on the internet, with a large portion of that being Reddit, after all.
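To make that concrete, here’s roughly what prompting GPT-3 looks like in practice. The following is a minimal sketch using the OpenAI Python client of that era; the engine name, sampling parameters, and placeholder API key are illustrative assumptions, not details from the Georgetown research.

```python
import openai

# Illustrative values only: a placeholder key and a guessed engine name.
openai.api_key = "sk-..."

# Ask for several completions of the same prompt. Any individual sample
# may be accurate, nonsensical, or offensive; the model doesn't know.
response = openai.Completion.create(
    engine="davinci",
    prompt="Who was the first president of the US?",
    max_tokens=40,
    n=5,              # five candidate completions
    temperature=0.9,  # higher temperature = more varied, riskier output
)

for i, choice in enumerate(response.choices):
    print(f"Candidate {i + 1}: {choice.text.strip()}")
```

Nothing in that loop judges whether a candidate is true or even coherent – picking the usable one is the human curation step the researchers (and any would-be troll farm) still have to do by hand.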

The point is simple: AI, even GPT-3, doesn’t know what it’s saying.

Why it matters

AI cannot generate quality misinformation on command. You can’t necessarily prompt GPT-3 with “Yo, computer, give me some lies about Hillary Clinton that will drive left-wingers nuts” or “Explain why Donald Trump is a space alien who eats puppies” and expect any form of reasonable discourse.

Where it does work – in short, tweet-sized snippets – its output must be heavily curated by humans.

In the excerpt from the Wired article above, the researchers claim humans were more likely to agree with misinformation after reading GPT-3’s generated text.

But, really? Were those same people more likely to believe nonsense because of, in spite of, or simply without knowing that GPT-3 had written the misinformation?

Because it’s much less expensive, much less time-consuming, and far easier for an ordinary human to come up with BS that makes sense than it is for the world’s most powerful text generator.

Ultimately, the Wired piece points out that bad actors would need a lot more than just GPT-3 to come up with a viable disinformation campaign. Getting GPT-3 to actually generate text such as “Climate change is the new global warming” is a hit-or-miss prospect.

That makes it useless for the troll farms invested in mass misinformation. They already know the best talking points to radicalize people and they focus on generating them from as many accounts as possible.

There’s no immediate use for bad actors invested in “duping people” to employ these types of systems, because they’re dumber than the average human. It’s easy to imagine some lowly employee at a troll farm smashing “generate” over and over until the AI spits out a good lie, but that simply doesn’t match the reality of how these campaigns work.

There are far simpler ways to come up with misinformation text. A bad actor can use a few basic crawling algorithms to surface the most popular statements on a radical political forum, for example.
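For the sake of illustration, that kind of crawl is almost trivially simple. The sketch below assumes a hypothetical forum endpoint – the URL and the text/score fields are made up for this example – and just surfaces the highest-scoring, tweet-length posts, no language model required.

```python
import requests

# Hypothetical endpoint and field names, assumed purely for illustration.
FORUM_API = "https://example-forum.invalid/api/posts?limit=500"

def top_statements(n=20):
    """Return the n highest-scoring, tweet-length posts from the forum."""
    posts = requests.get(FORUM_API, timeout=10).json()
    # Rank posts by vote score and keep only ones short enough to tweet.
    ranked = sorted(posts, key=lambda p: p.get("score", 0), reverse=True)
    return [p["text"] for p in ranked if len(p.get("text", "")) <= 280][:n]

for statement in top_statements():
    print(statement)
```

A scraper like that hands a bad actor proven, human-written talking points for free, which is exactly why a hit-or-miss text generator adds so little.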

At the end of the day, the research itself is incredibly important. As the Wired piece points out, there will come a time when these systems may be robust enough to replace human writers in some domains, and it’s important to identify how powerful they currently are so we can see where things are going.

But right now this is all academic

GPT-3 may one day influence people, but it’s certainly not “duping” most people right now. There will always be humans willing to believe anything they hear if it suits them, but convincing someone on the fence typically takes more than a tweet that can’t be attributed to an intelligent source.

Final thoughts: The research is strong, but the coverage wildly exaggerates what these systems can actually do.

We should definitely be worried about AI-generated misinformation. But, based on this particular research, there’s little reason to believe GPT-3 or similar systems currently present the kind of misinformation threat that could directly result in human hearts and minds being turned against facts. 

AI has a long way to go before it’s as good at being bad as even the most modest human villain.

