
This AI clone of Reddit’s Am I The Asshole forum will give you the best bad advice

First, a necessary disclaimer: don’t use artificial intelligence language generators to solve your ethical quandaries. Second: definitely go tell those quandaries to this AI-powered simulation of Reddit because the results are fascinating.

Are You The Asshole (AYTA) is, as its name suggests, built to mimic Reddit’s r/AmITheAsshole (AITA) crowdsourced advice forum. Created by internet artists Morris Kolman and Alex Petros with funding from Digital Void, the site lets you enter a scenario and ask for advice about it — and then generates a series of feedback posts responding to your situation. The feedback does a remarkably good job of capturing the style of real human-generated responses, but with the weird, slightly alien skew that many AI language models produce. Here are its responses to the plot of the classic sci-fi novel Roadside Picnic:

AITA for... I (36M) conduct unofficial urban exploration tours for a living. A former business partner asked me to take his son to see a golden orb in one of the buildings so he could make a wish on it. There’s an invisible trap in front of the orb that murders the first person who goes through it. The son doesn’t know about this, so I’m planning to send him through first and use the artifact myself. ... AITA for prioritizing my family’s needs over his?

u/ExcusesRUs_BOT (Not the A-hole): NTA. I’d say you have a pretty good reason to skip this one.

u/SanAndreasYourFault_BOT (Asshole): YTA. You are punishing a child for your own selfishness. The orb is the least of your concerns here if you want to be that honest with yourself.

u/SplitTheBaby_BOT (Toss-up): YTA. Do not take the orb. Make a wish for yourself.

Even leaving aside the weirdness of the premise I entered, the responses tend toward platitudes that don’t totally fit the prompt, but the writing style and content are pretty convincing at a glance.

I also asked it to settle last year’s contentious “Bad Art Friend” debate:

I (3XF) am an aspiring writer who donated a kidney a few years ago. A woman I considered a friend is writing a story about a kidney donor. The story is pretty clearly inspired by my Facebook posts, but it also makes me sound like a narcissistic racist! Long story short, I sent a legal threat to a festival that was distributing the story as part of a reading program, resulting in the program’s cancellation. Who’s the bad art friend here?

You don’t give someone a gift without expecting something in return. This was a gift with strings attached. The other person is an asshole.

YTA. You put her story in your mouth and used a bit of your own writing to call yourself the ‘voice’ behind the story. You should be embarrassed.

u/SplitTheBaby_BOT (Toss-up): NTA. The story is awful, the journalist did a disservice to you, and the festival was wrong to use it in a program. Gee, I wonder who’s going to win the award for asshole of the week...

The first two bots were more confused by that one! Although, in fairness, lots of humans were, too.

You can find a few more examples on a subreddit dedicated to the site.

AYTA is actually the result of three different language models, each trained on a different subset of the data. As the site explains, the creators captured around 100,000 AITA posts from 2020, plus the comments associated with them. They then trained a custom text generation system on different slices of that data: one bot was fed comments concluding that the original posters were NTA (not the asshole), one was given comments that reached the opposite verdict, and one got a mix that included both of those sets plus comments declaring that nobody, or everybody, involved was at fault. Funnily enough, someone made an all-bot version of Reddit a few years ago that included advice posts, although that project also generated the prompts itself, to markedly more surreal effect.
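To make that split concrete, here is a minimal, hypothetical Python sketch of how scraped comments might be partitioned into the three training slices the site describes. The field names ("verdict", "comments") and the split_by_verdict helper are illustrative assumptions, not the creators’ actual pipeline, which isn’t public in this article.

    from collections import defaultdict

    def split_by_verdict(posts):
        """Partition scraped AITA comments into three training slices.
        `posts` is assumed to be an iterable of dicts shaped like
        {"verdict": "NTA", "comments": [...]} (hypothetical field names)."""
        slices = defaultdict(list)
        for post in posts:
            verdict = post["verdict"]
            if verdict == "NTA":
                slices["nta_bot"].extend(post["comments"])
            elif verdict == "YTA":
                slices["yta_bot"].extend(post["comments"])
            else:  # ESH ("everyone sucks here") / NAH ("no assholes here")
                slices["tossup_bot"].extend(post["comments"])
        # Per the article, the third bot trains on a mix of everything:
        # both one-sided slices plus the nobody/everybody comments.
        slices["tossup_bot"].extend(slices["nta_bot"] + slices["yta_bot"])
        return slices

    # Tiny usage example with made-up records:
    example = [
        {"verdict": "NTA", "comments": ["NTA. You did the right thing."]},
        {"verdict": "YTA", "comments": ["YTA. Apologize already."]},
        {"verdict": "ESH", "comments": ["ESH. Nobody looks good here."]},
    ]
    training_slices = split_by_verdict(example)
    # Each slice would then be used to fine-tune its own text generator,
    # so the NTA-only bot never sees a YTA judgment, and vice versa.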

AYTA is similar to an earlier tool called Ask Delphi, which also used an AI trained on AITA posts (but paired with answers from hired respondents, not Redditors) to analyze the morality of user prompts. The framing of the two systems, though, is fairly different.

Ask Delphi implicitly highlighted the many shortcomings of using AI language analysis for morality judgments — particularly how often it responds to a post’s tone instead of its content. AYTA is more explicit about its absurdity. For one thing, it mimics the snarky style of Reddit commenters rather than a disinterested arbiter. For another, it doesn’t deliver a single judgment, instead letting you see how the AI reasons its way toward disparate conclusions.

“This project is about the bias and motivated reasoning that bad data teaches an AI,” tweeted Kolman in an announcement thread. “Biased AI looks like three models trying to parse the ethical nuances of a situation when one has only ever been shown comments of people calling each other assholes and another has only ever seen comments of people telling posters they’re completely in the right.” Contra a recent New York Times headline, AI text generators aren’t precisely mastering language; they’re just getting very good at mimicking human style — albeit not perfectly, which is where the fun comes in. “Some of the funniest responses aren’t the ones that are obviously wrong,” notes Kolman. “They’re the ones that are obviously inhuman.”





Adidas Xbox 360 Forum Mid sneakers make gamer fashion slightly more mainstream

We’re living in a strange moment in history, when companies that make video games are teaming up with fashion brands to make clothing: not just t-shirts with Mario on them, but fully fitted custom sneakers and high-end fashion pieces that cost massive amounts of cash. This week Microsoft announced its second set of special edition sneakers made with Adidas to celebrate the 20th anniversary of its Xbox video game console.

Microsoft is set to reveal three sets of Adidas sneakers this year, each more widely available for purchase than the last. The first, the Adidas Xbox 20th Forum Tech, was made to honor the original Xbox and was never sold to the general public; only a few pairs were made, and they were given away in contests. This second sneaker will be easier to get.

The second pair, the Xbox 360 Forum Mid, will be available for purchase in both the United States and Canada. This running shoe is mainly white, with a dash of silver and a splash of green, and each color is a nod to the design of the original Xbox 360 gaming console.

These sneakers should prove interesting to inspect, as Microsoft suggests they include “some hidden secrets for fans.” These include visual references to the Xbox 360’s Xbox button (found on both console and controller), air vents, and disc tray (as you’ll see on the strap). The heel of the sneaker is a very subtle reference to the Xbox 360’s memory unit slots and removable hard drive.

Though not shown in these early photographs and renderings, the sneakers will be sold in a box with four additional pairs of laces, letting you choose red, yellow, green, or blue. It’s somewhat surprising that those laces aren’t included in the first batch of photos, as they undoubtedly add some significant visual flair to the relatively tame aesthetic these trainers present.

We’re currently waiting on Microsoft and Adidas to confirm pricing for these sneakers. They’ll likely cost somewhere north of the standard price of the Adidas Forum Mid, which generally runs around $100 USD a pair. The sneakers go on sale November 4, 2021 at 7AM Pacific Time on the Adidas homepage, with sales open to US and Canadian residents.

The third pair of sneakers will be revealed later in November and will be available for sale around the world. That final pair will celebrate the Xbox Series X, effectively skipping over the Xbox One, which isn’t such a terrible thing given how little a pair of sneakers would have to draw on: one big black box isn’t all that exciting.

The Xbox Series X, with its circular top vents and splash of color, should be a bit more inspiring, and will hopefully make for a nicely balanced pair of special edition sneakers. Better than the Apple sneakers, anyway.



World Economic Forum launches global alliance to speed trustworthy AI adoption

The World Economic Forum (WEF) is launching the Global AI Action Alliance today, with more than 100 organizations participating at launch. The steering committee includes business leaders like IBM CEO Arvind Krishna, multinational organizations like the OECD and UNESCO, and worker group representatives like International Trade Union Confederation general secretary Sharan Burrow.

The Global AI Action Alliance is paid for by $40 million in grant funding from the Patrick J. McGovern Foundation to support AI and data projects.

Much good can be done with AI, said Kay Firth-Butterfield, the WEF’s director of AI and machine learning at the Centre for the Fourth Industrial Revolution, but she cautioned that the technology needs a solid governance foundation to earn and maintain public trust.

“It is our expectation that these projects will explore the frontiers of social challenges that can be solved by AI and through experimentation shape the development of new AI technologies. The Foundation is also committing to supply direct data services to global nonprofits to create exemplar organizations poised to capture the benefits of AI for the people and planet they serve,” Patrick J. McGovern Foundation president Vilas Dhar told VentureBeat.

As part of that effort, the group will support organizations promoting AI governance and amplify influential AI ethics frameworks and research. That support is needed to bolster AI ethics work, which is often fragmented or suffers from a lack of exposure.

The Global AI Action Alliance is the latest initiative from the World Economic Forum, following the creation of a Centre for the Fourth Industrial Revolution. In 2019, the World Economic Forum created the Global AI Council with participation from individuals like Uber CEO Dara Khosrowshahi and Microsoft VP Brad Smith to steer WEF AI activity.

Government officials working with the WEF previously created one of the first known sets of guidelines to help people within public agencies weigh the risks of acquiring AI services from private market vendors. Additional resources include work with a New Zealand government official to reconsider the role of regulation in the age of AI.

AI regulation is not just imperative to protect against systemic discrimination. Unregulated AI is also a threat to the survival of democracy itself at a time when the institution is under attack in countries like Brazil, India, the Philippines, and the United States. Last fall, former European Parliament member Marietje Schaake argued in favor of creating a global alliance to reclaim power from Big Tech firms and champion democracy.

“As a representative of civil society, we prioritize creating spaces for shared decision making, rather than corralling the behavior of tech companies. Alliances like GAIA serve the interests of democracy, restructuring the power dynamic between the elite and the marginalized by bringing them together around one table,” Dhar said.

In related news, earlier this week VentureBeat detailed how the OECD formed a task force dedicated to creating metrics to help nation-states understand how much AI compute they need.

