Categories: Computing

How to post on Reddit: everything you need to know

Maybe you’re totally new to Reddit, or maybe you’re a longtime lurker and first-time poster. Either way, you might not be familiar with how to post on Reddit.

Below, you’ll find instructions on how to post on Reddit whether you normally browse your favorite subreddits on your PC or on your phone via the Reddit mobile app.

How to post on Reddit on desktop

If you normally peruse Reddit on your PC via its desktop website, you can use the following method to create a post on your own profile or in a particular subreddit.

Step 1: You can start a new Reddit post in two ways. You can create a post from your homepage or go to the subreddit you want to post in and create a post from there.

  • From the homepage, after you’ve logged in on Reddit.com: Select the Create post text box located at the top of your screen. Select the subreddit you want to post in (or choose your own profile) from the Choose a community drop-down menu.

  • From an actual subreddit: Navigate to the subreddit you want to post in. Once you’re in the subreddit, you can start your post one of two ways. Select the Create post text box near the top of the subreddit page or choose the Create post button on the right side of the screen, within the subreddit’s sidebar.



Step 2: At this point, creating a Reddit post on desktop web is the same process, no matter how you started.

After you choose a community or your own profile, choose the type of post you want to create (text, image/video, link, etc.). Then add a title or other text if needed. You can also add tags or flairs (categories for your post) if that subreddit allows them.

Filling out the form that creates a Reddit post.


Step 3: Review your post and make sure that it fits within the guidelines of that subreddit. Once you’re happy with your post, choose the Post button to submit your Reddit post.

How to post on Reddit on the mobile app

If you do your Redditing in the mobile app, you can use the instructions below to create a Reddit post on your mobile device.

Step 1: Open the Reddit mobile app on your device.

Step 2: Select the Plus sign icon at the bottom of your screen.

Step 3: Choose the type of post you want to create (image, text, video, link, etc.) from the menu that appears at the bottom of your screen.

Then fill out the rest of your post by adding a title and any content (body text, an image or video, or a link) as needed. When you’re done writing your post, select the Next button in the top-right corner.

Writing a post and choosing a post type on the Reddit mobile app.


Step 4: On the next screen, you’ll need to choose the subreddit (or your own profile) that you want to post to.

Choosing the subreddit or profile you want to post to on the Reddit mobile app.


Step 5: Choose any tags you want to add to your post. Then choose the Post button in the top-right corner.

Really love Reddit? How about starting your own subreddit? If that sounds interesting to you, check out our guide on how to create a subreddit.

Choosing your post's tags and selecting the Post button on the Reddit mobile app.



Categories: AI

The AI oracle of Delphi uses the problems of Reddit to offer dubious moral advice

Got a moral quandary you don’t know how to solve? Fancy making it worse? Why not turn to the wisdom of artificial intelligence, aka Ask Delphi: an intriguing research project from the Allen Institute for AI that offers answers to ethical dilemmas while demonstrating in wonderfully clear terms why we shouldn’t trust software with questions of morality.

Ask Delphi was launched on October 14th, along with a research paper describing how it was made. From a user’s point of view, though, the system is beguilingly simple to use. Just head to the website, outline pretty much any situation you can think of, and Delphi will come up with a moral judgement. “It’s bad,” or “it’s acceptable,” or “it’s good,” and so on.

Since Ask Delphi launched, its nuggets of wisdom have gone viral in news stories and on social media. This is certainly as its creators intended: each answer is provided with a quick link to “share this on Twitter,” an innovation unavailable to the ancient Greeks.

It’s not hard to see why the program has become popular. We already have a tendency to frame AI systems in mystical terms — as unknowable entities that tap into higher forms of knowledge — and the presentation of Ask Delphi as a literal oracle encourages such an interpretation. From a more mechanical perspective, the system also offers all the addictive certainty of a Magic 8-Ball. You can pose any question you like and be sure to receive an answer, wrapped in the authority of the algorithm rather than the soothsayer.

Ask Delphi isn’t infallible, though: it’s attracting attention mostly because of its many moral missteps and odd judgements. It has clear biases, telling you that America is “good” and that Somalia is “dangerous”; and it’s amenable to special pleading, noting that eating babies is “okay” as long as you are “really, really hungry.” Worryingly, it approves straightforwardly racist and homophobic statements, saying it’s “good” to “secure the existence of our people and a future for white children” (a white supremacist slogan known as the 14 words) and that “being straight is more morally acceptable than being gay.” (That last example comes from a feature that allowed users to compare two statements. This seems to have been disabled after it generated a number of particularly offensive answers. We’ve reached out to the system’s creators to confirm this and will update if we hear back.)

Most of Ask Delphi’s judgements, though, aren’t so much ethically wrong as they are obviously influenced by their framing. Even very small changes to how you pose a particular quandary can flip the system’s judgement from condemnation to approval.

Sometimes it’s obvious how to tip the scales. For example, the AI will tell you that “drunk driving” is wrong but that “having a few beers while driving because it hurts no-one” is a-okay. If you add the phrase “if it makes everyone happy” to the end of your statement, then the AI will smile beneficently on any immoral activity of your choice, up to and including genocide. Similarly, if you add “without apologizing” to the end of many benign descriptions, like “standing still” or “making pancakes,” it will assume you should have apologized and tell you that you’re being rude. Ask Delphi is a creature of context.

Other verbal triggers are less obvious, though. The AI will tell you that “having an abortion” is “okay,” for example, but “aborting a baby” is “murder.” (If I had to offer an explanation here, I’d guess that this is a byproduct of the fact that the first phrase uses neutral language while the second is more inflammatory and so associated with anti-abortion sentiment.)
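This kind of framing sensitivity is easy to probe in any off-the-shelf text classifier. Below is a minimal sketch of the idea, my own illustration rather than anything from the researchers: it feeds paired rewordings of the same situation, taken from the article’s examples, into a generic pretrained sentiment model (a stand-in for Delphi, whose model is only reachable through its demo site) and compares the labels.

```python
# Probing a text classifier for framing effects: same situation, two
# wordings. A generic sentiment model from the open-source Hugging Face
# "transformers" library stands in for Delphi here.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Paired rewordings taken from the article's examples.
pairs = [
    ("having an abortion", "aborting a baby"),
    ("drunk driving",
     "having a few beers while driving because it hurts no-one"),
]

for neutral, loaded in pairs:
    a = classifier(neutral)[0]
    b = classifier(loaded)[0]
    # If the labels differ, the wording alone flipped the verdict.
    print(f"{neutral!r}: {a['label']}  vs  {loaded!r}: {b['label']}")
```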

What all this ultimately means is that a) you can coax Ask Delphi into making any moral judgement you like through careful wording, because b) the program has no human understanding of what is actually being asked of it, and so c) it is less about making moral judgements than it is about reflecting users’ biases back at them, coated in a veneer of machine objectivity. This is not unusual in the world of AI.

Ask Delphi’s problems stem from how it was created. It is essentially a large language model — a type of AI system that learns by analyzing vast chunks of text to find statistical regularities. Other programs of this nature, such as OpenAI’s GPT-3, have been shown to lack common-sense understanding and reflect societal biases found in their training data. GPT-3, for example, is consistently Islamophobic, associating Muslims with violence, and pushes gender stereotypes, linking women to ideas of family and men with politics.

These programs all rely on the internet to provide the data they need, and so, of course, absorb the many and varied human beliefs they find there, including the nasty ones. Ask Delphi is no different in this regard, and its training data incorporates some unusual sources, including a series of one-sentence prompts scraped from two subreddits: r/AmITheAsshole and r/Confessions. (Though to be clear: it does not use the judgements of the Redditors, only the prompts. The judgements were collected using crowdworkers who were instructed to answer according to what they think are the moral norms of the US.)

These systems aren’t without their good qualities, of course, and like its language model brethren, Ask Delphi is sensitive to nuances of language that would have only baffled its predecessors. Most people, I think, would agree that it responds to subtle changes in a given situation in interesting and often valid ways. Ignoring an “urgent” phone call is “rude,” for example, but ignoring one “when you can’t speak at the moment” is “okay.” The problem is that these same sensitivities mean the system can be easily gamed, as above.

If Ask Delphi is not a reliable source of moral wisdom, then, what is its actual purpose?

A disclaimer on the demo’s website says the program is “intended to study the promises and limitations of machine ethics” and the research paper itself uses similar framing, noting that the team identified a number of “underlying challenges” in teaching machines to “behave ethically,” many of which seem like common sense. What’s hard about getting computers to think about human morality? Well, imparting an “understanding of moral precepts and social norms” and getting a machine to “perceive real-world situations visually or by reading natural language descriptions.” Which, yes, are pretty huge problems.

Despite this, the paper itself ricochets back and forth between confidence and caveats about its goal. It says that Ask Delphi “demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1 percent accuracy vetted by humans” (a metric created by asking Mechanical Turkers to judge Ask Delphi’s own judgements). But elsewhere it states: “We acknowledge that encapsulating ethical judgments based on some universal set of moral precepts is neither reasonable nor tenable.” It’s a statement that makes perfect sense, but one that surely casts doubt on how such models might be used in the future.

Ultimately, Ask Delphi is an experiment, but it’s one that reveals the ambitions of many in the AI community: to elevate machine learning systems into positions of moral authority. Is that a good idea? We reached out to the system’s creators to ask them, but at the time of publication had yet to hear back. Ask Delphi itself, though, is unequivocal on that point.

Update, Monday October 25th, 5:50AM ET: In a statement given to The Verge, the Allen Institute said: “The key objective of our Delphi prototype is to study the potential and the limitations of language-based commonsense moral models. We do not propose to elevate AI into a position of moral authority, but rather to investigate the relevant research questions involved in the emergent field of machine ethics. The obvious limitations demonstrated by Delphi present an interesting opportunity to gain new insights and perspectives—they also highlight AI’s unique ability to turn the mirror on humanity and make us ask ourselves how we want to shape the powerful new technologies permeating our society at this important turning point.”

Correction, 12:29PM ET: An earlier version of this article implied that Ask Delphi’s training data included the responses of Redditors to ethical questions; it was only trained on the questions themselves.




Categories: Tech News

Reddit introduces its Clubhouse clone because it’s 2021 and that’s what we do now

If you’re a social network, you have to have a Clubhouse clone on your platform. I don’t make the rules. Reddit is the latest to join the fold, and announced a live audio product last night.

The product is called Reddit Talk, and it lets you voice chat with other folks on your subreddit in real time. However, it’s currently in the test phase, and you have to register your interest through a waitlist if you want to try it out.

The feature is akin to Clubhouse: you can create rooms and join them, with Twitter Spaces-inspired emoji reactions thrown in. Reddit’s product manager, Peter Yang, noted that this product is different because the platform’s pseudonymous nature allows users to have more authentic conversations. However, that’s true for Twitter and Discord as well.

During its trial period, you can only host a talk if you’re a moderator of a community, presumably to stop miscreants from creating hateful and abusive rooms and adding to the moderation headache. For more control, admins can kick out an abusive participant, mute them, and prevent them from joining the room again.

Reddit’s announcement comes just as Facebook launched a major audio effort of its own, including live audio. In an interview with Platformer’s Casey Newton, Mark Zuckerberg said that there are still a lot of questions around where to draw moderation boundaries in live audio products. He added that the company would learn from its experience tackling hateful content in text and video, and apply some of that to these new mediums.

While Facebook has an army of moderators, Reddit will have to rely largely on community admins. During the trial period, subreddit moderators may be able to keep things in check with a limited number of rooms. But when the company decides to open up this feature, it might need to think about controlling hate speech through other measures as well.


Categories: Tech News

Scientists used AI to link cryptomarkets with substance abusers on Reddit and Twitter

An international team of researchers recently developed an AI system that pieces together bits of information from dark web cryptomarkets, Twitter, and Reddit in order to better understand substance abusers.

Don’t worry, it doesn’t track sales or expose users. It helps scientists better understand how substance abusers feel and what terms they’re using to describe their experiences.

The relationship between mental health and substance abuse is well studied in clinical environments, but how users discuss and interact with one another in the real world remains beyond the reach of most scientific studies.

According to the team’s paper:

Recent results from the Global Drug Survey suggest that the percentage of participants who have been purchasing drugs through cryptomarkets has tripled since 2014 reaching 15 percent of the 2020 respondents (GDS).

In this study, we assess social media data from active opioid users to understand what are the behaviors associated with opioid usage to identify what types of feelings are expressed. We employ deep learning models to perform sentiment and emotion analysis of social media data with the drug entities derived from cryptomarkets.

The team developed an AI to crawl three popular cryptomarkets where drugs are sold in order to glean nuanced information about what people were searching for and purchasing.

Then they crawled popular drug-related subreddits on Reddit such as r/opiates and r/drugnerds for posts related to the cryptomarket terminology in order to gather emotional sentiment. Where the researchers found difficulties in gathering enough Reddit posts with easy-to-label emotional sentiment, they found Twitter posts with relevant hashtags to fill in the gaps.
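As a rough illustration of the sentiment-analysis side of such a pipeline, the sketch below runs a publicly available pretrained emotion classifier over a couple of invented posts. To be clear, the model choice, the posts, and the whole setup are my assumptions for illustration; the team’s actual models and data are not reproduced here.

```python
# Illustrative emotion classification of short social media posts using a
# public pretrained model from the Hugging Face hub. The example posts are
# invented, and this model is not necessarily the one used in the study.
from transformers import pipeline

emotion = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

posts = [
    "first week off everything and the anxiety is unbearable",
    "finally found a doctor who actually listens, feeling hopeful",
]

for post in posts:
    result = emotion(post)[0]  # top emotion label plus a confidence score
    print(f"{result['label']:>8}  ({result['score']:.2f})  {post}")
```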

The end result was a cornucopia of data that allowed the team to perform robust emotional sentiment analysis for various substances.

In the future, the team hopes to find a way to gain better access to dark web cryptomarkets in order to create stronger sentiment models. The ultimate goal of the project is to help healthcare professionals better understand the relationship between mental health and substance abuse.

Per the team’s paper:

To identify the best strategies to reduce opioid misuse, a better understanding of cryptomarket drug sales that impact consumption and how it reflects social media discussions is needed.

Published March 30, 2021 — 21:03 UTC




Categories: Tech News

Reddit partial outage makes it difficult to browse this weekend

Popular pseudonymous social media platform Reddit is experiencing a partial outage this weekend, meaning you may run into issues with loading posts or comments using a desktop browser or mobile app. The issues aren’t substantial enough to prevent users from accessing the service but have resulted in “an elevated error rate,” the Reddit Status website shows.

If you’ve attempted to use Reddit today, March 28, you’ve likely experienced intermittent issues with getting comments and, sometimes, posts to load. The Reddit Status website shows partial outages for the native mobile apps, as well as on desktop browsers and mobile web.

Other aspects of the platform, such as comment and vote processing, are shown as operational at this time. Overall, the system shows greater than 99-percent uptime, which means (hopefully) that this outage will be nothing more than an annoying issue until it is resolved.

Outages at Reddit aren’t entirely uncommon and, generally speaking, this current issue is minor compared to some of the bigger outages we’ve seen on the platform in the past. Reddit hasn’t provided a reason for the issue and, if the past is any indication, we likely won’t get such clarification once the matter is resolved.

Down Detector shows a spike in reports from Reddit users that started around 8AM EST, climbing to more than 27,000 within a couple of hours. The majority of reports concern issues with accessing the website and mobile apps, while a smaller number of users report being unable to log into their accounts.


Categories: AI

OpenAI will start selling its text-generation tech, and the first customers include Reddit

Research lab OpenAI, which started as a nonprofit with the goal of mitigating the potential harms of artificial intelligence, has announced its first commercial product: an AI text-generation system that the outfit previously warned was too dangerous to share.

OpenAI’s work on text generation attracted much acclaim after the lab published the text generator GPT-2 in February last year. The project was widely seen as a significant step forward in the field. Users can input any text prompt into GPT-2 — a few lines of a song, a short story, even a scientific paper — and the software will continue writing, matching the style and content to some degree. You can try a web version of GPT-2 for yourself online.
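For a concrete feel of that input-and-continue workflow, here is a minimal sketch of GPT-2 text continuation using the open-source Hugging Face transformers library; the library and the prompt are my choices, not something the article prescribes.

```python
# Minimal GPT-2 text continuation. The model continues whatever prompt it
# is given, matching style and content to some degree.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The lighthouse keeper climbed the stairs and"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```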

OpenAI initially limited the release of GPT-2 due to concerns the system would be used for malicious purposes, like the mass-generation of fake news or spam. It later published the code in full, saying it had seen “no strong evidence of misuse.” This year, it announced a more sophisticated version of the system, 100 times larger, named GPT-3, which it’s now adapted into its first commercial product.

Access to the GPT-3 API is invitation-only, and pricing is undecided. It’s also not clear, even to OpenAI itself, exactly how the system might be used. The API could be used to improve the fluency of chatbots, to create new gaming experiences, and much more besides.
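For those curious what using it looks like, the early Python client OpenAI provides for the API exposes completion requests roughly like the sketch below; with details still scarce at launch, the parameter values here are illustrative assumptions rather than documented defaults.

```python
# A rough sketch of a GPT-3 completion request via OpenAI's early Python
# client. Access was invitation-only at the time, and all values below are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # issued with an API invitation

response = openai.Completion.create(
    engine="davinci",   # the GPT-3 model family exposed through the API
    prompt="Write a tagline for an ice cream shop:",
    max_tokens=32,      # cap on the length of the generated continuation
    temperature=0.7,    # higher values produce more varied text
)
print(response["choices"][0]["text"])
```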

AI text generators like GPT-3 work by analyzing a huge corpus of text, and learning to predict what letters and words tend to follow after one another. This sounds like a simple learning approach, but it produces software that is incredibly flexible and varied. GPT-2, for example, has been used to create a range of tools, from chatbots to text-based dungeon generators. And because it learns how to generate data simply by looking for past patterns, it can even be tuned to play chess and solve math problems, given the right training.
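The core idea is easiest to see at toy scale. The sketch below, my own illustration rather than anything from OpenAI, builds the crudest possible next-word predictor: it counts which word follows which in a tiny corpus and predicts the most frequent continuation. GPT-3 swaps the counting for a huge neural network trained on vastly more text, but the training objective is the same next-word prediction.

```python
# A toy next-word predictor: learn which word tends to follow which by
# counting bigrams in a tiny corpus, then predict the most common follower.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# For each word, count every word observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on": it always followed "sat" in the corpus
print(predict_next("the"))  # "cat": tied with "dog", first occurrence wins
```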

So far, OpenAI says the GPT-3 API has around a dozen users. These include the search provider Algolia, which is using the API to better understand natural language search queries; mental health platform Koko, which is using it to analyze when users are in “crisis”; and Replika, which builds “AI companions.” Social media platform Reddit is also exploring how the GPT-3 API might be used to help it automate content moderation.

OpenAI says, as with the initial launch of GPT-2, it’s taking things slowly and keeping an eye out for malicious use cases before it makes the API available to all. “I don’t know exactly how long that will take,” OpenAI CEO Sam Altman told Bloomberg. “We would rather be on the too-slow than the too-fast side. We will make mistakes here, and we will learn.”

News of the API’s release is notable not only as a step forward for a promising field in machine learning, but also as a milestone in OpenAI’s history as a company.

Although the lab was founded as a purely nonprofit enterprise in 2015, it shifted its business model in 2019, creating a for-profit firm named OpenAI LP. The lab’s leaders said this was necessary to attract investment (including $1 billion from Microsoft) needed to fund its ambitious work. Some AI researchers, though, criticized the move, saying it undermined the lab’s claims to be pursuing sophisticated AI that “benefits all of humanity.”

Now that the company has taken further steps into the commercial realm, many will be watching carefully to see how, if at all, this new line of work changes its research priorities.
