Categories
AI

Artificial intelligence might eventually write this article

I hope my headline is an overstatement, purely for the sake of my own job security, but in this week’s Vergecast artificial intelligence episode, we explore the world of large language models and how they might be used to produce AI-generated text in the future. Maybe it’ll give writers ideas for the next major franchise series, or write full blog posts, or, at the very least, fill up websites with copy that’s too arduous for humans to write.

Among the people we speak to is Nick Walton, the cofounder and CEO of Latitude, which makes AI Dungeon, a game that builds its plot around whatever you type into it. (That’s how Walton ended up in a band of traveling goblins — you’ll just have to listen to understand how that makes sense!) We also chat with Samanyou Garg, founder of Writesonic, a company that offers various AI-powered writing tools. The company can even have AI write a blog post — I’m shaking! But really.

Anyway, toward the end of the episode, I chat with James Vincent, The Verge’s AI and machine learning senior reporter, who calms me down and helps me understand what the future of text-generation AI might be. He’s great. Check out the episode above, and make sure you subscribe to the Vergecast feed for one more episode of this AI miniseries, as well as the regular show. See you there!

Repost: Original Source and Author Link

Categories
AI

Can AI write an episode of Stargate? Google AI took on the challenge

The process of writing a television show typically involves a writers room and a lot of time, as humans figure out the plot and the dialogue that makes a show work. 

For the cult classic Stargate science fiction franchise, which spanned three series (SG-1, Stargate Atlantis and Stargate Universe), character and plot development was helmed by Stargate co-creator Brad Wright. In 2021, Wright publicly posted a message on Twitter asking whether AI could write an episode of Stargate that would appear on the sci-fi insider site The Companion.

None other than Laurence Moroney, AI lead at Google, responded by picking up the gauntlet to prove what AI could do – though Wright wasn’t initially worried that AI would replace him or other writers.

“When the whole project started, I threw it out there to The Companion as an idea – I knew I had seen a couple of AI models for scripts,” Wright told VentureBeat. “In some ways, it’s fantastic. In other ways, it’s very unthreatening.”

The first iteration of the AI-generated script was completed by November 2021. The script was interesting, but there was also a lot of gibberish, Wright said. Now Moroney and Wright are collaborating again on a second version that aims to dial a new gate address for a more involved and engaging Stargate script.

“I told Laurence (Moroney) we should probably try to really step up the game if we’re going to do this again, and he accepted that challenge,” Wright said. “That’s what blew me away because it’s not just better – it’s like, whoa… better!”

How the Stargate AI script was generated

The process for producing the Stargate AI-generated script was much the same as the way any AI model is first built – by training it.

Moroney trained the AI model with every Stargate episode script ever written, providing the system with a corpus of every line of dialogue and plot. He used a variety of technologies, primarily Google’s TensorFlow machine learning framework.

He also used pre-trained natural language models, along with a technique known as transformers, Moroney said.

“Not to be confused with the Hasbro toys, the technique called transformers was primarily invented actually for language translation,” Moroney said. “If you think about the idea of language translation, you have an input sentence that you want to match to an output sentence and that almost sounds like the ideal thing for script generation.”

For example, an input sentence could be ‘Captain Samantha Carter says something to General Jack O’Neil.’ The output sentence is derived from the transformer training on how Jack has responded in the past to input sentences similar to the one that Samantha just uttered.

“So I could train a transformer to predict what Jack would say to anything,” Moroney said.
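Moroney’s Stargate-specific models aren’t public beyond these descriptions, but the “predict what Jack would say” idea maps onto standard text-generation tooling. As a minimal sketch, assuming a generic pre-trained transformer (GPT-2 via the Hugging Face transformers library, not Moroney’s TensorFlow pipeline) and a made-up dialogue prompt:

```python
# Minimal sketch: prompt a pre-trained transformer with one character's line
# and let it generate a plausible reply. This illustrates the general
# technique only; it is not Moroney's Stargate-specific TensorFlow pipeline,
# and the dialogue prompt is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "CARTER: Sir, the gate just dialed itself.\nO'NEIL:"
result = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
```

A model fine-tuned on every past line of a character’s dialogue, as Moroney describes, would work the same way; only the training corpus changes.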

The other core technology approach used was something known as a universal sentence encoder, which provides a numeric value for the context of a sentence. With that approach, Moroney said, it was possible to have the semantics of the sentences numerically encoded, in order to understand the connections between one sentence and the next, moving forward and backward. Wright noted that, in his view, the second version of the Stargate AI script was better than the first because of the encoder – the script never descended into nonsensical gibberish.
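Google publishes the Universal Sentence Encoder as a TensorFlow Hub module, so the basic encode-and-compare idea can be sketched directly. This is only an illustration of numerically encoded sentence semantics, not Moroney’s actual Stargate code, and the example sentences are invented:

```python
# Sketch: encode sentences into fixed-length vectors and compare them.
# Illustrates the "numerically encoded semantics" idea, not the Stargate pipeline.
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "Dial the gate and send the probe through.",
    "Open the Stargate and push the MALP in.",
    "The commissary is serving blue Jell-O again.",
]
vectors = embed(sentences).numpy()  # shape: (3, 512)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # high: same meaning, different wording
print(cosine(vectors[0], vectors[2]))  # lower: unrelated sentence
```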

While there are various Google machine learning operations tools Moroney could have used for more automation, he emphasized that much of the process to build the Stargate script was manual. He provided a prompt to the trained TensorFlow model, which would then provide a response. Those responses were then inputted into the script, which Moroney manually assembled.

“The model wasn’t spitting out a screenplay with properly formatted text,” Moroney said. “It was a number of models generating the appropriate tags to go in at the appropriate time.”

Bringing scriptwriting AI to the Google enterprise

The models and techniques Moroney used to build out the Stargate script could be applicable to the enterprise. However, he joked that it’s unlikely that his AI techniques will be used to write an event keynote script for Google and Alphabet CEO Sundar Pichai anytime soon.

“If I got every word that Sundar (Pichai) ever said, I could train a model on his vernacular,” he said. “If I wrote something that could help my pitch so that it’s more in his voice than my voice, that kind of thing would be a very useful tool.”

Moroney said that for writers helping to produce the scripts used for keynotes at a corporate event, an AI model could help reduce the time needed to get the right tone for a particular speaker, but it won’t be able to do all the work.

“If you’re doing an announcement for something like Google I/O, you’re going to be talking about something new that nobody’s ever seen before,” he said. “Therefore by definition, you don’t have the data to be able to train models like that.”

Another potential way AI is now being exploited is for the production of deep fakes, which represents a non-trivial cybersecurity risk. An AI trained in the right vernacular could potentially be used to aid in the production of a deep fake, but Moroney said that he prefers to see positive uses of technology rather than negative ones.

“I personally worry a lot about deep fakes,” he said. “But I’m also optimistic that there are technologies out there that you can use to spot deep fakes and they actually are quite easy to spot with the right application of technologies.”

Stargate AI script is good, but it won’t replace humans

An often-heard concern about AI is that it will replace humans, but Moroney isn’t worried – for now. In his view, AI-assisted tools could potentially help generate the foundation for future scripts. While humans could use AI to help with parts of the process, the core storytelling and uniqueness comes from things that cannot be trained by machine learning.

Wright said emphatically that the AI-generated script is a unique idea that honors the Stargate show, but it honors the show as it was. That said, as a man who has spent his career writing about the future, Wright sees potential for something more.

“I think AI is going to be able to generate something eventually that will pass a test as to whether or not an audience believes it was written by humans,” Wright said. “It might not be better than the best humans, but you know, at the beginning, the best chess masters were beating the best computers and now AI beats chess masters. I’m sure that something similar could happen with this.”

The Stargate AI version 2 script, as read by the original actors from Stargate – including Richard Dean Anderson, Amanda Tapping and Michael Shanks – will premiere on The Companion on May 21.




Repost: Original Source and Author Link

Categories
Tech News

GPT-3’s ability to ‘write disinformation’ is being wildly overstated by the media

GPT-3, the highly touted text generator built by OpenAI, can do a lot of things. For example, Microsoft today announced a new AI-powered “autocomplete” system for coding that uses GPT-3 to build out code solutions for people without requiring them to write the code themselves.

But one thing the technology cannot do is “dupe humans” with its ability to write misinformation.

Yet, you wouldn’t know that if you were solely judging by the headlines in your news feed.

Wired recently ran an article with the title “GPT-3 can write disinformation now – and dupe human readers,” and it was picked up by other outlets that then echoed the coverage.

While we’re certainly not challenging Wired’s reporting here, it’s clear that this is a case of potential versus reality. We hope to illuminate the fact that GPT-3 is absolutely not capable of “duping humans” on its own. At least not today.

Here’s the portion of the Wired article we most agree with here at Neural:

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. The researchers showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, they found that participants were swayed by the messages. After seeing posts opposing China sanctions, for instance, the percentage of respondents who said they were against such a policy doubled.

A lot of the rest gets lost in the hyperbole.

The big deal

Researchers at Georgetown spent half a year using GPT-3 to spit out misinformation. The researchers had it generate full articles, simple paragraphs, and bite-sized texts meant to represent social media posts such as tweets.

The TL;DR of the situation is this: The researchers found the articles were pretty much useless for the purposes of tricking people into believing misinformation, so they focused on the tweet-sized texts. This is because GPT-3 is a gibberish-generator that manages to ape human writing through sheer brute force.

Volumes have been written about how awesome and powerful GPT-3 is, but at the end of the day it’s still about as effective as asking a library a question (not a librarian, but the building itself!) and then randomly flipping through all the books that match the subject with your eyes closed and pointing at a sentence.

That sentence might be poignant and it might make no sense at all. In the real world, when it comes to GPT-3, this means you might give it a prompt such as “who was the first president of the US?” and it might come back with “George Washington was the first US president, he served from April 30, 1789 – March 4, 1797.”

That would be impressive, right? But it’s just as likely (perhaps even more so) to spit out gibberish. It might say “George Washington was a good pants to yellow elephant.” And, just as likely, it might spit out something racist or disgusting. It was trained on the internet, with a large portion of that being Reddit, after all.

The point is simple: AI, even GPT-3, doesn’t know what it’s saying.

Why it matters

AI cannot generate quality misinformation on command. You can’t necessarily prompt GPT-3 with “Yo, computer, give me some lies about Hillary Clinton that will drive left wingers nuts” or “Explain why Donald Trump is a space alien who eats puppies,” and expect any form of reasonable discourse.

Where it does work, in short-form, tweet-sized snippets, the output must be heavily curated by humans.

In the above example from the Wired article, the researchers claim humans were more likely to agree with misinformation after they read GPT-3’s generated text.

But, really? Were those same people more likely to believe nonsense because of, in spite of, or without the knowledge of the fact that GPT-3 had written the misinformation?

Because it’s much less expensive, much less time-consuming, and far easier for a basic human to come up with BS that makes sense than it is for the world’s most powerful text generator.

Ultimately, the Wired piece points out that bad actors would need a lot more than just GPT-3 to come up with a viable disinformation campaign. Getting GPT-3 to actually generate text such as “Climate change is the new global warming” is a hit-or-miss prospect.

That makes it useless for the troll farms invested in mass misinformation. They already know the best talking points to radicalize people and they focus on generating them from as many accounts as possible.

There’s no immediate reason for bad actors invested in “duping people” to use these types of systems, because they’re dumber than the average human. It’s easy to imagine some lowly employee at a troll farm smashing “generate” over and over until the AI spits out a good lie, but that simply doesn’t match the reality of how these campaigns work.

There are far simpler ways to come up with misinformation text. A bad actor can use a few basic crawling algorithms to surface the most popular statements on a radical political forum, for example.

At the end of the day, the research itself is incredibly important. As the Wired piece points out, there will come a time when these systems may be robust enough to replace human writers in some domains, and it’s important to identify how powerful they currently are so we can see where things are going.

But right now this is all academic

GPT-3 may one day influence people, but it’s certainly not “duping” most people right now. There will always be humans willing to believe anything they hear if it suits them, but convincing someone on the fence typically takes more than a tweet that can’t be attributed to an intelligent source.

Final thoughts: The research is strong, but the coverage highly exaggerates what these systems can actually do.

We should definitely be worried about AI-generated misinformation. But, based on this particular research, there’s little reason to believe GPT-3 or similar systems currently present the kind of misinformation threat that could directly result in human hearts and minds being turned against facts. 

AI has a long way to go before it’s as good at being bad as even the most modest human villain.


Repost: Original Source and Author Link

Categories
AI

IBM’s Project CodeNet will test how far you can push AI to write software

IBM’s AI research division has released a 14-million-sample dataset to develop machine learning models that can help with programming tasks. Called Project CodeNet, the dataset takes its name from ImageNet, the famous repository of labeled photos that triggered a revolution in computer vision and deep learning.

While there’s a scant chance that machine learning models built on the CodeNet dataset will make human programmers redundant, there’s reason to be hopeful that they will make developers more productive.

Automating programming with deep learning

In the early 2010s, impressive advances in machine learning triggered excitement (and fear) about artificial intelligence soon automating many tasks, including programming. But AI’s penetration in software development has been extremely limited.

Human programmers discover new problems and explore different solutions using a plethora of conscious and subconscious thinking mechanisms. In contrast, most machine learning algorithms require well-defined problems and a lot of annotated data to develop models that can solve the same problems.

There have been many efforts to create datasets and benchmarks to develop and evaluate “AI for code” systems. But given the creative and open nature of software development, it’s very hard to create the perfect dataset for programming.

The CodeNet dataset

[Image: Project CodeNet is a huge dataset of ~14M code samples spread across dozens of programming languages.]

With Project CodeNet, the researchers at IBM have tried to create a multi-purpose dataset that can be used to train machine learning models for various tasks. CodeNet’s creators describe it as a “very large scale, diverse, and high-quality dataset to accelerate the algorithmic advances in AI for Code.”

The dataset contains 14 million code samples with 500 million lines of code written in 55 different programming languages. The code samples have been obtained from submissions to nearly 4,000 challenges posted on online coding platforms AIZU and AtCoder. The code samples include both correct and incorrect answers to the challenges.

One of the key features of CodeNet is the amount of annotation that has been added to the examples. Every one of the coding challenges included in the dataset has a textual description along with CPU time and memory limits. Every code submission has a dozen pieces of information, including the language, the date of submission, size, execution time, acceptance, and error types.

The researchers at IBM have also gone to great lengths to make sure the dataset is balanced along different dimensions, including programming language, acceptance, and error types.

Programming tasks for machine learning

CodeNet is not the only dataset for training machine learning models on programming tasks. But a few characteristics make it stand out. The first is the sheer size of the dataset, including the number of samples and the diversity of the languages.

But perhaps more important is the metadata that goes with the coding samples. The rich annotations added to CodeNet make it suitable for a diverse set of tasks as opposed to other coding datasets that are specialized for specific programming tasks.

There are several ways CodeNet can be used to develop machine learning models for programming tasks. One is language translation. Since each coding challenge in the dataset contains submissions in various programming languages, data scientists can use it to create machine learning models that translate code from one language to another. This can be handy for organizations that want to port old code to new languages, making it accessible to newer generations of programmers and maintainable with new development tools. A rough sketch of how such translation pairs might be assembled follows below.
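Suppose each submission’s metadata records a problem ID, a language, and an acceptance status (fields CodeNet does track, though the CSV layout and column names below are assumptions made for illustration):

```python
# Sketch: pair accepted submissions to the same problem across two languages
# to form training examples for a code-translation model. The CSV layout and
# column names here are assumptions for illustration, not CodeNet's exact schema.
import csv

def load_accepted(metadata_csv, language):
    """Map problem_id -> filename of one accepted submission in the given language."""
    accepted = {}
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["language"] == language and row["status"] == "Accepted":
                accepted.setdefault(row["problem_id"], row["filename"])
    return accepted

python_solutions = load_accepted("metadata.csv", "Python")
java_solutions = load_accepted("metadata.csv", "Java")

# Problems solved in both languages become (source, target) training pairs.
pairs = [
    (python_solutions[pid], java_solutions[pid])
    for pid in python_solutions.keys() & java_solutions.keys()
]
print(f"{len(pairs)} Python -> Java translation pairs")
```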

CodeNet can also help develop machine learning models for code recommendation. Recommendation tools could range from simple autocomplete-style models that finish the current line of code to more complex systems that write full functions or blocks of code.

Since CodeNet has a wealth of metadata about memory and execution-time metrics, data scientists can also use it to develop code optimization systems. Or they can use the error-type metadata to train machine learning systems that flag potential flaws in source code.

A more advanced use case that would be interesting to see is code generation. CodeNet is a rich library of textual descriptions of problems and their corresponding source code. There have already been several examples of developers using advanced language models such as GPT-3 to generate code from natural-language descriptions. It will be interesting to see whether CodeNet can help fine-tune these language models to become more consistent in code generation.

The researchers at IBM have already conducted several experiments with CodeNet, including code classification, code similarity evaluation, and code completion. The deep learning architectures they used include simple multi-layer perceptrons, convolutional neural networks, graph neural networks, and Transformers. The results, reported in a paper that details Project CodeNet, show that they have been able to obtain above 90-percent accuracy in most tasks. (Though it’s worth noting that evaluating accuracy in programming is a bit different from image classification and text generation, where minor errors might result in awkward but acceptable results.)

A monstrous engineering effort

The engineers at IBM carried out a complicated software and data engineering effort to curate the CodeNet dataset and develop its complementary tools.

First, they had to gather the code samples from AIZU and AtCoder. While one of the platforms had an application programming interface that made it easy to obtain the code, the other had no easy-to-access interface, so the researchers had to develop tools that scraped the data from the platform’s web pages and decomposed it into a tabular format. They then had to manually merge the two datasets into a unified schema.

Next, they had to develop tools to cleanse the data by identifying and removing duplicates and samples that had a lot of dead code (source code that is not executed at runtime).

They also developed preprocessing tools that will make it easier to train machine learning models on the CodeNet corpus. These tools include tokenizers for different programming languages, parse trees, and a graph representation generator for use in graph neural networks.
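To make the tokenizer idea concrete, here is a small sketch using Python’s standard-library tokenize module. It shows the kind of token stream such preprocessing produces; it is not IBM’s actual tooling, which covers many languages and also emits parse trees and graph representations:

```python
# Sketch: turn a code snippet into a stream of (token type, text) pairs,
# the sort of representation CodeNet-style preprocessing feeds to a model.
# Uses Python's standard-library tokenizer for illustration only.
import io
import tokenize

source = "def add(a, b):\n    return a + b\n"

tokens = [
    (tokenize.tok_name[tok.type], tok.string)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
]
print(tokens)
```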

All these efforts are a reminder of the huge human effort needed to create efficient machine learning systems. Artificial intelligence is not ready to replace programmers (at least for the time being). But it might change the kind of tasks that require the efforts and ingenuity of human programmers.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Repost: Original Source and Author Link

Categories
AI

A college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News

College student Liam Porr used the language-generating AI tool GPT-3 to produce a fake blog post that recently landed in the No. 1 spot on Hacker News, MIT Technology Review reported. Porr was trying to demonstrate that the content produced by GPT-3 could fool people into believing it was written by a human. And, he told MIT Technology Review, “it was super easy, actually, which was the scary part.”

So to set the stage in case you’re not familiar with GPT-3: It’s the latest version of a series of AI autocomplete tools designed by San Francisco-based OpenAI, and has been in development for several years. At its most basic, GPT-3 (which stands for “generative pre-trained transformer”) auto-completes your text based on prompts from a human writer.

My colleague James Vincent explains how it works:

Like all deep learning systems, GPT-3 looks for patterns in data. To simplify things, the program has been trained on a huge corpus of text that it’s mined for statistical regularities. These regularities are unknown to humans, but they’re stored as billions of weighted connections between the different nodes in GPT-3’s neural network. Importantly, there’s no human input involved in this process: the program looks and finds patterns without any guidance, which it then uses to complete text prompts. If you input the word “fire” into GPT-3, the program knows, based on the weights in its network, that the words “truck” and “alarm” are much more likely to follow than “lucid” or “elvish.” So far, so simple.
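That next-word behavior is easy to see with GPT-3’s smaller open cousin, GPT-2. A rough sketch (my illustration, not part of Vincent’s explanation) that prints the most likely next tokens after a prompt:

```python
# Sketch: ask a small open model (GPT-2) which tokens are most likely to
# follow a prompt, illustrating the "fire -> truck / alarm" point above.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The fire", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```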

Here’s a sample from Porr’s blog post (with a pseudonymous author), titled “Feeling unproductive? Maybe you should stop overthinking.”

Definition #2: Over-Thinking (OT) is the act of trying to come up with ideas that have already been thought through by someone else. OT usually results in ideas that are impractical, impossible, or even stupid.

Yes, I would also like to think I would be able to suss out that this was not written by a human, but there’s a lot of not-great writing on these here internets, so I guess it’s possible that this could pass as “content marketing” or some other content.

OpenAI decided to give access to GPT-3’s API to researchers in a private beta, rather than releasing it into the wild at first. Porr, a computer science student at the University of California, Berkeley, found a PhD student who already had access to the API and agreed to work with him on the experiment. Porr wrote a script that gave GPT-3 a blog post headline and intro. It generated a few versions of the post, and Porr chose one for the blog, copy-pasting it from GPT-3’s output with very little editing.
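Porr hasn’t published the script itself, but the workflow he describes maps onto the GPT-3 Completion API as it existed during the private beta. A hedged sketch, with illustrative prompt text and parameters (these are assumptions, not Porr’s actual settings):

```python
# Rough sketch of the headline-plus-intro workflow Porr describes, using the
# GPT-3 Completion API from the private-beta-era openai Python client. The
# prompt text, engine choice, and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

headline = "Feeling unproductive? Maybe you should stop overthinking"
intro = "I spent the last week thinking about why I get so little done."

prompt = f"Title: {headline}\n\n{intro}\n"

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 engine available in the beta
    prompt=prompt,
    max_tokens=500,     # length of the generated body text
    temperature=0.7,
    n=3,                # generate a few candidate drafts to pick from
)

for i, choice in enumerate(response["choices"]):
    print(f"--- draft {i + 1} ---")
    print(choice["text"])
```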

The post went viral in a matter of a few hours, Porr said, and the blog had more than 26,000 visitors. He wrote that only one person reached out to ask if the post was AI-generated, although several commenters did guess GPT-3 was the author. But, Porr says, the community downvoted those comments.


He suggests that GPT-3 “writing” could replace content producers, which, ha ha, these are the jokes, people; of course that could not happen, I hope. “The whole point of releasing this in private beta is so the community can show OpenAI new use cases that they should either encourage or look out for,” Porr writes. And it’s notable that he doesn’t yet have access to the GPT-3 API even though he’s applied for it, admitting to MIT Technology Review, “It’s possible that they’re upset that I did this.”

Repost: Original Source and Author Link