
All these images were generated by Google’s latest text-to-image AI

There’s a new hot trend in AI: text-to-image generators. Feed these programs any text you like and they’ll generate remarkably accurate pictures that match that description. They can match a range of styles, from oil paintings to CGI renders and even photographs, and — though it sounds cliched — in many ways the only limit is your imagination.

To date, the leader in the field has been DALL-E, a program created by commercial AI lab OpenAI (and updated back in April). Yesterday, though, Google announced its own take on the genre, Imagen, and it appears to have unseated DALL-E in the quality of its output.

The best way to understand the amazing capability of these models is to simply look over some of the images they can generate. There are some generated by Imagen above, and even more below (you can see more examples at Google’s dedicated landing page).

In each case, the text at the bottom of the image was the prompt fed into the program, and the picture above, the output. Just to stress: that’s all it takes. You type what you want to see and the program generates it. Pretty fantastic, right?

But while these pictures are undeniably impressive in their coherence and accuracy, they should also be taken with a pinch of salt. When research teams like Google Brain release a new AI model, they tend to cherry-pick the best results. So, while these pictures all look perfectly polished, they may not represent the average output of the Imagen system.

Often, images generated by text-to-image models look unfinished, smeared, or blurry — problems we’ve seen with pictures generated by OpenAI’s DALL-E program. (For more on the trouble spots for text-to-image systems, check out this interesting Twitter thread that dives into problems with DALL-E. It highlights, among other things, the tendency of the system to misunderstand prompts, and struggle with both text and faces.)

Google, though, claims that Imagen produces consistently better images than DALL-E 2, based on a new benchmark it created for this project named DrawBench.

DrawBench isn’t a particularly complex metric: it’s essentially a list of some 200 text prompts that Google’s team fed into Imagen and other text-to-image generators, with the output from each program then judged by human raters. As shown in the graphs below, Google found that humans generally preferred the output from Imagen to that of its rivals.

Google’s DrawBench benchmark compares the output of Imagen to rival text-to-image systems like OpenAI’s DALL-E 2.
Image: Google
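To give a sense of how this sort of evaluation works, here is a minimal sketch of tallying pairwise human preferences over a set of prompts. The prompts, raters, and model names below are invented placeholders, not DrawBench data; the real protocol is described in Google’s paper.

```python
from collections import Counter

# Hypothetical rater votes: for each prompt, each rater picks the model
# whose output they prefer (or "tie"). These records are invented for
# illustration only.
votes = [
    {"prompt": "a corgi wearing a red bowtie", "rater": "r1", "preferred": "model_a"},
    {"prompt": "a corgi wearing a red bowtie", "rater": "r2", "preferred": "model_b"},
    {"prompt": "an oil painting of a lighthouse", "rater": "r1", "preferred": "model_a"},
    {"prompt": "an oil painting of a lighthouse", "rater": "r2", "preferred": "tie"},
]

def preference_rates(votes):
    """Return the share of votes won by each option, ties included."""
    counts = Counter(v["preferred"] for v in votes)
    total = sum(counts.values())
    return {option: count / total for option, count in counts.items()}

print(preference_rates(votes))
# e.g. {'model_a': 0.5, 'model_b': 0.25, 'tie': 0.25}
```

The published DrawBench charts report essentially this kind of preference rate, broken down by prompt category, for image fidelity and image-text alignment.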

It’ll be hard to judge this for ourselves, though, as Google isn’t making the Imagen model available to the public. There’s good reason for this, too. Although text-to-image models certainly have fantastic creative potential, they also have a range of troubling applications. Imagine a system that generates pretty much any image you like being used for fake news, hoaxes, or harassment, for example. As Google notes, these systems also encode social biases, and their output is often racist, sexist, or toxic in some other inventive fashion.

A lot of this is due to how these systems are programmed. Essentially, they’re trained on huge amounts of data (in this case: lots of pairs of images and captions) which they study for patterns and learn to replicate. But these models need a hell of a lot of data, and most researchers — even those working for well-funded tech giants like Google — have decided that it’s too onerous to comprehensively filter this input. So, they scrape huge quantities of data from the web, and as a consequence their models ingest (and learn to replicate) all the hateful bile you’d expect to find online.
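As a rough illustration of why filtering at this scale is so hard, here is a toy sketch of the kind of image-caption pairs these models train on, run through a naive keyword filter. The URLs, captions, and blocklist are invented; real pipelines involve billions of pairs and more sophisticated (but still leaky) filtering.

```python
# Toy sketch: the kind of (image URL, caption) pairs text-to-image models
# train on, plus a crude keyword filter. All entries here are invented.
scraped_pairs = [
    {"image_url": "https://example.com/cat.jpg", "caption": "a cat sleeping on a windowsill"},
    {"image_url": "https://example.com/meme.png", "caption": "offensive joke about coworkers"},
    {"image_url": "https://example.com/ceo.jpg", "caption": "portrait of a successful CEO"},
]

BLOCKLIST = {"offensive", "slur"}  # a keyword list can never capture context

def naive_filter(pairs):
    """Drop pairs whose caption contains a blocklisted word.

    This catches only the most obvious cases: skewed associations (for
    example, which faces tend to be captioned "CEO") sail straight through.
    """
    return [p for p in pairs if not (BLOCKLIST & set(p["caption"].lower().split()))]

clean = naive_filter(scraped_pairs)
print(len(scraped_pairs), "->", len(clean), "pairs after filtering")
```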

As Google’s researchers summarize this problem in their paper: “[T]he large scale data requirements of text-to-image models […] have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets […] Dataset audits have revealed these datasets tend to reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups.”

In other words, the well-worn adage of computer scientists still applies in the whizzy world of AI: garbage in, garbage out.

Google doesn’t go into too much detail about the troubling content generated by Imagen, but notes that the model “encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes.”

This is something researchers have also found while evaluating DALL-E. Ask DALL-E to generate images of a “flight attendant,” for example, and almost all the subjects will be women. Ask for pictures of a “CEO,” and, surprise, surprise, you get a bunch of white men.

For this reason, OpenAI also decided not to release DALL-E publicly, but the company does give access to select beta testers. It also filters certain text inputs in an attempt to stop the model from being used to generate racist, violent, or pornographic imagery. These measures go some way toward restricting potentially harmful applications of the technology, but the history of AI tells us that such text-to-image models will almost certainly become public at some point in the future, with all the troubling implications that wider access brings.

Google’s own conclusion is that Imagen “is not suitable for public use at this time,” and the company says it plans to develop a new way to benchmark “social and cultural bias in future work” and test future iterations. For now, though, we’ll have to be satisfied with the company’s upbeat selection of images — raccoon royalty and cacti wearing sunglasses. That’s just the tip of the iceberg, though. The iceberg made from the unintended consequences of technological research, if Imagen wants to have a go at generating that.




Tool around in a real-time generated AI version of ‘GTAV’

Last month, you may have seen that a group of researchers created a machine learning system that could transform the presentation of Grand Theft Auto V into something that looks almost photorealistic. It turns out, at about the same time, another group of AI enthusiasts were working on something even more impressive involving Rockstar’s open world title. On Friday, YouTuber Harrison Kinsley shared a video showing off GAN Theft Auto, a neural network that can generate a playable stretch of Grand Theft Auto V’s game world on its own.

Kinsley and collaborator Daniel Kukieła made GAN Theft Auto with GameGAN, which last year recreated Pac-Man by watching another AI play through the game. GameGAN, as the name suggests, is a generative adversarial network. Every GAN consists of two competing neural networks: a generator and a discriminator. The generator learns to produce new content modeled on a sample dataset, while the discriminator compares the generator’s output against real examples from that dataset, in the process “coaching” its counterpart to produce content that looks closer and closer to the source material.
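For readers who want to see that tug-of-war in code, below is a minimal, generic GAN training step written in PyTorch. It is not GameGAN itself, which also conditions on the player’s actions and maintains a memory of the game world; the network shapes and data here are stand-ins chosen purely for illustration.

```python
import torch
import torch.nn as nn

# Placeholder networks: a generator that maps noise to a fake "frame"
# and a discriminator that scores real vs. generated frames.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_frames):
    batch = real_frames.size(0)

    # 1) Discriminator: label real frames 1, generated frames 0.
    fake_frames = G(torch.randn(batch, 64)).detach()
    d_loss = loss_fn(D(real_frames), torch.ones(batch, 1)) + \
             loss_fn(D(fake_frames), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: try to make the discriminator call its output real.
    fake_frames = G(torch.randn(batch, 64))
    g_loss = loss_fn(D(fake_frames), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with random stand-in "frames" (flattened 28x28 images in [-1, 1]):
print(train_step(torch.rand(16, 784) * 2 - 1))
```

Repeated over many batches, the two losses push against each other until the generator’s output becomes hard for the discriminator to tell apart from the real data.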

“Every pixel you see here is generated from a neural network while I play,” Kinsley says in the video. “The neural network is the entire game. There are no rules written here by us or the [RAGE] engine.”

Training a GAN is a very GPU-intensive task. NVIDIA loaned Kinsley a DGX Station A100 computer to make the project a reality. The system comes with four of the company’s A100 GPUs and a 64-core AMD server CPU. Kinsley and Kukieła used all of that computing power to run 12 rules-based AIs simultaneously. Those programs would drive the same stretch of highway, collecting the data the neural network needed to start generating its own game world. The two also developed a supersampling AI to clean up the output of the neural network so it wouldn’t look so pixelated.
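The collection step itself is conceptually simple, something like the sketch below: scripted drivers loop over the same stretch of road while frames and controls are recorded. The function names and capture calls here are hypothetical stand-ins, not the actual GAN Theft Auto code.

```python
import random

def scripted_driver(step):
    """Hypothetical rules-based policy: hold speed, steer gently."""
    return {"throttle": 0.6, "steering": random.uniform(-0.05, 0.05)}

def capture_frame(agent_id, step):
    """Stand-in for grabbing a downscaled screenshot from the game."""
    return f"frame_{agent_id}_{step}.png"  # placeholder, no real capture here

def collect_dataset(num_agents=12, steps_per_agent=10_000):
    """Gather (frame, action, next_frame) samples for training the network."""
    dataset = []
    for agent in range(num_agents):
        prev = capture_frame(agent, 0)
        for step in range(1, steps_per_agent):
            action = scripted_driver(step)
            frame = capture_frame(agent, step)
            dataset.append({"frame": prev, "action": action, "next_frame": frame})
            prev = frame
    return dataset

data = collect_dataset(num_agents=2, steps_per_agent=5)  # tiny demo run
print(len(data), "training samples collected")
```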

As you can see from the video, the network models a surprising number of systems from the game. As the car moves, so does the shadow underneath it and the reflection of the sun on its back windshield. The mountains in the distance even get closer as well. That’s not something Kinsley necessarily expected the network would do when he and Kukieła first started training it.

There’s also a dream-like quality to the gameplay, and that’s thanks in part to the fact that the neural network doesn’t replicate every aspect of GTA V perfectly. For one, collisions give it trouble. Kinsley says there was one instance where he saw an oncoming police cruiser split into two just as it was about to crash into his car.

If you want to try GAN Theft Auto for yourself, Kinsley and Kukieła have uploaded the project to GitHub. They say most computers should be able to run the demo. 


Yseop now tells you what percentage of a financial report can be generated automatically

French AI company Yseop has launched a new tool that instantly shows prospective clients what percentage of their financial reports can be automatically generated.

Founded out of Paris in 2008, Yseop (pronounced “easy op”) applies natural language generation (NLG) to structured data to create “written narratives,” saving personnel from having to fully produce reports themselves. According to Yseop, financial analysts spend 48% of their time writing and updating reports, a figure the company said falls to 9% when using its platform.

Yseop also claims some notable enterprise customers, including Oracle, BNP Paribas, and KBC Asset Management.

The write stuff

Algorithms have been creeping into the journalistic and business reporting realm for some time. A company called Automated Insights powers sports coverage and earnings reports for outlets such as The Associated Press (AP), while Narrative Science offers something similar, with a specific focus on “data storytelling” for the enterprise. Elsewhere, Chinese tech titan Tencent has a robotic reporter called Dreamwriter, which publishes business and finance articles. Last year, a Chinese court ruled that an article produced by Dreamwriter was protected by copyright after another outlet reproduced the content without consent.

Similar to Yseop, each of these companies’ respective technologies works best on structured data, where someone just plugs in the numbers and the AI constructs a coherent story around them. Put another way, not all documents or reports are suited to automation, which is something Yseop is looking to help customers figure out with ALIX, a tool that is available now for free.
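To make the “plug in the numbers” idea concrete, here is a toy, template-based example of turning structured financial figures into a sentence. It is a generic illustration of natural language generation over structured data, not a representation of how Yseop’s platform works internally; the field names and figures are invented.

```python
# Toy NLG: render structured financial figures as a short narrative.
# The figures and field names are invented for illustration.
quarter = {
    "period": "Q2 2021",
    "revenue_m": 412.5,
    "revenue_growth_pct": 8.3,
    "operating_margin_pct": 17.1,
}

def describe_quarter(q):
    direction = "rose" if q["revenue_growth_pct"] >= 0 else "fell"
    return (
        f"In {q['period']}, revenue {direction} {abs(q['revenue_growth_pct']):.1f}% "
        f"year over year to ${q['revenue_m']:.1f} million, "
        f"with an operating margin of {q['operating_margin_pct']:.1f}%."
    )

print(describe_quarter(quarter))
# In Q2 2021, revenue rose 8.3% year over year to $412.5 million, ...
```

The narrative prose in a real report, by contrast, can’t be generated this way, which is exactly the distinction ALIX is designed to surface.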

First up, the user uploads a PDF document containing an existing financial report from their business.

Above: ALIX: Upload a PDF to process

ALIX then processes the report to understand the content and figure out what percentage it would be able to automate for the user.

Above: ALIX: Processing a PDF

As a test, VentureBeat tried ALIX out with an English-language essay to see how it responded — it correctly assessed that the document was not data-driven.

Above: ALIX: This document is not suitable for automation

We then tried ALIX with two real financial reports to see how it assessed their suitability for automation. In the first case, the report was deemed to be mostly suitable for automation.

Above: ALIX: Automation report statements

Above: How ALIX can automate your report

A lower automation confidence score, as with the second report we tried, doesn’t necessarily mean the report is a lost cause. The section breakdown might show that specific parts of the report are very data-driven rather than purely static, non-data-driven text.

This means a company could decide to run a few pages of a document through Yseop’s financial analyst and still derive some value.

Above: ALIX: Lower overall confidence score

With ALIX, Yseop is trying to remove some of the friction for companies that are considering automating their report writing. It gives them a little insight into how much of their report could be automated, replacing manual processes with a self-serve technological approach.

“This was a process which was quite manual,” Yseop CEO Emmanuel Walckenaer told VentureBeat during a virtual roundtable event yesterday. “So typically the customer would send sample reports to us. We would analyze them and say whether it was good or not good. What’s great with this tool is you get the results immediately. And you can actually test dozens of different reports, and the customers can do that on [their] own.”

Some bigger companies may have broader concerns around using this type of technology. Would a billion-dollar public company really be comfortable uploading sensitive company financials to a third party’s cloud to process? Yseop’s privacy promise runs something like: Reports you upload are not stored and no information is saved. But some businesses might prefer specific technological safeguards (e.g. localized data processing) versus a trust-based ethos.

“Yseop does not openly discuss security and policies, but trust is a core pillar for our organization, and we will not jeopardize customer trust by mishandling customer data,” Walckenaer said. “In addition, security of our applications and platform is a core competency, and we take all necessary measures to ensure customer data integrity is always preserved.”

While ALIX is geared toward the financial sphere for now, there are plans to expand its scope to the other industries Yseop supports, including pharmaceutical and medical report writing.

ALIX is available for anyone to use now.


UK ditches exam results generated by biased algorithm after student protests

The UK has said that students in England and Wales will no longer receive exam results based on a controversial algorithm after accusations that the system was biased against students from poorer backgrounds, Reuters and BBC News report. The announcement followed a weekend of demonstrations at which protesters chanted “fuck the algorithm” outside the country’s Department for Education.

Instead, students will receive grades based on their teachers’ estimates after formal exams were canceled due to the pandemic. The announcement follows a similar U-turn in Scotland, which had previously seen 125,000 results downgraded.

In the UK, A-levels are the set of exams taken by students around the age of 18. They’re the final exams taken before university, and they have a huge impact on which institution students attend. Universities make offers based on students’ predicted A-level grades, and usually, a student will have to achieve certain grades to secure their place.

In other words: it’s a stressful time of year for students, even before the country’s exam regulator used a controversial algorithm to estimate their grades.

As the BBC explains, the UK’s Office of Qualifications and Examinations Regulation (Ofqual) relied primarily on two pieces of information to calculate grades: the ranking of students within a school and their school’s historical performance. The system was designed to generate what are, on a national level, broadly similar results to previous years. Overall, that’s what the algorithm accomplished, with The Guardian reporting that overall results are up compared to previous years, but only slightly. (The percentage of students achieving an A* to C based on the algorithm’s grading rose by 2.4 percent compared to last year.)
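A simplified sketch of that mechanism: take a school’s historical grade distribution and hand those grades out to this year’s cohort in rank order. This is only an illustration of the approach described above, not Ofqual’s actual model (which, among other things, treated small cohorts differently, as discussed below); the students, rankings, and distribution here are invented.

```python
# Simplified illustration: assign this year's grades by mapping a school's
# historical grade distribution onto the teacher-provided ranking of students.
# Schools, students, and distributions are invented for illustration.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def assign_grades(ranked_students, historical_share):
    """ranked_students: best first; historical_share: fraction of each grade."""
    n = len(ranked_students)
    results, i = {}, 0
    for grade in GRADES:
        slots = round(historical_share.get(grade, 0.0) * n)
        for student in ranked_students[i:i + slots]:
            results[student] = grade
        i += slots
    for student in ranked_students[i:]:  # anyone left over from rounding
        results[student] = GRADES[-1]
    return results

# A strong student at a school that historically awarded no A* grades is
# capped by that history, regardless of their own ability.
school_history = {"A*": 0.0, "A": 0.1, "B": 0.3, "C": 0.4, "D": 0.2}
print(assign_grades(["Asha", "Ben", "Chloe", "Dev", "Ella",
                     "Femi", "Grace", "Hana", "Ivan", "Jo"], school_history))
```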

But it’s also led to thousands of grades being lowered from teachers’ estimations: 35.6 percent of grades were adjusted down by a single grade, while 3.3 percent went down by two grades, and 0.2 percent went down by three. That means a total of almost 40 percent of results were downgraded. That’s life-changing news for anyone who needed to achieve their predicted grades to secure their place at the university of their choice.

Worse still, data suggests that fee-paying private schools (also known as “independent schools”) disproportionately benefited from the algorithm used. These schools saw the share of grades at A and above increase by 4.7 percent compared to last year, Sky News reports. Meanwhile, state-funded “comprehensive” schools saw an increase of less than half that: 2 percent.

A variety of factors seem to have biased the algorithm. One theory put forward by FFT Education Datalab is that Ofqual’s approach varied depending on how many students took a given subject, and this decision seems to have led to fewer grades being downgraded at independent schools, which tend to enter fewer students per subject. The Guardian also points out that what it calls a “shockingly unfair” system was happy to boost the number of “U” grades (aka, fails) while rounding down the number of A* grades, and one university lecturer has pointed out other failings in the regulator’s approach.

Fundamentally, however, because the algorithm placed so much importance on a school’s historical performance, it was always going to cause more problems for high-performing students at underperforming schools, where the individual’s work would be lost in the statistics. Average students at better schools, meanwhile, seem to have been treated with more leniency. Part of the reason the results have caused so much anger is that this outcome reflects what many see as the wider biases of the UK’s education system.

The government’s decision to ignore the algorithmically determined grades will be welcome news to many, but even using teachers’ predictions comes with its own problems. As Wired notes, some studies have suggested such predictions can suffer from racial biases of their own. One study from 2009 found that Pakistani pupils were predicted a lower score (62.9 percent) than their white counterparts in one set of English exams, and that results for boys from Black and Caribbean backgrounds can spike when they’re assessed anonymously from the age of 16.


