
Why Transformers offer more than meets the eye

What do OpenAI’s language-generating GPT-3 and DeepMind’s protein shape-predicting AlphaFold have in common? Besides achieving leading results in their respective fields, both are built atop Transformer, an AI architecture that has gained considerable attention within the last several years. Dating back to 2017, Transformer has become the architecture of choice for natural language tasks, and it has demonstrated an aptitude for summarizing documents, translating between languages, and analyzing biological sequences.

Transformer has clear immediate business applications. OpenAI’s GPT-3 is currently used in more than 300 apps by tens of thousands of developers, producing 4.5 billion words per day. DeepMind is applying its AlphaFold technology to identify cures for rare, neglected diseases. And more sophisticated applications are on the horizon, as demonstrated by research showing that Transformer can be tuned to play games like chess and even applied to image processing.

What are Transformers?

The Transformer architecture is made up of two core components: an encoder and a decoder. The encoder is a stack of layers that processes input data, like text and images, one layer at a time. Each encoder layer generates encodings that capture which parts of the inputs are relevant to each other, then passes those encodings on to the next layer until the final encoder layer is reached.

The decoder’s layers do the same thing, but with the encoder’s output. They take the encodings and use the contextual information they carry to generate an output sequence of data, whether text, a predicted protein structure, or an image.
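
For readers who want to see that layout in code, below is a minimal sketch of an encoder-decoder stack using PyTorch’s built-in nn.Transformer module; the dimensions and random tensors are illustrative placeholders rather than a trained model.

```python
# A minimal sketch of the encoder-decoder stack described above, using
# PyTorch's nn.Transformer. All sizes and inputs are illustrative.
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(10, 32, 512)  # input sequence: 10 tokens, batch of 32, 512-dim embeddings
tgt = torch.rand(20, 32, 512)  # output sequence generated so far: 20 tokens

# The encoder turns `src` into encodings; each decoder layer then attends to
# those encodings while producing the output sequence.
out = model(src, tgt)
print(out.shape)  # torch.Size([20, 32, 512])
```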

Each encoder and decoder layer makes use of an “attention mechanism” that distinguishes Transformer from other architectures. For every input, attention weighs the relevance of every other input and draws from them to generate the output. Each decoder layer has an additional attention mechanism that draws information from the outputs of previous decoders before the decoder layer finally draws information from the encodings to produce an output.
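
To make that concrete, here is a minimal sketch of scaled dot-product attention, the core operation behind the mechanism described above, written in plain NumPy; the shapes and random inputs are illustrative only.

```python
# Scaled dot-product attention: every query scores every key, and the
# softmax-normalized scores weight the corresponding values.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # relevance of every input to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the inputs
    return weights @ V                                       # attention-weighted combination

# Self-attention over four input positions, each an 8-dimensional vector.
x = np.random.rand(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```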

Transformers typically undergo semi-supervised learning: unsupervised pretraining followed by supervised fine-tuning. Sitting between supervised and unsupervised learning, semi-supervised learning accepts data that’s only partially labeled or where the majority of the data lacks labels. In the pretraining step, Transformers are exposed to data for which no previously defined labels exist and must learn from its inherent structure on their own. During fine-tuning, Transformers train on labeled datasets so they learn to accomplish particular tasks, like answering questions, analyzing sentiment, and paraphrasing documents.
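
As a rough illustration of that recipe, the sketch below takes an already pretrained model and fine-tunes it on a tiny labeled set using the Hugging Face transformers and datasets libraries; the model name, labels, and toy data are assumptions made for the example, not details from the article.

```python
# Supervised fine-tuning of a pretrained Transformer on a toy sentiment task.
# The pretrained weights stand in for the unsupervised pretraining stage.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A tiny, made-up labeled dataset for the fine-tuning stage.
data = Dataset.from_dict({"text": ["great product", "terrible support"], "label": [1, 0]})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                        padding="max_length", max_length=32),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```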

It’s a form of transfer learning, or storing knowledge gained while solving one problem and applying it to a different — but related — problem. The pretraining step helps the model learn general features that can be reused on the target task, boosting its accuracy.

Attention has the added benefit of boosting model training speed. Because Transformers don’t process inputs sequentially the way recurrent networks do, they can be more easily parallelized, and larger and larger models can be trained with significant, but not unattainable, increases in compute. Running on 16 of Google’s purpose-built TPUv3 processors, AlphaFold took a few weeks to train, while OpenAI’s music-generating Jukebox took over a month across hundreds of Nvidia V100 graphics cards.

The business value of Transformers

Transformers have been widely deployed in the real world. Viable is using the Transformer-powered GPT-3 to analyze customer feedback, identifying themes and sentiment from surveys, help desk tickets, live chat logs, reviews, and more. Algolia, another startup, is using it to improve its web search products.

More exciting use cases lie beyond the language domain. In January, OpenAI took the wraps off DALL-E, a text-to-image engine that’s essentially a visual idea generator. Given a text prompt, it generates images to match the prompt, filling in the blanks when the prompt implies the image must contain a detail that isn’t explicitly stated.

OpenAI predicts that DALL-E could someday augment — or even replace — 3D rendering engines. For example, architects could use the tool to visualize buildings, while graphic artists could apply it to software and video game design. In another point in DALL-E’s favor, the Transformer-driven tool can combine disparate ideas to synthesize objects, some of which are unlikely to exist in the real world — like a hybrid of a snail and a harp.

“DALL-E shows creativity, producing useful conceptual images for product, fashion, and interior design,” Gary Grossman, global lead at Edelman’s AI center of excellence, wrote in a recent blog post. “DALL-E could support creative brainstorming … either with thought starters or, one day, producing final conceptual images. Time will tell whether this will replace people performing these tasks or simply be another tool to boost efficiency and creativity.”

We will eventually see Transformer-based models that can go one step further, synthesizing not just pictures but videos from whole cloth. These types of systems have been detailed in academic literature. Other, related applications may soon — or already — include generating realistic voices, recognizing speech, parsing medical records, predicting stock prices, and creating computer code.

Indeed, Transformers have immense potential in the enterprise, which is one of the reasons the global AI market is anticipated to be worth $266.92 billion by 2027. Transformer-powered apps could enable workers to spend their time on less menial, more meaningful work, bolstering productivity. The McKinsey Global Institute predicts technology like Transformers will result in a 1.2% increase in gross domestic product (GDP) growth for the next 10 years and help capture an additional 20% to 25% in net economic benefits ($13 trillion globally) in the next 12 years.

Businesses that ignore the potential of Transformers do so at their peril.



Deathloop Preview: Hitman Meets Dishonored With Sci-fi Spin

While the first half of 2021 has been quiet for new game releases, it’s about to heat up. The back end of the year is already looking loaded with Halo Infinite, Battlefield 6, Horizon Forbidden West, and much more looming. In between all those heavy hitters, there’s one upcoming release that could end up being the surprise game of the year: Deathloop.

Developed by Arkane Studios, Deathloop has long been a curiosity for gamers. It debuted at E3 2019 with a stylish trailer, though it wasn’t clear what it actually was. Later, a gameplay glimpse revealed that it was a first-person shooter with some extra powers reminiscent of Arkane’s Dishonored series. Still, its “time loop” premise was a little vague. It felt like there was a special twist; it just wasn’t clear what it was.

A preview event finally reveals those missing pieces. Imagine a spy thriller that blends Dishonored and Hitman into an action-packed, sci-fi take on Groundhog Day. That only begins to describe Deathloop’s madcap premise.

Looper

Here’s the simple explanation. Deathloop is a first-person shooter about a character named Colt who’s stuck in a time loop. He needs to eliminate eight targets to break that loop. The basic flow of the game is that players load into a level and hunt one of the eight people down. The catch is that players won’t know how to find their target at first. They’ll need to search for clues while sneaking past guards or gunning through them.

Oh, and every time Colt dies? Players exit the loop and restart from the top.

It’ll be tempting to label Deathloop as a roguelite, but that’s not really the case. While there is a “die and try again” flow, players aren’t starting from scratch when they reload into a mission. Acquired weapons and gear are permanent, so there’s constant progression being made, even in failed attempts.

Most importantly, any intel gained is applicable to the next go-round. If a player happens to overhear two guards talking about where the target will be and when, that knowledge doesn’t go away. Each level is a time loop, after all. Gather enough intel over a few runs and a player should be able to jump into a fresh loop and track down their target swiftly. The game even lets players jump into the loop at different times of day so they don’t have to wait around for a specific moment.

What’s unexpected here is how much Deathloop looks like it’s taking cues from the Hitman series. In those games, Agent 47 spends a chunk of levels absorbing details about his targets. He maps out their walking routes, observes their little habits, and figures out the opportune moment to strike. Deathloop works the exact same way. Each run is about seeking out more and more details as to who exactly the target is and where to find them.

Colt plans an assassination in Deathloop.

In the gameplay clip we saw, Colt must find his hit hidden at a swanky party. If he goes in guns blazing, he’ll quickly find himself overwhelmed by dozens of partygoers — not a terribly viable strategy. Instead, he can take the time to sneak around the area outside the party and scavenge for clues. Perhaps he’ll find an audio log or pick up on a stray conversation that narrows it down much more. It’s like a big, bloody game of Guess Who?

Even the assassinations themselves are reminiscent of Hitman. In the gameplay clip, Colt has a chance to simply snipe his enemy from afar and make a quiet escape. Instead, he decides to go a more slapstick route with a comedic, scripted elimination.

It’s a glowing comparison. The Hitman series has done a fantastic job of using puzzle-like investigation to create a thinking person’s action game. That same idea is at the heart of Deathloop, which turns each assassination into a little mystery that must be solved before the grand finale.

The spy who shot me

Deathloop’s espionage was a highlight of the gameplay reel, but it’s not just a cerebral stealth game. In fact, it’s downright boisterous when it comes to its action.

There are two distinct aspects of combat, which go hand in hand. When it comes to weapons, players have access to a wide array of classes, ranging from standard gun types like shotguns to hacking tools that can take over enemy turrets or cause distractions. At the top of each loop, players select their loadout, which keeps each run feeling different. On one attempt, players might go in full-bore wielding dual SMGs. The next try, they might go full stealth.

Colt shooting an enemy in Deathloop.

Combat really starts to look like a blast when special powers come into play. Throughout the game, players collect slabs, which grant new power-ups. That’s where Arkane’s unique stamp really comes through. Some of these powers are pulled straight from Dishonored 2, like a teleporting “blink” jump or the Nexus ability that allows players to link multiple enemies together and kill them all by shooting one.

The most gleeful slab shown in the gameplay reel is a power called Karnesis. It’s essentially a telekinetic power that allows players to yank an enemy off the ground and throw them. At one point in the demo, Colt sneaks up on three guards standing next to a railing. He tosses them all into the air and sends them off the edge with the flick of his wrist. It’s a morbid delight.

It’s always hard to get an exact feel for a game from prerecorded gameplay demos. They tend to be perfectly crafted, showing higher-level play than most fans will achieve. Even still, the gameplay demo offers a seriously appetizing taste of what’s to come. Colt fluidly swaps between guns, tools, and powers. He’ll rapidly shoot down two enemies, toss another group of enemies into the air, and then teleport behind a turret to sic it on some guards, all in a flash.

I went into the Deathloop event having no real sense of how it would play out. I walked out dying to get my hands on it. It’s an incredibly stylish and slick premise that capitalizes on Arkane’s strengths while bringing in entirely new ideas. Forget Halo and Horizon; Deathloop is the game to watch this fall.



When AI meets BI: 5 red flags to watch for

If there is a Holy Grail to business success in 2021, it most certainly has something to do with the power of data. And if business intelligence wasn’t already the answer to C-suite prayers for insight and vision derived from massive amounts of data, then artificial intelligence would be.

Today, AI is widely seen as the enabler BI has always needed to take it to the next level of business value. But incorporating AI into your existing BI environment is not so simple.

And it can be precarious, too: AI can dramatically amplify an almost unnoticeable issue into a significantly larger, negative impact on downstream processes. If you can’t sing in tune, a small karaoke machine at home poses little risk; a huge stadium with a multi-megawatt PA system is another matter.

With that amplification potential firmly in mind, organizations hoping to integrate AI into their BI solutions must be keenly aware of the red flags that can scuttle even the best-intentioned projects.

Major pitfalls to watch out for when integrating AI into BI include the following:

1. Misalignment with (or absence of) business use case

This should be the easiest pitfall to recognize, yet it is the most common one found among current AI implementations. As tempting as it may be to layer AI into your BI solution simply because your peers or competitors are doing so, the consequences can be dire. If you spend millions of dollars to automate a piece of work done by a single employee earning $60,000 per year, a positive ROI will be hard to justify.

When seeking input from business leaders about the potential viability of an AI-enabled BI solution, start the conversation with specific scenarios where AI’s scale and scope could address well-defined gaps and yield business value exceeding the estimated expense. If those gaps aren’t well understood, or sufficient new value isn’t assured, it’s difficult to justify proceeding further.

2. Insufficient training data

Let’s assume that you have a feasible business case — what should you look out for next? Now you need to make sure you have enough data to train the AI via a machine learning (ML) process. You may have tons of data, but is it enough for AI training? That depends on the specific use case. For example, when Thomson Reuters built a Text Research Collection in 2009 for news classification, clustering, and summarization, it required a huge amount of data: close to two million news articles.

If at this point you’re still wondering who can determine what the right training data is, and how much of it will be enough for the intended use case, then you’re facing your next red flag.

3. Missing AI teacher

If you have an outstanding data scientist on staff, that does not guarantee you already have an AI teacher. It’s one thing to be able to code in R or Python and build sophisticated analytical solutions, and quite another to identify the right data for AI training, package it properly, continuously validate the output, and guide the AI along its learning path.

An AI teacher is not just a data scientist; it’s a data scientist with the patience to work through the incremental machine learning process, a thorough understanding of the business context and the problem you’re trying to solve, and an acute awareness of the risk of introducing bias via the teaching process.

AI teachers are a special breed, and they may be hard to find at the moment; AI teaching is increasingly considered to sit at the intersection of artificial intelligence, neuroscience, and psychology. But AI does need a teacher. Like a big service dog, such as a Rottweiler, it can be your best friend and helper with the proper training, but without that training it could become dangerous, even for its owner.

If you are lucky enough to find an AI teacher, you still have a couple of other concerns to consider.

4. Immature master data

Master data (MD), the core data that underpins the successful operation of the business, is critically important not only for AI but for traditional BI as well. The more mature, or well-defined, the MD is, the better. And while BI can compensate for immature MD inside a BI solution via additional data engineering, the same cannot be done inside AI.

Of course, you can use AI to master your data, but that is a different use case, known as data preparation for BI and AI.

How can we tell so-called mature MD from immature MD? Consider the following:

  1. The level of certainty in deduplication of MD Entities — it should be close to 100%
  2. The level of relationship management:
    • Inside each MD entity class — for example, “Company_A-is-a-parent-of-Company_B”
    • Across MD entity classes — for example, “Company_A-supplies-Part_XYZ”
  3. The level of consistency of categorizations, classifications, and taxonomies. If the marketing department uses a product classification that is different from the one used in finance, then the two must be properly and explicitly mapped to one another, as in the sketch below.
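
To illustrate that last point, here is a hypothetical, explicit mapping from a marketing product classification to the one used in finance; all category names are invented for the example.

```python
# Hypothetical, explicit mapping between two departmental taxonomies.
MARKETING_TO_FINANCE = {
    "Smart Home Gadgets": "Consumer Electronics",
    "Wearables": "Consumer Electronics",
    "Cloud Subscriptions": "Recurring Services",
}

def to_finance_category(marketing_category: str) -> str:
    """Translate a marketing category into the finance taxonomy,
    failing loudly when no mapping has been defined."""
    try:
        return MARKETING_TO_FINANCE[marketing_category]
    except KeyError:
        raise ValueError(f"No finance mapping defined for {marketing_category!r}")

print(to_finance_category("Wearables"))  # -> Consumer Electronics
```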

If you have mastered the above A-B-C of your MD and have successfully moved through the preceding three “red flag” checkpoints, then you can attempt relatively simple use cases of enhancing BI with AI: the ones that use structured data.

If unstructured data, such as free-form text or any information without a pre-defined data model, must be involved in your AI implementation, then watch out for red flag #5.

5. Absence of a well-developed knowledge graph

What is a knowledge graph? Imagine all your MD implemented in a machine-readable format with all the definitions, classes, instances, relationships, and classifications, all interconnected and queryable. That would be a basic knowledge graph. Formally speaking, a knowledge graph includes an information model (at the class level) along with corresponding instances, qualified relationships (defined at the class level and implemented at the instance level), logical constraints, and behavioral rules.
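
As a rough illustration, the sketch below builds and queries a tiny knowledge graph with the rdflib Python library, reusing the Company_A-supplies-Part_XYZ relationship mentioned earlier; the namespace, classes, and instances are illustrative.

```python
# A tiny knowledge graph: classes, instances, a qualified relationship,
# and a SPARQL query over it. Everything here is illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Information model at the class level.
g.add((EX.Company, RDF.type, RDFS.Class))
g.add((EX.Part, RDF.type, RDFS.Class))
g.add((EX.supplies, RDF.type, RDF.Property))

# Instances and relationships at the instance level.
g.add((EX.Company_A, RDF.type, EX.Company))
g.add((EX.Company_A, RDFS.label, Literal("Company A")))
g.add((EX.Part_XYZ, RDF.type, EX.Part))
g.add((EX.Company_A, EX.supplies, EX.Part_XYZ))

# The graph is queryable: which companies supply Part_XYZ?
query = """
SELECT ?company WHERE {
  ?company <http://example.org/supplies> <http://example.org/Part_XYZ> .
}
"""
for row in g.query(query):
    print(row.company)  # -> http://example.org/Company_A
```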

If a knowledge graph is implemented using Semantic Web standards, you can load it straight into AI, significantly reducing the AI teaching effort described earlier. Another great feature of a knowledge graph is that it is limitlessly extensible in terms of the information model, relationships, constraints, and so on, and it merges easily with other knowledge graphs. While mature MD may be sufficient for AI implementations that use only structured data, a knowledge graph is a must for:

  • AI solutions processing unstructured data — where the AI uses the knowledge graph to analyze unstructured data in much the same way as structured data;
  • AI storytelling solutions — where the analytical results are presented as a story or narrative, not just tables or charts, thereby shifting BI from an on-screen visualization that supports the discussion at the table to a participant in that discussion; a cognitive support service, if you will.

While these potential pitfalls seem daunting at first, they are certainly less worrisome than the alternative. And they serve as a reminder of the best practice common to all successful AI implementations: up-front preparation.

AI can change BI from an insights-invoking tool to a respected participant in the actual decision-making process. It is a qualitative change — and it may be a highly worthwhile investment, as long as you know what to watch out for.

Igor Ikonnikov is a Research & Advisory Director in the Data & Analytics practice at Info-Tech Research Group. Igor has extensive experience in strategy formation and execution in the information management domain, including master data management, data governance, knowledge management, enterprise content management, big data, and analytics.



Google Pixel 4 and 4 XL preview: More than meets the eye

It’s October, and you know what that means: pumpkins, parties, and Pixels. In just about 24 hours, Google will show us its latest creation at an event in New York City, and from the looks of it, it’s going to be a trademark Pixel phone: loaded with A.I. tech and packing a powerful camera.

And we know this even before we get into the rumors. In uncharacteristic fashion, some of the biggest leaks about the next Pixel have come from Google itself. But rest assured, there’s a whole lot more Google hasn’t told us about its next handset. So here’s everything we’ve heard about the Pixel, officially and otherwise.

Update 10/15: With just a few hours to go, we have some last-minute rumors on price, colors, and accessories.

The latest

While we detailed nearly everything about the Pixel 4 below, a few tidbits have rolled in over the past 24 hours that fill in the few questions we had. The first concerns the price, the operative word being concerns. According to a Best Buy listing obtained by 9to5Google, the Pixel 4 XL will start at $999, a $100 jump from the Pixel 3 XL. That’s a significant hike that would put it on equal footing with the iPhone 11 Pro and Galaxy S10+. And you might have to spring for a pair of headphones to boot. According to an early unboxing, the Pixel 4 will no longer have a pair of wired Pixel Buds in the box. It will continue to get an 18W charger, however.

Finally, it appears as though the Pixel may only come in three colors: white, black, and orange. Contrary to rumors that suggested blue and green varieties were on the way, Evan Blass tweeted a photo of the three-member Pixel family, seemingly settling the confusion once and for all. We’re just not so sure how well that orange color will sell once Halloween is over.

Design

No one has ever confused a Pixel for a Galaxy phone or iPhone, and the Pixel 4 isn’t going to change that. Like the Pixels that came before, the Pixel 4 is going to have a lot of bezel around the screen. Google has already given us a glimpse of the top bezel, and it looks to be as big as the Pixel 3’s, which is to say it’s reeeeally big in comparison to the sliver atop the Galaxy Note 10+.

Leaked marketing pics from Twitter leakster Evan Blass have shown us the bottom too, and while it’s not quite as thick as the top, it’s still plenty sizable. But here’s a bit of good news: the Pixel 4 XL doesn’t appear to have a notch anywhere in sight.

Google Pixel 4 in white. Image credit: Evan Blass

The Pixel 4 will come in white, as usual, but might also add blue, green, and yellow to its repertoire.

According to 9to5Google, the bezels will reportedly surround a pair of screens sized similarly to the Pixel 3’s: a 5.7-inch Pixel 4 and a 6.3-inch Pixel 4 XL. Pixel-wise (no pun intended), you can expect similar OLED panels (so no Quad HD screen for the smaller one) and rounded corners. It’s also rumored to have a 90Hz screen like the OnePlus 7T, so gestures should be way faster than they are on the Pixel 3.





Listen to Google Meet’s impressive new background noise cancellation feature in action

Google Meet’s new AI-powered background noise cancellation has started rolling out, VentureBeat reports. Google announced the feature back in April for its G Suite Enterprise and G Suite Enterprise for Education customers. It’s coming to the web first, with iOS and Android following later.

A video produced by VentureBeat shows the software in action, with G Suite’s director of product management Serge Lachapelle demonstrating how it can pretty seamlessly remove the sound of crackling crisp packets, clicking pens, or glass clinking. Google’s announcement said the tech will also work on dogs barking or the clicking of a keyboard.

VentureBeat reports that Google has been working on the feature for around a year and a half, using thousands of its own meetings to train its AI model. YouTube clips of lots of people talking were also used by the team. However, Lachapelle was keen to emphasize that although the feature will improve over time, the company will not directly use external meetings to train it. Instead, it will use customer support channels to try to identify where the software might be going wrong.

Google isn’t the first company to try to use artificial intelligence to reduce background noise on calls. However, unlike a solution such as Nvidia’s RTX Voice software, Google’s processing happens in the cloud, meaning it can work consistently on a much broader range of hardware. Eventually, this will include smartphones. Lachapelle emphasizes that the data is encrypted during transport, and it’s never accessible outside of the de-noising process.

The new feature is on by default, and Lachapelle says it won’t give call participants any visual indication that it’s turned on, in order to keep the software’s interface clean. However, if you’d like to turn the noise cancellation off, you can do so from the audio menu in Google Meet’s settings.

Google hasn’t given a timeline for when the feature might roll out to non-Enterprise and Enterprise for Education accounts, but Lachapelle says that the hope is to bring it to a “larger and larger” group of users over time.
