Categories
AI

AI might help edit the next generation of blockbusters

The next few Tuesdays, The Verge’s flagship podcast The Vergecast is showcasing a miniseries dedicated to the use of artificial intelligence in industries that are often overlooked, hosted by Verge senior reporter Ashley Carman. This week, the series focuses on AI for the video world.

More specifically, we’re looking at how AI is being used as a tool to help people streamline the process of creating video content. Yes, this might mean software taking on a bigger role in the very human act of creativity, but what if instead of replacing us, machine learning tools could be used to assist our work?

That’s what Scott Prevost, VP of Adobe Sensei — Adobe’s machine learning platform — envisions for Adobe’s AI products. “Sensei was founded on this firm belief that we have that AI is going to democratize and amplify human creativity, but not replace it,” Prevost says. “Ultimately, enabling the creator to do things that maybe they couldn’t do before. But also to automate and speed up some of the mundane and repetitive tasks that are parts of creativity.”

Adobe has already built Sensei-powered features into its current products. Last fall, the company released a feature called Neural Filters for Photoshop, which can be used to remove artifacts from compressed images, change the lighting in a photo, or even alter a subject’s face, giving them a smile instead of a frown, for example, or adjusting their “facial age.” From the user’s perspective, all this is done by just moving a few sliders.

Adobe’s Neural Filter
Image: Adobe

Adobe also has features like Content Aware Fill, which is built into its video editing software After Effects and can seamlessly remove objects from videos — a task that would take hours or even days to do manually. Prevost shared a story about a small team of documentary filmmakers who ran into trouble with their footage when they realized there were unwanted specks in their shots caused by a dirty camera lens. With Content Aware Fill, the team was able to remove the unwanted blemishes from the video after identifying the object in only a single frame. Without software like Adobe’s, the team would have had to edit thousands of frames individually or reshoot the footage entirely.

Adobe’s content-aware fill for video
GIF: Adobe

Another feature from Adobe called Auto Reframe uses AI to reformat and reframe video for different aspect ratios, keeping the important objects in frame that may have been cut out using a regular static crop.

Adobe’s Auto Reframe feature
GIF: Adobe

Technology in this field is clearly advancing for consumers, and for big-budget professionals as well. While AI video editing techniques like deepfakes have not really made it onto the big screen just yet — most studios still rely on traditional CGI — the area where directors and Hollywood studios are closest to using AI is dubbing.

A company called Flawless, which specializes in AI-driven VFX and filmmaking tools, is currently working on something it calls TrueSync, which uses machine learning to create realistic, lip-synced visualizations on actors for multiple languages. Flawless co-founder and co-CEO Scott Mann told The Verge that this technique works significantly better than traditional CGI for reconstructing an actor’s mouth movements.

“You’re training a network to understand how one person speaks, so the mouth movements of an ooh and aah, different visemes and phonemes that make up our language are very person specific,” says Mann. “And that’s why it requires such detail in the process to really get something authentic that speaks like that person spoke like.”

An example Flawless shared that really stood out was a scene from the movie Forrest Gump, with a dub of Tom Hanks’ character speaking Japanese. The emotion of the character is still present and the end results are definitely more believable than a traditional overdub because the movement of the mouth is synchronized to the new dialogue. There are points where you almost forget that it’s another voice actor behind the scenes.

But as with any AI changing any industry, we also have to think about job replacement.

If someone is creating, editing, and publishing projects by themselves, then Adobe’s AI tools should save them a lot of time. But in larger production houses where each role is delegated to a specific specialist — retouchers, colorists, editors, social media managers — those teams may end up downsizing.

Adobe’s Prevost believes the technology will more likely shift jobs than completely destroy them. “We think some of the work that creatives used to do in production, they’re not going to do as much of that anymore,” he says. “They may become more like art directors. We think it actually allows the humans to focus more on the creative aspects of their work and to explore this broader creative space, where Sensei does some of the more mundane work.”

Scott Mann at Flawless has a similar sentiment. Though the company’s technology may result in less of a need for script rewriters for translated movies, it may open up doors for new job opportunities, he argues. “I would say, truthfully, that role is kind of a director. What you’re doing is you’re trying to convey that performance. But I think with tech and really with this process, it’s going to be a case of taking that side of the industry and growing that side of the industry.”

Will script supervisors end up becoming directors? Or photo retouchers end up becoming art directors? Maybe. But what we are seeing for certain today is that a lot of these tools are already combining workflows from various points of the creative process. Audio mixing, coloring, and graphics are all becoming one part of multipurpose software. So, if you’re working in the visual media space, instead of specializing in specific creative talents, your creative job may instead require you to be more of a generalist in the future.

“I think the boundaries between images, and videos, and audio, and 3D, and augmented reality are going to start to blur,” says Prevost. “It used to be that there are people who specialized in images, and people who specialized in video, and now you see people working across all of these mediums. And so we think that Sensei will have a big role in basically helping to connect these things together in meaningful ways.”

Repost: Original Source and Author Link

Categories
AI

Pair programming driven by programming language generation

As artificial intelligence expands its horizons and breaks new ground, it keeps stretching people’s imaginations about what is possible. While new algorithms and models are helping to address a growing number and variety of business problems, advances in natural language processing (NLP) and language models are making programmers think about how to revolutionize the world of programming.

With the proliferation of programming languages, the job of a programmer has become increasingly complex. A good programmer may be able to define a good algorithm, but converting it into code in a particular language requires knowledge of that language’s syntax and available libraries, which limits a programmer’s effectiveness across diverse languages.

Programmers have traditionally relied on their knowledge, experience and repositories for building these code components across languages. IntelliSense helped them with appropriate syntactical prompts. Advanced IntelliSense went a step further with autocompletion of statements based on syntax. Google code search and GitHub code search even listed similar code snippets, but the onus of tracing the right pieces of code or scripting the code from scratch, composing these together and then contextualizing it to a specific need, rested solely on the shoulders of the programmer.

Machine programming

We are now seeing the evolution of intelligent systems that can understand the objective of an atomic task, comprehend the context and generate appropriate code in the required language. This generation of contextual and relevant code can only happen when there is a proper understanding of the programming languages and natural language. Algorithms can now understand these nuances across languages, opening a range of possibilities:

  • Code conversion: comprehending code of one language and generating equivalent code in another language.
  • Code documentation: generating the textual representation of a given piece of code.
  • Code generation: generating appropriate code based on textual input.
  • Code validation: validating the alignment of the code to the given specification.

Code conversion

The evolution of code conversion is better understood when we look at Google Translate, which we use quite frequently for natural language translations. Google Translate learned the nuances of the translation from a huge corpus of parallel datasets — source-language statements and their equivalent target-language statements — unlike traditional systems, which relied on rules of translation between source and target languages.

Since it is easier to collect data than to write rules, Google Translate has scaled to translate between 100+ natural languages. Neural machine translation (NMT), a type of machine learning model, enabled Google Translate to learn from a huge dataset of translation pairs. The efficiency of Google Translate inspired the first generation of machine learning-based programming language translators to adopt NMT. But the success of NMT-based programming language translators has been limited due to the unavailability of large-scale parallel datasets (supervised learning) in programming languages. 

This has given rise to unsupervised machine translation models that leverage large-scale monolingual codebase available in the public domain. These models learn from the monolingual code of the source programming language, then the monolingual code of the target programming language, and then become equipped to translate the code from the source to the target. Facebook’s TransCoder, built on this approach, is an unsupervised machine translation model that was trained on multiple monolingual codebases from open-source GitHub projects and can efficiently translate functions between C++, Java and Python.
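
To make the idea concrete, here is a minimal sketch of what calling such a code-translation model can look like with the Hugging Face transformers library. The checkpoint name is a hypothetical placeholder, not an official TransCoder release, so treat the snippet as an illustration of the workflow rather than a working recipe.

```python
# Sketch: translate a C++ function to Python with a seq2seq code-translation model.
# "your-org/cpp-to-python-translator" is a hypothetical checkpoint name; substitute
# whatever translation model you actually have access to.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "your-org/cpp-to-python-translator"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

cpp_function = "int add(int a, int b) { return a + b; }"
inputs = tokenizer(cpp_function, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: a Python equivalent
```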

Code generation

Code generation is currently evolving in different forms — as a standalone code generator or as a pair programmer autocompleting a developer’s code.

The key technique employed in these NLP models is transfer learning, which involves pretraining the models on large volumes of data and then fine-tuning them on targeted, limited datasets. Such models have largely been based on recurrent neural networks, but recently, models based on the Transformer architecture have proved more effective because they lend themselves to parallelization, speeding up computation. Models thus fine-tuned for programming language generation can then be deployed for various coding tasks, including code generation and the generation of unit-test scripts for code validation.
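
As a rough sketch of that pretrain-then-fine-tune recipe, the snippet below continues training a small pretrained causal language model on a custom code corpus with Hugging Face transformers. The model name is just a small public example, and the training file is a hypothetical one-snippet-per-line text file; a real setup would use far more data and careful hyperparameter tuning.

```python
# Sketch: fine-tune a small pretrained Transformer on a task-specific code corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neo-125M"  # small example checkpoint; swap for any code model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical fine-tuning corpus: one code snippet per line in a plain text file.
dataset = load_dataset("text", data_files={"train": "my_functions.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="code-model-ft", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```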

We can also invert this approach, applying the same algorithms to comprehend code and generate relevant documentation. Traditional documentation systems focus on translating legacy code into English line by line, giving us pseudocode. This new approach can instead summarize code modules into comprehensive code documentation.

Programming language generation models available today include CodeBERT, CuBERT, GraphCodeBERT, CodeT5, PLBART, CodeGPT, CodeParrot, GPT-Neo, GPT-J, GPT-NeoX and Codex.

DeepMind’s AlphaCode takes this one step further, generating multiple code samples for a given problem description and filtering them against the given test conditions.
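
The generate-and-filter idea behind this is simple to sketch: sample many candidate programs, run each against the problem’s test cases, and keep only those that pass. The toy snippet below assumes a hypothetical generate_candidates function standing in for the model, and it executes candidates directly, so a real system would need proper sandboxing.

```python
# Toy sketch of "generate many samples, keep the ones that pass the tests".
from typing import Callable, List, Tuple

def passes_tests(src: str, tests: List[Tuple]) -> bool:
    """Run a candidate solution (expected to define `solve`) against (input, output) pairs."""
    namespace = {}
    try:
        exec(src, namespace)  # WARNING: executes generated code; sandbox this in practice
        return all(namespace["solve"](x) == y for x, y in tests)
    except Exception:
        return False

def select_solutions(generate_candidates: Callable[[str, int], List[str]],
                     problem: str, tests: List[Tuple], n: int = 100) -> List[str]:
    candidates = generate_candidates(problem, n)  # sample n programs from the model
    return [c for c in candidates if passes_tests(c, tests)]
```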

Pair programming

Autocompletion of code follows the same approach as Gmail Smart Compose. As many have experienced, Smart Compose prompts the user with real-time, context-specific suggestions, aiding in the quicker composition of emails. This is basically powered by a neural language model that has been trained on a bulk volume of emails from the Gmail domain.

Extending the same into the programming domain, a model that can predict the next set of lines in a program based on the past few lines of code is an ideal pair programmer. This accelerates the development lifecycle significantly, enhances the developer’s productivity and ensures a better quality of code.
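
In code, this kind of completion boils down to autoregressive generation from a causal language model conditioned on the preceding lines. The sketch below uses a small, publicly available checkpoint purely as a stand-in; it is not how TabNine or Copilot are actually implemented.

```python
# Sketch: predict the next few tokens of code from the lines written so far.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "EleutherAI/gpt-neo-125M"  # small example model, not a production pair programmer
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prefix = "def fibonacci(n):\n    "  # the code the developer has typed so far
inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```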

TabNine predicts subsequent blocks of code across a wide range of languages like JavaScript, Python, TypeScript, PHP, Java, C++, Rust, Go, Bash, etc. It also has integrations with a wide range of IDEs.

Copilot can not only autocomplete blocks of code, but can also edit or insert content into existing code, making it a very powerful pair programmer with refactoring abilities. Copilot is powered by Codex, a model with billions of parameters trained on a bulk volume of code from public repositories, including GitHub.

A key point to note is that we are probably in a transitory phase, with pair programming essentially working in a human-in-the-loop approach, which in itself is a significant milestone. But the final destination is undoubtedly autonomous code generation. The evolution of AI models that evoke confidence and responsibility will define that journey, though.

Challenges

Code generation for complex scenarios that demand more problem solving and logical reasoning is still a challenge, as it might warrant the generation of code not encountered before.

Understanding of the current context to generate appropriate code is limited by the model’s context-window size. The current set of programming language models supports a context size of 2,048 tokens; Codex supports 4,096 tokens. The samples in few-shot learning models consume a portion of these tokens and only the remaining tokens are available for developer input and model-generated output, whereas zero-shot learning / fine-tuned models reserve the entire context window for the input and output.
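
The budgeting is easy to make explicit. With some assumed prompt sizes, a few-shot prompt can consume half of even a 4,096-token window:

```python
# Back-of-the-envelope token budget for few-shot prompting (sizes are assumptions).
context_window = 4096        # e.g., Codex-class models; many others support only 2,048
few_shot_examples = 3
tokens_per_example = 600     # assumed size of each worked example in the prompt
task_description = 200       # assumed size of the instructions

remaining = context_window - (few_shot_examples * tokens_per_example + task_description)
print(f"Tokens left for developer input and generated output: {remaining}")  # 2096
# A zero-shot or fine-tuned model keeps the full 4,096 tokens for input and output.
```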

Most of the language models demand high compute as they are built on billions of parameters. Adopting these in different enterprise contexts could put higher demands on compute budgets. Currently, there is a lot of focus on optimizing these models to enable easier adoption.

For these code-generation models to work in pair-programming mode, their inference time has to be short enough that predictions are rendered in the developer’s IDE in less than 0.1 seconds, making for a seamless experience.
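
A simple way to check a completion backend against that budget is to time the end-to-end call an editor plugin would make. The helper below is a generic sketch; complete() stands in for whatever model or service is being evaluated.

```python
# Sketch: verify that a completion call fits within a 0.1-second latency budget.
import time
from typing import Callable

def within_latency_budget(complete: Callable[[str], str],
                          prefix: str, budget_s: float = 0.1) -> bool:
    start = time.perf_counter()
    _ = complete(prefix)                      # the call the IDE plugin would make
    return (time.perf_counter() - start) <= budget_s
```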

Kamalkumar Rathinasamy leads the machine learning based machine programming group at Infosys, focusing on building machine learning models to augment coding tasks. 

Vamsi Krishna Oruganti is an automation enthusiast and leads the deployment of AI and automation solutions for financial services clients at Infosys.

Repost: Original Source and Author Link

Categories
Computing

Intel Alder Lake Outperforms Previous Generation By Over 50%

Today, new CPU-Z benchmarks emerged, showcasing the performance of the Intel Core i5-12600K. Designed as an affordable mid-tier processor, this CPU outperformed the previous generation by miles — and this applies to both Intel and AMD models.

With the expected launch of Intel’s 12th-gen Alder Lake processors in just a bit over a week, new benchmarks emerge every single day. We’ve already seen the performance of Intel’s high-end Alder Lake processor, the Core i9-12900K, and it was impressive. However, Intel’s mid-range CPUs were largely left alone up until now.

The 12th Generation of Intel processors, Alder Lake, is set to release in the fall. (Image credit: Intel)

Three benchmarks have been submitted to CPU-Z. While early tests are not always accurate, these benchmarks are all quite consistent, suggesting that the results may be close to the real performance of the Core i5-12600K. One way or another, it’s important to take these with a grain of salt until the processors are tested after their release.

The Intel Core i5-12600K processor is rumored to have a base clock of 2.8GHz that can be boosted up to 3.7GHz, as well as a single-core boost of up to 4.9GHz. It features six performance cores with hyperthreading and four efficiency cores, adding up to 16 threads. The systems used in the benchmarks included two different Asus Z690 motherboards: the Asus ROG Maximus Z690 Apex and the Asus ROG Maximus Z690 Hero. The DDR5 RAM came from OLOy, Corsair, and TeamGroup, but all three had a frequency of 4800MHz.

In the CPU-Z benchmarks, this Alder Lake processor scored between 760 and 773 in single-core tests and 7,156-7,220 in multi-core. In order to add some context to these numbers, the CPU was pitted against the Core i5-11600K, a comparable processor from the previous Rocket Lake generation.

The result is very favorable for Alder Lake, as its predecessor scored a mere 633 points in single-core and 4,731 points in multi-core benchmarks. This means that if the benchmarks are accurate, the new Intel Core i5-12600K is around 20% faster in single-core tasks and a whopping 52% faster in multi-threaded tasks.
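
For reference, those percentages follow directly from the score ratios quoted above; a quick calculation puts the uplift at roughly 20 to 22 percent in single-core and 51 to 53 percent in multi-core:

```python
# Rough arithmetic behind the quoted uplifts, using the CPU-Z scores cited above.
i5_12600k = {"single": (760, 773), "multi": (7156, 7220)}  # Alder Lake score ranges
i5_11600k = {"single": 633, "multi": 4731}                 # Rocket Lake scores

for test in ("single", "multi"):
    lo, hi = i5_12600k[test]
    base = i5_11600k[test]
    print(f"{test}-core uplift: {lo / base - 1:.0%} to {hi / base - 1:.0%}")
# single-core uplift: 20% to 22%
# multi-core uplift: 51% to 53%
```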

Intel Core 12th-gen processor boxes.
VideoCardz

Outclassing its predecessor is one thing, but Intel Alder Lake also stands victorious against AMD processors of the same price range. Compared to the AMD Ryzen 7 5800X, which has eight cores and 16 threads, Core i5-12600K was around 19% faster in single-core and 9% faster in multi-core tests.

The benchmark results speak very well of the upcoming Intel Core i5-12600K. Of course, it may still be outperformed by more expensive CPUs of the previous generation, but Intel’s pricing is likely to be somewhat competitive for this processor. The comparable Rocket Lake CPU had an MSRP of $262. If this processor remains similarly priced, it may offer great performance for this bracket.

Repost: Original Source and Author Link

Categories
AI

Google is using AI to design its next generation of AI chips more quickly than humans can

Google is using machine learning to help design its next generation of machine learning chips. The algorithm’s designs are “comparable or superior” to those created by humans, say Google’s engineers, but can be generated much, much faster. According to the tech giant, work that takes months for humans can be accomplished by AI in under six hours.

Google has been working on how to use machine learning to create chips for years, but this recent effort — described this week in a paper in the journal Nature — seems to be the first time its research has been applied to a commercial product: an upcoming version of Google’s own TPU (tensor processing unit) chips, which are optimized for AI computation.

“Our method has been used in production to design the next generation of Google TPU,” write the paper’s authors, co-led by Google research scientists Azalia Mirhoseini and Anna Goldie.

AI, in other words, is helping accelerate the future of AI development.

In the paper, Google’s engineers note that this work has “major implications” for the chip industry. It should allow companies to more quickly explore the possible architecture space for upcoming designs and more easily customize chips for specific workloads.

An editorial in Nature calls the research an “important achievement,” and notes that such work could help offset the forecasted end of Moore’s Law — an axiom of chip design from the 1970s that states that the number of transistors on a chip doubles every two years. AI won’t necessarily solve the physical challenges of squeezing more and more transistors onto chips, but it could help find other paths to increasing performance at the same rate.

Google’s TPU chips are offered as part of its cloud services and used internally for AI research.
Photo: Google

The specific task that Google’s algorithms tackled is known as “floorplanning.” This usually requires human designers who work with the aid of computer tools to find the optimal layout on a silicon die for a chip’s sub-systems. These components include things like CPUs, GPUs, and memory cores, which are connected together using tens of kilometers of minuscule wiring. Deciding where to place each component on a die affects the eventual speed and efficiency of the chip. And, given both the scale of chip manufacture and computational cycles, nanometer-scale changes in placement can end up having huge effects.

Google’s engineers note that designing floor plans takes “months of intense effort” for humans, but, from a machine learning perspective, there is a familiar way to tackle this problem: as a game.

AI has proven time and time again it can outperform humans at board games like chess and Go, and Google’s engineers note that floorplanning is analogous to such challenges. Instead of a game board, you have a silicon die. Instead of pieces like knights and rooks, you have components like CPUs and GPUs. The task, then, is simply to find each board’s “win conditions.” In chess that might be checkmate; in chip design, it’s computational efficiency.

Google’s engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific “reward” function based on its success across different metrics like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and generate its own designs in turn.
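
To make the reward idea concrete, here is a toy sketch of how a candidate floor plan might be scored on proxy metrics such as total wirelength and estimated power, with the negated weighted cost used as the reinforcement-learning reward. The data structures, metrics, and weights are illustrative assumptions, not Google’s actual formulation.

```python
# Toy sketch: score a floor plan on wirelength and power, and turn it into a reward.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class FloorPlan:
    placements: Dict[str, Tuple[float, float]]  # block name -> (x, y) position on the die

def wirelength(plan: FloorPlan, nets: List[List[str]]) -> float:
    """Half-perimeter wirelength summed over all nets (each net is a list of block names)."""
    total = 0.0
    for net in nets:
        xs = [plan.placements[b][0] for b in net]
        ys = [plan.placements[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def reward(plan: FloorPlan, nets: List[List[str]], power_estimate: float,
           w_wire: float = 1.0, w_power: float = 0.5) -> float:
    # Lower cost means higher reward; the agent learns placements that maximize this.
    return -(w_wire * wirelength(plan, nets) + w_power * power_estimate)
```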

As we’ve seen when AI systems take on humans at board games, machines don’t necessarily think like humans and often arrive at unexpected solutions to familiar problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” — a seemingly illogical piece placement by the AI that nevertheless led to victory.

Nothing quite so dramatic happened with Google’s chip-designing algorithm, but its floor plans nevertheless look quite different to those created by a human. Instead of neat rows of components laid out on the die, sub-systems look like they’ve almost been scattered across the silicon at random. An illustration from Nature shows the difference, with the human design on the left and machine learning design on the right. You can also see the general difference in the image below from Google’s paper (orderly humans on the left; jumbled AI on the right), though the layout has been blurred as it’s confidential:

A human-designed chip floor plan is on the left, and the AI-designed floor plan on the right. The images have been blurred by the paper’s authors as they represent confidential designs.
Image: Mirhoseini, A. et al

This paper is noteworthy, particularly because its research is now being used commercially by Google. But it’s far from the only aspect of AI-assisted chip design. Google itself has explored using AI in other parts of the process like “architecture exploration,” and rivals like Nvidia are looking into other methods to speed up the workflow. The virtuous cycle of AI designing chips for AI looks like it’s only just getting started.

Update, Thursday Jun 10th, 3:17PM ET: Updated to clarify that Google’s Azalia Mirhoseini and Anna Goldie are co-lead authors of the paper.

Repost: Original Source and Author Link

Categories
AI

Qualcomm unveils chips to power the next generation of smartphones

Qualcomm today unveiled its new Snapdragon 778G 5G mobile processor platform to power high-end smartphones coming in the second quarter.

The platform of multiple chips will be used in smartphones coming from Honor, iQOO, Motorola, OPPO, Realme, and Xiaomi. Snapdragon 778G is designed to deliver everything from high-end mobile gaming to accelerated artificial intelligence.

The chip supports three image sensors to capture three photos or videos simultaneously in wide, ultra-wide, and zoom. Users will be able to record from all three lenses at once, allowing them to capture the best aspects of each and automatically merge them into one professional-quality video.

Users can also shoot 4K HDR10+ video, capturing over a billion shades of color. Snapdragon 778G also supports staggered high dynamic range (HDR) image sensors, so you can capture computational HDR video, which provides dramatic improvements to color, contrast, and detail. Qualcomm also announced other chips at an event in San Diego, California.

AI advances

Above: Qualcomm has upgraded the AI Engine in its latest Snapdragon chip.

Image Credit: Qualcomm

The platform also features the 6th-generation Qualcomm AI Engine with the Qualcomm Hexagon 770 processor, delivering up to 12 TOPS and a 2x improvement in performance per watt compared to its predecessor.

Now, virtually all connections, video calls, and phone calls are enhanced by AI to enable use cases like AI-based noise suppression and better AI-based camera experiences. The platform also uses the second-generation Qualcomm Sensing Hub, which integrates a dedicated low-power AI processor for contextual awareness use cases.

Gaming

Above: Qualcomm’s Game Quick Touch feature reduces touch latency.

Image Credit: Qualcomm

When it comes to gaming, the platform delivers select Qualcomm Snapdragon Elite Gaming features, including Variable Rate Shading (VRS) and Qualcomm Game Quick Touch, two features that are now unlocked in the 7 series. The company previously announced the features in the Snapdragon 888 series in December. Game Quick Touch reduces touch latency, optimizing it at the millisecond level.

Enabled by the Qualcomm Adreno 642L graphics processing unit (GPU), VRS allows developers to specify and group the pixels being shaded within different game scenes to help reduce the GPU workload and provide greater power savings while still maintaining the highest visual fidelity.

Qualcomm Game Quick Touch offers up to 20% faster input response for touch latency.

Connectivity

The Snapdragon 778G includes the integrated Snapdragon X53 5G Modem-RF System to help deliver millimeter-wave and sub-6 5G capabilities to more users around the world.

By featuring the Qualcomm FastConnect 6700 Connectivity System, Snapdragon 778G supports multi-gigabit-class Wi-Fi 6 and 6E speeds (up to 2.9Gbps) with 4K QAM and access to 160MHz channels in both the 5GHz and 6GHz bands.

Additionally, the Qualcomm Snapdragon Sound technology suite offers verification that Qualcomm Technologies’ hallmark audio features and system-level optimizations are implemented to enable a redefined listening experience, end to end. With support for Bluetooth 5.2, fast Wi-Fi 6/6E and 5G, Snapdragon 778G is capable of delivering low-latency gaming, sharing, video calls, and more.

Performance

The chip is manufactured using a 6-nanometer process technology, in which the width between circuits is six billionths of a meter. Snapdragon 778G balances performance and power efficiency. The Qualcomm Kryo 670 CPU delivers up to a 40% uplift in overall CPU performance, and the Adreno 642L GPU is designed to deliver up to 40% faster graphics rendering compared to the previous generation.

Devices based on Snapdragon 778G are expected to be commercially available in the second quarter of 2021.

Qualcomm also announced its first 10 Gigabit 5G M.2 reference design, aimed at accelerating the adoption of faster 5G wireless networking services.

Reference designs

Above: Qualcomm’s reference designs for 5G smartphones.

Image Credit: Qualcomm

The Qualcomm Snapdragon X65 and X62 5G M.2 Reference Designs will accelerate 5G adoption across industry segments, including PCs, always-connected PCs (ACPCs), laptops, customer premises equipment (CPEs), extended reality, gaming, and other mobile broadband (MBB) devices.

The new reference designs are powered by the world’s most advanced and first 3GPP Release 16 5G modem-RF solutions — the Snapdragon X65 and X62 5G modem-RF systems — supporting unmatched spectrum aggregation, global 5G sub-6GHz, extended-range mmWave, and high power efficiency. The reference designs will help customers launch 5G products faster, with products expected to debut later this year.

New 5G modem

Qualcomm also unveiled its Snapdragon X65 5G modem-RF system for global 5G expansion. The upgraded features in this platform will enhance the fourth-generation 5G modem-to-antenna solution, which was announced in February.

As the world’s first 10 Gigabit 5G and first 3GPP Release 16 modem-RF system, the Snapdragon X65 has a software-upgradeable architecture that allows future-proofing of the solutions it powers and supports the acceleration of 5G expansion. The architecture also enhances coverage, power efficiency, and performance for users. The upgrades will enable rollouts in a variety of regions, including China.

Repost: Original Source and Author Link

Categories
AI

Microsoft exclusively licenses OpenAI’s groundbreaking GPT-3 text generation model

Microsoft’s ongoing partnership with San Francisco-based artificial intelligence research company OpenAI now includes a new exclusive license on the AI firm’s groundbreaking GPT-3 language model, an auto-generating text program that’s emerged as the most sophisticated of its kind in the industry.

The two companies have been entwined for years through OpenAI’s use of the Azure cloud computing platform, with Azure being how OpenAI accesses the vast computing resources it needs to train many of its models. Last year, Microsoft made a major $1 billion investment to become OpenAI’s exclusive cloud provider, a deal that now involves being the exclusive licensee for GPT-3.

OpenAI released GPT-3, the third iteration of its ever-growing language model, in July, and the program and its prior iterations have helped create some of the most fascinating AI language experiments to date. It’s also inspired vigorous debate around the ethics of powerful AI programs that may be used for more nefarious purposes, with OpenAI initially refusing to publish research about the model for fear it would be misused.

“Unlike most AI systems which are designed for one use-case, OpenAI’s API today provides a general-purpose ‘text in, text out’ interface, allowing users to try it on virtually any English language task. GPT-3 is the most powerful model behind the API today, with 175 billion parameters,” OpenAI explains in a blog post about its partnership with Microsoft.
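
At the time, that “text in, text out” interface amounted to a single completions call in OpenAI’s Python client. The sketch below follows the legacy API of that era and is illustrative only; the engine name and prompt are examples, and the SDK has since changed.

```python
# Legacy-era sketch of OpenAI's "text in, text out" completions API (circa GPT-3's launch).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",  # GPT-3 base engine name at the time
    prompt="Write a one-sentence summary of what a neural network is:",
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```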

“We see this as an incredible opportunity to expand our Azure-powered AI platform in a way that democratizes AI technology, enables new products, services and experiences, and increases the positive impact of AI at Scale,” writes Microsoft chief technology officer Kevin Scott in the company’s blog post announcing the deal. “Our mission at Microsoft is to empower every person and every organization on the planet to achieve more, so we want to make sure that this AI platform is available to everyone – researchers, entrepreneurs, hobbyists, businesses – to empower their ambitions to create something new and interesting.”

OpenAI says, “the deal has no impact on continued access to the GPT-3 model through OpenAI’s API, and existing and future users of it will continue building applications with our API as usual,” which does raise some interesting questions about what exactly Microsoft has acquired here. A Microsoft spokesperson tells The Verge that its exclusive license gives it unique access to the underlying code of GPT-3, which contains technical advancements it hopes to integrate into its products and services.

So while other companies and researchers may be able to access GPT-3 through OpenAI’s API, only Microsoft will reap the benefits of getting to make use of all the AI advancements that went into making it such a sophisticated program. It’s not clear what that will look like right now, but what is clear is that Microsoft sees immense value in OpenAI’s work and likely wants to be the first (and, in this case, only) company to take its largely experimental research work and translate it into real-world product advancements.

Repost: Original Source and Author Link

Categories
Security

Star Trek Fan Deepfaked Next Generation Data Into Picard

Brent Spiner reprised his role as Lt. Cmdr. Data for the 2020 CBS All Access series Star Trek: Picard, and while it was certainly a nice touch to see Spiner play the iconic synthetic life form for the first time in years, there was no getting around the fact that the character didn’t look entirely like the Mr. Data fans remember from his Star Trek: The Next Generation heyday.

However, Star Trek fans are a tech-savvy bunch, and it didn’t take long for one of them to spring into action to right this particular wrong — with the aid of some cutting-edge deepfake technology.

“I’m a lifelong Star Trek fan, and have been making these videos for more than two years,” Deep Homage, a U.S.-based deepfake creator who requested to remain anonymous, told Digital Trends. “I like the Star Trek: Picard series very much, but I found Data’s ‘makeover’ a little jarring, having seen Next Generation Data for so many years.”

Since Data appears in dream sequences in Picard, Deep Homage felt that it would make far more sense for Jean-Luc to remember Data’s younger self from their days together on the Enterprise. Deep Homage therefore set about creating a digitally de-aged Data deepfake (try saying that 10 times quickly) using deep learning-based face-swap technology. The results provide a tantalizing alternative to the Data seen in Picard.

“The process involves training a computer model with thousands of example images of both faces, derived from video footage,” Deep Homage said. “It takes at least a week to train the model to re-create the face, and then the footage is converted with the model and edited.”

Deepfakes have been used for a variety of creative, filmic applications — whether it’s swapping out Arnold Schwarzenegger for Sylvester Stallone in Terminator 2 or creating an alternative speech by former President Richard Nixon regarding the moon landing. Increasingly, they’re finding their way into regular movies and TV shows, too. Recently, similar A.I. technology was used to create a de-aged Mark Hamill as Luke Skywalker for the Disney+ show The Mandalorian.

“Deepfake technology is evolving and getting better, and creative people are taking the lead in using it,” Deep Homage said. “If you were to compare the videos made in 2020 with those made in 2018, I think you would notice that the resolution of the face is better, and the level of detail is higher — especially in the skin and the eyes.”

Repost: Original Source and Author Link