Categories
Game

Twitch unveils built-in fundraising tool for streamers

Twitch creators will soon be able to raise money for charity directly on the livestreaming platform. The company launched a closed beta of Twitch Charity today for a select group of partners and affiliates. Donations will be processed through the PayPal Giving Fund and tracked in both the stream’s activity feed and chat. Traditionally, viewers have made donations through subs and Bits, after which streamers have had to rely on a third-party charity portal like Tiltify to actually send the money to their chosen organization. By launching its own charity product, Twitch cuts out the middleman and likely makes it easier both to host a fundraiser and to donate to one.

Fundraising on Twitch has become a significant moneymaker in recent years, and charity streams can easily rake in six- or seven-figure sums, particularly if a major streamer or other celebrity is involved. Last year’s Z Event, for instance, featured multiple streamers and raised millions for Action Against Hunger in around 72 hours.

An additional benefit of a native fundraising feature is that creators won’t be subject to third-party fees. Unlike Tiltify (which takes a 5 percent cut of whatever money gets raised), Twitch will let creators donate 100 percent of the money raised, and the company will forgo a tax incentive.
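
To put that 5 percent cut in perspective, here is a minimal sketch of the arithmetic, using a hypothetical fundraiser total (only the fee rate comes from the article):

```python
# Hypothetical totals; the 5 percent fee is the third-party cut cited above.
TILTIFY_FEE = 0.05

def net_to_charity(amount_raised: float, fee_rate: float) -> float:
    """Return what the charity actually receives after the platform's cut."""
    return amount_raised * (1 - fee_rate)

raised = 1_000_000  # a seven-figure charity stream, per the article
print(f"Via a 5% third-party portal: ${net_to_charity(raised, TILTIFY_FEE):,.0f}")
print(f"Via Twitch Charity (no fee): ${net_to_charity(raised, 0.0):,.0f}")
# $950,000 vs. $1,000,000, a $50,000 difference for the charity.
```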

Creators will be randomly selected for the closed beta of Twitch Charity later today. For everyone else, Twitch expects to roll the feature out to most partners and affiliates later this year.

Repost: Original Source and Author Link

Categories
Game

Acer unveils a pair of portable, glasses-free 3D monitors

Acer’s no-glasses 3D is finally available beyond a laptop. The company has trotted out the SpatialLabs View and SpatialLabs View Pro portable monitors, which bring the more immersive screen tech to gamers and creators. Both 15.6-inch displays deliver stereoscopic 3D for content that has either a profile (over 50 games so far) or the right export plugin. The screens can convert 2D content to 3D, and they’ll still be useful as 4K monitors with 400-nit brightness and 100 percent Adobe RGB color coverage.

The differences largely come down to tweaks for particular audiences. The Pro builds on the regular SpatialLabs View with both creative tech and an “intelligent industrial design” to help it deploy in the field. You can even use a VESA wall mount when you want a more permanent presence than the integrated kickstand (present on both models) can provide.

You’ll have to wait until summer for either model. Acer hasn’t yet outlined pricing, but these stand-alone monitors should be considerably more affordable than the $1,700 the company originally charged for its lowest-priced 3D laptop. It’s just a question of whether or not you want a 3D monitor in the first place. The glasses-free visuals could add a pleasant spin to otherwise familiar experiences, but there’s little doubt they’ll carry a premium compared to ‘plain’ 2D screens.

Repost: Original Source and Author Link

Categories
Computing

Apple Unveils Official Release Date for MacOS Monterey

In a press release for the new AirPods following the Apple Unleashed event, Apple revealed the release window for MacOS Monterey. The new operating system is set to arrive next week as the follow-up to 2020’s MacOS Big Sur release, bringing several big enhancements centered on productivity.

Although Apple didn’t provide a solid launch date, the timing means MacOS Monterey should be expected by October 30 at the latest. Apple’s release specifically mentioned that the AirPods 3 only work with Apple devices running iOS 15.1, iPadOS 15.1, WatchOS 8.1, tvOS 15.1, or MacOS Monterey.

Coming as a free update for most Macs, the big feature for most people in this release is Universal Control, which lets you use a single mouse and keyboard to control multiple Macs and iPads at once. The feature is separate from Sidecar, which lets you use an iPad as a second display for your Mac.

If you’re on MacOS Big Sur, you can check whether MacOS Monterey is ready for your device by heading to the Apple menu, choosing System Preferences, and then selecting Software Update. (Choosing About This Mac and clicking Software Update gets you to the same place.) If your Mac is eligible, you’ll see MacOS Monterey listed there and can proceed by hitting the Upgrade Now button.
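
If you prefer the command line, macOS also ships with Apple’s built-in softwareupdate utility, which lists pending updates; here is a minimal sketch that wraps it in Python (the exact labels in the output vary by machine):

```python
import subprocess

# `softwareupdate -l` is Apple's built-in CLI for listing available updates.
# If the Monterey upgrade is ready for this Mac, it should appear in the listing.
result = subprocess.run(["softwareupdate", "-l"], capture_output=True, text=True)
listing = result.stdout + result.stderr  # the tool writes status lines to stderr

if "Monterey" in listing:
    print("macOS Monterey is available for this Mac.")
else:
    print("Not listed yet; check System Preferences > Software Update instead.")
```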

Other features include AirPlay to Mac, which lets you cast audio or video from an iPhone or an iPad to a Mac without extra software or servers. Rounding out the list of features in Monterey are a new tab design in Safari, enhanced Shortcuts, a new Focus mode, and support for SharePlay.

Some of the new features in Monterey will be exclusive to Apple M1 Mac models. These include the portrait mode improvements to FaceTime, which blur the background behind you, and Live Text, which pulls text and phone numbers out of photos. Intel Macs also won’t be able to use the new globe view in Maps or see the expanded detail in cities like New York and Los Angeles, because Intel Macs don’t have the neural engine featured on the Apple M1 chip.

MacOS Monterey was originally announced at WWDC 2021 in June. Since then, the OS has been in beta testing through the Apple Developer Program, with members of the general public also testing it through the Apple Beta Software Program.

Repost: Original Source and Author Link

Categories
AI

Transposit unveils CloudOps workflow-building functionality

If someone were to select three terms that represent the frontier of IT in 2021, they might choose: automation, automation, automation.

Others might select user experience (UX), AI, edge computing, CloudOps workflows, or DevOps. They’re all legitimate, but the common denominator is automation because each relies to some extent on automated processes.

One would think that next-gen self-service workflows, a major enterprise automation trend that draws on all of the functions above, would already be installed everywhere at the usual-suspect cloud service providers (AWS, Azure, and Google Cloud Platform), since those companies are themselves progressive IT consumers. But this isn’t the case; a newer DevOps player, Transposit, knows it and sees a niche market.

The San Francisco, California-based startup, which describes itself as a DevOps process orchestration provider, today announced new functionality for its Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure connectors that it claims allows engineering and cloud operations teams to move faster and more confidently in enabling self-service infrastructure through automated workflows.

Transposit Actions, as these functions are called, aren’t limited to the major cloud service providers. They can be added to any step within an enterprise runbook to create automated workflows across diverse stacks, so that whichever tools teams are using, the visibility, context, and actionability to use them optimally are right in front of them, Transposit VP of marketing Ed Sawma told VentureBeat.

Bringing cloud services into these workflows empowers CloudOps admins to ensure outstanding customer service, UX, and reliability, Sawma said. This is about enabling engineering teams to get immediate access to the services and infrastructure they need to ship software rapidly and safely, he said.

CloudOps workflows get enterprise traction

“First and foremost, the big cloud providers are focused on core infrastructure, storage, compute, and providing those core building blocks,” Sawma said. “They’ve moved up the stack a bit from there, but traditionally they haven’t really offered solutions for how you operate the workflow, how you deploy code and run that code. They know DevOps, they see CI/CD platforms and continuous delivery platforms, but because it’s not the core building blocks of cloud infrastructure, they’re not expert in it.”

Transposit uses what it calls a “human-in-the-loop” methodology in its specialized platform, Sawma said.

“A lot of what IT and ops teams have to do requires a human to think about it and to make some kind of decision, and so we built our platform from the ground up for what we believe should be called intelligence augmentation,” Sawma said. “It’s like, ‘How do we take a human operator and give them superpowers to be able to go and take action across all these hundreds of different tools?’ That’s the complexity of a modern development stack.”

Transposit delivers DevOps process orchestration. Its fully integrated, human-in-the-loop approach to automation empowers engineering operations teams to streamline DevOps practices, improve service reliability, and resolve incidents faster. As the glue between tools, data, and people, Transposit claims to codify institutional knowledge to make processes work more efficiently.

Transposit’s no-code builder, coupled with developer customization, enables operations to create self-service workflows that let users do everything from provisioning infrastructure to creating new accounts and permissions, Sawma said.

The new functionality for the AWS, GCP, and Azure connectors allows engineering and cloud ops teams to immediately access data from cloud services (such as recent deployments, service status across regions, and instance lists); to reroute connected cloud services during a disabled-server event; and to automate service requests such as provisioning a new EC2 instance, creating a new AWS account, or granting new permissions.
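
Transposit hasn’t published the code behind these connectors, but the kind of step being automated is familiar. As a purely illustrative sketch, here is what the “provision a new EC2 instance” request looks like when issued directly through AWS’s boto3 SDK (the AMI ID, region, and tag are placeholders):

```python
import boto3

# One self-service workflow step from the list above: provision a new EC2
# instance on request. In a platform like Transposit this would run inside a
# runbook with approvals; here it is shown as a plain boto3 call.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "requested-by", "Value": "self-service-workflow"}],
    }],
)

print("Provisioned", response["Instances"][0]["InstanceId"])
```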

Repost: Original Source and Author Link

Categories
AI

Spell unveils deep learning operations platform to cut AI training costs

Spell today unveiled an operations platform that provides the tooling needed to train AI models based on deep learning algorithms.

The platforms currently employed to train AI models are optimized for machine learning algorithms. AI models based on deep learning algorithms require their own deep learning operations (DLOps) platform, Spell head of marketing Tim Negris told VentureBeat.

The Spell platform automates the entire deep learning workflow using tools the company developed in the course of helping organizations build and train AI models for computer vision and speech recognition applications that require deep learning algorithms.

Deep roots

Deep learning traces its lineage back to neural networks, a field of machine learning that structures algorithms in layers to create a network that can learn and make intelligent decisions on its own. The artifacts and models created using deep learning algorithms, however, don’t lend themselves to the same platforms used to manage machine learning operations (MLOps), Negris said.

An AI model based on deep learning algorithms can require tracking and managing hundreds of experiments with thousands of parameters spanning large numbers of graphical processor units (GPUs), Negris noted. The Spell platform specifically addresses the need to manage, automate, orchestrate, document, optimize, deploy, and monitor deep learning models throughout their entire lifecycle, he said. “Data science teams need to be able to explain and reproduce deep learning results,” Negris added.

While most existing MLOps platforms are not well suited to managing deep learning algorithms, Negris said the Spell platform can also be employed to manage AI models based on machine learning algorithms. Spell does not provide any tools to manage the lifecycle of those models, but data science teams can add their own third-party framework for managing them to the Spell platform.

The Spell platform also reduces cost by automatically invoking, whenever feasible, the spot instances that cloud service providers make available for a finite amount of time, Negris said. That capability can reduce the total cost of training an AI model by as much as 66%, he added. That’s significant because the cost of training AI models based on deep learning algorithms can in some cases reach millions of dollars.
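
As a rough illustration of that claim, spot capacity is typically deeply discounted against on-demand rates; the prices below are hypothetical, with only the 66% figure taken from Spell:

```python
# Hypothetical GPU pricing; real spot discounts vary by provider, region, and
# instance type. Only the 66% savings figure comes from Spell's claim.
ON_DEMAND_RATE = 3.00  # assumed $/GPU-hour on demand
SPOT_RATE = 1.02       # assumed $/GPU-hour on spot (about 66% cheaper)

gpu_hours = 10_000  # a large deep learning training run
on_demand_cost = gpu_hours * ON_DEMAND_RATE
spot_cost = gpu_hours * SPOT_RATE

savings = (on_demand_cost - spot_cost) / on_demand_cost
print(f"${on_demand_cost:,.0f} on demand vs. ${spot_cost:,.0f} on spot "
      f"({savings:.0%} saved)")  # $30,000 vs. $10,200 (66% saved)
```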

A hybrid approach

In time, most AI applications will be constructed using a mix of machine and deep learning algorithms. In fact, as the building of AI models using machine learning algorithms becomes more automated, many data science teams will spend more of their time constructing increasingly complex AI models based on deep learning algorithms. The cost of building AI models based on deep learning algorithms should also steadily decline as GPUs deployed in an on-premises IT environment or accessed via a cloud service become more affordable.

In the meantime, Negris said that while the workflows for building AI models will converge, it’s unlikely traditional approaches to managing application development processes based on DevOps platforms will be extended to incorporate AI models. The continuous retraining of AI models that are subject to drift does not lend itself to the more linear processes that are employed today to build and deploy traditional applications, he said.

Nevertheless, all the AI models being trained eventually need to find their way into an application deployed in a production environment. The challenge many organizations face today is aligning the rate at which AI models are developed with the faster pace at which applications are now deployed and updated.

One way or another, it’s only a matter of time before every application — to varying degrees — incorporates one or more AI models. The issue going forward is finding a way to reduce the level of friction that occurs whenever an AI model needs to be deployed within an application.

Repost: Original Source and Author Link

Categories
AI

JetBrains unveils on-premises edition of Jupyter Notebook platform

Software development company JetBrains announced it has made available an on-premises edition of Datalore, a platform that enables data scientists to collaborate around shared instances of a Jupyter Notebook.

Previously, Datalore was only available to a single user via a cloud service provided by JetBrains. Now data science teams that work, for example, in highly regulated environments that are not allowed to employ cloud services can host the platform themselves, said Dmitry Trofimov, project lead for Datalore at JetBrains. Datalore for Enterprise is priced at $125 per user per month.

Based on an open source format, Jupyter Notebooks are documents that combine live runnable code with text describing an AI model, equations, images, interactive visualizations, and other output. Widely employed by data scientists, Jupyter Notebooks are paired in Datalore with JetBrains’ collaboration tools, which make it easier to share documents across a team of data scientists.
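
For readers unfamiliar with the format, a Jupyter notebook is just a structured document of markdown and code cells. A minimal sketch using the open source nbformat library (which underpins Jupyter’s own tooling) shows the shape of what Datalore teams are sharing:

```python
import nbformat

# Build a two-cell notebook: a markdown cell describing an experiment and a
# runnable code cell, the same document structure a shared notebook carries.
nb = nbformat.v4.new_notebook()
nb.cells = [
    nbformat.v4.new_markdown_cell("## Baseline model\nNotes on the experiment."),
    nbformat.v4.new_code_cell("import statistics\nprint(statistics.mean([1, 2, 3]))"),
]

# Writes a standard .ipynb file that any Jupyter-compatible tool can open.
nbformat.write(nb, "experiment.ipynb")
```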

Datalore also provides access to an integrated development environment (IDE) that includes smart coding assistance tools that automatically complete code as data scientists and analysts write in the Python programming language, for example. The IDE also allows data scientists to collect and explore data, create machine learning and deep learning models, visualize results, and then share them with other members of a team. The Datalore cloud service adds the ability to prototype an AI model more easily using cloud-based infrastructure.

As AI goes mainstream, the building of applications is becoming more of a team sport that requires developers and data scientists to collaborate. In some cases, data scientists are also starting to learn how to code as part of an effort to become less dependent on developers, noted Trofimov. “The situation is changing,” he said.

Supporting AI ROI

Regardless of how the convergence of AI model and application development is achieved, it’s clear that Jupyter Notebooks are becoming the means through which much of that collaboration is being enabled. The challenge JetBrains and other service providers face is the strong preference for open source tools that data scientists often deploy and manage themselves. JetBrains and others, however, are betting that as pressure to accelerate the building and updating of AI models increases, more organizations will opt to either deploy a platform for building AI models internally or leverage a cloud service.

Data scientists are not only hard to find and retain these days, but they also typically only build a few AI models per year that make it into a production environment. Business and IT leaders that are betting heavily on AI to drive digital business transformation initiatives are now paying a lot more attention to data scientist productivity. In effect, every organization that builds and deploys software today is engaged in an AI arms race with rivals that are making similar investments.

Of course, hiring data scientists and investing in platforms to build AI models is no guarantee of success. There is no correlation between the number of data scientists employed and an organization’s ability to successfully automate a process using AI. Success tends to relate more to the level of insight data scientists can bring to bear on the business processes they are seeking to automate. Too many data scientists don’t spend enough time engaging the people who manage a process to better understand all the rules and exceptions that inevitably pervade it. Conversely, a new generation of business executives will inevitably need to become familiar with Jupyter Notebooks to better understand how data science is being applied to those processes.

In the meantime, the greater the appreciation there is for AI as a team sport within organizations, the more likely it becomes there will be an actual return on that AI investment.

Repost: Original Source and Author Link

Categories
AI

Linux Foundation unveils new permissive license for open data collaboration

The Linux Foundation has announced a new permissive license designed to help foster collaboration around open data for artificial intelligence (AI) and machine learning (ML) projects.

It has often been said that data is the new oil, but for AI and ML projects in particular, having access to expansive and diverse data sets is key to reducing bias and building powerful models capable of all manner of intelligent tasks. To machines, data is a little like “experience” is to humans — the more of it you have, the better decisions you are likely to make.

With CDLA-Permissive-2.0, the Linux Foundation is building on its previous efforts to encourage data-sharing efforts through licensing arrangements that clearly define how the data — and any derivative data sets — can and can’t be used.

Data pools

The Linux Foundation first introduced the Community Data License Agreement (CDLA) back in 2017 to entice organizations to open up their vast pools of (underused) data to third parties. There were two original licenses: a sharing license with a “copyleft” reciprocal commitment borrowed from the open source software sphere, stipulating that any derivative data sets built from the original data set must be shared under a similar license; and a permissive license (1.0) without any such obligations (similar to how someone might define “true” open source software).

Licenses are basically legal documents that outline how a piece of work (in this case, data sets) can be used or modified, but specific phrases, ambiguities, or exceptions can be enough to make companies run a mile if they think that releasing content under a given license could cause them problems somewhere down the line. That is where the CDLA-Permissive-2.0 license comes into play: it’s essentially a rewrite of version 1.0, but shorter and simpler to follow. More than that, it removes certain provisions that were deemed unnecessary or burdensome and that may have hindered broader use of the license.

For example, version 1.0 of the license included obligations that data recipients preserve attribution notices in the data sets. For context, attribution notices or statements are standard in the software sphere, where a company that releases software built on open source components has to credit the creators of those components in its own software license. But according to the Linux Foundation, feedback it received from the community, and from lawyers representing companies involved in open data projects, indicated that there were challenges involved with associating attributions with data (or versions of data sets).

So while data source attribution is still an option, and might make sense for specific projects, particularly where transparency is paramount, it is no longer a condition for businesses looking to share data under the new permissive license. The chief obligation that remains is that the main Community Data License Agreement text be included with any new data sets.

Data vs software

This also helps to highlight how transposing a concept from a software license onto a data set license doesn’t always make sense, partly because laws and regulations usually treat data differently to software and other similar creative content.

“Data is different from software,” Linux Foundation’s VP of compliance and legal Steve Winslow told VentureBeat. “Open source software is typically made of copyrightable works, where authorship is important. By contrast, data may frequently have little or no applicable intellectual property rights, and authorship and attribution are often less important.”

But isn’t attribution still desirable, even if it’s not always going to be applicable or relevant? According to Winslow, enforcing data attribution could have some negative consequences in terms of organizations’ willingness to collaborate around data.

“Some data recipients may still choose to attribute the data, to show that the data is trustworthy based on its source,” Winslow said. “But it will be their call, and not a requirement, as it could impose limitations on how to organize and analyze the data or force unintended burdens on data collaboration.”

For example, let’s assume data from multiple contributors — which could run into the thousands — is pooled into a single data set. If the data set is only ever used in that combined form, then attribution isn’t a huge burden. But if the data set is subsequently split into subsets which are redistributed separately or combined with a different data set, then that creates a whole lot of work in terms of determining which attributions apply to the new data set. In short, things can rapidly descend into confusion and chaos.
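
A toy model makes the bookkeeping concrete. In the sketch below (entirely hypothetical; the CDLA defines no such mechanism), every record carries its contributor, and any split or merge forces the attribution list to be recomputed for the derivative set:

```python
from dataclasses import dataclass

# Entirely hypothetical provenance model, used only to illustrate the burden.
@dataclass
class Record:
    contributor: str
    value: float

pooled = [Record("org-a", 1.0), Record("org-b", 2.0), Record("org-c", 3.0)]

# Split the pool into a subset for separate redistribution...
subset = [r for r in pooled if r.value >= 2.0]

# ...and the attribution notice must be rebuilt from the records that remain.
attributions = sorted({r.contributor for r in subset})
print(attributions)  # ['org-b', 'org-c']; org-a's notice no longer applies
```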

Transition

Several companies have already revealed plans to make their existing open data sets available under the new CDLA-Permissive-2.0 license, including Microsoft’s research arm, which will now transition some of its open data sets, including Hippocorpus, Public Perception of Artificial Intelligence, Xbox Avatars Descriptions, Dual Word Embeddings, and GPS Trajectory.

Elsewhere, IBM’s Project CodeNet, NOAA JFK, Airline Reporting Carrier On-Time Performance, PubLayNet, and Fashion-MNIST will also transition to the new license.

Repost: Original Source and Author Link

Categories
AI

Ambarella unveils two new AI chip families for 4K security cameras

Ambarella unveiled two new AI chip families for 4K security cameras today as it pushes further into computer vision.

The Santa Clara, California-based company is introducing its new CV5S and CV52S security chips as the latest in its system-on-chip portfolio based on the CVflow architecture. The chips use an advanced 5-nanometer manufacturing process, where the width between circuits is five-billionths of a meter.

With the combination of new designs and better miniaturization from the manufacturing process, the SoCs can support simultaneous 4K encoding and advanced AI processing in a single low-power design, which delivers strong AI SoC performance per watt of power consumed, Ambarella CEO Fermi Wang said in an interview with VentureBeat.

“If you have a camera, you want to cover a very big space, and then you want to use a 4K camera to cover all 360 degrees,” Wang said. “You want to have a single chip to talk to the 4K sensors. You want to process all of the videos together and analyze the video.”

The CV5S family targets security camera applications that require multiple sensors for 360-degree coverage over a wide area and at long range, such as outdoor city environments or large buildings. Ambarella designed the CV52S family for single-sensor security cameras with advanced AI performance that need to more clearly identify individuals or objects in a scene, including faces and license plate numbers at long distances, as in intelligent transportation system (ITS) traffic cameras.

Camera applications

Image: Ambarella’s CV5S AI chips can power the latest security cameras. (Credit: Ambarella)

The cameras can be used for detecting the faces of criminals in crowds or recognizing license plates, which of course raises privacy concerns. They can also send an alert if a crowd is forming in a part of a city, and they can monitor packages left behind in stations, airports, and more. Applications also include managing traffic congestion, detecting vehicle accidents, automating speed control, locating missing or stolen vehicles, monitoring queues in retail environments, better managing retail product placement, and enhancing warehouse tracking, generally providing more actionable intelligence at both the store and corporate levels. The more invasive applications can be countered with privacy masks, which continuously obscure certain portions of larger scenes.

“We are generating revenue based on all of the CVflow family today,” Wang said. “If you look at all the possible applications that we can address here — our current security camera market, automotive markets — you can see that there’s a lot more opportunities. We’re talking about smart homes, smart cities, smart retail, and also in the future robotics. There are many, many applications that we can address moving forward.”

John Lorenz, senior technology and market analyst at Yole Développement, said in a statement that security system designers want higher resolution cameras, more channels, and faster AI. He said Ambarella’s new chips are competitive in the security chip market, which is expected to exceed $4 billion by 2025, with two-thirds of that being chips with AI capabilities.

The new CV5S SoC family supports multi-imager camera designs and can simultaneously process and encode four imager channels of up to 8 megapixels (MP), or 4K resolution, each at 30 frames per second, while performing advanced AI on each 4K imager. These SoCs double the encoding resolution and memory bandwidth while consuming 30% less power than Ambarella’s prior generation.
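
A quick back-of-the-envelope check, using only the figures quoted above, shows the raw pixel rate involved:

```python
# Figures from the CV5S description: four imager channels, 8 MP each, 30 fps.
channels = 4
megapixels_per_frame = 8
frames_per_second = 30

throughput = channels * megapixels_per_frame * frames_per_second
print(f"{throughput} megapixels/second")  # 960 megapixels of 4K video per second
```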

Image: Fermi Wang is CEO of Ambarella. (Credit: Ambarella)

The new CV52S SoC family targets single-sensor security cameras and supports 4K resolution at 60fps, while providing four times the AI computer vision performance, two times the central processing unit (CPU) performance, and 50% more memory bandwidth than its predecessors. This increase in neural network (NN) performance enables more AI processing to be performed at the edge, instead of in the cloud.

“Because you do all the video analytics at the edge, the full video doesn’t need to leave the camera,” Wang said. “You only pass the data that you analyze along to the cloud.”

That’s important as you don’t want traffic from self-driving cars to clog up the wireless bandwidth connections to datacenters.

“The biggest difference is our approach, which we have talked about in the past; we call it ‘algorithm first,’” Wang said. “Basically, when we do the video compression, video processing, or image processing, and then the computer vision for the deep neural network or AI processor, we look first at what kind of algorithm we want to implement, and from there we determine the hardware architecture. After we go through all the areas of study, we understand how an application works, which portion takes the most computation performance, and where we can optimize without losing the performance or accuracy of the algorithm.”

Image: Ambarella is making AI chips for a spectrum of devices. (Credit: Ambarella)

He added, “And after going through all the tradeoffs, we create an architecture that tries not only to deliver the best performance, but also the best power consumption. I think we are very differentiated and can compete.”

In addition to security, there are many other AI-based internet of things (IoT) applications that can take advantage of the high resolution and advanced AI processing provided by these new SoC families. For example, smart cities can leverage high edge AI performance and image resolution for improved traffic management, accident detection, and automated speed control, as well as the rapid location of missing and stolen vehicles.

“If there’s an accident where there is traffic congestion, or if you need to find a missing vehicle, then you can get enough information to monitor all of the smart city requirements,” Wang said. “You need to do real-time management.”

Likewise, smart retail operations can use this resolution and advanced AI to better manage product placement, adjust cashier staffing for real-time line management, enhance warehouse product tracking, and provide more actionable intelligence at both the store and corporate levels.

The chip families share features such as a software development kit (SDK) for the security camera market, CVflow development tools, and dual Arm Cortex-A76 1.6GHz CPUs with 1MB of L3 cache memory, a twofold performance gain over prior generations. They also feature enhanced image signal processing with high-dynamic-range, low-light ISO, dewarping, and rotation performance.

They also have on-chip privacy masking to block out a portion of the captured scene, connector interfaces, and on-chip cybersecurity hardware with secure boot, one-time-programmable (OTP) memory, and Arm TrustZone technology. They can support up to 14 cameras and a variety of memory types.

The CV5S and CV52S SoC families are expected to be available for sampling in October.

Origins

Image: Ambarella’s chip portfolio. (Credit: Ambarella)

Ambarella was founded in 2005. It has evolved over the years from a video processor chip design firm to a designer of computer vision chips for a variety of markets. It started with video processors for cameras and video cameras. Then it transitioned to making AI-based chips for automobiles and security cameras.

“We went through many different markets, some good, some bad,” Wang said. “We did chips for camcorders and GoPro sports cameras, DJI drone cameras, and eventually there was no innovation in these markets.”

The markets that have lasted longer include security cameras, which require ever-increasing levels of resolution and quality, and automotive cameras, which started in 2011 and continue today as cars need more video sensors and AI processing to distinguish driving hazards.

“Over time, we believe that video analytics will become very important, where you can interpret what the computer vision shows,” Wang said.

Deep learning neural networks are necessary for that work, and that has put a lot of pressure on better AI processing while at the same time delivering better efficiency with low power consumption and lower costs.

Image: Ambarella’s CVflow architecture is the basis for a lot of chip families. (Credit: Ambarella)

“We started our CVflow family with 10-nanometer production, and today we are at five nanometers,” Wang said. “And we also build tons of different software, including a tool to convert any neural network designed by our customers.”

Over the past several years, Ambarella has spent $500 million on computer vision research and development, and it reported last year’s revenue for the segment at $25 million, Wang said. Analysts expect the company to hit $75 million in computer vision revenue in 2021.

“It’s not only just a product anymore,” Wang said. “It’s really a revenue generator for our customers. We proved that the investment was really important, and I’m glad we went through that.”

The company has about 40 mass-production customers now. And they are asking for better and better performance.

“If you use 8K performance to process multiple video streams at the same time, then it becomes a mainstream product,” Wang said. “In fact, I can say that in security cameras, people want to connect four to six to eight cameras to one single chip.”

Repost: Original Source and Author Link

Categories
Game

AMD unveils a liquid-cooled version of the Radeon RX 6900 XT GPU

AMD is bringing liquid cooling to the Radeon RX 6900 XT with a new version of the GPU. The graphics card can run at higher clock speeds than its sibling, up to 2,250MHz instead of 2,015MHz. It also has a higher board power of 330 watts, compared with the standard RX 6900 XT’s 300W, according to VideoCardz.

Perhaps most interestingly, the Radeon RX 6900 XT Liquid Edition seems to have increased memory bandwidth over the standard model, as The Verge notes: the memory speed appears to be up from 16Gbps to 18Gbps. Engadget has contacted AMD for confirmation. Otherwise, the liquid-cooled GPU has the same memory size and Infinity Cache, and an identical number of compute units to the RX 6900 XT.
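
Taken together, the reported numbers work out to uplifts of roughly 10 to 12 percent; here is a minimal sketch using only the figures above (the memory speed still pending AMD’s confirmation):

```python
# Reported specs: standard RX 6900 XT vs. the liquid-cooled edition.
specs = {
    "boost clock (MHz)":   (2015, 2250),
    "board power (W)":     (300, 330),
    "memory speed (Gbps)": (16, 18),  # unconfirmed by AMD at press time
}

for name, (standard, liquid) in specs.items():
    uplift = (liquid - standard) / standard * 100
    print(f"{name}: {standard} -> {liquid} (+{uplift:.1f}%)")
# boost clock: +11.7%, board power: +10.0%, memory speed: +12.5%
```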

It has a two-slot height, so it’s smaller than the air-cooled model, which required 2.5 slots. The Radeon RX 6900 XT Liquid Edition has an HDMI 2.1 port, two DisplayPort connections, and a single USB-C port.

“The AMD-designed liquid-cooled Radeon RX 6900 XT graphics card is a limited-edition product for select system integrators,” an AMD spokesperson told Engadget. “AMD will be making a limited number of Radeon RX 6900 XT liquid-cooled reference design graphics cards available to select system integrators, who are a critical part of the gaming ecosystem, to offer this model through their channels. Please contact system integrator partners for system pricing.”

The $1,000 RX 6900 XT sits at the higher end of the Radeon 6000 series lineup and was one of AMD’s first RDNA 2 PC cards. The company made another addition to the 6000 series line in March with the $479 Radeon RX 6700 XT.

Repost: Original Source and Author Link

Categories
Tech News

EU unveils plans for digital ID wallet for accessing services across the bloc

The EU has unveiled plans for a digital ID wallet that people in the bloc could use to access services across the 27 member states.

Citizens will be able to use the wallets to prove their ID and share electronic documents “with the click of a button on their phone,” the European Commission said on Thursday.

“The European digital identity will enable us to do in any Member State as we do at home without any extra cost and fewer hurdles,” said Margrethe Vestager, the European Commission’s executive vice president for digital.

Under the proposals, the bloc’s 27 member states will offer citizens and businesses digital wallets that can link their national digital IDs with documents including driving licenses, diplomas, and bank accounts.

They could then use the wallets to rent cars, enroll in colleges, or open bank accounts in other member states.

Online platforms such as Google and Facebook would also be required to accept the wallets to access their services.

“Because of that, you can decide how much data you want to share — only enough to identify yourself,” Vestager said during a virtual media briefing.
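
The Commission hasn’t published the wallet’s technical specification, so the sketch below is purely illustrative of the selective-disclosure idea Vestager describes: the holder approves which attributes leave the wallet, and nothing else is shared.

```python
# Purely illustrative; no relation to the EU wallet's actual design.
credential = {
    "name": "Ada Example",
    "date_of_birth": "1990-01-01",
    "driving_license": "B",
    "diploma": "MSc Computer Science",
    "iban": "DE89 ...",  # bank details stay private unless approved
}

def present(credential: dict, requested: set, approved: set) -> dict:
    """Release only the attributes the holder explicitly approves."""
    return {k: credential[k] for k in requested & approved if k in credential}

# A car rental service requests three attributes; the user approves two.
disclosed = present(
    credential,
    requested={"name", "driving_license", "iban"},
    approved={"name", "driving_license"},
)
print(disclosed)  # e.g. {'name': 'Ada Example', 'driving_license': 'B'}
```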

The EU says the wallets will be available to any citizen, resident, and business in the bloc, but they will not be mandatory.

The Commission hopes to reach an agreement with member states on the proposals by September 2022 before commencing pilot projects.

Critics have raised concerns about potential security risks, but the EU has big hopes for the system. The bloc says it could generate €9.6 billion ($11.7 billion) in extra revenue and up to 27,000 new jobs over five years.

Repost: Original Source and Author Link