How optimized object recognition is advancing tiny edge devices

Emza Visual Sense and Alif Semiconductor have demonstrated an optimized face detection model running on Alif’s Ensemble microcontroller, which is based on Arm IP. The companies say the demonstration shows the combination is well suited to low-power artificial intelligence (AI) at the edge.

The emergence of optimized silicon, models, and AI and machine learning (ML) frameworks has made it possible to run advanced AI inference tasks, such as eye tracking and face identification, at the edge at low power and low cost. This opens up new use cases in areas such as industrial IoT and consumer applications.

Making edge devices orders of magnitude faster

By using Alif’s Ensemble microcontroller unit (MCU), which Alif claims is the first MCU to use the Arm Ethos-U55 microNPU, the AI model ran “an order of magnitude” faster than a CPU-only solution with the Cortex-M55 at 400MHz. It appears Alif meant two orders of magnitude: the footnotes state that the high-performance U55 took 4ms, compared to 394ms for the M55 alone, while the high-efficiency U55 executed the model in 11ms. The Ethos-U55 is part of Arm’s Corstone-310 subsystem, for which Arm launched new solutions in April.
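For readers who want to verify that claim, the quoted latencies are enough to do the arithmetic. A quick sanity check in Python, using only the article’s own numbers:

```python
import math

# Inference latencies quoted in Alif's footnotes (milliseconds).
m55_only_ms = 394       # Cortex-M55 CPU running the model alone
u55_high_perf_ms = 4    # high-performance Ethos-U55 configuration
u55_high_eff_ms = 11    # high-efficiency Ethos-U55 configuration

speedup = m55_only_ms / u55_high_perf_ms
print(f"High-performance U55: {speedup:.1f}x faster")      # ~98.5x
print(f"Orders of magnitude: {math.log10(speedup):.2f}")   # ~1.99, i.e. two

print(f"High-efficiency U55: {m55_only_ms / u55_high_eff_ms:.1f}x faster")  # ~35.8x
```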

Emza said it trained a “sophisticated” full face detection model on the NPU that can be used for face detection, yaw (face angle) estimation and facial landmarks. The complete application code has been contributed to Arm’s open-source AI repository, the “ML Embedded Eval Kit,” making Emza the first Arm AI ecosystem partner to do so. The repository can be used to gauge runtime, CPU demand and memory allocation before silicon is available.
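The Eval Kit itself is a C++/CMake project, but the same pre-silicon sizing idea can be illustrated with Arm’s open-source Vela compiler, which prints estimated NPU cycle counts and SRAM usage for a quantized model. A minimal sketch, assuming the ethos-u-vela package is installed and with a placeholder model path; treat the exact flags as illustrative:

```python
import subprocess

# Hypothetical model path; substitute your own quantized .tflite file.
MODEL = "face_detection_int8.tflite"

# Vela compiles the model for a chosen Ethos-U55 configuration and reports
# cycle-count and memory estimates, supporting the kind of pre-silicon
# sizing the article describes.
subprocess.run(
    ["vela", MODEL,
     "--accelerator-config", "ethos-u55-128",  # one of several U55 variants
     "--output-dir", "vela_output"],
    check=True,
)
```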

“To unleash the potential of endpoint AI, we need to make it easier for IoT developers to access higher performance, less complex development flows and optimized ML models,” said Mohamed Awad, vice president of IoT and embedded at Arm. “Alif’s MCU is helping redefine what is possible at the smallest endpoints and Emza’s contribution of optimized models to the Arm AI open-source repository will accelerate edge AI development.” 

Emza claims its visual sensing technology is already shipping in millions of products, and with this demonstration it is expanding its optimized algorithms to SoC vendors and OEMs.

“As we look at the dramatically expanding horizon for TinyML edge devices, Emza is focused on enabling new applications across a broad array of markets,” said Yoram Zylberberg, CEO of Emza. “There is virtually no limit to the types of visual sensing use cases that can be supported by new powerful, highly efficient hardware.”


Microsoft unveils Azure Percept, a family of edge devices optimized for AI

During its Microsoft Ignite 2021 conference this week, Microsoft unveiled Azure Percept, a platform of hardware and services aimed at simplifying the ways customers can use AI technologies at the edge. According to the company, the goal of the new offering is to give customers an end-to-end system, from the hardware to the AI and machine learning capabilities.

Edge computing is forecast to be a $6.72 billion market by 2022. Its growth will coincide with that of the deep learning chipset market, which some analysts predict will reach $66.3 billion by 2025. There’s a reason for these rosy projections — edge computing is expected to make up roughly three-quarters of the total global AI chipset business in the next six years.

The Azure Percept platform includes a development kit with a camera called Azure Percept Vision, as well as a “getting started” experience called Azure Percept Studio that guides customers through the AI lifecycle. Azure Percept Studio includes developing and training resources, as well as guidance on deploying proof-of-concept ideas.

AI at the edge

Azure Percept Vision and Azure Percept Audio (the latter ships separately from the development kit) connect to Azure services and come with embedded hardware-accelerated modules that enable speech and vision AI at the edge, including when the device isn’t connected to the internet. The hardware in the Azure Percept development kit uses the industry-standard 80/20 T-slot framing architecture, which Microsoft says will make it easier for customers to pilot new product ideas.

As customers work on their ideas with the Azure Percept development kit, they’ll have access to Azure Cognitive Services and Azure Machine Learning models, plus AI models from the open-source community designed to run at the edge, Microsoft says. In addition, Azure Percept devices automatically connect to Azure IoT Hub, which helps secure communication between internet of things (IoT) devices and the cloud.
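For a sense of what that device-to-cloud path looks like in code, here is a minimal sketch using Microsoft’s azure-iot-device Python SDK. The connection string and payload are placeholders, and a Percept device connects automatically as the article notes; the sketch just shows what a generic IoT Hub device sends:

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: a real device receives its connection string (or an X.509
# identity) when it is provisioned against Azure IoT Hub.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# Hypothetical payload: the sort of edge inference result a vision device
# might report back to the cloud.
payload = {"label": "person", "confidence": 0.93}
msg = Message(json.dumps(payload))
msg.content_type = "application/json"
msg.content_encoding = "utf-8"
client.send_message(msg)

client.shutdown()
```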

Azure Percept competes with Google’s Coral, a collection of hardware kits and accessories intended to bolster AI development at the edge. And Amazon recently announced the AWS Panorama Appliance, a plug-in device that connects to a network and applies computer vision models to footage from existing cameras for manufacturing, retail, construction, and other industries.

But in addition to announcing first-party hardware, Microsoft says it’s working with third-party silicon and equipment manufacturers to build an ecosystem of devices to run on the Azure Percept platform. Moreover, the company says the Azure Percept team is currently working with select early customers to understand concerns around the responsible development and deployment of AI on devices, providing them with documentation and access to toolkits for their AI implementations.

“We’ve started with the two most common AI workloads, vision and voice [and] sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” Microsoft VP Roanne Sones said. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

A continued investment

In 2018, Microsoft committed $5 billion to intelligent edge innovation by 2022, an uptick from the $1.5 billion it spent prior to 2018, and pledged to grow its IoT partner ecosystem to over 10,000. This investment has borne fruit in Azure IoT Central, a cloud service that enables customers to quickly provision and deploy IoT apps, and IoT Plug and Play, which provides devices that work with a range of off-the-shelf solutions. Microsoft’s investment has also bolstered Azure Sphere; Azure Security Center, its unified cloud and edge security suite; and Azure IoT Edge, which distributes cloud intelligence to run in isolation on IoT devices directly.

Microsoft has competition in Google’s Cloud IoT, a set of tools that connect, process, store, and analyze edge device data. Not to be outdone, Amazon Web Services’ IoT Device Management tracks, monitors, and manages fleets of devices running a range of operating systems and software. And Baidu’s OpenEdge offers a range of IoT edge computing boards and a cloud-based management suite to manage edge nodes, edge apps, and resources such as certification, password, and program code.

But the Redmond, Washington-based company has ramped up its buildout efforts, most recently with the acquisitions of CyberX and Express Logic, a San Diego, California-based developer of real-time operating systems (RTOS) for IoT and edge devices powered by microcontroller units. Microsoft has also partnered with companies like DJI, SAP, PTC, Qualcomm, and Carnegie Mellon University for IoT and edge app development.


Ryzen 4000 CPUs explained: How AMD optimized Zen 2 for laptops

AMD’s highly anticipated Ryzen 4000 mobile CPUs may be built on the same 7nm process as the company’s wildly successful Ryzen 3000 chips, but this time around the company is pinning the chip’s success on a carefully balanced design.

AMD officials said they’ve actually been working on the design for Ryzen 4000 mobile (code-named ‘Renoir’) since 2017, which, they note, predates the introduction of the company’s first Ryzen desktop chips.

The goal for the mobile chip couldn’t be more different. “The challenge in doing a notebook processor is balance: How do we balance the attributes that make it a good notebook processor?” said Dan Bouvier, AMD’s client products chief architect.

A laptop chip can’t go all-out like a desktop chip can. It has to consider the notebook chassis, the Z-height (thickness), the power envelope, and the battery life. “These are all opposing things that work against bringing higher performance,” Bouvier explained, “but you still want to balance that and bring the best performance.”


Most of the architectural changes with Zen 2 are well known, but its 15 percent increase in instructions per clock (IPC) has made Zen 2 the hit it is.

Bouvier added that AMD took a risk by stretching Renoir’s design beyond that of its quad-core predecessor. “When we started Renoir, we said, ‘let’s do quad-core, we’ll just make it faster.’” But Bouvier said AMD realized even more was possible so it aimed for a 6-core CPU. And once those models came back, AMD aimed even higher. “We started looking at the models and said, this is looking pretty good—let’s go further. So we did go eight cores, and we really went out on a limb.”

And remember, Bouvier pointed out: In 2017, competitor Intel was still selling a dual-core CPU.


Two 4-core CCXs are used to build the basic Ryzen 4000 CPUs today.

The Ryzen 4000 CPU’s basic building block is essentially the same 7nm Zen 2 core AMD has used in its Ryzen 3000 series and third-generation Threadripper CPUs, but optimized for mobile. A mobile Ryzen is built from quad-core core complexes, or “CCXs.” Each CCX features four cores with SMT and 512KB of L2 cache per core, plus a 4MB Level 3 cache that’s shared among all four cores. Two CCXs make up an 8-core chip.
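To make the cache arithmetic concrete, here is a small illustrative sketch (a simplification for the article, not AMD’s own topology description) that tallies the per-core and per-CCX figures for an 8-core part:

```python
from dataclasses import dataclass

@dataclass
class CCX:
    cores: int = 4
    l2_per_core_kb: int = 512   # private L2 per core
    l3_shared_mb: int = 4       # L3 shared across the CCX

    @property
    def l2_total_kb(self) -> int:
        return self.cores * self.l2_per_core_kb

# An 8-core Ryzen 4000 part pairs two CCXs.
ccxs = [CCX(), CCX()]
total_cores = sum(c.cores for c in ccxs)               # 8 cores
total_l2_mb = sum(c.l2_total_kb for c in ccxs) / 1024  # 4.0 MB of L2
total_l3_mb = sum(c.l3_shared_mb for c in ccxs)        # 8 MB of L3
print(total_cores, total_l2_mb, total_l3_mb)
```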

You might expect AMD to use a single cluster for power efficiency. Bouvier described it as a “tradeoff,” but noted that the multi-CCX design still enables very high bandwidth, very high frequency, and strong power efficiency.
