Categories
AI

Meta announces plans to build an AI-powered ‘universal speech translator’

Meta, the owner of Facebook, Instagram, and WhatsApp, has announced an ambitious new AI research project to create translation software that works for “everyone in the world.” The project was announced as part of an event focusing on the broad range of benefits Meta believes AI can offer the company’s metaverse plans.

“The ability to communicate with anyone in any language — that’s a superpower people have dreamed of forever, and AI is going to deliver that within our lifetimes,” said Meta CEO Mark Zuckerberg in an online presentation.

The company says that although commonly spoken languages like English, Mandarin, and Spanish are well served by current translation tools, roughly 20 percent of the world’s population does not speak a language covered by these systems. Often, these underserved languages lack the easily accessible corpora of written text needed to train AI systems, or have no standardized writing system at all.

Meta says it wants to overcome these challenges by deploying new machine learning techniques in two specific areas. The first focus, dubbed No Language Left Behind, will concentrate on building AI models that can learn to translate language using fewer training examples. The second, Universal Speech Translator, will aim to build systems that directly translate speech in real-time from one language to another without the need for a written component to serve as an intermediary (a common technique for many translation apps).
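
For illustration, here is a minimal sketch of the cascaded pipeline described above, in which speech is first transcribed to text and the text is then translated. The Hugging Face model checkpoints are illustrative public stand-ins, not Meta’s systems; a direct speech-to-speech translator would skip the written intermediary entirely.

```python
# A minimal sketch of the conventional *cascaded* approach that a direct
# speech-to-speech system aims to bypass: speech -> text -> translated text.
# Model names are public, illustrative checkpoints, not Meta's models.
from transformers import pipeline

# Step 1: speech -> text (the written intermediary Meta wants to remove)
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
transcript = asr("clip.wav")["text"]  # e.g. "GOOD MORNING EVERYONE"

# Step 2: text -> translated text
translator = pipeline("translation_en_to_fr", model="t5-small")
translated = translator(transcript.lower())[0]["translation_text"]

print(transcript, "->", translated)
# A direct speech-to-speech system would map source audio to target speech
# without ever producing `transcript`, which matters for unwritten languages.
```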

In a blog post announcing the news, Meta researchers did not offer a timeframe for completing these projects or even a roadmap for major milestones in reaching their goal. Instead, the company stressed the utopian possibilities of universal language translation.

“Eliminating language barriers would be profound, making it possible for billions of people to access information online in their native or preferred language,” they write. “Advances in [machine translation] won’t just help those people who don’t speak one of the languages that dominates the internet today; they’ll also fundamentally change the way people in the world connect and share ideas.”

Crucially, Meta also envisions that such technology would hugely benefit its globe-spanning products — furthering their reach and turning them into essential communication tools for millions. The blog post notes that universal translation software would be a killer app for future wearable devices like AR glasses (which Meta is building) and would also break down boundaries in “immersive” VR and AR reality spaces (which Meta is also building). In other words, though developing universal translation tools may have humanitarian benefits, it also makes good business sense for a company like Meta.

It’s certainly true that advances in machine learning in recent years have hugely improved the speed and accuracy of machine translation. A number of big tech companies, from Google to Apple, now offer users free AI translation tools that are used for work and tourism and undoubtedly provide incalculable benefits around the world. But the underlying technology has its problems, too, with critics noting that machine translation misses nuances critical for human speakers, injects gendered bias into its outputs, and can throw up the kind of weird, unexpected errors only a computer makes. Some speakers of uncommon languages also say they fear losing hold of their speech and culture if the ability to translate their words is controlled solely by big tech.

Considering such errors is critical when massive platforms like Facebook and Instagram apply such translations automatically. Consider, for example, a case from 2017 when a Palestinian man was arrested by Israeli police after Facebook’s machine translation software mistranslated a post he shared. The man wrote “good morning” in Arabic, but Facebook translated this as “hurt them” in English and “attack them” in Hebrew.

And while Meta has long aspired to global access, the company’s own products remain biased towards countries that provide the bulk of its revenue. Internal documents published as part of the Facebook Papers revealed how the company struggles to moderate hate speech and abuse in languages other than English. These blind spots can have incredibly deadly consequences, as when the company failed to tackle misinformation and hate speech in Myanmar prior to the Rohingya genocide. And similar cases involving questionable translations occupy Facebook’s Oversight Board to this day.

So while a universal translator is an incredible aspiration, Meta will need to prove not only that its technology is equal to the task but that, as a company, it can apply its research fairly.

Repost: Original Source and Author Link

Categories
AI

A Meta prototype lets you build virtual worlds by describing them

Meta is testing an artificial intelligence system that lets people build parts of virtual worlds by describing them, and CEO Mark Zuckerberg showed off a prototype at a live event today. The proof of concept, called Builder Bot, could eventually draw more people into Meta’s Horizon “metaverse” virtual reality experiences. It could also advance creative AI tech that powers machine-generated art.

In a prerecorded demo video, Zuckerberg walked viewers through the process of making a virtual space with Builder Bot, starting with commands like “let’s go to the beach,” which prompts the bot to create a cartoonish 3D landscape of sand and water around him. (Zuckerberg describes this as “all AI-generated.”) Later commands range from broad demands like creating an island to extremely specific requests like adding altocumulus clouds and — in a joke poking fun at himself — a model of a hydrofoil. They also include playing sound effects like “tropical music,” which Zuckerberg suggests is coming from a boombox that Builder Bot created, although it could also have been general background audio. The video doesn’t specify whether Builder Bot draws on a limited library of human-created models or if the AI plays a role in generating the designs.

Several AI projects have demonstrated image generation based on text descriptions, including OpenAI’s DALL-E, Nvidia’s GauGAN2, and VQGAN+CLIP, as well as more accessible applications like Dream by Wombo. But these well-known projects involve creating 2D images (sometimes very surreal ones) without interactive components, although some researchers are working on 3D object generation.
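
For reference, text-to-image generation of this kind can be reproduced with openly available models. The sketch below uses the Hugging Face diffusers library and a public Stable Diffusion checkpoint as an analogue; it is not DALL-E, GauGAN2, or Builder Bot itself, and it produces a flat 2D image rather than a walkable 3D scene.

```python
# A minimal sketch of text-to-image generation with an openly available
# diffusion pipeline (Hugging Face diffusers). Illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt plays the same role as Builder Bot's voice commands,
# but the output here is a 2D image, not an interactive 3D space.
image = pipe("a cartoonish beach with an island and altocumulus clouds").images[0]
image.save("beach.png")
```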

As described by Meta and shown in the demo, Builder Bot appears to be using voice input to add 3D objects that users can walk around, and Meta is aiming for more ambitious interactions. “You’ll be able to create nuanced worlds to explore and share experiences with others with just your voice,” Zuckerberg promised during the event keynote. Meta made several other AI announcements during the event, including plans for a universal language translator, a new version of a conversational AI system, and an initiative to build new translation models for languages without large written data sets.

Zuckerberg acknowledged that sophisticated interactivity, including the kinds of usable virtual objects many VR users take for granted, poses major challenges. AI generation can pose unique moderation problems if users ask for offensive content or the AI’s training reproduces human biases and stereotypes about the world. And we don’t know the limits of the current system. So for now, you shouldn’t expect to see Builder Bot pop up in Meta’s social VR platform — but you can get a taste of Meta’s plans for its AI future.

Update 12:50PM ET: Added details about later event announcements from Meta.

Repost: Original Source and Author Link

Categories
AI

Meta launches PyTorch Live to build AI-powered mobile experiences

During its PyTorch Developer Day conference, Meta (formerly Facebook) announced PyTorch Live, a set of tools designed to make it easier to build AI-powered experiences for mobile devices. PyTorch Live offers a single programming language — JavaScript — to build apps for Android and iOS, as well as a process for preparing custom machine learning models to be used by the broader PyTorch community.

“PyTorch’s mission is to accelerate the path from research prototyping to production deployment. With the growing mobile machine learning ecosystem, this has never been more important than before,” a spokesperson told VentureBeat via email. “With the aim of helping reduce the friction for mobile developers to create novel machine learning-based solutions, we introduce PyTorch Live: a tool to build, test, and (in the future) share on-device AI demos built on PyTorch.”

PyTorch Live

PyTorch, which Meta publicly released in January 2017, is an open source machine learning library based on Torch, a scientific computing framework and script language that is in turn based on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see rapid uptake in the data science and developer community. It claimed one of the top spots among fast-growing open source projects in GitHub’s 2018 Octoverse report, and Meta recently revealed that in 2019 the number of contributors to the platform grew more than 50% year-over-year to nearly 1,200.

PyTorch Live builds on PyTorch Mobile, a runtime that allows developers to go from training a model to deploying it while staying within the PyTorch ecosystem, and the React Native library for creating visual user interfaces. PyTorch Mobile powers the on-device inference for PyTorch Live.

PyTorch Mobile launched in October 2019, following the earlier release of Caffe2go, a mobile CPU- and GPU-optimized version of Meta’s Caffe2 machine learning framework. PyTorch Mobile can launch with its own runtime and was created with the assumption that anything a developer wants to do on a mobile or edge device, the developer might also want to do on a server.
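
To make that division of labor concrete, the following sketch shows how a model is typically prepared for the PyTorch Mobile runtime that powers PyTorch Live’s on-device inference. The model choice and file names are illustrative; PyTorch Live’s CLI and data processing API are intended to streamline this kind of preparation rather than require it by hand.

```python
# A minimal sketch of preparing a model for the PyTorch Mobile runtime.
# Model and file names are illustrative assumptions.
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# In practice you would load trained weights; weights=None keeps the sketch offline.
model = torchvision.models.mobilenet_v3_small(weights=None).eval()

# Trace the model into TorchScript so it can run without the Python runtime.
example = torch.rand(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Apply mobile-specific graph optimizations and save for the lite interpreter,
# the format PyTorch Mobile loads on Android and iOS.
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("mobilenet_v3_small.ptl")
```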

“For example, if you want to showcase a mobile app model that runs on Android and iOS, it would have taken days to configure the project and build the user interface. With PyTorch Live, it cuts the cost in half, and you don’t need to have Android and iOS developer experience,” Meta AI software engineer Roman Radle said in a prerecorded video shared with VentureBeat ahead of today’s announcement.

Built-in tools

PyTorch Live ships with a command-line interface (CLI) and a data processing API. The CLI enables developers to set up a mobile development environment and bootstrap mobile app projects. As for the data processing API, it prepares and integrates custom models to be used with the PyTorch Live API, which can then be built into mobile AI-powered apps for Android and iOS.

In the future, Meta plans to enable the community to discover and share PyTorch models and demos through PyTorch Live, as well as provide a more customizable data processing API and support machine learning domains that work with audio and video data.


“This is our initial approach of making it easier for [developers] to build mobile apps and showcase machine learning models to the community,” Radle continued. “It’s also an opportunity to take this a step further by building a thriving community [of] researchers and mobile developers [who] share and utilize pilots mobile models and engage in conversations with each other.”


Repost: Original Source and Author Link

Categories
AI

Why enterprises should build AI infrastructure first

Artificial intelligence is bringing new levels of automation to everything from cars and kiosks to utility grids and financial networks. But it’s easy to forget that before the enterprise can automate the world, it has to automate itself first.

As with most complicated systems, IT infrastructure management is ripe for intelligent automation. As data loads become larger and more complex and the infrastructure itself extends beyond the datacenter into the cloud and the edge, the speed at which new environments are provisioned, optimized, and decommissioned will soon exceed the capabilities of even an army of human operators. That means AI will be needed on the ground level to handle the demands of AI initiatives higher up the IT stack.

AI begins with infrastructure

In a classic Catch-22, however, most enterprises are running into trouble deploying AI on their infrastructure, in large part because they lack the tools to leverage the technology in a meaningful way. A recent survey by Run:AI shows that few AI models are getting into production – less than 10% at some organizations – with many data scientists still relying on manual access to GPUs and other elements of data infrastructure to get projects to the finish line.

Another study by Global Surveys showed that just 17% of AI and IT practitioners report seeing high utilization of hardware resources, with 28% reporting that much of their infrastructure remains idle for large periods of time. And this is after their organizations have poured millions of dollars into new hardware, software, and cloud resources, in large part to leverage AI.

If the enterprise is to successfully carry out the transformation from traditional modes of operation to fully digitized ones, AI will have to play a prominent role. IT consultancy Aarav Solutions points out that AI is invaluable when it comes to automating infrastructure support, security, resource provisioning, and a host of other activities. Its secret sauce is the capability to analyze massive data sets at high speed and with far greater accuracy than manual processes, giving decision-makers granular insight into the otherwise hidden forces affecting their operations.

A deeper look at all the interrelated functions that go into day-to-day infrastructure management makes it a wonder the enterprise has gotten this far without AI. XenonStack COO and CDS Jagreet Kaur Gill recently highlighted the myriad functions AI can kick into hyperspeed, everything from capacity planning and resource utilization to anomaly detection and real-time root cause analysis. With the ability to track and manage millions of events at a time, AI will provide the foundation that allows the enterprise to maintain the scale, reliability, and dynamism the digital economy demands.
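
As a small, hedged illustration of one item on that list, the sketch below flags anomalous utilization readings with scikit-learn’s IsolationForest. The synthetic data and thresholds are assumptions for demonstration only, not a description of any vendor’s tooling.

```python
# A minimal sketch of anomaly detection on infrastructure metrics using
# scikit-learn's IsolationForest on synthetic CPU-utilization samples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal utilization hovers around 55%; inject a few runaway spikes.
cpu = np.concatenate([rng.normal(55, 8, 500), [98, 99, 97]]).reshape(-1, 1)

detector = IsolationForest(contamination=0.01, random_state=0).fit(cpu)
flags = detector.predict(cpu)  # -1 marks samples the model considers anomalous

for value in cpu[flags == -1].ravel():
    print(f"anomalous utilization sample: {value:.1f}%")
```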

Intelligence to the edge

With this kind of management stack in place, says Sandeep Singh, vice president of storage marketing at HPE, it’s not too early to start talking about AIOps-driven frameworks and fully autonomous IT operations, particularly in greenfield deployments between the edge and the cloud. The edge, after all, is where much of the storage and processing of IoT data will take place, but it is also characterized by a highly dispersed physical footprint, with small, interconnected nodes pushed as close to user devices as possible. By its very nature, then, the edge must be autonomous. Using AIOps, organizations will be able to build self-sufficient, real-time analytics and decision-making capabilities, while at the same time ensuring maximum uptime and failover should anything happen to disrupt operations at a given endpoint.

Looking forward, it’s clear that AI-empowered infrastructure will be more than just a competitive advantage; it will be an operational necessity. Given the amount of data generated by an increasingly connected world, plus the quickly changing nature of all the digital processes and services this entails, there is simply no way to manage these environments without AI.

Intelligence will be the driving force in enterprise operations as the decade unfolds, but just like any other technology initiative, it must be implemented from the ground up – and that process starts with infrastructure.


Repost: Original Source and Author Link

Categories
AI

BMW uses Nvidia’s Omniverse to build state-of-the-art factories

BMW has standardized on a new technology unveiled by Nvidia, the Omniverse, to simulate every aspect of its manufacturing operations, in an effort to push the envelope on smart manufacturing.

BMW has done this down to the work order instructions for factory workers across the 31 factories in its production network, reducing production planning time by 30%, the company said.

During Nvidia’s GTC November 2021 Conference, members of BMW’s Digital Solutions for Production Planning and Data Management for Virtual Factories team provided an update on how far BMW and Nvidia have progressed in simulating manufacturing operations with digital twins. Their presentation, BMW and Omniverse in Production, provides a detailed tour of how the Regensburg factory has a fully functioning, real-time digital twin capable of simulating production at scale and constraint-based finite scheduling, down to work order instructions and robotics programming on the shop floor.

Improving product quality, reducing manufacturing costs and unplanned downtime while increasing output, and ensuring worker safety are goals all manufacturers strive for, yet seldom reach consistently. Achieving them depends largely on how fluidly, and in how close to real time, data from production and process monitoring, product definition, and shop floor scheduling is shared across manufacturing in a format each team can understand and use.

Overcoming the challenges of achieving these goals motivates manufacturers to adopt analytics, AI, and digital twin technologies. At the heart of these challenges is the need to accurately decipher the massive amount of data manufacturing operations generate daily. Getting the most value out of data that any given manufacturing operation generates daily is the essence of smart manufacturing.

Defining What A Factory of the Future Is

McKinsey and the World Economic Forum (WEF) are studying what sets exceptional factories apart from all the others. Their initial collaborative research and many subsequent studies, including the creation of the Shaping the Future of Advanced Manufacturing and Production Platform, reflect how productive that collaboration has become. McKinsey and the WEF have also set a high bar in their definition of a factory of the future, and they continue to analyze the operations of this select group of manufacturers for clients.

According to McKinsey and the WEF, lighthouse manufacturers take pilots beyond the proof-of-concept stage and into integrated production at scale. They’re also known for their scalable technology platforms, strong performance on change management, and adaptability to changing supply chain, market, and customer constraints, all while maintaining visibility and cost control across the manufacturing process. BMW Automotive is an inaugural member of the lighthouse manufacturing companies McKinsey and the WEF first identified after evaluating over 1,000 companies. The following graphic from their research provides a geographical view of lighthouse manufacturers’ factory locations globally.


Above: McKinsey and WEF’s ongoing collaboration provides new insights into how manufacturers can continue to adopt new technologies to improve operations, add greater visibility and control across shop floors, and keep costs in check. Source: McKinsey and Company, ‘Lighthouse’ manufacturers lead the way—can the rest of the world keep up?

BMW’s Factories of the Future Blueprint

The four sessions BMW contributed to during Nvidia’s GTC November 2021 Conference together provide a blueprint of how BMW transforms its production centers into factories of the future. Core to their blueprint is getting back-end integration services right, including real-time integration with ProjectWise, BMW internal systems Prisma and MAPP, and Tecnomatix eMS. BMW relies on Omniverse Connectors that support live sync with each application on the front end of their tech stacks. Front-end applications include many leading 2D and 3D computer-aided design (CAD), real-time visualization, product lifecycle management (PLM), and advanced imaging tools. BMW standardized on Nvidia Omniverse as the centralized platform to integrate the various back-end and front-end systems at scale so their tech stack could scale and support analytics, AI, and digital twin simulations across 31 manufacturing plants.

Excel at customizing models in real-time

How BMW deployed Nvidia Omniverse explains why they’re succeeding with their factory of the future initiatives while others fail. BMW recognized early that the different clock speeds, or cadences, of each system integral to production, from CAD and PLM to ERP, MES, Quality Management, and CRM, needed to be synchronized around a single source of data everyone could understand. Nvidia Omniverse acts as the data orchestrator and provides information every department can interpret and act on. “Global teams can collaborate using different software packages to design and plan the factory in real-time, using the capability to operate in a perfect simulation, which revolutionizes BMW’s planning processes,” says Milan Nedeljković, member of the Board of Management of BMW AG.

Product customizations dominate BMW’s product sales and production. The company currently produces 2.5 million vehicles per year, and 99% of them are custom. BMW says that each production line can be quickly configured to produce any one of ten different cars, each with up to 100 options or more, giving customers up to 2,100 ways to configure a BMW. In addition, Nvidia Omniverse gives BMW the flexibility to reconfigure its factories quickly to accommodate big new model launches.

Simulating line improvements to save time

BMW succeeds with its product customization strategy because each system essential to production is synchronized on the Nvidia Omniverse platform. As a result, every step in customizing a given model reflects customer requirements and can also be shared in real time with each production team. In addition, BMW says real-time production monitoring data is used for benchmarking digital twin performance. With digital twins of an entire factory, BMW engineers can quickly identify where and how each specific model’s production sequence can be improved. One example is how BMW uses digital humans and simulation to test new workflows for worker ergonomics and efficiency, training the digital humans with data from real associates. They’re also doing the same with the robotics they have in place across plant floors today. Combining real-time production and process monitoring data with simulated results helps BMW’s engineers quickly identify areas for improvement, so quality, cost, and production efficiency goals continue to be met.
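
To show the general idea of testing a change virtually before touching the real line, here is a toy discrete-event simulation in Python using the simpy library. The cycle times and the library itself are illustrative assumptions and have nothing to do with BMW’s or Nvidia’s actual tooling.

```python
# A toy discrete-event simulation in the spirit of evaluating a line change on
# a digital twin before introducing it into production. Numbers are made up.
import simpy

CYCLE_TIME = 62        # seconds per car at the bottleneck station (assumed)
IMPROVED_CYCLE = 58    # candidate improvement to evaluate virtually first

def line(env, cycle_time, counter):
    while True:
        yield env.timeout(cycle_time)   # one body passes the station
        counter["cars"] += 1

def throughput(cycle_time, shift_seconds=8 * 3600):
    env = simpy.Environment()
    counter = {"cars": 0}
    env.process(line(env, cycle_time, counter))
    env.run(until=shift_seconds)
    return counter["cars"]

print("baseline shift output :", throughput(CYCLE_TIME))
print("candidate shift output:", throughput(IMPROVED_CYCLE))
```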


Above: BMW simulates robotics improvements using Nvidia’s Omniverse first before introducing them into production runs to ensure greater accuracy, product quality, and cost goals are going to be met.

For any manufacturer to succeed with a complex product customization strategy like BMW’s, all the systems that manufacturing relies on must stay in sync with each other in real time. The systems need to operate at a common cadence, providing real-time data and information each team can use to do its specific job. BMW is achieving this today, enabling them to plan down to the model-by-model configuration level at scale. They’re also able to test each model configuration in a fully functioning digital twin environment in Nvidia’s Omniverse, and then reconfigure production lines to produce the new models. Real-time production and process monitoring data from existing production lines and digital twins helps BMW’s engineering and production planning teams know where, how, and why to modify digital twins to completely test any new improvement before making it live in production.


Repost: Original Source and Author Link

Categories
AI

Celonis investing heavily to build out process-mining platform

Celonis, a Germany-based maker of process-mining tools, has announced several key acquisitions, capabilities, and partnerships involving its core platform. These include the launch of the Celonis Execution Graph, the acquisition of Lenses.io, a maker of streaming data tools, and a partnership with Canada’s Conexiom.

Gartner Research has characterized process mining as a core technology for building out the digital twin of an organization. These capabilities allow executives to peer inside business processes, identify inefficiencies, and design more efficient workflows by testing out a twin that doesn’t interfere with operations. These recent moves will make it easier to use graph data and streaming data to extend executable digital twins of an organization across process and even company boundaries.

Gartner has identified graph data technology as a core component of the data fabrics required to build digital twins that tie together data and simulations across different enterprise applications. Celonis CEO Alexander Rinke told VentureBeat he finds it helpful to characterize their approach as an “X-Ray for your business.”

New capabilities coming to process-mining

Process mining has traditionally focused on one process at a time. The adoption of graph technology will make it easier to evaluate trade-offs in optimizing different processes simultaneously. The new Celonis Execution Graph uses graph data structures to tie together data from across business processes to see how they influence each other across systems and suppliers. Noteworthy is a new feature, Multi-Event Log, that spots inefficiencies that cross multiple processes.

Maureen Fleming, IDC VP of Intelligent Process Automation Research, told VentureBeat via email, “As organizations successfully use process mining to identify inefficiencies in one process, they often have to investigate upstream processes to find the source of the inefficiency. The effort and time value of visualizing and analyzing the impact of interactions in one process to another through automation rather than manual efforts means the time spent in analysis shrinks and more targeted improvement efforts commence more rapidly.”

Celonis also acquired Lenses.io to simplify real-time data integration into process mining, analytics, and low-code application development. Lenses took a novel approach for transforming raw real-time streaming data into clear business events. For example, this could help combine weather patterns, raw material access, and social trends to identify when shipments might be late. Celonis hopes to combine this capability with the low-code business execution technology from Integromat, which it acquired a year ago.

Fleming said that event stream processing could increase precision and speed problem identification and opportunity detection. This promises to make process mining data more actionable and the results more controllable.

The third major announcement was a partnership with Conexiom, a leader in sales order automation services. The companies plan to collaborate on a touchless order-capture product to make it easier to create business processes that span multiple companies.

Getting a leg up in hyper-automation

Both RPA (robotic process automation) and process mining represent two popular and complementary approaches for automating and reengineering existing business processes in the larger hyper-automation market. RPA platforms help enterprises create bots that mimic the way humans type and click through various business apps. In contrast, process-mining tools analyze enterprise data logs to identify the most promising automation and reengineering opportunities.
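
For a sense of what that mining step looks like in practice, the sketch below derives a directly-follows graph from a tiny, made-up event log using pandas. The log columns and values are illustrative assumptions; real deployments such as Celonis work from far richer logs extracted from ERP and CRM systems.

```python
# A minimal sketch of the core process-mining step: deriving a directly-follows
# graph (which activity follows which, and how often) from an event log.
import pandas as pd

log = pd.DataFrame({
    "case_id":  ["PO-1", "PO-1", "PO-1", "PO-2", "PO-2", "PO-2", "PO-2"],
    "activity": ["Create Order", "Approve", "Ship",
                 "Create Order", "Change Price", "Approve", "Ship"],
    "timestamp": pd.to_datetime([
        "2021-11-01 09:00", "2021-11-01 10:00", "2021-11-02 08:00",
        "2021-11-01 11:00", "2021-11-01 15:00", "2021-11-02 09:00",
        "2021-11-03 10:00"]),
})

log = log.sort_values(["case_id", "timestamp"])
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)

# Count how often one activity is directly followed by another across cases;
# rare or unexpected edges (e.g. rework loops) point at inefficiencies.
dfg = (log.dropna(subset=["next_activity"])
          .groupby(["activity", "next_activity"]).size()
          .sort_values(ascending=False))
print(dfg)
```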

RPA and process mining vendors are expanding into each other’s territory to become the gateway for hyper-automation initiatives that accelerate digital transformation. For example, Celonis previously acquired Integromat, which automates low-code development opportunities identified with process mining. Conversely, leading RPA vendors such as UiPath, Automation Anywhere, and Blue Prism also have moved into process mining and process discovery. Microsoft is also expanding into both RPA and process mining.

These moves should help Celonis secure a lead over other process-mining vendors and in the larger market for hyper-automation, which Gartner expects to reach $596 billion in 2022. They are part of Celonis’s long-term strategy of growing the market for process mining and execution beyond the hot market for RPA.


Repost: Original Source and Author Link

Categories
AI

Streamlit, which helps data scientists build apps, hits version 1.0

Streamlit, a popular app framework for data science and machine learning, has reached its version 1.0 milestone. The open source project is curated by a company of the same name that offers a commercial service built on the platform. So far, the project has had more than 4.5 million GitHub downloads and is used by more than 10,000 organizations.

The framework fills a vital void between data scientists who want to develop a new analytics widget or app and the data engineering typically required to deploy these at scale. Data scientists can build web apps to access and explore machine-learning models, advanced algorithms, and complex data types without having to master back-end data engineering tasks.

Streamlit cofounder and CEO Adrien Treuille told VentureBeat that “the combination of the elegant simplicity of the Streamlit library and the fact that it is all in Python means developers can do things in hours that normally took weeks.”

Examples of this productivity boost include cutting data app development time from three and a half weeks to six hours and reducing 5,000 lines of JavaScript to 254 lines of Python in Streamlit, Treuille said.
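
As a rough illustration of that simplicity, here is a minimal Streamlit app: a few lines of pure Python that become an interactive web app. The data is synthetic and the app is only a sketch of the pattern, not an example from Streamlit or Treuille.

```python
# A minimal sketch of the kind of data app Streamlit targets.
# Run with: streamlit run app.py
import numpy as np
import pandas as pd
import streamlit as st

st.title("Daily signups explorer")

days = st.slider("Days to simulate", min_value=30, max_value=365, value=90)
noise = st.slider("Noise level", 0.0, 2.0, 0.5)

rng = np.random.default_rng(42)
trend = np.linspace(100, 200, days)
data = pd.DataFrame({"signups": trend + rng.normal(0, 30 * noise, days)})

st.line_chart(data)
st.write(f"Average over {days} days: {data['signups'].mean():.0f} signups/day")
```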

The crowded landscape of data science apps

The San Francisco-based company joins a crowded landscape filled with dozens of DataOps tools that hope to streamline various aspects of AI, analytics, and machine-learning development. Treuille attributes the company’s quick growth to being able to fill the gap between data scientists’ tools for rapid exploration (Jupyter notebooks, for one example) and the complex technologies companies use to build robust internal tools (React and GraphQL), front-end interfaces (React and JavaScript), and data engineering tools (dbt and Spark). “This gap has been a huge pain point for companies and often means that rich data insights and models are siloed in the data team,” Treuille said.

The tools are used by everyone from data science students to large companies. The company is seeing the fastest growth in tech-focused enterprises with a large base of Python users and a need to rapidly experiment with new apps and analytics.

“Every company has the same problems with lots of data, lots of questions, and too little time to answer all of them,” Treuille said.

Improvements in v1.0 include faster app speed and responsiveness, improved customization, and support for statefulness. In 2022, the company plans to enhance its widget library, improve the developer experience, and make it easier for data scientists to share code, components, apps, and answers.


Repost: Original Source and Author Link

Categories
Computing

Microsoft Might Build Its Own Apple M1 Chip For Surface

Microsoft could be taking a cue from Apple. A recent job listing (via HotHardware) for Microsoft’s Surface division calls for a system-on-a-chip (SoC) architect, suggesting that Microsoft might be interested in developing its own M1 competitor for future Surface devices.

A job listing doesn’t confirm anything — it only vaguely hints at what Microsoft could be doing, so it shouldn’t be assumed that Microsoft is building an M1 competitor. It makes sense, though, especially considering Apple is expected to launch its M1X chip next week, bolstering its advantage against the Intel competition.

The most recent Surface devices — like the Surface Laptop Studio — use Intel chips, continuing the decades-long partnership Microsoft and Intel have maintained. Apple transitioned away from Intel with the launch of its M1 SoC, which manages to stay cooler and quieter while offering similar levels of performance.

Outside of vertically integrating the hardware, going with an in-house SoC has some advantages. Microsoft would be free to choose the design that best fits its devices instead of a slot-in design from Intel, which could make for a more efficient SoC overall. The M1 is a great example of this. It’s powerful in the right areas, but Apple nixed features like support for several external displays to keep the design focused in a clear direction.

It’s no secret that Apple is leading the way with SoC design. Intel is taking some notes with its Alder Lake processors, which utilize a hybrid architecture similar to the M1. Even if we never see a Surface device with Microsoft-developed silicon, it makes sense for Microsoft to explore the possibility.

Although the Surface Laptop 4 featured an AMD processor, Microsoft has mostly stuck with Intel for its Surface range. And never has a Surface device featured silicon developed by Microsoft. The job listing points to someone who can define “the features and capabilities of SoC used on Surface devices,” which is a hint that Microsoft is interested in at least a semi-customized design.

That’s not new for Microsoft. The Xbox Series X features a semi-custom SoC from AMD, and Microsoft just posted a job calling for an SoC silicon architect for its Xbox and Azure divisions.

We won’t know more for at least a year, though. Microsoft just released a new generation of Surface devices, and although Microsoft could be developing its own SoC, it might end up sticking with Intel or AMD for the next generation. Still, it makes sense for Microsoft to take this step. Even Intel has said that it sees Apple as its main competitor going forward. We can only assume Microsoft is eyeing something similar.

Still, we recommend taking the posting with skepticism. A job listing doesn’t mean much, so it doesn’t confirm that Microsoft is developing an M1 competitor.

Repost: Original Source and Author Link

Categories
Computing

Intel to Help Rival Qualcomm Build Snapdragon Processors

Intel is staging a comeback. But rather than simply fending off competition from its bitter rivals, Intel is embracing Qualcomm to help it regain the silicon crown. As part of its effort to regain leadership, Intel will be opening the doors of its fabs to manufacture processors for other companies, and it announced that it has secured deals to make chips for Qualcomm and Amazon at its factories.

This means that Intel’s foundries will be making chips for its rivals based on designs from ARM, rather than the x86 architecture at the heart of its own processors, like the 11th Gen Core CPUs. The plan, according to Intel, will help it regain leadership in chip manufacturing by 2025 with five core technologies — including RibbonFET, PowerVia, and Foveros enhancements — that change the way transistors are designed and interact with each other.

RibbonFET, for example, is a new design for transistor circuitry that will allow Intel to build smaller, more powerful chips, while PowerVia helps to manage power consumption by the transistors, according to CNET. The company is leveraging its Foveros packaging technology to make its chips even denser. Intel CEO Pat Gelsinger announced that extreme ultraviolet lithography could debut by 2025.

In another change, Intel will be aligning with the rest of the industry on how it names its nodes. Rather than naming each process based on size — Intel had previously named its nodes after the size of their transistors, measured in nanometers — the company is changing to what it describes as standard labeling to compete with other foundries. The Enhanced SuperFin process will be called Intel 7, for example, while Intel’s 7nm node will be referred to as Intel 4. RibbonFET and PowerVia will be integrated into Intel 20A, which will launch in 2024.

The company acknowledged some of the skepticism among analysts and industry insiders after years of missteps and delays.

“We’re laying out a whole lot of details to The Street to hold us accountable,” Intel CEO Pat Gelsinger told Reuters in an interview as he laid out his company’s road map during a presentation.

Intel's process roadmap through 2025.

The partnership with Qualcomm is somewhat surprising, given that the latter is starting to encroach on Intel’s leadership in the PC market with its own ARM-based Snapdragon processors on always-connected PCs through a partnership with Microsoft. Qualcomm’s Snapdragon CPUs compete against Intel’s laptop processors and have been adopted by Lenovo, HP, Samsung, and others. More recently, Qualcomm acquired Nuvia for its ARM-based processor designs, and the company hinted that new always-connected PC processors are expected to arrive as early as next year.

It’s unclear how much of Qualcomm’s business Intel will receive as part of the deal or how much the deal is worth. Qualcomm currently also relies on Samsung to manufacture some of its Snapdragon processors, for example. At one point, Intel was even publicly courting Apple to bring the manufacturing of its A-series and M-series processors to its foundries.

Earlier this year, Gelsinger confirmed that Intel was in talks with Amazon, Cisco, IBM, and Microsoft to be the foundry for those companies’ chips. It’s unclear if Intel will get the Microsoft business, given that Qualcomm designed the custom processor in the Surface Pro X.

With Amazon, Intel won’t be manufacturing the processors that power Amazon Web Services, according to Yahoo Finance. Instead, Amazon is relying on Intel’s packaging technology to build chipsets.

In addition to increasing threats from rivals AMD and Nvidia — the latter is in the process of acquiring ARM from Japanese conglomerate SoftBank — Intel is also facing stiff competition from Apple and Samsung. Apple is already midway through transitioning the Mac away from Intel x86 processors, and to date, the company has launched a new MacBook Air, MacBook Pro, Mac mini, and iMac with its own M1 processor. These are based on the same ARM designs as those found in the iPhone and iPad.

Samsung, meanwhile, is working with AMD to launch an ARM-based Exynos processor with RDNA graphics, and such a chipset could be headed to smartphones, tablets, Chromebooks, and even laptops in the future.

Repost: Original Source and Author Link

Categories
Computing

New Windows 11 Build Makes Notifications Less Annoying

Although Windows 11 can already be tested through the Windows Insider program, the operating system is still not final, which is why Microsoft has just delivered the fourth major Windows 11 build. New in this latest release are some tweaks to the Notification Center, background activity app notifications, updates to the Microsoft Store, and even more rounded corners.

For most people, the biggest noticeable change sits in the Notification Center. It is now possible to access Focus Assist settings directly from the Notification Center, so you can quickly silence your app notifications.

Elsewhere, this release updates the hidden icons flyout in the lower right corner of the Taskbar. It now has more rounded corners — moving away from the leftover squared-off design from Windows 10.

Some other updates in this new Windows 11 build cover the background activity notification for an app in the Taskbar. It used to glow orange in Windows 10; now, apps get a more subtle red pill and a brief flash that eventually fades away. Microsoft says this can help minimize distractions.

Windows Insider builds usually change a lot of other smaller things, so it’s no surprise to see three other changes. These are not as huge, but might be noticeable in daily use of the operating system.

With the latest Windows 11 build, the touch keyboard icon in the Taskbar has been adjusted so it’s no longer huge. The Taskbar calendar flyout also sees a change so it now fully collapses down when clicking the chevron at the top, allowing you to see more notifications. Finally, Microsoft says the new Microsoft Store should now feel a bit faster, with animations for selecting movies and apps.

Based on feedback from beta testing, this fourth Windows 11 release also brings a long list of fixes. The fixes squash bugs with the Windows Explorer process, Task View, Settings app, and more. There are also a set of known issues if you plan to download. You might see issues with entering text in search in the Start Menu, experience a flickering taskbar, and a host of other things. Microsoft highlights all the changes if you’d like to learn more.

Previous Windows 11 builds have tweaked the Start Menu and the app preview icons in the Taskbar. Microsoft even introduced the chat experience for Teams earlier this week, allowing beta testers to keep in contact with friends and family through a built-in app.

Repost: Original Source and Author Link