Categories
Computing

The Framework Laptop Chromebook may actually be worth $1,000

Modular computer brand Framework is introducing a Chromebook Edition of its innovative Framework Laptop. Preorders are open today in the U.S. and Canada at $1,000.

Much like its standard laptop counterpart, the Framework Laptop Chromebook Edition is built with sustainability and security in mind. It will be a “high-performance, upgradeable, repairable, customizable” device running ChromeOS.

Thanks to Framework’s partnership with Google, the Chromebook Edition will feature a Titan C security chip and will also support automatic updates for up to eight years. To further privacy, the Chromebook includes hardware switches that cut power from the camera and microphones to disable access, the brand said.

The Framework Laptop Chromebook Edition is powered by a 12th Gen Intel Core i5-1240P processor, with a base configuration of 8GB of DDR4 RAM and 256GB of NVMe storage. Upgrade options extend to 64GB of DDR4 RAM and 1TB of NVMe storage, in addition to 250GB and 1TB storage expansion cards.

The display and design on the Chromebook Edition include a 2256 x 1504 screen with a 3:2 aspect ratio and magnet-attach bezels that are compatible with all Framework devices. Housed in a milled aluminum frame, the Chromebook is 0.62 inches thick and weighs 2.87 pounds. It also includes a keyboard with 1.5mm of key travel.

The Framework Laptop Chromebook Edition showing its keyboard.

With its high customization, you have the option to add a number of ports to the Framework Laptop Chromebook Edition, including USB-C, USB-A, MicroSD, HDMI, DisplayPort, and Ethernet, among others. You can also add other features such as high speed storage.

In addition to running ChromeOS, the Chromebook Edition supports other software including downloading Android apps, developing on Linux, and playing PC games with Steam, to name a few.

The Framework Laptop Chromebook Edition includes scannable QR codes on each part of its system to make it easy to repair, replace, and upgrade the product, giving you “unprecedented access to documentation, repair guides” and “insight into design and manufacturing data,” the brand said. The Chromebook’s Embedded Controller firmware and coreboot BIOS are also open source and available to users.

The Framework Laptop Chromebook Edition held in someone's hand.

The first shipments of the Framework Laptop Chromebook Edition will go out to consumers between late November and early December, with Framework saying it will use a batched pre-order system to deliver the Chromebook to customers as units become available.

Those pre-ordering will have to put down a fully refundable $100 deposit to reserve the Chromebook. Starting today, customers can also join a waitlist for replacement parts and modules on the brand’s Marketplace.


Repost: Original Source and Author Link

Categories
AI

Can NIST move ‘trustworthy AI’ forward with new draft of AI risk management framework?


Is your AI trustworthy or not? As the adoption of AI solutions increases across the board, consumers and regulators alike expect greater transparency over how these systems work. 

Today’s organizations not only need to be able to identify how AI systems process data and make decisions to ensure they are ethical and bias-free, but they also need to measure the level of risk posed by these solutions. The problem is that there is no universal standard for creating trustworthy or ethical AI. 

However, last week the National Institute of Standards and Technology (NIST) released an expanded draft for its AI risk management framework (RMF) which aims to “address risks in the design, development, use, and evaluation of AI products, services, and systems.” 

The second draft builds on its initial March 2022 version of the RMF and a December 2021 concept paper. Comments on the draft are due by September 29. 


The RMF defines trustworthy AI as being “valid and reliable, safe, fair and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced.”

NIST’s move toward ‘trustworthy AI’ 

The new voluntary NIST framework provides organizations with parameters they can use to assess the trustworthiness of the AI solutions they use daily. 

The importance of this can’t be overstated, particularly when regulations like the EU’s General Data Protection Regulation (GDPR) give data subjects the right to ask why an organization made a particular decision. Failing to provide an answer could result in a hefty fine.

While the RMF doesn’t mandate best practices for managing the risks of AI, it does begin to codify how an organization can begin to measure the risk of AI deployment. 

The AI risk management framework provides a blueprint for conducting this risk assessment, said Rick Holland, CISO at digital risk protection provider Digital Shadows.

“Security leaders can also leverage the six characteristics of trustworthy AI to evaluate purchases and build them into Request for Proposal (RFP) templates,” Holland said, adding that the model could “help defenders better understand what has historically been a ‘black box‘ approach.” 

Holland notes that Appendix B of the NIST framework, which is titled, “How AI Risks Differ from Traditional Software Risks,” provides risk management professionals with actionable advice on how to conduct these AI risk assessments. 
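To make Holland’s suggestion concrete, here is a minimal, hypothetical Python scorecard built on the draft RMF’s trustworthiness characteristics. The equal weighting, the 0-5 rating scale, and the vendor ratings are invented for illustration; they are not part of the framework itself:

```python
# Hypothetical vendor scorecard keyed on the draft RMF's trustworthiness
# characteristics. The rating scale and example scores are illustrative only.

CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "fair and bias is managed",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
]

def score_vendor(scores: dict) -> float:
    """Average a 0-5 rating across all characteristics; unrated ones count as 0."""
    return sum(scores.get(c, 0) for c in CHARACTERISTICS) / len(CHARACTERISTICS)

# Example: a vendor rated on only four of the characteristics.
vendor = {
    "valid and reliable": 4,
    "safe": 5,
    "secure and resilient": 3,
    "explainable and interpretable": 2,
}
print(round(score_vendor(vendor), 2))  # (4+5+3+2)/7 = 2.0
```

A real RFP template would replace the flat average with whatever weighting and evidence requirements the organization chooses; the point is simply that the characteristics give evaluators a fixed rubric to score against.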

The RMF’s limitations 

While the risk management framework is a welcome addition to support the enterprise’s internal controls, there is a long way to go before the concept of risk in AI is universally understood. 

“This AI risk framework is useful, but it’s only a scratch on the surface of truly managing the AI data project,” said Chuck Everette, director of cybersecurity advocacy at Deep Instinct. “The recommendations in here are that of a very basic framework that any experienced data scientist, engineers and architects would already be familiar with. It is a good baseline for those just getting into AI model building and data collection.”

In this sense, organizations that use the framework should have realistic expectations about what the framework can and cannot achieve. At its core, it is a tool to identify what AI systems are being deployed, how they work, and the level of risk they present (i.e., whether they’re trustworthy or not). 

“The guidelines (and playbook) in the NIST RMF will help CISOs determine what they should look for, and what they should question, about vendor solutions that rely on AI,” said Sohrob Jazerounian, AI research lead at cybersecurity provider Vectra.

The drafted RMF includes guidance on suggested actions, references, and documentation that will enable stakeholders to fulfill the ‘map’ and ‘govern’ functions of the AI RMF. The finalized version, which will cover the remaining two functions, ‘measure’ and ‘manage’, is slated for release in January 2023.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.


Categories
AI

Forrester: Adopting fraud-fighting AI requires the right technical framework


AI tools have become part and parcel of fraud management solutions in the enterprise, according to a new Forrester report. In it, analysts at the firm identify key fraud management use cases where AI can help, mapping how brands can deploy AI technologies in each scenario.

Fraud is on the rise worldwide, and it’s impacting enterprises’ bottom lines. According to a PricewaterhouseCoopers survey, 47% of businesses experienced fraud in 2019 and 2020, which cost a collective $42 billion. Some studies show that the pandemic is playing an increasing role. In a report commissioned by J.P. Morgan, nearly two-thirds of treasury and finance professionals blamed the pandemic for the uptick in payment fraud at their companies.

The Forrester report outlines AI’s unique strengths in combating fraud, including its ability to boost monitoring accuracy, augment human intelligence, and bring biometrics into the mainstream. Feeding rich datasets into fraud detection AI models can help spot fraud that rules-based systems overlook, Forrester notes, while those same models can be used by security teams to prioritize which alerts to investigate. Meanwhile, emerging AI-based solutions like biometric verification enable app authentication in near real time.

The report stresses, however, that organizations need to have the right technical pieces in place to reap the benefits of fraud-detecting AI. For example, prediction models and risk scoring systems must be low-latency to cope with large transaction volumes. Beyond this, data scientists must build training, test, and validation datasets, which can be challenging. According to Forrester, many banks report that their training data is miscategorized and suffers from quality issues like missing fields or inconsistent spellings of people and organizations. And in Asia, banks are reluctant to provide training data to consulting firms — and sometimes even to employees.
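As a rough illustration of the kinds of data-quality checks Forrester describes, the sketch below flags records with missing fields and groups inconsistent spellings of the same organization. The field names, sample records, and normalization rule are assumptions made for this example, not any bank’s actual pipeline:

```python
# Illustrative pre-training data-quality checks: count rows with missing
# fields and detect inconsistent spellings of the same entity.

records = [
    {"payee": "Acme Corp.", "amount": 120.0, "country": "US"},
    {"payee": "ACME Corp",  "amount": 75.5,  "country": None},  # missing field
    {"payee": "J.P. Morgan", "amount": 310.0, "country": "US"},
]

def missing_fields(rows):
    """Count rows where any field is None or empty."""
    return sum(1 for r in rows if any(v in (None, "") for v in r.values()))

def normalize(name: str) -> str:
    """Crude canonical form: lowercase, keep only letters and digits."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def inconsistent_spellings(rows, field="payee"):
    """Group raw spellings by canonical form; return groups with >1 variant."""
    groups = {}
    for r in rows:
        groups.setdefault(normalize(r[field]), set()).add(r[field])
    return {k: v for k, v in groups.items() if len(v) > 1}

print(missing_fields(records))          # 1
print(inconsistent_spellings(records))  # {'acmecorp': {'Acme Corp.', 'ACME Corp'}}
```

Real-world deduplication uses fuzzier matching than this exact canonicalization, but even a crude pass like this surfaces the quality issues Forrester says banks report in their training data.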

Indeed, other reports show that data issues plague companies of all sizes on their AI journeys. In an Alation survey, a clear majority of employees (87%) pegged data quality issues as the reason their organizations failed to successfully implement AI and machine learning. McKinsey similarly estimates that companies may be squandering as much as 70% of their data-cleansing efforts.

Overcoming challenges

Organizations must also ensure that models evolve over time and explain why certain transactions are deemed fraudulent, Forrester says. They also need iterative, closed-loop workflows to train models and integrate them with data sources like blacklists, whitelists, device IDs and reputations, and know-your-customer lists.
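One way to picture such a workflow is a decision step that layers list checks on top of a model score. The sketch below is a simplified illustration, not any vendor’s implementation; the lists, threshold, and decision labels are assumptions:

```python
# Sketch of a fraud decision that combines a model risk score with
# deny/allow list lookups, as described above. All values are illustrative.

DENYLIST = {"device-666"}    # e.g., known-bad device IDs
ALLOWLIST = {"acct-1001"}    # e.g., accounts cleared by know-your-customer checks

def decide(txn: dict, model_score: float, threshold: float = 0.8) -> str:
    """Return 'block', 'review', or 'allow' for a transaction."""
    if txn["device_id"] in DENYLIST:
        return "block"                    # hard rule overrides the model
    if txn["account"] in ALLOWLIST and model_score < 0.95:
        return "allow"                    # trusted account gets a lenient cutoff
    return "review" if model_score >= threshold else "allow"

print(decide({"device_id": "device-666", "account": "acct-1"}, 0.1))   # block
print(decide({"device_id": "device-1", "account": "acct-9"}, 0.85))    # review
```

The “closed loop” part comes from feeding the analyst’s verdict on each `review` case back into the training set, so the model and the lists improve together over time.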

The report recommends that companies use a combination of on-premises and cloud-based implementation options for their AI fraud detection use cases. While on-premises solutions offer convenience, the cloud has the potential to improve training and inferencing performance, Forrester notes, lowering costs and in some cases providing enhanced protection.

“As more fraud management vendors offer cloud-based solutions, it makes sense not to invest solely in static on-premises hardware and software, but to augment their on-premises assets with elastic computational resources in infrastructure-as-a-service or software-as-a-service solutions,” coauthors and analysts Andras Cser, Danny Mu, and Meng Liu wrote.

Once barriers to the adoption of fraud-fighting AI are overcome, the advantages can be enormous. Visa prevents $25 billion in annual fraud, thanks to the AI it developed, SVP and global head of data Melissa McSherry revealed at VentureBeat’s Transform 2021 conference. At a higher level, aggregate potential cost savings for banks from AI applications has been estimated at $447 billion by 2023.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member


Categories
AI

Nice publishes ethical framework for applying AI to customer service


Nice, a provider of a robotic process automation (RPA) platform infused with machine learning algorithms employed in call centers, today published a Robo Ethical Framework for employing AI to better serve customers.

The goal is to provide some direction on how best to employ robots alongside humans in a call center, rather than focusing on how to replace humans, said Oded Karev, vice president of RPA for Nice.

Specifically, the five guiding principles for the framework are:

  1. Robots must be designed for a positive impact: Robots should contribute to the growth and well-being of the human workforce. With consideration to societal, economic, and environmental impacts, every project that involves robots should have at least one positive rationale clearly defined.
  2. Robots must be free of bias: Personal attributes such as race, religion, sex, gender, age, and other protected status should be left out of consideration when creating robots so their behavior is employee agnostic. Training algorithms are evaluated and tested periodically to ensure they are bias-free.
  3. Robots must safeguard individuals: Delegating decisions to robots requires careful consideration. The algorithms, processes, and decisions embedded within robots must be transparent, providing the ability to explain conclusions with unambiguous rationale. Humans must be able to audit a robot’s processes and intervene to redress the system to prevent potential offenses.
  4. Robots must be driven by trusted data sources: Robots must be designed to act based upon verified data from trusted sources. Data sources used for training algorithms should maintain the ability to reference the original source.
  5. Robots must be designed with holistic governance and control: Humans must have complete information about a system’s capabilities and limitations. Robotics platforms must be designed to protect against abuse of power and illegal access by limiting, proactively monitoring, and authenticating any access to the platform and every type of edit action in the system.

Nice is including a copy of this framework with every license of its RPA platform that it sells. Organizations are, of course, under no obligation to implement it, but the company is trying to proactively reduce the “robot anxiety” that currently exists among employees within organizations, said Karev.

That level of anxiety is actually slowing down the rate at which RPA and other AI technologies would otherwise be adopted, Karev added.

Implementing robotics ethically

In general, most organizations are not closing call centers and laying off workers because they deployed an RPA platform. Instead, as more rote tasks become automated, the call center staff is engaging more deeply with customers in a way that increases overall satisfaction. As a result, customers are consuming more services that are now sold to them via a customer service representative.

There are, however, vertical industry segments where customers would rather not engage with anyone at all. They simply want a robot to automate a task, such as registering a product on their behalf. In either scenario, the relationship with end customers is fundamentally evolving, thanks in part to the rise of RPA and AI, noted Karev.

In some cases, organizations overestimate the ability of robots to handle customer interactions in place of humans, added Karev. “Robots are not as smart as some of us think they are,” he cautioned.

In fact, Karev noted that governance is crucial to make sure trusted insiders are not abusing robots for nefarious purposes or that cybercriminals are not hijacking a workflow to siphon revenue.

It’s not clear to what degree the Nice framework will become a real-world counterpart to Asimov’s literary Three Laws of Robotics, which begin by stating that no robot may harm a human or, through inaction, allow a human to come to harm. However, the Nice framework and others like it are a step in the right direction.


Categories
AI

Baidu debuts updated AI framework and R&D initiative


At Wave Summit, Baidu’s bi-annual deep learning conference, the company announced version 2.1 of PaddlePaddle, its framework for AI and machine learning model development. Among the highlights are a large-scale graph query engine; four pretrained models; and PaddleFlow, a cloud-based suite of machine learning developer tools that include APIs and a software development kit (SDK). Baidu also unveiled what it’s calling the Age of Discovery, a 1.5 billion RMB (~$235 million) grant program that will invest over the next three years in AI education, research, and entrepreneurship.

At Wave Summit, Baidu CTO Haifeng Wang outlined the top AI trends from the company’s perspective. Deep learning with knowledge graphs has significantly improved the performance and interpretability of models, he said, while multimodal semantic understanding across language, speech, and vision has become achievable through graphs and language semantics. Moreover, Wang noted, deep learning platforms are coordinating closely with hardware and software to meet various development needs, including computing power, power consumption, and latency.

To this end, PaddlePaddle 2.1 introduces optimized automatic mixed precision, which can speed up the training of models, including Google’s BERT, by up to 3 times. New APIs reduce memory usage, further improve training speeds, and add support for data preprocessing, GPU-based computation, mixed-precision training, and model sharing.
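For readers unfamiliar with mixed precision, the core idea is that storing tensors in float16 halves their memory footprint versus float32, which on supporting hardware also speeds up the arithmetic. The snippet below is a generic NumPy illustration of the memory savings, not PaddlePaddle code:

```python
# Generic illustration of mixed precision's memory savings (not PaddlePaddle
# API code): the same 1024x1024 weight matrix in float32 versus float16.
import numpy as np

weights_fp32 = np.zeros((1024, 1024), dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes (4 bytes per element)
print(weights_fp16.nbytes)  # 2097152 bytes -- half the footprint

# In automatic mixed precision, frameworks typically keep a float32 "master"
# copy of the weights for the optimizer update to avoid precision loss, while
# running forward/backward math in float16.
```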

Also arriving with PaddlePaddle 2.1 are four new language models built from Baidu’s ERNIE. ERNIE, which Baidu developed and open-sourced in 2019, is pretrained on natural language tasks through multitask learning, in which multiple learning tasks are solved at the same time by exploiting the commonalities and differences between them. Beyond this, PaddlePaddle 2.1 brings an optimized pruning and compression toolkit called PaddleSlim, as well as LiteKit, a toolkit for mobile developers that aims to reduce the development costs of edge AI.

PaddlePaddle Enterprise and Age of Discovery

PaddlePaddle Enterprise, Baidu’s business-oriented set of machine learning tools, gained a new service this month in PaddleFlow. PaddleFlow is a cloud platform that provides capabilities for developers to build AI systems, including resources management and scheduling, task execution, and service deployment via developer APIs, a command-line client, and an SDK.

In related news, Baidu says that as a part of its new Age of Discovery initiative, the company will invest RMB 500 million ($78 million) in capital and resources to support 500 academic institutions and train 5,000 AI tutors and 500,000 students with AI expertise by 2024. Baidu also plans to pour RMB 1 billion ($156 million) into 100,000 businesses for “intelligent transformation” and AI talent training.

Laments over the AI talent shortage have also become a familiar enterprise refrain. O’Reilly’s 2021 AI Adoption in the Enterprise paper found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing this as a “significant” barrier. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers working on AI development and research globally, only 25% are “well-versed enough in the technology to work with teams to take it from research to application.”

“PaddlePaddle researchers and developers will collaborate with the open source community to build a deep learning open source ecosystem and break the boundaries of AI technology,” Baidu said in a press release. “With the permeation of AI across various industries, it is critical for platforms to keep lowering their threshold to accelerate intelligent transformation.”


Categories
Tech News

Framework repairable laptop hits preorder status

One thing most laptops and other electronic devices have in common is that it’s nearly impossible to repair them. This is particularly true of notebooks. Many modern designs are completely sealed, leaving users unable to access internal components, while others offer only modest repair options. A company called Framework revealed a modular and highly repairable laptop back in February.

The Framework laptop combines a readily accessible design with modular, interchangeable expansion cards, allowing components to be replaced and upgraded with ease. The machine looks like a basic 13.5-inch notebook on the outside, but it’s much easier to repair. The Framework laptop is now available for preorder, with the machine starting at $999.

The company says it will begin shipping the 13.5-inch notebook at the end of July. Initially, Framework had expected to start shipping in the spring, but the chip supply problem plaguing multiple industries prevented that. Three configurations will be offered: Base, Performance, and Professional. The Base unit is $999, the Professional costs $1,999, and the mid-range Performance model falls in between at $1,399.

Base buyers get an Intel Core i5, 8GB of RAM, and 256GB of storage. The Professional version offers an Intel Core i7, 32GB of RAM, and 1TB of storage at the high end. The Base model ships with Windows 10 Home, while the Professional gets Windows 10 Pro.

Those particularly into DIY can get a bare-bones shell for $749, allowing users to plug in their own components. One of the more interesting aspects of the machine is that it’s customizable with an Expansion Card system enabling users to choose the ports they want and which side they want them on. Four can be selected at a time, including USB-C, USB-A, HDMI, DisplayPort, microSD, and more. Users can also customize the color of the machine using bezels that attach via magnets. Preorders are underway now.


Categories
Computing

Framework Laptop Is Now Available For Pre-order, Starting at $999

The Framework Laptop, a modular notebook that lets you swap most of its parts for repairs or upgrades, is now up for grabs in a limited presale. Announced earlier this year by one of the founders of Oculus, this Framework notebook is said to redefine the “replace when broken” approach to laptops that we often see today. We now know the specs and the price of the laptops that will begin shipping this summer.

With its sleek, modern design, thin bezels, and a weight of just 2.87 pounds, the Framework Laptop almost looks like a MacBook Pro. However, the notebook’s main strength lies not in its looks, but in the ability to upgrade, customize, replace, and repair the majority of its internal and external hardware at will. While many Windows laptops offer the option to replace memory or storage, this lightweight notebook takes it a few steps further.

Components such as the battery, screen, keyboard, and even the motherboard can all be upgraded. This is still a novelty in the laptop market. Although many users might expect it to be a difficult process, the company claims that just about anyone can replace the parts in this modular laptop.

“The only tool you need to swap any part of it is the screwdriver we include in the box,” Framework said, presenting a custom screwdriver not much larger than a golf ball.

While not everyone will feel up to the task of upgrading their laptop themselves, Framework comes with a range of customization options that are as easy to use as simply plugging in a USB stick. The laptop has four bays that you can fill with Expansion Cards of your choice. Some of the options include USB-C, USB-A, microSD, DisplayPort, HDMI, and even a high-end headphone amp.

The Framework Laptop runs on an Intel processor. You can choose between three configurations: the Base, Performance, and Professional models. The $999 Base configuration comes with a Core i5-1135G7 (8M cache, up to 4.20GHz), 8GB of DDR4 RAM, and a 256GB SSD.

The Performance configuration starts at $1,399, and comes with the Core i7-1165G7 (12M cache, up to 4.70GHz), 16GB of DDR4 RAM, and a 512GB SSD. Lastly, the $1,999 Professional configuration includes the Core i7-1185G7 (12M cache, up to 4.80GHz), 32GB of DDR4 RAM, and a 1TB SSD. There’s also a 1080p 60 fps (frames per second) webcam installed at the top bezel of the 13.5″ screen. That makes it one of the few laptops you can buy right now that steps up from a 720p webcam.

These prices place the notebook out of the budget laptop range, but it still costs less than the cheapest MacBook Pro. That pricing is surprising for a machine that combines decent specs with a lifespan Framework claims will exceed that of other laptops.

Pre-orders are limited to one Framework Laptop per customer, and require a $100 deposit to reserve your order.

The Framework Laptop with replaceable parts

The company is also selling a “DIY Edition” that starts at just $749. Framework calls it a “barebones configuration,” which lets you assemble it yourself from a kit of modules.

Whether the Framework Laptop will be successful still remains to be seen, but it looks to be a sustainable, eco-friendly (made out of 50% post-consumer recycled aluminum and 30% PCR plastic) option that might pave the way for other modular laptops in the future.

Framework is limiting pre-orders to the U.S. today, with orders for Canada to come in the next few weeks. Framework says it will be taking orders in Europe and Asia before the end of the year.


Categories
Tech News

Framework Laptop promises easy upgrades and modular ports

A new laptop startup aims to make a DIY, upgrade-friendly notebook, with Framework hoping to build a market from those frustrated by today’s breed of sealed-up computers. Combining readily-accessed internal components with modular, interchangeable expansion cards, the Framework Laptop may look like a regular 13.5-inch notebook but it’s very different inside and out.

“At Framework, we believe the time has come for consumer electronics products that are designed to last,” Nirav Patel, company founder, says of the startup. “Founded in San Francisco in 2019, our mission is to empower you with great products you can easily customize, upgrade, and repair, increasing longevity and reducing e-waste in the process.”

The company’s first product marks a return to some of the traditional approaches to notebooks, blended with some newer ideas. The Framework Laptop has a 13.5-inch 3:2 aspect screen running at 2256 x 1504 resolution and a milled aluminum housing that’s 15.85mm thick and weighs 2.87 pounds, and it runs 11th Gen Intel Core processors.

It’ll support up to 64GB of DDR4 memory and 4TB+ of Gen4 NVMe solid-state storage. There’s also a 1080p/60fps webcam with a hardware privacy switch, a 55 Wh battery, and a keyboard with 1.5mm travel. However, it’s how those components are pieced together that stands out.

The storage, WiFi card, and two of the memory slots are socketed, so that they can be upgraded by an individual user. The mainboard itself is designed to be swapped out too, as processors improve. The battery, screen, keyboard, and even the magnetically-attached bezel are designed to be readily replaced. Framework will even have QR codes on each component which, when scanned, will link to replacement guides and product listings.

Meanwhile, for connectivity there’ll be four swappable bays for Framework’s Expansion Card system. It’ll have a choice of USB-C, USB-A, HDMI, DisplayPort, and MicroSD port modules that can be slotted in, as well as less common options like extra storage or even a high-end headphone amp. The company plans to open up the design for that, too, so that other companies can make compatible modules.

It’s fair to say that Framework faces an uphill battle. Modular devices haven’t exactly had an easy go of things in the past, with even big names in the tech world giving up on their plans. Google’s Ara modular phone, for example, was meant to be as easily-upgraded as a set of LEGO bricks: instead, Google canned the project.

Intel, meanwhile, had plans for a modular laptop design. Its Compute Card would effectively condense the key components into a single block, which could be slotted into a notebook casing. It shelved that idea back in 2019.

The fact that the computing segment has been so aggressively commoditized explains part of the challenge. Low-price notebooks have relied on manufacturer scale to squeeze supplier costs down to the bare minimum; meanwhile, sleeker ultraportables and performance laptops demand custom designs in order to satisfy user requirements for both power and portability. That has led to little to no support for user-upgradable parts like memory or storage, since RAM chips and flash drives are soldered in place to save on thickness.

Framework’s approach differs dramatically from that. In fact, as well as prebuilt models running Windows 10 Home or Pro, it’ll also have a Framework Laptop DIY Edition. That will come as individual components, with the choice to load either Windows or Linux if you’d prefer.

The Framework Laptop itself uses 50-percent post consumer recycled (PCR) aluminum, and an average of 30-percent PCR plastic.

What we don’t know – and what a lot of this will hinge upon – is pricing. Exact specifications, costs, and preorder details will follow closer to Framework’s summer 2021 estimate for the laptop shipping, the company promises. It’s difficult to imagine that there won’t be some premium to pay for this degree of flexibility, never mind the fact that smaller laptop-makers typically end up paying more for components than their industry heavyweight rivals.


Categories
AI

Proposed framework could reduce energy consumption of federated learning

Modern machine learning systems consume massive amounts of energy. In fact, it’s estimated that training a large model can generate as much carbon dioxide as five cars emit over their entire lifetimes. The impact could worsen with the emergence of machine learning in distributed and federated learning settings, where billions of devices are expected to train machine learning models on a regular basis.

In an effort to lessen the impact, researchers at the University of California, Riverside and Ohio State University developed a federated learning framework optimized for networks with severe power constraints. They claim it’s both scalable and practical in that it can be applied to a range of machine learning settings in networked environments, and that it delivers “significant” performance improvements.

The environmental effects of AI and machine learning model training are increasingly coming to light. Ex-Google AI ethicist Timnit Gebru recently coauthored a paper on large language models that discussed urgent risks, including their carbon footprint. And in June 2019, researchers at the University of Massachusetts Amherst released a report estimating that the power required to train and tune a certain large model translates into roughly 626,000 pounds of carbon dioxide emissions, equivalent to nearly five times the lifetime emissions of the average U.S. car.

In machine learning, federated learning entails training algorithms across client devices that hold data samples without exchanging those samples. A centralized server might be used to orchestrate rounds of training for the algorithm and act as a reference clock, or the arrangement might be peer-to-peer. Regardless, local algorithms are trained on local data samples and the weights — the learnable parameters of the algorithms — are exchanged between the algorithms at some frequency to generate a global model. Preliminary studies have shown this setup can lead to lowered carbon emissions compared with traditional learning.
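The weight-exchange loop described above can be sketched in a few lines. This is a minimal toy illustration (a scalar "model" and a made-up local update rule, not any particular paper's algorithm): each client trains on its own samples, and only the resulting parameters are averaged at the server, weighted by dataset size.

```python
def federated_round(global_model, client_datasets, local_update):
    """One round of federated averaging (FedAvg-style).

    Each client refines the shared model on its own data; only the updated
    parameters -- never the raw samples -- go back to the server, which
    averages them (weighted by dataset size) into the next global model.
    """
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_model, data))  # local training step
        sizes.append(len(data))
    total = sum(sizes)
    # Weighted average of the clients' learnable parameters.
    return sum(u * (n / total) for u, n in zip(updates, sizes))

# Toy "training" rule: nudge the scalar model toward the mean of the local data.
local_update = lambda m, data: m + 0.1 * (sum(data) / len(data) - m)
new_model = federated_round(0.0, [[1.0, 2.0], [4.0]], local_update)
```

In a real system each update would be a full tensor of neural-network weights, but the privacy property is the same: the server only ever sees parameters, never data.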

In designing their framework, the researchers of this new paper assumed that clients have intermittent power and can participate in the training process only when they have power available. Their solution consists of three components: (1) client scheduling, (2) local training at the clients, and (3) model updates at the server. Client scheduling is performed locally such that each client decides whether to participate in training based on an estimation of available power. During the local training phase, clients that choose to participate in training update the global model using their local datasets and send their updates to the server. Upon receiving the local updates, the server updates the global model for the next round of training.
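The three phases can be sketched as follows. Note this is an illustrative simulation under assumed details (the power-estimate field, the threshold, and the toy update rule are all hypothetical, not taken from the paper); the point is that the participation decision is made locally by each client, before any training happens.

```python
def power_aware_round(global_model, clients, power_needed=1.0):
    """One round of power-aware federated training (illustrative sketch).

    (1) Client scheduling: each client decides locally whether to join,
        based on its estimate of available power.
    (2) Local training: participants update the model on their local data.
    (3) Server update: received updates are averaged into the global model.
    """
    updates = []
    for client in clients:
        # (1) Skip this round if estimated available power is too low.
        if client["est_power"] < power_needed:
            continue
        # (2) Toy local training step toward the mean of the client's data.
        mean = sum(client["data"]) / len(client["data"])
        updates.append(global_model + 0.1 * (mean - global_model))
    # (3) Average participants' updates (model unchanged if nobody joined).
    return sum(updates) / len(updates) if updates else global_model

clients = [
    {"est_power": 2.0, "data": [1.0, 3.0]},  # enough power: participates
    {"est_power": 0.3, "data": [10.0]},      # intermittent power: sits out
]
model = power_aware_round(0.0, clients)
```

Because scheduling happens on-device, no power telemetry needs to be reported to the server, which keeps the scheme practical for large fleets of intermittently powered clients.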

Across several experiments, the researchers compared the performance of their framework with benchmark conventional federated learning settings. The first benchmark was a scenario in which federated learning clients participated in training as soon as they had enough power. The second benchmark, meanwhile, dealt with a server that waited for clients to have enough power to participate in training before initiating a training round.

The researchers claim that their framework significantly outperformed the two benchmarks in terms of accuracy. They hope it serves as a first step toward sustainable federated learning techniques and opens up research directions in building large-scale machine learning training systems with minimal environmental footprints.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform
  • networking features, and more

Become a member



Experimental AI framework Vx2Text generates video captions using inferences from audio and text

A grand challenge in AI is developing a conversational system that can reliably understand the world and respond using natural language. Ultimately, solving it will require a model capable of extracting salient information from images, text, audio, and video and answering questions in a way that humans can understand. In a step toward this, researchers at Facebook, Columbia University, Georgia Tech, and Dartmouth developed Vx2Text, a framework for generating text from videos, speech, or audio. They claim that Vx2Text can create captions and answer questions better than previous state-of-the-art approaches.

Unlike most AI systems, humans understand the meaning of text, videos, audio, and images together, in context. For example, given text and an image that seem innocuous when considered apart (e.g., “Look how many people love you” and a picture of a barren desert), people recognize that these elements take on potentially hurtful connotations when they’re paired or juxtaposed. Multimodal inputs can carry complementary information or trends that often only become evident when all modalities are included in the learning process, and this holds promise for applications from transcription to translating comic books into different languages.

In the case of Vx2Text, “modality-specific” classifiers convert semantic signals from videos, text, or audio into a common semantic language space. This enables language models to directly interpret multimodal data, opening up the possibility of carrying out multimodal fusion (i.e., combining signals to bolster classification) by means of powerful language models like Google’s T5. A generative text decoder within Vx2Text transforms multimodal features computed by an encoder into text, making the framework suitable for generating natural language responses.
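The key move, mapping each modality's classifier output into the language space, can be illustrated with a minimal sketch. The classifier labels, scores, and formatting below are hypothetical, for illustration only; in the real framework the resulting token sequence would be consumed by the encoder-decoder (e.g. T5) to generate a caption or answer.

```python
def to_language_space(modality_outputs, top_k=2):
    """Flatten per-modality classifier predictions into one token sequence
    that a text-to-text model can consume directly.

    `modality_outputs` maps a modality name to (label, score) predictions.
    All labels and scores here are made up for the sake of the example.
    """
    tokens = []
    for modality, preds in modality_outputs.items():
        # Keep the top-k most confident labels for this modality.
        best = sorted(preds, key=lambda p: p[1], reverse=True)[:top_k]
        tokens.append(f"{modality}: " + ", ".join(label for label, _ in best))
    return " | ".join(tokens)

# Hypothetical classifier outputs for a single video clip.
prompt = to_language_space({
    "video": [("answering telephone", 0.8), ("standing up", 0.6), ("waving", 0.1)],
    "audio": [("ringtone", 0.9), ("speech", 0.4)],
})
```

Once everything is plain text, fusion reduces to ordinary language modeling, which is why the authors can train the whole pipeline end-to-end with a standard transformer.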


“Not only is such a design much simpler but it also leads to better performance compared to prior approaches,” the researchers wrote in a paper describing their work. Helpfully, it also does away with the need to design specialized algorithms or resort to alternative approaches to combine the signals, they added.

In experiments, the researchers show that Vx2Text generates “realistic” natural text for both audio-visual “scene-aware” dialog and video captioning. Although they provided the model with context in the form of dialog histories and speech transcripts, they note that the generated text includes information from non-text modalities, for example, references to actions like helping someone get up or answering a telephone.


Vx2Text has applications in the enterprise, where it could be used to caption recorded or streamed videos for accessibility purposes. Alternatively, the framework (or something like it) could find its way into video sharing platforms like YouTube and Vimeo, which rely on captioning, among other signals, to improve the relevancy of search results.

“Our approach hinges on the idea of mapping all modalities into a semantic language space in order to enable the direct application of transformer networks, which have been shown to be highly effective at modeling language problems,” the researchers wrote. “This renders our entire model trainable end-to-end.”
