How SoftIron used digital twins to reduce its carbon footprint

A few years back, SoftIron, which makes data center hardware for software-defined storage, turned to digital twins to optimize its hardware not just for cost and performance but also to dramatically reduce its carbon footprint. A recent assessment by ESG investment firm Earth Capital found that these efforts are paying off.

SoftIron’s latest products generate about 20% of the heat and consume as little as 20% of the power of comparable enterprise storage offerings. In total, Earth Capital estimates that every 10 petabytes of SoftIron storage installed saves about 6,656 tons of carbon compared to industry norms.

SoftIron COO Jason Van der Schyff detailed how the company has used digital twins to achieve those gains in an exclusive interview with VentureBeat. He also explained how SoftIron brought environmental considerations into the design workflow for both its products and the factories that build and ship them, a process that helped the company recognize that focusing on I/O rather than CPU performance could satisfy enterprise requirements while meeting sustainability goals.

VentureBeat: How do you go about building digital twins to optimize your carbon and energy footprint?

Jason Van der Schyff: At SoftIron, we use a variety of digital twinning strategies across our physical products and our facilities and supply chain to determine and analyze our carbon and energy footprint. Our products are entirely digitally modeled, from the foundational circuit boards to mechanical components and all internal active and passive components. This allows us to not only model the thermal performance but also analyze both indigenous and foreign influences, such as vibrations induced by harmonic oscillations from cooling fans – an innovation we were recently awarded a patent for. This type of analysis in a digital form lets us adapt our designs to create less heat, use less cooling and therefore less energy, which allows us to provide our customers with some of the lowest power consumption on the market and aid them in their carbon reduction goals.

With respect to our manufacturing, digital twins provide efficiency in designing and deploying new manufacturing techniques. They enable us to digitally model the impact of design changes to the production workflow across our various manufacturing sites before those changes ever manifest in the real world – all without wasting materials.

SoftIron’s factory digital twin enables innovation from our manufacturing center of excellence in Berlin. There we are able to model and control our global manufacturing footprint as a single global capability. While this modeling sometimes happens thousands of miles from where actual production is taking place, it means the physical product can be manufactured close to the point of consumption in a way that utilizes local supply chains, all of which has a positive impact on both supply chain resilience and sustainability. Digital twinning underpins this strategy – which we call “Edge Manufacturing.”

VentureBeat: What kind of tools do you use to store the raw data and share it among different stakeholders in the process?

Van der Schyff: As a designer and manufacturer of enterprise storage, SoftIron chooses to deploy our digital twin on our own infrastructure in our facility in Berlin, with real-time resilience provided by geo-replication across our facilities in California and Sydney. A unified internal network allows all collaborators direct access in real-time to collaborate and contribute to the iteration of the digital twin designs.

VentureBeat: What is involved in identifying some of the biggest contributions to inefficiency and then mitigating these in the final products?

Van der Schyff: The bulk of inefficiencies is introduced through waste, be it wasted power, extraneous componentry or even wasted time in manufacturing. By developing a digital twin throughout the development process, we’re able to model and analyze inefficiencies in the design and functionality of our products, which improves quality and minimizes rework. Our manufacturing floor is further modeled to provide accurate time-and-motion studies and to evaluate a variety of layouts, optimizing efficiency before physical construction is completed and further mitigating inefficiencies.

VentureBeat: What have been some of your discoveries around the specific improvements or changes that led to the most significant impact?

Van der Schyff: Early in the history of the company, by modeling the performance and interaction between the hardware and software layer, we were able to determine that software-defined storage is primarily an I/O problem rather than a compute problem. This discovery informed the selection of components and the adoption of a low power ARM64 architecture to provide highly performant, yet economical storage appliances. These low-power appliances provide savings such that for every 10 PB of data storage shipped by SoftIron, an estimated 6,656 tons of CO2 are saved by reduced energy consumption alone in the customer data center over its lifetime.
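
For a rough sense of how a lifetime figure like this can be derived, the sketch below multiplies an assumed per-appliance power reduction by deployment size, service life, cooling overhead and grid carbon intensity. Every parameter value is a hypothetical placeholder rather than a number from SoftIron or Earth Capital, and the published 6,656-ton estimate depends on the specific assumptions used in their assessment.

    # Illustrative back-of-the-envelope model for lifetime CO2 savings from lower
    # appliance power draw. All parameter values are hypothetical placeholders,
    # not SoftIron's or Earth Capital's actual assumptions.

    WATTS_SAVED_PER_NODE = 800    # assumed average power reduction vs. a comparable appliance (W)
    NODES_PER_10_PB = 100         # assumed number of appliances in a 10 PB deployment
    LIFETIME_YEARS = 5            # assumed service life
    PUE = 1.5                     # assumed data center power usage effectiveness (cooling overhead)
    KG_CO2_PER_KWH = 0.45         # assumed grid carbon intensity

    def lifetime_co2_savings_tons() -> float:
        """Estimate lifetime CO2 savings (metric tons) for a 10 PB deployment."""
        hours = LIFETIME_YEARS * 365 * 24
        kwh_saved = (WATTS_SAVED_PER_NODE / 1000) * NODES_PER_10_PB * hours * PUE
        return kwh_saved * KG_CO2_PER_KWH / 1000  # kg -> metric tons

    print(f"Estimated lifetime savings: {lifetime_co2_savings_tons():,.0f} tons of CO2")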

VentureBeat: How do digital twins fit into this process?

Van der Schyff: Digital twins provide open access to all data in one place, increasing cross-border and asynchronous collaboration. Through this collaboration, SoftIron can bring cross-functional expertise to each design, be it a product or a manufacturing process, to observe and mitigate inefficiencies and exploit opportunities to optimize our carbon and energy footprint.

The significant supply chain disruptions we have seen over the last year or more have only highlighted the weaknesses in the way IT is currently produced. In this way, we believe that sustainability and resilience are inextricably linked. Manufacturing has historically placed all of its eggs in a few very large, low-cost baskets around the world – driving for ever-increasing volume and smaller and smaller component variation in order to drive out costs.

Digital twinning is one enabling technology (along with the current generations of super flexible, efficient low-volume assembly line machinery) that helps to break the cost-to-volume equation apart. This fosters small, distributed manufacturing operations, opens up the supply chain to more local, perhaps lower volume suppliers and, over time, enables a more resilient, sustainable, global IT industry to emerge. SoftIron, we believe, is at the vanguard of this, but we expect this model to become more widespread over the coming decade.

VentureBeat: What’s next?

Van der Schyff: As SoftIron expands its Edge Manufacturing strategy, further opportunities will become available to optimize our carbon and energy footprint: shortening supply chains, increasing local recycling opportunities and drastically reducing the amount of energy spent delivering SoftIron’s low-energy appliances to its customers.

We believe what we are doing will serve as both a model and catalyst for others to follow. Over the last 12 months we have seen some major announcements regarding developing chip production in the U.S. and Europe and we hope that by the time these facilities come online, there will be a U.S. and European IT manufacturing economy, of which SoftIron is a leading part.

CFM RISE open fan architecture jet engine could reduce fuel consumption by 20 percent

GE Aviation and Safran announced a new technology development program that aims to reduce fuel consumption for jet aircraft by 20 percent while cutting CO2 emissions at the same time. The program, called CFM RISE (Revolutionary Innovation for Sustainable Engines), will demonstrate and mature a range of new, disruptive technologies for future commercial aircraft engines that could enter service by the mid-2030s.

As part of the announcement, GE Aviation and Safran also agreed to extend their CFM International 50/50 partnership through the year 2050. CFM has a goal of reducing CO2 emissions by 50 percent by 2050. The two companies say their relationship is the strongest it has ever been, and they will work together through the RISE technology demonstration program to reinvent flight for the future.

The companies want to take next-generation single-aisle aircraft to a new level of fuel efficiency and reduced emissions. Executives working on the project say the current LEAP engine has already cut emissions by 15 percent compared to the previous generation of engines, and the new RISE technology will reduce that number even further.

The new engine technologies will also ensure 100 percent compatibility with alternative energy sources, including sustainable aviation fuels and hydrogen. Both companies say the RISE program is the foundation for the next-generation CFM engine, expected to be available by the middle of the 2030s. One of the defining features of the new engine is an open fan architecture, which is key to improving fuel efficiency while delivering the same travel speed and cabin experience offered by current-generation aircraft.

The program will leverage hybrid electric capability to optimize engine efficiency while enabling electrification of many aircraft systems. The RISE program calls for more than 300 separate component, module, and full engine builds. A demonstrator engine is scheduled to begin testing around the middle of the decade, with a flight test to follow soon after.

Razer partners with ClearBot to reduce ocean plastics

Razer said it has partnered with marine-waste cleaning firm ClearBot to reduce pollution from ocean plastics.

Announced in celebration of World Oceans Day, the partnership complements Razer’s own “green investment” efforts, which aim to equip sustainability-focused startups with the tools and capabilities they need to scale.

The gaming peripherals company recently announced its 10-year #GoGreenWithRazer sustainability goal, and its Green Fund initiated the partnership with ClearBot.

With approximately 11 million tons of plastics entering the oceans each year, ocean-cleaning enterprises often face difficulties with dated technology, cost, and efficiency. The ClearBot team designs robots that leverage AI vision to identify different types of marine plastic waste and collect information regarding these pollutants in the oceans to protect aquatic life. ClearBot robots are deployed to retrieve marine plastic waste, which is then sorted on shore and responsibly disposed of.

Patricia Liu, chief of staff at Razer, said in a statement that ClearBot’s AI and other tech will enable governments and others to broaden their sustainability efforts. She encouraged other sustainability startups to reach out to Razer for help.

Under the partnership, Razer’s engineers and designers have volunteered personal time and technical expertise to help turn the ClearBot prototype into a scalable, mass-marketable product. Leveraging Razer’s knowledge and manufacturing know-how, ClearBot was able to evolve the robot design into one that is smarter and more efficient.

The newly designed, fully automated robot is armed with AI and machine learning capabilities that can detect marine plastics within two meters, even in rough waters. The robot can collect up to 550 pounds of plastics in a single cycle while running entirely on solar power.

Sidhant Gupta, CEO of ClearBot, said in a statement that Razer’s assistance was eye-opening for ClearBot and that the two companies hope to do more together over time. ClearBot has also issued a call to action to collect data on marine plastic waste: the company is encouraging people to upload photos of plastic waste commonly found in open waters to ClearBot’s website. The research and design team will add this information to ClearBot’s existing database to help improve the robot’s waste-detection AI algorithm.

Sneki Snek for trees

Meanwhile, back in March, Razer partnered with Conservation International to protect trees, expanding its target from saving 100,000 trees to saving 1 million trees. The initiative has saved 300,000 trees to date, and Razer is launching new Sneki Snek merchandise for every additional 100,000 trees saved.

Razer has now launched two new Sneki Snek merchandise items. The Razer Sneki Snek Eye Mask features a padded design with a velvety finish for comfort and undisturbed rest. The Razer Sneki Snek Floor Rug protects floors from everyday wear and tear and is crafted with noise dampeners for unobstructed chair movement. In keeping with the mascot’s sustainability objectives, both the eye mask and the floor rug are made of 100% recycled materials.

When the milestone of 400,000 trees is reached, Razer will release its Sneki Snek Slippers.

Google launches Lyra codec in beta to reduce voice call bandwidth usage

Google today open-sourced Lyra in beta, an audio codec that uses machine learning to produce high-quality voice calls. The code and demo, which are available on GitHub, compress raw audio down to 3 kilobits per second for “quality that compares favorably to other codecs,” Google says.

While mobile connectivity has steadily increased over the past decade, the explosive growth of on-device compute power has outstripped access to reliable, fast internet. Even in areas with reliable connections, the rise of work-from-anywhere and telecommuting has stretched data limits. For example, early in the pandemic, nearly 90 of the top 200 U.S. cities saw internet speeds decline as bandwidth became strained, according to BroadbandNow.

It’s Google’s assertion that Lyra can make a difference in these scenarios.

Lyra’s architecture is separated into two pieces: an encoder and a decoder. When someone talks into their phone, the encoder captures distinctive attributes, called features, from their speech. Lyra extracts these features in 40-millisecond chunks, then compresses and sends them over the network. It’s the decoder’s job to convert the features back into an audio waveform that can be played back on the listener’s phone.
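
To make that split concrete, here is a deliberately simplified sketch of an encoder/decoder pipeline in this style. It is not Google’s C++ API; the feature extractor and the synthesis step are toy stand-ins, and a real decoder like Lyra’s reconstructs speech with a trained generative model rather than simple sine waves.

    # Conceptual sketch of a Lyra-style pipeline (NOT Google's actual C++ API):
    # the encoder summarizes 40 ms frames as compact features, quantizes them for
    # transmission at a very low bitrate, and the decoder synthesizes audio from
    # those features. The feature extractor and decoder here are illustrative toys.
    import numpy as np

    SAMPLE_RATE = 16_000
    FRAME_MS = 40
    FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000  # 640 samples per 40 ms frame

    def encode_frame(frame: np.ndarray) -> bytes:
        """Toy encoder: reduce a frame to 8 coarse band energies, one byte each."""
        spectrum = np.abs(np.fft.rfft(frame))
        bands = np.array_split(spectrum, 8)
        features = np.array([b.mean() for b in bands])
        quantized = np.clip(features / (features.max() + 1e-9) * 255, 0, 255)
        return quantized.astype(np.uint8).tobytes()  # 8 bytes per 40 ms, about 1.6 kbps

    def decode_frame(payload: bytes) -> np.ndarray:
        """Toy decoder: synthesize a frame from band energies (a real decoder
        would use a trained generative model to produce natural-sounding speech)."""
        energies = np.frombuffer(payload, dtype=np.uint8).astype(float)
        t = np.arange(FRAME_SAMPLES) / SAMPLE_RATE
        audio = sum(e * np.sin(2 * np.pi * (500 + 500 * i) * t)
                    for i, e in enumerate(energies))
        return audio / (np.abs(audio).max() + 1e-9)

    frame = np.random.randn(FRAME_SAMPLES)   # stand-in for microphone audio
    packet = encode_frame(frame)             # sent over the network
    reconstructed = decode_frame(packet)     # played back on the listener's phone
    print(f"{len(packet)} bytes per {FRAME_MS} ms frame")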

According to Google, Lyra’s architecture is similar to that of traditional audio codecs, which form the backbone of internet communication. But while those codecs are based on digital signal processing techniques, Lyra’s key advantage comes from the ability of its decoder, which uses a generative model, to reconstruct a high-quality signal from the heavily compressed feature stream.

[Figure: Lyra’s architecture in schematic form. Image credit: Google]

Google believes there are a number of applications Lyra might be uniquely suited to, from archiving large amounts of speech and saving battery to alleviating network congestion in emergency situations.

“We are excited to see the creativity the open source community is known for applied to Lyra in order to come up with even more unique and impactful applications,” Google Chrome engineers Andrew Storus and Michael Chinen wrote in a blog post. “We [want] to enable developers and get feedback as soon as possible.”

The Lyra code is written in C++ using the Bazel build framework. The core API provides an interface for encoding and decoding at the file and packet levels, and the complete signal processing toolchain is provided, which includes filters as well as transforms. Google’s example code integrates with the Android NDK to show how Lyra can work with Java-based Android apps, and Google has also provided the weights and vector quantizers necessary to run Lyra.

“This release provides the tools needed for developers to encode and decode audio with Lyra, optimized for the 64-bit ARM Android platform, with development on Linux,” Storus and Chinen continued. “We hope to expand this codebase and develop improvements and support for additional platforms in tandem with the community.”

Dataiku’s new AI tools reduce dependency on data science teams

Dataiku today expanded its effort to make AI accessible to the average business user with an update that makes it possible to run what-if simulations on AI models to determine how changes to the underlying data will affect their outputs.

The goal is to make it easier for business analysts to experiment with AI models based on machine learning algorithms they can create with the help of a data science team, Dataiku CEO Florian Douetteau said.

As part of that effort, Dataiku 9 adds a Model Assertions tool that enables a subject matter expert to inject known conditions or sanity checks into a model to prevent a certain outcome or conclusion from ever being reached.
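
To illustrate the general idea of a model assertion (this is a generic sketch, not Dataiku’s actual API; the class names and rules below are hypothetical), a trained model can be wrapped so that domain sanity checks flag or block predictions that violate conditions a subject matter expert knows must hold:

    # Generic illustration of the "model assertion" idea: wrap a trained model so
    # that domain sanity checks can veto or flag predictions that violate known
    # conditions. This is NOT Dataiku's API; names and rules are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Assertion:
        name: str
        rule: Callable[[dict, float], bool]   # (input_row, prediction) -> passes?

    class AssertedModel:
        def __init__(self, model, assertions: List[Assertion]):
            self.model = model
            self.assertions = assertions

        def predict(self, row: dict) -> dict:
            pred = self.model(row)
            failed = [a.name for a in self.assertions if not a.rule(row, pred)]
            # Flag (or block) predictions that violate subject-matter expectations.
            return {"prediction": pred, "violations": failed, "trusted": not failed}

    # Example rule: a pricing model should never quote below unit cost.
    assertions = [
        Assertion("price_above_cost", lambda row, p: p >= row["unit_cost"]),
        Assertion("non_negative", lambda row, p: p >= 0),
    ]
    model = AssertedModel(lambda row: row["unit_cost"] * 0.9, assertions)  # toy model
    print(model.predict({"unit_cost": 10.0}))  # flags "price_above_cost"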

There is also now a Visual ML Diagnostics tool that will generate error messages if the platform determines a model will fail and a Model Fairness Report tool that provides access to a dashboard through which companies can assess the bias or fairness of an AI model.

Finally, there is also now a Smart Pattern Builder and Fuzzy Joins capability that makes it easier to work with more complex or even incomplete datasets without having to write code or manually clean or prep data.

Most organizations are investing in AI to make better data-driven decisions, which Douetteau says makes it essential for business analysts to create AI models without having to wait for a data science team to construct them. “Business users should be able to create AI, not just consume it,” he said.

That doesn’t necessarily mean an organization shouldn’t hire a data science team to address more complex challenges, but it does reduce the dependency an organization would otherwise have on a small team of specialists, Douetteau noted.

When it comes to AI, most organizations are employing the approach they typically used to make previous generations of analytics applications available to end users. A team of IT specialists would create a series of dashboards based on data marts that would be exposed to business users via a self-service portal. But that approach is insufficient when it comes to AI because the rate of change in the underlying data has now substantially increased, Douetteau said. Business analysts need to be able to dynamically create AI models that enable organizations to respond faster to rapidly changing business conditions.

Of course, there is no better example of the need for that capability than the COVID-19 pandemic. Most of the AI models organizations had in place prior to the arrival of the pandemic were rendered obsolete almost overnight. It has taken most of those organizations months to construct new AI models, but even now it’s difficult to make assumptions that are not subject to change as new data becomes available.

Unfortunately, the average data science team today is fortunate if they can get an AI model into a production environment in a few months. Businesses clearly need to get data science tools into the hands of end users to enable AI to live up to its full potential, Douetteau said. The challenge is ensuring there are enough checks and balances in place to make sure any AI model that is implemented is properly vetted. After all, the scale at which an AI model typically works means that any potential error could represent a considerable risk for any business.

At this point, however, the proverbial AI genie is out of the bottle, regardless of the level of risk. Tumultuous economic times are pushing many business executives to accept higher levels of risk in an effort to either reduce costs or maximize revenues. But whatever outcomes AI eventually enables, the way business processes are constructed and maintained is about to change utterly.

Proposed framework could reduce energy consumption of federated learning

Modern machine learning systems consume massive amounts of energy. In fact, it’s estimated that training a large model can generate as much carbon dioxide as five cars emit over their entire lifetimes. The impact could worsen with the emergence of machine learning in distributed and federated learning settings, where billions of devices are expected to train machine learning models on a regular basis.

In an effort to lessen the impact, researchers at the University of California, Riverside and Ohio State University developed a federated learning framework optimized for networks with severe power constraints. They claim it’s both scalable and practical in that it can be applied to a range of machine learning settings in networked environments, and that it delivers “significant” performance improvements.

The effects of AI and machine learning model training on the environment are increasingly coming to light. Ex-Google AI ethicist Timnit Gebru recently coauthored a paper on large language models that discussed urgent risks, including their carbon footprint. And in June 2019, researchers at the University of Massachusetts Amherst released a report estimating that training and searching a certain neural architecture emits roughly 626,000 pounds of carbon dioxide, equivalent to nearly five times the lifetime emissions of the average U.S. car.

In machine learning, federated learning entails training algorithms across client devices that hold data samples without exchanging those samples. A centralized server might be used to orchestrate rounds of training for the algorithm and act as a reference clock, or the arrangement might be peer-to-peer. Regardless, local algorithms are trained on local data samples and the weights — the learnable parameters of the algorithms — are exchanged between the algorithms at some frequency to generate a global model. Preliminary studies have shown this setup can lead to lowered carbon emissions compared with traditional learning.

In designing their framework, the researchers of this new paper assumed that clients have intermittent power and can participate in the training process only when they have power available. Their solution consists of three components: (1) client scheduling, (2) local training at the clients, and (3) model updates at the server. Client scheduling is performed locally such that each client decides whether to participate in training based on an estimation of available power. During the local training phase, clients that choose to participate in training update the global model using their local datasets and send their updates to the server. Upon receiving the local updates, the server updates the global model for the next round of training.
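
A minimal sketch of that three-step loop, assuming a toy linear-regression task, FedAvg-style averaging on the server, and an arbitrary power threshold for the local scheduling decision (none of these specifics come from the paper), might look like this:

    # Minimal sketch of a power-aware federated learning loop: (1) each client
    # decides locally whether it has enough power to participate, (2) participating
    # clients train on local data, (3) the server averages the updates. The
    # threshold, model and data are illustrative assumptions, not paper details.
    import random
    import numpy as np

    DIM = 10

    def local_update(global_w: np.ndarray, data, lr=0.05, epochs=1) -> np.ndarray:
        """One client's local SGD pass on a least-squares objective."""
        w = global_w.copy()
        for _ in range(epochs):
            for x, y in data:
                grad = (w @ x - y) * x
                w -= lr * grad
        return w

    class Client:
        def __init__(self):
            xs = [np.random.randn(DIM) for _ in range(20)]
            true_w = np.ones(DIM)
            self.data = [(x, float(true_w @ x)) for x in xs]

        def available_power(self) -> float:
            return random.random()           # stand-in for an energy-availability estimate

        def wants_to_participate(self, threshold=0.5) -> bool:
            return self.available_power() >= threshold   # local scheduling decision

    def train(rounds=20, n_clients=10):
        clients = [Client() for _ in range(n_clients)]
        global_w = np.zeros(DIM)
        for _ in range(rounds):
            participants = [c for c in clients if c.wants_to_participate()]
            if not participants:
                continue                     # no client had enough power this round
            updates = [local_update(global_w, c.data) for c in participants]
            global_w = np.mean(updates, axis=0)   # server-side model update
        return global_w

    w = train()
    print("distance from true weights:", np.linalg.norm(w - np.ones(DIM)))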

Across several experiments, the researchers compared the performance of their framework with benchmark conventional federated learning settings. The first benchmark was a scenario in which federated learning clients participated in training as soon as they had enough power. The second benchmark, meanwhile, dealt with a server that waited for clients to have enough power to participate in training before initiating a training round.

The researchers claim that their framework significantly outperformed the two benchmarks in terms of accuracy. They hope it serves as a first step toward sustainable federated learning techniques and opens up research directions in building large-scale machine learning training systems with minimal environmental footprints.

MIT CSAIL taps AI to reduce sheet metal waste

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say they’ve created an AI-powered tool that provides feedback on how different parts of laser-cut designs should be placed onto metal sheets. By analyzing how much material is used in real time, they claim that their tool — called Fabricaide — allows users to better plan designs in the context of available materials.

Laser cutting is a core process in industries ranging from manufacturing to construction. However, it isn’t always efficient. Cutting sheets of metal requires time and expertise, and even the most skillful users can produce leftovers that go to waste.

Fabricaide ostensibly solves this with a workflow that “significantly” shortens the feedback loop between design and fabrication. The tool keeps an archive of what a user has done, tracking how much of each material they have left and allowing the user to assign multiple materials to different parts of the design to be cut. This simplifies the process so that it’s less of a headache for multimaterial designs.

Fabricaide also features a custom 2D packing algorithm that can arrange parts onto sheets in an efficient way, in real time. As the user creates their design, Fabricaide optimizes the placement of parts onto existing sheets and provides warnings if there’s insufficient material, with suggestions for material substitutes.
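
As a rough illustration of the kind of feedback this enables (a basic greedy “shelf” heuristic, not MIT CSAIL’s actual packing algorithm; the sheet size and part dimensions below are made up), a packer can place rectangular parts onto available sheets and warn when stock runs out:

    # Simple greedy "shelf" packing illustration of real-time placement feedback:
    # place rectangular parts onto sheets of a material and warn when remaining
    # stock is insufficient. A basic heuristic for illustration only, not MIT
    # CSAIL's actual packing algorithm.
    from typing import List, Tuple

    Part = Tuple[float, float]        # (width, height) of a rectangular part

    def pack_parts(parts: List[Part], sheet_w: float, sheet_h: float,
                   sheets_available: int):
        """Fill left-to-right rows ("shelves"), open a new sheet when the current
        one is full, and report any parts that cannot be placed."""
        parts = sorted(parts, key=lambda p: p[1], reverse=True)  # tallest first
        placements, unplaced = [], []
        sheet, x, y, row_h = 0, 0.0, 0.0, 0.0
        for w, h in parts:
            if w > sheet_w or h > sheet_h:
                unplaced.append((w, h))                  # part can never fit
                continue
            if x + w > sheet_w:                          # start a new shelf
                x, y, row_h = 0.0, y + row_h, 0.0
            if y + h > sheet_h:                          # start a new sheet
                sheet, x, y, row_h = sheet + 1, 0.0, 0.0, 0.0
            if sheet >= sheets_available:
                unplaced.append((w, h))                  # out of material
                continue
            placements.append({"sheet": sheet, "x": x, "y": y, "w": w, "h": h})
            x, row_h = x + w, max(row_h, h)
        if unplaced:
            print(f"Warning: insufficient material for {len(unplaced)} part(s); "
                  "consider a substitute material or more sheets.")
        return placements, unplaced

    # Example: place a few parts on 100x100 sheets with only one sheet in stock.
    placed, leftover = pack_parts([(60, 40), (60, 40), (60, 40), (30, 30)],
                                  sheet_w=100, sheet_h=100, sheets_available=1)
    print(len(placed), "placed,", len(leftover), "unplaced")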

Fabricaide acts as an interface that integrates with existing design tools and is compatible with computer-assisted design software like AutoCAD, SolidWorks, and Adobe Illustrator. In the future, the researchers hope to incorporate more sophisticated properties of materials, like how strong or flexible they need to be.

“By giving feedback on the feasibility of a design as it’s being created, Fabricaide allows users to better plan their designs in the context of available materials,” Ticha Sethapakdi, a Ph.D. student who led the development of Fabricaide alongside MIT professor Stefanie Mueller, said in a statement. “A lot of these materials are very scarce resources, and so a problem that often comes up is that a designer doesn’t realize that they’ve run out of a material until after they’ve already cut the design. With Fabricaide, they’d be able to know earlier so that they can proactively determine how to best allocate materials.”

The AI in manufacturing market was valued at an estimated $1.1 billion in 2020 and is expected to reach $16.7 billion by 2026, according to Markets and Markets. AI-based solutions like Fabricaide, if commercialized, could help manufacturers transform their operations by automating stages of manufacturing and augmenting human work.
