Nvidia debuts ReOpt to optimize supply chain routing with AI

During a keynote address at its fall 2021 GPU Technology Conference (GTC), Nvidia debuted ReOpt, a software package that combines local search heuristics and “metaheuristics” to optimize vehicle route planning and distribution. According to the company, ReOpt can improve route planning, warehouse picking, fleet management, and more in logistics, helping to control delivery costs from factories to stores and homes.

Companies are increasingly facing supply chain challenges caused — or exacerbated — by the pandemic. A U.S. Census Bureau survey found that 38.8% of U.S. small businesses were experiencing domestic supplier delays by the middle of July 2021. Late deliveries can seriously impact customer loyalty, with one survey finding that 80% of shoppers would cut ties with brands if they experienced stock shortages.

“At a time when the global supply chain faces massive disruption, ReOpt provides the AI software required for everything from vehicle routing for last-mile delivery to efficiently picking and packing of warehoused goods bound for homes and offices,” Nvidia software engineering manager Alex Fender said in a blog post. “ReOpt delivers new tools for dynamic logistics and supply chain management to a wide range of industries, including transportation, warehousing, manufacturing, retail, and quick-service restaurants.”

AI-powered logistics

Delivering goods directly to a customer’s door, called last-mile delivery, was costly even before the pandemic disrupted the global supply chain network. Over half of all air, express, rail, maritime, and truck transport shipping costs result from last-mile deliveries, impacting profitability, according to ABI Research. Onfleet estimates that companies typically eat about 25% of that cost themselves — a number that continues to increase as bottlenecks worsen.

ReOpt, which is now available in early access, taps algorithms to provide customers with road condition, traffic, and route metrics to reduce miles, fuel cost, carbon emissions, and idle time. The service models the movements of vehicles that have finite capacities and different costs, factoring in items like fresh produce that must be carried by refrigerated trucks. ReOpt also allows customers to create automated routines that dynamically route robots for truck loading as new orders arrive. And it can take into account the number of pilots, drivers, and workers available to operate vehicles on a given day, folding in maintenance costs.
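
Nvidia hasn't published ReOpt's internals, but the pattern described here (construct candidate routes under capacity constraints, then refine them with local search) can be sketched in plain Python; the depot, stops, and demands below are invented for illustration:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_routes(depot, stops, demand, capacity):
    """Greedy nearest-neighbor construction: start a new route
    whenever adding the next stop would exceed vehicle capacity."""
    remaining = set(stops)
    routes = []
    while remaining:
        route, load, pos = [], 0, depot
        while remaining:
            candidates = [s for s in remaining if load + demand[s] <= capacity]
            if not candidates:
                break
            nxt = min(candidates, key=lambda s: dist(pos, s))
            route.append(nxt)
            load += demand[nxt]
            pos = nxt
            remaining.remove(nxt)
        routes.append(route)
    return routes

def two_opt(depot, route):
    """Local search: reverse route segments while that shortens the tour."""
    def length(r):
        pts = [depot] + r + [depot]
        return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    improved = True
    while improved:
        improved = False
        for i in range(len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if length(cand) < length(route):
                    route, improved = cand, True
    return route

depot = (0.0, 0.0)
stops = [(2, 1), (2, -1), (5, 0), (-3, 2), (-3, -2)]
demand = {s: 1 for s in stops}
routes = [two_opt(depot, r) for r in build_routes(depot, stops, demand, capacity=3)]
```

A production solver would evaluate thousands of such candidates in parallel, which is where the GPU acceleration Nvidia describes comes in.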

“GPUs offer the computational power needed to fuel the most ambitious heuristics while supporting the most challenging constraints. ReOpt takes advantage of Nvidia’s massively parallel architecture to generate thousands of solution candidates and refine them to select only the best one at the end,” Fender continued. “As a result, ReOpt can scale to the largest problems in seconds with world-class accuracy.”

A growing number of companies are developing AI services to optimize components of the supply chain. DispatchTrack provides AI-powered route optimization, reservations, billing and settlement, and omnichannel order tracking tools. Locus is also developing a platform for logistics and “enterprise-scale” supply chain automation. Others in the global logistics market — which is expected to grow to $12.68 billion in value by 2023, according to Research and Markets — are Convoy, Optimal Dynamics, KeepTruckin, and Next Trucking, which have collectively raised hundreds of millions in venture capital.

Tech giants have entered the fray, too — most recently Microsoft with its Supply Chain Insights product. Uber’s eponymous Uber Freight connects carriers and drivers with companies that need to move cargo. As for Google’s Supply Chain Twin, which became generally available in September, it organizes data in Google Cloud to expose a more complete view of suppliers, inventories, and events like weather.

While only 12% of manufacturing and transportation organizations are currently using AI in their supply chain operations, 60% expect to be doing so within the next four years, according to MHI. This dovetails with a recent PwC report, which found that 48% of companies are ramping up investments for simulation modeling and supply chain resilience.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021
  • networking features, and more

Become a member

Intellimize raises $30M to optimize websites with AI

Intellimize, a startup aiming to help marketers drive conversions by personalizing websites, today announced that it raised $30 million in series B funding led by Cobalt Capital, with participation from Addition, Amplify Partners, Homebrew, and Precursor Ventures. CEO Guy Yalif says that the proceeds, which bring the company’s total raised to over $50 million, will be put toward expanding Intellimize’s engineering and customer-facing teams.

Personalization is increasingly key to boosting business revenue. Seventy-four percent of customers feel frustrated when website content isn’t personalized, one recent survey found. According to McKinsey, enterprises that have successfully embraced personalization have found proven ways to drive 5% to 15% increases in revenue and 10% to 30% increases in marketing-spend efficiency, predominantly by deploying triggered recommendations and communications.

Yalif is the former head of vertical marketing at Twitter and held executive positions at Microsoft, Boston Consulting Group, and Yahoo. He launched Intellimize with Brian Webb and Jin Lim in 2016. At Yahoo, Yalif worked with both of them: Lim was a VP of engineering, and Webb was an architect on Yahoo’s personalized content recommendation team.

“[Over the past year,] we’ve been busy helping marketers create high converting websites by combining their ideas with our machine learning,” Yalif told VentureBeat via email. “We’ve become especially popular with business-to-business brands along with ecommerce. For example, we helped Snowflake generate 49% more leads with their website, while Sumo Logic accelerated decades of traditional testing and optimized across more than 1 billion versions of their site.”

AI-driven personalization

Intellimize leverages AI to generate webpages for visitors in real time. It enables customers to optimize for one or multiple goals simultaneously and tap third-party machine learning services for further customization, Yalif says, making adjustments in response to user behaviors.

“One-size-fits-all websites are the biggest squandered opportunity in all of marketing for all industries … Intelligent website optimization is essential tech to the modern marketing stack, and our investors share this sentiment,” he added.

Marketers begin by creating experiences for prospects. Intellimize’s AI then runs combinations of experiences and learns what converts, drawing on data including location, device type, time of day, day of the week, and traffic source. Finally, the platform delivers the best-performing experience to customers.
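
The loop described above (serve variants, record conversions per context, converge on the winner) resembles a contextual bandit. A minimal epsilon-greedy sketch, with invented variant names and conversion rates rather than anything from Intellimize:

```python
import random

class ExperienceOptimizer:
    """Epsilon-greedy sketch: try page variants, record conversions per
    context, then mostly serve the variant with the best observed
    conversion rate for that context while still exploring a little."""

    def __init__(self, variants, epsilon=0.1):
        self.variants = variants
        self.epsilon = epsilon
        self.stats = {}  # (context, variant) -> (conversions, views)

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.variants)  # keep exploring
        def rate(v):
            conv, views = self.stats.get((context, v), (0, 0))
            return conv / views if views else 0.0
        return max(self.variants, key=rate)

    def record(self, context, variant, converted):
        conv, views = self.stats.get((context, variant), (0, 0))
        self.stats[(context, variant)] = (conv + int(converted), views + 1)

opt = ExperienceOptimizer(["hero_a", "hero_b"])
# Simulated traffic: mobile visitors convert far better on hero_b.
random.seed(0)
for _ in range(2000):
    v = opt.choose("mobile")
    opt.record("mobile", v, random.random() < (0.2 if v == "hero_b" else 0.05))
```

After enough traffic, the optimizer's observed rates separate the variants and the better one dominates serving.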


Looker says it drove a fivefold return on investment with San Mateo, California-based Intellimize by nudging prospective customers toward content, demo requests, and other web forms.

“Our goal is to help more marketers deliver more revenue, more customers, and more leads to sales,” Yalif continued. “We will help more conversion-obsessed marketers dynamically adjust their websites to each unique visitor’s changing behavior over time.”

The AI-driven personalization market has grown substantially in recent years. In June 2019, Amazon launched Personalize, a service that supports the development of websites, mobile apps, and content management and email marketing systems that suggest products and provide tailored search results. More recently, Google added AI-powered app personalization to app development platform Firebase. And on the startup side, Evolv Technologies, Optimizely (which was acquired by Episerver last year), and others have raised millions of dollars to automate A/B web testing with AI algorithms.

Tomi.ai raises $1M to help brick-and-mortar companies optimize digital ads

Tomi.ai, an AI-powered platform that optimizes digital ads, today announced it has raised $1 million in seed funding from Begin Capital and the Phystech Leadership Fund. Tomi founder and CEO Konstantin Bayandin says the funds will be used to expand the company’s platform.

Companies with long and offline sales cycles in industries like real estate, automotive, and financial services sometimes struggle to optimize their digital ads for business outcomes. Due to low conversion rates and the offline nature of these outcomes, companies tailor ads to leads and clicks, which can result in exorbitant customer acquisition costs.

Tomi aims to solve this by collecting online data from a tracking pixel on a company’s website and ad platform API integrations, as well as transactions from customer relationship management systems. After recording 100 to 300 “positive outcomes” to train machine learning models, the service conducts a “dry run” on Facebook’s and Google’s ad platforms, comparing the results of controlled experiments to track performance uplift across platforms, channels, and campaigns.
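
The dry-run comparison step reduces to measuring uplift between a model-targeted group and an untargeted control group. A toy calculation (the campaign figures are hypothetical, not Tomi's):

```python
def uplift(treatment_conv, treatment_n, control_conv, control_n):
    """Relative lift of the model-targeted (treatment) group over the
    untargeted control group in a dry run."""
    t_rate = treatment_conv / treatment_n
    c_rate = control_conv / control_n
    return (t_rate - c_rate) / c_rate

# Hypothetical dry-run results per ad platform.
campaigns = {
    "facebook": uplift(66, 1000, 50, 1000),  # 6.6% vs. 5.0% conversion
    "google":   uplift(45, 1000, 50, 1000),  # 4.5% vs. 5.0% conversion
}
```

Here the Facebook campaign shows a 32% relative lift while the Google campaign shows a 10% drop, which is exactly the kind of cross-platform signal a dry run is meant to surface.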

Bayandin worked as senior director of digital marketing and technology at Compass and chief marketing officer at Ozon, where he focused on predictive modeling. While working at Compass, Bayandin says he witnessed how limited the opportunities were for mostly offline industries with long sales cycles compared with ecommerce at Ozon.

“The vision is that Tomi becomes the gold standard solution for Facebook and Google ads targeting and optimization in traditional industry verticals with lead gen marketing for long offline sales cycles,” Bayandin told VentureBeat via email. “A number of tools advertise they do targeting and optimization of ad campaigns, a few tools do predictive targeting and optimization, but all of them use third-party data in one way or another, and it’s only us who rely on first-party data. Product-wise, we differ because of our laser focus on traditional industries that can’t leverage the power of ad systems’ smart bidding, owing to low conversions and a long sales cycle.”

Technical backend

With Tomi, customers pay only for incremental customer lifetime value and business outcomes, and they can use audiences optimized for expected lifetime value and target new visitors with high intent. The platform runs on a high-load, Google Cloud-powered instance that processes 30 million hits per day.

“Our machine learning algorithms have to learn from a few positive examples counting from 100 or more. The models also have to be stable in terms of little changes in user behavior so that the variance for predictions is as little as possible. We are exploiting ‘bias variance tradeoff’ a lot by substituting rare actual transaction events by numerous synthetic conversions with non-discrete values,” Bayandin explained. “We also use a variant of transfer learning by training machine learning models on the overall website traffic and applying them to paid acquisition with the premise that user behavior depends on user intent rather than on the source of traffic. We use feature engineering for our models based on industry-specific learning that we have learned with our customers.”
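
Bayandin's "synthetic conversions" idea (accepting some bias to cut variance when true transactions are rare) can be illustrated with a toy scoring rule. The proxy events and weights below are invented for illustration, not Tomi's actual features:

```python
# Instead of rare 0/1 purchase labels, score every session with a
# smooth proxy built from intermediate events. The weights are
# invented; a real system would fit them to historical data.
PROXY_WEIGHTS = {"viewed_pricing": 0.2, "started_form": 0.5, "called_office": 0.8}

def synthetic_conversion(session_events, purchased):
    if purchased:                  # real transactions keep full weight
        return 1.0
    score = sum(PROXY_WEIGHTS.get(e, 0.0) for e in session_events)
    return min(score, 0.95)       # cap so no proxy outranks a real sale

labels = [
    synthetic_conversion(["viewed_pricing"], False),
    synthetic_conversion(["viewed_pricing", "started_form"], False),
    synthetic_conversion([], True),
]
```

Training a regressor on these non-discrete labels gives the model far more signal per session than waiting for the handful of true offline transactions.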

Bayandin characterizes the platform as a “natural fit” for large customers, such as marketplaces, real estate, and financial companies — it currently has 10 midsize and enterprise customers. “We’ve built the platform with only eight people in the team now and plan to have 20 by the end of the year,” Bayandin said. “We launched the product just over a year ago and the revenue has grown 4 times since then.” He said the company plans “to focus on growth and product development to release a self-service platform later.”

AMD Won’t Optimize Nvidia Graphics Cards for Super Resolution

AMD is set to release its new FidelityFX Super Resolution technology this month. The tech is its response to Nvidia’s Deep Learning Super Sampling (DLSS), an upscaling technology that improves gaming performance. Although AMD made FSR open-source and cross-vendor, Radeon’s vice president states that the feature will not be optimized for Nvidia cards.

FidelityFX Super Resolution is a much-anticipated technology that might shake things up on the upscaling front for both AMD and Nvidia. As it’s said to offer up to twice the performance in games at 4K with ray tracing enabled, it stands a chance of challenging Nvidia’s DLSS.

FSR reconstructs the image to make it appear like it’s rendering at a higher resolution. This might make a game that was initially 1080p appear to be 1440p. According to AMD, FidelityFX is a spatial upscaling technique that doesn’t rely on motion vectors or history buffers.


While FSR was made to be a competitor to Nvidia’s DLSS, it doesn’t work the same way. Nvidia’s tech is artificial intelligence-based, while FSR does not use any machine learning. Instead, AMD revealed, FSR will use a combination of linear and nonlinear upscaling, as opposed to Nvidia’s strictly deep learning approach.
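
AMD hadn't detailed the algorithm at this point, but the shape of such a pipeline (a linear interpolation pass followed by a nonlinear, clamped sharpening pass, with no motion vectors or frame history) can be illustrated on a one-dimensional row of pixel values. This is a generic spatial upscaler sketch, not FSR itself:

```python
def linear_upscale(row):
    """Linear pass: insert the midpoint between each pair of neighbors,
    roughly doubling the sample count."""
    out = []
    for i, v in enumerate(row):
        out.append(v)
        if i + 1 < len(row):
            out.append((v + row[i + 1]) / 2)
    return out

def sharpen(row, amount=0.5):
    """Nonlinear pass: an unsharp-mask step with clamping, so edges are
    boosted but values can't overshoot the displayable range."""
    out = [row[0]]
    for i in range(1, len(row) - 1):
        local_avg = (row[i - 1] + row[i] + row[i + 1]) / 3
        v = row[i] + amount * (row[i] - local_avg)
        out.append(min(max(v, 0.0), 1.0))  # the clamp is the nonlinearity
    out.append(row[-1])
    return out

low_res = [0.0, 0.0, 1.0, 1.0]   # a hard edge in a 4-pixel row
high_res = sharpen(linear_upscale(low_res))
```

Because each output pixel depends only on the current frame, a pipeline like this needs no history buffer, which is the property AMD highlights.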

AMD’s new upscaling solution is going to be fully open-source. The company has previously spoken about the benefits of going open-source as well as cross-vendor, promising that FSR would function even on non-AMD graphics cards. However, it seems that promise will only be fully realized if Nvidia does the optimization work itself.

Scott Herkelman, the vice president and general manager of Radeon, announced on Twitter that the new technology will not be available to Nvidia users. He explained this by saying: “Just to be clear, though — we aren’t optimizing it for GeForce. That will be up to them to do the work on behalf of their gaming community — we just proved it works.”

This may come as a disappointment to some users who do not own a Radeon card. Collaboration between the two brands may happen, but it’s definitely not a guarantee. The ball is in Nvidia’s court and the company may not be willing to spare resources on optimizing FidelityFX to work on GeForce GPUs.

FSR is an interesting solution that could dethrone DLSS at some point. However, as of right now, Nvidia is likely to maintain the upper hand due to the performance differences between the two. With FidelityFX Super Resolution launching on June 22, we will just have to wait for Nvidia’s response to know whether FSR will be optimized for GeForce cards.

Walgreens used AI to optimize vaccine outreach emails

As of May 18, nearly 40% of the U.S. population had been fully vaccinated against COVID-19, with close to 50% having received at least one shot. But outreach remains a major challenge. McKinsey estimated in December that vaccine adoption would require “unprecedented” public and private action and incremental investment of about $10 billion. Highlighting the unevenness in the rollout, a lower percentage of Black Americans than of the general population had been vaccinated by March in every state reporting statistics by race.

Governments at the local, state, and federal levels are involved in distributing and administering vaccines, alongside private-sector partners like pharmacy chains, grocers, and retailers. Among those is Walgreens, which now offers same-day COVID-19 vaccine appointments in most of its U.S. retail locations.

At the beginning of the COVID-19 vaccine rollout, Walgreens aimed to ensure that it had strong customer engagement and high open rates of its email communications on vaccine availability. To meet this goal, the company partnered with Phrasee, an AI-powered copywriting platform, to create a targeted email marketing campaign for customers.

AI-powered marketing

When McKinsey surveyed 1,500 executives across industries and regions in 2018, 66% said addressing skills gaps related to automation and digitization was a “top 10” priority. Forrester predicts that 57% of business-to-business sales leaders will invest more heavily in tools with automation. That’s perhaps why Salesforce anticipates the addressable market for customer intelligence will grow to $13.4 billion by 2025, up from several billion today.

According to loyalty and personalization director Brian Tyrrell, Walgreens leveraged Phrasee’s technology to create more engaging subject lines and bodies that reflected the right degree of urgency. “We knew that our largest owned channel in terms of reach was our email channel. We had the opportunity to communicate with 50 million customers there. So when we started rolling out communications about testing, and now the vaccine, it was never more important to ensure that these customers were opening that content,” Tyrrell said in a statement.

Phrasee, which was founded in 2015 by Neil Yager, Parry Malm, and Victoria Peppiatt, offers an email “optimization” product that combines AI and computational linguistics to generate, automate, and analyze language in real time. Phrasee tailors language for email subject lines, in-body copy, and calls-to-action, reminding customers about things like abandoned shopping carts and important sale announcements.
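
The analysis half of that loop (deciding which generated subject line is actually winning) comes down to comparing open rates without being fooled by tiny samples. A sketch using smoothed rates and invented variants; this is not Phrasee's actual model:

```python
def best_subject(results, prior_opens=20, prior_sends=100):
    """Pick the variant with the best smoothed open rate. The prior
    (here equivalent to a 20% baseline over 100 phantom sends) keeps a
    variant with 1 open out of 1 send from beating one with 420 opens
    out of 1,000 sends."""
    def smoothed(stats):
        opens, sends = stats
        return (opens + prior_opens) / (sends + prior_sends)
    return max(results, key=lambda line: smoothed(results[line]))

# Hypothetical variants with (opens, sends) from a live test.
results = {
    "Your COVID-19 vaccine appointment is ready": (420, 1000),
    "Don't miss out! Book your shot today": (310, 1000),
    "Big news inside": (1, 1),
}
winner = best_subject(results)
```

The smoothing prior is a judgment call: a stronger prior demands more evidence before a new variant can unseat the incumbent.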

Walgreens tapped the Phrasee platform in March 2020 to change its use of emojis in emails. The goal was to make sure “urgent” emojis, like the red alarm bell, were being used in a way that matched the level of severity covered in the content.

“The folks at Phrasee helped us understand how customers were engaging with different parts of those subject lines, to know where to tone down and where to tone up certain parts of our brand language. We also made everything much more simplified so that customers could digest the content of our emails as easily as possible,” Tyrrell said.

While the platform helped Walgreens simplify its emails, Phrasee’s team also partnered with the retailer to make sure the tone of its campaign was appropriate. Specifically, Walgreens altered some of the “fun” language that felt tone-deaf during the early pandemic.

“One of the most important elements is that we (Walgreens) maintained a very consistent and authentic tone of voice. We take a lot of this feedback loop that we get from Phrasee to roll back up into how we develop brand tone as a brand in its entirety,” Tyrrell said.

Expanded partnership

Walgreens said it saw a 30% increase in email open rates after implementing Phrasee’s suggested changes. This means 30% more customers received info on available vaccine appointments — and potentially up to 30% more customers scheduled a vaccine.

Since March, Walgreens has expanded its partnership with Phrasee beyond optimizing email subject lines. Now, the retailer and Phrasee, along with Adobe, are helping optimize messaging throughout the customer experience.

“Phrasee’s really good at subject lines. But what’s next, how can we use this for app push messaging? How can you power the content within our emails instead of just the subject lines? So we’re really expanding our partnership into other areas,” Tyrrell said. “If we only have one shot to get customers to engage and get a vaccine today, how can we put our best foot forward?”

The Fusion Project aims to optimize data collection from vehicles

The Fusion Project, which promises to provide a more efficient way to collect the data required to train AI models for autonomous vehicles, is being launched today by Airbiquity, Cloudera, NXP Semiconductors, Teraki, and Wind River.

The goal is to compress the data collected from autonomous vehicles to the point where the AI models those vehicles employ can be updated faster. Today, autonomous vehicles rely on inference engines based on AI models trained in the cloud; the automotive industry is a long way from being able to train AI models in real time on the vehicle itself. In the meantime, the members of the Fusion Project are committing to making data easier to collect by compressing it on the vehicle before it is transferred back to the AI models residing in the cloud.
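
The kind of on-vehicle reduction the project describes can be sketched as thinning, delta-encoding, and losslessly compressing a sensor stream before upload. The sample stream and encoding below are illustrative only, not the consortium's format:

```python
import struct
import zlib

def pack_for_upload(samples, keep_every=5):
    """Sketch of on-vehicle reduction before cloud upload: thin the
    sensor stream, delta-encode it (neighboring readings are similar,
    so residuals are small), then losslessly compress the residuals."""
    thinned = samples[::keep_every]
    deltas = [thinned[0]] + [b - a for a, b in zip(thinned, thinned[1:])]
    raw = struct.pack(f"{len(deltas)}i", *deltas)
    return zlib.compress(raw)

def unpack(blob):
    """Cloud-side inverse: decompress, then integrate the deltas."""
    raw = zlib.decompress(blob)
    deltas = list(struct.unpack(f"{len(raw) // 4}i", raw))
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# A slowly drifting reading sampled 100 times (e.g., wheel speed in mm/s).
stream = [10_000 + i * 3 for i in range(100)]
blob = pack_for_upload(stream)
```

The thinning step is lossy by design; what survives round-trips exactly, at a fraction of the raw byte count, which is the economics the project is after.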

Those data compression techniques will eventually be applied to other forms of transportation such as trains and planes, said David LeGrand, senior industry and solutions marketing manager for manufacturing and retail at Cloudera.

The members of the Fusion Project are pledging to develop an integrated embedded system for collecting compressed data from vehicles that can be fed back to a cloud platform. That capability will substantially reduce the cost of collecting data from what one day might be millions of vehicles, noted LeGrand.

In addition to compressing the collected data using software developed by Cloudera, the members of the Fusion Project will enable over-the-air updates to the inference engines installed in a vehicle using software management technology from Airbiquity.

NXP, meanwhile, will provide the vehicle processing platforms, while Teraki provides the AI software that will be deployed at the edge. Finally, Wind River will provide the embedded system software.

Initially, the Fusion Project will focus its efforts on advancing the ability of autonomous vehicles to recognize when to optimally change lanes based on the data gathered via vision AI engines installed in the vehicle, said LeGrand, who added that the first tests of vehicles embedded with Fusion Project technologies will take place in Europe.

The immediate goal is not to eliminate the need for drivers, but rather to take the alert systems most vehicles have today to the next level by training AI models on the data about the actual driving experience being collected by vehicles, noted LeGrand. “It’s not going to be fully autonomous,” said LeGrand. “It’s more like a driver-assist system.”

There are, of course, fully autonomous vehicles that can follow a highly prescribed set of programming instructions to get from one point to another. The challenge is that the level of responsiveness required for an autonomous vehicle to navigate traffic that includes human drivers, who are liable to make unpredictable decisions, remains elusive.

There may eventually come a day when AI models embedded within a vehicle could be trained and updated in real time. Today, achieving that goal would require the equivalent of a server based on a graphics processing unit (GPU) to be installed in the trunk of every vehicle. Naturally, that would make autonomous vehicles prohibitively expensive.

In the meantime, the process of transferring data between inference engines and the AI models on which they are based will continue to become more efficient. The AI model might not make it all the way out to the vehicle itself, but it will become more feasible to deploy AI models at the network edge. The challenge, of course, is finding a way to achieve that goal in a way that is economically viable for automotive manufacturers.
