Report: Enterprise use of AI to predict cash flow expected to increase 450%

Enterprise deployment of AI and machine learning (ML) for cash flow forecasting is expected to increase 450% over the next two years, according to the recently released 2021 Cash Forecasting & Visibility Survey from GTreasury and Strategic Treasurer. The survey of nearly 250 enterprises across industries highlights a growing appetite for AI/ML modernization among finance and treasury teams seeking more accurate and more immediate cash flow forecasts.

To sharpen forecasting capabilities (which are critical for determining business direction and priorities), today’s enterprises are embracing new technology strategies and refining methods to introduce greater automation and efficiency. While just 6% of respondents currently use AI/ML technology to predict and understand their cash forecasting, enterprises’ reported plans indicate that, within two years, that number will reach 27%.

Respondents also indicate a similarly bright trajectory for regression analysis: 12% use it currently, but projected usage will grow to 29% in two years, and 43% use or expect to use it at some point in the future.

Graphic. Title: Rise of AI/ML in the near future. Currently, 6% of respondents use AI/ML and 12% use regression analysis. Within the next two years, another 21% plan to adopt AI/ML and 17% plan to adopt regression analysis; beyond two years, a further 24% plan to adopt AI/ML and 14% regression analysis.

The vast majority of enterprises still rely on traditional manual methods for cash forecasting — 91% of survey respondents report using Excel spreadsheets as one of their forecasting tools. In comparison, 25% have a more modern digital treasury platform in place, and 28% use ERP systems. Fifteen percent use financial reporting and analysis (FR&A) or budgeting tools to assist in their forecasts, and just 5% use a dedicated forecasting platform.

Variance analysis is another task requiring heavy manual effort from enterprises: 57% of respondents say that their variance analysis activities are fully manual, and another 19% report significant manual activities. One-fifth of companies avoid this manual effort only by performing no variance analysis whatsoever. The remaining 5% of respondents run variance analysis backed by fully automated processes.

The survey’s findings are beads strung along a common thread: Enterprises recognize and demand the benefits of more efficient and effective cash forecasting. With investments in AI/ML and other advanced capabilities, many enterprises are already pursuing new strategies and spending what it takes to place the tools and technologies they require at their command.

Read the full report from GTreasury and Strategic Treasurer.


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

Repost: Original Source and Author Link


HTC Vive Flow headset images leak days before reported launch

HTC is expected to launch a new VR headset within the week, but you don’t have to wait until then to see what it looks like. A collection of Vive Flow images has made its way online, courtesy of evleaks, before the launch event. According to Protocol, the Vive Flow is a lightweight headset developed for consumers under the internal name “Hue.” The Bluetooth SIG consortium previously published documents describing Hue as a VR AIO (all-in-one) product, which means the device could be a standalone headset that doesn’t need a phone and doesn’t have to be tethered to a PC to work.

The company reportedly wants to position the Vive Flow primarily as a way to consume media, with some access to gaming. Its chip is less powerful than the Oculus Quest 2’s, Protocol says, but it will have six degrees of freedom tracking. The leaked images also reveal more about the device, including a dual-hinge system to ensure it fits most people, a snap-on face cushion, immersive spatial audio, adjustable lenses, and an active cooling system. After you pair your phone with it via Bluetooth, you can use your mobile device as a VR controller and stream content to VR using Miracast tech.

In addition, the images show that the Vive Flow will be available for pre-order starting on October 15th, with shipments going out in early November. The headset will set you back US$499, which is $200 more than the Quest 2’s launch price, and pre-orders include seven free pieces of virtual reality content and a carrying case.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.



Data flow automation engine Prefect raises $32M

Dataflow automation startup Prefect today announced it has raised $32 million in a series B funding round led by Tiger Global. The company says it will use the capital to further develop its platform, attract talent, and support its growing community of users.

When McKinsey surveyed 1,500 executives across industries and regions in 2018, 66% said addressing skills gaps related to automation and digitization was a “top 10” priority. Salesforce’s recent Trends in Workflow Automation report found that 95% of IT leaders are prioritizing automation and 70% of execs are seeing the equivalent of over four hours saved per employee each week. Moreover, according to market research firm Fact.MR, the adoption of business workflow automation at scale could create a market opportunity worth over $1.6 billion between 2017 and 2026.

Prefect, which was founded in 2018, offers a platform that can build, run, and monitor up to millions of data workflows and pipelines. The company’s hybrid execution model keeps code and data private while taking advantage of a managed orchestration service. Customers can use Prefect for scheduling, error handling, data serialization, and parameterization, leveraging a Python framework to combine tasks into workflows and then deploy and monitor their execution through a dashboard or API.
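The pattern of decorated tasks composed into a monitorable workflow can be sketched in plain Python. This is not Prefect’s actual API; the `task` wrapper, the state tracking, and the example tasks below are invented stand-ins that only illustrate the idea:

```python
import functools

def task(fn):
    """Wrap a function so each call records a simple success state,
    mimicking the per-task state a workflow engine tracks."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        wrapper.last_state = "Success"
        return result
    wrapper.last_state = "Pending"
    return wrapper

@task
def extract():
    return [100, 250, 75]          # stand-in for a real data source

@task
def transform(rows):
    return [r * 2 for r in rows]   # placeholder business logic

@task
def load(rows):
    return len(rows)               # e.g. rows written to a warehouse

# Compose the tasks into a "flow" and run it end to end.
loaded = load(transform(extract()))
print(loaded, extract.last_state)  # prints: 3 Success
```

In Prefect’s hybrid model, only metadata like these task states would leave the customer’s environment; the data returned by each task stays private.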

Prefect customers can design their workflows with a framework called Core, which sends metadata to Prefect’s cloud in order to register a flow for scheduling. Flow updates are asynchronously sent to the cloud as metadata, ensuring Prefect can’t view the content of the flows themselves.

Hybrid engine

Founder and CEO Jeremiah Lowin says the key to Prefect’s cloud hybrid execution model lies in agents — small open source programs that can launch flows into any environment. Agents can stream real-time state updates and kick off new runs, with an API to query data as well as join, filter, sort, and transform it.

Prefect’s scheduler service offers options that allow for per-run changes, and it lets users label flows so they’re picked up by agents with matching labels to support multiple environments. Prefect can start, pause, and resume tasks at any time, allowing manual steps like review and approval, and it can give flows access to sensitive information at runtime, including API keys or passwords.

If a task crashes unexpectedly, Prefect can restart it autonomously via special “Lazarus” agents. And the platform alerts stakeholders when an agent goes offline, exposing logs for streaming, filtering, and searching.


Above: Prefect’s online dashboard. (Image credit: Prefect)

Lowin says Prefect had a banner year in 2020, with 130% quarter-over-quarter usage growth since February 2020. The company recently announced a relationship with Microsoft for Startups to advance dataflow automation, and it claims its platform is now processing 25 million tasks per month and 2 million workflows per month.

Prefect’s success aligns with broader industry trends in digital transformation. According to McKinsey, data flows have raised the global gross domestic product by at least 10% over a decade. The value totaled $7.8 trillion in 2014 alone, contributing to economic growth primarily by raising productivity.

Market Research Future predicts the global data analytics market alone will be valued at over $132 billion by 2026. A range of organizations can use data to boost their marketing strategies, increase their bottom line, personalize their content, and better understand their customers. In fact, businesses that use big data increase their profits by an average of 8%, according to a survey conducted by BARC research.

Bessemer Venture Partners also participated in Prefect’s most recent funding round. This brings the Washington, D.C.-based company’s total raised to date to over $46.7 million.




We won’t travel faster than light anytime soon, but we might be able to distort the flow of time

In 1994, physicist Miguel Alcubierre proposed a radical technology that would allow faster than light travel: the warp drive, a hypothetical way to skirt around the universe’s ultimate speed limit by bending the fabric of reality.

It was an intriguing idea – even NASA has been researching it at the Eagleworks laboratory – but Alcubierre’s proposal contained problems that seemed insurmountable. Now, a recent paper by US-based physicists Alexey Bobrick and Gianni Martire has resolved many of those issues and generated a lot of buzz.

But while Bobrick and Martire have managed to substantially demystify warp technology, their work actually suggests that faster-than-light travel will remain out of reach for beings like us, at least for the time being.

There is, however, a silver lining: warp technology may have radical applications beyond space travel.

Across the universe?

The story of warp drives starts with Einstein’s crowning achievement: general relativity. The equations of general relativity capture the way in which spacetime – the very fabric of reality – bends in response to the presence of matter and energy which, in turn, explains how matter and energy move.

General relativity places two constraints on interstellar travel. First, nothing can be accelerated past the speed of light (around 300,000 km per second). Even traveling at this dizzying speed it would still take us four years to arrive at Proxima Centauri, the nearest star to our Sun.

Second, the clock on a spaceship traveling close to the speed of light would slow down relative to a clock on Earth (this is known as time dilation). Assuming a constant state of acceleration, this makes it possible to travel the stars. One can reach a distant star that is 150 light years away within one’s lifetime. The catch, however, is that upon one’s return more than 300 years will have passed on Earth.
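The slowdown comes from the Lorentz factor, γ = 1/√(1 − v²/c²): shipboard clocks tick 1/γ as fast as Earth clocks. A quick numerical illustration (the velocities are chosen arbitrarily):

```python
import math

def lorentz_factor(v_over_c):
    """Time-dilation factor gamma for a ship moving at the given
    fraction of light speed; shipboard clocks run 1/gamma as fast
    as clocks on Earth."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# At 99% of light speed, roughly 7 Earth years pass per shipboard year.
for v in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {v}c  ->  gamma = {lorentz_factor(v):.2f}")
```

At 0.5c the effect is mild (γ ≈ 1.15), but it grows without bound as v approaches c, which is why a fast round trip returns you to an Earth that has aged far more than you have.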

A new hope

This is where Alcubierre came in. He argued that the mathematics of general relativity allowed for “warp bubbles” – regions where matter and energy were arranged in such a way as to bend spacetime in front of the bubble and expand it to the rear in a way that allowed a “flat” area inside the bubble to travel faster than light.

To get a sense of what “flat” means in this context, note that spacetime is sort of like a rubber mat. The mat curves in the presence of matter and energy (think of putting a bowling ball on the mat). Gravity is nothing more than the tendency objects have to roll into the dents created by things like stars and planets. A flat region is like a part of the mat with nothing on it.

Such a drive would also avoid the uncomfortable consequences of time dilation. One could potentially make a round trip into deep space and still be greeted by one’s nearest and dearest at home.

A spacetime oddity

How does Alcubierre’s device work? Here discussion often relies on analogies, because the math is so complex.

Imagine a rug with a cup on it. You’re on the rug and you want to get to the cup. You could move across the rug, or tug the rug toward you. The warp drive is like tugging on spacetime to bring your destination closer.

But analogies have their limits: a warp drive doesn’t really drag your destination toward you. It contracts spacetime to make your path shorter. There’s just less rug between you and the cup when you switch the drive on.

Alcubierre’s suggestion, while mathematically rigorous, is difficult to understand at an intuitive level. Bobrick and Martire’s work is set to change all that.

Starship bloopers

Bobrick and Martire show that any warp drive must be a shell of material in a constant state of motion, enclosing a flat region of spacetime. The energy of the shell modifies the properties of the spacetime region inside it.

This might not sound like much of a discovery, but until now it was unclear what warp drives might be, physically speaking. Their work tells us that a warp drive is, somewhat surprisingly, like a car. A car is also a shell of energy (in the form of matter) that encloses a flat region of spacetime. The difference is that getting inside a car does not make you age faster. That, however, is the kind of thing a warp drive might do.

Using their simple description, Bobrick and Martire demonstrate a method for using Einstein’s general relativity equations to find spacetimes that allow for arrangements of matter and energy that would act as warp bubbles. This gives us a mathematical key for finding and classifying warp technologies.

Their work manages to address one of the core problems for warp drives. To make the equations balance, Alcubierre’s device runs on “negative energy” – but we are yet to discover any viable sources of negative energy in the real world.


Facebook’s feckless ‘Fairness Flow’ won’t fix its broken AI

Facebook today posted a blog post detailing a three-year-old solution to its modern AI problems: an algorithm inspector that only works on some of the company’s systems.

Up front: Called Fairness Flow, the diagnostic tool allows machine learning developers at Facebook to determine whether certain kinds of machine learning systems contain bias against or towards specific groups of people. It works by inspecting the data flow for a given model.

Per a company blog post:

To measure the performance of an algorithm’s predictions for certain groups, Fairness Flow works by dividing the data a model uses into relevant groups and calculating the model’s performance group by group. For example, one of the fairness metrics that the toolkit examines is the number of examples from each group. The goal is not for each group to be represented in exactly the same numbers but to determine whether the model has a sufficient representation within the data set from each group.

Other areas that Fairness Flow examines include whether a model can accurately classify or rank content for people from different groups, and whether a model systematically over- or underpredicts for one or more groups relative to others.
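The group-by-group measurement described in the excerpt can be sketched generically. The function and toy data below are invented for illustration and are not Facebook’s actual tooling:

```python
from collections import defaultdict

def per_group_accuracy(examples):
    """Split (group, prediction, label) records by group and compute
    each group's example count and accuracy -- the kind of per-group
    breakdown a diagnostic tool like Fairness Flow reports."""
    counts = defaultdict(int)
    correct = defaultdict(int)
    for group, pred, label in examples:
        counts[group] += 1
        correct[group] += int(pred == label)
    return {g: (counts[g], correct[g] / counts[g]) for g in counts}

# Invented toy data: (group, model_prediction, true_label).
records = [
    ("A", 1, 1), ("A", 0, 1), ("A", 1, 1), ("A", 1, 1),
    ("B", 0, 0), ("B", 1, 0),
]
print(per_group_accuracy(records))
# Group A: 4 examples, 75% accurate; group B: 2 examples, 50% accurate.
```

Gaps like B’s smaller representation and lower accuracy are exactly what such a diagnostic surfaces; deciding what to do about them is left to the developers.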

Background: The blog post doesn’t clarify exactly why Facebook’s touting Fairness Flow right now, but its timing gives a hint at what might be going on behind the scenes at the social network.

MIT Technology Review’s Karen Hao recently penned an article exposing Facebook’s anti-bias efforts. Her piece asserts that Facebook is motivated solely by “growth” and apparently has no intention of combating bias in AI where doing so would inhibit its ceaseless expansion.

Hao wrote:

It was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth.

In the wake of Hao’s article, Facebook’s top AI guru, Yann LeCun, immediately pushed back against the article and its reporting.

Facebook had allegedly timed the publishing of a research paper to coincide with Hao’s article. Based on LeCun’s reaction, the company appeared gobsmacked by the piece. Now, a scant few weeks later, we’ve been treated to a 2,500+ word blog post on Fairness Flow, a tool that addresses the exact problems Hao’s article discusses.


However, “addresses” might be too strong a word. Here are a few snippets from Facebook’s blog post on the tool:

  • Fairness Flow is a technical toolkit that enables our teams to analyze how some types of AI models and labels perform across different groups. Fairness Flow is a diagnostic tool, so it can’t resolve fairness concerns on its own.
  • Use of Fairness Flow is currently optional, though it is encouraged in cases that the tool supports.
  • Fairness Flow is available to product teams across Facebook and can be applied to models even after they are deployed to production. However, Fairness Flow can’t analyze all types of models, and since each AI system has a different goal, its approach to fairness will be different.

Quick take: No matter how long and boring Facebook makes its blog posts, it can’t hide the fact that Fairness Flow can’t fix any of the problems with Facebook’s AI.

The reason bias is such a problem at Facebook is because so much of the AI at the social network is black box AI – meaning we have no clue why it makes the output decisions it does in a given iteration.

Imagine a game where you and all your friends throw your names in a hat and then your good pal Mark pulls one name out and gives that person a crisp five dollar bill. Mark does this 1,000 times and, as the game goes on, you notice that only your white, male friends are getting money. Mark never seems to pull out the name of a woman or non-white person.

Upon investigation, you’re convinced that Mark isn’t intentionally doing anything to cause the bias. Instead, you determine the problem must be occurring inside the hat.

At this point you have two options. First, you can stop playing the game and get a new hat. This time, you try it out before you play again to make sure it doesn’t have the same biases.

Or you could go the route that Facebook’s gone: tell people that hats are inherently biased, and you’re working on new ways to identify and diagnose those problems. After that, just insist everyone keep playing the game while you figure out what to do next.

Bottom line: Fairness Flow is nothing more than an opt-in “observe and report” tool for developers. It doesn’t solve or fix anything.

Published March 31, 2021 — 17:38 UTC
