
Grow up: 5 reasons why many businesses are still in ‘AI adolescence’



Here’s what businesses can learn from the small group of organizations that already use artificial intelligence (AI) to their competitive advantage. 

If the world’s largest companies were people, most would be in their teenage years when it comes to using Artificial Intelligence (AI).

According to new research from Accenture on AI maturity, 63% of 1,200 companies were identified as “Experimenters,” or companies that are stuck in the experimentation phase of their AI lives. They have yet to leverage the technology’s full potential to innovate and transform their business, and they risk leaving money on the table. 

This is money that the most AI-mature organizations are already pocketing. While the “AI adults” (dubbed Achievers in the research) are only a small group — representing 12% of companies — they are reaping big rewards: By outperforming their peers on AI, they are increasing their revenue growth by 50% on average. How? Because they master key capabilities in the right combination by having command of the technology itself — including data, AI and cloud — as well as their organizational strategy, responsible use of AI, C-suite sponsorship, talent and culture. 


Unlike people, companies don’t necessarily grow up and graduate into adulthood in a relatively fixed period. Instead, they hold their development in their own hands. This makes it crucial to understand what keeps adolescent AI users from reaching their maturity. They typically share the five following characteristics:

1. Their C-suite has not bought into AI’s ability to spur growth

Only 56% of Experimenters have CEO and senior sponsorship — compared to 83% of Achievers — signaling that AI maturity starts with leadership buy-in. What’s more, Achievers are four times more likely than Experimenters to implement platforms that encourage idea sharing and make it easy to pose questions internally. In one example of innovation emboldened by leadership, a global digital platform is harnessing AI and generative design to create autonomous buildings that fit together like pieces of a LEGO set.

2. They are not investing in their team members

Experimenters are hampered by a shortage of AI-skilled workers. Furthermore, they have yet to invest in training that helps their employees reach AI literacy. While more than three-quarters of Achievers (78%) have mandatory AI training for everyone from engineers to C-suite executives, the same can be said for only 51% of Experimenters. 

To succeed with AI, Experimenters should reskill current team members in the technology. For example, a leading Southeast Asian oil and gas firm built a gamified platform to expand its employees’ digital fluency. It later created a cloud-based performance reviewer that assessed a decade’s worth of employee data to make recommendations for filling various digital roles. This reduced the time needed to fill positions and helped close the digital skills gap. 

3. Their AI use is not integrated across the enterprise

While 75% of all companies analyzed have incorporated AI into their business strategies and cloud plans, they lack a foundational AI core. To achieve AI maturity, they must integrate AI across the enterprise while also knowing when to tap external resources. 

Achievers are 32% more likely than Experimenters to develop custom-built machine learning applications or work with a partner to extract value from their data. For instance, one major U.S. credit card company created an innovative AI ecosystem by partnering with a technical university to create a dedicated analytics laboratory. The lab helped it stay on top of science and engineering breakthroughs.

4. They are designing AI without considering its implications

Scaling AI effectively relies on building responsibly from the start. With an increase in AI regulation, organizations that can demonstrate high-quality, trustworthy technology systems that are “regulation ready” will have a significant advantage in the marketplace. In fact, Achievers are already 53% more likely than their peers to develop and deploy AI responsibly. 

Otherwise, companies risk destroying trust with customers, employees, businesses and society. To combat this, a European-based pharmaceutical company created accountability mechanisms and risk management controls to ensure its AI-powered operations and services aligned with its core values. 

5. They wrongly believe AI has already plateaued

Companies that do not aggressively increase their AI spending risk being left behind. Leaders who successfully generate business value with AI know this is just the beginning, which is why, in the last year alone, 46% of CEOs mentioned the technology in their earnings calls.

By 2024, we project nearly half of companies (49%) will devote at least 30% of their technology budgets to AI, up from 19% in 2021. These organizations know the quality of their investments matters just as much as the quantity, and they are dedicated to simultaneously expanding AI’s scope while better integrating its solutions.  

AI means lifelong learning

Environments shape people, especially in their teenage years. It’s not so different with companies and the industries they are rooted in. Tech firms with little legacy technology have a natural AI advantage. Most insurance companies, on the other hand, are both hampered by this legacy and face a much higher degree of regulation. Not surprisingly, these are the sectors where AI maturity is highest and lowest, respectively. Still, most industries have their Achievers, and across the board, all are expected to mature further. By 2024, the overall share of Achievers will increase from the current rate of 12% to 27%. 

But even these “adults” will need to continue learning as technology is transforming every part of a business, sometimes leading to total enterprise reinvention. There’s plenty of room for growth around AI for everyone. 

Sanjeev Vohra leads Accenture’s data and AI service Applied Intelligence and is a member of Accenture’s Global Management Committee. 

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!




Edge Impulse lands $34M as the TinyML market continues to grow



As enterprises increasingly pilot AI technologies, tiny machine learning, or TinyML, is emerging as the preeminent way to cut down on the resources required for deployment. TinyML is a machine learning technique that can be implemented in low-energy systems, like sensors, to perform automated tasks. The technology is still very clearly AI, but with lower power usage and costs, and often no need for an internet connection.

Applications for TinyML run the gamut, but the most popular are in factories, retail, and agriculture. In the manufacturing sector, TinyML can prevent downtime by alerting workers to perform preventative maintenance based on equipment conditions. And in farming, TinyML can monitor the vitals of livestock to help identify the early signs of disease.
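The predictive-maintenance case is easy to picture in code. The sketch below is a deliberately minimal, hypothetical stand-in for an on-device model (the class name, window size, and threshold are invented): it flags sensor readings that deviate sharply from a rolling baseline, the kind of logic a TinyML anomaly detector might run in a few kilobytes of RAM.

```python
from collections import deque
import statistics

class VibrationMonitor:
    """Toy on-device anomaly detector: flags readings that deviate
    sharply from a rolling baseline of recent sensor values."""
    def __init__(self, window=20, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value):
        # Only judge deviations once a small baseline has accumulated
        if len(self.readings) >= 5:
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings) or 1e-9
            alert = abs(value - mean) / stdev > self.threshold
        else:
            alert = False
        self.readings.append(value)
        return alert

monitor = VibrationMonitor()
normal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0]
assert not any(monitor.check(v) for v in normal)
assert monitor.check(8.0)  # a sudden spike triggers a maintenance alert
```

A real deployment would replace the rolling statistics with a trained model, but the shape of the loop (read sensor, score locally, alert without a network round-trip) is the same.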

A number of startups offer products designed to help enterprises implement TinyML solutions, but among the most visible is Edge Impulse. Launched in 2019, Edge Impulse provides a platform and services for developing devices that leverage embedded AI and machine learning. It claims that nearly 30,000 developers from thousands of companies including Oura, Polycom, and NASA have created upwards of 50,000 custom projects using Edge Impulse, spanning industrial, logistics, consumer, and health applications.

Advancing TinyML

Edge Impulse was founded two years ago by Jan Jongboom and Zach Shelby. Jongboom previously contributed code to Mozilla’s now-discontinued operating system, Firefox OS, and led developer evangelism for several of Arm’s internet of things (IoT) platforms. Shelby comes from an investment background, having served as a member of the boards of Petasense and proptech startup CubiCasa.

Edge Impulse allows developers to collect or upload training data from devices, label the data, train a model, and deploy and monitor the model in a production environment. The platform supports development for machine learning for sensors, audio, and computer vision, specializing in TinyML industrial applications including predictive maintenance, asset tracking and monitoring, and sensing.

“Accuracy — which is normally used to assess the performance of a machine learning model — only tells a very small part of the story. You need to know the strengths and weaknesses of your model, know when it misses events or when it triggers false positives,” Shelby told VentureBeat via email. “[That said,] machine learning has huge value potential for all businesses working with sensor related data, from saving cost and better service customers to enabling whole new generations of feature value.”

Above: Edge Impulse’s development dashboard. (Image Credit: Edge Impulse)

To increase the efficiency of models trained on its platform, Edge Impulse uses a compiler that compiles models to C++. The company claims that this can reduce RAM usage by 25% to 55% and storage usage by up to 35% compared with rival approaches.
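To illustrate the general idea (this is not Edge Impulse’s actual compiler), compiling a model to C++ typically means baking quantized weights into `const` arrays that the linker can keep in flash rather than copying into RAM at startup. The Python sketch below is a hypothetical, minimal version of that code-generation step; the function and array names are invented.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def emit_cpp_header(name, weights):
    """Emit a C++ array marked const so it can stay in flash/ROM,
    one simplified reason compiled models need less RAM."""
    q, scale = quantize_int8(weights)
    body = ", ".join(str(v) for v in q)
    return (
        f"// scale: {scale:.6f}\n"
        f"const int8_t {name}[{len(q)}] = {{{body}}};\n"
    )

header = emit_cpp_header("layer0_weights", [0.5, -1.27, 0.0, 1.27])
print(header)
```

Storing int8 weights instead of 32-bit floats alone cuts the footprint by 4x; keeping them out of mutable memory saves RAM on top of that.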

“[We’ve seen] enterprise applications including human key word detection on battery-powered devices in wearables, predictive maintenance in the smart grid, gesture recognition using radar in devices, monitoring critical refrigeration equipment in the field, field detection of eye diseases, monitoring of welding quality using audio, [and] construction and manufacturing safety monitoring using computer vision and sensors,” Shelby said. “We saw customers slow down at the beginning of the pandemic, as they were not on the sites where they needed to collect data, but business has picked up very strongly.”

Growth year

According to Gartner, by 2027, machine learning in the form of deep learning will be included in over 65% of edge use cases — up from less than 10% in 2021. Meanwhile, ABI Research predicts that the TinyML market will grow from 15.2 million device shipments in 2020 to 2.5 billion in 2030.
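As a quick sanity check on what the ABI forecast implies, growing from 15.2 million shipments to 2.5 billion over a decade works out to roughly a 66% compound annual growth rate:

```python
# Implied compound annual growth rate (CAGR) of the ABI forecast:
# 15.2M device shipments in 2020 growing to 2.5B in 2030.
start, end, years = 15.2e6, 2.5e9, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 66% per year
```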

Reflecting the broader segment’s growth, Edge Impulse reports that the developer base on its platform increased by four times last year, with annual recurring revenue growing by three times. In related news, the company today announced that it raised $34 million in a series B round led by Coatue with participation from Canaan Partners, Acrew Capital, Fika Ventures, Momenta Ventures, and Knollwood Investment Advisory, tripling its valuation to $234 million and bringing its total capital raised to over $54 million.

Shelby says that the new funds will be used to expand Edge Impulse’s roughly 40-person team and its hardware partner network, which already includes Nvidia, Texas Instruments, Syntiant, and Synaptics. “We’ll use the money to accelerate even faster, significantly growing our developer ecosystem to 100,000 developers by the end of 2022, rapidly grow our solution engineering team to help customers reach success faster, expand our hardware ecosystem, and invest in new R&D making machine learning for sensor, audio, and computer vision more efficient,” he added. 

Edge Impulse competes with startups like CoCoPie, Neural Magic, NeuReality, Deci, and DeepCube, among others. FogHorn is one of its closest direct competitors, delivering a range of edge intelligence software for industrial and commercial applications. Incumbents like Microsoft, Amazon, and Google also offer services — for example, Amazon Web Services’ IoT Greengrass — targeting edge AI development through their respective cloud platforms.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



AI Weekly: Recognition of bias in AI continues to grow



This week, the Partnership on AI (PAI), a nonprofit committed to responsible AI use, released a paper addressing how technology — particularly AI — can accentuate various forms of biases. While most proposals to mitigate algorithmic discrimination require the collection of data on so-called sensitive attributes — which usually include things like race, gender, sexuality, and nationality — the coauthors of the PAI report argue that these efforts can actually cause harm to marginalized people and groups. Rather than trying to overcome historical patterns of discrimination and social inequity with more data and “clever algorithms,” they say, the value assumptions and trade-offs associated with the use of demographic data must be acknowledged.

“Harmful biases have been found in algorithmic decision-making systems in contexts such as health care, hiring, criminal justice, and education, prompting increasing social concern regarding the impact these systems are having on the wellbeing and livelihood of individuals and groups across society,” the coauthors of the report write. “Many current algorithmic fairness techniques [propose] access to data on a ‘sensitive attribute’ or ‘protected category’ (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. [But] these demographic-based algorithmic fairness techniques [remove] broader questions of governance and politics from the equation.”
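The “performance comparisons across groups” these techniques rely on usually boil down to per-group selection rates. A minimal sketch (the decisions and attribute labels here are invented for illustration):

```python
def selection_rates(decisions, groups):
    """Per-group positive-decision rates, the basic quantity behind
    demographic-parity style fairness checks."""
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(d)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring decisions labeled by a sensitive attribute
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = selection_rates(decisions, groups)
disparity = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio: {disparity:.2f}")
```

The PAI report’s point is that computing a number like this requires collecting the sensitive attribute in the first place, which is exactly the trade-off it asks practitioners to acknowledge.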

The PAI paper’s publication comes as organizations take a broader — and more critical — view of AI technologies, in light of wrongful arrests, racist recidivism scores, sexist recruitment tools, and erroneous grades perpetuated by AI. Yesterday, AI ethicist Timnit Gebru, who was controversially ejected from Google over a study examining the impacts of large language models, launched the Distributed Artificial Intelligence Research (DAIR) institute, which aims to ask questions about responsible use of AI and recruit researchers from parts of the world rarely represented in the tech industry. Last week, the United Nations’ Educational, Scientific, and Cultural Organization (UNESCO) approved a series of recommendations for AI ethics, including regular impact assessments and enforcement mechanisms to protect human rights. Meanwhile, New York University’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives are studying the impacts and applications of AI algorithms, as are Khipu, Black in AI, Data Science Africa, Masakhane, and Deep Learning Indaba.

Legislators, too, are taking a harder look at AI systems — and their potential to harm. The U.K.’s Centre for Data Ethics and Innovation (CDEI) recently recommended that public sector organizations using algorithms be mandated to publish information about how the algorithms are being applied, including the level of human oversight. The European Union has proposed regulations that would ban the use of biometric identification systems in public and prohibit AI in social credit scoring across the bloc’s 27 member states. Even China, which is engaged in several widespread, AI-powered surveillance initiatives, has tightened its oversight of the algorithms that companies use to drive their business.

Pitfalls in mitigating bias

PAI’s work cautions that efforts to mitigate bias in AI algorithms will inevitably encounter roadblocks, however, due to the nature of algorithmic decision-making. If a system optimizes for a poorly defined goal, it is likely to reproduce historical inequity — possibly under the guise of objectivity. Attempting to ignore societal differences across demographic groups will reinforce systems of oppression, because demographic data coded in datasets has an enormous impact on the representation of marginalized peoples. But deciding how to classify demographic data is an ongoing challenge, as demographic categories continue to shift and change over time.

“Collecting sensitive data consensually requires clear, specific, and limited use as well as strong security and protection following collection. Current consent practices are not meeting this standard,” the PAI report coauthors wrote. “Demographic data collection efforts can reinforce oppressive norms and the delegitimization of disenfranchised groups … Attempts to be neutral or objective often have the effect of reinforcing the status quo.”

At a time when relatively few major research papers consider the negative impacts of AI, leading ethicists are calling on practitioners to pinpoint biases early in the development process. For example, a program at Stanford — the Ethics and Society Review (ESR) — requires AI researchers to evaluate their grant proposals for any negative impacts. NeurIPS, one of the largest machine learning conferences in the world, mandates that coauthors who submit papers state the “potential broader impact of their work” on society. And in a whitepaper published by the U.S. National Institute of Standards and Technology (NIST), the coauthors advocate for “cultural effective challenge,” a practice that seeks to create an environment where developers can question steps in engineering to help identify problems.

Requiring AI practitioners to defend their techniques can incentivize new ways of thinking and help create change in approaches by organizations and industries, the NIST coauthors posit.

“An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended,” NIST scientist Reva Schwartz, a coauthor of the NIST paper, wrote. “All these factors can allow bias to go undetected … [Because] we know that bias is prevalent throughout the AI lifecycle … [not] knowing where [a] model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital … step.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer




Deliverr raises $250M to grow its ecommerce fulfillment network



While ecommerce sales are on the rise — with revenue projected to increase from $2.3 trillion in 2018 to $4.5 trillion by the end of 2021 — fulfillment remains a challenge as the pandemic snarls the supply chain. As early as July, a U.S. Census Bureau survey found that 38.8% of U.S. small businesses were experiencing domestic supplier delays. Shoppers tend not to be very understanding of disruptions, unfortunately, with 38% saying that they’ll abandon their order if the delivery is estimated to take longer than a week.

Against this backdrop, Deliverr, an ecommerce fulfillment startup headquartered in San Francisco, today announced that it raised $250 million in series E funding, bringing its total raised to over $500 million. The round, which was led by Tiger Global with participation from 8VC, Activant, GLP, Brookfield Technology Partners, and Coatue, values Deliverr at $2 billion post-money.

CEO Harish Abbott says that the proceeds will be put toward growing Deliverr’s shipping network, supporting product development, and expanding headcount.

“The most effective way to address supply chain congestion is to move inventory closer to the end customer. Deliverr is the only company working to solve this problem through stronger inventory placement, while leveraging cutting-edge machine learning and optimization technology to build a smarter fulfillment network,” Abbott said in a press release. “With this new capital, Deliverr will focus on scaling next-day fulfillment for ecommerce merchants and grow our world-class team of engineers, data scientists, and operations experts.”

AI-powered fulfillment

Deliverr was cofounded by former Symphony Commerce colleagues Abbott and Michael Krakaris in 2017. Prior to Symphony, Krakaris spent time working with product marketing teams at Twilio. Abbott was the chief product officer at Lulu.com and a senior program manager at Amazon.

Using predictive analytics and machine learning, Deliverr anticipates the demand for products based on demographics, geography, and other variables. The platform then uses the analysis to “pre-position” items close to areas of demand, stocking items across a network of over 80 warehouses, cross-docks, and sort centers.
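Conceptually, pre-positioning reduces to splitting inventory across regions in proportion to forecast demand. The sketch below is a toy illustration of that allocation step, not Deliverr’s actual system; the region names, numbers, and function name are invented.

```python
def pre_position(stock, regional_forecast):
    """Split a stock count across regions in proportion to forecast
    demand (a greatly simplified stand-in for placement logic)."""
    total = sum(regional_forecast.values())
    alloc = {r: int(stock * d / total) for r, d in regional_forecast.items()}
    # Hand any rounding remainder to the highest-demand region
    remainder = stock - sum(alloc.values())
    alloc[max(regional_forecast, key=regional_forecast.get)] += remainder
    return alloc

forecast = {"west": 500, "midwest": 200, "east": 300}
print(pre_position(1000, forecast))  # {'west': 500, 'midwest': 200, 'east': 300}
```

The real problem adds warehouse capacities, shipping costs, and demand uncertainty, which is where the machine learning comes in.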


Deliverr rents out — rather than purchases — warehouse space, using warehouses’ fulfillment departments to pick and pack ecommerce orders. The company’s software determines which products to send to which warehouses and then finds the best delivery method to ship to customers, with either two-day or next-day delivery guarantees.

Deliverr’s platform integrates with retailers’ listing tools and allows managers to explore cost previews for each SKU in their catalog. It also syncs with sales channels so that orders flow in automatically.

Growth market

One in three companies claims to have incorporated AI capabilities like those offered by Deliverr into their supply chain management processes, and one in four is working toward that goal, a study from Symphony RetailAI found. A separate report suggests that within the next two years, retailers plan to upgrade their predictive inventory planning, predictive labor planning, and robotic systems for picking and material handling.

Deliverr is a beneficiary of the tech boom. The company’s network — which Deliverr claims is within 100 miles of half of the U.S. population — is on track to power a more than $2.5 billion gross merchandise volume (GMV) run rate by the end of 2021. (For retailers, GMV refers to the average sale price per item charged to a customer multiplied by the number of items sold.) Current customers include large retailers selling on marketplaces from Shopify, Walmart, Amazon, eBay, and Target.
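The GMV definition in parentheses is simple arithmetic; with hypothetical numbers:

```python
# GMV = average sale price per item x number of items sold.
avg_price = 25.00          # hypothetical average item price
items_sold = 100_000_000   # hypothetical annual unit volume
gmv = avg_price * items_sold
print(f"${gmv / 1e9:.1f}B")  # $2.5B
```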

The explosive growth of online sales is expected to drive the ecommerce fulfillment services market to $86.44 billion in value this year, according to Grand View Research. Deliverr competes with on-demand logistics and fulfillment startup Flowspace, Bringg, ShipBob, Bond, and Shippo, among others.




Nvidia Q3 revenues grow 50% to $7.1B as it easily beats expectations



Nvidia reported revenues of $7.1 billion for its third fiscal quarter ended October 31, up 50% from a year earlier. Gaming revenue grew 42% to $3.22 billion. The numbers were above Wall Street’s expectations.

Nvidia reported non-GAAP earnings per share of $1.17 on revenues of $7.1 billion, up 60% from a year earlier and up from EPS of $1.04 on revenue of $6.5 billion in the previous quarter.

The Santa Clara, California-based company makes graphics processing units (GPUs) that can be used for games, AI, and datacenter computing. While many businesses have been hit hard by the pandemic, Nvidia has seen a boost in those areas. The company saw record revenue in its gaming, datacenter, and professional visualization platforms.

GAAP earnings per diluted share for the quarter were 97 cents, up 83% from a year ago. In after-hours trading, Nvidia’s stock is trading at $303.50 a share, up 2.5%.


Analysts expected Nvidia to report earnings for the October quarter of $1.11 a share on revenues of $6.83 billion.

“The third quarter was outstanding, with record revenue,” said Nvidia CEO Jensen Huang in a statement. “Demand for Nvidia AI is surging, driven by hyperscale and cloud scale-out, and broadening adoption by more than 25,000 companies. Nvidia RTX has reinvented computer graphics with ray tracing and AI, and is the ideal upgrade for the large, growing market of gamers and creators, as well as designers and professionals building home workstations.”

He added, “Our GTC event series showcases the expanding universe of Nvidia accelerated computing. Last week’s event was our most successful yet, highlighting diverse applications, including supply-chain logistics, cybersecurity, natural language processing, quantum computing research, robotics, self-driving cars, climate science, and digital biology.”

And he said, “Omniverse [a metaverse for engineers to simulate things] was a major theme at GTC. We showed what is possible when we can jump into virtual worlds. Omniverse will be used from collaborative design, customer service avatars, and video conferencing, to digital twins of factories, processing plants, even entire cities. Omniverse brings together Nvidia’s expertise in AI, simulation, graphics, and computing infrastructure. This is the tip of the iceberg of what’s to come.”

Huang will receive the chip industry’s highest honor, the Robert N. Noyce Award, at the Semiconductor Industry Association (SIA) annual awards dinner on November 18. The award is named after Intel cofounder Robert Noyce, who is credited with numerous pioneering achievements at the dawn of the chip industry.

Nvidia has seen a boom in both gaming and datacenter revenues as users go online during the pandemic. Gamers have been snatching up graphics cards to play PC games, but a shortage of semiconductors has hurt companies like Nvidia.

Nvidia is still waiting on regulatory approval for its $40 billion acquisition of Arm.

Nvidia touted its Nvidia GTC event last week, where I moderated a session on a vision for the metaverse. Nvidia said it launched or updated 65 software development kits for AI products and showed off new updates to Omniverse, its metaverse for engineers. Nvidia said cryptocurrency mining revenue was $105 million.

Datacenter

Above: AI algorithms were developed on Nvidia DGX servers at a U.S. Postal Service Engineering facility. (Image Credit: Nvidia)

Datacenter revenues hit $2.94 billion, up 55% from a year earlier. Nvidia launched a variety of products in the quarter, and it announced its plans to build Earth-2, an AI supercomputer for tackling the climate change crisis.

Gaming

Above: Nvidia has new RTX GPUs for work laptops. (Image Credit: Nvidia)

As noted, gaming revenue was $3.22 billion, up 42% from a year earlier and up 5% from the previous quarter. It added RTX to games such as Marvel’s Guardians of the Galaxy, Battlefield 2042, and Dying Light 2. More than 200 games now support RTX, as well as 125 that support DLSS.

Professional visualization

Above: Nvidia has eight new RTX GPU cards. (Image Credit: Nvidia)

Professional visualization generated revenues of $577 million, up 144% from a year earlier and up 11% from the previous quarter.

Automotive

Third-quarter automotive revenue was $135 million, up 8% from a year earlier and down 11% from the previous quarter.

Outlook

For the fourth quarter ending January 31, analysts expect earnings to be $1.09 a share on revenue of $6.86 billion. Nvidia expects to make gross margins of 65.3% and revenues of $7.4 billion.

GamesBeat

GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.

How will you do that? Membership includes access to:

  • Newsletters, such as DeanBeat
  • The wonderful, educational, and fun speakers at our events
  • Networking opportunities
  • Special members-only interviews, chats, and “open office” events with GamesBeat staff
  • Chatting with community members, GamesBeat staff, and other guests in our Discord
  • And maybe even a fun prize or two
  • Introductions to like-minded parties

Become a member



Legal analytics platform Trellis raises $14.1M to grow API offerings

Los Angeles, California-based Trellis, an algorithm-powered research platform for litigators, today announced that it raised $14.1 million in a series A round led by Headline Ventures, with participation from Calibrate and individual investors. The company, which has raised $20 million in capital to date, plans to spend the money on expanding Trellis’ headcount and further developing its products, as well as building new APIs.

“Data is essential for legal teams, but also for better executive decision making across industries riddled with risk or litigation exposure. We’re not in the dark anymore,” CEO Nicole Clark, who cofounded Trellis with Alon Schwartz, said in a statement. “My practice radically changed when I started using data and analytics to plan my litigation strategy. The value of the data on my practice and case outcomes was abundantly clear.”

Clark was an employment and labor law litigator, often appearing in state trial courts across the country. While working in the court system, she found it difficult to access trial court data, which was stored in thousands of individual county court databases throughout the U.S. Clark had a developer friend create a tool to pull public data from a few of the courts she appeared in most often, which became the seed of the idea for Trellis. She left her law practice in 2018 to build the tool into a business.

“Despite the fact that the judicial forum data belongs to the public, in order to find out any information on a case, you need to go to the trial court and know the case number to look up — which means you need to know a case exists to find out any information,” Clark told VentureBeat via email. “Trellis takes away those limitations so you can surface information by party, judge, legal issue — or any combination of those.”

Surfacing trial data

Clark says that the data Trellis accesses and aggregates is entirely unstructured, hosted, and maintained separately by 3,000 different counties “with no uniformity of integration.” Trellis acquires the data, structures it, normalizes it, and parses out hearing, judge, motion, and party information across hundreds of millions of trial cases.

“Because this data is unstructured and only became digitized in the last 5 to 10 years, there was no training data for [our] algorithms. So we had to do a massive classification effort across hundreds of different data points — which was different county by county — to create the training set to be able to run machine learning in the first place,” Clark said. “The types of analytics and variations on insights is still in its infancy in terms of what is possible and what Trellis plans to accomplish as it continues to grow.”
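
The bootstrapping problem Clark describes, having no labeled data to train on, is commonly attacked with rule-based weak labeling: hand-written rules generate noisy labels that seed a first training set. A minimal sketch of that general idea (the rules, labels, and docket strings below are hypothetical, not Trellis' actual pipeline):

```python
import re

# Hypothetical labeling rules: each maps a regex over a raw docket entry
# to a coarse record type. A real system would need rules per county.
LABEL_RULES = [
    (re.compile(r"\bmotion\b", re.I), "motion"),
    (re.compile(r"\bhearing\b", re.I), "hearing"),
    (re.compile(r"\bhon\.|\bjudge\b", re.I), "judge"),
    (re.compile(r"\bplaintiff\b|\bdefendant\b", re.I), "party"),
]

def weak_label(entry: str):
    """Return the first matching coarse label, or None if no rule fires."""
    for pattern, label in LABEL_RULES:
        if pattern.search(entry):
            return label
    return None

# Entries that match a rule become (noisy) training examples; the rest
# are left for manual review or for a trained classifier later.
raw = [
    "Motion to Compel Discovery filed",
    "Case Management Hearing set for 01/12",
    "Assigned to Hon. J. Smith",
    "Unrecognized docket text",
]
training_set = [(e, weak_label(e)) for e in raw if weak_label(e)]
print(training_set)
```

The weakly labeled examples are then used to train a model that generalizes beyond the hand-written rules.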

Currently, Trellis provides metrics for a judge’s time to decision, current and historical case calendars, law firm venues and active cases, which law firms represent which companies, and more. As an example, Trellis shows enterprises which lawyers and law firms perform best based on up-to-date trial court performance data. For attorneys and legal practices, Trellis shows individual litigators’ experience before specific judges, how long they take to get cases to settlement or trial, and verdict ranges by case type.

In every data point, Trellis also reveals the state and county average for context on where and when a judge is an outlier. In addition, Trellis provides search tools, visualizations, and a “brief bank” for motion modeling and drafting for the over 90 million documents filed by attorneys in cases around the U.S.
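
At its simplest, flagging a judge as an outlier against the state average is a deviation test. A toy sketch under that assumption (the judge names, figures, and 1.5-standard-deviation cutoff are all illustrative, not Trellis' methodology):

```python
from statistics import mean, stdev

# Hypothetical average days-to-ruling per judge in one state.
days_to_ruling = {
    "Judge A": 40, "Judge B": 38, "Judge C": 45,
    "Judge D": 42, "Judge E": 41, "Judge F": 120,
}

state_avg = mean(days_to_ruling.values())
state_sd = stdev(days_to_ruling.values())

def is_outlier(judge: str, threshold: float = 1.5) -> bool:
    """Flag a judge whose metric sits more than `threshold`
    standard deviations from the state average."""
    return abs(days_to_ruling[judge] - state_avg) > threshold * state_sd

outliers = [j for j in days_to_ruling if is_outlier(j)]
print(outliers)  # only Judge F sits far above the state average
```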

Above: Trellis’ analytics dashboard.

Image Credit: Trellis

Trellis currently covers 362 trial courts in states like California, Florida, Illinois, and Texas. Additional coverage is planned to launch this quarter for New Jersey, Ohio, and Connecticut. Also, “whole nation” coverage is expected to come by early 2023. Every county covered includes 15 years of historical trial case data, verdict analysis, and searchable filed documents.

Clark says Trellis’ near-term focus is adding functionality that lets customers in insurance, corporations, hedge funds, and law firms digest the data in specific ways, for example by creating their own machine learning models. The 22-employee company isn’t currently profitable, despite $2 million in annual recurring revenue from 1,600 customers and an 80% gross margin profile, and it competes against incumbents like Lexis, Westlaw, and Gavelytics. But Clark believes that Trellis’ technical capabilities give it an advantage in the nearly $1 billion legal analytics market.

“Lexis [and] Westlaw … focus on court of appeals data. Trellis focuses strictly on the state trial court system. The majority of cases — 99% — don’t make it to trial, let alone making it to trial, and then making it to appeal, and then getting the case published on appeal,” Clark said. “Gavelytics only offers judge analytics. Trellis is a ‘Google for the entire state trial court system,’ with judge analytics as one feature of the data.”

By the end of the year, Trellis plans to expand its workforce to over 30 employees.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

Suspected Cannabis Grow House Turns Out to Be a Bitcoin Farm

During a May 18 raid in the U.K., West Midlands Police expected to find a cannabis growing operation after learning about a site stealing electricity on the Great Bridge Industrial Estate. Instead, they found about 100 computers mining cryptocurrency.

The BBC reports that detectives received a tip prior to the raid. Onlookers said that they saw multiple people visiting the site throughout the day, and police drones picked up a lot of heat coming from the building — typically a sign of a growing operation. Western Power Distribution also found that the site was illegally connected to its grid, stealing “thousands of pounds [worth] of electricity.”

“It had all the hallmarks of a cannabis cultivation setup, and I believe it is only the second such crypto mine we have encountered in the West Midlands,” Sgt. Jennifer Griffin said. The authorities seized the computers but didn’t make any arrests.

Crypto-mining operations are becoming a problem not only in the U.K., but around the world. In 2018, New York allowed some power providers to start charging higher fees to crypto miners. In March 2021, New York introduced a bill that would ban mining for three years while the state evaluated the environmental impact.

These measures come in response to the growing climate problem surrounding mining coins like Ethereum. Some scientists say that increased demand for cryptocurrency has already negated the effects of using electric vehicles.

Recent reports show some mining operations bringing in over $20,000 per month, and Nvidia, whose GPUs are frequently at the heart of these mining operations, earns upwards of $400 million each year from crypto miners.

It’s not just private operations, either. One of Russia’s largest oil producers set up a Bitcoin mining farm in Siberia last year that’s entirely powered by gas.

With the price of multiple cryptocurrencies hitting all-time highs, we’re still not sure if demand — in terms of power and coins — will continue to grow. With volatility in currencies like Bitcoin, some investors think the market is showing a repeat of 2014 and 2018, where high volatility caused mass selloffs and devalued the coin.

Regardless, it looks like cannabis farms aren’t the only thing police have to worry about when it comes to stolen electricity.

Nvidia quarterly revenues grow 84% to $5.66 billion

Elevate your enterprise data technology and strategy at Transform 2021.


Nvidia reported revenues of $5.66 billion for its first fiscal quarter ended May 2, up 84% from a year earlier. Gaming revenue grew 106% to $2.76 billion. The numbers were above Wall Street’s expectations.

A year ago, Nvidia reported non-GAAP earnings per share of $1.89 on revenues of $3.1 billion. The Santa Clara, California-based company makes graphics processing units (GPUs) that can be used for games, AI, and datacenter computing. While many businesses have been hit hard by the pandemic, Nvidia has seen a boost in those areas. The company saw record revenue in its gaming, datacenter, and professional visualization platforms.

GAAP earnings per diluted share for the quarter were a record $3.03, up 106% from a year ago and up 31% from the previous quarter. Non-GAAP earnings per diluted share were $3.66, up 103% from a year earlier and up 18% from the previous quarter. In after-hours trading, Nvidia’s stock was down slightly at $626.10 a share.

Analysts expected Nvidia to report earnings per share of $3.31 on revenues of $5.41 billion, while the company had guided for revenues of $5.3 billion. Gaming was expected to bring in $2.69 billion in revenue for the quarter, and datacenter revenues were expected to be $2 billion. As mentioned, gaming came in at $2.76 billion, while datacenter revenues were $2.05 billion, up 79%.

Jensen Huang, CEO of Nvidia, said in a statement that it was a fantastic quarter with strong demand for products.

“Our datacenter business continues to expand, as the world’s industries take up Nvidia AI to process computer vision, conversational AI, natural language understanding and recommender systems,” Huang said. “Nvidia RTX has reinvented computer graphics and is driving upgrades across the gaming and design markets. Our partners are launching the largest-ever wave of Nvidia-powered laptops. Across industries, the adoption of Nvidia computing platforms is accelerating.”

Above: The new Predator Triton 300 uses Nvidia GeForce RTX 3050 family GPUs.

Image Credit: Nvidia

Nvidia has seen a boom in both gaming and datacenter revenues as users go online during the pandemic. Gamers have been snatching up graphics cards to play PC games, but a shortage of semiconductors has hurt companies like Nvidia, and cryptocurrency miners are also buying the graphics cards in competition with the gamers.

Nvidia itself predicted in February that revenue would be $5.3 billion for the first fiscal quarter, but then on April 12 it said it expected it would exceed that number. It said that it expected its cryptocurrency division to hit $150 million in the quarter, up from the previously estimated $50 million.

Nvidia had also said that the first fiscal quarter’s GAAP gross margins would be 63.8% and non-GAAP gross margins would be 66%. The GAAP margin came in at 64.1%, and non-GAAP came in at 66.2%.
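
Margin beats of this size are usually quoted in basis points, where 1 basis point equals 0.01 percentage points. A quick check of the figures above:

```python
def beat_in_bps(actual_pct: float, guided_pct: float) -> float:
    """Gap between actual and guided margin, expressed in basis points.
    One basis point equals 0.01 percentage points."""
    return round((actual_pct - guided_pct) * 100, 1)

# Figures from the quarter described above.
print(beat_in_bps(64.1, 63.8))  # GAAP gross margin beat: 30.0 bps
print(beat_in_bps(66.2, 66.0))  # non-GAAP gross margin beat: 20.0 bps
```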

Last year, Nvidia completed its $7 billion acquisition of Mellanox, which makes key technologies for connecting chips in datacenters. Mellanox revenue is included in the CPU and networking segment. But Nvidia is still waiting on regulatory approval for its $40 billion acquisition of Arm.

“Mellanox, one year in, has exceeded our expectations and transformed Nvidia into a data-center-scale computing company,” Huang said. “We continue to make headway with our planned acquisition of Arm, which will accelerate innovation and growth for the Arm ecosystem. From gaming, cloud computing, AI, robotics, self-driving cars, to genomics and computational biology, Nvidia continues to do impactful work to invent a better future.”

Datacenter

Above: AI algorithms were developed on NVIDIA DGX servers at a U.S. Postal Service Engineering facility.

Image Credit: Nvidia

Datacenter revenues were $2.05 billion, up 79% from a year earlier and up 8% from the previous quarter. During the quarter, Nvidia held its GPU Technology conference with more than 200,000 registrations from 195 countries and 14 million views for its opening keynote. It introduced Nvidia Grace, an Arm-based datacenter central processing unit (CPU).

Nvidia said growth came from its Mellanox networking products and demand for Ampere-based GPUs for datacenters. Collette Kress, chief financial officer, said that Nvidia was extremely pleased with the performance of Mellanox in its first full year after Nvidia acquired it. In a call with analysts, Huang said, “We are seeing strength across the board in datacenters.”

Gaming

Above: Nvidia has new RTX GPUs for work laptops.

Image Credit: Nvidia

The first fiscal quarter saw record gaming revenue of $2.76 billion, up 106% from a year earlier and up 11% from the previous quarter. Nvidia launched second-generation RTX graphics, including its 3060 and 3060 Ti graphics cards, during the quarter. Those chips went into laptops, and 3050 GPUs also debuted for low-cost laptops. Nvidia’s RTX platform is now supported by more than 60 games.

To prevent the cards from being scooped up only by crypto enthusiasts, Nvidia reduced the Ethereum hash rate on newly manufactured RTX 3080, 3070 and 3060 Ti graphics cards — which carry a “Lite Hash Rate,” or “LHR,” identifier — in addition to previous steps to lower the RTX 3060’s hash rate. And GeForce Now has 10 million members in 70 countries, with 1,000 games in the library of the cloud gaming service. In a statement, Nvidia said, “We believe gaming also benefited from cryptocurrency mining demand, although it is hard to determine to what extent.”

In original equipment manufacturer (OEM) and Other, Nvidia said revenue was up 137% from a year ago and up 114% sequentially, primarily reflecting the addition of cryptocurrency mining processors (CMP), which generated revenue of $155 million.

Professional visualization

Above: Nvidia has eight new RTX GPU cards.

Image Credit: Nvidia

Professional Visualization saw record revenues of $372 million, up 21% both from a year earlier and the previous quarter. During the quarter, Nvidia launched Omniverse Enterprise, a metaverse for engineers, in collaboration with companies like BMW Group, Foster+Partners, and WPP.

Automotive

First-quarter automotive revenue was $154 million, down 1% from a year earlier and up 6% from the previous quarter. During the quarter, Nvidia unveiled Nvidia Drive Atlan, an AI processor for autonomous vehicles.

On May 21, 2021, the company’s board of directors declared a four-for-one split of Nvidia’s common stock payable in the form of a stock dividend, with the additional shares expected to be distributed on July 19, 2021. The stock dividend is conditioned on obtaining stockholder approval.

Outlook

Revenue for the second fiscal quarter, which closes at the end of July, is expected to be $6.30 billion. GAAP and non-GAAP gross margins are expected to be 64.6% and 66.5%, respectively, plus or minus 50 basis points.

Artificial intelligence research continues to grow as China overtakes US in AI journal citations

The artificial intelligence boom isn’t slowing yet, with new figures showing a 34.5 percent increase in the publication of AI research from 2019 to 2020. That’s a higher percentage growth than 2018 to 2019 when the volume of publications increased by 19.6 percent.

China continues to be a growing force in AI R&D, overtaking the US for overall journal citations in artificial intelligence research last year. The country already publishes more AI papers than any other country, but the United States still has more cited papers at AI conferences — one indicator of the novelty and significance of the underlying research.

These figures come from the fourth annual AI Index, a collection of statistics, benchmarks, and milestones meant to gauge global progress in artificial intelligence. The report is collated with the help of Stanford University, and you can read all 222 pages here.

In many ways, the report confirms trends identified in past years: the sheer volume of AI research is growing across a number of metrics, China continues to be increasingly influential, and investors are pumping yet more money into AI firms.

However, the details reveal subtleties about the AI scene. For example, while private investment in AI increased 9.3 percent in 2020 (up from 5.7 percent growth between 2018 and 2019), the number of newly funded AI companies decreased for the third year in a row. There are several ways to interpret this, but it suggests that investors expect the winner-takes-all dynamic that has defined the tech industry — in which digital economies of scale tend to reward a few dominant players — to be replicated in the AI world.

The report’s section on technical advances also confirms the major trends in AI capabilities, the biggest of which is the industrialization of computer vision. This field has seen incredible progress during the AI boom, with services like object and facial recognition now commonplace. Similarly, generative technologies, which can create video, images, and audio, continue to increase in quality and availability. As the report notes, this trend “promises to generate a tremendous range of downstream applications of AI for both socially useful and less useful purposes.” Useful applications include cheaper computer-generated media, while malicious outcomes include misinformation and AI revenge porn.

One area of AI research that seems like it’s just beginning to come into its own is biotech. The drug discovery and design sector received the most private investment of any sector in 2020 ($13.8 billion, 4.5 times more than in 2019), and experts canvassed for the AI Index report cited DeepMind’s AlphaFold program, which uses machine learning to predict how proteins fold, as one of the most significant breakthroughs in AI in 2020. (The other frequently cited breakthrough last year was OpenAI’s text-generation program GPT-3.)

One area where the AI Index report struggles to gauge progress, though, is ethics. This is a wide-ranging area, spanning everything from the politics of facial recognition to algorithmic bias, and discussion of these topics is increasingly prominent. In 2020, stories like Google’s firing of researcher Timnit Gebru and IBM’s exit from the facial recognition business drove discussions of how AI technology should be applied. But while companies are happy to pay lip service to ethical principles, the report notes that most of these “commitments” are non-binding and lack institutional frameworks. As has been noted in the past: for many companies, AI ethics is simply a way to slow-roll criticism.

CluedIn raises $15M to grow its data prep and analytics platform

Join Transform 2021 this July 12-16. Register for the AI event of the year.


Data management startup CluedIn today announced the closure of a $15 million series A funding round led by Dawn Capital, which brings its total raised to over $16 million. The company says that the proceeds will be used to build out its platform, expand its sales and marketing team, and drive partnerships and expansion, particularly in the U.S.

Most enterprises have to wrangle countless data buckets, some of which inevitably become underused. A Forrester survey found that between 60% and 73% of all data within corporations is never analyzed for insights or larger trends. The opportunity cost of this unused data is substantial, with a Veritas report pegging it at $3.3 trillion by 2020. That’s perhaps why the corporate sector has taken an interest in solutions that ingest, understand, organize, and act on digital content from multiple digital sources. Gartner says that data integration and preparation are among the top three technologies organizations seek to automate by the end of 2022.

Copenhagen-based CluedIn, which was founded in 2015 by Tim Ward and Martin Hyldahl, aims to streamline the process of making data ready for insights. The company leverages a graph database that sits between data sources and applications, offering solutions for data integration, governance, and management.

CluedIn

CluedIn integrates with existing systems and delivers a view of what data needs fixing at the global, source, entity, and property level. Data arbitration is built into the platform — CluedIn automatically reconciles records like addresses, company names, and more via reconfigurable rules. CluedIn’s data prep engine also removes duplicate entries from databases, leveraging AI and machine learning for metadata management.
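
Rule-based reconciliation of this kind typically canonicalizes each record before grouping. A minimal sketch of the idea (the normalization rules and sample names below are illustrative, not CluedIn's actual engine):

```python
import re

def normalize_company(name: str) -> str:
    """Reduce a company name to a canonical key: lowercase, strip
    punctuation, and drop common legal suffixes. The rules here are
    illustrative; a real arbitration engine makes them configurable."""
    key = name.lower()
    key = re.sub(r"[.,']", "", key)
    key = re.sub(r"\b(incorporated|inc|ltd|llc|corporation|corp|co)\b", "", key)
    return re.sub(r"\s+", " ", key).strip()

def dedupe(records):
    """Group raw records under their canonical key; any group with
    more than one entry is a duplicate candidate for reconciliation."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize_company(rec), []).append(rec)
    return groups

raw = ["Acme Corp.", "ACME Corporation", "acme corp", "Globex LLC"]
groups = dedupe(raw)
print(groups)  # the three Acme variants collapse under one key
```

The same pattern extends to addresses or person names by swapping in different normalization rules, which is why making the rules reconfigurable matters.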

Data prep benefits

According to a Forbes survey, data scientists spend 80% of their time on data preparation, and 76% view it as the least enjoyable part of their work. It’s also expensive: Trifacta pegs the collective data prep cost for organizations at $450 billion.

Beyond time and cost savings, CluedIn says that its platform, which can share data with third-party applications, enables enterprises to meet core regulatory and compliance requirements. For example, CluedIn can track data lineage and mask any personally identifiable information that it encounters. It also features templates for retention policies designed to help customers align with business- and government-level compliance rules.

Thirty-employee CluedIn has customers across a range of verticals, including Nordea, SAP, and Ticketmaster. For Pfizer and Coca-Cola, it provides customer insights analytics, according to Ward.

“It’s been a fantastic journey so far. We’re delighted to have gained the recognition of leading industry authorities and partners, and the endorsement of some of the most forward-thinking global customers,” Ward said in a press release. “We can’t wait to use these funds to take CluedIn to the next level and to more customers globally who are looking for ways to unlock the value of first and third party data.”

In addition to Dawn Capital, Collibra cofounder Stijn Christiaens and existing investor Nordic Makers participated in CluedIn’s latest funding round.
