
5 tips for improving your data science workflow

The biggest wastes in data science and machine learning don’t stem from inefficient code, random bugs, or incorrect analysis. They stem from flaws in planning and communication. Execution mistakes can cost a day or two to fix, but planning mistakes can take weeks to months to set right. Here are five ways you can avoid making those mistakes in the first place:

1. Set the right objective (function)

Mathematician and data analysis pioneer John Tukey said that “an approximate answer to the right question is better than an exact answer to the wrong question.” Machine learning solutions work by optimizing toward an objective function — a mathematical formula that describes some value. One of the most basic examples is a profit function: Profit = Revenue – Costs.
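
To make the idea concrete, here is a minimal sketch of a profit objective in Python. The function names and numbers are hypothetical, and a real system would search over the decision variable rather than evaluate a single point:

    # A minimal sketch of a profit objective function (hypothetical numbers).
    # An optimizer would search over decisions (here, a price) to maximize it.

    def profit(revenue: float, costs: float) -> float:
        """Profit = Revenue - Costs, the value being maximized."""
        return revenue - costs

    def objective(price: float, units_sold: int, unit_cost: float) -> float:
        """The same profit objective expressed in terms of a decision variable."""
        revenue = price * units_sold
        costs = unit_cost * units_sold
        return profit(revenue, costs)

    # Evaluate the objective for one candidate decision.
    print(objective(price=25.0, units_sold=1_000, unit_cost=18.0))  # 7000.0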

While machine learning algorithms excel at finding optimal solutions, they can’t tell you whether you’re maximizing the right thing at the right time. Periodically make sure that your objective function reflects your current priorities and values. For example, an early-stage company may not be as worried about profitability; instead, it may want to maximize revenue to grow market share. A company preparing to IPO may want to demonstrate profitability, so it may focus on minimizing costs while maintaining the same market share. Capturing only the currently important metric (revenue) at specific points in time (quarterly) will hinder your ability to construct new objective functions (profitability) at different times.

Along those lines, data scientists can also fall into the trap of optimizing model metrics rather than business metrics. For example, data scientists may use the area under a precision-recall curve or a receiver operating characteristic curve to evaluate overall model performance, but those curves don’t necessarily translate to business success. Instead, an objective like “minimize false positives while maintaining a total false negative rate of X%” can be specific to your current business conditions and can be used to weigh the respective costs of false positives and false negatives. Capturing event-level data before it is aggregated, and periodically re-examining the problem you’re trying to solve, will keep you moving in the right direction instead of optimizing for the wrong problem.
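
As a sketch of how such an objective can be operationalized, the snippet below picks the decision threshold that minimizes false positives subject to a cap on the false negative rate. It assumes scikit-learn; the toy labels, scores, and the threshold-picking rule are illustrative, not a prescribed recipe:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    def pick_threshold(y_true, scores, max_fnr=0.05):
        """Return the highest score threshold whose false negative rate
        (1 - recall) stays at or below max_fnr. Raising the threshold
        shrinks the set of predicted positives, minimizing false positives."""
        _, recall, thresholds = precision_recall_curve(y_true, scores)
        recall = recall[:-1]  # align: recall[i] corresponds to thresholds[i]
        ok = recall >= (1.0 - max_fnr)  # thresholds meeting the FNR constraint
        if not ok.any():
            raise ValueError("no threshold satisfies the false negative cap")
        return thresholds[ok].max()  # strictest threshold that still qualifies

    # Toy labels and model scores, purely illustrative.
    y = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    s = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5])
    t = pick_threshold(y, s, max_fnr=0.25)
    print(t, (s >= t).astype(int))  # 0.7 [0 0 0 1 1 0 1 0]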

2. Get on the same page

To your business stakeholders, there’s a huge difference between “We saw a 100 point increase in accuracy in the test set of 100,000 examples” and “If we had these improvements in place, we would have saved $20,000 in the last business quarter.” “100,000 examples” and “100 point increase” are hard to visualize, whereas “$20,000” and “last business quarter” are much easier for business stakeholders to grasp. Standardize your units of analysis so that your team and business leaders spend less time translating and more time ideating.

The points in time that are critical can also differ by business stakeholder. A sales or customer success practitioner may need weekly, monthly, or event-based measures (e.g., first subscription, renewal, or support request events), while a revenue leader may need models per business segment, sales rep, or product line on a quarterly or yearly basis. Collect data at the event level to support these various computation windows as they arise.
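
As a sketch of what that looks like in practice, the pandas snippet below (with hypothetical events and column names) rolls the same event-level records up to weekly, quarterly, and event-based views:

    import pandas as pd

    # Hypothetical event-level records: one row per raw event, nothing pre-aggregated.
    events = pd.DataFrame({
        "account": ["a1", "a1", "a2", "a2", "a1"],
        "event": ["first_subscription", "support_request", "first_subscription",
                  "renewal", "renewal"],
        "amount": [100.0, 0.0, 250.0, 250.0, 120.0],
        "ts": pd.to_datetime(["2021-01-04", "2021-01-20", "2021-02-02",
                              "2021-03-01", "2021-03-15"]),
    })

    # The same raw events roll up to whatever grain each stakeholder needs.
    weekly = events.set_index("ts").groupby("account").resample("W")["amount"].sum()
    quarterly = events.set_index("ts").groupby("account").resample("QS")["amount"].sum()
    by_event_type = events.groupby(["account", "event"])["amount"].sum()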

We’ve been on teams where train and test sets were chosen at the whims of the individual data scientist. Our analyses weren’t comparable to one another, and the model metrics we used were incomprehensible to stakeholders. Once we standardized on business metrics and on time windows meaningful to the business (e.g., all deals from last quarter, subscription activity in the last month), it became easier to compare models internally and externally, and easier to present impactful business cases for using our models.

3. Allow room for discovery

Data science is an inherently creative endeavor; advances in models often come from unexpected places. The biggest breakthroughs come from exploring new avenues and new opportunities. One of the beautiful things about data science is that it borrows ideas and methods from a broad array of scientific disciplines. Algorithms developed for genetics are used to analyze literature, and methods for analyzing literature can be adapted to make romantic matches on a dating app or to recommend a vacation.

Advances in solutions often come from looking at the same problem from a different angle or frame of reference. For example, some of the first ad-targeting models didn’t take demographic information into account. Data scientists have long understood that including demographic data may help ads reach the right person or measure unintended bias. Then, when the frame of psychology was introduced, data scientists began looking at the problem from a psychographic angle: can demographics plus demonstrated interest improve results? For example, data about what someone shares on social media can hint at what they are likely to buy. More recently, near-real-time, event-based behavioral data has entered the space, bringing both new information and timing into the picture. Making several very small gas station purchases and then a very large TV purchase minutes later may signal a stolen credit card.

While you don’t want to spend all your time running down rabbit holes and chasing wild geese, setting aside time to try new and creative solutions or explore different angles pays off in the long run in new capabilities, better models, and faster time to results. Whether it’s reserving time every week to chase new leads or building exploration tasks into your workflow, letting your scientists find new solutions and perspectives for the problems at hand yields happier scientists and better long-term results.

4. Talk to your consumer

If you build a model without understanding your end user and the problems they’re trying to solve, your model will be missing vital context. Business leaders tend to view things from 50,000 feet, whereas your models are often deployed at ground level with sales reps. Conditions on the ground never fully match the view from above, so if you only account for what you can see at that higher level, you’ll miss vital information. We’ve spent months building models for business leaders, only to discover that the system we built to make life easier made things more difficult for the sales reps. We saved the company money, but we could have had a much bigger, faster impact if we had built systems more closely aligned with our end users.

There are countless little contextual things that your users take for granted, and without speaking to your customers and working to understand them, you’ll miss this critical context. Talking to your users ensures that your models solve their actual needs. For example, a sales rep may be assigned to a territory and product line and expect the model they’re given to reflect that nuance, while a revenue leader looks across all reps to forecast the business. The features that make a model predictive at a global level will not be the same as those at a more granular level. In addition, a revenue leader cares most about accurate forecasting at the start of a quarter or month, while a sales rep cares about when and how they can increase their success on a specific account. This context implies that you should build at least three different models, with features computed at different points in time, to increase accuracy and prevent leakage.
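
As a hedged sketch of that point-in-time idea (again in pandas, with hypothetical event data and feature names of our own choosing): compute training features only from events visible at a cutoff date, so the label window stays unseen:

    import pandas as pd

    # Hypothetical event-level data: account, amount, and event timestamp.
    events = pd.DataFrame({
        "account": ["a1", "a2", "a1", "a2"],
        "amount": [100.0, 250.0, 120.0, 250.0],
        "ts": pd.to_datetime(["2020-11-02", "2020-12-15", "2021-01-10", "2021-02-01"]),
    })

    def features_as_of(events: pd.DataFrame, cutoff: str) -> pd.DataFrame:
        """Build per-account features using only events visible at `cutoff`.
        Filtering first is what prevents leakage: nothing dated after the
        cutoff can influence features used to predict later outcomes."""
        past = events[events["ts"] <= pd.Timestamp(cutoff)]
        return past.groupby("account").agg(
            n_events=("ts", "size"),
            total_amount=("amount", "sum"),
            last_seen=("ts", "max"),
        )

    # Features frozen at year end; the following quarter supplies only labels.
    train_features = features_as_of(events, "2020-12-31")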

5. Optimal solutions tend to be suboptimal

Highly optimized solutions cost more to implement, cost more to maintain, and tend to be less flexible. Build simpler solutions whenever possible; just because something is theoretically better doesn’t mean it’s practically better. We were working on a simple prediction-logging database to debug and replicate production predictions. At first, we wanted a fancy serverless AWS Athena setup that wouldn’t require a constantly running database machine. We spent a day digging into Athena trying to get it set up before realizing that we had already spent more in payroll than a persistent cloud machine would cost to run for two years.

This ties in with “setting the right objective.” Optimized solutions are only optimal if your objective function is 100% correct and isn’t likely to change. When it does change, your highly optimized solution is likely optimized in the wrong direction (such as a model tuned to maximize revenue and market share when the business needs to shift toward profitability). A solution that is slightly less optimized but more flexible, understandable, and adaptable will serve you better in the long run as priorities shift and you better understand the costs of the problem space.

You’ll notice that many of these tips work together. To set the right objective function, you’ll want to talk to your consumer and get on the same page with your stakeholders. The ability to pivot your objective function to meet changing demands comes from building something flexible rather than a hyper-optimized solution to the local problem. And of course, allowing room for discovery enables the exploration of new optima and problem spaces. Your business and modeling problems will change over time; set yourself up for success not just today but into the future. These changes won’t save you five or 10 minutes here and there; they’ll save you weeks of effort by minimizing time spent building the wrong solutions.

Max Boyd is Senior Data Scientist at Tomo.

Charna Parkey is VP of Product at Kaskada.

Tonkean raises $50M to expand its workflow automation platform

Tonkean, a software startup developing a no-code workflow automation platform, today announced that it nabbed $50 million in a series B round led by Accel with participation from Lightspeed Ventures and Foundation Capital. CEO Sagi Eliyahu says that the proceeds will be put toward scaling up the company’s hiring efforts across engineering and go-to-market teams.

San Francisco, California-based Tonkean was founded in 2015 by Eliyahu and Offir Talmor, who met at age 18 in the Israel Defense Forces (IDF), where they spent four years working on software technologies and challenges. Before founding Tonkean, Eliyahu was the VP of engineering at Jive Software, but many of Tonkean’s early R&D hires in Israel came from Eliyahu’s and Talmor’s IDF unit.

Eliyahu argues that the value proposition of Tonkean’s platform is twofold. It gives businesses and teams within those businesses the ability to tailor workflows to systems, employees, and processes. At the same time, it solves challenges in a way that doesn’t require many customizations.

“As [Jive] scaled, [we] encountered problems that large businesses often see as inevitable: a tech stack that balloons to include hundreds if not thousands of applications and inefficiency that ran rampant throughout the organization,” Eliyahu told VentureBeat via email. “Tonkean was built to solve the fundamental challenges of enterprise software to allow department and operational experts to actually deliver software with the flexibility to streamline business processes without introducing yet more apps.”

Workflow automation

Tonkean’s workflow designer features adaptive modules that can be added or removed in drag-and-drop fashion. Customers can use it to proactively reach out to and follow up with people via email, Slack, or Microsoft Teams to deliver data and actions, or to keep track of and manage performance across processes, people, and systems. They can also automate manual steps such as triaging finance requests, routing items to team members, and chasing status updates, or dive into live details of individual jobs and see aggregate views of metrics and KPIs like turnaround time, turnover rate, and cycle times for tasks.

“In many cases, Tonkean is reducing the need for internal custom development by IT and business technology teams or the need to purchase multiple packaged solutions to support needs from various business units,” Eliyahu said. “Tonkean operates at the cross-section of automation platforms like robotic process automation, integration platform as a service, and business process automation, often replacing but also often extending the value of these platforms by allowing enterprises to orchestrate more complex, human-centric processes and reducing the technical skill sets needed to leverage capabilities provided by technology platforms.”

Above: A screenshot of Tonkean’s workflow automation platform. (Image credit: Tonkean)

Tonkean says it already has “a few dozen” customers, mostly at the Fortune 1000 level — including Grubhub and Crypto.com.

“Tonkean’s AI-powered coordination engine can intelligently and proactively reach people by learning individual or team preferences, like what communication medium is preferred, and route alerts, data, or actions to the right place at the right time,” Eliyahu said. “Tonkean is the operating system for business operations, and as such can be used to deliver use cases in any business operations function including revenue operations, legal operations, HR operations, finance operations, IT operations, and more.”

With this latest round, Tonkean has raised $81 million in venture capital to date. The round also drew contributions from Zoom CEO Eric Yuan, Atlassian co-CEO Scott Farquhar, former Google CEO Eric Schmidt, and executives from UiPath. The company, which has over 60 employees, plans to grow its workforce to over 100 within the next year.

IBM taps AI for new workflow automation and data migration tools

This week during its Think conference, IBM unveiled AI-driven products across its portfolio of enterprise platforms. Mono2Micro, a new capability in WebSphere, taps AI to streamline cloud app migration. Watson Orchestrate helps to automate work in business tools from Salesforce, SAP, and Workday. Meanwhile, an updated IBM Cloud Pak for Data ostensibly reduces the cost and complexity of curating data for AI and machine learning workloads.

The AI industry is booming, with research commissioned by IBM finding that almost one-third of businesses are using some form of AI and machine learning. By 2027, the global AI market is expected to be worth $733.7 billion, according to Grand View Research. But while recent advances in the technology are making AI more accessible, a lack of skills and increasing data complexity remain top challenges.

Mono2Micro

The new Mono2Micro service in WebSphere Hybrid Edition, IBM’s app and integration middleware, optimizes apps and workloads to run in hybrid cloud environments on Red Hat OpenShift. Mono2Micro refactors apps to move them to the cloud, restructuring existing code without changing its behavior or semantics.

IDG reports that the average cloud budget is up from $1.62 million in 2016 to a whopping $2.2 million today. But cloud adoption continues to present challenges for enterprises of any size. A separate Statista survey identified security, managing cloud spend, governance, and lack of resources and expertise as significant barriers to adoption.

“As IT complexity grows with increased adoption of hybrid cloud, enterprises are looking to bring in the power of AI to transform how they develop, deploy, and operate their IT,” IBM wrote in a blog post. “A significant challenge that CIOs face is that many of their core applications were written for an on-premises world and they can have hundreds to thousands of legacy applications that need to be modernized and moved to the cloud.”

Mono2Micro uses machine learning and deep learning to analyze large enterprise Java applications. The analysis produces two alternative refactoring options for an application, which can be explored in graphs and reports for transparency and explainability.

Watson Orchestrate

The newest member of IBM’s Watson family, Watson Orchestrate, is designed to give workers across sales, human resources, operations, and more the ability to perform tasks faster. By interacting with existing enterprise systems, Watson Orchestrate can complete to-do list items like scheduling meetings and procuring approvals.

When McKinsey surveyed 1,500 executives across industries and regions in 2018, 66% said addressing skills gaps related to automation and digitization was a “top 10” priority. Forrester predicts that 57% of business-to-business sales leaders will invest more heavily in tools with automation. And that’s perhaps why Salesforce anticipates the addressable market for customer intelligence will grow to $13.4 billion by 2025, up from several billion today.

Users can interact with Watson Orchestrate using natural language — the software automatically selects and sequences the prepackaged skills needed to perform a task, connecting with apps, tools, data, and history on the fly. For example, a sales director could ask Watson Orchestrate to monitor business opportunities, send an email alert when a deal progresses, and set up a meeting with the respective sales lead to discuss next steps.

Watson Orchestrate also understands and maintains context based on organizational knowledge and prior interactions. Concretely, this means it can act on information informed by a user’s preferences, like a preferred business application or email contact.

Watson Orchestrate comes on the heels of acquisitions made to expand IBM’s automation capabilities, including WDG Automation, Instana, MyInvenio, and Turbonomic, signaling the company’s ambitions in the enterprise automation space. According to IBM’s data, 80% of companies are already using — or plan to use in the next 12 months — automation software and tools.

Watson Orchestrate is currently available in preview as part of IBM’s Cloud Paks for Automation.

Cloud Pak for Data

IBM introduced Cloud Pak for Data three years ago as a way to give enterprises the capabilities to apply AI to data across hybrid cloud environments. Beginning this week, the service is gaining new AI-powered functionality including AutoSQL, which automates the access, integration, and management of data without having to move it.

Managing data is highly complex and can be a real challenge for organizations. A recent MIT and Databricks survey of C-suite data, IT, and senior tech executives found that just 13% of organizations are delivering on their data strategy. The report concluded that machine learning’s business impact is limited largely by challenges in managing its end-to-end lifecycle.

AutoSQL uses the same query engine across sources, including data warehouses, data lakes, and streaming data. It’s part of an intelligent data fabric that leverages AI to orchestrate data management tasks, discovering, understanding, accessing, and protecting data across environments. The fabric brings disparate data into a unified view and aims to ensure that data can be accessed without jeopardizing privacy, security, or compliance.

“Data quality and integration become major issues when pulling from multiple cloud environments,” IBM wrote in a blog post. “With the new data fabric and AI capabilities, [we’re] delivering what we expect to be a significant differentiator for customers by completely automating the data and AI lifecycle — the potential to free up time, money and resources — and connect the right data to the right people at the right time, while conserving resources.”

Another part of the intelligent data fabric, AutoCatalog, taps AI to automate how data is discovered and maintain a real-time catalog of assets from across enterprises. Meanwhile, AutoPrivacy — another data fabric component — automates the identification, monitoring, and enforcement of policies on sensitive data across organizations.

The upgraded Cloud Pak for Data is available to customers starting today.

Workflow automation platform Aisera raises $40M

Aisera, a company developing a platform that automates operations and support tasks across IT, sales, and customer service, today announced it has raised $40 million in a series C round led by Icon Ventures. The startup says the funds, which bring its total raised to $90 million, will support product expansion and deployment, as well as go-to-market, marketing, and software development efforts.

When McKinsey surveyed 1,500 executives across industries and regions in 2018, 66% said addressing skills gaps related to automation and digitization was a “top 10” priority. According to market research firm Fact.MR, small and medium-sized enterprises are expected to adopt business workflow automation at scale, creating a market opportunity of more than $1.6 billion between 2017 and 2026.

A multi-pronged approach

Aisera offers products that auto-complete actions and workflows by integrating with existing enterprise apps, like Salesforce and ServiceNow. The company was founded in 2017 by Muddu Sudhakar, who previously launched e-discovery vendor Kazeon (which was acquired by EMC in 2009), big data startup Cetas (acquired by VMware in 2012), and cybersecurity firm Caspida (acquired by Splunk in 2015).

Aisera claims its platform can continuously learn to resolve issues through a combination of conversational AI, robotic process automation, and reinforcement learning. For example, Aisera can predict outages and send notifications to DevOps teams and customers. Moreover, the company claims its platform can detect patterns to predict service disruptions.

Aisera customers can choose from a library of prebuilt workflows and intents built for IT, HR, facilities, sales operations, and customer service applications. The platform offers out-of-the-box reports and dashboards for auditing, including auto-resolution metrics and the ability to discover the most-requested knowledge articles.

A growing market

Aisera has a number of competitors in a global intelligent process automation market that’s estimated to be worth $15.8 billion by 2025, according to KBV Research. Automation Anywhere and UiPath have secured hundreds of millions of dollars in investments at multibillion-dollar valuations. Within a span of months, Blue Prism raised over $120 million, Kryon $40 million, and FortressIQ $30 million. Tech giants have also made forays into the field, including Microsoft, which acquired RPA startup Softomotive, and IBM, which purchased WDG Automation. That’s not counting newer startups like WorkFusion, Indico, Tray.io, Tonkean, AirSlate, Workato, Camunda, and Automation Hero.

But the funding comes at a time of significant expansion for Aisera. In addition to achieving year-over-year growth of 300% and a base of over 65 million users, the company says it has secured a number of new enterprise customers, including Autodesk, Dartmouth College, McAfee, and Zoom.

Aisera’s success is perhaps unsurprising, given the value proposition of automation. Ninety-five percent of IT leaders are prioritizing automation, and 70% of executives are seeing the equivalent of over four hours saved per employee, per week, according to Salesforce’s recent Trends in Workflow Automation report.

Palo Alto, California-based Aisera’s latest funding round saw participation from new investor World Innovation Lab, as well as existing backers True Ventures, Menlo Ventures, Norwest Venture Partners, Khosla Ventures, First Round Capital, Webb Investment Network, and Sherpalo.

Nvidia launches TAO, an enterprise workflow for AI development

During its GTC 2021 virtual keynote, Nvidia introduced a new product designed to help enterprises choose, adapt, and deploy machine learning models. Called TAO and available starting today in early access, it enables transfer learning as well as other machine learning techniques from a single, enterprise-focused pane of glass.

Transfer learning’s ability to store knowledge gained while solving a problem and apply it to a related problem has attracted considerable attention in the enterprise. Using it, a data scientist can take an open source model like BERT, for example, which is designed to understand generic language, and refine it at the margins to comprehend the jargon employees use to describe IT issues.
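
As a generic sketch of that pattern (using the open source Hugging Face transformers library rather than TAO itself; the model name, label count, and toy ticket examples are our own), one might freeze a pretrained BERT encoder and train only a new classification head:

    # A generic transfer learning sketch with Hugging Face transformers --
    # illustrating the pattern described above, not Nvidia's TAO API.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=3  # e.g., three IT-ticket categories
    )

    # Keep the pretrained language knowledge; train only the new classifier head.
    for param in model.bert.parameters():
        param.requires_grad = False

    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3
    )

    texts = ["VPN drops every hour", "laptop won't boot"]  # toy jargon examples
    labels = torch.tensor([0, 1])
    batch = tokenizer(texts, padding=True, return_tensors="pt")

    model.train()
    out = model(**batch, labels=labels)  # forward pass returns loss and logits
    out.loss.backward()
    optimizer.step()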

TAO integrates Nvidia’s Transfer Learning Toolkit to leverage small datasets, giving models a custom fit without the cost, time, and massive corpora required to build and train models from scratch. TAO also incorporates federated learning, which lets different machines securely collaborate to refine a model for the highest accuracy. Users can share components of models while ensuring datasets remain inside each company’s datacenter.

In machine learning, federated learning entails training algorithms across client devices that hold data samples without exchanging those samples. A centralized server might be used to orchestrate rounds of training for the algorithm and act as a reference clock, or the arrangement might be peer-to-peer. Regardless, local algorithms are trained on local data samples and the weights — the learnable parameters of the algorithms — are exchanged between the algorithms at some frequency to generate a global model.
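
A minimal sketch of one such round, illustrating generic federated averaging with NumPy rather than TAO’s actual implementation:

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """One round of federated averaging: each client trains locally and
        shares only its weights; the server combines them weighted by local
        sample counts. Raw data never leaves a client."""
        total = sum(client_sizes)
        return [
            sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))
        ]

    # Toy round: two clients, a model with one weight matrix and one bias vector.
    client_a = [np.ones((2, 2)), np.zeros(2)]     # trained on 100 local samples
    client_b = [3 * np.ones((2, 2)), np.ones(2)]  # trained on 300 local samples
    global_model = federated_average([client_a, client_b], [100, 300])
    print(global_model[0])  # [[2.5 2.5] [2.5 2.5]]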

TAO also incorporates Nvidia TensorRT, which tunes a model to balance the smallest size against the highest accuracy for the system it will run on. Nvidia claims that TensorRT-based apps perform up to 40 times faster than CPU-only platforms during inference.

Elements of TAO are already in use in warehouses, in retail, in hospitals, and on the factory floor, Nvidia claims. Users include companies like Accenture, BMW, and Siemens Industrial.

“AI is the most powerful new technology of our time, but it’s been a force that’s hard to harness for many enterprises — until now. Many companies lack the specialized skills, access to large datasets or accelerated computing that deep learning requires. Others are realizing the benefits of AI and want to spread them quickly across more products and services,” Adel El Hallak, director of product management for NGC at Nvidia, wrote in a blog post. “TAO … can quickly tailor and deploy an application using multiple AI models.”

The benefits of AI and machine learning can feel intangible at times, but surveys show this hasn’t deterred enterprises from adopting the technology in droves. Business use of AI grew a whopping 270% from 2015 to 2019, according to Gartner, while Deloitte says 62% of respondents to its October 2018 corporate report had deployed some form of AI, up from 53% a year earlier. Bolstered by this growth, Grand View Research predicts that the global AI market will reach $733.7 billion by 2027.

Enterprise workflow automation startup DeepSee.ai raises $22.6M

DeepSee.ai, an enterprise workflow automation platform, today announced that it closed a $22.6 million series A round led by ForgePoint Capital. The company plans to use the funds to support R&D and the expansion of its product beyond the verticals DeepSee currently targets, chiefly capital markets and insurance.

When McKinsey surveyed 1,500 executives across industries and regions in 2018, 66% said addressing skills gaps related to automation and digitization was a “top 10” priority. Salesforce’s recent Trends in Workflow Automation report found that 95% of IT leaders are prioritizing automation and 70% of execs are seeing the equivalent of over 4 hours saved each week per employee. Moreover, according to market research firm Fact.MR, the adoption of business workflow automation at scale could create a market opportunity of over $1.6 billion between 2017 and 2026.

Salt Lake City, Utah-based DeepSee, which was founded in 2019, leverages open source and proprietary machine learning, linguistic comparison and prediction techniques, and sentiment analysis to automate manual business processes. From digital and legacy sources, DeepSee’s cloud-hosted platform captures, extracts, normalizes, labels, and analyzes unstructured data. The platform then surfaces trends and patterns for review, providing a pipeline to deliver AI-generated templates, rules, and logic to systems for actions.

DeepSee customers first specify the data, documents, and specific types of classification they’d like to perform. Then, they select from prepackaged machine learning models, bring their own models, or opt for one of several open source options. Lastly, they choose their desired outcomes via a custom workflow, API, or robotic process automation.

In an interview with VentureBeat, CEO Steve Shillingford pointed to studies like that by Unit 4, which found that office workers spend 69 days a year on administrative tasks — costing companies $5 trillion a year. In the same Unit 4 study, 67% of respondents said implementing digital or software solutions would be important to remain competitive.

“Today’s AI market is very fragmented — several point providers for single-purpose applications. From our vantage, we are seeing a mass consolidation that’s happening in the AI space,” Shillingford told VentureBeat via email. “Enterprises are struggling to stitch point solutions together to drive desired AI initiatives to eliminate the friction to deploy, maintain, and adopt innovation inside the business. And our sweet spot is mining unstructured data, operationalizing AI-powered insights, and automating results into real-time action for the enterprise.”

DeepSee claims it can also mine for insights that reveal how data is impacting a particular business. The company’s crawler technology can browse internal repositories and third-party sources, including media, press releases, and business publications, to cluster attributes, detect outliers and anomalies, and aggregate trends, spotlighting insights for analysis.

“What’s really hard and likely existing in every enterprise is the tension between the folks building and tuning models and the folks expected to run the day-to-day operations. We were surprised at the challenges — challenges even the data scientists had inside the enterprise — at getting enough data to train models to become useful,” Shillingford continued. “In only the way a startup could, we managed to find innovation in that constraint and developed a tool for training across small sparse data sets with the same efficacy as if the model was trained on millions of documents. To that end, we think we’ve stumbled on a solution to one of the biggest problems gating real AI productivity: How to apply NLP to processes where ‘big data’ isn’t available.”

Eighteen-employee DeepSee has a number of competitors in a global intelligent process automation market that’s estimated to be worth $15.8 billion by 2025, according to KBV Research. Automation Anywhere and UiPath have each secured hundreds of millions of dollars in investments at multibillion-dollar valuations. Within a span of months, Blue Prism raised over $120 million, Kryon $40 million, and FortressIQ $30 million. Tech giants have also made forays into the field, including Microsoft, which acquired RPA startup Softomotive, and IBM, which purchased WDG Automation. That’s not counting newer startups like WorkFusion, Indico, Tray.io, Tonkean, AirSlate, Workato, Camunda, and Automation Hero.

But Shillingford says that DeepSee has been working closely with one of the largest banks in the world to develop and bring its tech into production. “Fortunately, but totally by coincidence, they were hurt significantly when the pandemic forced them to shut down several offices. This accelerated the rollout of the DeepSee platform, and thanks to our team, we were able to support the massive increase in volume while they adjusted to a decrease in productivity,” Shillingford said. “The rollout was so successful, they were able to take market share from others during what was one of the most chaotic markets seen in recent memory.”

AllegisCyber Capital and Signal Peak Ventures also participated in DeepSee’s latest funding round, bringing its total raised to date to $30.7 million.
