Amazon launches SageMaker Canvas for no-code AI model development

During a keynote address today at its re:Invent 2021 conference, Amazon announced SageMaker Canvas, which enables users to create machine learning models without having to write any code. Using SageMaker Canvas, Amazon Web Services (AWS) customers can run a machine learning workflow with a point-and-click user interface to generate predictions and publish the results.

Low- and no-code platforms allow developers and non-developers alike to create software through visual dashboards instead of traditional programming. Adoption is on the rise, with a recent OutSystems report showing that 41% of organizations were using a low- or no-code tool in 2019/2020, up from 34% in 2018/2019.

“Now, business users and analysts can use Canvas to generate highly accurate predictions using an intuitive, easy-to-use interface,” AWS CEO Adam Selipsky said onstage. “Canvas uses terminology and visualizations already familiar to [users] and complements the data analysis tools that [people are] already using.”

AI without code

With Canvas, Selipsky says that customers can browse and access petabytes of data from both cloud and on-premises data sources, such as Amazon S3, Redshift databases, and local files. Canvas uses automated machine learning technology to create models; once the models are created, users can explain and interpret them and share them with one another to collaborate and enrich insights.

“With Canvas, we’re making it even easier to prepare and gather data for machine learning to train models faster and expand machine learning to an even broader audience,” Selipsky added. “It’s really going to enable a whole new group of users to leverage their data and to use machine learning to create new business insights.”

Canvas follows on the heels of SageMaker improvements released earlier in the year, including Data Wrangler, Feature Store, and Pipelines. Data Wrangler recommends transformations based on data in a target dataset and applies these transformations to features. Feature Store acts as a storage component for features, which can be accessed in batches or subsets. Pipelines lets users define, share, and reuse each step of an end-to-end machine learning workflow with preconfigured, customizable workflow templates, while logging each step in SageMaker Experiments.
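
For readers who want to see how those pieces connect, here is a minimal sketch of a two-step SageMaker Pipeline built with the SageMaker Python SDK; the role ARN, S3 paths, and script name are placeholders rather than anything from the announcement, and a real pipeline would add parameters, feature-group ingestion, and model registration steps.

```python
# Minimal sketch of a SageMaker Pipeline: one preprocessing step feeding one training step.
# The role ARN, S3 URIs, and script name below are placeholders, not values from the article.
import sagemaker
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import ProcessingStep, TrainingStep
from sagemaker.workflow.pipeline import Pipeline

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder
session = sagemaker.Session()

processor = SKLearnProcessor(
    framework_version="1.0-1", role=role,
    instance_type="ml.m5.xlarge", instance_count=1,
)
preprocess = ProcessingStep(
    name="PrepareData",
    processor=processor,
    code="preprocess.py",  # placeholder preprocessing script
    inputs=[ProcessingInput(source="s3://my-bucket/raw/", destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
)

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1"),
    role=role, instance_count=1, instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",
)
train = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(
        preprocess.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri,
        content_type="text/csv",
    )},
)

pipeline = Pipeline(name="demo-pipeline", steps=[preprocess, train], sagemaker_session=session)
pipeline.upsert(role_arn=role)   # register or update the workflow definition
execution = pipeline.start()     # each step run is tracked in SageMaker
```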

With upwards of 82% of firms saying that custom app development outside of IT is important, Gartner predicts that 65% of all apps will be created using low- and no-code platforms like Canvas by 2024. Another study reports that 85% of 500 engineering leaders think that low- and no-code will be commonplace within their organizations as soon as 2021.

If the current trend holds, the market for low- and no-code could climb to between $13.3 billion and $17.7 billion in 2021 and between $58.8 billion and $125.4 billion in 2027.

No-code customer journey company EasySend gets $55.5M

EasySend, an Israel-based company that offers no-code software to digitize processes — from mortgage applications to onboarding paperwork — said it has raised a $55.5 million series B funding round.

The company, which targets small to midsize enterprises, says it uses a cloud-based no-code builder that leverages AI, third-party integrations, and analytics to provide seamless customer experiences at a low cost. EasySend claims its technology enables organizations to build and launch digital customer journeys for any process, including upsells, customer applications, and claim forms.

EasySend says its customers include insurers, banks, and credit unions like Cincinnati Insurance, NJM Insurance Group, PSCU, Sompo, and Petplan, and that its revenue increased tenfold over the last year.

A number of other companies are working to digitize processes with no-code and low-code offerings, including Ultimus, PandaDoc, FormStack, WayScript, and Baton.

The company said the additional capital will accelerate EasySend’s growth in the U.S. and other regions, supporting expansion into new verticals and product use cases. The funding round was led by Oak HC/FT, with participation from existing investors Vertex IL, Intel Capital, and Hanaco Ventures.

Accelerating digital transformation

The pandemic has undoubtedly shaped the way organizations conduct business, in several positive ways, with the 2021 Gartner Board of Directors Survey showing that 69% of boards of directors accelerated their digital business initiatives following COVID-19 disruption.

Tal Daskal, CEO at EasySend, said the pandemic has led to a massive shift in how people interact with businesses online.

EasySend was founded in 2016 by Daskal and brothers Omer Shirazi (chief operating officer) and Eran Shirazi (chief technology officer). It has now raised a total of $71.5 million.

Deep tech, no-code tools will help future artists make better visual content

This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital.

Despite the hype, the “creator economy” is not new. It has existed for generations, primarily dealing with physical goods (pottery, jewelry, paintings, books, photos, videos, etc). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation where everyone and their mother are now creating, sharing, and participating online.

The vast majority of the content created and consumed on the internet is visual. In our recent Insights report at LDV Capital, we found that by 2027, there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that automate aspects of content creation and remove the technical skill from digital creation. This article draws on the findings of that report.

[Image: Group of superheroes on a dark background. Credit: ©LDV Capital Insights 2021]

We now live as much online as we do in person and as such, we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, video games, or anything else that is viewed on our screens, it is visual content.

Currently, producing a single piece of high-quality, contextually relevant visual content often takes years of prior training. It has also typically required deep technical expertise to produce content at the speed and in the quantities required today. But new platforms and tools powered by visual technologies are changing the paradigm.

Computer vision will aid livestreaming

Livestreaming is video recorded and broadcast in real time over the internet, and it is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreaming content daily, making it one of the most popular forms of online content.

Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory.

The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will aid livestream automation. They will enable streamers' feeds to be analyzed in real time and production elements to be added automatically, improving quality while cutting back the time and technical skill required of streamers today.
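
As a toy illustration of that kind of real-time analysis (not any vendor's actual product), the sketch below uses OpenCV to read webcam frames, detect faces, and draw a simple production overlay; a real system would layer on sentiment analysis, scene detection, and broadcast integration.

```python
# Toy illustration of real-time livestream analysis with OpenCV: detect faces in each
# frame and draw a simple overlay. Not any vendor's actual pipeline.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
capture = cv2.VideoCapture(0)  # 0 = default webcam

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # highlight the streamer
    cv2.putText(frame, f"faces: {len(faces)}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)    # simple production overlay
    cv2.imshow("overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
        break

capture.release()
cv2.destroyAllWindows()
```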

Synthetic visual content will be ubiquitous

A lot of the visual content we view today is already computer-generated imagery (CGI), special effects (VFX), or altered by software (e.g., Photoshop). Whether it’s the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.

By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project synthetic visual content will be nearly 95% of the content we view. Synthetic media uses generative adversarial networks (GANs) to write text, make photos, create game scenarios, and more using simple prompts from humans such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.

[Image. Left: a rudimentary drawing; right: the landscape image built by NVIDIA’s GauGAN from that drawing. Credit: ©LDV Capital Insights 2021]

In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, find locations, and do a full photo or video shoot. Moreover, it will make video programmable, as simple to put together as a slide deck.

Synthetic media that leverages GANs can also personalize content nearly instantly, enabling any video to address the viewer by name or a video game to be written in real time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media.

Artificial intelligence will deliver motion capture to the masses

Animated video requires expertise, as well as even more time and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). It will be an increasingly essential part of the content strategy for brands and businesses, deployed across image, video, and livestream channels as a mechanism for diversifying content.

[Image: Graph displaying the motion capture landscape. Credit: ©LDV Capital Insights 2021]

The greatest hurdle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates 4 seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to digitally record a physical pattern of an individual’s movements for the purpose of animating them. An example would be recording Steph Curry’s jump shot for NBA 2K.

Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap – with little to no suits, sensors, or hardware. Facial motion capture has already come a long way, as evidenced in some of the incredible photo and video filters out there. As capabilities advance to full body capture, it will make MoCap easier, faster, budget-friendly, and more widely accessible for animated visual content creation for video production, virtual character live streaming, gaming, and more.
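
To make camera-based capture concrete, here is a minimal sketch using Google’s open source MediaPipe Pose to pull body landmarks from an ordinary video file; the file name is a placeholder, and a full MoCap pipeline would retarget those landmarks onto a character rig.

```python
# Minimal markerless motion-capture sketch: extract body landmarks per frame
# with MediaPipe Pose. "performance.mp4" is a placeholder file name.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
capture = cv2.VideoCapture("performance.mp4")

with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
    frame_index = 0
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Each landmark has normalized x, y, z plus a visibility score;
            # an animation pipeline would retarget these onto a character rig.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(frame_index, round(nose.x, 3), round(nose.y, 3), round(nose.z, 3))
        frame_index += 1

capture.release()
```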

Nearly all content will be gamified

Gaming is a massive industry, set to hit nearly $236 billion globally by 2027. It will keep growing as more and more content introduces gamification to encourage interactivity. Gamification is the application of typical elements of game playing, such as point scoring, interactivity, and competition, to encourage engagement.

Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. Growth in the number of players, their diversity, and the hours they spend playing online games will drive high demand for unique content.

AI and cloud infrastructure capabilities play a major role in helping game developers build large volumes of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces.

People will interact with many digital beings

We will have digital identities to produce, consume, and interact with content. In our physical lives, people have many aspects of their personality and represent themselves differently in different circumstances: the boardroom vs. the bar, in groups vs. alone, and so on. Online, old-school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least three digital versions of themselves, both photorealistic and fantastical, with which to participate online.

[Image: Five examples of digital identities. Credit: ©LDV Capital Insights 2021]

Digital identities (or avatars) require visual tech. Some will enable public anonymity for the individual, some will be pseudonyms, and others will be directly tied to physical identity. A growing number of them will be powered by AI.

These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more.

Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article.

Machine learning will curate, authenticate, and moderate content

For creators to continuously produce the volumes of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content: long-form into short-form, videos into blog posts (or vice versa), social posts, and more. These systems will select content and format based on the performance of past publications, using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.

In order to then filter through the massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through and present to us content personalized to our interests and aspirations. Eventually, we’ll see personalized synthetic video content replacing text-heavy newsletters, media, and emails.

Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to the creator both for rights management and management of deep fakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.

Detecting disturbing and dangerous content is just as important, and it is increasingly hard to do given the vast quantities of content published. AI and computer vision algorithms are necessary to automate the detection of hate speech, graphic pornography, and violent attacks, because doing it manually in real time is too difficult and not cost-effective. Multi-modal moderation that includes image recognition, as well as voice and text recognition and more, will be required.

Visual content tools are the greatest opportunity in the creator economy

Over the next five years, individual creators who leverage visual tech tools to create visual content will rival professional production teams in the quality and quantity of the content they produce. The greatest business opportunities today in the creator economy are the visual tech platforms and tools that will enable those creators to focus on the content rather than on the technical creation.

Abigail Hunter-Syed is a Partner at LDV Capital investing in people building businesses powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has more than a ten-year track record of leading strategy, ops, and investments in companies across four continents and rarely says no to soft-serve ice cream.

No-code AI analytics may soon automate data science jobs

SparkBeyond, a company that helps analysts use AI to generate new answers to business problems without requiring any code, today released its product, SparkBeyond Discovery.

The company aims to automate the job of a data scientist. Typically, a data scientist looking to solve a problem may be able to generate and test 10 or more hypotheses a day. SparkBeyond’s engine, the company says, can generate millions of hypotheses per minute, drawing on data from the open web as well as a client’s internal data. Additionally, SparkBeyond explains its findings in natural language, so a no-code analyst can easily understand them.
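
SparkBeyond has not published how its engine works, but the general idea of generating and scoring candidate hypotheses at scale can be illustrated with a purely hypothetical pandas sketch: enumerate transformations of the available columns and rank them by how strongly they relate to the business target.

```python
# Hypothetical illustration of automated hypothesis generation and scoring.
# This is not SparkBeyond's code; it only mimics the idea of enumerating many
# candidate features and ranking them against a business target.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rainy_days_last_week": rng.integers(0, 7, 500),
    "distance_to_laundromat_m": rng.uniform(50, 5000, 500),
    "distance_to_competitor_m": rng.uniform(50, 5000, 500),
})
# Toy target: profit loosely driven by laundromat proximity, plus noise.
df["profit"] = 100 - 0.01 * df["distance_to_laundromat_m"] + rng.normal(0, 10, 500)

# "Hypotheses" = candidate transformations of the raw columns.
transforms = {"raw": lambda s: s, "log": lambda s: np.log1p(s), "squared": lambda s: s ** 2}

hypotheses = []
for column in df.columns.drop("profit"):
    for name, fn in transforms.items():
        candidate = fn(df[column])
        score = abs(candidate.corr(df["profit"]))  # score each hypothesis against the target
        hypotheses.append((f"{name}({column})", round(score, 3)))

# Rank hypotheses; a real system would generate millions and test them in parallel.
for name, score in sorted(hypotheses, key=lambda item: item[1], reverse=True)[:5]:
    print(name, score)
```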

How companies can benefit from AI analytics data automation

The product is the culmination of work that started in 2013 when the company had the idea to build a machine to access the web and GitHub to find code and other building blocks to formulate new ideas for finding solutions to problems. To use SparkBeyond Discovery, all a client company needs to do is specify its domain and what exactly it wants to optimize.

SparkBeyond has offered a test version of the product, which it began developing two years ago. The company says its customers include McKinsey, Baker McKenzie, Hitachi, PepsiCo, Santander, Zabka, Swisscard, SEBx, Investa, Oxford, and ABInBev.

One of SparkBeyond’s client success stories involved a retailer that wanted to know where to open 5,000 new stores, with the goal of maximizing profit. As SparkBeyond CEO Sagie Davidovich explains, SparkBeyond took point-of-sale data from the retailer’s existing stores to find which were most profitable. It correlated profitability with data from a range of external sources, including weather information, maps, and geo-coordinates. It then tested a range of hypotheses, such as whether three consecutive rainy days in proximity to competing stores correlated with profitability. In the end, proximity to laundromats correlated most strongly with profitability, Davidovich explains. It turns out people have time to shop while they wait for their laundry, something that may seem obvious in retrospect but was not at all obvious at the outset.

The company says its auto-generation of predictive models for analysts puts it in a unique position in the marketplace of AI services. Most AI tools aim to help the data scientist with the modeling and testing process once the data scientist has already come up with a hypothesis to test.

Competitors in the data automation space

Several competitors, including DataRobot and H2O.ai, offer automated AI and ML modeling. But SparkBeyond’s VP and general manager, Ed Janvrin, says this area of auto-ML feels increasingly commoditized. SparkBeyond also offers an auto-ML module, he says.

There are also several competitors, including Dataiku and Alteryx, that help with no-code data preparation. But those companies are not offering pure, automated feature discovery, says Janvrin. SparkBeyond is working on its own data preparation features, which will allow analysts to easily join most data types, such as time-series, text, or geospatial data, without writing code.

Since 2013, SparkBeyond has quietly raised $60 million in total backing from investors, which it did not previously announce. Investors include Israeli venture firm Aleph, Lord David Alliance, and others.

“The demand for data skills has reached virtually every industry,” said Davidovich in a statement. “What was once considered a domain for expert data scientists at large enterprise organizations is now in urgent demand across companies of all sizes.”

“Our new release is powerful yet intuitive enough that data professionals — including analysts at medium-sized and smaller organizations — can now harness the power of AI to quickly join multiple datasets, generate millions of hypotheses and create predictive models, unearthing unexpected drivers for better decision-making.”

Formstack: 82% of people don’t know what ‘no-code’ means

Despite massive growth in no-code, only 18% of people are familiar with “no-code,” according to the Rise of the No-Code Economy report produced by Formstack, a no-code workplace productivity platform.

[Image: People use no-code tools because they make it easier and faster to develop applications. Credit: Formstack]

Yet despite that lack of familiarity, the survey found that 66% of respondents adopted no-code tools in the last year, and 41% in the last six months, suggesting developer shortages and the pandemic have driven growth across the industry.

No-code is a type of software development that allows anyone to create digital applications—from building mobile, voice, or ecommerce apps and websites to automating any number of tasks or processes—without writing a single line of code. It involves using tools with an intuitive, drag-and-drop interface to create a unique solution to a problem. Alternatively, low-code minimizes the amount of hand-coding needed, yet it still requires technical skills and some knowledge of code.

The report showed that no-code adoption and awareness are still largely limited to the developer and tech community, as 81% of those who work in IT are familiar with the idea of no-code tools. But despite current adoption rates among those outside of IT, the report shows huge potential for non-technical employees to adopt no-code. Formstack’s survey found that 33% of respondents across industries expect to increase no-code usage over the next year, with 45% in computer and electronics manufacturing, 46% in banking and finance, and 71% in software.

In addition to growing adoption rates, many respondents believe their organizations and industries will begin hiring for no-code roles within the next year, signaling that the growing no-code economy will create demand in an entirely new industry segment.

The survey polled 1,000 workers across various industries, job levels, and company sizes to gather comprehensive data on the understanding, usage, and growth of no-code. External sources and data points are included throughout the report to further define and explain the no-code economy.

Read Formstack’s full Rise of the No-Code Economy report.

Anvilogic raises $10M to scale no-code cyberattack detection platform

Cybersecurity detection automation company Anvilogic today announced a $10 million series A round led by Cervin Ventures. CEO Karthik Kannan says the capital will be put toward scaling and R&D.

In a 2017 Deloitte survey, only 42% of respondents considered their institutions to be extremely or very effective at managing cybersecurity risk. The pandemic has certainly done nothing to alleviate these concerns. Despite increased IT security investments companies made in 2020 to deal with distributed IT and work-from-home challenges, nearly 80% of senior IT workers and IT security leaders believe their organizations lack sufficient defenses against cyberattacks, according to IDG.

Anvilogic is a VC-funded cybersecurity startup based in Palo Alto, California, and founded by former Splunk, Proofpoint, and Symantec data engineers. The company’s product, which launched in 2019, is a collaborative, no-code platform that streamlines detection engineering workflows by helping IT teams assess cloud, web, network, and endpoint environments and build and deploy attack-pattern detection code.

Anvilogic is designed to provide improved visibility, enrichment, and context across alerting datasets, enhancing the ability to aggregate, detect, and respond using existing data. The platform provides a continuous maturity scoring model and AI-assisted use case recommendations based on industry trends, threat priorities, and data sources. Using Anvilogic, security teams can visualize suspicious activity patterns and synchronize content metadata for detection and alerting.

Key areas of automation

As Kannan explained to VentureBeat via email, the Anvilogic platform has four key functionality focus areas. The first is automated assessment of the state of security, which spans automatically scoring a customer’s security readiness with a metric, along with a gap analysis. This capability provides AI-driven prioritization to guide the customer on where to start and when to go deeper, based on criteria such as their industry, the current landscape, peer behavior, available data sources, current gaps, and more.

The platform’s next area is automation of detection engineering, which includes AI-based suggestions for security teams, data sources, a no-code build environment to construct detections, and an integrated workflow for task management and detection deployment. Then there’s automation of hunting and triage, where AI-based correlations of signals produce higher-order threat detection outcomes, which provide the entire story of an alert. Anvilogic auto-enriches alerts based on a hunting and triage framework. The final piece is ongoing learning across enterprises to learn new workflows, patterns, and actions and to provide the entire network better insights and recommendations for detections, hunting, and triage.
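
Anvilogic has not published its rule format, but a detection assembled in a no-code builder ultimately compiles down to something like the hypothetical, data-driven rule below, evaluated here against a handful of toy log events.

```python
# Hypothetical sketch of a data-driven detection rule, the kind of artifact a
# no-code detection builder might emit. Field names and thresholds are invented.
from collections import Counter

rule = {
    "name": "possible-brute-force-login",
    "event_type": "authentication",
    "match": {"outcome": "failure"},
    "group_by": "source_ip",
    "threshold": 5,  # alert when one source IP fails this many times
}

events = [
    {"event_type": "authentication", "outcome": "failure", "source_ip": "10.0.0.7"},
    {"event_type": "authentication", "outcome": "failure", "source_ip": "10.0.0.7"},
    {"event_type": "authentication", "outcome": "success", "source_ip": "10.0.0.9"},
] + [{"event_type": "authentication", "outcome": "failure", "source_ip": "10.0.0.7"}] * 4

def evaluate(rule, events):
    """Count matching events per group and alert on any group over the threshold."""
    counts = Counter(
        e[rule["group_by"]]
        for e in events
        if e["event_type"] == rule["event_type"]
        and all(e.get(k) == v for k, v in rule["match"].items())
    )
    return [(rule["name"], key, n) for key, n in counts.items() if n >= rule["threshold"]]

print(evaluate(rule, events))  # [('possible-brute-force-login', '10.0.0.7', 6)]
```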

“All use cases are connected and have a smooth handoff via a task management workspace, along with baked-in access controls such that the entire detection engineering and hunting/triage process is automated by the platform,” Kannan said. “The user experience is guided by our intuitive and domain-driven user interface, and the maturity score provides users guidance on what to build/deploy and also serves as a tracker of progress and gaps.”

Cybersecurity during the pandemic

Reflecting the pace of adoption, the AI in cybersecurity market will reach $38.2 billion in value by 2026, Markets and Markets projects. That’s up from $8.8 billion in 2019, representing a compound annual growth rate of around 23.3%. Just last week, a study from MIT Technology Review Insights and Darktrace found that 96% of execs at large enterprises are considering adopting “defensive AI” against cyberattacks.
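
Those figures are internally consistent: growing from $8.8 billion in 2019 to $38.2 billion in 2026 over seven years works out to a compound annual growth rate of roughly 23%, as this quick check shows.

```python
# Quick check of the cited growth rate: $8.8B (2019) to $38.2B (2026) over 7 years.
start, end, years = 8.8, 38.2, 7
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~23.3%
```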

“Our vision is to deliver complete automation to the security operation center (SOC) in the emerging cloud-first world and deliver what we call SOC neutrality. We believe that all logging will be on a distributed cloud warehouse in the future, and there will be even more silos of alerts and workflows (e.g., primary on-premises logging, traditional network workloads, and newer cloud workloads) in the SOC,” Kannan said. “Anvilogic will become the unified security fabric that delivers total end-to-end SOC automation across silos, successfully delivering detection and hunting capability by correlating across workloads, powered by AI, domain-specific frameworks and automation.”

Beyond Cervin, Foundation Capital, Point 72 Ventures, and Dan Warmenhoven participated in 25-employee Anvilogic’s latest funding round. It brings the company’s total raised to date to over $15 million.

Upsolver enables no-code data analytics in the cloud with $25M funding

Upsolver, a no-code startup that enables analytics on cloud data lakes, this morning announced that it raised $25 million in financing led by Scale Venture Partners. The company, which also today launched a free community edition of its product, says the funds will be used to hire engineers and scale its go-to-market efforts.

Enterprises are rapidly adopting the cloud — 68% of CIOs ranked migrating to the cloud as their top IT spending driver in 2020, according to a Deloitte survey. But building an analytics-ready cloud data lake can be complex and expensive. A recently published Statista report found that around 83% of cloud practitioners considered security, managing cloud spend, governance, and a lack of resources to be significant barriers to entry.

Upsolver develops software designed to prime data lakes for analytics, CEO Ori Rafael told VentureBeat via email. Founded in 2014 by Rafael and Yoni Eini, the company’s platform offers a visual structured query language (SQL) interface and automations for data optimization, tuning, and orchestration.

“We wanted to store data affordably in the cloud without analytics vendor lock-in,” Rafael said. “Unfortunately, what used to take three hours using SQL turned into 30 days of hand-coding and complex Spark configuration. We created Upsolver to bridge this gap between raw cloud data and analytics-ready data.”

Upsolver’s product lets companies perform analytics using a range of query engines and data systems including PrestoDB, Trino, Athena, Snowflake, Redshift, Synapse Analytics, Splunk, and Elastic. Ultimately, the goal is to replace all a customer’s code-heavy approaches with Upsolver’s compute layer, which sits between the customer’s cloud storage and their preferred tools, engines, and apps.

“Upsolver is a critical component for successfully implementing a cloud data lake for analytics, which is a popular approach due to the affordability that data lakes provide. We complement cloud data warehouses, search engines, and other purpose-built data stores as well,” Rafael said. “Upsolver is often used to output prepared data to those platforms making data available to standalone query engines. Data engineers use Upsolver’s visual user interface to build any data transformations and preparation tasks. This generates SQL that the data engineer can also directly edit to fine-tune the processing.”
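
Upsolver has not published the SQL its visual builder emits, but the kind of transformation described (raw events rolled up into an analytics-ready table) looks roughly like the hypothetical example below, run against an in-memory SQLite table purely for illustration; the table and column names are invented.

```python
# Illustrative only: the kind of SQL transformation a visual builder might generate,
# run here against an in-memory SQLite table of raw events. Names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_events (user_id TEXT, event_time TEXT, revenue REAL)")
con.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [("u1", "2021-04-01", 9.99), ("u1", "2021-04-02", 4.99), ("u2", "2021-04-01", 19.99)],
)

# Analytics-ready aggregate: daily revenue per user, ready for a downstream query engine.
generated_sql = """
    SELECT user_id, DATE(event_time) AS day, SUM(revenue) AS daily_revenue
    FROM raw_events
    GROUP BY user_id, DATE(event_time)
"""
for row in con.execute(generated_sql):
    print(row)
```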

Rafael says that Upsolver will soon add the ability to replicate databases into data lakes while keeping them up to date. The platform recently launched on Azure and is set to become available in a community edition delivering “free-forever” capabilities to those with smaller workloads, providing a proving ground for companies who might want to work with Upsolver on larger use cases.

Scale partner Ariel Tseitlin asserts that Upsolver benefits from having a foot in two fast-growing markets: big data analytics and data lakes. The global data lake market size was valued at $7.6 billion in 2019, according to Grand View Research. And in 2017, Forbes reported that 53% of companies had adopted big data analytics, with a large portion opting to run workloads in the cloud.

Thirty-employee Upsolver says that revenue tripled in 2020 as brands including Cox Automotive, Wix, and AppsFlyer joined its customer base.

“Monolithic analytics platforms are a thing of the past. Today’s organizations require a variety of analytics tools to fully capitalize on their data,” Tseitlin said in a press release. “Data lakes originally promised this variety and openness but also required a large, ongoing investment in engineering. Upsolver eliminates this trade-off.”

Existing investors Vertex Ventures US, Wing Venture Capital, and JVP also participated in Upsolver’s series B round. It comes on the heels of a $13 million series A and brings the company’s total raised to date to $42 million.

Full Speed Automation raises $3.2M for no-code solutions that accelerate industrial digitization

For all the talk about the factory of the future and Industry 4.0, manufacturing systems can still be quite antiquated and fragmented when it comes to achieving real automation. While some areas may be digitized, monitoring these systems and getting them to communicate with each other can still be a daunting, labor-intensive challenge.

After leading the automation engineering team at Tesla for almost two years, Luc Leroy believes he knows how to solve this problem. Last summer, he left Tesla to found Full Speed Automation, which has just raised $3.2 million to finish developing the first version of its platform, called Vitesse (the French word for “speed”).

The Vitesse platform will be a middleware layer that automatically detects all the digital devices in a manufacturing operation and allows managers to connect them with a simple point-and-click. This no-code approach uses APIs to accelerate the speed at which manufacturers can enhance their own systems.

For Leroy, the startup is the culmination of a long journey that began in classic software development and unexpectedly took him into manufacturing. The differences in terms of speed and agility were jarring, and he is determined to bridge that divide.

“I started to see a huge gap between the expectations you would get from mobile users and web and tech startups where we had to get something running in 24 hours,” Leroy said. “The world of manufacturing was extremely slow. And I started to get intrigued about that. There was no notion of iteration and fail quickly, the things we all take for granted for the past 10 to 15 years in the world of pure software.”

Leroy founded his first startup more than 20 years ago in Paris, an augmented reality service for television broadcasts. In 2003, he co-founded Okyz, which enabled the creation of 3D models that could be placed in PDF documents. Adobe acquired the company in 2004, and Leroy moved to Silicon Valley where he began interacting more and more with manufacturers who were using the PDF 3D product.

What he learned surprised him.

“I realized that everything we learn at school about lean manufacturing and just-in-time and zero stock, it’s not right,” he said. “When you order a replacement part, it still gets made two months earlier in Japan and then crossed the world on a polluting ship and then sits on a shelf until it’s needed. And we still call that zero stock.”

Part of the issue was that manufacturing systems were still too slow. When it came time to upgrade software, employees were walking around the factory floor with USB sticks, manually updating each portion of the assembly line. To get these different parts of the process to communicate, companies had to spend huge resources writing custom software that could be slow and unresponsive at times as the system evolved.

Following a stint working for a company building an Iron Man-like smart helmet, Leroy was recruited to work at Tesla where automating the assembly line to meet ambitious production goals had become a high priority.

“The mission was to accelerate the transition to sustainable transportation and ultimately, sustainable energy,” Leroy said. “That meant a lot to me.”

Leroy’s team was placed in charge of solving the Tesla Model 3’s “production hell.” Over time, Leroy’s team developed tools to automate those updates on the thousands of robots in Tesla’s factory. After 18 months, Leroy began to think that the insights he’d gained at Tesla could be useful for a far larger base of manufacturers.

“You can have top-notch AI, and computer vision for quality control and safety,” he said. “But the software for each of these exists in its own verticals.”

Democratizing automation

While Tesla’s buzz and market cap allow it to hire many of the best engineers, that’s a tougher haul for incumbent manufacturers, even though giants in verticals like automotive, finance, and insurance have effectively become software companies over the past decade, employing huge armies of developers.

So what Tesla built may not be easy for others to replicate. To solve that problem, Leroy argues that any solution has to be as simple as possible to manage, so it is accessible to anyone. That’s why he is focusing on the no-code approach: a point-and-click UI for connecting all devices in a factory.

“We need to find a way to democratize these tools so that experienced technicians can use them and very quickly,” Leroy said.

Vitesse will be horizontal, linking to all of these different assets via an API. In doing so, factory managers will be able to dramatically expand the level of automation in their shops. Once this intermediate layer is in place, factories should be able to iterate rapidly and reset product lines in days instead of weeks or months. In addition, data collection will be simpler and cleaner, allowing for better monitoring in real time.

Since last summer, Leroy has assembled a team of six to build the proof of concept. He’s working with an undisclosed list of manufacturing partners to refine it, with the goal of releasing a beta version in Q2 of this year. That also includes partnering with component providers to integrate a large catalog of equipment into Vitesse’s library to ease integration.

To further that goal, the company has raised a $3.2 million seed round led by HCVC, the venture capital arm of Paris-based Hardware Club. The round also included money from Catapult Venture Capital, Seedcamp, Serena Capital, Kima Ventures, and Diaspora Ventures, as well as industry executives such as Bruno Bouygues, CEO of manufacturing supplier GYS.
