Cnvrg.io develops on-demand service to run AI workloads across infrastructures

Cnvrg.io, the Intel-owned company that offers a platform to help data scientists build and deploy machine learning applications, has opened early access to a new managed service called Cnvrg.io Metacloud.

The offering, as the company explains, gives AI developers the flexibility to run, test, and deploy AI and machine learning (ML) workloads on a mix of mainstream infrastructure and hardware choices, even within the same AI/ML workflow or pipeline.

Cnvrg.io Metacloud: Flexibility for AI developers

AI teams often struggle to scale their projects because of the limitations of the cloud or on-premises infrastructure they use. Switching to a new environment is an option, but it means re-instrumenting an entirely new stack and spending considerable time and money. As a result, most users end up locked into a single vendor, which is a major obstacle to scaling and operationalizing AI.

Cnvrg.io Metacloud tackles this challenge with a flexible software-as-a-service (SaaS) interface, where developers can pick the cloud or on-premises compute resources and storage services of their choice to match the demands of their AI/ML workloads.

The solution is built on cloud-native technologies such as containers and Kubernetes, which lets developers pick any infrastructure provider from a partner menu to run their projects. All users need to do is create an account, select the AI/ML infrastructure (any public cloud, on-premises, colocated, dev cloud, pre-release hardware, and more), and run the workload, the company said.
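
To make that workflow concrete, here is a minimal sketch of what an infrastructure-agnostic job launch could look like under those assumptions. The ComputeTarget class, run_workload() helper, and the kubectl stand-in are all hypothetical illustrations, not Cnvrg.io's actual SDK or API.

```python
# Hypothetical sketch of an infrastructure-agnostic job launch. The
# ComputeTarget class, run_workload() helper, and the kubectl stand-in
# are invented for illustration; they are not Cnvrg.io's actual SDK.
from dataclasses import dataclass
import subprocess

@dataclass
class ComputeTarget:
    provider: str   # e.g. "aws", "azure", "gcp", or "on-prem"
    machine: str    # instance type or node pool name
    gpus: int = 0

def run_workload(image: str, command: list[str], target: ComputeTarget) -> None:
    """Launch a containerized job on the chosen compute target.

    Because the workload is packaged as a container and scheduled on
    Kubernetes, the job definition stays the same across providers;
    only the target changes.
    """
    print(f"Scheduling {image} on {target.provider}/{target.machine}")
    # A real cloud-native stack would call the Kubernetes API; shelling
    # out to kubectl (assumed installed and configured) stands in here.
    subprocess.run(
        ["kubectl", "run", "train-job", f"--image={image}", "--", *command],
        check=True,
    )

# Moving the same job from AWS to an on-premises cluster means swapping
# the target, not re-instrumenting the stack.
run_workload(
    image="my-registry/train:latest",
    command=["python", "train.py"],
    target=ComputeTarget(provider="aws", machine="p3.2xlarge", gpus=1),
)
```

The point of the design is that the job definition never changes; only the target does, which is what removes the re-instrumentation cost described above.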

Plus, since there is no commercial commitment, developers can always switch to a different infrastructure to meet growing project demands or budget constraints. The current list of supported providers includes Intel, AWS, Azure, GCP, Dell, Red Hat, VMware, and Seagate.

“AI has yet to meet its ultimate potential by overcoming all the operational complexities. The future of machine learning is dependent on the ability to deliver models seamlessly using the best infrastructure available,” Yochay Ettun, CEO and cofounder of Cnvrg.io, said in a statement.

“Cnvrg.io Metacloud is built to give flexibility and choice to AI developers to enable successful development of AI instead of limiting them, so enterprises can realize the full benefits of machine learning sooner,” he added.

Cnvrg.io Metacloud will be provided as part of the Cnvrg.io full-stack machine learning operating system, designed to help developers build and deploy machine learning models. The early access version of the solution can be accessed upon request via the company website.

Notably, this is the first major announcement from Cnvrg.io since its acquisition by Intel in 2020. Prior to the deal, the company had raised about $8 million from multiple investors, including Hanaco Venture Capital and Jerusalem Venture Partners.

Marketing automation is key to reducing workloads, Zapier says

From calendar events to email management, automation plays a major role in workflows across industries. A little more than half of marketers used automation in team communication, according to a recent poll by Zapier.

About 42% of marketers reported using automation to identify and target customers. More than a quarter of them used it to schedule emails, and about a third used marketing automation to send tailored messages, manage a subscriber database, or notify their team members of events. Automation saves marketers about 25 hours per week, according to Zapier.

Zapier examined how various professionals use automation in their roles, polling 1,500 workers in marketing, IT, accounting, human resources, and sales or customer service at small to medium-sized businesses across the U.S. The company released its report on Monday.

In IT, 51% of the workers polled used automation to manage emails. Many also used it to communicate with colleagues (46%), update project lists (42%), meet deadlines using a calendar (39%), and test proofs of concept (33%). They save about 20 hours a week, Zapier said.

About 43% of accountants used automation to import hours into payroll, according to Zapier. Around 40% used it to communicate with their team, collect receipts, and streamline purchase and budget approvals. This saves them about four hours a week.

Meanwhile, about four in 10 sales and customer service professionals used automation to message their team members about a lead or a customer, message a lead or a customer, and collect invoices and payments. About a third used automation to forecast sales and for CRM hygiene.

Automation saves sales representatives about four hours a week and customer service representatives about 16 hours a week, according to the poll.

Dell releases open source suite Omnia to manage AI, analytics workloads

Dell today announced the release of Omnia, an open source software package aimed at simplifying the deployment and management of AI and other compute-intensive workloads. Developed at Dell’s High Performance Compute (HPC) and AI Innovation Lab in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of HPC, AI, and data analytics workloads to create a single pool of hardware resources.

The release of Omnia comes as enterprises turn to AI during the health crisis to drive innovation. According to a Statista survey, 41.2% of enterprises say they’re competing on data and analytics, while 24% say they’ve created data-driven organizations. Meanwhile, 451 Research reports that 95% of companies surveyed for its recent study consider AI technology important to their digital transformation efforts.

Dell describes Omnia as a set of Ansible playbooks that speed the deployment of converged workloads with containers and Slurm, along with library frameworks, services, and apps. Ansible, which is now maintained by Red Hat, helps with configuration management and app deployment, while Slurm is a job scheduler for Linux used by many of the world’s supercomputers and computer clusters.
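
For readers unfamiliar with Slurm, the sketch below shows generic Slurm usage, not Omnia's own code: a batch script with #SBATCH directives is written out and handed to the standard sbatch CLI, which queues the job for the scheduler. It assumes a cluster where Slurm is already installed.

```python
# Generic example of submitting a batch job to Slurm (not Omnia-specific).
# Assumes the standard Slurm "sbatch" CLI is available on the login node.
import subprocess
from pathlib import Path

job_script = """\
#!/bin/bash
#SBATCH --job-name=train-model
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1
#SBATCH --time=02:00:00

python train.py --epochs 10
"""

path = Path("train_model.sbatch")
path.write_text(job_script)

# sbatch queues the job; Slurm decides when and where it actually runs.
result = subprocess.run(
    ["sbatch", str(path)],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. "Submitted batch job 12345"
```

Omnia’s role, per Dell, is to automate the cluster setup (OS provisioning, Slurm or Kubernetes installation) that a job like this takes for granted.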

Omnia automatically imprints software solutions onto servers — specifically networked Linux servers — based on the particular use case. For example, these might be HPC simulations, neural networks for AI, or in-memory graphics processing for data analytics. Dell claims that Omnia can reduce deployment time from weeks to minutes.

“As AI with HPC and data analytics converge, storage and networking configurations have remained in silos, making it challenging for IT teams to provide required resources for shifting demands,” Peter Manca, senior VP at Dell Technologies, said in a press release. “With Dell’s Omnia open source software, teams can dramatically simplify the management of advanced computing workloads, helping them speed research and innovation.”

Above: A flow chart describing how Omnia works. Image Credit: Omnia

Omnia can build clusters that use Slurm or Kubernetes for workload management, and it tries to leverage existing projects rather than reinvent the wheel. The software automates the cluster deployment process, starting with provisioning the operating system to servers, and can install Kubernetes, Slurm, or both along with additional drivers, services, libraries, and apps.

“Engineers from ASU and Dell Technologies worked together on Omnia’s creation,” Douglas Jennewein, ASU senior director of research computing, said in a statement. “It’s been a rewarding effort working on code that will simplify the deployment and management of these complex mixed workloads, at ASU and for the entire advanced computing industry.”

In a related announcement today, Dell said it’s expanding its HPC on-demand offering for VMware environments to include VMware Cloud Foundation, VMware Cloud Director, and VMware vRealize Operations. Beyond this, the company now offers Nvidia A30 and A10 Tensor Core GPUs as options for its Dell EMC PowerEdge R750, R750xa, and R7525 servers.

NeuReality emerges from stealth to accelerate AI workloads at scale

NeuReality, a Caesarea, Israel-based startup developing high-performance AI hardware for cloud datacenters and edge nodes, today emerged from stealth with $8 million. The company, which counts among its board of directors Naveen Rao, former GM of Intel’s AI product group, says the funding will lay the groundwork for the launch of its first product later in 2021.

Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington’s Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.
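
Estimates like these are typically back-of-the-envelope arithmetic: accelerator count times wall-clock hours times the hourly price per device. A minimal sketch of that calculation, with illustrative placeholder numbers rather than the actual figures behind the studies cited above:

```python
# Back-of-the-envelope training cost estimate: devices x hours x price.
# All numbers below are illustrative placeholders, not the actual figures
# behind the estimates cited above.
def training_cost(num_accelerators: int, hours: float, price_per_hour: float) -> float:
    """Total cost = devices * wall-clock hours * hourly price per device."""
    return num_accelerators * hours * price_per_hour

# e.g. 64 GPUs running for two weeks at $2.50 per GPU-hour
cost = training_cost(num_accelerators=64, hours=14 * 24, price_per_hour=2.50)
print(f"${cost:,.0f}")  # $53,760
```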

NeuReality aims to solve these scalability challenges with purpose-built computing platforms for recommender systems, classifiers, digital assistants, language-based applications, and computer vision. The company claims its products, which will be made available as a service, can enable customers to scale AI utilization while cutting costs, lowering energy consumption, and shrinking their infrastructure footprint. In fact, NeuReality claims it can deliver 30 times the system cost benefit of today’s state-of-the-art CPU-centric servers.

“Our mission is to deliver AI users best in class system performance while significantly reducing cost and power,” CEO and cofounder Moshe Tanach told VentureBeat via email. “In order to make AI accessible to every organization, we must build affordable infrastructure that will allow innovators to deploy AI-based applications that cure diseases, improve public safety, and enhance education. NeuReality’s technology will support that growth while making the world smarter, cleaner, and safer for everyone. The cost of the AI infrastructure and AI-as-a-service will no longer be limiting factors.”

NeuReality was cofounded in 2019 by Tanach, Tzvika Shmueli, and Yossi Kasus. Tanach previously served as director of engineering at Marvell and Intel and AVP of R&D at DesignArt-Networks, which was acquired by Qualcomm in 2012. Shmueli is the former VP of backend at Mellanox Technologies and VP of engineering at Habana Labs. And Kasus held a senior director of engineering role at Mellanox and was head of very large-scale integrations at EZChip.

NeuReality has competition in OctoML, a startup that similarly purports to automate machine learning optimization with proprietary tools and processes. Other competitors include Deci and DeepCube, which describe their solutions as “software-based inference accelerators,” and Neural Magic, which redesigns AI algorithms to run more efficiently on off-the-shelf processors by leveraging the chips’ available memory. Yet another rival, DarwinAI, uses what it calls generative synthesis to ingest models and spit out highly optimized versions.

Tanach says the company is currently active in three main lanes: (1) public and private cloud datacenter companies, (2) solution providers that build datacenter and large-scale software solutions for enterprises, financial institutions, and government organizations, and (3) OEMs and ODMs that build servers and edge node solutions.

“There are no such solutions in the market today. The competition is split between various silicon and system products. The most obvious ones are the inference deep-learning accelerators [from] companies such as Nvidia, Intel, and startups that are competing in that market. However, these competitors have only part of the solution both from a system perspective and from an AI compute capabilities standpoint,” Tanach said. “[We] will release more information about the solution later this year when its first platform is ready. For now, the company can only share that the total cost of ownership of its AI compute service will be more efficient by an order of magnitude compared to existing solutions.”

Cardumen Capital, OurCrowd, and Varana Capital led today’s seed round, the company’s first public investment. NeuReality has 18 employees.

Granulate raises $30 million for AI that optimizes server workloads

Granulate, a startup developing a platform that optimizes computing infrastructure, today announced it has raised $30 million, bringing its total raised to $45.6 million. The company claims the cash infusion will help it develop products that reduce the time engineers spend tuning enterprise system performance, freeing them up to pursue potentially more creative, impactful, and challenging projects.

Applying AI to datacenter operations isn’t a new idea — IDC predicts that 50% of IT assets in datacenters will run autonomously using embedded AI functionality by 2022. To this end, Concertio, which recently raised $4.2 million, provides AI-powered system optimization tools for boosting hardware and software performance. Facebook has said it employs AI internally to speed up searches for optimal server settings. And IBM offers a tool called Data Center Advisor with Watson that calculates datacenter health scores and anticipates possible issues or failures.

Granulate’s solution, by contrast, uses software agents that can be installed on any Linux server in datacenters or cloud environments, including virtual machines. These AI-powered agents adapt to operating systems and kernels, prioritizing threads while taking into account each request’s processing stage and employing a network stack that enables parallelism. Granulate analyzes usage patterns and sizes to tailor allocations and memory for each app, autonomously crafting congestion-control prioritizations between connections to optimize throughput for the current workload.
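
As a rough illustration of the general idea, an agent that samples load and re-prioritizes work, here is a toy user-space sketch built on the psutil library. It is not Granulate's implementation, which the company says operates at the OS and kernel level.

```python
# Toy illustration of a user-space "optimization agent": sample CPU usage
# and deprioritize the busiest process so latency-sensitive work runs first.
# A drastic simplification of what Granulate describes, included only to
# make the concept concrete. Requires the third-party psutil package;
# renicing other users' processes needs root.
import time
import psutil

def _cpu(proc: psutil.Process) -> float:
    try:
        return proc.cpu_percent(None)  # CPU percent since the previous call
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        return 0.0

def agent_tick() -> None:
    procs = list(psutil.process_iter(["pid", "name"]))
    for p in procs:
        _cpu(p)                  # first call primes the per-process counters
    time.sleep(1.0)              # sample over a one-second window
    busiest = max(procs, key=_cpu, default=None)
    if busiest is None:
        return
    try:
        busiest.nice(10)         # lower its scheduling priority
        print(f"Deprioritized {busiest.info['name']} (pid {busiest.info['pid']})")
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass

agent_tick()
```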

Above: Granulate’s analytics dashboard. Image Credit: Granulate

Granulate’s suite, which works with existing monitoring tools like Prometheus, AppDynamics, New Relic, Datadog, Dynatrace, and Elastic, is installed in dozens of production environments and on tens of thousands of servers, including those owned by Coralogix and AppsFlyer. The company claims its solution improves the throughput of machines by up to 5 times, leading to compute cost savings of up to 60% and a 40% reduction in latency.

Startapp, a mobile data platform with over 1.5 billion monthly active users and 800,000 partner apps, reports that Granulate achieved a 30% reduction in average latency and a 25% processor utilization reduction, netting a 33% compute cost reduction. Another customer — advertising technology company Bigabid — says it managed to reduce compute costs by 60% within 15 minutes of deploying Granulate.

In October, as part of a new partnership with Microsoft Azure, Granulate launched a real-time continuous optimization solution for Azure cloud and hybrid environments in the Microsoft Azure Marketplace. Earlier in the year, Granulate announced support for software-as-a-service contracts in AWS Marketplace, enabling Amazon Web Services customers to prepay for Granulate based on expected usage tiers through contracts up to two years long.

Granulate claims that in the past 10 months, its 21 customers have saved over 3 billion hours of core usage, with the number of CPU cores under management rising by over 10 times to over 300,000 cores. It expects 2021 revenue to hit $5 million, up from between $1 million and $2 million in 2020.

“The pandemic created a large market opportunity for us, as companies of all shapes and sizes began implementing solutions that could reduce costs. Our unique optimization solution enables companies to achieve significant computing cost reduction and performance improvement on any infrastructure or environment with minimal effort or code changes,” CEO Asaf Ezra, who cofounded Granulate in 2018 with Tal Saig, told VentureBeat via email. “We look forward to continuing to help customers of all sizes and industries as we develop new solutions to drive further improvements in computing efficiency.”

Granulate’s series B round was led by Red Dot Capital Partners with the participation of Insight Partners, TLV Partners, and Hetz Ventures. Dawn Capital also joined as a new investor.
