
Evisort embeds AI into contract management software, raises $100M



For lawyers and the organizations that employ them, time is quite literally money. The business of contract management software is all about helping to optimize that process, reducing the time and money it takes to understand and manage contracts. 

As it turns out, there is big money in the market for contract management software as well. An increasingly integral part of the business is the use of AI-powered automation. To that end, today contract management vendor Evisort announced that it raised $100 million in a series C round of funding, bringing total funding to date up to $155.6 million. 

Evisort was founded in 2016 by a team of Harvard Law and MIT researchers and raised a $15 million series A back in 2019. The founders discovered early on that there was a market opportunity for using AI to improve contract workflows within organizations.

“If you think about it, every time a company sells something, buys something or hires somebody, there’s a contract,” Evisort cofounder and CEO Jerry Ting told VentureBeat. “Contract data really is everywhere.”

Contract management is a growing market

Evisort fits squarely into a market that analysts often refer to as contract lifecycle management (CLM). Gartner Peer Insights lists at least twenty vendors in the space, which includes both startups and more established vendors.

Among the large vendors in the space is DocuSign, which entered the market in a big way in 2020 with its $188 million acquisition of AI contract discovery startup Seal Software. Startups are also making headway, with SirionLabs announcing this week that it has raised $85 million to help add more AI and automation to its contract management platform.

The overall market for contract lifecycle management is set to grow significantly in the coming years, according to multiple reports. Future Market Insights estimates that the global CLM market generated $936 million in revenue in 2021 and expects it to reach $2.4 billion by 2029. MarketsandMarkets is more bullish, forecasting that the CLM market will reach $2.9 billion as soon as 2024.

Ting noted that while every organization has contracts, many still manage them with spreadsheets and email rather than a dedicated digital system. That's one of the key reasons he expects significant growth in the CLM space as organizations realize there is a better way to handle contracts.

Integrating AI to advance the state of contract management

Evisort’s flagship product is a software-as-a-service (SaaS) platform that uses AI to read the contracts users upload into it.

Ting explained that his company developed its own algorithm to help improve natural language processing and classification of important areas in contracts. Those areas could include terms of a deal, such as deadlines, rates and other conditions of importance for a lawyer who is analyzing a contract. Going a step further, Evisort’s AI will now also analyze the legal clauses in an agreement.

“We can actually pull the pertinent data out of a contract, instead of having a human have to type it into a different system,” Ting said. 
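The structured extraction Ting describes can be approximated, at its simplest, with ordinary pattern matching before any trained model is involved. The sketch below is a hypothetical illustration of the kind of fields such a system pulls out; the regexes, sample text, and function name are invented for this example, and it is in no way Evisort's actual NLP pipeline:

```python
import re

def extract_terms(contract_text: str) -> dict:
    """Pull a few common contract fields with regular expressions.

    A real system like Evisort's uses trained NLP models; this sketch
    only illustrates the kind of structured output such a system emits.
    """
    # Dates written like "March 1, 2022"
    dates = re.findall(
        r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*"
        r"\s+\d{1,2},\s+\d{4}",
        contract_text,
    )
    # Dollar amounts like "$10,000.00"
    amounts = re.findall(r"\$[\d,]+(?:\.\d{2})?", contract_text)
    # A term length like "term of 24 months"
    term = re.search(r"term of (\d+)\s+(?:months|years)",
                     contract_text, re.IGNORECASE)
    return {
        "dates": dates,
        "amounts": amounts,
        "term_length": term.group(1) if term else None,
    }

sample = ("This Agreement is effective as of March 1, 2022 and has a "
          "term of 24 months. Licensee shall pay $10,000.00 per quarter.")
print(extract_terms(sample))
```

The appeal of an AI-driven approach over patterns like these is that clause language varies wildly between contracts, which is exactly where classification models outperform hand-written rules.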

Once all the contract data is understood and classified, the next challenge that faces organizations is what to do with all the data. That’s where the other key part of Evisort’s platform comes into play, with a no-code workflow service. The basic idea with the workflow service is to help organizations collaborate on contract activities, including analysis and approvals.

What $100M of investment into AI will do for Evisort

With the new funding, Ting said that his company will continue to expand its go-to market and sales efforts. Evisort will also be investing in new AI capabilities that Ting hopes will democratize access to AI for contract management.

To date, Ting explained, Evisort’s AI has worked somewhat autonomously, based on definitions that Evisort creates. With future releases of the platform, he wants users to be able to adjust and train the algorithm for their own specific, customized needs. The plan is to pair that capability with Evisort’s no-code tooling, making it easy for regular users, not just data scientists, to build AI capabilities that better understand and manage contracts.

“I think the 100 million dollar mark tells the market, hey, this company is a serious player and they’re here to stay,” Ting said. “It’s a scale-up, not a startup.”

The new funding round was led by TCV, with participation from Breyer Capital as well as existing investors Vertex Ventures, Amity Ventures, General Atlantic and Microsoft’s venture capital fund M12.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.



OpenAI can translate English into code with its new machine learning software Codex

AI research company OpenAI is releasing a new machine learning tool that translates the English language into code. The software is called Codex and is designed to speed up the work of professional programmers, as well as help amateurs get started coding.

In demos of Codex, OpenAI shows how the software can be used to build simple websites and rudimentary games using natural language, as well as translate between different programming languages and tackle data science queries. Users type English commands into the software, like “create a webpage with a menu on the side and title at the top,” and Codex translates this into code. The software is far from infallible and takes some patience to operate, but could prove invaluable in making coding faster and more accessible.
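For readers curious what that interaction looks like programmatically, here is a hedged sketch of a completion request of the kind the private-beta API accepted. The engine name "davinci-codex", the endpoint path, and the parameter choices reflect how OpenAI's API was documented around Codex's launch; treat all of them as illustrative assumptions rather than a current, supported interface:

```python
import json
import os
import urllib.request

def build_codex_request(instruction: str) -> dict:
    """Assemble a completion request body for a Codex-style API.

    Framing the English command as a code comment was a common way
    to prompt Codex for code; parameters here are illustrative.
    """
    return {
        "prompt": f"# {instruction}\n",  # English command, framed as a comment
        "max_tokens": 256,
        "temperature": 0,                # deterministic output for code
        "stop": ["\n\n"],
    }

payload = build_codex_request(
    "create a webpage with a menu on the side and title at the top")

api_key = os.environ.get("OPENAI_API_KEY")
if api_key:  # only reach out to the network when a key is configured
    req = urllib.request.Request(
        "https://api.openai.com/v1/engines/davinci-codex/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode())
```

The model would respond with generated code continuing from the comment, which the demo UI then rendered and ran.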

“We see this as a tool to multiply programmers,” OpenAI’s CTO and co-founder Greg Brockman told The Verge. “Programming has two parts to it: you have ‘think hard about a problem and try to understand it,’ and ‘map those small pieces to existing code, whether it’s a library, a function, or an API.’” The second part is tedious, he says, but it’s what Codex is best at. “It takes people who are already programmers and removes the drudge work.”

OpenAI used an earlier version of Codex to build a tool called Copilot for GitHub, a code repository owned by Microsoft, which is itself a close partner of OpenAI. Copilot is similar to the autocomplete tools found in Gmail, offering suggestions on how to finish lines of code as users type them out. OpenAI’s new version of Codex, though, is much more advanced and flexible, not just completing code, but creating it.

Codex is built on top of GPT-3, OpenAI’s language generation model, which was trained on a sizable chunk of the internet and as a result can generate and parse the written word in impressive ways. One application users found for GPT-3 was generating code, but Codex improves upon its predecessor’s abilities and is trained specifically on open-source code repositories scraped from the web.

This latter point has led many coders to complain that OpenAI is profiting unfairly from their work. OpenAI’s Copilot tool often suggests snippets of code written by others, for example, and the entire knowledge base of the program is ultimately derived from open-source work, shared to benefit individuals, not corporations. The same criticisms will likely be leveled against Codex, though OpenAI says its use of this data is legally protected under fair use.

When asked about these complaints, Brockman responds: “New technology is coming, we do need this debate, and there will be things we do that the community has great points on and we will take feedback and do things differently.” He argues, though, that the wider coding community will ultimately benefit from OpenAI’s work. “The real net effect is a lot of value for the ecosystem,” says Brockman. “At the end of the day, these types of technologies, I think, can reshape our economy and create a better world for all of us.”

Codex will also certainly create value for OpenAI and its investors. Although the company started life as a nonprofit lab in 2015, it switched to a “capped profit” model in 2019 to attract outside funding. Codex is initially being released as a free API, but OpenAI will start charging for access at some point in the future.

OpenAI says it doesn’t plan to build end-user tools with Codex itself, reasoning that it’s better placed to improve the core model. “We realized if we pursued any one of those, we would cut off any of our other routes,” says Brockman. “You can choose as a startup to be best at one thing. And for us, there’s no question that that’s making better versions of all these models.”

Of course, while Codex sounds extremely exciting, it’s difficult to judge the full scope of its capabilities before real programmers have got to grips with it. I’m no coder myself, but I did see Codex in action and have a few thoughts on the software.

OpenAI’s Brockman and Codex lead Wojciech Zaremba demonstrated the program to me online, using Codex to first create a simple website and then a rudimentary game. In the game demo, Brockman found a silhouette of a person on Google Images then told Codex to “add this image of a person from the page” before pasting in the URL. The silhouette appeared on-screen and Brockman then modified its size (“make the person a bit bigger”) before making it controllable (“now make it controllable with the left and right arrow keys”).

It all worked very smoothly. The figure started shuffling around the screen, but we soon ran into a problem: it kept disappearing off-screen. To stop this, Brockman gave the computer an additional instruction: “Constantly check if the person is off the page and put it back on the page if so.” This stopped it from moving out of sight, but I was curious how precise these instructions need to be. I suggested we try a different one: “Make sure the person can’t exit the page.” This worked, too, but for reasons neither Brockman nor Zaremba can explain, it also changed the width of the figure, squashing it flat on-screen.
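For comparison, here is roughly the clamping logic a programmer would write by hand for Brockman's instruction; the demo suggests Codex generated something equivalent from the English command alone. The function name and page dimensions are invented for this sketch:

```python
def keep_on_page(x: float, width: float, page_width: float) -> float:
    """Clamp a sprite's horizontal position so it never leaves the page.

    This mirrors the instruction "check if the person is off the page
    and put it back on the page if so", written as conventional code.
    """
    if x < 0:
        return 0.0                    # slid off the left edge
    if x + width > page_width:
        return page_width - width     # slid off the right edge
    return x                          # already on the page

print(keep_on_page(-15.0, 50.0, 800.0))   # → 0.0
print(keep_on_page(790.0, 50.0, 800.0))   # → 750.0
print(keep_on_page(300.0, 50.0, 800.0))   # → 300.0
```

The interesting part of the demo is not the logic, which is trivial, but that two differently phrased English commands mapped to it, with one of them also producing the unexplained side effect on the figure's width.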

“Sometimes it doesn’t quite know exactly what you’re asking,” laughs Brockman. He has a few more tries, then comes up with a command that works without this unwanted change. “So you had to think a little about what’s going on but not super deeply,” he says.

This is fine in our little demo, but it says a lot about the limitations of this sort of program. It’s not a magic genie that can read your brain, turning every command into flawless code — nor does OpenAI claim it is. Instead, it requires thought and a little trial and error to use. Codex won’t turn non-coders into expert programmers overnight, but it’s certainly much more accessible than any other programming language out there.

OpenAI is bullish about the potential of Codex to change programming and computing more generally. Brockman says it could help solve the programmer shortage in the US, while Zaremba sees it as the next step in the historical evolution of coding.

“What is happening with Codex has happened before a few times,” he says. In the early days of computing, programming was done by creating physical punch cards that had to be fed into machines, then people invented the first programming languages and began to refine these. “These programming languages, they started to resemble English, using vocabulary like ‘print’ or ‘exit’ and so more people became able to program.” The next part of this trajectory is doing away with specialized coding languages altogether and replacing them with English-language commands.

“Each of these stages represents programming languages becoming more high level,” says Zaremba. “And we think Codex is bringing computers closer to humans, letting them speak English rather than machine code.” Codex itself can speak more than a dozen coding languages, including JavaScript, Go, Perl, PHP, Ruby, Swift, and TypeScript. It’s most proficient, though, in Python.

Codex also has the ability to control other programs. In one demo, Brockman shows how the software can be used to create a voice interface for Microsoft Word. Because Word has its own API, Codex can feed it instructions in code created from the user’s spoken commands. Brockman copies a poem into a Word document and then tells Word (via Codex) to first remove all the indentations, then number the lines, then count the frequency of certain words, and so on. It’s extremely fluid, though hard to tell how well it would work outside the confines of a pre-arranged demo.

If it succeeds, Codex might not only help programmers but become a new interface between users and computers. OpenAI says it’s tested Codex’s ability to control not only Word but other programs like Spotify and Google Calendar. And while the Word demo is just a proof of concept, says Brockman, Microsoft is apparently already interested in exploring the software’s possibilities. “They’re very excited about the model in general and you should expect to see lots of Codex applications be created,” he says.



Simpro raises $350M as demand grows for field service automation software



Field service management, which refers to the management of jobs performed in the field (e.g., telecom equipment repair), faces continued challenges brought on by the pandemic. While the number of customer service inquiries has increased as enterprises have adopted remote work arrangements, worker availability has decreased. Forty-seven percent of field service companies say that they’re having trouble finding enough quality technicians and drivers to meet business goals, according to a Verizon survey. The shortfall in the workforce has increased the burden on existing staff, who’ve been asked to do more with fewer resources.

Against this backdrop, Simpro, a field service management software company based in Brisbane, Australia, today announced that it raised $350 million from K1 Investment Management with participation from existing investor Level Equity. The new funding brings Simpro’s total capital raised to nearly $400 million, which CEO Sean Diljore says will be put toward product development and customer support with a particular focus on global trade and construction industries.

Simpro also revealed today that it acquired ClockShark, a time-sheeting and scheduling platform, as well as AroFlo, a job management software provider. The leadership teams of Simpro, ClockShark, and AroFlo will operate independently, Diljore says, including continued work on existing services.


Above: Simpro’s maintenance-planning dashboard.

“This investment marks the next stage of Simpro’s exciting growth journey. Our mission is to build a world where field service businesses can thrive,” Diljore said in a statement. “We’re thrilled to welcome ClockShark and AroFlo to the Simpro family. Both companies are leaders in their spaces and have incredibly valuable product offerings that will benefit our combined customer bases and help our customers increase revenue. We look forward to growing together and building a range of solutions for the field service and construction industries.”

Managing field service workers

Field service workers feel increasingly overwhelmed by the number of tasks employers are asking them to complete. According to a study by The Service Council, 75% of field technicians report that work has become more complex and that more knowledge — specifically more technical knowledge — is needed to perform their jobs now versus when they started in field service. Moreover, 70% say that both customer and management demands have intensified during the health crisis.

Simpro, which was founded in 2002 by Curtis Thomson, Stephen Bradshaw, and Vaughan McKillop, claims to offer a solution in software that eases the burden on field workers and their managers. The company’s platform provides quoting, job costing, scheduling, and invoicing tools in addition to capabilities for reporting, billing, testing assets, and planning preventative maintenance.

Bradshaw, a former electrical contractor, teamed up with McKillop, an engineering student, in the early 2000s to build the prototype for Simpro. Working out of Bradshaw’s garage, they started with the development of job list functionality before adding new features, including a scheduling tool for allocating resources.

Today, Simpro supports over 5,500 businesses in the security, plumbing, electrician, HVAC, and solar and data networking industries. It has more than 200,000 users worldwide and more than 400 employees in offices across Australia, New Zealand, the U.K., and the U.S.

An expanding product

With Simpro, which integrates with existing software including accounting and HR analytics software, customers can use digital templates to build estimates and convert quotes into jobs. From a dashboard, they can schedule field service workers based on availability and job status, plus perform inventory tracking, connect materials to jobs, and send outstanding invoices.

Diljore expects the purchases of ClockShark and AroFlo to bolster Simpro’s suite in key, strategic ways. ClockShark, a Chico, California-based company founded by brothers Cliff Mitchell and Joe Mitchell in 2013, delivers an app that lets teams clock in and out while recording the timesheet data needed for payroll and job costing. Ringwood, Australia-based AroFlo, on the other hand, provides job management features including field service automation, work scheduling, geofencing, and GPS tracking.
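A circular geofence of the sort AroFlo advertises reduces, at its core, to a great-circle distance check between a device and a job site. The following is a generic sketch of that check using the haversine formula; the coordinates and radius are invented, and this is not AroFlo's implementation:

```python
import math

def within_geofence(lat: float, lon: float,
                    center_lat: float, center_lon: float,
                    radius_m: float) -> bool:
    """Haversine check: is a device inside a circular geofence of
    radius_m metres around a job site?

    Generic illustration of how a geofencing feature might flag
    arrivals and departures; real products layer scheduling and
    notification logic on top of a check like this.
    """
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat), math.radians(center_lat)
    dphi = math.radians(center_lat - lat)
    dlmb = math.radians(center_lon - lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    distance = 2 * r * math.asin(math.sqrt(a))
    return distance <= radius_m

# A point ~111 m north of the centre is inside a 200 m fence...
print(within_geofence(-27.4710, 153.0260, -27.4700, 153.0260, 200))
# ...but outside a 50 m fence.
print(within_geofence(-27.4710, 153.0260, -27.4700, 153.0260, 50))
```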


AroFlo and ClockShark claim to have over 2,200 and 8,000 customers, respectively. AroFlo’s business is largely concentrated in Australia and New Zealand, where it says that over 25,000 workers use its platform for asset maintenance, compliance, and inventory across plumbing, electrical, HVAC, and facilities management.

Somewhat concerningly from a privacy standpoint, AroFlo offers what it calls a “driver management” feature that uses RFID technology as a way of logging which field service workers are driving which work vehicles. Beyond this, AroFlo allows companies to track the current and historical location of devices belonging to their field workers throughout the workday.

While no federal U.S. statutes restrict the use of GPS by employers nor force them to disclose whether they’re using it, workers have mixed feelings. A survey by TSheets showed that privacy was the third-most important concern of field service workers who were aware that their company was tracking their GPS location.

In its documentation, AroFlo suggests — but doesn’t require — that employers “speak to [field] users about GPS tracking.”

“AroFlo GPS lets you monitor your field technicians across the entire day,” the company writes on its website. “You’ll always know where they are, what they’re working on, and when they finish.”

A spokesperson told VentureBeat via email: “Simpro will continue offering GPS services and also has its own vehicle GPS tracking add-on, SimTrac. Implementation of GPS fleet tracking can help reduce risks, remain compliant with licenses and vehicle upkeep, and reduce costs in the business. It also benefits the technicians by improving their safety, spending less time in traffic and improving time management. Overall, GPS tracking provides improved visibility of staff and understanding of their location, introduces opportunities to reduce costs associated with travel, schedule smarter and even improve driver safety (by limiting their need to race across to another side of town to complete a job).”

A growing field

The field service management market is rapidly expanding, expected to climb from $2.85 billion in value in 2019 to $7.10 billion in 2026. While as many as 25% of field service organizations are still using spreadsheets for job scheduling, an estimated 48% were using field management software as of 2020, Fieldpoint reports. Customer demand is one major adoption driver. According to data from ReachOut, 89% of customers want to see “modern, on-demand technology” applied to their technician scheduling, and nearly as many would be willing to pay top dollar for it.

“The pandemic made many business owners realize how crucial it is to have the right technology in place for remote work. Trades businesses couldn’t afford to abandon projects or lose out on service and maintenance calls because of delayed response times or drawn-out time to complete,” Diljore told VentureBeat via email. “For these businesses, cloud-based software became a necessity for survival when previously it was a ‘nice to have.’”

Simpro competes with Zinier, which last January raised $90 million to automate field service management via its AI platform. The company has another rival in Workiz, a field service management and communication startup, as well as augmented reality- and AI-powered work management platform TechSee.

According to Tracxn, of the over 3,400 companies developing “field force automation” solutions (which include customer service tracking, order management, routing optimization, and work activity monitoring), more than 700 attracted a cumulative $5.8 billion from investors from 2018 to 2020.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Mabl aims to automate software testing, nabs $40M



In the enterprise, the pandemic brought about sharp increases in software release cadences and automation. That’s thanks in part to the adoption of low-code development platforms, which abstract away the programming traditionally required to build apps and software. Respondents to a 2021 Statista survey report that low-code tools have enabled them to release apps 40% to 60% faster, while a full 60% of developers tell GitLab that they’re committing code two times faster and bringing higher-impact technologies into their processes.

However, the accelerated pace of development has introduced new blockers, like a backlog of testing and code reviews. GitLab reports that only 45% of developers review code weekly, with 22% opting to do it every other week instead. It’s not that the pace of innovation in software testing is slowing — it isn’t — but that aspects like installation, maintenance, security, and compatibility problems remain hurdles.

The urgency of the challenge — insufficient testing can lead to security vulnerabilities — has put a spotlight on startups like Mabl, which today announced that it raised $40 million in a series C investment led by Vista Equity Partners. Founded in 2017 by Dan Belcher and Izzy Azeri, Mabl is among a crop of newer companies developing platforms that enable software teams to create, run, manage, and automate software and API tests.

“Despite the fact that the testing category is massive, no software-as-a-service leader had emerged in the space [prior to Mabl] … This void left enterprises to choose between legacy testing solutions that were incredibly complex and expensive and self-managed open source frameworks coupled with bespoke infrastructure that were inaccessible to quality assurance professionals,” Belcher told VentureBeat via email. “Mabl envisioned and delivered an end-to-end testing solution for the enterprise that would combine a low-code framework, a software-as-a-service delivery model, deep integration with enterprise environments and workflows, data analytics, and machine intelligence to disrupt the testing industry.”

Automating testing at scale

Belcher and Azeri are second-time founders, having previously launched Stackdriver, a service that provides performance and diagnostics data to public cloud users. Stackdriver was acquired by Google in 2014 for an undisclosed sum, and recently became a part of Google Cloud’s operations suite.

With Mabl, Belcher and Azeri set out to build a product that lets developers orchestrate browser, API, and mobile and web app tests from a single location. Mabl’s low-code dashboard is designed to automate end-to-end tests locally or in the cloud, running tests with different permutations of data to strengthen individual test cases.
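Data-driven testing of this kind, where one scripted scenario is exercised across permutations of input data, can be sketched in a few lines. This toy runner is a generic illustration rather than Mabl's engine; the fake login function, usernames, and passwords are all invented:

```python
import itertools

def run_login_test(username: str, password: str) -> str:
    """Stand-in for an app under test: returns the UI outcome."""
    if not username or not password:
        return "validation-error"
    if password == "correct-horse":
        return "dashboard"
    return "login-failed"

# Exercise one test case across permutations of input data, the way
# a data-driven runner strengthens a single scripted test.
usernames = ["alice", ""]
passwords = ["correct-horse", "wrong", ""]

results = {
    (u, p): run_login_test(u, p)
    for u, p in itertools.product(usernames, passwords)
}
for case, outcome in results.items():
    print(case, "->", outcome)
```

The value of the permutation approach is that one authored test surfaces edge cases (empty fields, bad credentials) that would otherwise each need their own script.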

“Lack of effective test automation is a major problem in the software industry, because ineffective testing leads to dramatically lower team productivity due to rework, overhead, and a general bottleneck in throughput, as quality engineering cannot execute with the same velocity as development and operations. Likewise, ineffective testing leads to poor product quality — from not only a functional perspective but also in terms of user experience, performance, accessibility, and more,” Belcher added. “Low-code is critical because it democratizes testing — making it accessible to quality engineers, manual testers, developers, product owners, and others — rather than limiting efforts to a quality engineer or dev.”


Above: Mabl’s automated testing orchestration dashboard.

Image Credit: Mabl

Mabl can generate tests from the contents of emails and PDFs, adapting as the tested app’s UI evolves with development. An AI-driven screenshot comparison feature attempts to mimic real-life visual UI testing to help spot unwanted changes to the UI, while a link-crawling capability autonomously generates tests that cover reachable paths within the app — giving insight into broken links.

“There are a number of areas where we use machine intelligence to benefit customers. In particular … Mabl automatically trains and updates structural and performance models of page elements, and incorporates these models into the timing and logic of test execution,” Belcher explained. “Mabl [also leverages] visual anomaly detection that uses visual models to differentiate between expected changes resulting from dynamic data and dynamic elements from unexpected changes and defects.”
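A deliberately naive version of screenshot comparison helps clarify what the "visual models" Belcher mentions add. The sketch below just counts pixels whose grayscale value shifts beyond a tolerance; a system like the one he describes goes further by learning which regions are dynamic and should be ignored. All names and sample data here are invented:

```python
def changed_fraction(baseline, candidate, tolerance=10):
    """Fraction of pixels whose grayscale value moved more than
    `tolerance` between two same-sized screenshots (nested lists).

    Toy version of visual comparison: it treats every pixel equally,
    whereas learned visual models separate expected dynamic regions
    from genuine UI defects.
    """
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_a, row_b in zip(baseline, candidate)
        for a, b in zip(row_a, row_b)
        if abs(a - b) > tolerance
    )
    return changed / total

base = [[200, 200, 200], [200, 200, 200]]
new = [[200, 200, 200], [200, 40, 200]]   # one pixel changed sharply
print(changed_fraction(base, new))         # → 0.16666666666666666
```

With a flat threshold like this, any animated spinner or timestamp would trip the comparison, which is exactly the false-positive problem the anomaly-detection models are meant to solve.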

Mabl allows customers to update and debug tests without affecting master versions. API endpoints can be used to trigger Mabl tests, as well as plugins for CI/CD platforms including GitHub, Bitbucket Pipelines, and Azure Pipelines. On the analytics side, Mabl shows metrics quantifying how well tests cover an app, identifying gaps based on statistics and interactive elements on a page.

Growing trend

Mabl’s users include dev teams at Charles Schwab, ADP, Stack Overflow, and JetBlue, and the Boston, Massachusetts-based company expects to more than double its recurring revenue this year. But the company faces competition from Virtuoso, ProdPerfect, Testim, Functionize, Mesmer, and Sauce Labs, among others. MarketsandMarkets predicts that the global automation testing market will grow from $12.6 billion in 2019 to $28.8 billion by 2024.

Automated tests aren’t a silver bullet. In a blog post, Amir Ghahrai, lead test consultant at Amido, lays out a few of the common issues that can come up, like wrong expectations of automated tests and automating tests at the wrong layer of the app stack.


Still, a growing number of companies are adopting automated testing tools — especially in light of high-profile ransomware and software supply chain attacks. According to a LogiGear survey, 38% of developers have at least tried test automation — even if it failed in its first implementation. Another source estimates that 44% of IT companies automated 50% or more of all testing in 2020.

“Mabl has over 200 customers globally, which includes 10 of the Fortune 500 and 32 of the Fortune Global 2000 companies. We have active, paid users in over 60 countries,” Belcher said. “The Mabl community in Japan has more than doubled in the past six months — reaching nearly 1,500 members. The Japanese customer base has increased by 322% and revenue has increased by over 300% in the past year, now representing 10% of the company’s total global revenue.”

Existing investors Amplify Partners, Google Ventures, Presidio Capital, and CRV also participated in 55-employee Mabl’s round. It brings the startup’s total capital raised to over $77 million.




Engineering software startup nTopology lands $65M



NTopology, a company that develops software used in the design and manufacturing of 3D-printed parts and products, today announced that it raised $65 million in a series D round led by Tiger Global, bringing the company’s total financing to $135 million at a $400 million post-money valuation. Oldslip Group, Root Ventures, Canaan Partners, Haystack, and Insight Partners participated in the tranche, which cofounder and CEO Bradley Rothenberg says will be put toward expanding the types of apps nTopology serves in the product development process.

The pandemic accelerated digital transformation in the engineering sector. Shifts that likely would’ve happened over years instead happened within a matter of months, as engineers found themselves with more time to pilot different design software. In many respects, this was a positive development. According to a report by the Xpera Group, 95.5% of all data captured goes unused in the engineering and construction industry. In 2016, the World Economic Forum predicted that full-scale digitization within 10 years could lead to savings between $0.7 and $1.2 trillion in design, engineering, and construction and $0.3 and $0.5 trillion in construction operations.


Founded in 2015 by Bradley Rothenberg and Greg Schroy, nTopology’s software helps engineers create parts with functional requirements built in. The company’s tools give its users control over spatial variations of shape, allowing for a range of designs and analyses.

“I founded nTopology … as I set out to find a design software solution that was specifically tailored to the needs of advanced manufacturing,” Rothenberg told VentureBeat via email. “NTopology’s digital engineering software, which combines generative design (for example, lattices and topology optimization), workflow management, and production preparation, is bridging the limitations of existing computer-assisted design (CAD) programs for the advanced manufacturing sector.”
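The lattices and topology optimization Rothenberg mentions are typically built on implicit geometry, where a scalar field rather than a boundary mesh defines the part. The gyroid below is a textbook example of such a field, offered as a generic illustration of the representation, not nTopology's code; the shell thickness and grid resolution are arbitrary choices for this sketch:

```python
import math

def gyroid(x: float, y: float, z: float) -> float:
    """Implicit gyroid surface: f = 0 on the surface, and the sign
    of f tells you which side of the surface a point is on."""
    return (math.sin(x) * math.cos(y)
            + math.sin(y) * math.cos(z)
            + math.sin(z) * math.cos(x))

def inside_shell(x: float, y: float, z: float,
                 thickness: float = 0.3) -> bool:
    """Keep material only where the field is within +/- thickness/2,
    turning the zero surface into a thin-walled, printable lattice."""
    return abs(gyroid(x, y, z)) <= thickness / 2

# Sample a coarse grid over one periodic cell and report the volume
# fraction of material, the kind of quantity an engineer tunes when
# lightweighting a part.
n, count = 20, 0
for i in range(n):
    for j in range(n):
        for k in range(n):
            x = 2 * math.pi * (i + 0.5) / n
            y = 2 * math.pi * (j + 0.5) / n
            z = 2 * math.pi * (k + 0.5) / n
            if inside_shell(x, y, z):
                count += 1
print(f"material fraction: {count / n**3:.2f}")
```

Because the part is a function rather than a mesh, varying the thickness parameter spatially (say, thicker walls where stress is high) is a cheap field operation, which is what makes implicit modeling attractive for generative design.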

Augmented CAD

According to Rothenberg, the main verticals nTopology operates in are aerospace and defense, health care, automotive, and consumer applications. The company services thousands of users across over 300 customers including Ford, Lockheed Martin, Honeywell, Emerson, and Wilson, which use its modeling systems to design the components of their products.

In less than a year, nTopology claims to have doubled revenue and increased its headcount to 120. The company anticipates increasing its employee count by 150% by the end of 2022.

Looking ahead, Rothenberg says that nTopology will explore how AI and machine learning can assist engineers in discovering new product designs. “While advanced manufacturing opens a new and exciting design space for engineers, it doesn’t come for free,” he added. “One of the core uses of our new capital will be to expand our applications within our primary verticals and beyond as we head into Q1, 2022. The areas we plan to focus on include heat exchangers, medical implants, architectural materials, lightweighting, and industrial design.”

Boding well for nTopology and its competitors, which include Siemens NX, Catia, and Ansys, the global engineering software market is expected to exceed $46 billion by 2024.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

Repost: Original Source and Author Link


ControlUp lands $100M to help enterprise IT teams manage remote software



San Jose, California-based ControlUp, an IT infrastructure management, monitoring, and troubleshooting platform, today announced that it raised $100 million in a round co-led by K1 Investment Management and JVP, bringing its total raised to $140 million. CEO Asaf Ganot says that the investment will enable ControlUp to expand its employee headcount while supporting ongoing product development efforts.

“This injection of capital will accelerate our ability to help more enterprises open the door to the limitless possibilities of a simpler, more reliable work-from-anywhere experience,” Ganot said in a statement. “We give IT real-time visibility into system status, with the ability to resolve help desk calls faster, and even handle potential system issues before they happen. All this translates to fewer headaches, lower costs, higher productivity, and happier people.”

ControlUp’s tranche comes as IT teams struggle to contend with remote and hybrid work setups emerging during the pandemic. According to a PricewaterhouseCoopers survey, 17% of employers say that the shift to remote work hasn’t been successful for their company. Among the other headaches are security and governance vulnerabilities — 45% of professionals expect their company to suffer a data breach during the pandemic due to staff using personal devices that aren’t properly protected.

ControlUp aims to address the growing challenges with a software-as-a-service product that collects device metrics (e.g., CPU, RAM, bandwidth, and I/O usage; protocol latency; and app load time) to help customers troubleshoot and remediate software issues. The platform collects up to one year of virtual desktop interface, server, and device environment data, analyzing it to proactively warn of potential issues with the availability of enterprise resources including domain name servers, file shares, and print services.
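To make the idea concrete, here is a minimal, stdlib-only sketch of the kind of device-metric snapshot an endpoint agent might collect. The function name, field names, and metric subset are hypothetical illustrations, not ControlUp's actual schema or API:

```python
import os
import shutil
import time

def collect_snapshot(path="/"):
    """Collect a simple device-metric snapshot: disk usage and, where
    available, the 1-minute load average. A hypothetical subset of what
    an endpoint agent might report; not ControlUp's actual schema."""
    usage = shutil.disk_usage(path)
    return {
        "timestamp": time.time(),
        "disk_used_pct": round(100 * usage.used / usage.total, 1),
        # os.getloadavg is Unix-only, so fall back to None elsewhere
        "load_1min": os.getloadavg()[0] if hasattr(os, "getloadavg") else None,
    }

print(collect_snapshot())
```

A real agent would sample many more signals (protocol latency, app load time, I/O) on a schedule and ship them to a central store, but the shape of the data is the same: timestamped key-value snapshots per device.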

Device monitoring

ControlUp was founded in 2008 by Ganot and Yoni Avital, who began their careers at a Citrix services company implementing end-user computing projects. While there, they built tools that helped expose common technical problems in virtual desktop environments, which became the cornerstone of ControlUp’s current product offering.

ControlUp provides telemetry dashboards, updated every few seconds, that highlight and help to fix problems with virtual desktops and apps — for example, slow app response. IT admins can leverage search and grouping options to show resources as they change states or opt for automated actions and scripts that clean up temp directories, expand disk size, log off idle users, and more.
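A scripted remediation action of the sort described (cleaning up stale temp files) can be sketched in a few lines. This is an illustrative stand-in, with a hypothetical function name and a dry-run default, not ControlUp's actual scripting interface:

```python
import pathlib
import time

def clean_temp(dir_path, max_age_days=7, dry_run=True):
    """List (and, if dry_run is False, delete) files in dir_path older
    than max_age_days. Illustrative of a scripted remediation action;
    not ControlUp's actual API."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for p in pathlib.Path(dir_path).iterdir():
        if p.is_file() and p.stat().st_mtime < cutoff:
            stale.append(str(p))
            if not dry_run:
                p.unlink()  # actually remove the stale file
    return stale
```

Defaulting to a dry run mirrors how admins typically roll out automated actions: report first, then enable deletion once the matches look right.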

ControlUp’s “top insights” pane summarizes findings through widgets that spotlight anomalies, key performance indicators, and other metrics that could impact performance. The metrics are compared against both internal averages and a “global benchmark” consisting of anonymized data aggregated from all ControlUp’s enterprise customers. ControlUp uses the data to, among other things, provide AI-driven recommendations for increasing or decreasing assigned CPU and RAM to machines, and to answer granular questions like “Is my user’s profile load time phase long or short compared to other organizations with SSD storage?”
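The benchmark comparison amounts to asking whether a metric deviates meaningfully from a baseline. A simple standard-deviation check sketches the idea; this is an illustrative stand-in for whatever statistics ControlUp actually applies:

```python
from statistics import mean, stdev

def flag_anomaly(value, history, threshold=2.0):
    """Flag value if it deviates more than `threshold` standard
    deviations from the historical baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(value - mu) / sigma > threshold

# e.g. profile load times in seconds for comparable sessions
baseline = [1.1, 0.9, 1.0, 1.2, 1.0]
print(flag_anomaly(5.0, baseline))  # markedly slower than the baseline
```

Swapping in the "global benchmark" just means building `history` from anonymized peer data (say, other organizations on SSD storage) rather than the customer's own past values.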

Above: ControlUp’s monitoring dashboard.

“By analyzing data from tens of thousands of troubleshooting sequences and continuously improving its machine learning algorithms, [ControlUp] recommends the shortest drill down path to uncover the root cause of [a] problem,” the company says on its website. “ControlUp’s … real-time engine connects to a multitude of data sources using flexible and expandable data collectors that cover a wide array of architectures and technologies. It utilizes a high performance in-memory database in order to digest, associate, and correlate hundreds of thousands of records in a single node.”

Expanding market

A 2021 Omdia survey predicts that, going forward, only 24% of knowledge workers will be permanently based in an office and working from a single desk. This is likely to further strain IT departments already struggling to adapt to the new norm. According to Riverbed, 94% of companies experienced technology problems that impacted their business while employees worked remotely, particularly disconnections from corporate networks, slow file downloads, and long response times when loading apps.

Against this backdrop, business has been booming for 250-employee ControlUp, which says it’s seen 50% revenue growth and 67% growth in enterprise accounts year-over-year, with over 1 million new seats deployed around the world. While it competes with 1E, Nexthink, and Lakeside, ControlUp notes that it currently supports over 5 million devices across 1,500 customers including four of the top five U.S. health insurance companies and five of the top eight U.S. health care companies.

“The pandemic amplified the complexities of supporting employees when they started working from remote locations. These issues have been significant — and our solutions help,” Ganot told VentureBeat via email. “While [the pandemic] did not create the ‘work from anywhere’ trends, we have seen this experience accelerated. Enterprises across industries and around the world are looking closely at how they solve these problems. ControlUp is focused on helping companies give their people the freedom and flexibility to work from anywhere. We do this by empowering IT teams to optimize remote environments, prevent user downtime, and resolve issues faster. Ultimately, we enable businesses [to] deliver an outstanding digital employee experience.”



Security software company McAfee acquired for $14 billion

Security software company McAfee is going private via a buyout from an investor group in a deal valued at more than $14 billion, the company announced Monday. Bloomberg first reported last week that a likely deal was imminent.

McAfee was founded in 1987 by John McAfee and became known for its computer antivirus software. McAfee, the founder, left McAfee, the company, in 1994, and the company was acquired by Intel in 2010 for $7.68 billion. In 2014, Intel announced it was phasing out the McAfee brand name for the security software and rebranding it as “Intel Security.”

Last October, the company returned to the public stock market. McAfee said in the announcement that it will continue as a “pure-play consumer cybersecurity” company after selling its Enterprise business to private equity firm Symphony Technology Group for $4 billion in July.

The company said the deal announced Monday to sell to the investor group, which includes Advent International Corp., Permira Advisers, and Crosspoint Capital Partners, is expected to close in the first half of 2022.

John McAfee had led a troubled life after departing the company; in 2012, he fled to Guatemala as authorities in Belize sought to question him in connection with a neighbor’s murder. In March, he was charged with securities fraud over a “pump and dump” cryptocurrency scheme, where authorities said he and his bodyguard persuaded their Twitter followers to invest in certain coins, then sold their holdings in the currencies when the prices went up. McAfee was found dead in a prison cell in Barcelona in June, not long after Spain had approved extraditing him to the US to face tax fraud charges.

Repost: Original Source and Author Link


Linux Foundation to promote dataset sharing and software dev techniques

At the Linux Foundation Membership Summit this week, the Linux Foundation — the nonprofit tech consortium founded in 2000 to standardize Linux and support its growth — announced two new projects: Project OpenBytes and the NextArch Foundation. OpenBytes is an “open data community” as well as a new data standard and format primarily for AI applications, while NextArch — which is spearheaded by Tencent — is dedicated to building software development architectures that support a range of environments.

OpenBytes

Dataset holders are often reluctant to share their datasets publicly due to a lack of knowledge about licenses. In a recent Princeton study, the coauthors found that the opaqueness around licensing — as well as the creation of derivative datasets and AI models — can introduce serious ethical issues, particularly in the computer vision domain.

OpenBytes is a multi-organization effort charged with creating an open data standard, structure, and format with the goal of reducing data contributors’ liability risks, under the governance of the Linux Foundation. The format of data published, shared, and exchanged will be available on the project’s future platform, ostensibly helping data scientists find the data they need and making collaboration easier.

Linux Foundation senior VP Mike Dolan believes that if data contributors understand that their ownership of data is well-protected and that their data won’t be misused, more data will become accessible. He also thinks that initiatives like OpenBytes could save a large amount of capital and labor resources on repetitive data collection tasks. According to a CrowdFlower survey, data scientists spend 60% of their time cleaning and organizing data and 19% of their time actually collecting datasets.

“The OpenBytes project and community will benefit all AI developers, both academic and professional and at both large and small enterprises, by enabling access to more high-quality open datasets and making AI deployment faster and easier,” Dolan said in a statement.

Autonomous car company Motional (a joint venture of Hyundai and Aptiv), Predibase, Zilliz, Jina AI, and ElectrifAi are among the early members of OpenBytes.

NextArch

As for NextArch, it’s meant to serve as a “neutral home” for open source developers and contributors to build an architecture that can support compatibility between microservices. “Microservices” refers to a type of architecture that enables the rapid, frequent, and reliable delivery of large and complex apps.

Cloud-native computing, AI, the internet of things (IoT), and edge computing have spurred enterprise growth and digital investment. According to market research, the digital transformation market was valued at $336.14 billion in 2020 and is expected to grow at a compound annual growth rate of 23.6% from 2021 to 2028. But a lack of common architecture is preventing developers from fully realizing these technologies’ promises, Linux Foundation executive director Jim Zemlin asserts.
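As a rough sanity check on that forecast, compounding the 2020 figure at the stated CAGR over the eight years to 2028 gives the implied end-of-period market size:

```python
# Sanity-checking the projection: $336.14B in 2020 compounding at a
# 23.6% CAGR over the eight years from 2020 to 2028.
value_2020 = 336.14  # market size, billions of dollars
cagr = 0.236
projected_2028 = value_2020 * (1 + cagr) ** 8
print(round(projected_2028))  # on the order of $1.8 trillion (in $B)
```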

“Developers today have to make what feel like impossible decisions among different technical infrastructures and the proper tool for a variety of problems,” Zemlin said in a press release. “Every tool brings learning costs and complexities that developers don’t have the time to navigate, yet there’s the expectation that they keep up with accelerated development and innovation.”

Enterprises generally see great value in emerging technologies and next-generation platforms and customer channels. Deloitte reports that the implementation of digital technologies can help accelerate progress towards organizational goals such as financial returns, workforce diversity, and environmental targets by up to 22%. But existing blockers often prevent companies from fully realizing these benefits. According to a Tech Pro survey, buy-in from management and users, training employees on new technology, defining policies and procedures for governance, and ensuring that the right IT skillsets are onboard to support digital technologies remain challenges for digital transformation implementation.

Toward this end, NextArch aims to improve data storage, heterogeneous hardware, engineering productivity, telecommunications, and more through “infrastructure abstraction solutions,” specifically new frameworks, designs, and methods. The project will seek to automate operations and processes to “increase the autonomy of [software] teams” and create tools for enterprises that address the problems of productization and commercialization in digital transformation.

“NextArch … understands that solving the biggest technology challenges of our time requires building an open source ecosystem and fostering collaboration,” Dolan said in a statement. “This is an important effort with a big mission, and it can only be done in the open source community. We are happy to support this community and help build open governance practices that benefit developers throughout its ecosystem.”



Intel CTO Greg Lavender interview — Why chip maker is spending on both manufacturing and software



Intel has been on a spending spree ever since Pat Gelsinger returned to the company as CEO earlier this year. He pledged to spend $20 billion on U.S. factories and another $95 billion in Europe. Those expenses are scary to investors as they could take a toll on the chip giant’s bottom line, but Gelsinger said he hopes they will pay off over four or five years.

And Intel is making investments in other ways too. In June, Gelsinger brought aboard Greg Lavender, formerly of VMware, as chief technology officer and senior vice president and general manager of the Software and Advanced Technology Group.

I spoke with Lavender in an interview in advance of the online Intel Innovation event happening on October 27-28. In that event, a revival of the Intel Developer Forum that Gelsinger used to lead years ago, Intel will re-engage with developers.

The event will highlight not only what Intel is doing with its manufacturing recovery (after multiple years of delays and costly mistakes). It will also focus on software, such as Intel’s oneAPI technology. Lavender is tasking Intel’s thousands of software engineers with creating more sophisticated software that helps bring more value through a systems-focused approach, rather than just a chip-based approach.


We talked about a wide variety of subjects across the spectrum of technology. Here’s an edited transcript of our interview.

Above: Greg Lavender is CTO of Intel.

Image Credit: Intel

VentureBeat: Tell me more about yourself. This seems like a very different role for you.

Greg Lavender: I’ve been in the technology industry for a long time, working for hardware companies like Sun and Cisco. In the early days I was a network software engineer for 25 years, writing system software. Always working close to the metal. I have graduate degrees in engineering and computer science. We all get the same courses on Maxwell’s electromagnetic theory and physics. I’m a math geek. But I came up with the growth of the industry, right? Pat is three months older than me. Our careers have kind of tracked along. We’ve both known each other for not quite 14 years.

VentureBeat: What is the task that [Intel CEO] Pat Gelsinger gave you when he brought you aboard?

Lavender: We’ve known each other since I was running Solaris engineering. He was CTO at Intel. Intel launched the Nehalem platforms, if you remember back when that was their first server CPU. We were only shipping AMD Opteron, dual socket, dual core boxes at the time. Pat gave us some money to port it over to the Intel CPU chipsets. We got to know each other and built a trust relationship there. He obviously hired me into VMware and continued that relationship. He knows I’ve got that hardware and software background.

He surprised me when he called me up. I understood the CTO part, but then he also said I’d be the SVP GM of the software group. I said, “How big is that software group?” He said, “Well, we don’t have a software group. We have fragmented parts of software across the company.” In my first 120 days, about how long I’ve been here, I ran a defrag, a disk defrag, and pulled the other 6,000 person software organization together. Everything from firmware to BIOS to compilers to operating systems, all the Linux, Windows, Chrome, Android. All of our system software, all the security software.

I have a big team now. There’s other parts of software going on in the company, but I’m in the driver’s seat for the software strategy and ensuring the software quality for every hardware product we ship.

Above: Intel is focusing on oneAPI to make software creation easier.

Image Credit: Intel

VentureBeat: Is this a smaller percentage of the staff than it would have been in different years? There were things like Intel Architecture Labs and some of the investments that happened in the last decade way outside the chip space. Has that narrowed down again to a smaller percentage of the overall employees?

Lavender: We have a lot, and I’m hiring more. But I’d just say that Pat came in with his eight years at VMware. I was there for half of that. It’s a real software mindset, that the value of software is enabling the open source software ecosystem. Maybe we don’t need to directly monetize our software, right? We can monetize our very diverse platforms.

I’ve spent most of my time here pushing changes into the new compiler system. We just delivered the AMX accelerator code into the Linux kernel, so that when Sapphire Rapids comes out next year we already have the advanced matrix multiplier for machine learning and AI workloads in the Linux kernel. I have a compiler team — I’m sure you’re familiar with the LLVM compiler ecosystem, where all of our new compilers are built on LLVM. We can accelerate our GPUs, CPUs, and FPGAs. It’s a massive set of IP, and it’s IP we give away for free to enable our platforms. We’re contributing to PyTorch, TensorFlow, ONNX. We just updated Intel acceleration into TensorFlow 2.6. That had 8 million downloads just in Q3. We’re enabling the ecosystem for all the developers out there with these accelerated capabilities. We have our crypto library using OpenSSL, accelerated crypto as software.

I think Intel has just failed to tell everyone about all the cool stuff we’re doing. We talk about our chips and our hardware and our customers. We don’t talk about all this great software. We’ve pulled it all together into my org. And I have Intel Labs, 700 researchers at Intel Labs, with all our future software and AI and ML, as well as our quantum computing group. We have this neural computing chip. We just taped out the second version of it. We open-sourced the programming environment for it, called Lava. There were some articles about Loihi 2. That’s our neural processing chip.

VentureBeat: Is some of the investment in software more around the edges of what Intel does? Would that be harder, because there’s so much capital spending going into manufacturing now, with this recommitment to making sure the core manufacturing part of Intel was taken care of? Maybe that leaves less money for software investment.

Lavender: Our view is we need to prime the ecosystem. We need to be open, be trusted. We need to practice responsible AI in all the things we do with our software. My goal is to meet the developers where they are. Historically Intel wanted to capture the developers. I want to enable them and set them free, so that they have choice.

You may be familiar with the SYCL open source programming language, data parallel C++. It’s an extension to C++ for programming GPUs and FPGAs. We have a SYCL compiler built on LLVM. We make that freely available through our oneAPI ecosystem. We have a new website coming online next week, developer.intel.com, where you’ll find all these things. We’ve just been poor about letting the world know about what those investments have already paid for and delivered. Developers would be shocked to know how much of the open source technology they’re currently using has Intel free software in it. It gives them both a better TCO for running their workloads in the clouds, as well as the datacenters or on their laptops.

If anything is lacking, it’s efficient amplification and communication. Just telling everybody, “This is already here.” From my perspective, I just have to leverage it and go further up the stack. We’ve mostly just pushed out software that enables and tickles the hardware. But we’ve been quietly, or relatively quietly, sprinkling all of these accelerated capabilities in all the common open source environments. I mentioned PyTorch. We just don’t talk about it. What I have to change is marketing and communication. We’re going to do this at Intel.

That’s one of the major themes: engaging with the developer community and getting them access to all this cool technology so that they can choose which platforms they want to run on and get that enablement for free. They don’t have to do anything. Maybe set a flag or something. But they don’t have to do any new coding. As you well know, most developers — of 24 million developers, according to some recent data — are up the stack. If you look at the systems people, there’s maybe 1 million. There’s this big group of people in the middleware layer, the dev sec ops people. Maybe not the no-code/low-code developers, the top of the stack. But there are four million enterprise developers just on Red Hat. The fact that I’m pushing stuff into the new compiler ecosystem, pushing stuff into the Linux kernel, into Chrome, means all that technology will be there for all those enterprise developers. I can instantly enable 4 million developers for Sapphire Rapids or Ponte Vecchio GPU.

Above: Intel’s Ponte Vecchio is an amalgamation of graphics cores.

Image Credit: Intel

VentureBeat: If you think of things that Intel is getting back to, that maybe it used to do when it communicated through things like the Intel Developer Forum, are there things you expect will be reminders of that?

Lavender: Intel Developer Forum was one of the best tech conferences back when I was at Sun and Cisco. I think it stopped in, what, 2013? Intel Innovation is essentially a relaunch of that theme. “The geek is back,” as Pat would like to say. We were just rehearsing our dialogues for next week. I love it. We’ve grown up together in the industry. I was originally an assembly language programmer on the 8088 and the 8086. Pat and I cut our teeth on Intel as young kids. It’s just so great to be here together at this time given some of Intel’s missteps in the past. We’re in the driver’s seat, and we’re going to steer this massive company into the future.

All those investments we’ve talked about into our fabs and our foundry services business are part of the overall game plan. But if we build all these chips and then don’t have software to make it sing, what good is that? The software is what makes the hardware sing.

VentureBeat: What are some of the messages for people about how Intel has gotten over those missteps in things like the manufacturing process?

Lavender: Pat’s already been out communicating on that and what he’s doing, putting the company’s balance sheet to work to address the world’s lack of capacity to support the demand for semiconductor technologies. When we broke ground in Arizona three weeks ago, there was a lot of press around that. I think you covered Intel Accelerated, where we discussed Ponte Vecchio and how it will use our new process technology, even using TSMC tiles for the Ponte Vecchio general-purpose GPU. We’ve been adopting the new processes we’ve talked about. We’re getting the yields we need. We’re highly optimistic that the industry demand for semiconductor technologies will make IFS a strong business for us. My team, by the way, develops all the pre-silicon simulation software that IFS customers can use to simulate the functionality of their chip before they send it for tape-out.

VentureBeat: I’ve written a few stories from Synopsys and Cadence about how much AI is going into chip design these days. I imagine you’re making use of all that.

Lavender: Being CTO, I get to look across the whole company. That’s one of the advantages of being CTO. I spend a lot of time with the people in our process technology. They’re leading adopters of AI and ML technology in the manufacturing process, both in terms of optimizing yield from each wafer — wafers are expensive and you want to get the most out of every wafer — and then also for diagnostics, for defects.

Every company has silent data errors as a result of their manufacturing processes. As you get to lower and lower nanometer, into angstroms, the physics gets interesting. Physics is a statistical science. You need statistical reasoning, which is what AI and ML are really about, to help us make sure we’re reducing our defects per million, as well as getting the densities we want per wafer. You’re right. That’s the data to physics layer. You have to use machine learning and inference. We have our own models for that, about how to optimize that so we’re more competitive than our competitors.

Above: Intel CEO Pat Gelsinger breaking ground on chip production.

Image Credit: Intel

VentureBeat: If we go back in history some, Nvidia’s investments in Cuda were interesting for breaking the GPU out of its straitjacket, loosening it up for AI. That led to many changes in the industry. Does Intel have its own version of how you’d like to have something like that happen again?

Lavender: There’s at least three parts to that in the way I think about it. Everyone’s interested in roofline performance. Those are the bragging rights in the industry, whether it’s for a CPU or a GPU. We’ve released some preliminary ML performance numbers for Ponte Vecchio. I think it’s on the 23rd of this month that we’ll be submitting additional ML performance numbers for Xeon into the community for validation and publication. I don’t want to pre-announce those, but wait a couple of days.

We’re continually making progress on what we’re doing there. But it’s really about the software stack. You mentioned Cuda. Cuda has become the de facto standard for programming the GPU in the AI and ML space, not just for gaming. But there are alternatives. Some people do OpenCL. Are you familiar with SYCL, the open source effort for data parallel C++? All of our oneAPI compilers compile for CPU, for Xeon and our client CPUs, for GPU and FPGAs, which are also going into network accelerators particularly. If you want to program in C++ with the SYCL extensions, which are up for standardization in the ISO C++ standards bodies, there’s a lot of effort going into writing SYCL as an open source, industry neutral technology. We’re supporting that for our own platforms, and we’d like to see more adoption across the industry.

I’m sure you’re familiar with AMD announcing their HIP, this thing called a heterogeneous programming environment, which is essentially — think of it as a source-to-source translation of Cuda into this HIP syntax for running on their own CPU and GPU. From Intel’s perspective, we want to support the open source community. We want open standards for how to do this. We’re investing, and we’re going to support the SYCL open source community, which is the Khronos Group. We think that provides a more neutral environment. In fact, I’m told you can program SYCL on top of Nvidia GPUs.

That’s sort of step two, once you get competitive at the GPU level. Step three is, what’s the ecosystem that’s already out there? There’s lots of ISVs that are already in these spaces like health care, edge computing, automotive. Everybody wants choice. Nobody wants proprietary lock-in. We’re going to pursue the path of presenting the market and the industry and our customers with choice.

VentureBeat: How open do you want to be? That’s always a good question.

Lavender: We’ll announce this more specifically at Intel Innovation, but the oneAPI ecosystem we’ve talked about — in some sense, the oneAPI name doesn’t mean there’s one single API. It’s really just a brand name. We have more than seven different vertical toolkits for building various things with the technology. We have more than 40 components — toolkits, SDKs, and so on — that make up the oneAPI ecosystem. It’s really an ecosystem of Intel accelerated technologies, all freely available. We’re doing the oneAPI release. We’re accelerating everything from crypto to codecs to GPUs to FPGAs to CPUs — x86 CPUs, obviously, but not necessarily ours. You can use those tools on AMD if you choose.

Our view is to provide the toolkits out there, and we’ll compete at the system level together with our customers, our partners. We’ll enable all the ISVs. It’s not just the open source. We’ll enable the ISVs to use those libraries. It enables anybody doing cloud development. It enables those 4 million enterprise developers on Red Hat. Just enable everybody. We all know about how software eats the world. The more software that’s out there, in the end, cloud to edge — ubiquitous computing, we call it — that enables the advancement of society, the advancement of culture, the advancement of security.

We’re big on pushing our security features in our hardware through those software components. We’re going to get to a more secure world with less supply chain risk from hackers. Even now, machine learning models are being stolen. People spend millions of dollars to train these things, develop these models, and when they deploy them at the edge people are stealing them, because the edge is not secure. We can use all the security features like SGX and TDX in our hardware to create a security as a service capability for software. We can have secure containers. We pushed an open source project called Kata Containers that gets security from our trusted extensions and our hardware through Linux.

The more we can deliver the value of those innovations in our hardware — that most people don’t know about — through the software stack, then that value materializes. If you use Signal messenger for your communications, did you know that Signal’s servers run on Intel hardware with SGX providing a secure enclave for your security credentials, so your communications aren’t hacked or viewed by the cloud vendors? Only Signal has access to the certificates. That’s enabled by us running on Intel hardware in the cloud. The CTO of Signal will be on stage with me as we talk about this, along with the CTO of Red Hat. The CTO of Signal did his undergraduate honors thesis under me on secure anonymous communication over the internet in 2002. I’m really proud of my student and what he’s done.

Above: Greg Lavender came to Intel in June from VMware.

Image Credit: Intel

VentureBeat: How do you think about something like RISC-V?

Lavender: It shows that innovation is ever-present and always occurring. RISC-V is another set of technologies that will be adopted particularly, I would think, outside the United States, as in Europe and China and elsewhere in Asia people want alternatives to ARM for their own reasons. It’ll be another open architecture, open ecosystem, but the challenge we have as an industry is we have to develop the software ecosystem for RISC-V. There’s a massive software ecosystem that’s evolved over a decade or more for ARM. Either we co-opt that software ecosystem for RISC-V, or a new one emerges. There’s appetite for both, I think. There’s already investment in ARM, but at the same time there’s potential to develop something that’s not tied to the ARM environment.

There are differing opinions. I’ve heard from various people about the opportunity for RISC-V. But clearly it’s happening. I think it’s good. It gives more choice in the industry. Intel will track and see where it goes. I generally believe that it’s a positive trend in the industry.

VentureBeat: As far as what people can expect next week, when it was in person there were so many different kinds of options for deep dives. I guess you may have even more options when you’re doing it online. How would you compare this experience to what people might remember from before about Intel Developer Forum?

Lavender: It’s going to be very interactive, with Pat and myself, Sandra Rivera, Gregory Bryant for the client side, Nick McKeown. Sandra, myself, and Nick are all new in our roles, around 100-plus days. It’s going to be a lively conversation style. I forget the total number, but we have more than 100 “meet the geek” demos. We’ll have some cool stuff, everything from 5G edge robotics to deep learning, AI, ML, obviously graphics. We’re going to show off our new Alder Lake processor. Lots of stuff about various open source toolkits we’ve launched. You may not have heard of IPDK. It’s an open source project we launched. A lot of people are jumping on the bandwagon to offload workloads that traditionally run on the cores to the smart NIC. We have some partners that will be showing up to talk about our technology and how they’re using it.

It’s only a two-day event, but there’s a lot of material packed into those two days. It’s a video format. You can browse around and pick and choose what you want. I think we’re all fatigued of these virtual conferences. We’re trying to make it not just a bunch of talking heads, but more of an interactive dialogue about things we’re doing, about our customers and how they’re taking advantage of it, and then quickly transitioning to live or recorded demos to show that it’s real. It’s not just marketing. It’s real.

VentureBeat: Does this sort of thing make you wish the metaverse was here, that we could make it happen faster?

Lavender: There’s this whole sociological, anthropological conversation to have about the transition we’ve all been through for the last two years. For me, I worked in banking, so I’ve learned to think like a global economist. You can’t help but do that when you’re CTO of a global financial company. I look at these things at more of the macroeconomic level in terms of the likely societal changes. Clearly the shortages in the supply chain and the chokes in the supply chain have shown the insatiable demand for technology generally. Everything we’re doing now is technology-enabled. Can you imagine if we didn’t have Zoom, Teams, whatever? What would that have been like? Obviously this is something in the human experience. We’ve all experienced that.

Above: Intel has 6,000 software engineers.

Image Credit: Intel

But without a doubt, the demand for semiconductors, the demand for software will outstrip the talent, the global talent we have to produce it. We have to get economies of scale. This is where Intel has an advantage. We have those economies of scale more than anyone. We can satisfy more of that demand, even if we have to build factories. We have to accelerate all of that with software. This is why there’s a software-first strategy here. If we’re talking five years from now, it could be a very different story, because the company is putting its mojo back into software, particularly open source software. We’re going to continue to deliver a broad portfolio of technologies to enable that global demand to be met in multiple verticals. We all know software is the liquid. It’s the lubricant that enables that technology to add social and economic value.

VentureBeat: Does it look like 2023 is when the supply chain gets back to its healthier self?

Lavender: I read the same press you read. It seems like it’s a two-year cycle to get there. I’ve read stories about people building their own containers to take over on a ship and collect the parts to bring back. Walking supplies through customs in various countries to get it through the process and the bureaucracy. Right now it seems like a lot of unusual things are happening. I’ve even heard about people receiving SoC packages, and when they go to test them there are actually no guts inside the SoC. That hasn’t happened to us, but these are the stories I’ve read about in the press.

VentureBeat: I would hope that the U.S. government comes around and sees the need to invest in bringing a lot of this back onshore.

Lavender: The CHIPS Act — I’m sure you’re familiar with that. It’s passed the Senate. It hasn’t yet passed the House. I think it’s tied up in the politics of the current spending bill. The Biden administration is trying to put it through. Obviously we’re supporters of that. It’s as good for the industry as it is for Intel. But your guess is as good as mine about geopolitics. It’s not an area that I have any expertise in.

VentureBeat: As far as some futuristic things, I wonder if you’ve thought about some things like Web 3 and the decentralized web, whether that may come to pass or whether it needs certain investments across the industry to happen.

Lavender: There’s a lot of talk. We all think that the datacenter of the future — you may have heard us talk about going from exascale to zettascale. When you get to those scales, to zettascale, it becomes a communications issue. We’ve invested and pioneered in silicon photonics. We can get latencies over distances to a millisecond. That’s quite a distance you can travel at the speed of light.
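
That latency-to-distance claim can be checked with simple arithmetic. The sketch below is not an Intel figure — it assumes a typical single-mode fiber refractive index of about 1.468, which slows light to roughly two-thirds of its vacuum speed:

```python
# Back-of-the-envelope: how far can an optical signal travel within a latency budget?
C_VACUUM_KM_PER_S = 299_792.458      # speed of light in vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.468       # assumed typical single-mode fiber value

def reach_km(latency_s: float, one_way: bool = True) -> float:
    """Distance a signal covers in optical fiber within the given latency."""
    v = C_VACUUM_KM_PER_S / FIBER_REFRACTIVE_INDEX  # propagation speed in fiber
    d = v * latency_s
    return d if one_way else d / 2   # halve the distance for a round-trip budget

print(f"One-way reach in 1 ms:    {reach_km(1e-3):.0f} km")              # ≈ 204 km
print(f"Round-trip reach in 1 ms: {reach_km(1e-3, one_way=False):.0f} km")  # ≈ 102 km
```

So a one-millisecond budget buys on the order of 200 km of fiber one way (about 100 km round trip), which is indeed “quite a distance” for datacenter-scale interconnects, though well short of cross-continental links.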

First off, the innovations in core networking and the edge — it’s not just 5G. I have a new Nighthawk modem from Netgear. I get 400 megabits download. It cost me 800 bucks for that device, but if you’re on a good 5G network, you see the value of it. We’re going to be close to gigabit before too much longer. 6G is going to give you much more antenna bandwidth as well. The bandwidth has to go there before all the other compute density distributes.

I think what you’re talking about is workloads moving not necessarily to the cloud, but away from the cloud and more to the edge. That’s certainly a trend. We see that in our own business and our own growth, in demand for FPGAs and our 5G technologies. Compute becomes ubiquitous. That’s what we’ve said. Network connectivity becomes pervasive. And it’s software-controlled. There has to be software to manage that level of distribution, that level of autonomy, that level of disaggregation.

Humans aren’t good about building distributed control planes. Just look at what goes on today. The security architecture that has to overlay all of that — you’ve created a massive surface area for attack vectors. Again, here at Intel we think about these things. We have the capacity and the manufacturing capability to start building prototype technology. I have Intel Labs. That’s 700 researchers. Those are areas we’re discussing as we look at our funding for the next fiscal year, to start exploring these distributed architectures. But most important, back to the software story — I can build the hardware. We can do that. It’s about how you actually manage that at zettascale.

Above: Intel is taking a systems approach to software.

Image Credit: Intel

VentureBeat: You must be happy that Windows 11 has that hardware security feature built in. I think some of these game companies are starting to realize that ring zero access for things like anti-cheat in multiplayer games is important.

Lavender: Windows 11 requires TPM. I have an old Intel NUC that I use for programming. I’ve tried to upgrade to Windows 11 and it told me I needed to buy a new one because I didn’t have the Trusted Platform Module. I asked my colleagues here when the next NUC is coming out. I don’t want to get the currently shipping one. I want one with the new chips. So I’m in line for a beta box.

I just got put onto the Open Source Security Foundation, along with the CTOs of VMware and Red Hat and HPE and Dell. We’re really going to tackle this problem for the industry in that forum. From my platform at Intel as the CTO, I want to engage with all my ecosystem partners so that we solve this problem as an industry. It’s too big a problem to solve one-off.



Oculus Go unlock software released for hardware freedom

An Oculus Go unlocked OS build was released today, courtesy of John Carmack. After a prolonged period of time and a whole lot of work securing the rights to release the software, Carmack and crew made this build a reality. It’s a surprise, honestly, now that Oculus is owned by Facebook. It IS a welcome surprise, in any case.

The Oculus Go software unlock is relatively simple to enact. The most difficult part of the process will be your decision to push the software, given the agreement you must make with Oculus. You’ll effectively be agreeing that they are no longer responsible for any part of the device’s software or any future software updates.

SEE ALSO: Our original Oculus Go review

Per Oculus, “this process will not work on any other device or OS,” and “please note that unlocking your device is not reversible and you will no longer receive OTA updates.” So why would you want to initiate such an unlock?

Because once you unlock your Oculus Go, you can install any software you like. Per Oculus, this process will give you access to the OS build so that you might “repurpose Oculus Go for more things today.”

Per Carmack, this process “opens up the ability to repurpose the hardware for more things today, and means that a randomly discovered shrink wrapped headset twenty years from now will be able to update to the final software version, long after over-the-air update servers have been shut down.”

Take a peek at the Unlocking Oculus Go page at the Oculus for Developers website. There you’ll be taken through the steps that are required to unlock the device and open your software door to the future. And if you happen to find a very awesome use for this headset and would like to share, let us know! Or just let me know over on Twitter – let’s chat!


