Colossal Biosciences spins off Form Bio software, offers platform for advanced, AI-based applications


Dallas-based Colossal Biosciences, the self-described de-extinction company behind the woolly mammoth and thylacine, today announced that it is spinning off Form Bio as an independent software company offering a computational life sciences platform that bridges the gap between data and discovery.

Driven by deep learning artificial intelligence (AI) algorithms, Form Bio empowers life scientists with a software platform for managing large datasets, executing verified workflows, visualizing results and collaborating with their peers. The platform brings these capabilities together in one cohesive unit with a user experience designed to simplify computational work and bolster life science breakthroughs across companies, labs and universities. 

It is designed to be applied across an array of use cases including drug discovery, gene and cell therapy, manufacturing efficiency, academic research and more. Form Bio enters the market with a $30 million series A funding round led by JAZZ Venture Partners with participation from Thomas Tull, Colossal's lead investor.

“When you have a big scientific endeavor like de-extincting a species, you not only need the smartest scientists in the world, you need powerful software, much of which simply hasn’t existed until now,” said Ben Lamm, Colossal cofounder and CEO. “After reviewing everything available on the market, we chose to create our own software solution. Now, we want to share this platform with the broader community to impact other areas of scientific innovation, including human health.”


MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

Register Here

Form Bio was created almost in parallel with Colossal, the company that rose to fame for applying CRISPR technology for the purposes of species restoration, critically endangered species protection and the repopulation of critical ecosystems that support the continuation of life on Earth. At the time, the Colossal team recognized that the market lacked the necessary software capabilities and that traditional bioinformatics processes were inadequate for rapidly analyzing enormous volumes of data. In particular, the tools Colossal's own team needed for comparative genomics and computational biology simply did not exist.

Developed initially to advance de-extinction, Form Bio was a solution that had the potential to address the data deluge while responding to the industry’s acute need for a simple, user-friendly platform to replace mountains of code, cumbersome data wrangling processes and underdeveloped tools. 

Colossal’s team of computational biologists, along with the scientists at the Church Lab and more than 35 geneticists at Colossal’s labs in Boston and Dallas, worked to develop and refine Form Bio’s core platform capabilities for the broader market.

The future of bioengineering: AI and machine learning

“Computer-aided design, fabrication, testing analyses and machine learning are key to the future of bioengineering in general and specifically restoration of endangered and extinct genetic diversity for keystone species in vital ecosystems,” said Colossal cofounder George Church, Professor at Harvard Medical School and MIT, and director of Synthetic Biology at the Wyss Institute in Boston. “Form Bio is the software critical to pave the way. As scientist-engineers, we need these pipelines and look forward to faster breakthroughs in scientific discoveries and applications, now that software has caught up with science.”

The Form platform sits at the intersection of biology and discovery. It brings together core components of data management, workflows, results visualization and collaboration in one cohesive solution. Included as part of Form is an extensive catalog of verified workflows, covering a wide range of scientific use cases from ancient DNA analysis and transcriptomics to AAV gene therapy. 

The platform also serves as a foundation for advanced, AI-based applications tailored to the needs of specific industries and academic fields. “Scientists do like to tinker and do things very bespoke. The platform allows them to upload their own workflows or use the workflows that we’ve already developed and validated. So it gives them the confidence that it is something that they can trust the output of and that if they bring a different set of data in six months to get comparable results,” said Claire Aldridge, Ph.D., chief strategy officer, Form Bio. 

With the launch of Form Bio, Kent Wakeford will transition from his day-to-day role as Colossal’s COO to Form Bio’s co-CEO, where he will work alongside co-CEO Andrew Busey and support Colossal’s CEO and board of directors in an executive special advisor role. Former Biolabs COO Adam Milne has also joined Colossal as its new chief operating officer.

“We’ve created one integrated platform that’s been in development for the last 18 months. It allows users to take an idea and move it all the way into production in a way that is more efficient and cost effective. By taking a more open and transparent approach, the platform can use different toolsets, export data in any way they choose, perform computational analysis with our machine learning and AI models, and share the information with peers or other journals. In a nutshell, we aspire to be like the GitHub for science,” Wakeford told VentureBeat.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

Repost: Original Source and Author Link


AI software: The bridge from data to insights

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.

Artificial intelligence (AI) everywhere has the potential to transform every business and improve the life of every person on the planet. In fact, every day we hear about AI breaking new ground, from detecting cancer and playing Minecraft, to creating “sentient” chatbots and generating compelling art. The goal of AI is simple: to accelerate “data to insights.” We have seen tremendous progress in the basic AI ingredients — the exponential growth of data, compute and algorithms.

Data, as measured by the total number of bytes, is in zettabytes (10²¹). Compute, as measured by hardware execution capacity in operations per second, is in petaflops (10¹⁵) to exaflops (10¹⁸). And algorithms, as measured by the number of parameters in a neural network, have exceeded a trillion (10¹²).
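The three exponents quoted above are easy to misread; a quick illustrative calculation puts them in relation to each other:

```python
# Illustrative arithmetic for the scales quoted above.
zettabyte = 10**21          # bytes: total data volume scale
petaflop = 10**15           # operations per second
exaflop = 10**18
trillion_params = 10**12    # parameters in the largest neural networks

# An exaflop machine performs 1,000x the operations of a petaflop machine.
print(exaflop // petaflop)  # 1000

# At one exaflop, touching every one of a trillion parameters once
# takes on the order of a microsecond of pure compute.
print(trillion_params / exaflop)  # 1e-06 seconds
```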

However, research has found that 87% of AI concepts do not make it into deployment for several reasons, including performance, infrastructure, and multi-vendor software and tooling. As data sets grow and systems become more complex, developers face new challenges with AI implementation and deployment. As a result, business objectives are slowed drastically as developers spend valuable time and resources resolving technical, process and organizational issues, working through failed projects and updating code — all of which create additional cost. 

AI is an end-to-end problem that requires end-to-end support. To truly experience AI everywhere, developers and data scientists working within the space need to bring compute, data and algorithms together. For those looking to broaden the use of AI within their organization, focusing on the principles of human productivity and computer performance is key.



The question is: What is the best way to do this?

Bridging the divide

AI software is the bridge from “data to insights” with the support of “compute and algorithms.” Still, this software bridge needs to be constructed for millions of data scientists and developers, whose AI applications are in turn used by billions of users. 

AI software can enhance human productivity to scale AI everywhere. To drive the proliferation of AI, it is important that the industry actualizes methods that make it easier for developers and data scientists to build on current AI solutions and algorithms, or pioneer new ones. It should not require a PhD in AI to apply AI widely. Therefore, it is equally important to ensure that the data and the infrastructure are readily accessible. 

Productivity can be achieved with the right data and AI platform and tooling, such as tools that increase the performance of popular industry-standard AI frameworks or open tools that facilitate end-to-end AI workflows. These might include AI analytics toolkits, development and deployment toolkits, end-to-end distributed AI toolkits, reference toolkits and AutoML toolkits, as well as domain-specific toolkits, low-code or no-code development environments, data labeling and augmentation tools, bias detection tools, and tools for transfer learning, federated learning and more.

All of these are open, standards-based, unified and secure, making it easier for developers and data scientists to engineer data and build and deploy AI solutions. Some tools, for example, can increase human productivity more than tenfold.

Accelerating AI software

There is no “one-size-fits-all” solution for the software that is utilized during each phase of the AI application lifecycle, because it varies across verticals and use cases. As a result, leaders within the industry must collaborate on open-source tools.

For instance, Intel is partnering with Accenture to help enterprises innovate and accelerate their digital transformation journey with the introduction of open source AI reference kits. These reference kits can reduce the time to solution from weeks to days, helping data scientists and developers train models faster and at a lower cost by overcoming the limitations of proprietary environments.

AI software can enhance computer performance through automatic software optimizations. The impact of software AI acceleration can be significant, from 10 to 100 times in many cases. Computer performance is often the primary requirement that IT teams work toward because of the resource- and compute-intensive nature of AI workloads, which leads to cost and compute-time constraints.

Hardware AI acceleration needs to be complemented by software AI acceleration because of the performance optimizations software enables. Without advanced software optimizations, utilization of those petaflops or exaflops can be very low, especially when new hardware is released, leaving more than half of the hardware's execution capability idle.

Software AI acceleration can help improve the performance of AI hardware by reducing training length, inference time, energy consumption, memory usage and cost — all while maintaining high levels of performance and accuracy. This is key for easing the development and deployment of intelligent applications.
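As an illustration of the principle (not of Intel's tooling specifically), the same computation on the same hardware can run several times faster when the software path is optimized:

```python
import timeit

def naive_sum(values):
    """Interpreted, element-by-element loop."""
    total = 0.0
    for v in values:
        total += v
    return total

data = list(range(1_000_000))

# Same machine, same data: the optimized path (a C-implemented built-in)
# typically runs several times faster than the interpreted loop.
t_naive = timeit.timeit(lambda: naive_sum(data), number=3)
t_opt = timeit.timeit(lambda: sum(data), number=3)
assert naive_sum(data) == sum(data)  # identical result, different cost
print(f"speedup: {t_naive / t_opt:.1f}x")
```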

Getting to AI everywhere

Given the diversity of workloads in AI, a heterogenous architecture strategy that provides greater choice to users works best for hardware, which ties directly to the performance of these models. CPUs with built-in AI acceleration, GPUs, custom AI accelerators and even FPGAs all have a role to play. In addition, AI software can provide a consistent user interface to allow users and developers to move from one hardware accelerator to another depending on the workloads.
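The "consistent user interface" across accelerators can be sketched as a thin dispatch layer. The API below is hypothetical and not any vendor's; it only illustrates that the call site stays identical while the backend changes:

```python
# Hypothetical sketch: one call site, swappable hardware backends.
class Backend:
    def matmul_cost(self, n):
        raise NotImplementedError

class CPU(Backend):
    name = "cpu"
    def matmul_cost(self, n):
        return n**3            # scalar baseline cost model

class GPU(Backend):
    name = "gpu"
    def matmul_cost(self, n):
        return n**3 / 1000     # massively parallel cost model

def run(workload_size, backend: Backend):
    # Caller code is identical regardless of the accelerator underneath.
    return backend.matmul_cost(workload_size)

for dev in (CPU(), GPU()):
    print(dev.name, run(512, dev))
```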

Across all industries, AI is growing. According to Gartner, worldwide AI software revenue alone is expected to reach $62.5 billion in 2022, an increase of 21.3% from 2021.

AI software is the bridge for AI everywhere, increasing human productivity and computer performance. To experience AI everywhere, developers and data scientists need to simplify processes related to AI systems, ensure productivity through software that features automation, and find solutions that can optimize performance of AI workloads in open ecosystems and secure cross-architecture environments. Only then can organizations bring AI everywhere to life.

Wei Li is the vice president and general manager of AI & Analytics at Intel.


Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read More From DataDecisionMakers

Repost: Original Source and Author Link


In human-centered AI, UX and software roles are evolving


Software development has long demanded the skills of two types of experts: those interested in how a user interacts with an application, and those who write the code that makes it work. The boundary between the user experience (UX) designer and the software engineer is well established. But the advent of “human-centered artificial intelligence” is challenging traditional design paradigms.

“UX designers use their understanding of human behavior and usability principles to design graphical user interfaces. But AI is changing what interfaces look like and how they operate,” says Hariharan “Hari” Subramonyam, a research professor at the Stanford Graduate School of Education and a faculty fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

In a new preprint paper, Subramonyam and three colleagues from the University of Michigan show how this boundary is shifting and have developed recommendations for ways the two can communicate in the age of AI.  They call their recommendations “desirable leaky abstractions.” Leaky abstractions are practical steps and documentation that the two disciplines can use to convey the nitty-gritty “low-level” details of their vision in language the other can understand.

Read the study: Human-AI Guidelines in Practice: The Power of Leaky Abstractions in Cross-Disciplinary Teams

“Using these tools, the disciplines leak key information back and forth across what was once an impermeable boundary,” explains Subramonyam, a former software engineer himself.



Less is not always more

As an example of the challenges presented by AI, Subramonyam points to facial recognition used to unlock phones. Once, the unlock interface was easy to describe. User swipes. Keypad appears. User enters the passcode. Application authenticates. User gains access to the phone.

With AI-inspired facial recognition, however, UX design begins to go deeper than the interface into the AI itself. Designers must think about things they’ve never had to before, like the training data or the way the algorithm is trained. Designers are finding it hard to understand AI capabilities, to describe how things should work in an ideal world, and to build prototype interfaces. Engineers, in turn, are finding they can no longer build software to exact specifications. For instance, engineers often consider training data as a non-technical specification. That is, training data is someone else’s responsibility.

“Engineers and designers have different priorities and incentives, which creates a lot of friction between the two fields,” Subramonyam says. “Leaky abstractions are helping to ease that friction.”

Radical reinvention

In their research, Subramonyam and colleagues interviewed 21 application design professionals — UX researchers, AI engineers, data scientists, and product managers — across 14 organizations to conceptualize how professional collaborations are evolving to meet the challenges of the age of artificial intelligence.

The researchers lay out a number of leaky abstractions for UX professionals and software engineers to share information. For the UX designers, suggestions range from sharing qualitative codebooks to communicating user needs in the annotation of training data. Designers can also storyboard ideal user interactions and desired AI model behavior. Alternatively, they could record user testing to provide examples of faulty AI behavior to aid iterative interface design. They also suggest that engineers be invited to participate in user testing, a practice not common in traditional software development.

For engineers, the co-authors recommend leaky abstractions such as compiling computational notebooks of data characteristics, providing visual dashboards that establish AI and end-user performance expectations, creating spreadsheets of AI outputs to aid prototyping, and “exposing” the various “knobs” designers can use to fine-tune algorithm parameters.

The authors’ main recommendation, however, is for these collaborating parties to postpone committing to design specifications as long as possible. The two disciplines must fit together like pieces of a jigsaw puzzle. Fewer complexities mean an easier fit. It takes time to polish those rough edges.

“In software development, there is sometimes a misalignment of needs,” Subramonyam says. “Instead, if I, the engineer, create an initial version of my puzzle piece and you, the UX designer, create yours, we can work together to address misalignment over multiple iterations, before establishing the specifics of the design. Then, only when the pieces finally fit, do we solidify the application specifications at the last moment.”

In all cases, the historic boundary between engineer and designer is the enemy of good human-centered design, Subramonyam says, and leaky abstractions can penetrate that boundary without rewriting the rules altogether.

Andrew Myers is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on Copyright 2022




Kidas’ ProtectMe software is keeping an ear on your kids

Interested in learning what’s next for the gaming industry? Join gaming executives to discuss emerging parts of the industry this October at GamesBeat Summit Next. Register today.

Here’s a question for you: How do you supervise your kid’s internet time without invading their privacy? Kidas might have an answer, with its ProtectMe software. ProtectMe isn’t just nanny software that alerts parents when their own kid misbehaves; it also recognizes signs of incoming harassment.

The program uses artificial intelligence to monitor alarming behaviors between kids in over 120 games. ProtectMe specifically looks for context; it can tell the difference between a playful jab and an actual threat.

At the end of every week the software generates a report for the parents. This report alerts them to any concerning behaviors from — or towards — their kid.

It sounds good in theory, but I had a few questions about the particulars. So I put them to Kidas boss Ron Kerbs: how exactly does the software work, and where is it going in the future?

How it currently works

GamesBeat: Is ProtectMe limited to voice-chat only?

Ron Kerbs: The ProtectMe software provides both voice and text monitoring. It knows when to turn on and off based on when games or communication apps such as Discord are turned on and off.

GamesBeat: Is it user-only? That is, is the main goal of ProtectMe to observe the child’s outgoing speech, and not incoming speech?

Kerbs: ProtectMe has the ability to understand communication both in and out, meaning we know if a child was exposed to, for example, cyberbullying, as the victim, the perpetrator or a witness. For privacy and compliance reasons, we do not provide gamer tags, personal details or transcripts to parents or guardians. Only in the severest of circumstances will we offer additional context such as if there is clear potential for trafficking or the child was planning to meet a stranger in person, amongst other critical circumstances.

GamesBeat: Can the software deal with audio-splitting? If a microphone is plugged in through an audio mixer, or run through a digital audio cord, can ProtectMe detect the default Windows option isn’t in use and adjust accordingly?

Kerbs: Kidas supports audio splitting; in some instances, we analyze more than eight channels at one time. Our technology can differentiate between relevant data and background audio that we can ignore.

Context matters

GamesBeat: Can ProtectMe tell the difference between a child actively saying something to someone and reading something out loud? Between another user talking and a voiced cutscene?

Kerbs: What’s novel about our technology is that it understands context as part of the game and goes beyond language processing and keyword recognition. We all know that trash talk is part of the game experience; however, the ProtectMe difference is that because our AI software has been tested against thousands of scenarios, it is able to respond correctly every time and decipher whether an interaction represents a direct threat or good-natured trash talk.

That’s pretty incredible and something that no one else in the market is offering. We also know that, generally, what’s appropriate for say a 7-year-old to be exposed to is quite different than that of a 14-year-old, and thus with the guidance of child psychologists and cyberbullying experts, we provide different threat severity scores and recommendations based on age. If someone uses a voice changer or a recording, we would be listening to the context and words being spoken, not the depth or pitch of the spoken words.
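Kerbs's point that context, not keywords alone, determines severity can be caricatured in a few lines. This is a toy rule-based sketch only; Kidas's actual system is a trained AI model, and every string below is invented:

```python
# Toy illustration only: real systems use trained models, not rules.
TRASH_TALK = {"get rekt", "ez", "you're bad"}
THREAT_MARKERS = {"meet me", "i know where", "send me your address"}

def classify(line: str, in_game_banter: bool) -> str:
    """Label a chat line using both its words and its context."""
    text = line.lower()
    if any(m in text for m in THREAT_MARKERS):
        return "threat"          # escalate regardless of context
    if in_game_banter and any(t in text for t in TRASH_TALK):
        return "trash talk"      # playful jab inside a match
    return "neutral"

print(classify("EZ, get rekt", in_game_banter=True))           # trash talk
print(classify("I know where you live", in_game_banter=True))  # threat
```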

Privacy is still important

GamesBeat: Can ProtectMe recognize repeat voices? If a child routinely teams up with someone who is regularly flagged as a vocal harasser, does Kidas alert parents of who their kid is hanging out with?

Kerbs: No, we are incredibly vigilant with respect to privacy laws and security. We encourage parents who are seeing repeated threats coming through to carry out further discussion within their family. Given that these topics can be incredibly challenging for parents to discuss, we provide personalized recommendations for how they can talk to their children about what has happened.

GamesBeat: Does ProtectMe track and/or provide updates-over-time for parents? For example: If ProtectMe flags a child 5 times in a week and reports that to parents, and the next week ProtectMe flags a child 3 times, is the parent alerted to the better behavior?

Kerbs: We do track week-over-week changes but not quite like that. Our reports start with a general color scoring system to categorize the severity of threats detected that week:

Red: Immediate action needed; the most concerning threats, such as contact with a pedophile.

Orange: Action needed; concerning threats such as bullying.

Yellow: Worth noting; behavior you should be aware of, such as age-appropriate trash talking.

Green: Nothing concerning; no indication of concerning threats found.

Our reports contain recommendations based on the overall severity score and which threat was detected. When we see repeated exposure to the same threats, a sequence of unique recommendations based on historical threats detected is provided.

From there, we also provide time spent playing analytics. These show what games were played the most, how many hours were played each day and in total for the week, how this compares to last week and how the total time spent playing compares to all ProtectMe gamers. We are screen-time advocates and want to keep gaming safe and fun; however, parents have told us they never know how much their kids are actually playing and how this compares to other gamers. So now, they have even more insight than without the software.
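The red/orange/yellow/green scheme Kerbs describes amounts to bucketing a severity score. A minimal sketch follows; the numeric thresholds are invented for illustration:

```python
# Sketch of the color scheme described above; the 0-10 severity
# scale and its cutoffs are hypothetical, not Kidas's actual values.
def weekly_color(max_severity: int) -> str:
    if max_severity >= 9:
        return "red"       # immediate action needed
    if max_severity >= 6:
        return "orange"    # action needed
    if max_severity >= 3:
        return "yellow"    # worth noting
    return "green"         # nothing concerning

print(weekly_color(10), weekly_color(7), weekly_color(4), weekly_color(0))
```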

What the future looks like

GamesBeat: Will ProtectMe be able to someday monitor game-specific harassment? (E.g., a user who spams pre-set chat functions in Rocket League, like “What a save!” over and over, to mock a player.)

Kerbs: This is not a current offering, however, we have plans to expand what our analysis looks at including game-specific harassment. For example, as part of our partnership with Overwolf, we use Overwolf’s APIs to get access to game-specific events to detect those situations.

GamesBeat: Is more in-depth tracking in the future for ProtectMe? (E.g., tea-bagging, Counter-Strike sprays, etc.)

Kerbs: Although our current focus is on audio and text chats, detecting other toxic events that don’t involve peer-to-peer communication such as teabagging and smurfing are definitely on our roadmap.

We have also partnered with some of the largest game studios and gaming equipment manufacturers to bring them custom solutions that fit their unique technology, storylines and challenges.

GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Learn more about membership.



Google will start distributing a security-vetted collection of open-source software libraries

Google announced a new initiative Tuesday aimed at securing the open-source software supply chain by curating and distributing a security-vetted collection of open-source packages to Google Cloud customers.

The new service, branded Assured Open Source Software, was introduced in a blog post from the company. In the post, Andy Chang, group product manager for security and privacy at Google Cloud, pointed to some of the challenges of securing open-source software and stressed Google’s commitment to open source.

“There has been an increasing awareness in the developer community, enterprises, and governments of software supply chain risks,” Chang wrote, citing last year’s major log4j vulnerability as an example. “Google continues to be one of the largest maintainers, contributors, and users of open source and is deeply involved in helping make the open source software ecosystem more secure.”

Per Google’s announcement, the Assured Open Source Software service will extend the benefits of Google’s own extensive software auditing experience to Cloud customers. All open-source packages made available through the service are also used internally by Google, the company said, and are regularly scanned and analyzed for vulnerabilities.

Currently, a list of the 550 major open-source libraries being continuously reviewed by Google is available on GitHub. While these libraries can all be downloaded independently of Google, the Assured OSS program will see audited versions distributed through Google Cloud, guarding against incidents where developers intentionally or unintentionally corrupt widely used open-source libraries. At present, the service is in early access mode and is expected to be made available for wider customer testing in Q3 2022.
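Distributing audited packages only helps if consumers can confirm they received the audited bytes. A minimal integrity check is sketched below; this is illustrative only, not the actual Assured OSS mechanism, and the package contents are made up:

```python
import hashlib

def verify_download(payload: bytes, expected_sha256: str) -> bool:
    """Reject a package whose bytes differ from the published digest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Hypothetical package and its published digest.
pkg = b"example-library-1.0.0 contents"
good = hashlib.sha256(pkg).hexdigest()

print(verify_download(pkg, good))                # True: bytes match
print(verify_download(pkg + b"tampered", good))  # False: corrupted copy
```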

The announcement from Google comes as part of an industry-wide drive to improve the security of the open-source software supply chain and one that has also been supported by the Biden administration.

In January, a group of some of the nation’s largest tech companies met with representatives of federal agencies including the Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency to discuss open-source software security in the wake of the log4j bug. Since then, a recent meeting of the companies involved resulted in a pledge of more than $30 million in funding to boost open-source software security.

Besides contributing funding, Google is also putting engineering hours toward keeping the supply chain secure. The company recently announced the formation of an “Open Source Maintenance Crew” that would work with the maintainers of popular libraries to improve security.



After ‘protestware’ attacks, a Russian bank has advised clients to stop updating software

As the Russian invasion of Ukraine draws on, consequences are being felt by many parts of the technology sector, including open-source software development.

In a recent announcement, the Russian bank Sber advised its customers to temporarily stop installing software updates to any applications out of concern that they could contain malicious code specifically targeted at Russian users, labeled by some as “protestware.”

As quoted in Russian-language news sites, Sber’s announcement reads:

Currently, cases of provocative media content being introduced into freely distributed software have become more frequent. In addition, various content and malicious code can be embedded in freely distributed libraries used for software development. The use of such software can lead to malware infection of personal and corporate computers, as well as IT infrastructure.

Where there was an urgent need to use the software, Sber advised clients to scan files with an antivirus or carry out manual review of source code — a suggestion that is likely to be impractical, if not impossible, for most users.

Though framed in general terms, the announcement was likely made in reference to an incident that took place earlier in March, where the developer of a widely used JavaScript library added an update that overwrote files on machines located in Russia or Belarus. Supposedly implemented as a protest against the war, the update raised alarm from many in the open-source community, with fears that it would undermine confidence in the security of open-source software overall.

The update was made in a JavaScript module called node-ipc, which, according to the NPM package manager, is downloaded around 1 million times per week and used as a dependency by the popular front-end development framework Vue.js.

According to The Register, updates to node-ipc made on March 7th and March 8th added code that checked whether the IP address of a host machine was geolocated in Russia or Belarus, and if so, overwrote as many files as possible with a heart symbol. A later version of the module dispensed with the overwriting function and instead dropped a text file on users’ desktops containing a message that “war is not the answer, no matter how bad it is,” with a link to a song by Matisyahu.

Although the most destructive features of the “protestware” module no longer appear in the code, the consequences are harder to undo. Since open-source libraries are fundamental to software development, a general loss of trust in their integrity could have knock-on effects for users in Russia and elsewhere.
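A mitigation many teams adopted after this incident is pinning dependencies to exact, previously audited versions rather than floating ranges that pull new releases automatically. Below is a minimal Python sketch of the idea; the allowlist contents and version numbers are hypothetical:

```python
# Minimal sketch (hypothetical data): verify that declared dependencies
# are pinned to exact, previously audited versions before installing.
AUDITED = {"node-ipc": "9.2.1", "vue": "2.6.14"}  # hypothetical allowlist

def check_manifest(deps: dict) -> list:
    """Return a list of problems: floating ranges or unaudited versions."""
    problems = []
    for name, version in deps.items():
        if any(ch in version for ch in "^~*x><"):
            problems.append(f"{name}: floating range '{version}' not allowed")
        elif AUDITED.get(name) != version:
            problems.append(f"{name}: {version} is not an audited version")
    return problems

# A caret range would have silently pulled in the sabotaged release.
print(check_manifest({"node-ipc": "^10.1.1"}))
```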

In a tweet, cybersecurity analyst Selena Larson referred to it as “forced insecurity”; in general, the open-source community has fiercely condemned the node-ipc update and pushed back on the idea of protest through module sabotage, even for worthy causes.

More broadly, the Ukraine conflict has posed difficult ethical questions to technology companies working in Russia. While many global tech leaders like Apple, Amazon, and Sony have paused or halted sales in the Russian market, others remain: in a blog post from March 7th, Cloudflare CEO Matthew Prince said that the company would continue to provide service in Russia despite calls to pull out, writing that “Russia needs more Internet access, not less.”

Repost: Original Source and Author Link


Evisort embeds AI into contract management software, raises $100M

We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 – 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!

For lawyers and the organizations that employ them, time is quite literally money. The business of contract management software is all about helping to optimize that process, reducing the time and money it takes to understand and manage contracts. 

As it turns out, there is big money in the market for contract management software as well. An increasingly integral part of the business is the use of AI-powered automation. To that end, contract management vendor Evisort today announced that it has raised $100 million in a series C round of funding, bringing its total funding to date to $155.6 million.

Evisort was founded in 2016 by a team of Harvard Law and MIT researchers, who discovered early on that there was a market opportunity for using AI to improve contract workflows within organizations. The company raised a $15 million series A back in 2019.

“If you think about it, every time a company sells something, buys something or hires somebody, there’s a contract,” Evisort cofounder and CEO Jerry Ting told VentureBeat. “Contract data really is everywhere.”

Contract management is a growing market

Evisort fits squarely into a market that analysts often refer to as contract lifecycle management (CLM). Gartner Peer Insights lists at least twenty vendors in the space, which includes both startups and more established vendors.

Among the large vendors in the space is DocuSign, which entered the market in a big way in 2020 with its $188 million acquisition of AI contract discovery startup Seal Software. Startups are also making headway, with SirionLabs announcing this week that it has raised $85 million to help add more AI and automation to its contract management platform.

The overall market for contract lifecycle management is set to grow significantly in the coming years, according to multiple reports. Future Market Insights estimates that the global CLM market generated $936 million in revenue in 2021 and expects it to reach $2.4 billion by 2029. MarketsandMarkets offers a more aggressive forecast, projecting the market will hit $2.9 billion as early as 2024.

Ting noted that while every organization has contracts, many still do not manage them with a dedicated digital system, relying instead on spreadsheets and email. That's one of the key reasons he expects significant growth in the CLM space as organizations realize there is a better way to handle contracts.

Integrating AI to advance the state of contract management

Evisort’s flagship platform uses AI to read contracts that users upload into its software-as-a-service (SaaS) platform.

Ting explained that his company developed its own algorithm to help improve natural language processing and classification of important areas in contracts. Those areas could include terms of a deal, such as deadlines, rates and other conditions of importance for a lawyer who is analyzing a contract. Going a step further, Evisort’s AI will now also analyze the legal clauses in an agreement.

“We can actually pull the pertinent data out of a contract, instead of having a human have to type it into a different system,” Ting said. 
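Evisort has not published its models, but the kind of structured extraction Ting describes can be illustrated with a toy version: pull candidate dates and dollar amounts out of contract text so nobody has to retype them. Everything below (the field names, the regex patterns) is illustrative only; a production system like Evisort's uses trained NLP models, not regular expressions.

```python
import re

# Toy stand-in for AI-driven contract extraction: regexes surface candidate
# deadlines and amounts from raw contract text. This is an illustration of
# the concept, not Evisort's actual pipeline, which uses trained NLP models.

DATE_RE = re.compile(
    r"\b(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},\s+\d{4}\b"
)
AMOUNT_RE = re.compile(r"\$[\d,]+(?:\.\d{2})?")

def extract_terms(contract_text: str) -> dict:
    """Return candidate deadlines and dollar amounts found in a contract."""
    return {
        "dates": DATE_RE.findall(contract_text),
        "amounts": AMOUNT_RE.findall(contract_text),
    }

sample = ("Payment of $12,500.00 is due by March 31, 2023. "
          "A late fee of $250 applies after April 15, 2023.")
terms = extract_terms(sample)
```

Even this crude version shows why the extracted fields matter: once dates and amounts are structured data rather than prose, they can feed downstream workflow steps like deadline alerts and approvals.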

Once all the contract data is understood and classified, the next challenge that faces organizations is what to do with all the data. That’s where the other key part of Evisort’s platform comes into play, with a no-code workflow service. The basic idea with the workflow service is to help organizations collaborate on contract activities, including analysis and approvals.

What $100M of investment into AI will do for Evisort

With the new funding, Ting said that his company will continue to expand its go-to market and sales efforts. Evisort will also be investing in new AI capabilities that Ting hopes will democratize access to AI for contract management.

To date, he explained, Evisort’s AI has worked largely autonomously, based on definitions that Evisort creates. With future releases of the platform, Ting wants to enable users to adjust and train the algorithm for specific, customized needs. The plan is to pair Evisort’s no-code capabilities with this future feature, in an approach that will make it easy for regular users, not just data scientists, to build AI capabilities to better understand and manage contracts.

“I think the 100 million dollar mark tells the market, hey, this company is a serious player and they’re here to stay,” Ting said. “It’s a scale-up, not a startup.”

The new funding round was led by TCV with participation from Breyer Capital as well as existing investors Vertex Ventures, Amity Ventures, General Atlantic and Microsoft’s venture capital fund M12.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.



OpenAI can translate English into code with its new machine learning software Codex

AI research company OpenAI is releasing a new machine learning tool that translates the English language into code. The software is called Codex and is designed to speed up the work of professional programmers, as well as help amateurs get started coding.

In demos of Codex, OpenAI shows how the software can be used to build simple websites and rudimentary games using natural language, as well as translate between different programming languages and tackle data science queries. Users type English commands into the software, like “create a webpage with a menu on the side and title at the top,” and Codex translates this into code. The software is far from infallible and takes some patience to operate, but could prove invaluable in making coding faster and more accessible.
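At launch, Codex was exposed through the same completions-style API as GPT-3: an English instruction goes in as the prompt, generated code comes back. The sketch below builds such a request payload locally without sending it; the engine name `davinci-codex` matches what OpenAI used at release, but treat the exact parameters as assumptions rather than a current API reference.

```python
import json

# Sketch of a Codex-style completion request. The payload is constructed but
# not sent; engine name and parameters reflect the API as described at
# Codex's release and may have changed since.

def build_codex_request(instruction: str) -> dict:
    # Codex demos commonly framed the instruction as a docstring-style prompt.
    prompt = f'"""\n{instruction}\n"""\n'
    return {
        "engine": "davinci-codex",
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0,   # deterministic output suits code generation
        "stop": ['"""'],    # stop before the model starts a new docstring
    }

payload = build_codex_request(
    "Create a webpage with a menu on the side and a title at the top."
)
body = json.dumps(payload)  # what would be POSTed to the completions endpoint
```

The design choice worth noting is that "programming in English" here is still ordinary API plumbing: the natural-language command is just a string in a JSON request, and all the intelligence lives server-side in the model.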

“We see this as a tool to multiply programmers,” OpenAI’s CTO and co-founder Greg Brockman told The Verge. “Programming has two parts to it: you have ‘think hard about a problem and try to understand it,’ and ‘map those small pieces to existing code, whether it’s a library, a function, or an API.’” The second part is tedious, he says, but it’s what Codex is best at. “It takes people who are already programmers and removes the drudge work.”

OpenAI used an earlier version of Codex to build a tool called Copilot for GitHub, a code repository owned by Microsoft, which is itself a close partner of OpenAI. Copilot is similar to the autocomplete tools found in Gmail, offering suggestions on how to finish lines of code as users type them out. OpenAI’s new version of Codex, though, is much more advanced and flexible, not just completing code, but creating it.

Codex is built on top of GPT-3, OpenAI’s language generation model, which was trained on a sizable chunk of the internet and as a result can generate and parse the written word in impressive ways. One application users found for GPT-3 was generating code, and Codex improves on its predecessor’s abilities by being trained specifically on open-source code repositories scraped from the web.

This latter point has led many coders to complain that OpenAI is profiting unfairly from their work. OpenAI’s Copilot tool often suggests snippets of code written by others, for example, and the entire knowledge base of the program is ultimately derived from open-source work, shared to benefit individuals, not corporations. The same criticisms will likely be leveled against Codex, though OpenAI says its use of this data is legally protected under fair use.

When asked about these complaints, Brockman responds: “New technology is coming, we do need this debate, and there will be things we do that the community has great points on and we will take feedback and do things differently.” He argues, though, that the wider coding community will ultimately benefit from OpenAI’s work. “The real net effect is a lot of value for the ecosystem,” says Brockman. “At the end of the day, these types of technologies, I think, can reshape our economy and create a better world for all of us.”

Codex will also certainly create value for OpenAI and its investors. Although the company started life as a nonprofit lab in 2015, it switched to a “capped profit” model in 2019 to attract outside funding, and although Codex is initially being released as a free API, OpenAI will start charging for access at some point in the future.

OpenAI says it doesn’t want to build its own tools using Codex, as it’s better placed to improve the core model. “We realized if we pursued any one of those, we would cut off any of our other routes,” says Brockman. “You can choose as a startup to be best at one thing. And for us, there’s no question that that’s making better versions of all these models.”

Of course, while Codex sounds extremely exciting, it’s difficult to judge the full scope of its capabilities before real programmers have got to grips with it. I’m no coder myself, but I did see Codex in action and have a few thoughts on the software.

OpenAI’s Brockman and Codex lead Wojciech Zaremba demonstrated the program to me online, using Codex to first create a simple website and then a rudimentary game. In the game demo, Brockman found a silhouette of a person on Google Images then told Codex to “add this image of a person from the page” before pasting in the URL. The silhouette appeared on-screen and Brockman then modified its size (“make the person a bit bigger”) before making it controllable (“now make it controllable with the left and right arrow keys”).

It all worked very smoothly. The figure started shuffling around the screen, but we soon ran into a problem: it kept disappearing off-screen. To stop this, Brockman gave the computer an additional instruction: “Constantly check if the person is off the page and put it back on the page if so.” This stopped it from moving out of sight, but I was curious how precise these instructions needed to be. I suggested we try a different one: “Make sure the person can’t exit the page.” This worked, too, but for reasons neither Brockman nor Zaremba could explain, it also changed the width of the figure, squashing it flat on-screen.

“Sometimes it doesn’t quite know exactly what you’re asking,” laughs Brockman. He has a few more tries, then comes up with a command that works without this unwanted change. “So you had to think a little about what’s going on but not super deeply,” he says.

This is fine in our little demo, but it says a lot about the limitations of this sort of program. It’s not a magic genie that can read your brain, turning every command into flawless code — nor does OpenAI claim it is. Instead, it requires thought and a little trial and error to use. Codex won’t turn non-coders into expert programmers overnight, but it’s certainly much more accessible than any other programming language out there.

OpenAI is bullish about the potential of Codex to change programming and computing more generally. Brockman says it could help solve the programmer shortage in the US, while Zaremba sees it as the next step in the historical evolution of coding.

“What is happening with Codex has happened before a few times,” he says. In the early days of computing, programming was done by creating physical punch cards that had to be fed into machines, then people invented the first programming languages and began to refine them. “These programming languages, they started to resemble English, using vocabulary like ‘print’ or ‘exit’ and so more people became able to program.” The next step in this trajectory is doing away with specialized coding languages altogether and replacing them with English-language commands.

“Each of these stages represents programming languages becoming more high level,” says Zaremba. “And we think Codex is bringing computers closer to humans, letting them speak English rather than machine code.” Codex itself can speak more than a dozen coding languages, including JavaScript, Go, Perl, PHP, Ruby, Swift, and TypeScript. It’s most proficient, though, in Python.

Codex also has the ability to control other programs. In one demo, Brockman shows how the software can be used to create a voice interface for Microsoft Word. Because Word has its own API, Codex can feed it instructions in code created from the user’s spoken commands. Brockman copies a poem into a Word document and then tells Word (via Codex) to first remove all the indentations, then number the lines, then count the frequency of certain words, and so on. It’s extremely fluid, though hard to tell how well it would work outside the confines of a pre-arranged demo.

If it succeeds, Codex might not only help programmers but become a new interface between users and computers. OpenAI says it’s tested Codex’s ability to control not only Word but other programs like Spotify and Google Calendar. And while the Word demo is just a proof of concept, says Brockman, Microsoft is apparently already interested in exploring the software’s possibilities. “They’re very excited about the model in general and you should expect to see lots of Codex applications be created,” he says.



Simpro raises $350M as demand grows for field service automation software

Join gaming leaders, alongside GamesBeat and Facebook Gaming, for their 2nd Annual GamesBeat & Facebook Gaming Summit | GamesBeat: Into the Metaverse 2 this upcoming January 25-27, 2022. Learn more about the event. 

Field service management, which refers to the management of jobs performed in the field (e.g., telecom equipment repair), faces continued challenges brought on by the pandemic. While the number of customer service inquiries has increased as enterprises have adopted remote work arrangements, worker availability has decreased. Forty-seven percent of field service companies say that they’re having trouble finding enough quality technicians and drivers to meet business goals, according to a Verizon survey. The shortfall in the workforce has increased the burden on existing staff, who’ve been asked to do more with fewer resources.

Against this backdrop, Simpro, a field service management software company based in Brisbane, Australia, today announced that it raised $350 million from K1 Investment Management with participation from existing investor Level Equity. The new funding brings Simpro’s total capital raised to nearly $400 million, which CEO Sean Diljore says will be put toward product development and customer support with a particular focus on global trade and construction industries.

Simpro also revealed today that it acquired ClockShark, a time-sheeting and scheduling platform, as well as AroFlo, a job management software provider. The leadership teams of Simpro, ClockShark, and AroFlo will operate independently, Diljore says, including continued work on existing services.


Above: Simpro’s maintenance-planning dashboard.

“This investment marks the next stage of Simpro’s exciting growth journey. Our mission is to build a world where field service businesses can thrive,” Diljore said in a statement. “We’re thrilled to welcome ClockShark and AroFlo to the Simpro family. Both companies are leaders in their spaces and have incredibly valuable product offerings that will benefit our combined customer bases and help our customers increase revenue. We look forward to growing together and building a range of solutions for the field service and construction industries.”

Managing field service workers

Field service workers feel increasingly overwhelmed by the number of tasks employers are asking them to complete. According to a study by The Service Council, 75% of field technicians report that work has become more complex and that more knowledge — specifically more technical knowledge — is needed to perform their jobs now versus when they started in field service. Moreover, 70% say that both customer and management demands have intensified during the health crisis.

Simpro, which was founded in 2002 by Curtis Thomson, Stephen Bradshaw, and Vaughan McKillop, claims to offer a solution in software that eases the burden on field workers and their managers. The company’s platform provides quoting, job costing, scheduling, and invoicing tools in addition to capabilities for reporting, billing, testing assets, and planning preventative maintenance.

Bradshaw, a former electrical contractor, teamed up with McKillop, an engineering student, in the early 2000s to build the prototype for Simpro. Working out of Bradshaw’s garage, they started with the development of job list functionality before adding new features, including a scheduling tool for allocating resources.

Today, Simpro supports over 5,500 businesses in the security, plumbing, electrical, HVAC, and solar and data networking industries. It has more than 200,000 users worldwide and more than 400 employees in offices across Australia, New Zealand, the U.K., and the U.S.

An expanding product

With Simpro, which integrates with existing software including accounting and HR analytics software, customers can use digital templates to build estimates and convert quotes into jobs. From a dashboard, they can schedule field service workers based on availability and job status, plus perform inventory tracking, connect materials to jobs, and send outstanding invoices.

Diljore expects the purchases of ClockShark and AroFlo to bolster Simpro’s suite in key, strategic ways. ClockShark, a Chico, California-based company founded by brothers Cliff Mitchell and Joe Mitchell in 2013, delivers an app that lets teams clock in and out while recording the timesheet data needed for payroll and job costing. Ringwood, Australia-based AroFlo, on the other hand, provides job management features including field service automation, work scheduling, geofencing, and GPS tracking.


AroFlo and ClockShark claim to have over 2,200 and 8,000 customers, respectively. AroFlo’s business is largely concentrated in Australia and New Zealand, where it says that over 25,000 workers use its platform for asset maintenance, compliance, and inventory across plumbing, electrical, HVAC, and facilities management.

Somewhat concerningly from a privacy standpoint, AroFlo offers what it calls a “driver management” feature that uses RFID technology to log which field service workers are driving which work vehicles. Beyond this, AroFlo allows companies to track the current and historical location of devices belonging to their field workers throughout the workday.

While no federal U.S. statutes restrict the use of GPS by employers nor force them to disclose whether they’re using it, workers have mixed feelings. A survey by TSheets showed that privacy was the third-most important concern of field service workers who were aware that their company was tracking their GPS location.

In its documentation, AroFlo suggests — but doesn’t require — employers to “speak to [field] users about GPS tracking.”

“AroFlo GPS lets you monitor your field technicians across the entire day,” the company writes on its website. “You’ll always know where they are, what they’re working on, and when they finish.”

A spokesperson told VentureBeat via email: “Simpro will continue offering GPS services and also has its own vehicle GPS tracking add-on, SimTrac. Implementation of GPS fleet tracking can help reduce risks, remain compliant with licenses and vehicle upkeep, and reduce costs in the business. It also benefits the technicians by improving their safety, spending less time in traffic and improving time management. Overall, GPS tracking provides improved visibility of staff and understanding of their location, introduces opportunities to reduce costs associated with travel, schedule smarter and even improve driver safety (by limiting their need to race across to another side of town to complete a job).”

A growing field

The field service management market is rapidly expanding, expected to climb from $2.85 billion in value in 2019 to $7.10 billion in 2026. While as many as 25% of field service organizations are still using spreadsheets for job scheduling, an estimated 48% were using field management software as of 2020, Fieldpoint reports. Customer demand is one major adoption driver. According to data from ReachOut, 89% of customers want to see “modern, on-demand technology” applied to their technician scheduling, and nearly as many would be willing to pay top dollar for it.

“The pandemic made many business owners realize how crucial it is to have the right technology in place for remote work. Trades businesses couldn’t afford to abandon projects or lose out on service and maintenance calls because of delayed response times or drawn-out time to complete,” Diljore told VentureBeat via email. “For these businesses, cloud-based software became a necessity for survival when previously it was a ‘nice to have.’”

Simpro competes with Zinier, which last January raised $90 million to automate field service management via its AI platform. The company has another rival in Workiz, a field service management and communication startup, as well as augmented reality- and AI-powered work management platform TechSee.

According to Tracxn, of the over 3,400 companies developing “field force automation” solutions (which include customer service tracking, order management, routing optimization, and work activity monitoring), more than 700 attracted a cumulative $5.8 billion from investors from 2018 to 2020.





Mabl aims to automate software testing, nabs $40M


In the enterprise, the pandemic brought about sharp increases in software release cadences and automation. That’s thanks in part to the adoption of low-code development platforms, which abstract away the programming traditionally required to build apps and software. Respondents to a 2021 Statista survey report that low-code tools have enabled them to release apps 40% to 60% faster, while a full 60% of developers tell GitLab that they’re committing code two times faster and bringing higher-impact technologies into their processes.

However, the accelerated pace of development has introduced new blockers, like a backlog of testing and code reviews. GitLab reports that only 45% of developers review code weekly, with 22% opting to do it every other week instead. It’s not that the pace of innovation in software testing is slowing — it isn’t — but that aspects like installation, maintenance, security, and compatibility remain hurdles.

The urgency of the challenge — insufficient testing can lead to security vulnerabilities — has put a spotlight on startups like Mabl, which today announced that it raised $40 million in a series C investment led by Vista Equity Partners. Founded in 2017 by Dan Belcher and Izzy Azeri, Mabl is among a crop of newer companies developing platforms that enable software teams to create, run, manage, and automate software and API tests.

“Despite the fact that the testing category is massive, no software-as-a-service leader had emerged in the space [prior to Mabl] … This void left enterprises to choose between legacy testing solutions that were incredibly complex and expensive and self-managed open source frameworks coupled with bespoke infrastructure that were inaccessible to quality assurance professionals,” Belcher told VentureBeat via email. “Mabl envisioned and delivered an end-to-end testing solution for the enterprise that would combine a low-code framework, a software-as-a-service delivery model, deep integration with enterprise environments and workflows, data analytics, and machine intelligence to disrupt the testing industry.”

Automating testing at scale

Belcher and Azeri are second-time founders, having previously launched Stackdriver, a service that provides performance and diagnostics data to public cloud users. Stackdriver was acquired by Google in 2014 for an undisclosed sum, and recently became a part of Google Cloud’s operations suite.

With Mabl, Belcher and Azeri set out to build a product that lets developers orchestrate browser, API, and mobile and web app tests from a single location. Mabl’s low-code dashboard is designed to automate end-to-end tests locally or in the cloud, running tests with different permutations of data to strengthen individual test cases.

“Lack of effective test automation is a major problem in the software industry, because ineffective testing leads to dramatically lower team productivity due to rework, overhead, and a general bottleneck in throughput, as quality engineering cannot execute with the same velocity as development and operations. Likewise, ineffective testing leads to poor product quality — from not only a functional perspective but also in terms of user experience, performance, accessibility, and more,” Belcher added. “Low-code is critical because it democratizes testing — making it accessible to quality engineers, manual testers, developers, product owners, and others — rather than limiting efforts to a quality engineer or dev.”


Above: Mabl’s automated testing orchestration dashboard.

Image Credit: Mabl

Mabl can generate tests from the contents of emails and PDFs, adapting as the tested app’s UI evolves with development. An AI-driven screenshot comparison feature attempts to mimic real-life visual UI testing to help spot unwanted changes to the UI, while a link-crawling capability autonomously generates tests that cover reachable paths within the app — giving insight into broken links.
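Mabl hasn't published how its crawler works, but the link-coverage idea can be sketched with the standard library: parse a page's HTML, collect every reachable href, and check each one's status. The status lookup below is a mock; a real crawler like the one Mabl describes would issue HTTP requests and recurse into discovered pages.

```python
from html.parser import HTMLParser

# Toy sketch of link-crawling test generation: extract hrefs from a page,
# then flag the broken ones. fetch_status is passed in as a mock here; a
# production crawler would make real HTTP requests and follow links.

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag encountered while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_broken_links(html: str, fetch_status) -> list:
    """Return hrefs whose HTTP status (per fetch_status) is not 200."""
    parser = LinkCollector()
    parser.feed(html)
    return [url for url in parser.links if fetch_status(url) != 200]

page = ('<a href="/home">Home</a> <a href="/gone">Old</a> '
        '<a href="/docs">Docs</a>')
statuses = {"/home": 200, "/gone": 404, "/docs": 200}
broken = find_broken_links(page, lambda url: statuses.get(url, 404))
```

The value of automating even this simple check is coverage: a crawler exercises every reachable path mechanically, which is exactly the kind of repetitive verification humans skip when release cadences speed up.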

“There are a number of areas where we use machine intelligence to benefit customers. In particular … Mabl automatically trains and updates structural and performance models of page elements, and incorporates these models into the timing and logic of test execution,” Belcher explained. “Mabl [also leverages] visual anomaly detection that uses visual models to differentiate between expected changes resulting from dynamic data and dynamic elements from unexpected changes and defects.”

Mabl allows customers to update and debug tests without affecting master versions. API endpoints can be used to trigger Mabl tests, as well as plugins for CI/CD platforms including GitHub, Bitbucket Pipelines, and Azure Pipelines. On the analytics side, Mabl shows metrics quantifying how well tests cover an app, identifying gaps based on statistics and interactive elements on a page.

Growing trend

Mabl’s users include dev teams at Charles Schwab, ADP, Stack Overflow, and JetBlue, and the Boston, Massachusetts-based company expects to more than double its recurring revenue this year. But the company faces competition from Virtuoso, ProdPerfect, Testim, Functionize, Mesmer, and Sauce Labs, among others. Markets and Markets predicts that the global automation testing market will grow from $12.6 billion in 2019 to $28.8 billion by 2024.

Automated tests aren’t a silver bullet. In a blog post, Amir Ghahrai, lead test consultant at Amido, lays out a few of the common issues that can come up, like wrong expectations of automated tests and automating tests at the wrong layer of the app stack.


Still, a growing number of companies are adopting automated testing tools — especially in light of high-profile ransomware and software supply chain attacks. According to a LogiGear survey, 38% of developers have at least tried test automation, even if it failed in its first implementation. Another source estimates that 44% of IT companies automated 50% or more of all testing in 2020.

“Mabl has over 200 customers globally, which includes 10 of the Fortune 500 and 32 of the Fortune Global 2000 companies. We have active, paid users in over 60 countries,” Belcher said. “The Mabl community in Japan has more than doubled in the past six months — reaching nearly 1,500 members. The Japanese customer base has increased by 322% and revenue has increased by over 300% in the past year, now representing 10% of the company’s total global revenue.”

Existing investors Amplify Partners, Google Ventures, Presidio Capital, and CRV also participated in 55-employee Mabl’s round. It brings the startup’s total capital raised to over $77 million.


