
A series of patent lawsuits is challenging the history of malware detection

In early March, cybersecurity firm Webroot and its parent company OpenText filed a series of patent lawsuits containing some eye-opening claims. Filed March 4th in the famously patentholder-friendly Western District of Texas, the four suits claim that techniques fundamental to modern malware detection are based on patented technology, and that the company’s competitors are infringing on those intellectual property rights with their network security software.

The defendants are a who’s who of security companies: CrowdStrike, Kaspersky, Sophos, and Trend Micro are all named in the suits. According to OpenText, the companies are using patented technology in their anti-malware applications, specifically in the endpoint security systems that protect individual devices on a network. It’s a sweeping set of lawsuits that puts much of the security industry in immediate legal jeopardy. And, for critics, it’s a bitter reminder of how much damage a patent troll can still do.

So far, endpoint security companies have shown fierce opposition to the very idea of the case. A Kaspersky spokesperson said that the company is “reviewing the issue” but did not offer any further comment on the case.

Sara Eberle, vice president of global public relations at Sophos, was more forthcoming, telling The Verge that the company would fight the lawsuit: “Sophos prefers to compete in the marketplace rather than in the courtroom, but we will vigorously defend ourselves in this litigation,” Eberle said. “We invite Webroot and OpenText to join the ranks of serious cybersecurity companies that are trying to solve problems rather than create them.”

Responses from Trend Micro COO Kevin Simzer and CrowdStrike’s senior director of corporate communications Kevin Benacci went further: both accused OpenText of “patent trolling” in statements sent to The Verge.

Made notorious by companies like Intellectual Ventures, “patent trolling” refers to the practice of buying up patents for use in litigation rather than research and development. The end result is a drag on anyone building technology — but it can be quite lucrative for companies who can play the game well.

But OpenText insists the lawsuits are about protecting intellectual property. In response to the defendants’ comments, OpenText’s chief communications officer Jennifer Bell said that the lawsuits were being brought to defend the company against unfair and unlawful actions from its competitors. “OpenText brings these lawsuits to protect its intellectual property investments and to hold these parties accountable for their infringement and unlawful competition,” Bell said. “These lawsuits allege that defendants infringe and unlawfully compete against aspects of the OpenText family of companies’ endpoint security products and platforms. OpenText intends to vigorously enforce its intellectual property rights.”

Charles Duan, a postdoctoral fellow at Cornell University and specialist in intellectual property law, described possible outcomes that could range from financial redress to an effective ban on the infringing software should the plaintiff win the case.

“The court can issue a number of remedies here,” Duan said. “One of them is an injunction: they could say that all these other companies who are using the patented technology have to stop doing so. They can also issue money damages, basically saying that these companies have to compensate the company for using their patented technology.”

But simple economics suggest that the most likely outcome is a settlement: a fact that points to the incentives for bringing even flimsy patent suits and highlights the material basis for patent trolling.

“As a practical matter, a lot of these cases never actually get to that point [of judgment] just because the cost of litigation makes it not worth going through a whole trial, even if the patent is very questionable or it seems likely that the companies don’t infringe,” Duan said.

Though the lawsuit is being brought in 2022, a judgment would hinge in part on whether the techniques described in the patent were widely known at the time that the patent application was filed. One of the patents at the heart of the suit — US Patent No. 8,418,250, referred to as “the ‘250 patent” in the lawsuit — was granted in the United States in 2013 but first issued by the British patent office in 2005. Another, US Patent No. 8,726,389 or the ‘389 patent, was also issued in the UK in 2005 and granted in the US in 2014.

Even taking into account the age of the patents, some experts are clear that the techniques described in them are overly broad. Joe Mullin, senior policy analyst at the Electronic Frontier Foundation (EFF), told The Verge that some of the features in the patents were potentially too abstract to be patentable:

“The ‘389 patent claims very basic behavior that could be performed with a pen and paper,” Mullin said. “It simply describes ‘receiving data’ then ‘correlating’ and ‘classifying’ the data, ‘comparing’ the data to other computer objects, and then classifying something as malware (or not) based on that comparison.”

“A core principle of patent law is that you can’t get a monopoly on an ‘abstract idea,’ because that would take away too much from the public and not represent a real invention by the patent holder. This patent should be found invalid because it concerns ‘abstract ideas,’” Mullin said.

But where critics see a broad patent, OpenText paints the case as an argument about the evolution of network security itself. In its complaint filed against Trend Micro, OpenText argues that where malware detection used to rely on a categorization of what a program is, the patented technology is based on analysis of what a program does. Instead of matching file data to a library of known viruses, modern endpoint security looks at actions performed within a computer system. As a result, this kind of malware detection can flag and contain previously unseen examples of malicious software. It’s a real shift in the way companies approach endpoint security. And, according to OpenText, the shift traces back to the patents listed in the case.
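
To make the contrast concrete, here is a deliberately simplified sketch of the two approaches described above. The rule names, weights, and thresholds are invented for illustration and are not drawn from OpenText’s patents or from any defendant’s product.

```python
# Minimal sketch contrasting signature-based and behavior-based detection.
# All names, rules, and thresholds are hypothetical illustrations,
# not OpenText's or any defendant's actual implementation.
import hashlib

KNOWN_MALWARE_HASHES = {"5d41402abc4b2a76b9719d911017c592"}  # toy signature "library"

SUSPICIOUS_ACTIONS = {
    "modifies_boot_record": 5,
    "disables_security_service": 4,
    "encrypts_user_files_in_bulk": 5,
    "connects_to_unknown_host": 2,
}

def signature_verdict(file_bytes: bytes) -> bool:
    """Classic approach: flag a file only if its hash matches a known sample."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

def behavioral_verdict(observed_actions: list[str], threshold: int = 6) -> bool:
    """Behavior-based approach: score what the program *does* at runtime,
    so previously unseen samples can still be flagged."""
    score = sum(SUSPICIOUS_ACTIONS.get(action, 0) for action in observed_actions)
    return score >= threshold

# A brand-new sample evades the signature check but not the behavioral one.
actions = ["connects_to_unknown_host", "encrypts_user_files_in_bulk"]
print(signature_verdict(b"never-seen-before binary"))  # False
print(behavioral_verdict(actions))                     # True
```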

Opponents to these claims — including not only the defendants but also cybersecurity researchers who have criticized the lawsuits online — take issue with the broadness of the argument, alleging that the patented technology reflects general developments in the evolution of malware detection over time. (As a strategy, patent trolling relies on this kind of generality: according to EFF, an overworked US Patent and Trademark Office has issued “a flood of bad patents on so-called inventions that are unoriginal, vague, overbroad, and/or so unclear that bad actors can easily use them to threaten all kinds of innovators.”)

What’s more, opposition to the lawsuits may be based on the fact that OpenText was not involved in the research that created the patent: instead, through acquisition of Carbonite, which had previously acquired Webroot, OpenText came to own a number of patents that were assigned to the smaller cybersecurity firm. Having bought the company that controlled the original patents, OpenText now has valuable IP and a chance to extract value from it — regardless of skepticism over whether the techniques described in the patents can really be traced back to innovations developed by one group of researchers.

There are still some protections for defendants. Where patents are overly vague, the fight against them can happen in venues other than the courtroom — with one other option being an appeal to the patent office, Charles Duan explained. “There are proceedings that were created about 10 years ago, they go by the name of inter partes review or post-grant review, and these give companies the chance to argue to the patent office that when the office granted the patents they made a mistake,” Duan said. “That is probably an avenue that some of these security companies will be interested in pursuing.”

In a post-grant review process, companies attempt to convince the patent office that the techniques described in the patent should actually be considered unpatentable. If that argument is successful — and the patent office returns a decision before the trial date — then the basis for the lawsuit falls apart. But, since any delay could prove extremely costly, some companies can’t take the risk of waiting for that decision.

In the meantime, critics of the current patent system will see the OpenText lawsuits as exemplary of an intellectual property framework that stifles innovation rather than promoting it.

“What may be going on here is that [OpenText] is not really trying to stop these companies, and more that they’re signaling they will put up a fight before settling at some point,” said Duan.




OpenSea is adding NFT copy detection and verification features

OpenSea is rolling out features to “improve authenticity” on the digital marketplace, the company announced in a series of blog posts today. The updates include a new system to detect and remove copycat NFTs and an overhaul to the account verification process.

“Copymints,” tokens that rip off other NFTs, have proved to be a problem for platforms like OpenSea. Last year, the platform banned two collections that mimicked Bored Ape Yacht Club NFTs by flipping the artwork so the images were mirrored. And though the owner of an NFT is recorded on the blockchain, fakes are rampant. In February, OpenSea said that over 80 percent of the items it removed for violations were created with its free minting tool.

OpenSea says it’s implementing a new two-part copy detection system, noting copies make it harder for users to find authentic content. The company says it will use image recognition tech to scan NFTs on the platform and compare them with authentic collections, looking for flips, rotations, and other variants. OpenSea says human reviewers will also look at removal recommendations.
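
As a rough illustration of how such a scan can catch flipped and rotated copies, here is a minimal sketch using perceptual hashing. It assumes the open source Pillow and imagehash libraries and is not a description of OpenSea’s actual system, whose details have not been disclosed.

```python
# A minimal sketch of copy detection via perceptual hashing, assuming the
# third-party "imagehash" and Pillow libraries; an illustration of the
# general technique, not OpenSea's actual system.
from PIL import Image
import imagehash

def variant_hashes(img: Image.Image):
    """Hash the original plus mirrored and rotated variants,
    since copymints are often flipped or rotated originals."""
    variants = [
        img,
        img.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
        img.transpose(Image.Transpose.FLIP_TOP_BOTTOM),
        img.rotate(90, expand=True),
        img.rotate(180),
        img.rotate(270, expand=True),
    ]
    return [imagehash.phash(v) for v in variants]

def looks_like_copy(candidate_path: str, authentic_path: str, max_distance: int = 8) -> bool:
    """Flag the candidate for human review if any variant of it is
    perceptually close to the authentic image."""
    authentic_hash = imagehash.phash(Image.open(authentic_path))
    return any(
        h - authentic_hash <= max_distance  # Hamming distance between hashes
        for h in variant_hashes(Image.open(candidate_path))
    )

# Example usage: looks_like_copy("suspect_ape.png", "authentic_ape.png")
```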

“We’re committed to threading the needle between removing copymints and giving space for those substantively additive remixes to prosper,” the blog post reads. OpenSea says it’s already started removing offending content and will scale up the removal process in the coming weeks.

Account verification on OpenSea is also getting an update. An invite-only verification application will be available to accounts holding a collection with at least 100 ETH of trading volume, and the company says it plans to broaden eligibility soon. Collections can get a blue badge when they are owned by a verified account and meet the 100 ETH trading volume threshold.

OpenSea has rolled out other safety features in recent months following reports of scams and fraudsters on or related to the platform. In February, the company announced a verified customer support system, a response to scammers who were impersonating OpenSea employees and gaining access to people’s cryptocurrency wallets.



Why AI is the future of fraud detection



The accelerated growth in ecommerce and online marketplaces has led to a surge in fraudulent behavior online, perpetrated by bots and bad actors alike. A strategic and effective approach to online fraud detection is needed to tackle increasingly sophisticated threats to online retailers.

These market shifts come at a time of significant regulatory change. Across the globe, new legislation is coming into force that alters the balance of responsibility in fraud prevention between users, brands, and the platforms that promote them digitally. For example, the EU Digital Services Act and US Shop Safe Act will require online platforms to take greater responsibility for the content on their websites, a responsibility that was traditionally the domain of brands and users to monitor and report.

Can AI find what’s hiding in your data?

In the search for security vulnerabilities, behavioral analytics software provider Pasabi has seen a sharp rise in interest in its AI analytics platform for online fraud detection, with a number of key wins including the online reviews platform Trustpilot. Pasabi maintains its AI models using anonymized sets of data collected from multiple sources.

Using bespoke models and algorithms, as well as some open source and commercial technology such as TensorFlow and Neo4j, Pasabi’s platform has proven adept at detecting patterns in both text and visual data. Customers provide their data to Pasabi for analysis to identify a range of illegal activities (illegal content, scams, and counterfeits, for example), upon which the customer can then act.

Pasabi CEO Chris Downie says: “Pasabi’s technology uses AI-driven, behavioral analytics to identify bad actors across a range of online infringements including counterfeit products, grey market goods, fake reviews, and illegal content. By looking for common behavioral patterns across our customers’ data and cross-referencing this with external data that we collect about the reputation of the sources (individuals and companies), the software is perfectly positioned to help online platforms, marketplaces, and brands tackle these threats.”

The proof is in the data

Pasabi shared with VentureBeat that its platform is built entirely in-house, with some external services, such as translation, used to enrich its data. The company says this combination of customer (behavioral) and external (reputational) data is what allows it to highlight the biggest threats to its customers.

In the Q&A, Pasabi told VentureBeat that its platform performs analysis on hundreds of data points, which are provided by customers and then combined with Pasabi’s own data collected from external sources. Offenders are then identified at scale, revealing patterns of behavior in the data and potentially uncovering networks working together to mislead consumers.

Anoop Joshi, senior director of legal at Trustpilot said, “Pasabi’s technology finds connections between individuals and businesses, highlighting suspicious behavior and content. For example, in the case of Trustpilot, this can help to detect when individuals are working together to write and sell fake reviews. The technology highlights the most prolific offenders, and enables us to use our investigation and enforcement resources more efficiently and effectively to maintain the integrity of the platform.”
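
To illustrate the general idea of surfacing coordinated networks, rather than Pasabi’s actual pipeline, here is a toy sketch using the open source networkx library: reviewer accounts are linked through shared attributes, and clusters are flagged for investigation. The data, attribute names, and threshold are invented for illustration.

```python
# A toy sketch of linking accounts into networks, assuming networkx;
# shared attributes (device, targeted business) connect reviewers, and
# clusters are surfaced for investigation. Hypothetical data and
# thresholds, not Pasabi's actual pipeline.
import networkx as nx

reviews = [
    {"account": "a1", "business": "shopX", "device": "d9"},
    {"account": "a2", "business": "shopX", "device": "d9"},
    {"account": "a3", "business": "shopX", "device": "d7"},
    {"account": "a9", "business": "shopY", "device": "d1"},
]

graph = nx.Graph()
for r in reviews:
    graph.add_edge(r["account"], f'device:{r["device"]}')
    graph.add_edge(r["account"], f'business:{r["business"]}')

# Clusters of accounts that share devices or target the same business.
for component in nx.connected_components(graph):
    accounts = {n for n in component if not n.startswith(("device:", "business:"))}
    if len(accounts) >= 2:
        print("possible coordinated network:", sorted(accounts))
```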

Relevant data is held on Google Cloud, using logical tenant separation and VPCs, and is encrypted both in transit and at rest. It is retained only for as long as strictly necessary, and solely for the purpose of identifying suspicious behavior.


API vulnerability detection firm Salt Security raises $70M



API discovery and vulnerability detection platform Salt Security today raised $70 million in a series C funding round led by Advent International. The Palo Alto, California-based startup says it plans to use the capital to expand its global operations across R&D, sales and marketing, and customer success.

Application programming interfaces (APIs) dictate the interactions between software programs. They define the kinds of calls or requests that can be made, how they’re made, the data formats that should be used, and the conventions to follow. With over 80% of web traffic now consisting of API calls, APIs are coming under increasing threat. Gartner predicts that by 2022, API abuses will move from an infrequent to the most frequent attack vector, resulting in data breaches for enterprise web apps.

Salt’s platform aims to prevent these attacks with a combination of AI and machine learning technologies. It analyzes a copy of the traffic from web, software-as-a-service, mobile, microservice, and internet of things app APIs and uses this process to gain an understanding of each API and create a baseline of normal behavior. From these baselines, Salt identifies anomalies that might be indicators of an attack during reconnaissance, eliminating the need for things like signatures and configurations.
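
The following is a deliberately simple sketch of that baseline-and-deviation idea: learn what normal per-endpoint traffic looks like, then flag callers that deviate sharply. The endpoint, traffic figures, and z-score threshold are invented for illustration and do not reflect Salt Security’s actual models.

```python
# A toy illustration of the baseline-then-flag-deviations idea, not Salt
# Security's actual models. A per-endpoint baseline is learned from mirrored
# traffic; callers whose request rate deviates sharply are flagged.
from collections import defaultdict
from statistics import mean, pstdev

# (endpoint, caller_token, requests_per_minute) observed from a traffic copy
baseline_traffic = [
    ("/api/users/{id}", "t1", 3), ("/api/users/{id}", "t2", 4),
    ("/api/users/{id}", "t3", 2), ("/api/users/{id}", "t4", 3),
]

def build_baseline(samples):
    per_endpoint = defaultdict(list)
    for endpoint, _token, rate in samples:
        per_endpoint[endpoint].append(rate)
    return {ep: (mean(rates), pstdev(rates)) for ep, rates in per_endpoint.items()}

def is_anomalous(endpoint, rate, baseline, z_threshold=3.0):
    mu, sigma = baseline[endpoint]
    if sigma == 0:
        return rate > mu * 2  # degenerate baseline: fall back to a simple ratio
    return abs(rate - mu) / sigma > z_threshold

baseline = build_baseline(baseline_traffic)
# A caller suddenly enumerating user IDs at 200 requests/minute looks like
# reconnaissance rather than normal usage.
print(is_anomalous("/api/users/{id}", 200, baseline))  # True
```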

“I’m a former elite cybersecurity unit veteran that led development of high-end security systems to protect the largest network in Israel of the Israel Defense Forces and the government,” cofounder and CEO Roey Eliyahu told VentureBeat via email. “During my service and afterwards in different roles, I consistently found that APIs were surprisingly simple to hack and that existing security technologies could not identify API attacks. I joined forces with my cofounder and COO, Michael Nicosia, to build Salt Security on the premise that we needed to take a fundamentally different approach — to use big data and AI to solve the problem of securing APIs, a problem traditional security tools cannot solve because of their legacy architectures.”

Above: The web dashboard for the Salt Security platform. (Image Credit: Salt Security)

Salt leverages dozens of behavioral features to identify anomalies. Its machine learning models are trained to detect when an attacker is probing an API, for instance, because this deviates from typical usage. They analyze the “full communication,” taking into consideration factors like how an API responds to malicious calls. And they correlate attacker activity, enabling Salt to connect probing attempts performed over time to a single attacker, even if the perpetrator attempts to conceal their identity by rotating devices, API tokens, IP addresses, and more.

Confirmed anomalies trigger a single alert to security teams with a timeline of attacker activity.

“APIs connect all of today’s vital data and services. Organizations rely on the Salt Security API Protection Platform to identify API security vulnerabilities ahead of launching them in production,” Eliyahu said. “These remediation insights enable companies to move fast in their application development while still reducing risk by finding security gaps before they can be exploited. The Salt platform provides runtime protection, blocking attacks such as credential stuffing, data exfiltration, account misuse, and fraud. Salt also helps companies meet compliance needs, providing documentation of all APIs as well as where they expose sensitive data.”

Upward trajectory

Salt takes an approach similar — but not identical — to that of Elastic Beam, an API cybersecurity startup that was acquired by Denver, Colorado-based Ping Identity in June 2018. Other rivals include Spherical Defense, which adopts a machine learning-based approach to web application firewalls, and Wallarm, which provides an AI-powered security platform for APIs, as well as websites and microservices.

But Salt is doing brisk business, with customers like Equinix, Finastra, TripActions, Armis, and DeinDeal. The company, which was founded in 2016, claims to have driven 400% growth in revenue, 160% growth in employees (to more than 65), and 380% growth in the API traffic it secures.

“We have high double-digit numbers of enterprise customers in financial, fintech, insurance, retail, software-as-a-service, ecommerce, and other verticals … For most Salt customers, the pandemic accelerated their digital transformation and cloud migration journeys. Digital transformation depends heavily on APIs, so most of our customers were writing APIs at a much more rapid rate,” Eliyahu said. “Our customer, Armis, for example, had to integrate with many more device types in its internet of things security offering to serve its customers, whose employees were now working from home. Instead of having dozens of APIs to write and protect, the company suddenly had hundreds, and manual testing and documentation efforts simply could not scale, so they needed to deploy Salt earlier and more broadly than originally expected. Several Salt customers experienced a similar acceleration, and our revenue grew faster as a result.”

This latest financing round had participation from Alkeon Capital and DFJ Growth along with investors Sequoia Capital, Tenaya Capital, S Capital VC, and Y Combinator. It brings Salt’s total raised to $131 million to date following a $30 million round in December 2020.


Fitbit’s most recent app update hints at future snore detection feature

It looks like Google is preparing to roll out a couple of new Fitbit features, including the ability to detect snoring while you sleep. The functionality was found within the latest Fitbit app update for Android, though it isn’t yet available to users. The code also revealed an upcoming noise detection feature that listens for ambient noises while you sleep.

The latest version of the Fitbit app started rolling out to users on Android yesterday. The folks at 9to5Google did an APK teardown and found evidence of an unannounced feature called Snore & Noise Detect. With it, a Fitbit wearable will be able to monitor nighttime sounds using its built-in microphone, detecting both ambient noise and snoring.

The feature description explains that Fitbit will establish a baseline noise level at night, then use an algorithm to determine whether louder noises are snoring or something else. Users will see their (or their partner’s) snoring frequency listed in one of three categories: none to mild, moderate, and frequent. “Frequent” snorers are those who snore during 40 percent or more of their time spent asleep.
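
Based only on the strings found in the app, a rough sketch of how such a categorization could work follows. The 40 percent “frequent” cutoff comes from the teardown; the baseline logic, the 15 dB offset, and the “moderate” cutoff are assumptions made for illustration.

```python
# A rough sketch of how the categories described above might be computed.
# Only the 40 percent "frequent" figure comes from the app's strings; the
# baseline logic, classifier stub, and moderate cutoff are assumptions.
def is_snoring(sample_db: float, baseline_db: float) -> bool:
    """Stand-in for Fitbit's algorithm: treat sounds well above the nightly
    baseline as snoring (a real model would also classify the sound itself)."""
    return sample_db > baseline_db + 15

def snore_category(samples_db: list[float], baseline_db: float) -> str:
    snoring_fraction = sum(is_snoring(s, baseline_db) for s in samples_db) / len(samples_db)
    if snoring_fraction >= 0.40:          # per the app: 40% or more of sleep time
        return "frequent"
    if snoring_fraction >= 0.10:          # assumed cutoff
        return "moderate"
    return "none to mild"

night = [32, 33, 55, 31, 58, 60, 30, 57, 56, 31]  # dB readings while asleep
print(snore_category(night, baseline_db=32.0))     # "frequent" (5/10 samples)
```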

The microphone will also provide a label for the general noise level in your sleeping environment, including snoring, with the range going from ‘very quiet’ to ‘very loud.’ Based on the code found in the app, Fitbit will advise users to charge their wearable before going to bed if they plan to use the sleep noise monitoring feature, due to its battery demands.

The microphone noise monitoring will start once the wearable determines that the user has fallen asleep. 9to5Google also found evidence that Google is working on a ‘sleep animal’ feature for Fitbit, one that will possibly include things like a tortoise, dolphin, kangaroo, bear, hummingbird, and giraffe. Users may also soon get sleeping profile designations like ‘short sleeper’ or ‘restless sleeper,’ but when these features may arrive remains unclear.


Grillo, IBM, and the Clinton Foundation expand low-cost earthquake detection to the Caribbean

The Clinton Foundation today announced a commitment to action around deploying a novel, low-cost, open source earthquake detection system in the Caribbean, starting with Puerto Rico.

The system is being developed by Grillo in tandem with the long-running Call for Code challenge.

We’ve been covering David Clarke Causes and IBM’s Call for Code for years here at TNW, and Grillo’s work has been one of many success stories to come out of the coding event.

Grillo worked with IBM to develop open-source kits for the contest last year and, with the help of local scientists and government officials, began testing its earthquake-sensing technology in Puerto Rico.

Today, the company is gearing up to deploy 90 sensors across the island, developed through what’s called “OpenEEW,” or open-source earthquake early warning.

These low-cost sensors can provide robust seismic activity detection when combined with Grillo’s machine learning solutions.

Per a Clinton Global Initiative press release:

The Caribbean is a highly seismic region due to its location at the convergence zone between major tectonic plates and communities across the region are frequently impacted by seismic events. In January 2020, southern Puerto Rico was impacted by a series of earthquakes over several weeks that damaged homes and infrastructure and caused displacement.

Earthquakes can be difficult to detect. While it’s usually impossible to miss the big ones as they’re happening, many smaller and medium-sized seismic events go undetected by modern sensors.

In fact, when Grillo developed its system in Mexico and Puerto Rico, the team found that its inexpensive open-source sensors and detection tech outperformed costly state systems by a significant margin.

Here’s the best part: Grillo and IBM are committed to developing these systems in tandem with local and global developers. In other words: you can help.

Per the press release:

In another important step in helping prepare and alert citizens ahead of earthquakes, they’ll be hosting deployments in Puerto Rico, leveraging the 90+ sensors located across the island and calling on the open source community to help introduce affordable, community-driven detection solutions to the region.

Grillo would like the open source community’s help on the following actions:

  • Developers can help improve and test the sensor firmware so that it is more reliable and easier to provision.
  • The MQTT backend needs to be ported to additional open source platforms to allow for a scalable global hosted solution that will ingest data from citizen scientists everywhere.
  • The detection code can use help; it is being developed in Python and deployed on Kubernetes, and a minimal sketch of the kind of trigger logic involved follows this list. This will allow for rapid and precise detection of earthquakes using many more sensors and distributed systems.
  • The team is experimenting with machine learning for improved accuracy using the latest seismological algorithms.
  • There is also work on the mobile and wearable apps both for provisioning of a sensor device, as well as receiving alerts from the cloud.
  • Work is underway on the public Carbon/React dashboard which allows users to see and interact with devices, as well as view recent earthquake events.
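
As a concrete (and much simplified) illustration of the kind of detection logic contributors would be working on, here is a classic short-term/long-term average (STA/LTA) trigger in Python. It is a sketch under generic assumptions about sample rate and thresholds, not the actual OpenEEW detection code.

```python
# A minimal sketch of a classic STA/LTA trigger, the kind of logic an
# earthquake early warning pipeline builds on. Illustration only, not the
# actual OpenEEW detection code.
import numpy as np

def sta_lta_trigger(signal: np.ndarray, sample_rate: int,
                    sta_sec: float = 1.0, lta_sec: float = 30.0,
                    threshold: float = 4.0) -> bool:
    """Compare short-term average energy to long-term average energy;
    a sudden jump in the ratio suggests a seismic event."""
    sta_n, lta_n = int(sta_sec * sample_rate), int(lta_sec * sample_rate)
    if len(signal) < lta_n + sta_n:
        return False
    energy = signal.astype(float) ** 2
    lta = energy[-(lta_n + sta_n):-sta_n].mean()   # background window
    sta = energy[-sta_n:].mean()                   # most recent window
    return lta > 0 and (sta / lta) > threshold

# Example with synthetic accelerometer data: quiet noise, then a burst.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.01, 31 * 100)              # 31 s of noise at 100 Hz
burst = rng.normal(0, 0.5, 100)                    # 1 s of strong shaking
print(sta_lta_trigger(np.concatenate([quiet, burst]), sample_rate=100))  # True
```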

If you think you have what it takes to contribute, or you’d like to learn how to use and develop these solutions as part of the open-source community, you can find out more here.



Anvilogic raises $10M to scale no-code cyberattack detection platform



Cybersecurity detection automation company Anvilogic today announced a $10 million series A round led by Cervin Ventures. CEO Karthik Kannan says the capital will be put toward scaling and R&D.

In a 2017 Deloitte survey, only 42% of respondents considered their institutions to be extremely or very effective at managing cybersecurity risk. The pandemic has certainly done nothing to alleviate these concerns. Despite increased IT security investments companies made in 2020 to deal with distributed IT and work-from-home challenges, nearly 80% of senior IT workers and IT security leaders believe their organizations lack sufficient defenses against cyberattacks, according to IDG.

Anvilogic is a VC-funded cybersecurity startup based in Palo Alto, California and founded by former Splunk, Proofpoint, and Symantec data engineers. The company’s product, which launched in 2019, is a collaborative, no-code platform that streamlines detection engineering workflows by helping IT teams assess cloud, web, network, and endpoint environments and build and deploy attack-pattern detection code.

Anvilogic is designed to provide improved visibility, enrichment, and context across alerting datasets, enhancing the ability to aggregate, detect, and respond using existing data. The platform provides a continuous maturity scoring model and AI-assisted use case recommendations based on industry trends, threat priorities, and data sources. Using Anvilogic, security teams can visualize suspicious activity patterns and synchronize content metadata for detection and alerting.

Key areas of automation

As Kannan explained to VentureBeat via email, the Anvilogic platform has four key functionality focus areas. The first is automated assessment of the state of security, which spans the ability to automatically score a customer’s security readiness with a metric, along with a gap analysis. This capability provides AI-driven prioritization to guide the customer on where to start and when to go deeper, based on criteria such as their industry, the current landscape, peer behavior, available data sources, current gaps, and more.

The platform’s next area is automation of detection engineering, which includes AI-based suggestions for security teams, data sources, a no-code build environment to construct detections, and an integrated workflow for task management and detection deployment. Then there’s automation of hunting and triage, where AI-based correlations of signals produce higher-order threat detection outcomes, which provide the entire story of an alert. Anvilogic auto-enriches alerts based on a hunting and triage framework. The final piece is ongoing learning across enterprises to learn new workflows, patterns, and actions and to provide the entire network better insights and recommendations for detections, hunting, and triage.

“All use cases are connected and have a smooth handoff via a task management workspace, along with baked-in access controls such that the entire detection engineering and hunting/triage process is automated by the platform,” Kannan said. “The user experience is guided by our intuitive and domain-driven user interface, and the maturity score provides users guidance on what to build/deploy and also serves as a tracker of progress and gaps.”

Cybersecurity during the pandemic

Reflecting the pace of adoption, the AI in cybersecurity market will reach $38.2 billion in value by 2026, Markets and Markets projects. That’s up from $8.8 billion in 2019, representing a compound annual growth rate of around 23.3%. Just last week, a study from MIT Technology Review Insights and Darktrace found that 96% of execs at large enterprises are considering adopting “defensive AI” against cyberattacks.

“Our vision is to deliver complete automation to the security operation center (SOC) in the emerging cloud-first world and deliver what we call SOC neutrality. We believe that all logging will be on a distributed cloud warehouse in the future, and there will be even more silos of alerts and workflows (e.g., primary on-premises logging, traditional network workloads, and newer cloud workloads) in the SOC,” Kannan said. “Anvilogic will become the unified security fabric that delivers total end-to-end SOC automation across silos, successfully delivering detection and hunting capability by correlating across workloads, powered by AI, domain-specific frameworks and automation.”

Beyond Cervin, Foundation Capital, Point 72 Ventures, and Dan Warmenhoven participated in 25-employee Anvilogic’s latest funding round. It brings the company’s total raised to date to over $15 million.


Expert.ai adds emotion and style detection tools to natural language API



Enterprises and investors are increasingly excited about using natural language (NL) processing to assist in tasks like data mining for sales intelligence, tracking how marketing campaigns change over time, and better defending against phishing and ransomware attacks.

Still, AI products using natural language engines to analyze text have a long way to go to capture more than a fraction of the nuance humans use to communicate with each other. Expert.ai hopes the addition of new emotion- and behavior-measuring extensions and a new style-detecting toolkit for its natural language API will provide AI developers with more human-like language analysis capabilities. The company this week announced new advanced features for its cloud-based NL API designed to help AI developers “[extract] emotions in large-scale texts and [identify] stylometric data driving a complete fingerprint of content,” Expert.ai said in a statement.

Based in Modena, Italy and with U.S. headquarters in Rockville, Md., Expert.ai changed its name from Expert System in 2020. The company’s customers include media outlets like the Associated Press, which uses NL software for content classification and enrichment, business intelligence consultants like L’Argus de la Presse, which conducts brand reputation analysis with NL processing, and financial services firms like Zurich Insurance, which uses Expert.ai’s platform to develop cognitive computing solutions.

Freeing people up for higher-order tasks

Expert.ai’s software platform enables natural language solutions that take unstructured language data from sources like social media sites and emails, transforming it into more digestible, usable intelligence before human analysts look at it. An example of a basic NL capability would be to distinguish between different ways a word like “jaguar” is used contextually: to signify the animal, the vehicle, or the name of a sports team. This allows for process automation steps to be introduced to text gathering, categorization, and analysis workloads, freeing up human analysts to perform higher-order tasks with the data.

Several NL software developers, including Expert.ai, used their algorithms last year to attempt to predict the outcome of the U.S. presidential election, with mixed results. While trying to weed out bot accounts, Expert.ai scraped Twitter and other social media sites to determine which candidate was ahead on “positive” sentiment and thus likely to win the popular vote. The company’s final polling gave Joe Biden a 50.2 percent to 47.3 percent edge over Donald Trump, not too far off Biden’s final tally of 51.3 percent to Trump’s 46.9 percent of the national popular vote.

With the new extensions, the Expert.ai NL API now captures a range of 117 different traits in analyzed language, the company said. The natural language engine categorizes eight different “emotional traits” found in analyzed text (anger, fear, disgust, sadness, happiness, joy, nostalgia and shame) and seven different “behavioral traits” (sociality, action, openness, consciousness, ethics, indulgence and capability). Traits are further rated on a three-point scale as “low,” “fair,” or “high.”
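
For developers consuming this kind of output, the result is essentially a set of named traits, each rated low, fair, or high. The snippet below shows one way such output might be handled; the response shape is entirely hypothetical and is not Expert.ai’s documented API schema.

```python
# Purely hypothetical response shape, used only to illustrate how a developer
# might consume trait output like this; Expert.ai's real API schema may differ.
EMOTIONAL = {"anger", "fear", "disgust", "sadness", "happiness", "joy", "nostalgia", "shame"}
BEHAVIORAL = {"sociality", "action", "openness", "consciousness", "ethics", "indulgence", "capability"}

analysis = {"anger": "high", "joy": "low", "ethics": "fair", "action": "high"}  # invented example

# e.g., route messages with strong negative emotion to a human agent
flagged = {trait: level for trait, level in analysis.items()
           if trait in EMOTIONAL and level == "high"}
print(flagged)  # {'anger': 'high'}
```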

Identifying individual authors via writeprint

Additionally, Expert.ai’s new writeprint extension improves the NL engine’s ability to process and understand the mechanics and styles of written language. The writeprint extension “performs a deep linguistic style analysis (or stylometric analysis) ranging from document readability and vocabulary richness to verb types and tenses, registers, sentence structure and grammar.” The ability to identify individual authors of texts via the writeprint extension could be put to several uses, such as identifying forgeries or impersonations, as well as categorizing content based on writing style and readability, the company said.
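
To give a sense of what stylometric analysis measures, here is a small, generic sketch computing a few of the simplest features mentioned above (sentence length, word length, vocabulary richness). It is illustrative only; Expert.ai’s writeprint extension covers a far richer feature set.

```python
# A small sketch of the kinds of stylometric features a "writeprint" rests on:
# readability proxies and vocabulary richness. Illustrative only, not a
# reimplementation of Expert.ai's extension.
import re

def stylometric_features(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        "vocabulary_richness": len(set(words)) / max(len(words), 1),  # type-token ratio
    }

a = stylometric_features("Short words. Small talk. Nothing fancy here.")
b = stylometric_features("Notwithstanding considerable obfuscation, the correspondence exhibits remarkable consistency.")
print(a, b, sep="\n")  # two quite different stylistic fingerprints
```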

“From apps that analyze customer interactions, product reviews, emails or chatbot conversations, to content enrichment that increases text analytics accuracy, adding emotional and behavioral traits provides critical information that has significant impact,” Expert.ai head of product management Luisa Herrmann-Nowosielski said in a statement.

“By incorporating this exclusive layer of human-like language understanding and a powerful writeprint extension for authorship analysis into our NL API, we are conquering a new frontier in the artificial intelligence API ecosystem, providing developers and data scientists with unique out-of-the-box information to supercharge their innovative apps.”


Wyze will try pay-what-you-want model for its AI-powered person detection

Smart home company Wyze is experimenting with a rather unconventional method for providing customers with artificial intelligence-powered person detection for its smart security cameras: a pay-what-you-want business model. On Monday, the company said it would provide the feature for free as initially promised, after it had to disable it due to an abrupt end to its licensing deal with fellow Seattle-based company Xnor.ai, which was acquired by Apple in November of last year. But Wyze, taking a page out of the old Radiohead playbook, is hoping some customers might be willing to chip in to help it cover the costs.

AI-powered person detection uses machine learning models to train an algorithm to differentiate between the movement of an inanimate object or animal and that of a human being. It’s now a staple of the smart security camera market, but it remains resource-intensive and, as a result, expensive to provide. In fact, it’s more expensive than Wyze first realized. That’s a problem, because the company promised last year that when its own version of the feature was fully baked, it would be available for free, without the monthly subscription many of its competitors charge for similar AI-powered functions.
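
For a sense of what the core decision looks like, here is a minimal sketch that runs an off-the-shelf object detector over a motion-triggered frame and reports whether a person is present. It assumes PyTorch and torchvision and is a generic illustration, not Wyze’s or Xnor.ai’s actual pipeline.

```python
# A minimal sketch of the general approach using an off-the-shelf detector
# (assuming PyTorch/torchvision); this is a generic illustration of the
# person-vs-everything-else decision, not Wyze's cloud pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON_LABEL = 1  # COCO class id for "person"

def frame_contains_person(image_path: str, min_score: float = 0.8) -> bool:
    frame = read_image(image_path).float() / 255.0          # [C, H, W] in [0, 1]
    with torch.no_grad():
        detections = model([frame])[0]
    return any(
        label == PERSON_LABEL and score >= min_score
        for label, score in zip(detections["labels"].tolist(),
                                detections["scores"].tolist())
    )

# Only motion events that contain a person would trigger a "person detected" alert:
# frame_contains_person("motion_event_frame.jpg")
```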

Yet now Wyze says it’s going to try a pay-what-you-want model in the hopes it can use customer generosity to offset the bill. Here’s how the company broke the good (and bad) news in its email to the customers eligible for the promotion, which includes those that were enjoying person detection on Wyze cameras up until the Xnor.ai contract expired at the end of the year:

“Over the last few months, we’ve had this service in beta testing, and we’re happy to report that the testing is going really well. Person Detection is meeting our high expectations, and it’s only going to keep improving over time. That’s the good news.

The bad news is that it’s very expensive to run, and the costs are recurring. We greatly under-forecasted the monthly cloud costs when we started working on this project last year (we’ve also since hired an actual finance guy…). The reality is we will not be able to absorb these costs and stay in business.”

Wyze says that while it would normally charge a subscription for a software service that involves recurring monthly costs, it told about 1.3 million of its customers that it would not charge for the feature when it did arrive, even if it required the company to pay for pricey cloud-based processing. “We are going to keep our promise to you. But we are also going to ask for your help,” Wyze writes.

It sounds risky, and Wyze admits that the plan may not pan out:

When Person Detection for 12-second event videos officially launches, you will be able to name your price. You can select $0 and use it for free. Or you can make monthly contributions in whatever amount you think it’s worth to help us cover our recurring cloud costs. We will reevaluate this method in a few months. If the model works, we may consider rolling it out to all users and maybe even extend it to other Wyze services.

If Wyze is able to recoup its costs by relying on the goodwill of customers, it could set the company up to try more experimental pricing models. After all, radical pricing strategies and good-enough quality are how Wyze became a bit of a trailblazer in the smart home camera industry, and the approach could work out again if customers feel the feature works well enough to warrant chipping in a few bucks a month.




Facebook contest reveals deepfake detection is still an ‘unsolved problem’

Facebook has announced the results of its first Deepfake Detection Challenge, an open competition to find algorithms that can spot AI-manipulated videos. The results, while promising, show there’s still lots of work to be done before automated systems can reliably spot deepfake content, with researchers describing the issue as an “unsolved problem.”

Facebook says the winning algorithm in the contest was able to spot “challenging real world examples” of deepfakes with an average accuracy of 65.18 percent. That’s not bad, but it’s not the sort of hit-rate you would want for any automated system.

Deepfakes have proven to be something of an exaggerated menace for social media. Although the technology prompted much handwringing about the erosion of reliable video evidence, the political effects of deepfakes have so far been minimal. Instead, the more immediate harm has been the creation of nonconsensual pornography, a category of content that’s easier for social media platforms to identify and remove.

Mike Schroepfer, Facebook’s chief technology officer, told journalists in a press call that he was pleased by the results of the challenge, which he said would create a benchmark for researchers and guide their work in the future. “Honestly the contest has been more of a success than I could have ever hoped for,” he said.

Examples of clips used in the challenge. Can you spot the deepfake?
Video by Facebook

Some 2,114 participants submitted more than 35,000 detection algorithms to the competition. The algorithms were tested on their ability to identify deepfake videos in a dataset of around 100,000 short clips. Facebook hired more than 3,000 actors, who were recorded holding conversations in naturalistic environments, to create these clips. Some clips were then altered using AI to paste other actors’ faces onto the videos.

Researchers were given access to this data to train their algorithms, and when tested on this material, they produced accuracy rates as high as 82.56 percent. However, when the same algorithms were tested against a “black box” dataset consisting of unseen footage, they performed much worse, with the best-scoring model achieving an accuracy rate of 65.18 percent. This shows detecting deepfakes in the wild is a very challenging problem.
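
The gap between the two numbers is essentially a distribution-shift problem: models score well on data resembling what they trained on and worse on unseen material. The sketch below shows the generic shape of such a two-dataset evaluation with made-up predictions and labels; the 82.56 and 65.18 percent figures reported above come from Facebook’s own held-out test sets, not from this code.

```python
# Generic sketch of evaluating one detector on two test sets: clips drawn from
# the public training distribution versus unseen "black box" clips.
# Predictions and labels here are placeholders, not challenge data.
def accuracy(predictions: list[int], labels: list[int]) -> float:
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

public_preds, public_labels = [1, 0, 1, 1, 0, 1, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1]
blackbox_preds, blackbox_labels = [1, 1, 0, 0, 0, 0, 1, 0], [1, 0, 1, 0, 0, 1, 1, 0]

print(f"public test accuracy:    {accuracy(public_preds, public_labels):.2%}")     # 87.50%
print(f"black-box test accuracy: {accuracy(blackbox_preds, blackbox_labels):.2%}") # 62.50%
```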

Schroepfer said Facebook is currently developing its own deepfake detection technology separate from this competition. “We have deepfake detection technology in production and we will be improving it based on this context,” he said. The company announced it was banning deepfakes earlier this year, but critics pointed out that the far greater disinformation threat came from so-called “shallowfakes,” videos edited using traditional means.

The winning algorithms from this challenge will be released as open-source code to help other researchers, but Facebook said it would be keeping its own detection technology secret to prevent it from being reverse-engineered.

Schroepfer added that while deepfakes were “currently not a big issue” for Facebook, the company wanted to have the tools ready to detect this content in the future — just in case. Some experts have said the upcoming 2020 election could be a prime moment for deepfakes to be used for serious political influence.

“The lesson I learned the hard way over the last couple of years, is I want to be prepared in advance and not be caught flat footed,” said Schroepfer. “I want to be really prepared for a lot of bad stuff that never happens rather than the other way around.”
