Categories
Game

Massive ‘Grand Theft Auto VI’ leak shows off early gameplay footage

A massive trove of footage from the next installment in Rockstar’s Grand Theft Auto series has leaked online. Early Sunday morning, a hacker who goes by teapotuberhacker uploaded 90 videos from a test build of Grand Theft Auto VI to GTAForums. Since PC Gamer spotted the post, the clips have proliferated across YouTube and social media, and as of this writing, they’re still viewable.

In line with reporting Bloomberg’s Jason Schreier published in July, the footage shows two playable protagonists. One of them is a female character named Lucia, whom we see robbing a restaurant in one of the clips. In a separate video, the other playable character rides the “Vice City Metro,” indicating that GTA VI will take place in a fictionalized version of Miami. According to Schreier, the leaked footage is legitimate.

“Not that there was much doubt, but I’ve confirmed with Rockstar sources that this weekend’s massive Grand Theft Auto VI leak is indeed real. The footage is early and unfinished, of course,” he tweeted. “This is one of the biggest leaks in video game history and a nightmare for Rockstar Games.”

Adding intrigue to an already interesting story, teapotuberhacker claims they’re also responsible for the recent Uber hack. They said they obtained the test build after gaining access to a Rockstar employee’s Slack account and may upload additional data online, including source code and assets from GTA V and GTA VI, as well as the test build itself. It’s unclear how old this version of the game is. Rockstar has reportedly been working on GTA VI since 2014. In July, Schreier reported the studio was at least another two years away from releasing the game to the public.

Grand Theft Auto series publisher Take-Two Interactive did not immediately respond to Engadget’s request for comment.

Update, 9/19/2022, 9:18am ET: Rockstar Games confirmed the leak Monday morning in a tweet.





Categories
AI

Nexar, which sells analytics services on top of dash cam footage, raises $53M

Nexar, a startup developing an analytics service on top of car dash cam footage, today announced that it raised $53 million in a series D round led by Qumra Capital with participation from State Farm Ventures, Catalyst Investments, Banca Generali, Valor, Atreides Management, Corner Ventures, Regah Ventures, Aleph, and others. The proceeds — which bring the company’s total raised to $96.5 million — will be used to expand Nexar’s sales and marketing efforts and support the launch of its flagship Nexar One line of dashboard cameras, according to CEO Eran Shir.

Shir and Bruno Fernandez-Ruiz founded Tel Aviv, Israel- and New York-based Nexar in 2015. An MIT graduate, Fernandez-Ruiz was previously global head of advertising personalization at Yahoo and managed Accenture’s center for strategic research. As a research affiliate at MIT, he studied large-scale computing for traffic estimation and prediction using statistical learning and simulation techniques. As for Shir, he became the head of Yahoo’s creative innovation center after the company acquired digital marketing firm Dapper, his second startup (the first being Cogniview Systems), in October 2010.

In 2015, Nexar launched a dash cam app for iOS and Android that uses a phone’s camera to monitor the road ahead, leveraging AI to detect footage of dangerous events and log sensor data to recreate a simulation of what happened to the car. A cursory web search turns up an abundance of dash cam apps on app stores, but in 2016, Nexar uniquely began providing crowdsourced data to help predict collisions by sharing info among users of its app.
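Nexar hasn’t published how its event detection works, but the general idea of flagging a dangerous moment from logged sensor data can be sketched with a simple heuristic. Below is a minimal, hedged illustration that flags harsh-braking events from accelerometer samples; the data format and threshold are assumptions, not Nexar’s algorithm.

```python
# Illustrative sketch only -- not Nexar's algorithm. It flags candidate
# "dangerous events" (here, harsh braking) from logged accelerometer samples
# so the surrounding video could be kept for review.

from dataclasses import dataclass
from typing import List

@dataclass
class AccelSample:
    t: float    # seconds since recording start
    ax: float   # longitudinal acceleration in m/s^2 (negative = braking)

HARSH_BRAKE_MS2 = -6.0  # assumed threshold; real systems tune this per vehicle

def harsh_brake_events(samples: List[AccelSample], min_gap_s: float = 2.0) -> List[float]:
    """Return timestamps of harsh-braking events, de-duplicated within min_gap_s."""
    events: List[float] = []
    for s in samples:
        if s.ax <= HARSH_BRAKE_MS2:
            if not events or s.t - events[-1] >= min_gap_s:
                events.append(s.t)
    return events

if __name__ == "__main__":
    log = [AccelSample(t=i * 0.1, ax=-1.0) for i in range(50)]
    log[30] = AccelSample(t=3.0, ax=-7.5)  # simulated hard stop
    print(harsh_brake_events(log))         # -> [3.0]
```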

“[We wanted] to turn the vehicles of the world into a massive live mesh network of smart cameras and sensors continuously mapping our public spaces. Nexar aims to crawl and index the physical world just like Google crawls and indexes the web,” Shir told VentureBeat via email. “The vision behind Nexar was to make the world collision-free, using crowd-sourced vision and connected cars. Nexar is built on using consumer-grade dash cameras that generate a fresh, high-quality understanding of the world at a street level and the transient changes in it.”

Leaning into crowdsourced data

Nexar isn’t the only company tapping the power of the crowd to analyze road data. In June 2020, Lyft announced that it would collect data from cameras mounted to some cars in its ride-hailing network to improve the performance of its autonomous vehicle systems. Both Mobileye and Tesla harvest information from millions of drivers to train their perception, planning, control, and coordination systems. So does startup Comma.ai, whose aftermarket assisted driving kit — Comma Three — leans on models developed against tens of millions of miles of crowdsourced driving data.

But Nexar now sells its own dash cam hardware and monetizes the data it collects. According to Shir, the startup is among the top suppliers of consumer dash cams in the U.S. and plans to expand globally with the rollout of the aforementioned Nexar One, which features dedicated LTE connectivity.

“In the general sense of collecting vision data from a network of cars, our competitive situation is pretty unique,” Shir said. “Instead of expensive cameras on a few cars, Nexar shows how a massive fleet can be created to generate data with minimal costs.”

Nexar claims to have partnerships with, and customers among, insurance companies, drivers on ride-sharing services like Uber and Lyft, automakers, and municipalities. Several government agencies are using its tech, which can automatically extract road features from camera footage, to monitor traffic and street infrastructure — for example to spot potholes, cracks, and other potentially dangerous road damage. Most recently, Nexar teamed up with Nevada public road transit authorities to create “digital twins,” virtual models of road work.

Image: Data from Nexar’s network. (Credit: Nexar)

The investment required to maintain roads, highways, and bridges in the U.S. is roughly $185 billion a year, according to the National Surface Transportation Policy and Revenue Study Commission. Capital from recently passed infrastructure legislation is expected to bridge gaps in budgets, but analytics can — at least in theory — help funnel the money to where it’s needed most.

In this area, Nexar competes with vehicle telematics startups like Tactile Mobility and NIRA Dynamics, which apply machine learning algorithms to vehicle sensors and controllers to measure grip, friction, and other metrics in real time. NIRA is working with authorities in Sweden and Norway to provide road condition forecasts, and Tactile is conducting a road-monitoring pilot with the City of Detroit and a major automaker. But Shir says that there’s no substitute for camera footage, which he argues can capture phenomena that other sensors are likelier to miss.

“At 150 million miles per month, Nexar ‘sees’ quite a lot of the world, and is the only company (other than Tesla) to have solved the question of the ingestion of vision data from a network of cars — and to do so scalably and economically,” Shir said. “Today, work zone detection — which is non-trivial from the AI standpoint and requires fresh dash cam coverage in any given area — applies to many markets in which we operate. For cities and departments of transportation, especially given the Biden administration’s infrastructure bill, work zones are set to proliferate, impacting safety and congestion on the roads.”

Nexar is also looking to tap into the autonomous vehicle market — in 2017, signaling its ambitions, the company released a dataset of 55,000 street-level images from 80 countries for an open autonomous vehicle perception competition. In addition, Nexar is investigating AI collision reconstruction techniques to create “forensically preserved” digital twins of accident scenes, dipping its toes into the insurance market. The company’s privacy policy hints at this — it mentions that insurance partners might consider Nexar collision reports as part of an offer or policy in the future.

Privacy and beyond

The volume of data that Nexar records and processes might rightly give some drivers pause. Nationwide found in a recent survey that more than 60% of consumers have privacy concerns about telematics — i.e., remote vehicle monitoring — despite the fact that interest in telematics is on the rise.

For its part, Nexar states that it’ll “never share any data belonging to individual users with a third party.” The company also asserts that it doesn’t “track individuals or allow third parties to use the Nexar system and violate the privacy of our individual users or the privacy of third parties whose activity was captured by Nexar.”

“[Users only] grant us a perpetual right to use parts of [their] uploaded content — once they are de-linked, de-identified, anonymized, and then aggregated with other user’s [sic] data —  so that it cannot be traced back to them,” Nexar writes in its privacy policy. “We make money by capturing facts about the world that are not private in any way. We actually do not care about your private drives. We care about potholes on the road and orange cones and the free parking spot or how the construction of a new townhouse is progressing. Or whether or not it rained near the stadium.”

Nexar claims to have trained a classifier model to differentiate between images captured inside of cars and road-facing images, as well as footage from garages and other compromising locations. Another model identifies and hides unique markers and objects visible on the car dashboard, including windshield elements. A third model blurs the faces of pedestrians and license plates within view of Nexar’s software and cameras.
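Nexar doesn’t publish those models, but the redaction pattern it describes — detect a sensitive region, then blur it before anything leaves the device — can be illustrated with off-the-shelf tools. The sketch below uses OpenCV’s bundled Haar face detector purely as a stand-in; the detector choice, blur strength, and file names are assumptions, and a production pipeline would rely on much stronger detectors for faces and license plates.

```python
# Minimal detect-then-blur redaction sketch using OpenCV's bundled Haar cascade.
# This illustrates the general pattern, not Nexar's models.

import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def redact_faces(frame):
    """Blur every detected face region in a BGR frame and return the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

if __name__ == "__main__":
    img = cv2.imread("dashcam_frame.jpg")  # hypothetical sample frame on disk
    if img is not None:
        cv2.imwrite("dashcam_frame_redacted.jpg", redact_faces(img))
```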


Even assuming that the models perform perfectly, Nexar admits that it might be compelled to hand over data to law enforcement officials in certain circumstances. While the company pledges to disallow groups from using its network as a “digital panopticon,” in part by building its technology so that it can’t be used in conjunction with tools to track drivers, Nexar only promises that it won’t voluntarily share information with government agencies — not that it won’t ever.

That said, Nexar offers users the option to permanently delete their data from its database. The company otherwise retains it for “as long as is required in order to comply with [its] legal and contractual obligations.”

Nexar — which has 165 employees — claims to serve hundreds of thousands of customers and users around the world. Business-to-business annual recurring revenue has tripled since the company’s last funding announcement in January 2018, while dash cam sales grew by 280%.



Categories
AI

Deep North, which uses AI to track people from camera footage, raises $16.7M

Deep North, a Foster City, California-based startup applying computer vision to security camera footage, today announced that it raised $16.7 million in a series A-1 round. Led by Celesta Capital and Yobi Partners, with participation from Conviction Investment Partners, Deep North plans to use the funds to make hires and expand its services “at scale,” according to CEO Rohan Sanil.

Deep North, previously known as Vmaxx, claims its platform can help brick-and-mortar retailers “embrace digital” and protect against COVID-19 by retrofitting security systems to track purchases and ensure compliance with masking rules. But the company’s system, which relies on algorithms with potential flaws, raises concerns about both privacy and bias.

“Even before a global pandemic forced retailers to close their doors … businesses were struggling to compete with a rapidly growing online consumer base,” Sanil said in a statement. “As stores open again, retailers must embrace creative digital solutions with data driven, outcome-based computer vision and AI solutions, to better compete with online retailers and, at the same time, accommodate COVID-safe practices.”

AI-powered monitoring

Deep North was founded in 2016 by Sanil and Jinjun Wang, an expert in multimedia signal processing, pattern recognition, computer vision, and analytics. Wang — now a professor at Xi’an Jiaotong University in Xi’an, China — was previously a research scientist at NEC before joining Epson’s R&D division as a member of the senior technical staff. Sanil founded a number of companies prior to Deep North, including Akirra Media Systems, where Wang was once employed as a research scientist.

“In 2016, I pioneered object detection technology to help drive targeted advertising from online videos. When a major brand saw this, they challenged me to create a means of identifying, analyzing, and sorting objects captured on their security video cameras in their theme parks,” Sanil told VentureBeat via email. “My exploration inspired development that would unlock the potential of installed CCTV and security video cameras within the customer’s physical environment and apply object detection and analysis in any form of video.”

After opening offices in China and Sweden and rebranding in 2018, Deep North expanded the availability of its computer vision and video analytics products, which offer object and people detection capabilities. The company says its real-time, AI-powered and hardware-agnostic software can understand customers’ preferences, actions, interactions, and reactions “in virtually any physical setting” across “a variety of markets,” including retailers, grocers, airports, drive-thrus, shopping malls, restaurants, and events.

Deep North says that retailers, malls, and restaurants in particular can use its solution to analyze customer “hotspots,” seating, occupancy, dwell times, gaze direction, and wait times, leveraging these insights to figure out where to assign store associates or kitchen staff. Stores can predict conversion by correlating tracking data with the time of day, location, marketing events, weather, and more, while shopping centers can draw on tenant statistics to understand trends and identify “synergies” between tenants, optimizing for store placement and cross-tenant promotions.
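Deep North doesn’t document how these metrics are computed, but dwell time and occupancy are straightforward to derive once a tracker emits anonymous object IDs with zone entry and exit timestamps. Below is a minimal sketch under that assumption; the event schema is invented for illustration and is not Deep North’s format.

```python
# Illustrative sketch: deriving dwell time and occupancy from anonymous tracker
# events. The (object_id, zone, entry_time, exit_time) schema is an assumption.

from collections import defaultdict
from typing import Dict, List, Tuple

Event = Tuple[int, str, float, float]  # (object_id, zone, entry_time_s, exit_time_s)

def dwell_stats(events: List[Event]) -> Dict[str, float]:
    """Average dwell time (seconds) per zone."""
    totals: Dict[str, List[float]] = defaultdict(list)
    for _, zone, t_in, t_out in events:
        totals[zone].append(t_out - t_in)
    return {zone: sum(d) / len(d) for zone, d in totals.items()}

def occupancy_at(events: List[Event], zone: str, t: float) -> int:
    """Number of tracked objects inside `zone` at time `t`."""
    return sum(1 for _, z, t_in, t_out in events if z == zone and t_in <= t < t_out)

if __name__ == "__main__":
    evts = [(1, "entrance", 0.0, 45.0), (2, "entrance", 10.0, 20.0), (3, "checkout", 30.0, 90.0)]
    print(dwell_stats(evts))                      # {'entrance': 27.5, 'checkout': 60.0}
    print(occupancy_at(evts, "entrance", 15.0))   # 2
```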


“Our algorithms are trained to detect objects in motion and generate rich metadata about physical environments such as engagement, pathing, and dwelling. Our inference pipeline brings together camera feeds and algorithms for real-time processing,” Deep North explains on its website. “[We] can deploy both via cloud and on-premise and go live within a matter of hours. Our scalable GPU edge appliance enables businesses to bring data processing directly to their environments and convert their property into a digital AI property. Video assets never leave the premise, ensuring the highest level of security and privacy.”

Beyond these solutions, Deep North developed products for particular use cases like social distancing and sanitation. The company offers products that monitor for hand-washing and estimate wait times at airport check-in counters, for example, as well as detect the presence of masks and track the status of maintenance workers on tarmacs.

“With Deep North’s mask detection capability, retailers can easily monitor large crowds and receive real-time alerts,” Deep North explains about its social distancing products. “In addition, Deep North … monitors schedules and coverage of sanitization measures as well as the total time taken for each cleaning activity … Using Deep North’s extensive data, [malls can] create tenant compliance scorecards to benchmark efforts, track overall progress, course-correct as necessary. [They] can also ensure occupancy limits are adhered to across several properties, both locally and region-wide, by monitoring real-time occupancy on our dashboard and mobile apps.”

Bias concerns

Like most computer vision systems, Deep North’s were trained on datasets of images and videos showing examples of people, places, and things. Poor representation within these datasets can result in harm — particularly given that the AI field generally lacks clear descriptions of bias.

Previous research has found that ImageNet and Open Images — two large, publicly available image datasets — are U.S.- and Euro-centric, encoding humanlike biases about race, ethnicity, gender, weight, and more. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared to images of grooms from the United States. And because of how images of words like “wedding” or “spices” are presented in distinctly different cultures, object recognition systems can fail to classify many of these objects when they come from the Global South.

Bias can arise from other sources, like differences in the sun path between the northern and southern hemispheres and variations in background scenery. Studies show that even differences between camera models — e.g., resolution and aspect ratio — can cause an algorithm to be less effective in classifying the objects it was trained to detect.

Tech companies have historically deployed flawed models into production. ST Technologies’ facial recognition and weapon-detecting platform was found to misidentify black children at a higher rate and frequently mistook broom handles for guns. Meanwhile, Walmart’s AI- and camera-based anti-shoplifting technology, which is provided by Everseen, came under scrutiny last May over its reportedly poor detection rates.

Deep North doesn’t disclose on its website how it trained its computer vision algorithms, including whether it used synthetic data (which has its own flaws) to supplement real-world datasets. The company also declines to say to what extent it takes into account accessibility and users with major mobility issues.

In an email, Sanil claimed that Deep North “has one of the largest training datasets in the world,” derived from real-world deployments and scenarios. “Our human object detection and analysis algorithms have been trained with more than 130 million detections, thousands of camera feeds, and various environmental conditions while providing accurate insights for our customers,” he said. “Our automated and semi-supervised training methodology helps us build new machine learning models rapidly, with the least amount of training data and human intervention.”

In a follow-up email, Sanil added: “Our platform detects humans, including those with unique gaits, and those that use mobility aids and assistive devices. We don’t do any biometric analysis, and therefore there is no resulting bias in our system … In the simplest terms, the platform interprets everything as an object whether it’s a human or a shopping cart or a vehicle. We provide object counts entering or exiting a location. Our object counting and reporting is not influenced by specific characteristics.” He continued: “We have a large set of labeled data. For new data to be labeled, we need to classify some of the unlabeled data using the labeled information set. With the semi-supervised process we can now expedite the labeling process for new datasets. This saves time and cost for us. We don’t need annotators, or expensive and slow processes.”
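Sanil doesn’t spell out the semi-supervised method, but one common way to expedite labeling is pseudo-labeling: train on the labeled set, adopt only high-confidence predictions on unlabeled data as new labels, and retrain. The sketch below shows that generic recipe with scikit-learn; the classifier, features, and confidence threshold are assumptions, not Deep North’s pipeline.

```python
# Generic pseudo-labeling sketch (a common semi-supervised recipe); the model,
# features, and 0.95 threshold are assumptions, not Deep North's pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_labeled, y_labeled, X_unlabeled, threshold=0.95):
    """Train on labeled data, then adopt high-confidence predictions as labels."""
    model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) >= threshold
    pseudo_y = model.classes_[probs[confident].argmax(axis=1)]
    X_new = np.vstack([X_labeled, X_unlabeled[confident]])
    y_new = np.concatenate([y_labeled, pseudo_y])
    # Retrain on the expanded set; in practice this is repeated until it converges.
    return LogisticRegression(max_iter=1000).fit(X_new, y_new), int(confident.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_l = rng.normal(size=(100, 8)); y_l = (X_l[:, 0] > 0).astype(int)
    X_u = rng.normal(size=(1000, 8))
    model, n_added = pseudo_label(X_l, y_l, X_u)
    print(f"adopted {n_added} pseudo-labels")
```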

Privacy and controversy

While the purported goals of products like Deep North’s are health, safety, and analytics, the technology could be co-opted for other, less humanitarian ends. Many privacy experts worry that such systems will normalize greater levels of surveillance, capturing data about workers’ movements and allowing managers to chastise employees in the name of productivity.

Deep North is no stranger to controversy, having reportedly worked with school districts and universities in Texas, Florida, Massachusetts, and California to pilot a security system that uses AI and cameras to detect threats. Deep North claims that the system, which it has since discontinued, worked with cameras with resolutions as low as 320p and could interpret people’s behavior while identifying objects like unattended bags and potential weapons.

Deep North is also testing systems in partnership with the U.S. Transportation Security Administration, which furnished it with a grant last March. The company received close to $200,000 in funding to provide metrics like passenger throughput, social distancing compliance, agent interactions, and bottleneck zones as well as reporting of unattended baggage, movement in the wrong direction, or occupying restricted areas.

“We are humbled and excited to be able to apply our innovations to help TSA realize its vision of improving passenger experience and safety throughout the airport,” Sanil said in a statement. “We are committed to providing the U.S. Department of Homeland Security and other government entities with the best AI technologies to build a safer and better homeland through continued investment and innovation.”

Deep North admitted in an interview with the Swedish publication Breakit that it offers facial characterization services to some customers to estimate age range. And on its website, the startup touts its technologies’ ability to personalize marketing materials depending on a person’s demographics, like gender. But Deep North is adamant that its internal protections prevent it from ascertaining the identity of any person captured in camera footage.

“We have no capability to link the metadata to any single individual. Further, Deep North does not capture personally identifiable information (PII) and was developed to govern and preserve the integrity of each and every individual by the highest possible standards of anonymization,” Sanil told TechCrunch in March 2020. “Deep North does not retain any PII whatsoever, and only stores derived metadata that produces metrics such as number of entries, number of exits, etc. Deep North strives to stay compliant with all existing privacy policies including GDPR and the California Consumer Privacy Act.”

To date, 47-employee Deep North has raised $42.3 million in venture capital.



Categories
Tech News

Amazing NASA footage shows Perseverance rover setting down on Mars

NASA today released breathtaking new footage from the Perseverance rover’s arrival on the surface of Mars.

The Perseverance rover left for the red planet on 30 July 2020. After a mostly uneventful transit through space, it managed to touch down on the surface of Mars at 20:55 UTC on 18 February 2021.

Its mission, per NASA:

Seek signs of ancient life and collect samples of rock and regolith (broken rock and soil) for possible return to Earth.

The rover’s been hard at work in the red dirt of Mars all weekend and will probably remain there, well, forever. But that won’t stop it from fulfilling its mission to send samples back to Earth. Once Perseverance obtains the proper samples, the plan is to fire them back to Earth in rockets.

But this isn’t slated to happen until at least 2031. In the meantime, the rover’s headed to spots on Mars that are just too difficult to land directly on.

And that brings us to the fascinating video NASA released today showing the rover’s descent to the Martian surface.

For more information, check out the Perseverance page on NASA’s website.




Categories
AI

Edgybees raises $9.5 million for AI that augments drone camera footage

Edgybees, a provider of georegistration and augmented reality tools for drone operators, today announced that it raised $9.5 million, bringing its total raised to $15 million. The company says the proceeds will be used to drive product research, expand global adoption, and support an “aggressive” hiring strategy.

The commercial drone market was already accelerating, with reports projecting that the industry would grow more than fivefold by 2026 from the $1.2 billion it was worth in 2018. But the pandemic has increased demand for drone services in areas such as medical supply deliveries and site inspections. Honeywell, a major supplier of aerospace systems, launched a new business unit covering drones, air taxis, and unmanned cargo delivery vehicles. And in early January, startup American Robotics snagged the first-ever U.S. Federal Aviation Administration (FAA) approval to fly automated drones beyond the line of sight.

Palo Alto, California-based Edgybees was cofounded in 2017 by Adam Kaplan, Nitay Megides, and Menashe Haskin. Haskin was formerly an engineering manager at Amazon, where he headed the software team for Amazon’s Fire TV platform and the Amazon Prime Air development site in Israel.

Edgybees claims its platform addresses the challenge of piloting drones with decreased air visibility from flames, smoke, and chemical spills. The company augments live video feeds from drones with geoinformation layers like maps, building layouts, points of interest, user-generated markers, and more captured from cameras or other data sources, leveraging a combination of computer vision, data analytics, and video synthesis.
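Edgybees hasn’t published how its georegistration works, but the core geometric step in any such overlay is projecting a geo-referenced point into the pixel grid of a camera whose position and orientation are known. The sketch below shows that step with a simplified flat-earth, pinhole-camera model and no lens distortion; every number in it is an invented assumption, not Edgybees’ method.

```python
# Simplified georegistration sketch: project a geo-referenced point of interest
# into a drone camera image with a pinhole model. Flat-earth ENU approximation,
# no lens distortion; all values are made-up assumptions.

import numpy as np

M_PER_DEG_LAT = 111_320.0  # rough meters per degree of latitude

def geo_to_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Convert lat/lon/alt to local east-north-up meters around a reference point."""
    east = (lon - ref_lon) * M_PER_DEG_LAT * np.cos(np.radians(ref_lat))
    north = (lat - ref_lat) * M_PER_DEG_LAT
    up = alt - ref_alt
    return np.array([east, north, up])

def project(point_enu, cam_enu, R_world_to_cam, fx, fy, cx, cy):
    """Project an ENU point into pixel coordinates; returns None if behind the camera."""
    p_cam = R_world_to_cam @ (point_enu - cam_enu)  # camera frame: x right, y down, z forward
    if p_cam[2] <= 0:
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

if __name__ == "__main__":
    cam = np.array([0.0, 0.0, 100.0])  # drone hovering 100 m above the reference point
    # Camera looking straight down: east -> image x, north -> image -y, down -> image z
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, -1.0, 0.0],
                  [0.0, 0.0, -1.0]])
    poi = geo_to_enu(37.4221, -122.0841, 0.0, 37.4219, -122.0842, 0.0)
    print(project(poi, cam, R, fx=1000, fy=1000, cx=960, cy=540))
```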

Image: Edgybees’ augmented reality overlays. (Credit: Edgybees)

Edgybees’ technology was first applied to an augmented reality racing game for drones, released in partnership with DJI in early 2017. Later that year, Edgybees debuted First Response, a drone flying app first deployed by officials operating in the Florida Keys in the aftermath of Hurricane Irma. First Response was also used by local authorities to track firemen’s locations during the Northern California fires in October 2018, according to Edgybees.

More recently, 36-employee Edgybees was awarded a contract from the U.S. Air Force following a round of awards the company received at the end of 2019, some from AFventures’ Strategic Finance Program (StratFi). And in March, Edgybees inked a deal with AFWerx, an innovation program within the U.S. Air Force, to integrate its technology with the U.S. Air Force’s systems.

“During COVID-19 we have seen a massive growth in advanced drone delivery, homeland security and space technologies,” Kaplan told VentureBeat via email. “Edgybees provides high precision geo-tagging of real time video feeds allowing patients to receive their medicine on-time, search and rescue teams to save lives during fires and hurricanes, and enables defense, public safety and critical infrastructure industries to make crucial decisions in real-time.”

New investors Seraphim Capital, Refinery Ventures, LG Technology Ventures, and Kodem Growth Partners participated in Edgybees’ latest round of fundraising. OurCrowd, 8VC, Verizon, and Motorola also participated.



Categories
AI

Theator raises $15.5 million to analyze surgical footage with computer vision

Surgical platform developer Theator today closed a $15.5 million series A round led by Insight Partners, bringing the company’s total raised to date to over $18 million. Theator says the funds will be used to scale its commercial operations and partnerships with U.S. providers as it looks to grow its R&D team.

Some experts assert that surgeons’ efforts to overcome inequalities are stunted by an apprenticeship model that overemphasizes scope of practice, limiting surgeons’ knowledge. In the U.S., Black children are three times as likely as white children to die within a month of surgery, according to a study published in the journal Pediatrics. An estimated 5 billion people lack access to safe surgical care worldwide. And of the at least 7 million people who suffer complications following surgery each year, including more than 1 million who die, at least 50% of these deaths and complications are preventable.

Theator, a two-year-old company based in San Mateo, California, claims its product addresses these challenges by enabling surgeons to view video recordings of their procedures as lists (e.g., “Decompression, Cystic duct leak, Critical view of safety”) and jump to steps from which they wish to learn. Users get annotated summaries of their surgical performance as well as KPIs, analytics, and rankings to identify where skills training might be needed.

Theator was founded in 2018, but the company’s backstory dates back to 2015, when CEO and founder Tamir Wolf’s wife and then-boss both suffered from appendicitis a few months apart. As a physician, Wolf was able to diagnose their condition and took them to the emergency room for treatment. “They were treated at two different hospitals, and I was struck by the wildly different levels of care they experienced at each one, even though they were a mere 7 miles apart,” he told VentureBeat via email. “As I tried to understand why the approach to a seemingly straightforward illness was so different, I began to unearth the widespread disparity and variability that is plaguing the medical field, and surgery specifically.”


The use of AI and machine learning in the operating theater is on the rise. In 2019, McGill University developed an algorithm that was able to tell when trainee surgeons were ready to perform a real operation. And last April, researchers from Stanford created a model to help protect health care professionals from COVID-19 in the operating room.

As for Theator, it leverages computer vision to scan video footage of real-world procedures and identify key moments in order to annotate them with metatags. Through desktop and mobile apps, surgeons gain access to an indexed library of over 400,000 minutes of surgical video encompassing over 80,000 “intraoperative” moments.

“[We use] AI to complete high-granularity indexing of large scale surgical datasets, which enables big data analysis of surgical patterns for the first time, introducing a scientific data-driven understanding of best practices,” Wolf explained. “At Theator, we’re using computer vision, on edge, to detect any frames that might have identifying features (everything that is extra-cavitary), removing these before uploading to our cloud-based system … Theator’s research team [also] recently developed VTN (Video Transformer Network), a first of its kind Transformer-based framework for video action recognition that yields a significantly reduced training runtime (x16.1) and an accelerated inference process (x5.1) while maintaining state-of-the-art performance. VTN has the potential to significantly accelerate the video input and review component of Theator’s platform, in turn improving our ability to roll it out at scale in hospitals worldwide.”
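The article doesn’t include VTN’s details, but the general shape of a transformer-based video recognizer — a 2D backbone producing per-frame features, a temporal transformer attending across frames, and a classification head — can be sketched in a few dozen lines of PyTorch. The example below is a generic illustration with assumed dimensions and hyperparameters; it is not Theator’s VTN.

```python
# Generic sketch of a transformer-based video action classifier: a 2D backbone
# extracts per-frame features, a temporal transformer attends across frames,
# and a linear head predicts the action. All hyperparameters are assumptions.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class VideoTransformerClassifier(nn.Module):
    def __init__(self, num_classes: int, d_model: int = 512, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()           # per-frame 512-d features
        self.backbone = backbone
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).view(b, t, -1)        # (b, t, 512)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), feats], dim=1)
        encoded = self.temporal(tokens)
        return self.head(encoded[:, 0])       # classify from the [CLS] position

if __name__ == "__main__":
    model = VideoTransformerClassifier(num_classes=10)
    dummy_clip = torch.randn(2, 8, 3, 224, 224)   # 2 clips of 8 frames each
    print(model(dummy_clip).shape)                # torch.Size([2, 10])
```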

Other investors in Theator’s latest round of fundraising include 23andMe cofounder and CEO Anne Wojcicki, former Netflix chief product officer Neil Hunt, and Zebra Medical Vision’s cofounder Eyal Gura.

“The lack of in-person contact with experienced surgeons has accelerated the need for Theator’s solution, which broadens surgeon expertise, even remotely or asynchronously. The pandemic has highlighted a reality wherein surgeons barely have time for in-person interaction, allowing them a greater degree of freedom to continuously improve performance,” Wolf said. “Surgeons also used to learn new techniques and approaches at conferences, but these have all but been removed from the equation. Theator’s comprehensive surgical video dataset, the ability to upload your own procedures, view others, and receive and provide feedback has made it an increasingly vital tool when more and more continuous development is being carried out remotely.”

