Future Intel Laptops Could Mandate 8-Megapixel Webcams

Tired of a miserly low-resolution webcam on your Intel-powered laptop? That could soon be a thing of the past, if leaked specifications for Intel’s Evo 4.0 platform are anything to go by, as much better picture quality is apparently in the offing.

According to NotebookCheck, the fourth generation of Intel’s Evo platform — which could be introduced with the upcoming Raptor Lake series of processors pegged for the third quarter of 2022 — will mandate 8-megapixel cameras on all laptops running this spec. In other words, if laptop manufacturers want to work with Intel to be Evo-accredited, they will need to up their webcam game.

Riley Young/Digital Trends

High resolution isn’t the only thing that could become a requirement. NotebookCheck claims other specs are likely to be part of the Evo 4.0 specification, including an 80-degree field of view, plus a passing grade on the VCX benchmark.

What is VCX, you ask? Well, Intel is now part of the VCX forum (short for Valued Camera eXperience), which scores laptop webcams based on certain benchmarks. These include texture loss, motion control, sharpness, dynamic range, the camera’s performance under various lighting conditions, and more. At the end, a final score is given. And it now seems that Intel will be expecting manufacturers’ webcams to hit a minimum score (as yet unknown) in order to pass muster.
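Neither VCX’s exact weighting nor Intel’s eventual pass threshold is public, but a benchmark of this kind reduces to a weighted sum of sub-scores compared against a cutoff. A toy sketch, with every number invented for illustration:

```python
# Hypothetical sub-scores (0-100) and weights -- invented for illustration,
# not VCX's actual categories, weighting, or methodology.
sub_scores = {"texture": 72, "sharpness": 80, "dynamic_range": 65, "low_light": 58}
weights = {"texture": 0.3, "sharpness": 0.3, "dynamic_range": 0.2, "low_light": 0.2}

# Final score is the weighted average of the sub-scores.
final_score = sum(sub_scores[k] * weights[k] for k in sub_scores)

PASSING_SCORE = 70  # hypothetical minimum; Intel's real threshold is unknown
verdict = "pass" if final_score >= PASSING_SCORE else "fail"
print(f"{final_score:.1f}: {verdict}")  # 70.2: pass
```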

Interestingly, NotebookCheck’s report says that any webcams placed below the user’s eye line will be awarded negative points in the VCX test. Someone better tell the Huawei MateBook X Pro.

With Intel’s Raptor Lake series set for later in 2022, could we see some of these webcam improvements in this year’s Alder Lake-based laptops? That’s certainly possible. Intel will allegedly have VCX benchmark scores ready by the first quarter of 2022, so we might see a few devices appear that meet these standards before Raptor Lake steps into the limelight. Just don’t bet the farm on it.

Alongside Intel, Microsoft has also reportedly begun enforcing minimum standards for its partner devices. Like Intel, the company wants manufacturers to hit certain specs for webcams, microphones, and speakers. With two giants of the industry pushing manufacturers to up their game, we could finally be able to bid flimsy webcams and crackly mics adieu.


Repost: Original Source and Author Link


Respawn pulls Titanfall from sale with vague promises for the future

Respawn Entertainment today announced that it has delisted Titanfall, effectively removing it from sale. Titanfall was the first game Respawn Entertainment made as a new studio back in 2014, and now it seems the title is being retired. It isn’t all bad news, though, as those who own the game will still be able to play it even after it disappears from storefronts.

Respawn/Electronic Arts

Respawn pulls the game that started it all

Respawn announced its decision to pull Titanfall in a statement published to Twitter today. “We’ve made the decision to discontinue new sales of the original Titanfall game starting today and we’ll be removing the game from subscription services on March 1, 2022,” Respawn said. “We will, however, be keeping servers live for the dedicated fanbase still playing and those who own the game and are looking to drop into a match.”

Even though Titanfall has disappeared from storefronts and will vanish from subscription services next year, the servers will stay live so those who already own the title can continue playing it. While we’re sure most people who wanted to play Titanfall have already purchased the game, those who missed the chance to buy the digital version can always pick up a used disc copy.

It’s a little bit strange to see Respawn pull Titanfall from storefronts while keeping the servers up and running. Still, if the game has a decent number of players routinely dropping into multiplayer, Respawn probably didn’t want to risk losing consumer goodwill by turning the servers off.

What’s next for the Titanfall series?

After announcing that Titanfall will be delisted, Respawn went on to assure fans that the game won’t be forgotten. “Rest assured, Titanfall is core to Respawn’s DNA and this incredible universe will continue,” the studio said. “Today in Titanfall 2 and Apex Legends, and in the future.”

That part is particularly interesting because it suggests there’s more Titanfall to come. Though the original Titanfall and its sequel have garnered a sizable fanbase, these days, Respawn’s attention is on Apex Legends – a free-to-play battle royale title set in the Titanfall universe that is fairly distinct from a gameplay perspective.

The success of Apex Legends (and the rise of the battle royale genre as an alternative to traditional FPS games) has prompted some Titanfall fans to assume the series is largely over and that we won’t see another Titanfall game in the future. Respawn’s statement today possibly suggests otherwise, though it’s too vague to say for sure.

Still, it’s always possible that Respawn is removing Titanfall from sale because it has something new with the franchise in the works. Until we get confirmation of any such plans, it’s probably safe to assume the company is simply removing the game to focus on Apex Legends. We’ll let you know if Respawn announces anything major in the future, so stay tuned for more.



Meta Envisions Haptic Gloves As the Future Of the Metaverse

The metaverse seems to be coming, as is the futuristic hardware that will increase immersion in virtual worlds. Meta, the company formerly known as Facebook, has shared how its efforts to usher in that new reality are focusing on how people will actually feel sensations in a virtual world.

The engineers at Meta have developed a number of early prototypes that tackle this goal and they include both haptic suits and gloves that could enable real-time sensations.

Meta’s Reality Labs was tasked with developing, and in many cases inventing, new technologies that would enable greater human-computer interaction. The company started by laying out a vision earlier this year for the future of augmented reality (AR) and virtual reality (VR) and how best to interact with virtual objects. This kind of research is crucial if we’re moving toward a future where a good chunk of our day is spent inside virtual 3D worlds.

Sean Keller, Reality Labs research director, said that they want to build something that feels just as natural in the AR/VR world as it does in the real world. The problem, he admits, is the technology isn’t yet advanced enough to feel natural and this experience probably won’t arrive for another 10 to 15 years.

According to Keller, we’d ideally use haptic gloves that are soft, lightweight, and able to accurately reproduce the pressure, texture, and vibration that correspond to a virtual object. That requires hundreds of tiny actuators that can simulate physical sensations. Current mechanical actuators are too bulky, expensive, and hot to work well in practice. Keller says the task requires softer, more pliable materials.

To solve this problem, the Reality Labs teams turned to research into prosthetic limbs, namely soft robotics and microfluidics. The researchers were able to create the world’s first high-speed microfluidic processor, which is able to control the air flow that moves tiny, soft actuators. The chip tells the valves in the actuators when to move and how far.

Meta researcher holding prototype haptic glove.

The research team was able to create prototype gloves, but the process requires them to be “made individually by skilled engineers and technicians who manufacture the subsystems and assemble the gloves largely by hand.” In order to build haptic gloves at scale for billions of people, new manufacturing processes would have to be invented. Not only do the gloves have to house all of the electronics and sensors, they also have to be slim, lightweight, and comfortable to wear for extended periods of time.

The Reality Labs materials group experimented with various polymers to turn them into fine fibers that could be woven into the gloves. To make it even more efficient, the team is trying to build multiple functions into the fibers including capacitance, conductivity, and sensing.

There have been other attempts at creating realistic haptic feedback. Researchers at the University of Chicago have been experimenting with “chemical haptics.” This involves using various chemicals to simulate different sensations. For example, capsaicin can be used to simulate heat or warmth while menthol does the opposite by simulating coolness.

Meta’s research into microfluidic processors and tiny sensors woven into gloves may be a bit more realistic than chemicals applied to the skin. It will definitely be interesting to see where Reality Labs takes its research as we move closer to the metaverse.




Deep tech, no-code tools will help future artists make better visual content

This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital.

Despite the hype, the “creator economy” is not new. It has existed for generations, primarily dealing with physical goods (pottery, jewelry, paintings, books, photos, videos, etc). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation where everyone and their mother are now creating, sharing, and participating online.

The vast majority of the content that is created and consumed on the internet is visual content. In our recent Insights report at LDV Capital, we found that by 2027, there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that will automate various aspects of content creation and remove the technical skill from digital creation. This article discusses findings from that report.

Group of superheroes on a dark background


Image Credit: ©LDV CAPITAL INSIGHTS 2021

We now live as much online as we do in person and as such, we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, video games, or anything else that is viewed on our screens, it is visual content.

Currently, it takes time, often years, of prior training to produce a single piece of quality and contextually-relevant visual content. Typically, it has also required deep technical expertise in order to produce content at the speed and quantities required today. But new platforms and tools powered by visual technologies are changing the paradigm.

Computer vision will aid livestreaming

Livestreaming is video recorded and broadcast in real time over the internet, and it is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreaming content daily, making it one of the most popular forms of online content.

Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory.

The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will aid livestream automation. They will enable streamers’ feeds to be analyzed in real time to add production elements that improve quality while cutting back the time and technical skill required of streamers today.

Synthetic visual content will be ubiquitous

A lot of the visual content we view today is already computer-generated imagery (CGI), visual effects (VFX), or altered by software (e.g., Photoshop). Whether it’s the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.

By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project synthetic visual content will be nearly 95% of the content we view. Synthetic media uses generative adversarial networks (GANs) to write text, make photos, create game scenarios, and more using simple prompts from humans such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.

Above: L: Remedial drawing created, R: Landscape Image built by NVIDIA’s GauGAN from the drawing

Image Credit: ©LDV CAPITAL INSIGHTS 2021

In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, find locations and do a full photo or video shoot. Moreover, it will enable video to be programmable – as simple as making a slide deck.

Synthetic media that leverages GANs is also able to personalize content nearly instantly and can therefore enable any video to speak directly to the viewer by name, or write a video game in real time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media.

Artificial intelligence will deliver motion capture to the masses

Animated video requires expertise, and even more time and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). These formats will be an increasingly essential part of the content strategy for brands and businesses, deployed across image, video, and livestream channels as a mechanism for diversifying content.

Graph displaying motion capture landscape


Image Credit: ©LDV CAPITAL INSIGHTS 2021

The greatest hurdle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates 4 seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to digitally record a physical pattern of an individual’s movements for the purpose of animating them. An example would be recording Steph Curry’s jump shot for NBA 2K.

Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap – with little to no suits, sensors, or hardware. Facial motion capture has already come a long way, as evidenced in some of the incredible photo and video filters out there. As capabilities advance to full body capture, it will make MoCap easier, faster, budget-friendly, and more widely accessible for animated visual content creation for video production, virtual character live streaming, gaming, and more.
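At its core, camera-based MoCap reduces to estimating joint positions per video frame and deriving skeleton parameters (such as joint angles) from them. A toy illustration with hand-made keypoints, standing in for the output of a pose-estimation model that is not shown here:

```python
import math
from typing import Dict, Tuple

# Hypothetical 2D keypoints (pixel coordinates) for one video frame,
# as a pose-estimation model might emit -- purely illustrative.
Keypoints = Dict[str, Tuple[float, float]]

def joint_angle(kps: Keypoints, a: str, b: str, c: str) -> float:
    """Angle at joint b formed by segments b->a and b->c, in degrees."""
    (ax, ay), (bx, by), (cx, cy) = kps[a], kps[b], kps[c]
    v1 = (ax - bx, ay - by)
    v2 = (cx - bx, cy - by)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

frame = {"shoulder": (0.0, 0.0), "elbow": (10.0, 0.0), "wrist": (10.0, 10.0)}
print(joint_angle(frame, "shoulder", "elbow", "wrist"))  # 90.0 degrees
```

A real pipeline would run this kind of computation per joint, per frame, and retarget the resulting angles onto an animated rig.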

Nearly all content will be gamified

Gaming is a massive industry set to hit nearly $236 billion globally by 2027. That will expand and grow as more and more content introduces gamification to encourage interactivity with the content. Gamification is applying typical elements of game playing such as point scoring, interactivity, and competition to encourage engagement.

Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. Growth in the number of players, their diversity, and the hours they spend playing online games will drive high demand for unique content.

AI and cloud infrastructure capabilities play a major role in aiding game developers to build tons of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a Service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces.

People will interact with many digital beings

We will have digital identities to produce, consume, and interact with content. In our physical lives, people have many aspects of their personality and represent themselves differently in different circumstances: the boardroom vs the bar, in groups vs alone, etc. Online, the old school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least 3 digital versions of themselves, both photorealistic and fantastical, to participate online.

Five examples of digital identities


Image Credit: ©LDV CAPITAL INSIGHTS 2021

Digital identities (or avatars) require visual tech. Some will enable public anonymity of the individual, some will be pseudonyms and others will be directly tied to physical identity. A growing number of them will be powered by AI.

These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more.

Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article.

Machine learning will curate, authenticate, and moderate content

For creators to continuously produce the volumes of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content – from long-form to short-form, from videos to blog posts and vice versa, into social posts, and more. These systems will select content and formats based on the performance of past publications, using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.

In order to then filter through the massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through and present to us content personalized to our interests and aspirations. Eventually, we’ll see personalized synthetic video content replacing text-heavy newsletters, media, and emails.

Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to the creator both for rights management and management of deep fakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.

Detecting disturbing and dangerous content is also deeply important, and it is increasingly hard to do given the vast quantities of content published. AI and computer vision algorithms are necessary to automate this process by detecting hate speech, graphic pornography, and violent attacks, because doing so manually in real time is too difficult and not cost-effective. Multi-modal moderation that includes image recognition, as well as voice, text recognition, and more, will be required.

Visual content tools are the greatest opportunity in the creator economy

The next five years will see individual creators who leverage visual tech tools to create visual content rival professional production teams in the quality and quantity of the content they produce. The greatest business opportunities today in the Creator Economy are the visual tech platforms and tools that will enable those creators to focus on the content and not on the technical creation.

Abigail Hunter-Syed is a Partner at LDV Capital investing in people building businesses powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has a track record of more than ten years leading strategy, ops, and investments in companies across four continents, and rarely says no to soft-serve ice cream.


Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read More From DataDecisionMakers



Seeing into our future with Nvidia’s Earth-2


This article was contributed by Jensen Huang, Founder and CEO, NVIDIA

The earth is warming. The past seven years are on track to be the seven warmest on record. The emissions of greenhouse gases from human activities are responsible for approximately 1.1°C of average warming since the period 1850-1900.

What we’re experiencing is very different from the global average. We experience extreme weather — historic droughts, unprecedented heatwaves, intense hurricanes, violent storms, and catastrophic floods. Climate disasters are the new norm.

We need to confront climate change now. Yet, we won’t feel the impact of our efforts for decades. It’s hard to mobilize action for something so far in the future. But we must know our future today — see it and feel it — so we can act with urgency.

To make our future a reality today, simulation is the answer.

To develop the best strategies for mitigation and adaptation, we need climate models that can predict the climate in different regions of the globe over decades.

Unlike predicting the weather, which primarily models atmospheric physics, climate models are multidecade simulations that model the physics, chemistry, and biology of the atmosphere, waters, ice, land, and human activities.

Climate simulations are configured today at 10- to 100-kilometer resolutions.

But greater resolution is needed to model changes in the global water cycle — water movement from the ocean, sea ice, land surface, and groundwater through the atmosphere and clouds. Changes in this system lead to intensifying storms and droughts.

Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what’s currently available. It would take decades to achieve that through the ordinary course of computing advances, which accelerate 10x every five years.
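The “decades” figure follows directly from the stated growth rate; a quick check of the arithmetic:

```python
import math

def years_to_reach(speedup: float, gain_per_period: float = 10,
                   period_years: float = 5) -> float:
    """Years of ordinary computing advances needed for a given speedup,
    assuming (per the article) a 10x gain every five years."""
    periods = math.log(speedup, gain_per_period)  # how many 10x doublings-of-a-sort
    return periods * period_years

print(years_to_reach(1e6))  # ~30 years for a million-x speedup
print(years_to_reach(1e9))  # ~45 years for a billion-x speedup
```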

For the first time, we have the technology to do ultra-high-resolution climate modeling, to jump to lightspeed, and predict changes in regional extreme weather decades out.

We can achieve million-x speedups by combining three technologies: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers, along with vast quantities of observed and model data to learn from.

And with super-resolution techniques, we may have within our grasp the billion-x leap needed to do ultra-high-resolution climate modeling. Countries, cities, and towns can get early warnings to adapt and make infrastructures more resilient. And with more accurate predictions, people and nations will act with more urgency.

So, we will dedicate ourselves and our significant resources to direct NVIDIA’s scale and expertise in computational sciences, to join with the world’s climate science community.

NVIDIA this week revealed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Named Earth-2, or E-2, the system would create a digital twin of Earth in Omniverse.

The system would be the climate change counterpart to Cambridge-1, the world’s most powerful AI supercomputer for healthcare research. We unveiled Cambridge-1 earlier this year in the U.K. and it’s being used by a number of leading healthcare companies.

All the technologies we’ve invented up to this moment are needed to make Earth-2 possible. I can’t imagine a greater or more important use.

[Note: A version of this story originally ran on the NVIDIA blog.]

Jensen Huang founded NVIDIA in 1993 and has served since its inception as president, chief executive officer, and a member of the board of directors. In 2017, he was named Fortune’s Businessperson of the Year. In 2019, Harvard Business Review ranked him No. 1 on its list of the world’s 100 best-performing CEOs over the lifetime of their tenure. 





Teleoperation and the future of safe driving

This post was written by Amit Rosenzweig, CEO of Ottopia.

Teleoperation: the technology that enables a human to remotely monitor, assist and even drive an autonomous vehicle.

Teleoperation is a seemingly simple capability, yet it involves numerous technologies and systems in order to be implemented safely. In the first article of this series, we established what teleoperation is and why it is critical for the future of autonomous vehicles (AVs). In the second article, we showed the legislative traction and emphasis gained for this technology. In the third and fourth articles, we explained two of the many technical challenges that needed to be overcome in order to enable remote vehicle assistance and operation. In this article, we will explore how this is all achieved in the safest possible way. 

More than a decade ago, the major AV companies made a promise. They claimed that autonomous vehicles would by now be completely self-sufficient and that human driving would be obsolete. As the years pass, we continue to see that this goal remains elusive and that a human will always need to be kept in the loop. The initial response to this was remote driving.

Remote Driving? Major danger

Teleoperation was originally a system that overrides the autonomy of a vehicle and allows a human to manually drive it remotely. Essentially it would replace all self-driving functions and safety systems with a remote driver. This would appear to make a degree of sense. Currently, the solution for unknown situations, aka edge cases, is to put a “safety driver” in the driver’s seat. This way, when the autonomy does not know what to do and gets stuck, the human can manually solve the problem by driving the car for just a few seconds. By enabling the human driver to be in a remote location, they can monitor and solve problems for multiple vehicles, thereby cutting down on driver costs.

Chances are, when people first envisioned remote driving, they assumed we would have perfect and fully immersive virtual reality with zero latency, as seen in a sci-fi movie like Black Panther. Unfortunately, there are critical shortcomings with regard to remote driving. As it is, from the instant a driver recognizes an obstacle in the road until their foot hits the brake pedal – the brake reaction time – about 0.7 seconds elapse. This means that at a speed of only 30 mph, which translates to 44 feet per second, over 30 feet are covered before braking even begins. And this is if the driver is IN the vehicle, traveling at ONLY 30 mph, and the car stops on the spot.

Above: Figure 1: “Obstacles” can appear in almost every environment

Image Credit: Ottopia

For a remote driver, one must factor in at least a few fractions of a second of latency, plus the lack of haptic feedback. In other words, the brake reaction time alone is at least 0.8 seconds, meaning a minimum of 35 feet traveled before braking begins at 30 mph. And this does not even factor in braking distance. Maybe this is why, in a different sci-fi movie, Guardians of the Galaxy Vol. 2, one can see how remote pilots are inferior to those onboard the ship.
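The reaction-distance arithmetic is straightforward to verify:

```python
def reaction_distance_ft(speed_mph: float, reaction_time_s: float) -> float:
    """Distance covered during the driver's reaction time, in feet."""
    feet_per_second = speed_mph * 5280 / 3600  # 30 mph is 44 ft/s
    return feet_per_second * reaction_time_s

# In-vehicle driver: ~0.7 s brake reaction time at 30 mph
print(reaction_distance_ft(30, 0.7))  # about 30.8 ft before braking starts
# Remote driver: add ~0.1 s of network latency
print(reaction_distance_ft(30, 0.8))  # about 35.2 ft
```

Neither figure includes the braking distance itself, which comes on top of the reaction distance.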

Clearly, humans cannot be allowed to drive a vehicle from a remote location. At least not on their own.

Advanced Teleoperator Assistance System (ATAS®): the first transformation for teleoperation

Yes, originally the teleoperation system would shut off the autonomy stack and enable a person to drive the vehicle, but why? Why would you shut off this incredible piece of technology that already knows how to sense, react and respond in ways a person will never be able to do? This is why the second stage of teleoperation involved systems like ATAS® (an Ottopia registered trademark).

Like the more familiar ADAS (Advanced Driver Assistance System) the purpose of ATAS® is to work with the (remote) driver while leveraging the existing safety functions enabled by the vehicle’s autonomous capabilities. The main directive of an ATAS® is to prevent collisions. There are two main ways to do this, both made possible by the autonomy stack.

The first is collision warning. At every given moment, the powerful LiDAR, perception, and computation capabilities are ascertaining each and every object in the field of view of the AV. As the vehicle progresses on its way, the system identifies the speed and trajectory of the vehicle in addition to anything that may pose a safety hazard. The teleoperator’s display has a layer that shows the vehicle’s heading and can alert the operator if anything might be a reason to slow down, stop, or circumnavigate a particular obstacle. This system helps compensate for the reactive shortcomings of a human driver while still allowing them to make the important decisions of how to get where they need to go.

Above: Figure 2: Remote collision warning in action

Image Credit: Ottopia

The second is collision avoidance. The ultimate safety decision-making power does not and cannot lie with the human driver. Yes, the human is subject to what the autonomy decides is safest! This may seem backwards until you remember that the vehicle is in the moment. It has instant perception abilities. It sees the oncoming crash before any human ever could. Furthermore, even if the human driver could see the potential risk, it is possible they are distracted or blinded or otherwise incapable of recognizing the impending danger. That is why, only with regard to braking in safety situations, the vehicle and its corresponding autonomy system must make the decision to stop the vehicle and prevent a disaster.

Clearly, a remote driver must have a system like ATAS® in order to ensure the safety of those in an AV and those around it. However, there remains serious room for improvement.

Tele-assistance. The final form?

Tele-assistance – also known as remote vehicle assistance (RVA), high-level commands, or indirect control – is when the operator gives certain orders to the AV without directly deciding how it completes the task. Tele-assistance helps reduce many of the risks involved in remote driving, even with ATAS®. It is also dramatically more efficient in terms of how many operators are needed.

This is how Tele-assistance works: In the traditional teleoperation situation, an AV would be driving along when it encounters an event which it does not know how to handle. It pulls over to the safest possible spot, stops, and triggers an alert for human intervention. That human would link in, observe the situation, and decide on how best to remedy the problem. Instead of putting their hands on a steering wheel and feet on pedals, the operator will choose from a menu of commands they can give to the vehicle to guide it out of its predicament.

Examples of such commands include path choosing – where the operator selects one of a few offered choices for an optimal path forward; path drawing – where the operator makes a custom path for the AV to follow; and object override – recognizing when the seeming obstacle is not a problem (e.g., a small cardboard box in the middle of the lane) and, in fact, the vehicle can simply continue on its way.
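The three commands listed above can be pictured as a small intent vocabulary the operator sends to the vehicle. This sketch uses hypothetical names (`Command`, `handle`, the payloads); real deployments expose vendor-specific APIs. The point it illustrates is that the operator communicates an intent, while the autonomy stack remains responsible for executing it safely.

```python
from enum import Enum, auto

class Command(Enum):
    CHOOSE_PATH = auto()      # pick one of the paths proposed by the AV
    DRAW_PATH = auto()        # follow a custom operator-drawn path
    OBJECT_OVERRIDE = auto()  # ignore a harmless obstacle and proceed

def handle(command: Command, payload):
    """Dispatch a tele-assistance command to the vehicle (illustrative)."""
    if command is Command.CHOOSE_PATH:
        return f"following proposed path #{payload}"
    if command is Command.DRAW_PATH:
        return f"following custom path with {len(payload)} waypoints"
    if command is Command.OBJECT_OVERRIDE:
        return f"ignoring object {payload} and continuing"
```

For a cardboard box blocking the lane, the operator would issue `handle(Command.OBJECT_OVERRIDE, "cardboard-box")` and the vehicle would simply continue on its way.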

Figure 3: Tele-assistance in action (Image Credit: Ottopia)

Traditional teleoperation created more problems than it solved; it is hubristic to claim that a human can safely remote-drive a full-sized car or truck without any assistance or dedicated safety technology. Humans are still required to handle the situations that confound autonomy, but the solution is ideally tele-assistance, and at the very least remote driving backed by a safety system like ATAS®.

When tele-assistance is coupled with maximized network connectivity and dynamic video compression, as described in the previous two articles, autonomous vehicles can be commercially deployed in the safest and most efficient manner.

Amit Rosenzweig is the CEO & Founder of Ottopia


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

Repost: Original Source and Author Link


Google’s future in enterprise hinges on strategic cybersecurity

Gaps in Google’s cybersecurity strategy make banks, financial institutions, and larger enterprises slow to adopt the Google Cloud Platform (GCP), with deals often going to Microsoft Azure and Amazon Web Services instead.

It also doesn't help that GCP has long had a reputation for being more aligned with developers and their needs than with enterprise and commercial projects. But Google now has a timely opportunity to widen its customer aperture with new security offerings designed to fill many of those gaps.

During last week’s Google Cloud Next virtual conference, Google executives leading the security business units announced an ambitious new series of cybersecurity initiatives precisely for this purpose. The most noteworthy announcements are the formation of the Google Cybersecurity Action Team, new zero-trust solutions for Google Workspace, and extending Work Safer with CrowdStrike and Palo Alto Networks partnerships.

The most valuable new announcements for enterprises are on the BeyondCorp Enterprise platform, however. BeyondCorp Enterprise is Google’s zero-trust platform that allows virtual workforces to access applications in the cloud or on-premises and work from anywhere without a traditional remote-access VPN. Google’s announced Work Safer initiative combines BeyondCorp Enterprise for zero-trust security and their Workspace collaboration platform.
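The zero-trust model behind BeyondCorp can be summarized as evaluating identity, device posture, and request context on every access attempt, rather than trusting a network perimeter or a VPN session. The sketch below is purely illustrative of that decision shape; the names and the risk scoring are assumptions, not Google's actual policy engine.

```python
def allow_access(user_verified: bool, device_compliant: bool,
                 context_risk: float, risk_threshold: float = 0.5) -> bool:
    """Toy zero-trust access decision, evaluated per request.

    Access is granted only when the user's identity is verified, the
    device meets posture requirements, and the contextual risk score
    (location, time, behavior) stays below the policy threshold.
    """
    return user_verified and device_compliant and context_risk < risk_threshold
```

A verified user on a compliant device with low contextual risk gets through; a single failed signal, such as a non-compliant device, denies the request regardless of the others.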

Workspace now has 4.8 billion installations of 5,300 public applications across more than 3 billion users, making it an ideal platform to build and scale cybersecurity partnerships. Workspace also reflects the growing problem chief information security officers (CISOs) and CIOs have with protecting the exponentially increasing number of endpoints that dominate their virtual-first IT infrastructures.

Bringing order to cybersecurity chaos

With the latest series of cybersecurity strategy and product announcements, Google is attempting to sell CISOs on the idea of trusting Google for their complete security and public cloud tech stack. Unfortunately, that pitch doesn't reflect the reality of how many legacy systems CISOs at many enterprises have lifted and shifted to the cloud.

Missing from the many announcements were new approaches to dealing with just how chaotic, lethal, and uncontrolled breaches and ransomware attacks have become. But Google’s announcement of Work Safer, a program that combines Workspace with Google cybersecurity services and new integrations to CrowdStrike and Palo Alto Networks, is a step in the right direction.

The Google Cybersecurity Action Team claimed in a media advisory it will be “the world’s premier security advisory team with the singular mission of supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses.”  But let’s get real: This is a professional services organization designed to drive high-margin engagement in enterprise accounts. Unfortunately, small and mid-tier enterprises won’t be able to afford engagements with the Cybersecurity Action Team, which means they’ll have to rely on system integrators or their own IT staff.

Why every cloud needs to be a trusted cloud

CISOs and CIOs tell VentureBeat that it’s a cloud-native world now, and that includes closing the security gaps in hybrid cloud configurations. Most enterprise tech stacks grew through mergers, acquisitions, and a decade or more of cybersecurity tech-buying decisions. These are held together with custom integration code written and maintained by outside system integrators in many cases. New digital-first revenue streams are generated from applications running on these tech stacks. This adds to their complexity. In reality, every cloud now needs to be a trusted cloud.

Google's series of announcements relating to integration and security monitoring and operations are needed, but they are not enough. Historically, Google has lagged the market in security monitoring, prioritizing its own data loss prevention (DLP) APIs given their proven scalability in large enterprises. To Google's credit, it has created a technology partnership with Cybereason, which will use Google's cloud security analytics platform Chronicle to improve its extended detection and response (XDR) service and help security and IT teams identify and prevent attacks using threat hunting and incident response logic.

Google now appears to have the components it previously lacked to offer a much-improved selection of security solutions to its customers. Creating Work Safer by bundling the BeyondCorp Enterprise Platform, Workspace, the suite of Google cybersecurity products, and new integrations with CrowdStrike and Palo Alto Networks will resonate the most with CISOs and CIOs.

Without a doubt, many will want a price break on BeyondCorp maintenance fees at a minimum. While BeyondCorp is generally attractive to large enterprises, it does not address the quickening arms race between bad actors and enterprises. Google also includes reCAPTCHA Enterprise and Chrome Enterprise for desktop management, both needed by organizations of all sizes to scale website protection and browser-level security across all devices.

It’s all about protecting threat surfaces

Enterprises operating in a cloud-native world mostly need to protect threat surfaces. Google announced a new client connector for its BeyondCorp Enterprise platform that can be configured to protect both Google-native and legacy applications, which are especially important to older companies. The connector also supports identity- and context-aware access to non-web applications running in Google Cloud and non-Google Cloud environments alike. BeyondCorp Enterprise will also gain a policy troubleshooter that gives admins greater flexibility to diagnose access failures, triage events, and unblock users.

Throughout Google Cloud Next, cybersecurity executives spoke of embedding security into the DevOps process and creating zero trust supply chains to protect new executable code from being breached. Achieving that ambitious goal for the company’s overall cybersecurity strategy requires zero trust to be embedded in every phase of a build cycle through deployment.

Cloud Build is designed to support builds, tests, and deployments on Google's serverless CI/CD platform. It's SLSA Level 1 compliant, with scripted builds and support for available provenance. In addition, Google launched a new build integrity feature in Cloud Build that automatically generates a verifiable build manifest. The manifest includes a signed certificate describing the sources that went into the build, the hashes of artifacts used, and other parameters. Binary authorization is now integrated with Cloud Build as well, to ensure that only trusted images make it to production.
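The idea of a verifiable build manifest can be sketched in a few lines. This is a simplified illustration, not Cloud Build's real output: actual SLSA provenance follows a defined attestation format and is cryptographically signed, whereas this toy version just records the build inputs and a SHA-256 digest of each artifact so a consumer can re-hash and compare before deploying.

```python
import hashlib
import json

def build_manifest(sources: dict, artifacts: dict) -> str:
    """Produce a simplified, verifiable build manifest as JSON.

    sources:   metadata about build inputs (e.g. repo URL and commit)
    artifacts: mapping of artifact name -> raw bytes; each artifact is
               recorded by its SHA-256 digest for later verification.
    """
    manifest = {
        "sources": sources,
        "artifacts": {
            name: hashlib.sha256(data).hexdigest()
            for name, data in artifacts.items()
        },
    }
    return json.dumps(manifest, sort_keys=True)
```

A deployment gate can then recompute each artifact's digest and refuse to promote any image whose hash does not match the manifest, which is the essence of what binary authorization enforces.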

These new announcements will protect software supply chains for large-scale enterprises already running a Google-dominated tech stack. It’s going to be a challenge for mid-tier and smaller organizations to get these systems running on their IT budgets and resources, however.

Bottom line: Cybersecurity strategy needs to work for everybody  

As Google's cybersecurity strategy goes, so will the sales of the Google Cloud Platform. Convincing enterprise CISOs and CIOs to replace or extend their tech stack and make it Google-centric isn't the answer. The answer is recognizing how chaotic, diverse, and unpredictable today's cybersecurity threatscape is, and building more apps, platforms, and adaptive tools that learn fast and thwart breaches.

Getting integration right is just part of the challenge. The far more challenging aspect is how to close the widening cybersecurity gaps all organizations face — not only large-scale enterprises — without requiring a Google-dominated tech stack to achieve it.





The future of ‘Minecraft’ includes swamps, scary monsters and a Game Pass bundle

On Saturday, Mojang held its annual Minecraft Live fan convention. As in years past, the event saw the studio detail the future of its immensely popular sandbox game. And if you’re a fan of Minecraft, the livestream did not disappoint.   

The studio kicked off the event with the announcement of The Wild Update. Set to come out sometime in 2022, Mojang promises this latest DLC will change how players explore and interact with the game’s overworld. The update will introduce an entirely new swamp biome that includes mangroves players can pick fruit from and replant to nurture new plants.

The Deep Dark, which was previously planned for 2021, will now launch in 2022 alongside The Wild Update. First announced at Minecraft Live 2020, the DLC adds the Warden, a new enemy character that is one of the game's scariest yet. Players who brave the DLC will find special new items only available in the Deep Dark.

In the meantime, fans can look forward to part two of the Caves and Cliffs update coming out later this year. In the first half of 2021, Mojang made the decision to split the update into two parts due to the complexity of the included features. At Minecraft Live, the studio said that was the right decision, in part because it allowed the team to take into consideration community feedback. As previously announced, the update will include expanded caves and biomes. It will also increase the height and depth limit of worlds.

Mojang hasn't forgotten about Minecraft Dungeons. In December, the studio will introduce a new feature called Seasonal Adventures. Each week, you and your friends will have the chance to take on new challenges. As you complete them, you'll earn progress toward a seasonal progression track that unlocks rewards like new skins, pets, and emotes. Season One, The Cloudy Climb, will add a new Tower feature and adventure hub for players to explore.

Now is also the perfect time to either try Minecraft for the first time or return to the game after an extended break. On November 2nd, Microsoft will release a Minecraft bundle for Xbox Game Pass on PC. The pack includes both the Bedrock and Java editions of the game, with support for a single MSA log-in across both.

The updates come at a time when Minecraft has never been more popular. Just this past August, Mojang said more than 140 million players logged in to play the game, representing a new milestone for the title. Minecraft Live then was about positioning the game for a future where it continues to grow.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.



GPUs Could Become Trojan Horses for Future Cyberattacks


The graphics card inside your computer is a powerful tool for gaming and creative work, but it can also potentially serve as a Trojan horse for malware. Cybercriminals are finding ways to exploit graphics cards and their VRAM to inject malicious code into your system. The approach is claimed to have worked during a proof-of-concept hack on both discrete and integrated GPUs from AMD, Intel, and Nvidia.

Because antivirus software today cannot scan a graphics card's dedicated video RAM (VRAM), hackers are targeting GPUs to carry out their dirty work. By contrast, conventional methods that target the system's main memory would trigger the antivirus software.

According to Bleeping Computer, a brief description of the hack was posted on a hacker forum, where one seller was trying to sell his proof-of-concept method to exploit the VRAM on GPUs. The seller stated that the method worked on Intel's integrated UHD 620 and 630 graphics, as well as discrete solutions including the AMD Radeon RX 5700 and Nvidia GeForce GTX 1650. It's unclear if the attack would also work on other GPUs, like the recent Radeon RX 6000 series from AMD and the GeForce RTX 3000 series from Nvidia, both of which have seen high demand and short supply.

The listing to sell the proof of concept was posted on August 8, and the method of exploit was sold on August 25, though details about the transactions were not revealed. It’s unknown who purchased the hack or how much was paid.

Though specifics about the exploit that was sold to other hackers are not known, cybersecurity researchers at VX-Underground stated that the method allowed the code to be run by the GPU and in the VRAM rather than by the CPU. The researchers said that they will be demonstrating the method of exploit soon.

While targeting the GPU for cyberattacks may be different from traditional hacks today, the method isn’t entirely novel. This latest exploit follows a similar proof of concept from six years ago known as JellyFish.

With the JellyFish proof of concept, researchers exploited the graphics card with a GPU-based keylogger. The seller of this latest GPU-based hack denied similarities behind his method and JellyFish, Bleeping Computer stated.

Given that your GPU could potentially be exploited by a malicious actor in the future to hide and execute malware, PC owners, gamers, and creators should stay vigilant of suspicious emails, links, files, and downloads. This is especially pertinent given that malware that sits in VRAM can be undetectable by antivirus software.



Redesigned Windows 11 Apps Preview The Future of Windows

Microsoft today announced updates for some of the built-in apps in Windows 11. The updates themselves aren’t massive in terms of features, but keep the design philosophy in line with the new visual aesthetic of Windows 11, the upcoming operating system update that’s currently in beta.

The apps being updated include Calculator, Mail and Calendar, and the Snipping Tool, each with a new update that you can check out now.

Snipping Tool

Longtime Windows 10 users will know that Microsoft has been promising the Snipping Tool is “moving to a new home” for a while now. Microsoft’s Panos Panay also teased a first look at the new Snipping Tool just last week.

Now fully revealed, the new app merges the classic Snipping Tool and Snip & Sketch into a single Snipping Tool app. It adopts the updated Fluent Design language, with rounded corners and an emphasis on touch-enabled controls. It will also honor your choice of light or dark theme, or allow you to set the app's theme independently.

Microsoft notes that if you have notifications turned off or Focus Assist turned on, you won’t be notified when you take a screenshot. That said, the company promises this will be fixed in a future update.

The app supports the Win + Shift + S keyboard shortcut to take a screenshot and introduces a new settings page. Of course, all of the editing features are here such as annotations and cropping.

Mail and Calendar

The new Mail app in Windows 11.

The Mail and Calendar apps were updated to support Fluent Design and themes. The core functionality remains unchanged for now, but the rounded edges and clean design should make writing emails and scheduling meetings a bit more pleasant.

Strangely enough, these apps still remain distinct from Outlook, which means Microsoft will keep supporting both into the future for now. Time will tell how Microsoft manages the two applications as it continues to tie Windows and Microsoft 365 more closely together.


The new version of the Calculator app in Windows 11.

The new Calculator app has the new Windows 11 design language, including the ability to apply themes. Like the new Snipping Tool, it lets you apply a theme separate from Windows itself. The emphasis on touch support really shines here, with larger touch targets to press when using Windows 11 on a touchscreen. Microsoft says the updated app was written in C# to encourage enterprising software developers to contribute on GitHub.

Beyond that, the Calculator functions as it normally does with the ability to use standard or scientific modes, plot equations on a graph, convert currencies, and even switch on a special “Programmer Mode” for coders and engineers.

Microsoft launched the early Windows 11 preview in late June for insiders. Enthusiasts were able to get an early look at the changes Microsoft is making and test out the updated operating system for themselves. Since then, the company has been steadily adding features such as an improved search box in the Start Menu. Microsoft also cleaned up the context menus, adding copy and paste functionality right into the menu, as well as the ability to “group” commands for easier navigation.

You can try out the updated apps now if you’re a part of the Windows 11 Dev Channel. If you have yet to try Windows 11 for yourself, we’ve built a guide on how to install the Windows 11 Preview build. Just know the typical caveats apply when installing beta software.
