
Deep Dive: How synthetic data can enhance AR/VR and the metaverse



The metaverse has captivated our collective imagination. The exponential growth of internet-connected devices and virtual content is preparing the metaverse for mainstream adoption, pushing businesses to go beyond traditional approaches to creating metaverse content. However, next-generation technologies such as the metaverse, which employs artificial intelligence (AI) and machine learning (ML), rely on enormous datasets to function effectively.

This reliance on large datasets brings new challenges. Technology users have become more conscious of how their sensitive personal data is acquired, stored and used, resulting in regulations designed to prevent organizations from using personal data without explicit permission.

Without large amounts of accurate data, it’s impossible to train or develop AI/ML models, which severely limits metaverse development. As this quandary becomes more pressing, synthetic data is gaining traction as a solution.

In fact, according to Gartner, by 2024, 60% of the data required for AI and analytics projects will be generated synthetically.

Machine learning algorithms generate synthetic data by ingesting real data, training on its behavioral patterns and producing simulated data that retains the statistical properties of the original dataset. Such data can replicate real-world circumstances and, unlike standard anonymized datasets, it isn’t vulnerable to the same flaws as real data.
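As a toy illustration of that idea, the sketch below (Python, with made-up numbers) fits a simple statistical model to a stand-in "real" dataset and samples a new synthetic dataset that matches its mean and covariance without copying any individual record. Real synthetic-data systems use far richer generative models; this only shows the learn-then-sample structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "real" dataset: 10,000 records with two correlated features.
real = rng.multivariate_normal(mean=[5.0, 10.0],
                               cov=[[2.0, 1.2], [1.2, 3.0]],
                               size=10_000)

# Learn the statistical properties of the real data...
mean_est = real.mean(axis=0)
cov_est = np.cov(real, rowvar=False)

# ...then sample a brand-new synthetic dataset from the learned model.
# No row of `synthetic` corresponds to any individual row of `real`.
synthetic = rng.multivariate_normal(mean_est, cov_est, size=10_000)

# The synthetic set preserves the original's statistics.
assert np.allclose(real.mean(axis=0), synthetic.mean(axis=0), atol=0.1)
assert np.allclose(np.cov(real, rowvar=False),
                   np.cov(synthetic, rowvar=False), atol=0.2)
```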

Reimagining digital worlds with synthetic data 

As AR/VR and metaverse developments progress towards more accurate digital environments, they now require new capabilities for humans to interact seamlessly with the digital world. This includes the ability to interact with virtual objects, on-device rendering optimization using accurate eye-gaze estimation, realistic user avatar representation and the creation of a solid 3D digital overlay on top of the actual environment. ML models learn 3D structures such as meshes, morphable models and surface normals from photographs, and obtaining such visual data to train these AI models is challenging.

Training a 3D model requires a large quantity of face and full body data, including precise 3D annotation. The model also must be taught to perform tasks such as hand pose and mesh estimation, body pose estimation, gaze analysis, 3D environment reconstruction and codec avatar synthesis.

“The metaverse will be powered by new and powerful computer vision machine learning models that can understand the 3D space around a user, capture motion accurately, understand gestures and interactions, and translate emotion, speech, and facial details to photorealistic avatars,” Yashar Behzadi, CEO and founder of Synthesis AI, told VentureBeat.  

 “To build these, foundational models will require large amounts of data with rich 3D labels,” Behzadi said.  

An example of rendering gesture estimation for digital avatars. Source: Synthesis AI

For these reasons, the metaverse is experiencing a paradigm shift — moving away from modeling and toward a data-centric approach to development. Rather than making incremental improvements to an algorithm or model, researchers can optimize a metaverse’s AI model performance much more effectively by improving the quality of the training data.

“Conventional approaches to building computer vision rely on human annotators who cannot provide the required labels. However, synthetic data, or computer-generated data that mimics reality, has proven a promising new approach,” said Behzadi.

Using synthetic data, companies can generate customizable data that can make projects run more efficiently as it can be easily distributed between creative teams without worrying about complying with privacy laws. This provides greater autonomy, enabling developers to be more efficient and focus on revenue-driving tasks. 

Behzadi says he believes coupling cinematic visual effects technologies with generative AI models will allow synthetic data technologies to provide vast amounts of diverse and perfectly labeled data to power the metaverse.

To enhance user experience, hardware devices used to step into the metaverse play an equally important role. However, hardware has to be supported by software that makes the transition between the real and virtual worlds seamless, and this would be impossible without computer vision. 

To function properly, AR/VR hardware needs to understand its position in the real world to present users with a detailed and accurate 3D map of the virtual environment. Therefore, gaze estimation (i.e., determining where a person is looking from an image of their face and eyes) is a crucial problem for current AR and VR devices. In particular, VR depends heavily on foveated rendering, a technique in which the image at the center of the field of view is produced in high resolution and excellent detail, while the image on the periphery progressively deteriorates.

Eye-gaze estimation and tracking architecture for VR devices deploys foveated rendering: the image at the center of the field of view is produced in high resolution, while the image on the periphery progressively deteriorates for more efficient performance. Source: Synthesis AI
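The idea behind foveated rendering can be sketched in a few lines. The following Python/NumPy example is illustrative only; the gaze point, foveal radius and level formula are assumptions, not any vendor's actual pipeline. Given a gaze point, it assigns each pixel a detail level that grows with distance from the fovea, so a renderer could spend full resolution only where the user is actually looking.

```python
import numpy as np

def foveation_levels(h, w, gaze_xy, fovea_radius=0.15):
    """Assign each pixel a detail level: 0 (full resolution) inside the
    fovea around the gaze point, coarser levels farther out."""
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    # Normalized distance of every pixel from the gaze point.
    d = np.hypot(xs / w - gx, ys / h - gy)
    # Detail level grows stepwise with distance beyond the foveal radius.
    return np.clip((d - fovea_radius) / fovea_radius, 0, None).astype(int)

levels = foveation_levels(90, 160, gaze_xy=(0.5, 0.5))
center_level = levels[45, 80]   # at the gaze point: level 0, full detail
corner_level = levels[0, 0]     # far periphery: a coarser level
```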

According to Richard Kerris, vice president of the Omniverse development platform at NVIDIA, synthetic data generation can act as a remedy for such cases, as it can provide visually accurate examples of use cases when interacting with objects or constructing environments for training. 

“Synthetic data generated with simulation expedites AR/VR application development by providing continuous development integration and testing workflows,” Kerris told VentureBeat. “Furthermore, when created from the digital twin of the actual world, such data can help train AIs for various near-field sensors that are invisible to human eyes, in addition to improving the tracking accuracies of location sensors.”

When entering virtual reality, one needs to be represented by an avatar for an immersive virtual social experience. Future metaverse environments would need photorealistic virtual avatars that represent real people and can capture their poses. However, constructing such an avatar is a tricky computer vision problem, which is now being addressed by the use of synthetic data. 

Kerris explained that the biggest challenge for virtual avatars is how highly personalized they are. This generation of users wants a diverse variety of avatars with high fidelity, along with accessories like clothes and hairstyles, and related emotions, without compromising privacy.

“Procedural generation of diverse digital human characters at a large scale can create endlessly different human poses and animate characters for specific use cases. Procedural generation using synthetic data helps address these many styles of avatars,” Kerris said.

Identifying objects with computer vision

For estimating the position of 3D objects and their material properties in digital worlds such as the metaverse, light must interact with the object and its environment to generate an effect similar to the real world. Therefore, AI-based computer vision models for the metaverse require understanding the object’s surfaces to render them accurately within the 3D environment.

According to Swapnil Srivastava, global head of data and analytics at Evalueserve, by using synthetic data, AI models can make predictions and track more realistically across body types, lighting/illumination, backgrounds and environments, among other factors.

“Metaverse/omniverse or similar ecosystems will depend highly on photorealistic expressive and behavioral humans, now achievable with synthetic data. It is humanly impossible to annotate 2D and 3D images at a pixel-perfect scale. With synthetic data, this technological and physical barrier is bridged, allowing for accurate annotation, diversity, and customization while ensuring realism,” Srivastava told VentureBeat. 

Gesture recognition is another primary mechanism for interacting with virtual worlds. However, building models for accurate hand tracking is intricate, given the complexity of the hands and the need for 3D positional tracking. Further complicating the task is the need to capture data that accurately represents the diversity of users, from skin tone to the presence of rings, watches, shirt sleeves and more. 

Behzadi says that the industry is now using synthetic data to train hand-tracking systems to overcome such challenges.

“By leveraging 3D parametric hand models, companies can create vast amounts of accurately 3D labeled data across demographics, confounds, camera viewpoints and environments,” Behzadi said. 

“Data can then be produced across environments and camera positions/types for unprecedented diversity since the data generated has no underlying privacy concerns. This level of detail is orders of magnitude greater than what can be provided by humans and is enabling a greater level of realism to power the metaverse,” he added.
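Domain randomization of this kind can be sketched as follows. Everything below (the attribute lists, ranges and field names) is hypothetical and meant only to illustrate the structure: a generator varies demographics, accessories, camera pose and lighting, while the 3D labels are known exactly by construction rather than hand-annotated.

```python
import random

random.seed(7)

# Hypothetical randomization ranges; illustrative only.
SKIN_TONES = ["I", "II", "III", "IV", "V", "VI"]   # Fitzpatrick scale
ACCESSORIES = [None, "ring", "watch", "shirt_sleeve"]

def sample_hand_scene():
    """Draw one randomized specification for a synthetic hand image.
    Because the scene is generated, its 3D labels (joint angles,
    camera pose) are known exactly, with no annotation step."""
    return {
        "skin_tone": random.choice(SKIN_TONES),
        "accessory": random.choice(ACCESSORIES),
        "joint_angles_deg": [round(random.uniform(0.0, 90.0), 1)
                             for _ in range(20)],
        "camera_azimuth_deg": round(random.uniform(0.0, 360.0), 1),
        "lighting_lux": round(random.uniform(100, 2000)),
    }

# Each record would drive one render; scale is limited only by compute.
dataset = [sample_hand_scene() for _ in range(1_000)]
```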

Srivastava said that compared to the current process, the metaverse will collect more personal data like facial features, body gestures, health, financial, social preference, and biometrics, among many others. 

“Protecting these personal data points should be the highest priority. Organizations need effective data governance and security policies, as well as a consent governance process. Ensuring ethics in AI would be very important to scaling effectiveness in the metaverse while creating responsible data for training, storing, and deploying models in production,” he said. 

Similarly, Behzadi said that synthetic data technologies will allow building more inclusive models in  privacy-compliant and ethical ways. However, because  the concept is new, broad adoption will require education. 

“The metaverse is a broad and evolving term, but I think we can expect new and deeply immersive experiences — whether it’s for social interactions, reimagining consumer and shopping experiences, new types of media, or applications we have yet to imagine. New initiatives like OpenSynthetics.com are a step in the right direction to help build a community of researchers and industrial partners to advance the technology,” said Behzadi.

Creating simulation-ready data sets is challenging for companies wanting to use synthetic data generation to build and operate virtual worlds in the metaverse. Kerris says that off-the-shelf 3D assets aren’t enough to implement accurate training paradigms. 

“These data sets must have the information and characteristics that make them useful. For example, weight, friction and other factors must be included in the asset for them to be useful in training,” Kerris said. “We can expect an increased set of sim-ready libraries from companies, which will help accelerate the use cases for synthetic data generation in metaverse applications, for industrial use cases like robotics and digital twins.”




Intel’s mid-range Arc A770 GPU arrives October 12th for $329

Intel’s long-promised desktop GPUs are finally close to reaching gamers worldwide. As part of its flurry of announcements, Intel has confirmed the Arc A770 GPU will be available in a range of models on October 12th starting at $329. As the price suggests, this is aimed squarely at the GeForce RTX 3060, Radeon RX 6650 XT and other mid-tier video cards — Intel claims both “1440p gaming performance” and up to 65 percent stronger “peak” ray tracing performance than rivals, although it didn’t name specific hardware.

Like competitors, Intel is counting as much on AI as it is raw computing power. The Arc A770 supports Xe Super Sampling (XeSS) that, like NVIDIA’s DLSS or AMD’s FidelityFX Super Resolution, uses AI upscaling to boost frame rates at higher resolutions. It supports Intel’s dedicated and integrated GPUs, and should be available in over 20 games by the end of 2022.
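To see what this class of upscaler buys, here is a deliberately naive stand-in: render at low resolution, then upscale for display. XeSS, DLSS and FSR replace the trivial nearest-neighbor step below with learned or hand-tuned reconstruction; this sketch only shows the render-low, display-high structure that makes the frame-rate gains possible.

```python
import numpy as np

def upscale_nearest(frame, factor=2):
    """Duplicate each pixel `factor` times along both axes. Real upscalers
    (XeSS/DLSS/FSR) replace this crude step with far smarter reconstruction."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# Render a cheap low-resolution "frame" (3x4), then display it at 6x8.
low_res = np.arange(12).reshape(3, 4)
high_res = upscale_nearest(low_res, factor=2)

assert high_res.shape == (6, 8)            # 4x the output pixels...
assert low_res.size == high_res.size // 4  # ...for the same render cost
```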

Tom’s Hardware notes that Intel’s first mainstream desktop GPU, the Arc A380, was exclusive to China. This is the first chance many outside of that country will have to buy a discrete Intel graphics card.

Intel is delivering the A770 later than expected, having promised the GPU for this summer. Even so, the timing might be apt. NVIDIA is currently focusing its attention on the high-end with the RTX 40 series, while AMD hasn’t done much more than speed-bump the RX 6000 line. The A770 may stand out as a viable option for budget-conscious gamers, particularly when GPUs like the RTX 3060 still have higher official prices.




Want a 72% GPU boost for free? AMD just delivered one

AMD has just revealed that its new AMD Software: Pro Edition 22.Q3 driver can increase GPU performance by up to 72%. With this update, AMD addresses a major pain point for its customers: subpar performance in OpenGL applications.

While the driver is aimed at AMD’s GPU range for professionals, consumer graphics cards can benefit, too, and the gains even stretch to gaming. Here’s what we know about the new Radeon driver and how to get it.


To make this kind of performance gain possible, AMD had to change the architecture of its Pro Edition driver. Historically, AMD has received criticism over the state of its OpenGL drivers; for years, this was an area where Nvidia did a better job, both in professional software and in gaming scenarios. After several attempts, though, it seems that AMD may have finally hit the jackpot with its OpenGL driver.

According to a blog post from AMD, the new driver improves GPU performance by as much as 115%. The metrics are all based on AMD’s professional GPUs, and the company compares its Radeon W6800 to the Nvidia RTX A5000, claiming that its own graphics card managed to beat Nvidia in several applications.

AMD itself calls this driver update a “giant leap for OpenGL-based applications.” In the blog post, AMD stated: “The release of AMD Software: PRO Edition 22.Q3 […] brings our most significant performance advancements to date in all OpenGL applications and many of your other favorite creating, designing, modeling, and CAD software applications. The latest improvements are edging us toward and beyond the competition, such as Autodesk Maya, where we see improvements up to 41% greater on the AMD Radeon PRO W6800 GPU versus an Nvidia RTX A5000 GPU.”

As mentioned, it’s not just the professionals who will benefit from this new change, because AMD is adding these updates to its consumer-grade Adrenalin drivers, too. This means that if you use OpenGL-based apps or play such games, you should be able to see the performance uplift for yourself. In addition, if you have one of the best graphics cards from AMD and you use it to run these professional visualization apps, you can also try out the new drivers for yourself.

AMD first introduced this new architecture in July 2022, and it claims to have received great feedback, so the update seems worth a try. This isn’t the first time AMD has made a huge leap with a single driver update — just recently, a driver alone delivered a 92% performance boost, too.

You can download the new AMD Software: Pro Edition 22.Q3 driver or the latest Adrenalin drivers on AMD’s official website.



10 top artificial intelligence (AI) applications in healthcare



Artificial intelligence (AI) is being applied across the healthcare spectrum — from administration to patient interaction and medical research, diagnosis and treatment.

What is healthcare AI?

Healthcare AI is the application of artificial intelligence to medical services and the administration or delivery of medical services. Machine learning (ML), large and often unstructured datasets, advanced sensors, natural language processing (NLP) and robotics are all being used in a growing number of healthcare sectors. 

Along with great promise, the technology offers significant potential concerns — including the abuse that can come from the centralization and digitalization of patient data as well as possible linkages with nanomedicine or universal biometric IDs. Equity and bias have both also been concerns in some early AI applications, but the technology may also be able to improve healthcare equity.

Although deployment of AI in the healthcare sector has truly just begun, it is becoming more commonly used. Gartner pegged 2021 global healthcare IT spending at $140 billion, with enterprises listing AI and robotic process automation (RPA) as their lead spending priorities.


Healthcare costs approached a fifth of the total U.S. economy in 2020 (an estimated 19.7%, or $4.1 trillion). Over half of that spending, for the first time, was racked up by the government, where fraud is especially high.

Thus, the potential value of healthcare AI, from administration to medical AI, is vast.

10 top applications of artificial intelligence in healthcare in 2022

Here are 10 of the top areas where healthcare AI use cases are being developed and deployed today. 

1. Healthcare administration

Administrative expenses are estimated to comprise 15% to 25% of total healthcare costs. Tools to improve and streamline administration are valuable for insurers, payers and providers alike. 

Identifying and cutting down fraud, however, may provide the most immediate return, as healthcare fraud can happen on many levels and be committed by various parties. In some of the worst cases, fraud may cause insurers to get billed for services not rendered or result in surgeons performing unnecessary operations to get higher insurance payments. Insurers may also get billed for defective devices or test kits.

AI can be a useful tool in stopping fraud before it happens. Just as banks commonly use algorithms to detect unusual transactions, health insurers can do the same.
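As a rough illustration of the banking analogy, the sketch below flags statistically unusual claim amounts with a simple z-score test. The claim values and threshold are invented; production fraud-detection systems use far richer features and models than a single-variable outlier check.

```python
import statistics

# Invented claim amounts in dollars; one is wildly out of line.
claims = [120, 135, 110, 150, 125, 140, 118, 4_800, 130, 122]

mean = statistics.mean(claims)
stdev = statistics.stdev(claims)

# Flag any claim more than 2.5 standard deviations from the mean
# for human review, just as a bank flags anomalous transactions.
flagged = [c for c in claims if abs(c - mean) / stdev > 2.5]
```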

2. Public health

AI is already being applied across the public health sector, including:

  • ML algorithms are being applied to large public health datasets, and the CDC has compiled some of the many ways AI has been applied in analyzing public health for COVID-19 and beyond. 
  • NLP is being applied in public health contexts.
  • Increasingly, diagnostic imaging data is being harnessed for population-level analysis and predictions.
  • Lirio applies consumer data science and behavioral “nudging” techniques to create “precision,” or personalized, nudges to prompt healthcare visits, medical compliance and the like.

3. Medical research

The applications for AI in medical research are also expansive. Examples range from new and repurposed drug discovery to clinical trials, including:

  • Finding new drugs to treat conditions can be incredibly complicated; in silico, computer-aided drug design (CADD) is its own complex field.
  • In some cases, the goal is to repurpose existing drugs. One recent example came when AI analyzed cell images to see which drugs were most effective for patients with neurodegenerative diseases. Neurons change shape when positively responding to these treatments. However, conventional computers are too slow to spot these differences.
  • Pharma provider Bayer believes AI could enhance clinical trials by creating a virtual control group using medical database information. They’re exploring other AI clinical trial applications, too, that could make these investigations safer and more effective.

4. Medical training

AI may also alter how medical school students receive parts of their education, as in cases like the following:

  • One example gave students feedback from an AI tutor as they learned to remove brain tumors. The system had a machine learning algorithm that taught students safe, effective techniques, then critiqued their performance. People learned skills 2.6 times faster and performed 36% better than those not taught with AI.
  • Organizations in the U.S. and the U.K. have also deployed AI-based virtual patients to facilitate virtual and remote training. That approach was particularly useful when the COVID-19 pandemic halted group gatherings. The AI supported practicing several skills, like comforting distressed patients or delivering bad news.

5. Medical professional support

 AI is also deployed to support medical professionals in clinical settings, including the following: 

  • AI is applied to support intake professionals in medical facilities. One Stanford University pilot project uses algorithms to determine whether patients are at high enough risk to need ICU care, experience code-related events or require rapid response teams. The algorithms assess the likelihood of those events occurring within a six- to 18-hour window, helping physicians make more confident decisions.
  • AI-based applications are being developed to support nurses, with decision support, sensors to notify them of patient needs and robotic assistance in challenging or dangerous situations among the areas addressed.

6. Patient engagement

AI is also deployed to support patients directly:

  • Hospitals use AI chatbots to check in with patients and help them get necessary information faster. When Northwell Health implemented patient chats, there was a 94% engagement rate among those utilizing oncology services. Clinicians who tried the tool agreed it extended the care they delivered. Chatbots are able to check on patients’ symptoms, recoveries and more. Many people are also used to chatting by text, which increases adoption. Chatbots also reduce challenges patients may encounter while seeking care. People can use them to find hospitals or clinics, book appointments and describe needs.
  • Estimates suggest that as many as half of all patients don’t take medications as prescribed. However, AI can increase the chances of patients taking their medications as they should. Some platforms use smart algorithms to suggest when health professionals should engage with patients about compliance and through which channels. Medication reminder chatbots exist, too. In a recent example, researchers collaborated and used AI to assist with finding the best medications for people with Type 2 diabetes. The algorithms helped choose the right options for more than 83% of patients, even in cases where the people needed more than one medication simultaneously.

7. Remote medicine

Telemedicine in the form of virtual doctor visits has become increasingly common since the COVID-19 lockdowns. Beyond those, AI is supporting other forms of remote medicine as well, including:

  • VirtuSense applies predictive AI to remotely monitor and alert providers about high-risk changes that may precipitate a fall. 
  • Some facilities currently using AI for monitoring rely on it for conditions ranging from heart disease to diabetes. Hospitals also used this technology to oversee COVID-19 patients, making it easier to decide which could receive home care and which needed hospital treatment.

8. Diagnostics

AI is also utilized for diagnostics at healthcare centers, including the following:

  • One AI system used to spot breast cancer can detect current issues and a patient’s likelihood of developing the disease in the next several years.
  • Some applications of AI in healthcare detect mental ailments, too. Researchers have used trained algorithms to identify depressed people by listening to their voices or scanning their social media feeds, for example.

9. Surgery

AI does not eliminate surgical issues, but it can potentially reduce them while enhancing outcomes for patients and surgeons alike, as the following examples illustrate:

  • A startup called Theator recently raised $39.5 million in a series A funding round. The company has an AI video solution built to help surgeons see what went wrong and right during procedures. They can then study the footage to make improvements for the future.
  • Artificial intelligence applications in healthcare include surgical robots that are increasingly common in operating rooms. Many are minimally invasive and often achieve outcomes superior to non-robotic interventions. These uses of AI won’t replace humans’ surgical expertise, though they can work as surgeons’ partners, improving the likelihood of procedures succeeding.

10. Hospital care

Along with the above-described diagnostic use cases, clinicians also must meet patients’ physical needs and, more prosaically, stock supplies and deliver goods. AI-powered collaborative robots are starting to ease the burden. Gartner expects 50% of U.S. providers to invest in robotic process automation (RPA) by 2023. Some examples of RPA in hospitals include:

  • One hospital recently deployed five robots named Moxie. These machines will proactively determine when nurses need supplies or assistance with lab test logistics. They’ll then respond before the provider’s workload gets too intensive.

  • Aethon provides robots that support not only medical functions, but also tasks such as linen distribution and waste removal.




‘Overwatch 2’ moderation tools include voice chat transcriptions and SMS verification

Overwatch 2 is set to go live and free-to-play on October 4th, and in preparation for the big day, Blizzard has outlined a suite of moderation tools aimed at curbing abusive and disruptive player behavior. The new system will require a phone number to be linked to every account, and will introduce audio transcriptions of reported voice chat interactions, among other changes. Blizzard is calling the initiative Defense Matrix, named after D.Va’s hologram shield ability.

The phone-linking system, SMS Protect, means every Overwatch 2 player will need to connect a phone number to their Battle.net account, and that number can’t be used to operate or create another account. This makes it easier to enforce suspensions and bans, and makes it harder for players to cheat the matchmaking system. SMS Protect isn’t a new idea in the world of competitive online gaming, and it’s a proven way to reduce smurfing — a practice where skilled players create new accounts and creep into lower-tier matches, whether to boost their friends, avoid a ban or simply troll.

Another notable feature of Defense Matrix is the addition of audio transcriptions for problematic voice chat recordings and automated review tools for the resulting text. The transcription process relies on players reporting abusive speech as it happens — but once someone is reported, this system collects a temporary recording of the match’s voice chat and transcribes it to text. That text is then analyzed by Blizzard’s existing AI-driven abuse-detection tools. 

When it comes to the longevity of the recordings and text files, Blizzard said the following: “Once the audio recording has been transcribed to text, it’s quickly deleted as the file’s sole purpose is to identify potentially disruptive behavior. The text file is then deleted no later than 30 days after the audio transcription.”
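Blizzard's stated retention policy maps naturally onto a small data-lifecycle sketch. The Python below is purely illustrative; the field names and structure are assumptions, not Blizzard's implementation. It encodes the two rules from the quote: audio is dropped as soon as it is transcribed, and transcripts are purged within 30 days.

```python
from datetime import datetime, timedelta, timezone

TEXT_RETENTION = timedelta(days=30)  # per the stated policy

def process_report(now):
    """Transcribe a reported voice chat, then drop the audio immediately;
    only the text transcript is kept for later review."""
    record = {
        "audio": "<temporary voice-chat recording>",
        "text": "<transcript fed to abuse-detection models>",
        "transcribed_at": now,
    }
    record["audio"] = None  # audio deleted once transcription finishes
    return record

def purge_expired(records, now):
    """Delete transcripts older than the 30-day retention window."""
    return [r for r in records if now - r["transcribed_at"] < TEXT_RETENTION]

now = datetime(2022, 10, 4, tzinfo=timezone.utc)
records = [process_report(now - timedelta(days=45)),  # stale: purged
           process_report(now)]                       # fresh: kept
records = purge_expired(records, now)
```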

The studio said audio transcriptions will roll out in the weeks after launch. Additionally, the general chat feature won’t exist in Overwatch 2, leaving Twitch streamers one fewer outlet for their watch-me spam. Blizzard outlined the complete Defense Matrix strategy on the Overwatch blog, alongside checklists for existing and new players. October 2nd is the final full day to play the original Overwatch, and Overwatch 2 is scheduled to go live worldwide at 3pm ET on October 4th.




Tesla AI Day: How to watch and what to expect

Tesla is holding its AI Day today, helmed by CEO Elon Musk. It’s been a turbulent year for the divisive figure since the inaugural AI Day last year. Still, the event is expected to focus squarely on Tesla’s robotics and AI initiatives — not any of Musk’s personal controversies and side interests.

The topics to be discussed could range from advancements in self-driving cars to the first demo of Optimus, the company’s humanoid robot project.

How to watch Tesla AI Day

Tesla hasn’t publicly announced specifics of the event yet, but according to tickets that have been posted online, we may be in for something entirely different.

A digital ticket posted on Twitter reveals some juicy — and downright strange — details about the event. It’ll supposedly take place in Palo Alto and last from 5 p.m. PT to 11 p.m. PT, which is a very odd time to hold an event. Also, a six-hour event? Whew. The actual presentation likely won’t last the entirety of the event, but last year’s nearly three-hour runtime should give you an idea of what to expect.

AI Day 2022 on Sept 30 🤖 pic.twitter.com/S9LZ5SefUC

— Tesla (@Tesla) August 23, 2022

Either way, we’re expecting the event to be livestreamed on YouTube through Tesla’s page as it was last year. As of now, though, a live event hasn’t been posted yet, and with the mysterious nature of the event, anything’s possible at this point.

What to expect from Tesla’s AI Day

We don’t have an agenda for the event, so it’s hard to know quite what Tesla has in store. Obviously, we can expect Musk to share more about Tesla’s work on self-driving cars, specifically with FSD (Full Self-Driving), the software behind its driver assistance system. FSD is currently in beta for those willing to pay $15,000 to try it out on their Teslas and is expected to roll out later this year. So don’t be surprised if we get into some of the extreme technical details behind FSD and what it’ll be able to do.

We may also see Musk touch on the Tesla robotaxi idea, a concept that’s been around since 2016. The futuristic taxi, which may have no steering wheel or pedals, was last mentioned at Tesla’s first-quarter earnings call earlier this year.

Of course, Optimus is the project we’re all excited to see an update on. The humanoid robot was first previewed at last year’s AI Day, though it was more an idea than an actual product. This year, we’re all hoping to see this concept come to life with an actual functioning prototype available. AI Day was supposedly delayed just in time to get the humanoid robot prototype ready, so it’s a safe bet it’ll make an appearance one way or another.


As described at last year’s event, Optimus is a humanoid robot meant to replace “dangerous, menial, or boring tasks,” whether that’s in factories or in homes. Despite its intimidating appearance, Musk has said that Optimus will be friendly and would be easily overcome by a human, if it came down to it. Tesla has an obvious application for Optimus working in its own Tesla factories, which already contain some of the most advanced robotics on the planet — but it’s the more practical applications that have captured the interest of the wider world.

Will Tesla deliver on the exciting and possibly terrifying idea of Optimus? That’ll be the main topic of discussion coming out of Tesla’s AI Day.

Repost: Original Source and Author Link


Meta’s new Make-a-Video signals the next generative AI evolution



This morning Meta CEO Mark Zuckerberg posted on his Facebook page to announce Make-A-Video, a new AI system that allows users to turn text prompts, like “a teddy bear painting a self-portrait,” into short, high-quality, one-of-a-kind video clips.

Sound like DALL-E? That’s the idea: According to a press release, Make-A-Video builds on AI image generation technology (including Meta’s Make-A-Scene work from earlier this year) by “adding a layer of unsupervised learning that allows the system to understand motion in the physical world and apply it to traditional text-to-image generation.”

“This is pretty amazing progress,” Zuckerberg wrote in his post. “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.”
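Zuckerberg’s point about predicting how pixels change over time can be made concrete with a deliberately crude sketch. This is a toy in plain Python, nothing like Meta’s actual system: the “temporal model” here is just linear extrapolation from the last two frames, and the frames are invented grayscale grids.

```python
# Toy only: the crudest possible "temporal model" for video.
# Assume each pixel keeps changing at the rate observed between
# the two most recent frames, and clamp to the valid 0-255 range.

def extrapolate_next_frame(frame_a, frame_b):
    """Frames are 2D lists of grayscale values in [0, 255]."""
    return [
        [max(0, min(255, 2 * b - a)) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# A tiny 2x2 "clip" whose brightness rises by 10 per frame:
f0 = [[100, 100], [100, 100]]
f1 = [[110, 110], [110, 110]]
print(extrapolate_next_frame(f0, f1))  # [[120, 120], [120, 120]]
```

A real text-to-video model learns a vastly richer version of this step, predicting motion that stays consistent with both the previous frames and the text prompt rather than extrapolating each pixel independently.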

A year after DALL-E

It’s hard to believe that the original DALL-E was unveiled only in January 2021. 2022 has been the year of the text-to-image revolution, thanks to DALL-E 2, Midjourney, Stable Diffusion and other large generative models that let users create realistic images and art from natural-language prompts.

Event

MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

Register Here

Is Meta’s new Make-A-Video a sign that the next step of generative AI, text-to-video, is about to go mainstream? Given the sheer speed of text-to-image evolution this year — Midjourney even created controversy with an image that won an art competition at the Colorado State Fair — it certainly seems possible. A couple of weeks ago, video editing software company Runway released a promotional video teasing a new feature of its AI-powered web-based video editor that can edit video from written descriptions.

And the demand for text-to-video generators at the level of today’s text-to-image options is high, thanks to the need for video content across all channels — from social media advertising and video blogs to explainer videos.

Meta, for its part, seems confident, according to its research paper introducing Make-A-Video: “In all aspects, spatial and temporal resolution, faithfulness to text, and quality, we present state-of-the-art results in text-to-video generation, as determined by both qualitative and quantitative measures.”

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.



Netflix now lets you create your own gamertag

Netflix has launched the ability to create public handles for its games, laying the foundation for additional features that would make the service more social. People can use this public username across all its titles, allowing them to find friends (or to meet new ones) to play with in multiplayer games like Rival Pirates without having to reveal their Netflix name and profile icon. It’s also what will be displayed on leaderboards for single-player games, such as Dominoes Café and the platformer Lucky Luna.

As TechCrunch previously reported, there is code in the app suggesting that the company is also working on ways to let users invite each other to play games and show other people when they’re online. Netflix didn’t confirm that those features are underway, but Mobile Games Product Manager Sophia Yang said in the company’s announcement that the launch of game handles “is only the beginning in building a tailored game experience for our members around the world.” Yang added: “We’ll continue to adapt and evolve our service to meet the needs of our members…” Seeing as Netflix recently revealed that it’s going all in on games and is building its own studio in Helsinki, Finland, it wouldn’t come as a surprise for the company to roll out features that make its service more interactive.

To set a public nickname, Android users can select the games tab in the navigation bar and navigate to “Create your Netflix game handle.” iOS users will first have to download Rival Pirates or Lucky Luna and then launch the game to get a prompt asking them to create a handle. 




MacBooks vs. Windows laptops: How do you choose?

The MacBooks versus Windows laptops debate has been raging for decades, but never has it been this intense or important. New advances in chip technology are propelling even entry-level MacBooks to high-performance targets, and a shift away from cheap plastics evens the playing field between these two platforms. Both Windows 11 and MacOS are intuitive and clean operating systems. But where they differ comes down to one key element: their ecosystems.

What this means for you is that you must carefully decide where you’re going to sink your hard-earned money. The laptop you choose now will greatly influence which accessories you buy, which apps you use, and even what kind of phone you carry. Your entire workflow will depend on the platform you go with, from how you manage windows to keyboard shortcuts. It’s not a light decision.


The 2020s have thus far been incredible for computing. This new era has also been incredibly difficult for consumers because ecosystems rule everything. It’s not easy to jump from one to the other once you’ve sunk money and time into your chosen ecosystem. It’s time to choose wisely.

Build quality

There’s no denying that Apple hits physical hardware out of the park. From a purely aesthetic (and subjective) viewpoint, MacBooks are gorgeous. They look great. They feel great. That boxy industrial-minimalist design feels as if it’s worth $2,000 or more. Ever since the debut of the M2 MacBook Air in June 2022, every MacBook has followed the same design.

Don’t forget the actual quality you’ll get with a Mac. Take the hinges as an example. You can open up any Mac with one hand. The screen simply opens up while the base sits as is. Also, the screen stays firmly in whichever position you left it in. There’s no wobble. There’s no dropping. Apple has nailed the hinges, and no Windows OEM comes close.

You’ll also get an amazing keyboard now that Apple has ditched those awful butterfly keys. You’ll appreciate this keyboard if you’re a coder or a writer. There’s nothing else like it available on a laptop. The same goes with the Mac trackpad, which is hands-down the best of any laptop. The haptic feedback, the accuracy, the swipe gestures … no Windows laptop has a trackpad like a MacBook.

But Windows laptops have an ace up their sleeve when it comes to build quality: variety. The top Windows laptops share the same sort of industrial design language as MacBooks, whether that’s the Asus ZenBook, the Dell XPS line, or even Microsoft’s own Surface Laptops. Taken as a whole, you get far more choice in design and color than you do with MacBooks. Some of them even have decent trackpads, such as the Surface Laptop, though still not on the same level as a MacBook’s.

What you won’t get on any Windows laptop is the same attention to the hinges. This isn’t a game changer, but just be prepared to use two hands to open your computer.

MacBooks are generally the superior laptop when it comes to build quality, at least when you consider the entire range of MacBooks and how uniform that quality is.


Internals

You need to consider what kind of computing power you want from your laptop when choosing between MacBooks and Windows laptops. MacBooks use ARM chips, and these are getting more powerful by the day. They’re able to combine graphics and processing into one small chip, which gives you extraordinary battery life. You’ll barely hear the fans spool up.

Windows laptops, on the other hand, mostly use Intel or AMD processors and Nvidia or AMD graphics cards. This means a greater power draw, lower battery life, and fans that run far more often. But it also means more versatility: you can play more games and run more programs thanks to the x86 architecture. Some Windows laptops are switching to ARM chips as well, so this divide isn’t as stark as it used to be.
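If you’re curious which camp your current machine falls into, a quick terminal check (an aside, not something from the article) reports the CPU architecture:

```shell
# Print the machine's CPU architecture.
# Apple Silicon Macs report "arm64"; most Windows laptops (via WSL,
# Git Bash, or a similar Unix shell) report "x86_64".
uname -m
```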

Both MacBooks and Windows laptops offer incredible computing power, and there is no real difference in what they’re capable of, although you’ll have greater access to higher-end graphics on Windows machines.

Operating system

The meat on the bones of the MacBooks versus Windows laptops debate is their respective operating systems. MacOS is a gorgeous and mature UNIX-based system. It hasn’t changed much in the past 20 years, aside from some tweaks and visual overhauls.

The entire OS is uniform across programs. This means the menu items, buttons, and overall look and feel of every app are consistent. For example, you’ll always find the File menu in the same place, no matter which program you use.

Windows is a completely different beast. It has undergone several major overhauls in recent years, with Windows 10 and then Windows 11 being the most significant. Windows 11 is a little MacOS-like, with its centered taskbar, rounded corners, and slick, glassy look. Both Windows 10 and Windows 11 look as nice as MacOS, so it’s under the hood where you’ll find the biggest differences.

A lot of past versions of Windows have been left in the system, however. You’ll find subsystem control panels dating back to the Windows XP era. The Windows code itself is a Frankenstein’s monster of years of different versions all mashed together. Each app you use will have its own look and feel. Menu items can be wherever the developer wants to put them. Windows 11 is trying to bring some conformity to the overall system, but it’s still a jungle out there.

Both operating systems are equally good from a visual perspective, but MacOS is clearly superior when it comes to ease of use and its uniform, UNIX environment.


Windows

There are some annoyances with MacOS, beginning with the way windows behave. The top-corner close button closes the window but leaves the app running in the background. You need to use Command + Q, or right-click the app’s Dock icon and choose Quit, to close it out completely. Windows users coming over to MacOS will find this extremely frustrating.

Window management overall is a painful experience on MacOS. You can work in two windows side by side, but you’ll need a third-party app if you want to do much else. Also, Command + Tab only switches between applications, not between two windows of the same app; for those, you have to cycle with Command + ` or right-click the Dock icon and carefully pick the window you want.

The same goes for trying to see how many instances of an app you have open. You’ll only see a small black dot below the icon, but you won’t see that you have four Safari windows open, for example.

Window management is phenomenal in Windows 11. There are multiple built-in snap layouts you can use. You can snap windows to the sides of the screen simply by dragging them, and you can minimize every other window by grabbing one with your mouse and vigorously shaking it.

Every instance of an app you have running shows up on the taskbar. If you have four Edge windows open, each will have its own icon on the taskbar, and you can easily find the one you need. You can press Alt + Tab to cycle between them. If you have multiple monitors, you can drag individual windows to the second monitor, and the taskbar icon will move over with it so you know which window is open on which monitor.

And the best part of Windows? Clicking Close actually closes the app. Revolutionary, right?

Despite the things MacOS gets right, Windows is clearly the superior OS when it comes to windows management.


Ecosystem

Both Windows laptops and MacBooks come with a healthy ecosystem of first-party apps, such as email, calendars, note-taking, and reminders. Apple’s offerings are still barebones, though. Notes and Reminders have come a long way in the past five years but still don’t match up to many third-party apps, and Apple Mail remains dismal despite the modest updates it received in MacOS Ventura.

Where MacBooks really shine is in the Apple ecosystem. Using an iPhone, iPad, Apple Watch, AirPods — anything Apple, really — with a MacBook is a joy. You can AirDrop large files from one device to another nearly instantly. Your AirPods connect without you needing to lift a finger. Continuity allows such perks as copying a link on your iPhone and simply pasting it on your MacBook. iMessage on Mac is great, and Apple Keychain means your passwords carry over across all your Apple devices.

The downside to this ecosystem is Apple itself. You’ll be locked into Apple’s narrow view of what an ecosystem should be like. Android won’t work with your Mac. Windows won’t work. And if you depend on Apple’s first-party apps, you will be limited to using only Apple devices. The iCloud.com website is barebones, and you won’t be able to do much else off the platform.

Windows laptops, on the other hand, are much more open, and this is where the entire Windows platform really shines. Thanks to Microsoft’s Your Phone app for Android, you can get a lot of the same functionality on your Windows laptop as you would on a MacBook, such as messages and file transfers (up to a limit). Samsung phones, in particular, work extremely well with Windows.

You’ll also get Microsoft’s excellent first-party apps built right in. Microsoft’s productivity software is light years ahead of Apple’s. Even the base Windows Mail client is more functional and easier to use than Apple’s horrible Mail app. OneNote is a beast and possibly one of the greatest productivity apps ever created.

Best of all, every Microsoft app is available on every platform. You can get apps for your Apple devices, and the Microsoft web apps are just as powerful.

Finally, there’s one area where a Windows laptop is hands-down superior to anything Apple can offer, and that is gaming. You simply cannot enjoy gaming on a Mac the way you can on a Windows laptop. Sure, there are some big titles available on MacBooks, and you can cloud-game with Game Pass Ultimate, Stadia, and GeForce Now. But you won’t get the same functionality or smoothness, and cloud gaming offers no offline play at all.

The Windows ecosystem is superior to Apple’s ecosystem. That sounds counterintuitive because Apple is famous for its ecosystem. However, Apple is too locked down and too dependent on Apple-only devices for the real world. It is a carefully manicured garden rather than a true ecosystem filled with diversity.

Windows, on the other hand, hits the entire concept of an ecosystem out of the park. Although the more-curated experience is better for some, the fact is that you can use more devices and more apps with a Windows laptop than with a MacBook.


How to choose

MacBooks are superior when it comes to build quality and the UNIX-based MacOS operating system. Windows laptops take everything else, including the ecosystem.

The only reason you should choose a MacBook over a Windows laptop is if you want to be comfortable inside that Apple garden. You give up the diversity of accessories and apps, as well as the ability to really game, but you get a polished, good-looking computing experience.

Everyone else should get a Windows laptop. You’ll have so much more freedom to use the machine how you want. You shouldn’t even be considering a MacBook if you’re packing an Android phone. There really isn’t a choice for gamers, either. It’s Windows or bust.



How Onyxia uses security AI to help CISOs improve their security posture



Managing cybersecurity risks is challenging, not necessarily because vulnerabilities are hard to find, but because most organizations rely on manual processes to do so. However, security AI has the potential to automatically measure risks in the environment, and provide recommendations on what to address first. 

Security provider Onyxia, which launched today with $5 million in seed funding, demonstrates this approach by enabling organizations to use artificial intelligence (AI) to monitor their security posture in real time. 

As complexity increases in modern networks, AI-driven solutions will become more important for identifying gaps in an enterprise’s defenses and reducing the chance that threat actors can exploit any vulnerabilities. 

Using security AI to mitigate risk 

The key challenge of mitigating cyber-risk is understanding that the level of risk isn’t static: it changes as technology and users move in and out of the environment. 


In environments that aren’t driven by AI, security teams and CISOs can struggle to keep up with the rate at which the environment changes. At the same time, the pace of work makes it difficult to make accurate judgment calls about which security risks to address first to improve the organization’s overall security posture. 

By using AI, an organization can eliminate this guesswork and start accurately assessing what actions they can take to better secure their environments.
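To make the idea concrete, here is a minimal sketch of the kind of prioritization such a tool automates. This is illustrative only, not Onyxia’s actual model: the findings, the weights, and the scoring formula are all invented for the example.

```python
# Illustrative sketch, not Onyxia's model: rank security findings by a
# weighted risk score so the riskiest gaps get addressed first.

def risk_score(severity, exploitability, criticality):
    """All inputs on a 0-1 scale; higher means riskier."""
    return 0.5 * severity + 0.3 * exploitability + 0.2 * criticality

findings = [
    {"name": "unpatched VPN gateway", "severity": 0.9, "exploitability": 0.8, "criticality": 1.0},
    {"name": "weak password policy",  "severity": 0.6, "exploitability": 0.7, "criticality": 0.5},
    {"name": "stale test server",     "severity": 0.4, "exploitability": 0.9, "criticality": 0.2},
]

# Sort descending by score: the top entry is what to fix first.
ranked = sorted(
    findings,
    key=lambda f: risk_score(f["severity"], f["exploitability"], f["criticality"]),
    reverse=True,
)
for f in ranked:
    print(f["name"])  # "unpatched VPN gateway" comes out on top
```

A real product would learn the weights from the environment and its history rather than hard-coding them, which is exactly the guesswork an AI-driven approach is meant to remove.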

“Helpnet Security reported that out of 3,800 CISOs surveyed, 61% of security teams are understaffed, and 69% say that hiring managers don’t accurately understand their company’s cybersecurity hiring needs, adding training and educational responsibilities that most IT teams cannot spare,” said Sivan Tehila, CEO of Onyxia. 

“Currently, security priorities are shifting as 90% of organizations fail to address cybersecurity risks. Onyxia enables CISOs and security teams to gain a holistic view of their entire cybersecurity environment while highlighting the best solutions and strategies to close security gaps, filling in the gaps that they didn’t know existed,” Tehila said. 

Onyxia is well placed to meet these challenges, given founder Sivan Tehila’s pedigree: she previously served as CISO of the research and analysis division and head of information security for the Israel Defense Forces (IDF). 

The vendor’s solution uses machine learning (ML) and AI to provide CISOs with custom suggestions on how to improve their organization’s security posture. Recommendations are based on industry-specific needs, particular risks, and budget, enabling decision-makers to find the most effective way to improve cyber-resilience. 

AI risk management solutions 

According to Tehila, Onyxia is defining a new solution category for security teams and has no direct competitors. 

“Onyxia is a proactive solution that takes internal and macro-environment factors into account. A proactive solution is necessary for security managers to have real-time insight into their cybersecurity postures and implement proactive measures to ensure business continuity. Currently, most of these processes are being done manually,” Tehila said. 

It’s important to note, though, that Onyxia isn’t the only provider leveraging AI to identify risks in enterprise environments. For instance, Securiti uses AI to automatically map unstructured and structured data records in real time while providing an overview of risk scores for data risks. Securiti most recently raised $50 million as part of a series B funding round.

Similarly, OneTrust also uses AI to discover and classify data, identifying at-risk data and enabling the user to monitor it with analytics displays. To date, OneTrust has raised $920 million in funding.

