Categories
Game

Sony reportedly showed off its next-generation PSVR at a developer’s conference

Sony has already hinted that it’s working on a new PlayStation VR headset, promising “dramatic leaps” in performance, higher resolution, a wider field of view, better tracking and a new controller. On Tuesday, Sony reportedly revealed more specifics about the headset at a developer summit, according to the YouTube channel PSVR Without Parole and UploadVR, as relayed by The Verge.

The device is reportedly codenamed Next-Gen VR (NGVR) and features controllers with capacitive touch sensors that can detect when you’re holding the controller or touching the buttons, and can even sense the distance to your fingers. Sony also reportedly told developers that it’s planning optional VR support for all AAA releases, so you could play them either in VR or on your TV — much like it did with Resident Evil 7 and No Man’s Sky on the PS4 and PS5.

PSVR Without Parole also noted that the next-gen PSVR will offer a 110-degree field of view, 10 degrees wider than the original PSVR’s. To make the most of those pixels, it will use flexible scaling resolution along with foveated rendering, which uses eye-tracking to sharpen the image where you’re looking. UploadVR, meanwhile, said that the headset will feature high-resolution 2,000 x 2,040 OLED displays per eye (roughly 4K combined).
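To illustrate how eye-tracked foveated rendering works in principle (this is a generic sketch, not Sony’s implementation, which hasn’t been detailed), the snippet below assigns full resolution to the region around the reported gaze point and progressively lower resolution farther out; the tier boundaries and scale factors are purely hypothetical.

```python
import math

# Hypothetical foveation tiers: (max angular distance from gaze in degrees, resolution scale).
# The final tier uses infinity so every screen region falls into some tier.
FOVEATION_TIERS = [
    (10.0, 1.0),            # fovea: render at full resolution
    (30.0, 0.5),            # near periphery: half resolution
    (float("inf"), 0.25),   # far periphery: quarter resolution
]

def resolution_scale(region_x, region_y, gaze_x, gaze_y):
    """Return the render-resolution scale for a screen region, given its angular
    position and the eye-tracked gaze direction (all in degrees)."""
    eccentricity = math.hypot(region_x - gaze_x, region_y - gaze_y)
    for max_angle, scale in FOVEATION_TIERS:
        if eccentricity <= max_angle:
            return scale

# Example: with the gaze 5 degrees right of center, a region at the center still
# falls in the full-resolution fovea, while one 40 degrees out drops to a quarter.
print(resolution_scale(0, 0, 5, 0))    # 1.0
print(resolution_scale(40, 0, 5, 0))   # 0.25
```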

We’ve already heard that the new headset will connect to the console with a single cable, with no passthrough box required. It will also use inside-out tracking and offer adaptive triggers and haptic feedback on the controllers.

All told, the PSVR 2 (or whatever it’s called) should have features mostly on par with rival headsets like the Oculus Quest 2 and HTC Vive Pro 2. However, Sony itself said that the headset won’t launch until at least next year, and a Bloomberg report from June indicated it might not come until late in 2022. For now, though, all of that is still grist for the rumor mill until Sony announces something official, possibly later this year. 


Categories
Game

PS5 VR details leak from private developers conference

Facebook’s Oculus arguably commands most of the attention in the virtual reality market these days, but it is hardly the only major player. HTC is still actively working on Vive, and Microsoft’s Windows Mixed Reality also dips into that field. And then there’s PlayStation VR, the only console-based system among the VR giants. With the new PlayStation 5 console, interest in a VR system to match has grown as well. Fortunately, Sony does seem to have big plans for what the PS5 VR will offer, both in hardware and content.

The Next-Gen VR, or NGVR, the alleged codename for the PS5 VR, will reportedly come with a headset that boasts significant upgrades over its predecessor. Considering the original PSVR hasn’t exactly gotten major upgrades since it launched in 2016, that’s not a surprising revelation.

According to the details reported by PSVR Without Parole, the headset will feature a new HDR OLED screen with a combined 4,000 x 2,040 resolution and a 110-degree field of view. Eye-tracking will be used to implement foveated rendering, and a new flexible scaling resolution will supposedly improve performance. The new controllers will also allegedly have capacitive touch sensors for the thumb, index, and middle fingers, probably for finger tracking.

An upgraded VR system, however, also needs upgraded VR experiences, and Sony is looking into bringing AAA titles to its VR ecosystem. That might mean requiring new titles to support a hybrid VR version alongside the regular flat screen game. There is no word yet on backward compatibility, though.

This PS5 VR upgrade could take Sony’s VR system to the next level and help it catch up with its peers. Unfortunately, it seems that fans will have to wait until next year for that to happen.


Categories
Tech News

What to expect from Apple’s WWDC conference this year

Apple is hosting its Worldwide Developer Conference (WWDC) on June 7 in an all-virtual format again.

We’re expecting updates to iOS, macOS, WatchOS, and tvOS, along with privacy-focused features across Apple’s ecosystem. Plus, there might be some hardware announcements too. Let’s dive into it.

iOS 15 and iPadOS 15

Apple introduced grouped notifications in iOS 12, and now it might refresh that system. According to a report from Bloomberg’s Mark Gurman, you’ll be able to set custom notification modes based on context, such as working, sleeping, or driving, and apps will surface notifications differently depending on which mode is active.

The report also noted that there will be changes to the lock screen and Control Center icons. An earlier leak on Reddit suggested that Apple might change the icon design to make them less flat.

Categories
Computing

Everything New in Outlook Announced at Build 2021 Conference

At its annual Build developer conference, Microsoft talked about some upcoming updates for Outlook, its popular email client. Two new features are coming soon: Organization Explorer and message extensions in Outlook for the web.

Organization Explorer is a new embedded app for Outlook that will be coming this summer. Its goal is to help you find co-workers or teams with skills similar to your own so you can collaborate.

Microsoft says this app comes at a time when businesses have become more distributed, making this task a challenge. With the app, you can visually search across your company to explore colleagues and teams and identify skills to help you complete your work.

The new Organization Explorer option is available to Office Insiders in the Beta Channel running version 14101 or later. Not everyone will see it right away, though, as Microsoft plans to gradually roll it out to a larger number of Insiders over time. The feature will come to non-Insiders once beta testing is complete.

Message extensions in Outlook for the web, meanwhile, are more of a developer-facing feature aimed at making your email workflow easier. With support for message extensions in Outlook.com, developers should see a unified experience across both Teams and Outlook on the web.

For you, this means that when you go to compose a message, you’ll see a new menu of search-based extensions to choose from. You might be able to compose an email, then use a message extension that pulls tasks from your Teams apps, and then send that out to your teammates.
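As a rough illustration of how a search-based message extension is declared (this follows the Teams app manifest’s composeExtensions section as it exists today, on the assumption that the Outlook support reuses the same concepts; the IDs, names, and descriptions below are invented), a command definition might look something like this, written as a Python dict for readability:

```python
import json

# Hypothetical fragment of a Teams/Outlook app manifest declaring a
# search-based message extension; all values are placeholders.
compose_extension = {
    "composeExtensions": [
        {
            "botId": "00000000-0000-0000-0000-000000000000",  # placeholder bot/app ID
            "commands": [
                {
                    "id": "searchTasks",
                    "type": "query",  # search-based extension invoked from the compose box
                    "title": "Find a task",
                    "description": "Search your task list and insert a card into the message.",
                    "parameters": [
                        {
                            "name": "searchText",
                            "title": "Search",
                            "description": "Keywords to search for",
                        }
                    ],
                }
            ],
        }
    ]
}

print(json.dumps(compose_extension, indent=2))
```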

A final change that you’re not likely to directly notice in Outlook is a developer-centric one that relates to Teams. Microsoft announced that developers can now build one “Adaptive Card” and use it across Teams and Outlook with one universal action model. This means developers can share user interface data so that their experiences are more consistent across both Teams and Outlook.


This is a change from the past, when developers had to build two separate Adaptive Card integrations for Outlook and Teams. Basically, Outlook and Teams apps should now be more consistent and in line with each other.
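For a sense of what that looks like in practice, below is a minimal Adaptive Card payload built around the universal Action.Execute action that the shared model uses, written as a Python dict purely for readability. The card text, verb, and data fields are invented examples rather than Microsoft sample code.

```python
import json

# A single card definition intended to render in both Teams and Outlook.
card = {
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "type": "AdaptiveCard",
    "version": "1.4",
    "body": [
        {"type": "TextBlock", "text": "Expense report #1234", "weight": "Bolder"},
        {"type": "TextBlock", "text": "Amount: $250.00", "wrap": True},
    ],
    "actions": [
        {
            # Universal action model: one Action.Execute handler serves both hosts.
            "type": "Action.Execute",
            "title": "Approve",
            "verb": "approveExpense",        # hypothetical verb name
            "data": {"reportId": "1234"},    # hypothetical payload
        }
    ],
}

print(json.dumps(card, indent=2))
```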

Build 2021 is still underway, and it’s expected to come with additional announcements surrounding Teams, Windows 10, and the rest of Microsoft 365. Check out our dedicated Build page for all the latest from the virtual developer event.


Categories
AI

Under the AI hood: A view from RSA Conference



Artificial intelligence and machine learning are often touted in IT as crucial tools for automated detection, response, and remediation. Enrich your defenses with finely honed prior knowledge, proponents insist, and let the machines drive basic security decisions at scale.

This year’s RSA Conference had an entire track dedicated to security-focused AI, while the virtual show “floor” featured no fewer than 45 vendors hawking some form of AI or machine learning capabilities.

While the profile of AI in security has evolved over the past five years from a dismissible buzzword to a legitimate consideration, many question its efficacy and appropriateness — and even its core definition. This year’s conference may not have settled the debate, but it did highlight the fact that AI, ML, and other deep-learning technologies are making their way deeper into the fabric of mainstream security solutions. RSAC also showcased a formal methodology for assessing the veracity and usefulness of AI claims in security products, a capability beleaguered defenders desperately need.

“The mere fact that a company is using AI or machine learning in their product is not a good indicator of the product actually doing something smart,” said Raffael Marty, an expert in the use of AI, data science, and visualization in security. “On the contrary, most companies I have looked at that claim to use AI for some core capabilities are doing it wrong in some way.”

“There are some that stick to the right principles, hire actual data scientists, apply algorithms correctly, and interpret the data correctly,” Marty told VentureBeat. Marty is also an IANS faculty member and author of Applied Security Visualization and The Security Data Lake. “Unfortunately, these companies are still not found very widely.”

In his opening-day keynote, Cisco chair and CEO Chuck Robbins pitched the need for emerging technologies — like AI — to power security approaches capable of quick, scalable threat identification, correlation, and response in blended IT environments. Today these include a growing number of remote users, along with hybrid cloud, fog, and edge computing assets.

“We need to build security practices around what we know is coming in the future,” Robbins said. “That’s foundational to being able to deal with the complexity. It has to be based on real-time insights, and it has to be intelligent, leveraging great technology like AI and machine learning that will allow us to secure and remediate at a scale that we’ve never been able to yet always hoped we could do.”

Use cases: Security AI gets real

RSAC offered examples of practical AI and machine learning information security applications, like those championed by Robbins and other vendor execs.

Jess Garcia, founder of One eSecurity, walked attendees through real-world threat hunting and forensics scenarios powered by machine learning and deep learning. In one case, Garcia and his team normalized 30 days of real data from a Fortune 50 enterprise — some 224,000 events and 24 million files from more than 100 servers — and ran it through a machine learning engine, setting a baseline for normal behavior. The machine learning models built from that data were then injected with malicious event-scheduling log data mimicking the recent SolarWinds attack to see if the machine-taught system could detect the attack with no prior knowledge or known indicators of compromise.

Garcia’s highly technical presentation was notable for its concession that artificial intelligence produced rather disappointing results on the first two passes. But when augmented with human-derived filtering and supporting information about the time of the scheduling events, the malicious activity rose to a detectable level in the model. The lesson, Garcia said, is to understand the emerging technology’s power, as well as its current limitations.

“AI is not a magic button and won’t be anytime soon,” Garcia said. “But it is a powerful weapon in DFIR (digital forensics and incident response). It is real and here to stay.”
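As a rough sketch of the kind of workflow Garcia described (not his actual pipeline or data), the snippet below trains an unsupervised anomaly detector on numeric features extracted from benign scheduled-task events and then scores a batch that includes injected malicious entries. The feature choices and distributions are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in features per scheduled-task event: hour of day, task-name length,
# and number of times that task has been seen before (all hypothetical).
normal_events = np.column_stack([
    rng.normal(13, 2, size=5000),      # business-hours scheduling
    rng.normal(18, 4, size=5000),      # typical task-name lengths
    rng.integers(50, 500, size=5000),  # frequently repeated tasks
])

# Fit a baseline of "normal behavior" on 30 days of benign events.
baseline = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

# Injected events mimicking suspicious scheduling: odd hours, short random
# task names, never seen before.
injected = np.array([
    [3.0, 8.0, 1.0],
    [2.5, 7.0, 1.0],
])

scores = baseline.decision_function(injected)  # lower = more anomalous
flags = baseline.predict(injected)             # -1 = anomaly, 1 = normal
print(scores, flags)
```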

For Marty, other promising use cases in AI-powered information security include the use of graph analytics to map out data movement and lineage to expose exfiltration and malicious modifications. “This topic is not well-researched yet, and I am not aware of any company or product that works well yet. It’s a hard problem on many layers, from data collection to deduplication and interpretation,” he said.
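A heavily simplified sketch of the graph-analytics idea Marty describes might look like the following: model observed data movements as a directed graph and flag any path from a sensitive data store that ends at a host outside the organization. The node names and the "external host" heuristic are invented; a real system would need far richer collection and deduplication, as Marty notes.

```python
import networkx as nx

# Each edge is an observed data movement: (source, destination, bytes moved).
movements = [
    ("hr-db", "analytics-srv", 2_000_000),
    ("analytics-srv", "laptop-042", 500_000),
    ("laptop-042", "203.0.113.77", 450_000),   # transfer to an outside host
    ("crm-db", "report-srv", 1_200_000),
]

g = nx.DiGraph()
for src, dst, size in movements:
    g.add_edge(src, dst, bytes=size)

def is_external(node):
    # Hypothetical heuristic: bare IP addresses are treated as outside the org.
    return node.replace(".", "").isdigit()

# Flag any sensitive data store whose data can reach an external node.
sensitive_sources = {"hr-db", "crm-db"}
for source in sensitive_sources:
    for node in nx.descendants(g, source):
        if is_external(node):
            path = nx.shortest_path(g, source, node)
            print(f"possible exfiltration path: {' -> '.join(path)}")
```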

Sophos lead data scientist Younghoo Lee demonstrated for RSAC attendees the use of the natural-language Generative Pre-trained Transformer (GPT) to generate a filter that detects machine-generated spam, a clever use case that turns AI into a weapon against itself. Models such as GPT can generate coherent, humanlike text from a small training set (in Lee’s case, fewer than 5,000 messages) and with minimal retraining.

The performance of any machine-driven spam filter improves as the volume of the training data increases. But manually adding to an ML training dataset can be a slow and expensive proposition. For Sophos, the solution was to use two different methods of controlled natural-language text generation, which pushed the GPT model toward progressively better output and allowed the team to grow the original dataset more than fivefold. The tool was essentially teaching itself what spam looked like by creating its own.

Armed with machine-generated messages that replicate both ham (good) and spam (bad) messages, the ML-powered filter proved particularly effective at detecting bogus messages that were, in all probability, created by a machine, Lee said.

“GPT can be trained to detect spam, [but] it can be also retrained to generate novel spam and augment labeled datasets,” Lee said. “GPT’s spam detection performance is improved by the constant battle of text generating and detecting.”
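To make the augment-then-detect loop concrete, here is a heavily simplified sketch (not Sophos’ actual system): a pretrained GPT-2 model is prompted with a few labeled spam messages to generate additional synthetic spam, which is then added to the training set of an ordinary text classifier. The model choice, prompt, and sampling settings are all assumptions.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny seed dataset (placeholder messages).
ham = ["Lunch at noon?", "Attached is the Q3 report."]
spam = ["You won a FREE prize, click now!", "Cheap meds, limited offer!!!"]

# 1) Generate synthetic spam with a pretrained language model.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
prompt = "\n".join(spam) + "\n"
inputs = tok(prompt, return_tensors="pt")
outputs = lm.generate(
    **inputs,
    do_sample=True,
    top_p=0.92,
    max_new_tokens=40,
    num_return_sequences=5,
    pad_token_id=tok.eos_token_id,
)
synthetic_spam = [
    tok.decode(o, skip_special_tokens=True)[len(prompt):].strip() for o in outputs
]

# 2) Train a spam classifier on the augmented dataset.
texts = ham + spam + synthetic_spam
labels = [0] * len(ham) + [1] * (len(spam) + len(synthetic_spam))
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["Claim your FREE reward today", "See you at the meeting"]))
```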

A healthy dose of AI skepticism

Such use cases aren’t enough to recruit everyone in security to AI, however.

In one of RSAC’s most popular panels, famed cryptographers Ron Rivest and Adi Shamir (the R and S in RSA) said machine learning is not ready for prime time in information security.

“Machine learning at the moment is totally untrustworthy,” said Shamir, a professor at the Weizmann Institute in Rehovot, Israel. “We don’t have a good understanding of where the samples come from or what they represent. Some progress is being made, but until we solve the robustness issue, I would be very worried about deploying any kind of big machine-learning system that no one understands and no one knows in which way it might fail.”

“Complexity is the enemy of security,” said Rivest, a professor at MIT in Cambridge, Massachusetts. “The more complicated you make something, the more vulnerable it becomes. And machine learning is nothing but complicated. It violates one of the basic tenets of security.”

Even as an AI evangelist, Marty understands such hesitancy. “I see more cybersecurity companies leveraging machine learning and AI in some way, [but] the question is to what degree?” he said. “It’s gotten too easy for any software engineer to play data scientist. The challenge lies in the fact that the engineer has no idea what just happened within the algorithm.”

Developing an AI litmus test

For enterprise defenders, the academic back and forth on AI adds a layer of confusion to already difficult decisions on security investments. In an effort to counter that uncertainty, the nonprofit research and development organization Mitre Corp. is developing an assessment tool to help buyers evaluate AI and machine learning claims in infosec products.

Mitre’s AI Relevance Competence Cost Score (ARCCS) aims to give defenders an organized way to question vendors about their AI claims, in much the same way they would assess other basic security functionality.

“We want to be able to jump into the dialog with cybersecurity vendors and understand the security and also what’s going on with the AI component as well,” said Anne Townsend, department manager and head of NIST cyber partnerships at Mitre. “Is something really AI-enabled, or is it really just hype?”

ARCCS will provide an evaluation methodology for AI in information security, measuring the relevance, competence, and relative cost of an AI-enabled product. The process will determine how necessary an AI component is to the performance of a product; whether the product is using the right kind of AI and doing it in a responsible way; and whether the added cost of the AI capability is justified for the benefits derived.
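ARCCS itself hasn’t been published in detail, so the following is only a hypothetical illustration of how a buyer might score those three dimensions side by side; the questions, scale, and threshold are invented and carry no official weighting.

```python
from dataclasses import dataclass

@dataclass
class AIClaimAssessment:
    """Illustrative (not official) scoring of a vendor's AI claims, 0-5 each."""
    relevance: int   # Is AI actually necessary for the product to do its job?
    competence: int  # Is the right kind of AI applied correctly and responsibly?
    cost: int        # Is the added cost of the AI capability justified by the benefit?

    def summary(self) -> str:
        total = self.relevance + self.competence + self.cost
        verdict = "worth a closer look" if total >= 10 else "press the vendor for evidence"
        return f"relevance={self.relevance}, competence={self.competence}, cost={self.cost}: {verdict}"

# Example evaluation of a hypothetical "AI-powered" detection product.
print(AIClaimAssessment(relevance=4, competence=2, cost=3).summary())
```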

“You need to be able to ask vendors the right questions and ask them consistently,” Michael Hadjimichael, principal computer scientist at Mitre, said of the AI framework effort. “Not all AI-enabled claims are the same. By using something like our ARCCS tool, you can start to understand if you got what you paid for and if you’re getting what you need.”

Mitre’s ongoing ARCCS research is still in its early stages, and it’s difficult to say how most products claiming AI enhancements would fare with the assessment. “The tool does not pass or fail products — it evaluates,” Townsend told VentureBeat. “Right now, what we are noticing is there isn’t as much information out there on products as we’d like.”

Officials from vendors such as Hunters, which features advanced machine learning capabilities in its new XDR threat detection and response platform, say reality-check frameworks like ARCCS are sorely needed and stand to benefit both security sellers and buyers.

“In a world where AI and machine learning are liberally used by security vendors to describe their technology, creating an assessment framework for buyers to evaluate the technology and its value is essential,” Hunters CEO and cofounder Uri May told VentureBeat. “Customers should demand that vendors provide clear, easy-to-understand explanations of the results obtained by the algorithm.”

May also urged buyers to understand AI’s limitations and be realistic in assessing appropriate uses of the technology in a security setting. “AI and ML are ready to be used as assistive technologies for automating some security operations tasks and for providing context and information to facilitate decision-making by humans,” May said. “But claims that offer end-to-end automation or massive reduction in human resources are probably exaggerated.”

While a framework like ARCCS represents a significant step for decision-makers, having such an evaluation tool doesn’t mean enterprise adopters should now be expected to understand all the nuances and complexities of a complicated science like AI, Marty stressed.

“The buyer really shouldn’t have to know anything about how the products work. The products should just do what they claim they do and do it well,” Marty said.

Crossing the AI chasm

Every year, RSAC shines a temporary spotlight on emerging trends, like AI in information security. But when the show wraps, security professionals, data scientists, and other advocates are tasked with shepherding the technology to the next level.

Moving forward requires solutions to three key challenges:

Amassing and processing sufficient training data

Every AI use case begins with ingesting, cleaning, normalizing, and processing data to train the models. The more training data available, the smarter the models get and the more effective their actions become. “Any hypothesis we have, we have to test and validate. Without data, that’s hard to do,” Marty said. “We need complex datasets that show user interactions across applications, data, and cloud apps, along with contextual information about the users.”

Of course, data access and the work of harmonizing it can be difficult and expensive. “This kind of data is hard to get, especially with privacy and regulations like GDPR putting more processes around AI research efforts,” Marty said.

Recruiting skilled experts

Leveraging AI in security demands expertise in two complex domains — data science and cybersecurity. Finding, recruiting, and retaining talent in either specialty is difficult enough; the combination borders on unicorn territory. The AI skills shortage exists at all experience levels, from newcomers to seasoned practitioners. Rather than hunting for one or two world-class AI superstars, organizations that hope to take advantage of the technology over the long haul should focus on diversifying their sources of AI talent and building a deep bench of trainable, tech- and security-savvy team members who understand operating systems and applications and can work with data scientists.

Making adequate research investments

Ultimately, the fate of AI security hinges on a consistent financial commitment to advancing the science. All major security firms do malware research, “but how many have actual data science teams researching novel approaches?” Marty asked. “Companies typically don’t invest in research that’s not directly related to their products. And if they do, they want to see fairly quick turnarounds.” Smaller companies can sometimes pick up the slack, but their ad hoc approaches often fall short in scalability and broad applicability. “This goes back to the data problem,” Marty said. “You need data from a variety of different environments.”

Making progress on these three important issues rests with both the vendor community, where decisions that determine the roadmap of AI in security are being made, and enterprise user organizations. Even the best AI engines nested in prebuilt solutions won’t be very effective in the hands of security teams that lack the capacity, capability, and resources to use them.


Categories
AI

AI ethics research conference suspends Google sponsorship



The ACM Conference for Fairness, Accountability, and Transparency (FAccT) has decided to suspend its sponsorship relationship with Google, conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand confirmed today. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru. Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak.

“FAccT is guided by a Strategic Plan, and the conference by-laws charge the Sponsorship Chairs, in collaboration with the Executive Committee, with developing a sponsorship portfolio that aligns with that plan,” Ekstrand told VentureBeat in an email. “The Executive Committee made the decision that having Google as a sponsor for the 2021 conference would not be in the best interests of the community and impede the Strategic Plan. We will be revising the sponsorship policy for next year’s conference.”

The decision followed days of questions about whether FAccT would continue its relationship with Google following the company’s treatment of Ethical AI team leaders. The news first emerged Friday, when FAccT program committee member Suresh Venkatasubramanian tweeted that FAccT would pause its relationship with Google.

Putting Google sponsorship on hold doesn’t mean the end of sponsorship from Big Tech companies, or even from Google itself. DeepMind, another FAccT sponsor, which was caught up in an AI ethics controversy in January, is also a Google company. Since its founding in 2018, FAccT has sought funding from Big Tech sponsors like Google and Microsoft, along with the Ford Foundation and the MacArthur Foundation. An analysis released last year comparing Big Tech funding of AI ethics research to Big Tobacco’s funding of research found that nearly 60% of researchers at four prominent universities have taken money from major tech companies.

Last December, Googlers protested what they called “unprecedented research censorship.” Last week, Reuters reported on a separate instance of alleged interference in AI research involving large language models that a coauthor described as “deeply insidious” edits by the Google legal team.

According to the FAccT website, Gebru, who was a cofounder of the organization, continues to work as part of a group advising on data and algorithm evaluation and as a program committee chair. Mitchell is a program co-chair of the conference and a FAccT program committee member. Gebru was fired from her role at Google in December 2020, following disputes around factors like the lack of diversity in tech companies and the review of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In addition to recognizing that pretrained language models may disproportionately harm marginalized communities, the paper questions whether performance on benchmark tests qualifies as genuine progress and highlights the potential for misuse and automation bias.

“If a large language model, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads.

Gebru is one of two primary authors of the paper, which was accepted this week for publication at FAccT. Her lead co-author is University of Washington linguist Emily Bender, whose writing about potential shortcomings of large language models and the need for deeper criticism received an award last summer from the Association for Computational Linguistics.

A copy of the paper VentureBeat obtained last year from a source familiar with the matter lists Mitchell as a co-author, as well as Google researchers Mark Diaz and Ben Hutchinson, a trio with backgrounds in language analysis and models. Mitchell may be known today for her work in ethics, but she is most highly cited as a computer vision and NLP researcher and is the author of a 2008 master’s thesis on text generation at the University of Washington. Ben Hutchinson worked with co-authors from the Ethical AI team at Google on a paper that found bias in NLP models that disfavors people with disabilities in sentiment analysis and toxicity prediction. Mark Diaz has examined age-related bias found in text.

Bender and Gebru are listed as primary coauthors in various versions of the paper. A version of the paper made available ahead of the conference by the University of Washington also lists “Shmargaret Shmitchell” as an author.

Fallout from the firing of Gebru, a prominent researcher into algorithmic oppression and one of the only Black women to work as an AI researcher at Google, led to public opposition from thousands of Googlers and accusations of racism and retaliation. The incident also sparked questions from members of Congress with a documented interest in regulating algorithms. And it led researchers to question the ethics of receiving ethics research funding from Google. Experts in AI, ethics, and law told VentureBeat a range of policy changes could come about as a result of Gebru’s dismissal, including support for stronger whistleblower laws. Shortly after being fired, Gebru spoke about the idea of unionization as a means of protection for AI researchers, and Mitchell was a member of the Alphabet Workers Union formed in January 2021.

OpenAI and Stanford University researchers working with other experts warned last month that the creators of large language models, like Google and OpenAI, have only a matter of months to set standards for their ethical use before replications begin to circulate.

Other papers published at FAccT this year include analysis of common obstacles to data sharing practices in African nations, a review of an algorithm impact assessment made by Data & Society’s AI on the Ground team, and research that examines how government repression and censorship impact text data regularly used for training NLP models.

In other recent AI research conference activity, organizers of NeurIPS, the most popular annual machine learning conference, told VentureBeat the organization plans to revise its sponsorship policy following questions surrounding Huawei, a NeurIPS sponsor, reportedly building a Uighur Muslim detection system for Chinese authorities.
