Categories
Game

Activision Blizzard shareholders approve plan for public report on sexual harassment

Activision Blizzard shareholders on Tuesday approved a plan for the company to release an annual, public report detailing its handling of sexual harassment and gender discrimination disputes, and how it’s working to prevent such incidents. The proposal was initially made in February by New York State Comptroller Thomas P. DiNapoli.

Under the proposal, Activision Blizzard will have to publicly disclose the following information each year:

  • The number and total dollar amount of disputes settled by the studio relating to sexual harassment and abuse, and discrimination based on race, religion, sex, national origin, age, disability, genetic information, service member status, gender identity, or sexual orientation — covering the last three years

  • What steps Activision Blizzard is taking to reduce the average length of time it takes to resolve these incidents internally and legally

  • The number of pending complaints facing the studio relating to sexual abuse, harassment and discrimination, internally and in litigation

  • Data on pay and hours worked, as required by the California Department of Fair Employment and Housing

The DFEH sued Activision Blizzard in July 2021, alleging executives there fostered a culture of rampant sexual harassment and systemic gender discrimination. The US Equal Employment Opportunity Commission also sued the studio over these allegations in 2021, and Activision Blizzard settled with the federal agency in March, agreeing to set up an $18 million fund for claimants. Activists, employees and the DFEH have argued that this settlement is too low, and former employee Jessica Gonzalez appealed the ruling in May. The DFEH estimates there are 2,500 injured employees deserving more than $930 million in compensation.

“For years, there have been alarming news reports that detail allegedly rampant sexual abuse, discrimination, harassment, and retaliation directed toward female employees,” a statement in support of the proposal to shareholders reads. As an investor-focused document, it outlines the ways in which systemic discrimination and sexual abuse can damage the studio’s revenue streams and its ability to retain employees, saying, “A report such as the one requested would assist shareholders in assessing whether the company is improving its workforce management, whether its actions align with the company’s public statements and whether it remains a sustainable investment.”

While Activision Blizzard is facing multiple lawsuits and investigations regarding sexism, harassment and discrimination, some employees at the studio are attempting to unionize with the help of the Communications Workers of America. This would be the first union at a major video game studio and could signal a shift in the industry’s longstanding crunch-centric cycle. At Tuesday’s annual meeting, Activision Blizzard shareholders rejected a proposal that would’ve added an employee representative to the board of directors, with just 5 percent voting in favor, according to The Washington Post.

At the same time, Microsoft is in the process of acquiring Activision Blizzard in a deal worth nearly $69 billion. Microsoft has pledged to respect the rights of workers to unionize. And all the while, Activision Blizzard is still making games.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Repost: Original Source and Author Link

Categories
Game

Xbox gamers will soon be able to share gameplay clips via public links

Xbox video game clips are about to become much more easily shareable via unique public URLs on the Xbox mobile app, according to a tweet from Microsoft’s Larry Hryb (@majornelson). Those links will also be collected into a new “trending content” area so you can see what others are doing. The features are now being tested in the app and will “soon” roll out to all users, Hryb said.

“With Link sharing, just go to the capture you want to share in the Xbox mobile app to get a link, then paste it anywhere to share with your friends, who don’t need to be signed-in to view your capture. We are now testing this long requested feature,” Hryb said in the thread.

According to images shared by Hryb, the trending content section is a new social media-style feed on iOS and Android that kind of resembles (wait for it) TikTok’s feed — somewhat of a trend lately. Clicking on it opens up highlights that you can scroll through and then like, comment and share.

Microsoft already offers the ability to share game clips and screenshots via your profile’s activity feed, clubs, messages and social media, on consoles as well as the mobile app. Sony also recently unveiled a similar feature on the PlayStation 5. However, letting you generate public links should make sharing more seamless, and the social media aspect makes it more accessible. The company has been testing the feature with developers, and everyone should see it in the near future.


Categories
Game

Netflix Games on day 1: A shocking public launch

Netflix Games launched today for the general public on Android devices. At launch, Netflix Games is made for mobile smart devices – specifically Android smartphones and tablets – with precious few titles out of the gate. This new game-serving platform is not separate from the standard Netflix app – it exists within the app you already have downloaded to your device.

Netflix Games released

Netflix made a bold decision with Netflix Games, deciding to go right ahead and jam a brand new form of content right inside their video streaming library. They’ve launched this service for free, for now, allowing users to download their first set of mobile games so long as they’re already a Netflix subscriber.

The games launched with Netflix Games come from three developers: BonusXP, Frosty Pop, and a combined effort from Amuzo & Rogue Games. That last pairing created the game Card Blast. Frosty Pop developed Teeter Up and Shooting Hoops, and BonusXP created the headliner games: Stranger Things: 1984 and Stranger Things 3: The Game.

Inside the regular Netflix app

Netflix released a sort of play-test iteration of Netflix Games earlier this year in Poland, Italy, and Spain. If you live or lived in one of those regions and had a Netflix subscription and the Android app downloaded back at the tail end of September, you might already have some experience playing the games listed above.

If you have the latest version of the Netflix Android app this afternoon, you should see the first selection of games available for download. Android mobile phone users will see a dedicated games row and a games tab. Android tablet users will see a games option in the categories drop-down menu, as well as a dedicated games row in the general content library.

Not for the kids, yet

Kids profiles will not be able to access this first pack of games. This set of games is currently protected with the same PIN access system as adult accounts in the Netflix app. A single Netflix account can allow multiple devices to download this first pack of games, but there is an upper limit – just as there is a limit to the number of devices one account can play content with at one time.

Mobile and Android only?

At the moment it would appear that Netflix is only focusing on mobile games that work on Android devices. Given Apple’s rules on downloadable content and app stores within its App Store, it’s unlikely we’ll see Netflix Games on iOS or iPadOS any time soon, if ever.


Categories
Computing

MacOS Monterey Public Beta Hands-On: Apple’s Ecosystem Grows

Last year’s update to Apple’s Mac operating system, MacOS Big Sur, was the largest and most significant refresh in years. This year’s iteration, dubbed MacOS Monterey by Apple’s “crack marketing team,” is more of a point-five update compared to 2020’s behemoth. That’s not to say it is a dull, pedestrian affair, but it is more refinement than revolution despite Apple opting for the MacOS 12 nomenclature rather than MacOS 11.1.

So, what can you expect when you get your hands on it in the fall (or right now if you signed up for the public beta)? Well, expect a lot of bugs for one thing. Apple released the public beta just a few days after only the second developer beta came out. That’s a quick turnaround, and it shows, with some features looking a little creaky right now.

But beyond that, is MacOS Monterey actually any good? And how do the new features work in practice? We took the new public beta for a spin to see what it had to offer.

A familiar design

Safari’s tab bar gets a redesign, and it now takes on the colors of the active tab.

MacOS Big Sur was a complete overhaul of the Mac operating system’s visual style, with new-look buttons, sidebars, menus, and much more. It was a huge improvement and helped bring MacOS kicking and screaming into the modern design era.

Don’t expect that level of makeover in MacOS Monterey — this year’s iteration is far more restrained in what it changes. There are some tweaks here and there, though. Notifications in particular have been spruced up, with user profile shots and larger app icons now showing next to the alert text.

One of the largest visual revamps comes to Safari. Here, almost everything has been streamlined to fit Apple’s minimalist aesthetic, resulting in a stripped-down top bar that is a little confusing to navigate at first, although you do get used to it.

For starters, the URL bar and tab bar are now merged instead of being two separate rows sitting one above the other. If you have multiple tabs open, the active tab is now by far the longest. Click inside it, and you can start typing — this is where the search bar now hides. Websites in the active tab lend their colors to Safari’s entire top bar. It’s a nice touch of visual flair, and it seems pretty good at picking out an appropriate color.

If you’re like me and have an ever-expanding smorgasbord of tabs open at once, Safari’s Tab Groups come as something of a relief. It’s a feature already included in Google Chrome, but Apple’s take is slightly different. You can still group tabs together, though, and name each group to keep them organized. Opening a group shows only the tabs it contains, no others. Managing these groups is fiddly and confusing at the moment, and it’s very easy to accidentally delete a tab group or struggle to find the command you need. But it’s a start.

Continuity gets serious

Adding Quick Note to a Hot Corner means it’s always available with a swift swipe.

One of the main themes of MacOS Monterey is an emphasis on cross-platform integration. A standout feature from the company’s Worldwide Developers Conference (WWDC) reveal of Monterey was Universal Control, a system that lets you seamlessly move between an iPad and a Mac (or two), controlling each device with the same mouse and keyboard. It looked like a piece of Apple magic in the WWDC demo, but how does it work in reality?

Well, unfortunately we cannot yet answer that question. At the time of writing, Universal Control had been absent from both the developer and public betas of MacOS Monterey. We will update this article as soon as Universal Control becomes available and we have a chance to test it, but for now we are going to have to hang tight on this one.

Another feature making its debut on the Mac in Monterey is AirPlay to Mac, but unlike Universal Control, this is actually available to test. Using your Mac as an AirPlay destination has been a long, long, long time coming, but now that it’s finally here, we can say the wait has been worth it.

AirPlay thrives on larger screens. Apple users have been able to send video to an Apple TV for years, using an iPhone or iPad as a remote, but the Mac has been strangely exempt. Now that it is enabled, you can enjoy content from your phone — like videos captured with an iPhone — on the big screen. It works just as AirPlay normally does: Open a video, tap Share, then the AirPlay button, then select your Mac as the output. It’s a small change to MacOS, but a significant one.

Notes also gets a dose of cross-system goodness, although this is focused more on collaboration than on making it work across your devices. You can now mention colleagues and see their edits in shared notes, and notes can be categorized with tags to aid organization. Small steps, but they add up.

As well as that, the Quick Note feature that Apple showed off on iPadOS 15 also comes to MacOS Monterey. You can select any image or text on a web page, for instance, right-click it, and add it to a Quick Note. Next time you’re on the web page, a tiny thumbnail of the Quick Note appears in the bottom-right of your screen, letting you see whatever you noted down before.

My favorite thing about Quick Note is its integration with Hot Corners. These are shortcuts that can be triggered by moving your mouse pointer to one corner of your screen. I set the bottom-right Hot Corner to launch Quick Note, and now creating a new note is just a short swipe away. As great as that is, though, like many of the new features in Monterey, it’s useful without being earth-shaking.

Share and share alike

Shared news articles show the name of the person who sent it to you at the top.

Shared content and experiences figured prominently in Apple’s WWDC show, and there are plenty of new things here in MacOS Monterey. Unfortunately, not everything worked as planned at the time of writing and will presumably be fixed or updated in upcoming beta releases.

Here’s an example. Apple has always promoted the interconnectedness of its ecosystem, and it’s trying to do so again in MacOS Monterey with things like Shared With You. This highlights items that have been sent to you in Messages and then surfaces them in relevant apps. For instance, news stories sent to you via Messages will appear in the News app.

At least, that’s the theory. When we tried it, many apps did not have functioning Shared With You sections — not that we could find, anyway. The News app has a dedicated Shared With You area in the sidebar, but in apps like Photos and Podcasts it is nowhere to be found.

The organization of Shared With You articles in News is clunky, but it works.

When it does work, Shared With You is a handy way of collating everything that has come your way, similar to how Messages gathers together all the photos, links, and files from your contacts. Shared With You is a little more basic because each app it works in only collects files that it can play or open rather than everything. But as with Quick Note, it is a welcome, if minor, addition to MacOS.

The other major sharing update in Monterey is SharePlay. The idea behind this is that you can share your screen (or the content you are watching on your screen) with other people during a FaceTime call. In a pandemic world where being together is difficult, it is not hard to see Apple’s motivation. This is one of the few features that has been newly added to the public beta, so we’ll update this post once we’ve had more time with the latest version.

It’s not the only new tool in FaceTime. You can now add a Portrait Mode filter to blur the background, although the quality depends on your camera, and it’s a little hit and miss around the edges of your outline.

As with so much else in this beta, though, lots of things aren’t quite ready. You can send invite links for FaceTime calls (finally!), but joining a call displays everyone in a square, and you can’t change your own camera to be landscape or portrait. In a call made without a link, it’s the opposite, and the grid view doesn’t work. Microphone modes like Voice Isolation and Wide Spectrum aren’t available at all.

Mapping the future

Apple Maps now has better traffic information, including more detailed hazard warnings.

Shortcuts is one of the most powerful native apps on iOS, as it lets you create automated sets of actions that perform complex tasks with a simple trigger. Now, it’s on the Mac, and it’s actually better suited to this platform than the iPhone.

That’s because, like on the iPad, the Mac version has a right-hand sidebar that lets you drag and drop actions into place, creating a visual flowchart that is simple to follow. For many people, the Mac is also where they are likely to perform the most complex tasks, making Shortcuts on MacOS a powerful addition to their arsenal, if one that is also long overdue.

Elsewhere, Apple Maps gets a more detailed look and a slate of new features. Major cities are more detailed, with rich 3D models of major attractions and buildings (except … lots are invisible, at least when we tried them out), there’s a new globe view of the whole Earth, and public transit directions are more helpful and informative. Driving maps give more info on hazards and traffic conditions, too. Maps now also lets you set a time to leave or arrive on a car journey, not just on public transport. That feature’s arrival is years behind Google Maps, sure, but it’s here on the Mac at last.

Want to see Apple Park in 3D? Well you can’t, because at the moment it’s invisible in Apple Maps.

And if everything gets a bit too overwhelming, there’s a new tool called Focus that aims to cut out the barrage of notifications and distractions by only allowing certain people or apps to buzz you. What is interesting is that it allows you to set different Focus modes, each with different rules for different scenarios. For instance, Focus loads up with Do Not Disturb, Driving, and Sleep modes. The latter integrates with the sleep schedule you set in the iOS Health app, for example.

Adding your own is simple. The Automation section is where you set how the Focus mode is activated: At a set time or when you arrive at a specific location. So, you could set a location-based automation that activates when you arrive at the gym, letting you concentrate on pumping iron without distraction. There’s also an app-based automation, but it’s not clear if this refers to apps that will trigger Focus mode or apps that are allowed through. We’d bet on the former seeing as blocked apps are already covered in Apple’s Screen Time tool, but Apple has not made it very clear.

As with so much in the MacOS Monterey beta, there is a lot of potential in something like Focus, but it’s not quite ready for prime time. We will keep testing everything in the MacOS Monterey public beta and will add to this article as Apple updates the beta going forward.


Categories
AI

Ensono: Security is a key factor when choosing a public cloud provider


There are many factors to consider when selecting a public cloud provider, but 56% of respondents in a recent survey said security concerns had the most significant influence during the selection process, according to IT services management company Ensono.

Many factors go into selecting a public cloud provider

Above: Ensono Cloud Clarity Report uncovered several areas that significantly influenced buying decisions.


Last year’s events caused an immediate shift in business workflows, forcing most companies to speed up their digital transformation. In December 2020, Ensono surveyed 500 full-time professionals with cloud procurement decision-making power in a variety of industries across the United States and United Kingdom to understand their cloud perspectives. The Cloud Clarity Report found that multi-cloud usage is emerging as the dominant cloud strategy and that Microsoft Azure is the most-used public cloud vendor among our respondents. Most surprisingly, private cloud remains a permanent and long-term strategic component for a significant number of businesses.

The survey also found that users of multi-cloud environments, many of whom are still maintaining on-premises workloads, are saddled with aging technology. One in ten respondents (9%) use unspecified legacy technology in their IT infrastructure, and one-third (33%) still have a mainframe environment present in their IT stack. The majority of respondents use some combination of public and private cloud for business, and only 9% operate solely in the public cloud. Additionally, a relatively small percentage operate only in the private cloud.

Security and compliance are the top drivers of cloud decision-making, and 56% said security considerations had a “significant impact” on the final decision when evaluating a public cloud provider. When ranking factors in terms of importance in the decision-making process, security and technical requirements outrank price and support. The fact that respondents are willing to pay a premium for technical features showcases how important these features are.

It’s clear from the data that today’s decision-makers are going all-in on the cloud but have different factors driving their strategies. The technical and security components of private and hybrid cloud environments cannot be discounted, even as some respondents are choosing a single public cloud provider.

Read Ensono’s full Cloud Clarity Report.


Categories
AI

Facebook’s next big AI project is training its machines on users’ public videos

Teaching AI systems to understand what’s happening in videos as completely as a human can is one of the hardest challenges — and biggest potential breakthroughs — in the world of machine learning. Today, Facebook announced a new initiative that it hopes will give it an edge in this consequential work: training its AI on Facebook users’ public videos.

Access to training data is one of the biggest competitive advantages in AI, and by collecting this resource from millions and millions of their users, tech giants like Facebook, Google, and Amazon have been able to forge ahead in various areas. And while Facebook has already trained machine vision models on billions of images collected from Instagram, it hasn’t previously announced projects of similar ambition for video understanding.

“By learning from global streams of publicly available videos spanning nearly every country and hundreds of languages, our AI systems will not just improve accuracy but also adapt to our fast moving world and recognize the nuances and visual cues across different cultures and regions,” said the company in a blog. The project, titled Learning from Videos, is also part of Facebook’s “broader efforts toward building machines that learn like humans do.”

The resulting machine learning models will be used to create new content recommendation systems and moderation tools, says Facebook, but could do so much more in the future. AI that can understand the content of videos could give Facebook unprecedented insight into users’ lives, allowing them to analyze their hobbies and interests, preferences in brands and clothes, and countless other personal details. Of course, Facebook already has access to such information through its current ad-targeting operation, but being able to parse video through AI would add an incredibly rich (and invasive) source of data to its stores.

Facebook is vague about its future plans for AI models trained on users’ videos. The company told The Verge such models could be put to a number of uses, from captioning videos to creating advanced search functions, but did not answer a question on whether or not they would be used to collect information for ad-targeting. Similarly, when asked if users had to consent to having their videos used to train Facebook’s AI or if they could opt out, the company responded only by noting that its Data Policy says users’ uploaded content can be used for “product research and development.” Facebook also did not respond to questions asking exactly how much video will be collected for training its AI systems or how access to this data by the company’s researchers will be overseen.

In its blog post announcing the project, though, the social network did point to one future, speculative use: using AI to retrieve “digital memories” captured by smart glasses.

Facebook plans to release a pair of consumer smart glasses sometime this year. Details about the device are vague, but it’s likely these or future glasses will include integrated cameras to capture the wearer’s point of view. If AI systems can be trained to understand the content of video, then it will allow users to search for past recordings, just as many photo apps allow people to search for specific locations, objects, or people. (This is information, incidentally, that has often been indexed by AI systems trained on user data.)

Facebook has released images showing prototype pairs of its augmented-reality smart glasses.

As recording video with smart glasses “becomes the norm,” says Facebook, “people should be able to recall specific moments from their vast bank of digital memories just as easy as they capture them.” It gives the example of a user conducting a search with the phrase “Show me every time we sang happy birthday to Grandma,” before being served relevant clips. As the company notes, such a search would require that AI systems establish connections between types of data, teaching them “to match the phrase ‘happy birthday’ to cakes, candles, people singing various birthday songs, and more.” Just like humans do, AI would need to understand rich concepts composed of different types of sensory input.
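As a toy sketch of how that kind of retrieval could work (purely illustrative, not Facebook’s actual system): a text query and each video clip are mapped by trained encoders into a shared embedding space, and search becomes a nearest-neighbor ranking by cosine similarity. The clip names and embedding values below are invented for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: in a real system these would come from
# trained text and video encoders that share one vector space.
clip_embeddings = {
    "grandma_birthday_2019.mp4": np.array([0.9, 0.1, 0.0]),
    "beach_trip.mp4":            np.array([0.0, 0.2, 0.9]),
    "office_party_cake.mp4":     np.array([0.7, 0.3, 0.1]),
}
query_embedding = np.array([0.8, 0.2, 0.0])  # "singing happy birthday"

# Rank clips by similarity to the query; the best match comes first.
ranked = sorted(
    clip_embeddings.items(),
    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
    reverse=True,
)
for name, _ in ranked:
    print(name)
```

In a production system the vectors would be high-dimensional and learned from data, but the ranking step is conceptually this simple.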

Looking to the future, the combination of smart glasses and machine learning would enable what’s referred to as “worldscraping” — capturing granular data about the world by turning wearers of smart glasses into roving CCTV cameras. As the practice was described in a report last year from The Guardian: “Every time someone browsed a supermarket, their smart glasses would be recording real-time pricing data, stock levels and browsing habits; every time they opened up a newspaper, their glasses would know which stories they read, which adverts they looked at and which celebrity beach pictures their gaze lingered on.”

This is an extreme outcome and not an avenue of research Facebook says it’s currently exploring. But it does illustrate the potential significance of pairing advanced AI video analysis with smart glasses — which the social network is apparently keen to do.

By comparison, the only use of its new AI video analysis tools that Facebook is currently disclosing is relatively mundane. Along with the announcement of Learning from Videos today, Facebook says it’s deployed a new content recommendation system based on its video work in its TikTok-clone Reels. “Popular videos often consist of the same music set to the same dance moves, but created and acted by different people,” says Facebook. By analyzing the content of videos, Facebook’s AI can suggest similar clips to users.

Such content recommendation algorithms are not without potential problems, though. A recent report from MIT Technology Review highlighted how the social network’s emphasis on growth and user engagement has stopped its AI team from fully addressing how algorithms can spread misinformation and encourage political polarization. As the Technology Review article says: “The [machine learning] models that maximize engagement also favor controversy, misinformation, and extremism.” This creates a conflict between the duties of Facebook’s AI ethics researchers and the company’s credo of maximizing growth.

Facebook isn’t the only big tech company pursuing advanced AI video analysis, nor is it the only one to leverage users’ data to do so. Google, for example, maintains a publicly accessible research dataset containing 8 million curated and partially labeled YouTube videos in order to “help accelerate research on large scale video understanding.” The search giant’s ad operations could similarly benefit from AI that understands the content of videos, even if the end result is simply serving more relevant ads in YouTube.

Facebook, though, thinks it has one particular advantage over its competitors. Not only does it have ample training data, but it’s pushing more and more resources into an AI method known as self-supervised learning.

Usually, when AI models are trained on data, those inputs have to be labeled by humans: tagging objects in pictures or transcribing audio recordings, for example. If you’ve ever solved a CAPTCHA identifying fire hydrants or pedestrian crossings, then you’ve likely labeled data that’s helped to train AI. But self-supervised learning does away with the labels, speeding up the training process and, some researchers believe, resulting in deeper and more meaningful analysis as the AI systems teach themselves to join the dots. Facebook is so optimistic about self-supervised learning that it has called it “the dark matter of intelligence.”
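The core trick in self-supervised learning is that the training targets are carved out of the raw data itself. Here is a minimal toy sketch (next-step prediction on a signal, not Facebook’s actual method): the “label” for each window of data is simply the value that follows it, so no human annotator is involved.

```python
import numpy as np

# Unlabeled raw data: a sine wave stands in for, say, video frames.
sequence = np.sin(np.linspace(0, 20, 200))

# Self-supervision: each training target is just the next value in the
# sequence, so the labels come for free from the data itself.
window = 5
X = np.array([sequence[i:i + window] for i in range(len(sequence) - window)])
y = sequence[window:]

# Fit a linear predictor by least squares (a stand-in for a real model).
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

mse = float(np.mean((X @ weights - y) ** 2))
print(f"mean squared error: {mse:.2e}")
```

The same pretext-task idea scales up to masking patches of video or predicting future frames, where a model must learn meaningful structure to fill in what the data itself withholds.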

The company says its future work on AI video analysis will focus on semi- and self-supervised learning methods, and that such techniques “have already improved our computer vision and speech recognition systems.” With such an abundance of video content available from Facebook’s 2.8 billion users, skipping the labeling part of AI training certainly makes sense. And if the social network can teach its machine learning models to understand video seamlessly, who knows what they might learn?


Categories
Tech News

Ring to make police video requests public – as long as you give Amazon your data

Amazon yesterday announced its ‘Neighbors by Ring’ app would make police video requests “public” going forward. That would be really cool, if it were true.

Up front: Amazon purchased Ring for $2 billion a few years back. The company sells doorbells with cameras installed that offer owners the peace of mind that comes with having 24/7 surveillance on their property.

The big sell is that it’ll help keep your community safe and protect your precious Amazon deliveries from being stolen.

Amazon keeps all the videos and data related to your Ring account and partners with law enforcement agencies to allow officers to reach out to people using the ‘Neighbors by Ring’ app to request video evidence and information.

The big deal: On the surface this sounds awesome. If someone commits a crime in your neighborhood and everyone has a Ring doorbell installed, there’s a pretty good chance someone’s surveillance system caught something.

But the reality of surveillance is entirely different. There are literally hundreds of studies and thousands of cautionary tales about the dangers of surveillance.

And Ring is arguably more dangerous than any other kind of surveillance system because it invades our homes.

The big problem: If you want Amazon to conduct 24/7 surveillance on you, your family, and your neighbors all you have to do is buy a Ring camera and install the Neighbors app.

But if you do not want Amazon to conduct 24/7 surveillance on you, your family, and your neighbors, there’s absolutely no way for you to opt out.

Simply put: if your neighbor has a Ring camera and your home, yard, garage, or car is in its typical FOV, Amazon is spying on you without your permission and there’s nothing you can do about it. If you drive past a Ring camera on your way to work, Amazon has that video. If you drive past 10, Amazon has 10 videos. 

What now?

Yesterday, Amazon said it would make all police requests for videos and data “public,” but that appears to just be another ploy to get user data.

You cannot access the “public” police data requests unless you sign up for the Neighbors app by Ring. And you can’t use the app unless you create an account. And, per Amazon, you’re legally obligated to provide the company with your real name and identity – meanwhile, for obvious reasons, the app doesn’t work unless it has your location.

Here’s the relevant passage from the app’s TOS:

You promise to provide us with accurate, complete, and up-to-date registration information about yourself. You may not select as your User ID a name that you do not have the right to use, or the name of another person with the intention of impersonating that person.

Other interesting tidbits in the TOS include notice that using the app waives your right to a trial by jury and to participation in a class-action lawsuit. You also agree that any video recorded remains your sole property, that any legal issues stemming from the data obtained by Amazon and Ring are your sole responsibility, and that the company has full, irrevocable legal use of your data in perpetuity.

Bottom line: If you don’t use the app, you can’t opt out of the Ring network. And if you don’t give Amazon your location data you can’t see how police are using the app.

Amazon’s Ring system represents a clear and targeted threat to privacy. The false transparency of making police requests “public” for app users doesn’t change that.



Categories
Tech News

Clubhouse will be available to the general public soon

Partly thanks to its rather unconventional audio-only format, Clubhouse quickly became one of the hottest names in social media over the past year. Given its popularity, it might surprise some to discover that the social network is still an exclusive club that requires an invitation from an existing member to get in. It was also ironically slow to expand beyond iOS. Opening its doors to Android users seemingly opened the floodgates as well, and Clubhouse will no longer be an exclusive club starting sometime this summer.

Clubhouse recently explained that its slow expansion was intentional, even when faced with new rivals in that audio-only social networking space. It wanted to focus on more important aspects of growing the community and stabilizing features first rather than getting bogged down with server maintenance or rushed bug fixing from dozens of reports that would be coming in. It seems, however, that it has reached a point where it’s ready to actually open the doors to everyone.

In one of its tweets highlighting its recent Town Hall, Clubhouse revealed that it already has more than 2 million Android users. That’s rather impressive considering the Android app only went live earlier in May and hasn’t even reached feature parity with the iOS version. It does prove that there were that many people just waiting for Clubhouse to arrive on their favorite mobile platform before jumping in.

More importantly, it also revealed its plans to make the network available to the general public this summer, presuming all goes well. Until then, it will be focusing on tasks that will make the network feel more complete to prepare Clubhouse for a sudden flood of new users without any invitations.

Of course, Clubhouse’s public opening comes at a time when Twitter and Facebook, among others, are already stepping up their game in that same space. Twitter is even starting to test its Ticketed Spaces as a monetization system for both creators and Twitter itself, something that Clubhouse has also launched recently in a very minimal way.




Categories
AI

TikTok Tom Cruise deepfake creator: public shouldn’t worry about ‘one-click fakes’

When a series of spookily convincing Tom Cruise deepfakes went viral on TikTok, some suggested it was a chilling sign of things to come — a harbinger of an era in which AI will let anyone make fake videos of anyone else. The videos’ creator, though, Belgian VFX specialist Chris Ume, says this is far from the case. Speaking to The Verge about his viral clips, Ume stresses the amount of time and effort that went into making each deepfake, as well as the importance of working with a top-flight Tom Cruise impersonator, Miles Fisher.

“You can’t do it by just pressing a button,” says Ume. “That’s important, that’s a message I want to tell people.” Each clip took weeks of work, he says, using the open-source DeepFaceLab algorithm as well as established video editing tools. “By combining traditional CGI and VFX with deepfakes, it makes it better. I make sure you don’t see any of the glitches.”

Ume has been working with deepfakes for years, including creating the effects for the “Sassy Justice” series made by South Park’s Trey Parker and Matt Stone. He started working on Cruise when he saw a video by Fisher announcing a fictitious run for president by the Hollywood star. The pair then worked together on a follow-up and decided to put a series of “harmless” clips up on TikTok. Their account, @deeptomcruise, quickly racked up tens of thousands of followers and likes. Ume pulled the videos briefly but then restored them.

“It’s fulfilled its purpose,” he says of the account. “We had fun. I created awareness. I showed my skills. We made people smile. And that’s it, the project is done.” A spokesperson from TikTok told The Verge that the account was well within its rules for parody uses of deepfakes, and Ume notes that Cruise — the real Tom Cruise — has since made his own official account, perhaps as a result of seeing his AI doppelgänger go viral.

Deepfake technology has been developing for years now, and there’s no doubt that the results are getting more realistic and easier to make. Although there has been much speculation about the potential harm such technology could cause in politics, so far these effects have been relatively nonexistent. Where the technology is definitely causing damage is in the creation of revenge porn or nonconsensual pornography of women. In those cases, the fake videos or images don’t have to be realistic to create tremendous damage. Simply threatening someone with the release of fake imagery, or creating rumors about the existence of such content, can be enough to ruin reputations and careers.

The Tom Cruise fakes, though, show a much more beneficial use of the technology: as another part of the CGI toolkit. Ume says there are so many uses for deepfakes, from dubbing actors in film and TV, to restoring old footage, to animating CGI characters. What he stresses, though, is the incompleteness of the technology operating by itself.

Creating the fakes required two months of training the base AI models (using a pair of NVIDIA RTX 8000 GPUs) on footage of Cruise, plus days of further processing for each clip. After that, Ume had to go through each video frame by frame, making small adjustments to sell the overall effect: smoothing a line here and covering up a glitch there. “The most difficult thing is making it look alive,” he says. “You can see it in the eyes when it’s not right.”

Ume says a huge amount of credit goes to Fisher, a TV and film actor who captured the exaggerated mannerisms of Cruise, from his manic laugh to his intense delivery. “He’s a really talented actor,” says Ume. “I just do the visual stuff.” Even then, if you look closely, you can still see moments where the illusion fails, as in the clip below where Fisher’s eyes and mouth glitch for a second as he puts the sunglasses on.

Blink and you’ll miss it: look closely and you can see Fisher’s mouth and eye glitch.
GIF: The Verge

Although Ume’s point is that his deepfakes take a lot of work and a professional impersonator, it’s also clear that the technology will improve over time. Exactly how easy it will be to make seamless fakes in the future is difficult to predict, and experts are busy developing tools that can automatically identify fakes or verify unedited footage.

Ume, though, says he isn’t too worried about the future. We’ve developed such technology before and society’s conception of truth has more or less survived. “It’s like Photoshop 20 years ago, people didn’t know what photo editing was, and now they know about these fakes,” he says. As deepfakes become more and more of a staple in TV and movies, people’s expectations will change, as they did for imagery in the age of Photoshop. One thing’s for certain, says Ume, and it’s that the genie can’t be put back in the bottle. “Deepfakes are here to stay,” he says. “Everyone believes in it.”

Update March 5th, 12:11PM ET: Updated to note that Ume and Fisher have now restored the videos to the @deeptomcruise TikTok account.


Categories
AI

OpsRamp unveils software to migrate enterprises to the public cloud

Enterprise software developer OpsRamp on Tuesday unveiled its spring 2021 release designed to help organizations migrate to the public cloud and better monitor and manage their hybrid IT infrastructure.

The new release delivers “self-service onboarding” for accessing the public cloud, the company said. The updated platform features controls and analytical tools that include “customizable dashboards for visualization of hybrid infrastructure performance and Prometheus metrics ingestion for using homegrown monitoring data within the OpsRamp platform.”

“CloudOps teams are shackled by legacy IT operations tools that were never designed to handle the dynamic and ephemeral nature of public cloud infrastructure,” OpsRamp VP Ciaran Byrne said in a statement.

“OpsRamp’s digital operations management platform enables faster discovery and monitoring of production workloads across multicloud environments, along with data-driven insights for managing the health and performance of a distributed infrastructure ecosystem.”

Founded in 2014, OpsRamp partners with managed service providers (MSPs) to serve enterprise IT organizations with its IT operations management (ITOM) platform. The company’s software delivers hybrid discovery and monitoring of physical, virtualized, and cloud-based IT assets ranging from on-premises hardware to hosted applications, as well as event and incident management, remediation, and automation.

Rosy forecast for public cloud services in 2021

OpsRamp is banking on strong projected growth for public cloud services this year and beyond. Worldwide spending on public cloud services is expected to reach $332.3 billion in 2021, an increase of 23.1% from 2020, according to Gartner’s April forecast. Public cloud spending in 2020 was $270 billion.

Gartner had an especially rosy forecast for the infrastructure-as-a-service (IaaS) segment of the public cloud market, projecting 38.5% year-over-year growth to reach $82 billion in 2021. OpsRamp could also enjoy revenue growth in cloud management and security services, which Gartner projects will become a $16 billion market segment this year.

Here are the new features arriving with the OpsRamp spring 2021 release, according to the company:

  • Rapid onboarding. OpsRamp’s hybrid cloud wizard delivers a self-contained guide for discovering and monitoring multicloud and cloud-native infrastructure. Once IT teams provide their cloud infrastructure details, OpsRamp auto-monitoring onboards cloud resources and displays performance metrics within minutes. The platform currently supports auto-monitoring for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) cloud services, along with Kubernetes distributions such as OpenShift and K3s, as well as popular Linux distributions.
  • Cloud-native metrics observability. Kubernetes admins can now ingest Prometheus metrics into OpsRamp for holistic visibility and faster troubleshooting across cloud-native infrastructure. The company’s pull-based mechanism for scraping Prometheus metrics across Kubernetes clusters ensures faster visualization, data federation, and long-term retention of Prometheus insights.
  • Data-driven insights for hybrid IT management. OpsRamp’s new dashboarding model allows cloud operators to visualize any data with a flexible querying framework. Dashboards 2.0 are customizable widgets powered by Prometheus Query Language (PromQL), with the ability to import/export dashboards and customize color palettes and fonts, along with out-of-the-box support for a growing number of cloud services.
  • Flexible and centralized alerting. New alert definition models offer greater flexibility for setting alerts, along with streamlined mechanisms to alert on metric data collected by OpsRamp. CloudOps teams can centrally set thresholds to generate alerts for auto-monitored resources and then use relevant insights to keep their IT services up and running.
  • Comprehensive cloud monitoring. OpsRamp currently offers more than 160 monitoring integrations across leading public cloud providers such as AWS, Azure, and GCP. The OpsRamp spring 2021 release offers expanded coverage for Microsoft Azure, with metrics support for Blob Storage, Table Storage, File Storage, BatchAI Workspaces, BlockChain, Databox Edge, Logic Integration Service Environment, and Kusto Clusters.
  • Hyperconverged infrastructure monitoring. OpsRamp can not only discover and monitor Cisco HyperFlex components such as cluster nodes, hosts, datastores, and virtual machines but also ingest HyperFlex events into the OpsRamp AIOps platform for faster root cause diagnostics. The platform also supports the discovery and monitoring of physical components of Dell EMC VxRail appliances, along with ingestion of VxRail software and hardware events.
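The pull-based scraping model described above can be illustrated with a minimal sketch. The parser below handles only the simple Prometheus text exposition format, and the metric names are invented — this is not OpsRamp’s or Prometheus’s actual client code:

```python
# Minimal sketch of pull-based Prometheus metric scraping (illustrative only).
# A real scraper periodically fetches each target's /metrics endpoint; here
# the response body is hard-coded. Label values containing spaces would
# break this simplified parser.

def parse_metrics(exposition_text):
    """Parse Prometheus text exposition format into {series: value}."""
    metrics = {}
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        series, _, value = line.rpartition(" ")
        metrics[series] = float(value)
    return metrics

sample = """\
# HELP node_cpu_usage CPU usage ratio.
# TYPE node_cpu_usage gauge
node_cpu_usage{instance="web-1"} 0.73
node_cpu_usage{instance="web-2"} 0.41
"""

parsed = parse_metrics(sample)
# A centrally set threshold, in the spirit of the alerting feature above:
# flag any series whose gauge exceeds 0.7.
alerts = {series: v for series, v in parsed.items() if v > 0.7}
```

In a platform like the one described, the scraped values would then feed dashboards and alert definitions rather than a simple dictionary comprehension.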

