Game

‘Hyenas’ is a team shooter from the creators of ‘Alien: Isolation’

Creative Assembly is best known for deliberately paced games like Alien: Isolation and the Total War series, but it’s about to jump headlong into the multiplayer action realm. The developer is partnering with Sega to introduce Hyenas, a team-based shooter coming to PS5, PS4, Xbox Series X/S, Xbox One and PC in 2023. The title takes its cues from tech headlines, but it doesn’t take itself (or its gameplay mechanics) too seriously.

You join a three-person team to raid spaceship shopping malls for the coveted merch left behind by Mars billionaires. You’ll have to compete against four other loot-seeking teams while simultaneously dealing with security systems, hired goons and zero gravity. You can not only flip gravity on and off, but also use bridge-making goo and other special abilities to claim the upper hand. And yes, it’s pretty silly: you can expect appearances from Richard Nixon masks, Sonic the Hedgehog merch and Pez dispensers.

The creators are currently accepting sign-ups for a closed alpha test on PCs. They’ve also made clear there will be no “pay to win” systems. While that suggests you might have the option of buying cosmetic items, your success should depend solely on talent. It’s just a question of whether Hyenas will be good enough to pry gamers away from multiplayer shooter mainstays like the Call of Duty series or Fortnite.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Repost: Original Source and Author Link

Game

Pikmin Bloom is Pokemon GO creator’s latest attempt to make magic

Niantic Labs has been around for a long time and was, at one point, even a Google company. Its core business is bringing augmented reality games to the mobile market, but its first title, Ingress, wasn’t a commercial success despite gaining a faithful following. The studio finally struck gold with Pokemon GO and has been trying to recreate that success ever since. It has now partnered with Nintendo again, this time for a relatively obscure franchise, and Pikmin Bloom is launching worldwide just as people start walking outdoors again.

Pikmin Bloom is a bit of a gamble for Niantic Labs compared to its bigger attempts with more popular franchises. Harry Potter: Wizards Unite should have been a roaring success given the brand’s popularity, but its “reskinned Pokemon GO” experience didn’t sit well with fans. Catan: World Explorers should have been able to draw on decades of board game history, but that didn’t translate well to augmented reality.

In comparison, Pikmin and its plant-like creatures are probably known only to some Nintendo gamers, which might not help spread the name among smartphone users. On the other hand, the brand isn’t immediately associated with hardcore gaming, so it might feel more approachable to “casual” mobile players. Considering the AR game’s main mechanic, that could actually work in its favor.

Like Pokemon GO and Niantic’s other games, Pikmin Bloom revolves around walking. A lot of walking, in fact. It’s pretty much a gamified pedometer, making flowers bloom as you take more steps. The game also has an element of logging memories and reminiscing about those moments in the future.

As far as gameplay is concerned, Pikmin Bloom is admittedly lighter than Pokemon GO, which involves some level of competition through Pokemon battles. Whether that will help it appeal to more smartphone users, however, still remains to be seen. Niantic Labs hasn’t had a lot of success so far in recreating the magic of Pokemon GO, but at least it doesn’t seem fazed by those failures.


Game

‘Stardew Valley’ creator’s next game is ‘Haunted Chocolatier’

Stardew Valley creator Eric Barone, aka “ConcernedApe,” has made a surprise unveiling of his next game, Haunted Chocolatier. It has the same pixelated SNES look as Stardew, with characters, set-pieces and themes that are similarly cute and quirky. 

“In this game, you will play as a chocolatier living in a haunted castle. In order to thrive in your new role, you will have to gather rare ingredients, make delicious chocolates, and sell them in a chocolate shop,” according to Barone’s blog on the new website. The video shows characters heading out into a town, the castle, a mountain and other scenarios to seek ingredients and fend off creatures.

It’s Barone’s first new game since Stardew Valley launched in 2016, but so far it’s not much more than a demo. Barone has yet to finalize the gameplay systems, and said he doesn’t even want to be “tied down to any particular concept of what the game is” ahead of launch. 

Haunted Chocolatier does sound and look a lot like Stardew at first glance. However, in a FAQ, Barone said there will be some substantial differences, particularly when it comes to gameplay.

Like Stardew Valley, Haunted Chocolatier is another “town game,” where you move to a new town and try your hand at a new way of living. You’ll get to know the townspeople, achieve your goals and make progress in many ways. All of that is similar to Stardew Valley. However, the core gameplay and theming are quite a bit different. Haunted Chocolatier is more of an action-RPG compared to Stardew Valley. And instead of a farm being the focal point of your endeavors, it’s a chocolate shop.

Barone wouldn’t reveal other details, like whether the new title is set in the same world as Stardew, nor a release date or even a general timeframe. He did say that it would be single-player only, with no plans for multiplayer. The game will “100 percent” come to PC, though he has “every intention of bringing it to the other major platforms as well.”



Computing

The Elgato Facecam Is a Webcam for Content Creators

The company behind popular streaming accessories like the Stream Deck XL is back with another new accessory. Launching June 15 is the Elgato Facecam, a Full HD 1080p webcam built from the ground up with features designed just for content creators.

Priced at $200, the Elgato Facecam isn’t your average webcam. It sports a premium design and tons of advanced imaging features that ensure you look good during your Teams and Zoom calls. What makes it so special is its high-quality lens and DSLR-quality sensor, as well as onboard flash memory for storing your configuration.

Elgato is using an eight-element fixed-focus lens with an f/2.4 aperture and a 24mm focal length. The design maximizes sharpness out to the edges of the image, reduces aberration, enhances contrast, and even cuts down on lens flare. Under the lens sits a Sony Starvis CMOS sensor, which enables the webcam’s 82-degree diagonal field of view.

For those unfamiliar, this is the same sensor used in Dell’s UltraSharp 4K webcam, and the sensor family also appears in cameras used by filmmakers and photographers.

Interestingly, for situations where your surroundings might get too hot, the webcam even has a heatsink to keep itself cool. Elgato also thought about storing your settings: thanks to the onboard flash memory, you don’t need to reconfigure the webcam every time you plug it into a different PC.

One of the best features of the Elgato Facecam is the processor tucked inside it. Thanks to an advanced image engine, you can use the Camera Hub software to tweak exposure compensation, white balance, and shutter speed. Facecam also provides an ISO readout from the sensor, so you know how to adjust the lighting around you — Elgato claims that’s a first for a webcam.

The Elgato Facecam doesn’t have a microphone on board, as creators and professionals are expected to use a dedicated microphone. The webcam also caps out at 1080p and doesn’t do 4K. Elgato says this is because “Facecam is laser-focused on providing the best possible Full HD 1080p image at a smooth 60 frames per second.”

Elgato Facecam connects to PCs or Macs via USB-C and can clamp on top of a display or attach to a tripod via a quarter-inch thread. It supports uncompressed video at 1080p, 720p, or 540p, each at either 60 or 30 frames per second.



Game

Riot Games releases album with free music for streamers and creators

If you’re a streamer or video creator who is paranoid about getting hit with a copyright strike, Riot Games has a new solution. The company has launched “Sessions: Vi,” a new album of “completely free” music for creators and streamers to use during their live streams, in their videos, and more — even if they monetize their content.

Put simply, you can’t use just any music track in your live streams and videos; doing so can result in a copyright strike, which could mean anything from losing the ability to monetize a video to jeopardizing your account. Paying for a music license is an option, but those who can’t afford one are left with royalty-free music, which often comes with an upfront cost of its own.

Riot Games Music has released “Sessions: Vi” as a free album for anyone to use; you can, for example, play it in the background while streaming a game to add some ambiance to your video or you can add it as a soundtrack for non-live videos uploaded to platforms like YouTube and TikTok. Riot’s full guidelines for using the content can be found here.

The full album has been uploaded to YouTube, as well as major platforms like Apple Music, Deezer, Amazon Music, and Spotify. These platforms are your best option if you want to stream music in the background during a video. Alternatively, you can head over to Riot’s Sessions web page to download the album.

Downloading the album will give you access to the 37 tracks on it, which can be added to your projects using video editing software. Riot says this won’t be the only album of free music offered to streamers and creators. Though we don’t know when the next album will drop (and if it’ll focus on a different genre), the company says it’ll provide updates on its social media accounts when it releases the next album later this year.


Game

EA buys the creators of mobile hit ‘Golf Clash’ for $1.4 billion

EA is keen to grow its footprint in mobile sports gaming beyond the likes of FIFA Mobile. EA has bought Golf Clash creator Playdemic from WarnerMedia for a sizeable $1.4 billion to help “expand [its] sports portfolio.” It’s taking advantage of WarnerMedia’s eagerness to offload assets as it merges with Discovery and focuses its game lineup on titles based on “storied franchises,” according to Warner Bros. Games President David Haddad.

EA still plans to capitalize on the “ongoing success” of Golf Clash, and hints that it wants to bring that game structure to other brands on top of developing new experiences.

That’s an unusually large payout for a studio that you might not know by name, but it makes sense for both companies. Golf Clash has been popular for years, racking up 80 million downloads as well as awards from the likes of BAFTA and the Independent Game Developers’ Association. EA is walking into a franchise that’s already making money, and might not have to spend much more to help it grow.

WarnerMedia, meanwhile, has a clear incentive: it’s going to need money as it spins out from AT&T. The move should help it recover from the merger that much sooner while concentrating on the games that are most likely to prove successful. The sale of a successful game might sting at first, but could pay off in the long run.



Computing

PCWorld’s May Digital Magazine: Meet ConceptD, Acer’s new PCs for creators

Stay on top of the latest tech with PCWorld’s Digital Magazine. Available as single copies or as a monthly subscription, it highlights the best content from PCWorld.com—the most important news, the key product reviews, and the most useful features and how-to stories—in a curated Digital Magazine for Android and iOS, as well as for desktop and other tablet readers.

In the May issue

In May we sat down with Acer CEO Jason Chen and got a first look at the company’s ConceptD PCs: powerful, quiet, cool, and made for creators. We untangled USB4 and what the future standard means for USB chaos and Thunderbolt 3. And find out how Huawei is basically forcing fans to buy the P30 Pro by crippling the P30.

Other highlights include:

  • News: RTX on GTX: Nvidia’s latest driver unlocks ray tracing on GeForce GTX graphics cards
  • Amazon All-new Kindle review: Front lighting and a better screen elevate this entry level e-reader
  • Alienware Area-51m R1 review: Fast, big and upgradable
  • Lenovo Legion Y7000 review: A smart, sophisticated gaming laptop you can actually afford
  • Two-factor authentication explained: How to choose the right level of security for every account
  • Here’s How: 5 Google G Suite changes that will improve your life

Video highlights

Watch: Surprise! Apple announced updated models of the iPad Air and the iPad mini. We should have our hands on the new tablets soon, but we wanted to give you a quick rundown on what’s new and what remains the same for both iPads.

How to subscribe and start reading

Subscribers can visit this page to learn how to access PCWorld on any device and start reading the current issue right away. 

Subscribers: Update your PCWorld app to the latest version today!

Not a subscriber? With a PCWorld subscription, you get access to the digital magazine on as many devices as you’d like. Subscribe today!

Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details.


Computing

Nvidia’s RTX Studio laptops pair fierce hardware with dedicated drivers for content creators

Intel isn’t the only chipmaker aiming to remold the PC industry to its vision. At Computex in Taiwan on Monday, Nvidia revealed RTX Studio laptops, a new initiative focused on giving content creators powerful computer hardware and rock-solid, creation-focused drivers to match.

These powerful notebooks represent the antithesis of Intel’s “Project Athena” ultra-thin ambitions. While all RTX Studio laptops utilize Nvidia’s energy-efficient Max-Q GPU variants to enable slim designs, these things are loaded down with some serious firepower.

They’ll feature RTX graphics cards equipped with dedicated ray tracing and AI hardware, from both the GeForce and Quadro lineups. In fact, Nvidia launched a fresh stack of Quadro mobile GPUs to debut in RTX Studio laptops, culminating in the Quadro RTX 5000, a fearsome beast of a graphics solution with over 3,000 CUDA cores and 16GB of GDDR6 memory. You’ll find a chart of the new RTX Quadro options at the end of this article.

That’s not all. Check out these potent minimum requirements for Nvidia RTX Studio laptops:

  • GPU: RTX 2060, Quadro RTX 3000 or higher
  • CPU: Intel Core i7 (H-series) or higher
  • RAM: 16GB or higher
  • SSD: 512GB or higher
  • Display: 1080p or 4K

That sort of hardware guarantees these laptops can handle anything you throw at them. And Nvidia’s backing up that hardware with dedicated Nvidia Studio Drivers, a reimagining of the Creator Ready drivers the company debuted earlier this year.

Nvidia Studio drivers optimize around performance in creative apps, like the Unity and Unreal game engines, the DaVinci Resolve video editor, Adobe Lightroom, and more. The Studio Driver release cadence revolves around updates to those major creative apps, rather than video games, and Nvidia says it conducts “in-depth multi-app workflow testing across multiple app versions.” Whenever a new Studio Driver version releases, it’ll also wrap in all the available gaming optimizations found in the most current GeForce Game Ready drivers, since so many content creators mix work and play these days.


The Razer Blade Studio Edition.

Nvidia claims that RTX Studio laptops running Studio Drivers deliver a major performance advantage in ray tracing and AI-centric tasks over high-end laptops such as Apple’s Radeon Vega-equipped MacBook Pro. That’s no surprise, given the steep minimum requirements and the dedicated hardware for those tasks, though such workloads are typically reserved for professionals. But Nvidia also says that RTX Studio laptops can render video significantly faster than the competition, as you can see in the charts below, and that should bring a smile to the faces of YouTubers and Twitch streamers.

Nvidia’s latest hardware initiative launches with plenty of support, too. The company says that 17 different RTX Studio laptops from seven major PC vendors—Acer, Asus, Dell, Gigabyte, HP, MSI and Razer—will be available when the notebooks launch in June. Nvidia says prices will start at $1,599. Going by the glimpses we’ve received at RTX Studio laptops so far, expect prices to climb much, much higher for the most premium models.


Computing

Nvidia woos creators with 10 potent RTX Studio laptops, 30-bit color support for GeForce GPUs

Creators, start your engines. Nvidia’s kicking off Siggraph, an annual professional graphics conference, with a pair of announcements designed to make life easier for industry diehards and amateur video producers alike. The company revealed 10 new RTX Studio laptops from a variety of partners, and new Studio drivers that bring capabilities formerly locked to Quadro GPUs alone over to the more mainstream GeForce and Titan lineups.

Nvidia introduced RTX Studio laptops at Computex in May. These laptops run powerful Max-Q hardware to make intense video- and image-editing as painless and portable as possible, backed by drivers devoted to optimizing creative software rather than games.

These are the minimum requirements for Nvidia RTX Studio laptops:

  • GPU: RTX 2060, Quadro RTX 3000 or higher
  • CPU: Intel Core i7 (H-series) or higher
  • RAM: 16GB or higher
  • SSD: 512GB or higher
  • Display: 1080p or 4K

The initiative launched with 17 laptops from Acer, Asus, Dell, Gigabyte, HP, MSI and Razer. At Siggraph on Monday, ten new models were announced.

Nvidia is releasing freshly updated Studio drivers to coincide with the announcement, and the release unlocks a new feature for prosumers: the new Studio driver finally brings 30-bit (10bpc) color support in OpenGL applications to GeForce and Titan GPUs.


“With 24-bit color, a pixel can be built from 16.7 million shades of color. By increasing to 30-bit color, a pixel can now be built from over 1 billion shades of color, which eliminates the abrupt changes in shades of the same color,” Nvidia’s announcement post explains. The feature delivers “seamless color transitions without banding.”
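Those shade counts follow directly from the bit depths: 24-bit color allots 8 bits to each of the red, green, and blue channels, while 30-bit color allots 10. A quick sanity check of Nvidia’s figures:

```python
# 24-bit color: 8 bits per channel; 30-bit color: 10 bits per channel.
shades_24bit = 2 ** 24      # total colors at 24-bit depth
shades_30bit = 2 ** 30      # total colors at 30-bit depth
levels_per_channel_24 = 2 ** 8   # 256 levels per R/G/B channel
levels_per_channel_30 = 2 ** 10  # 1,024 levels per R/G/B channel

print(f"{shades_24bit:,}")  # the "16.7 million" figure
print(f"{shades_30bit:,}")  # the "over 1 billion" figure
```

The extra 6 bits multiply the palette by 64, which is why gradients that band visibly at 24-bit can render smoothly at 30-bit.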

It’s a surprising (but welcome) development; while AMD Radeon graphics cards have supported the feature for years, Nvidia has long restricted 30-bit color support in OpenGL to the professional Quadro lineup for product segmentation reasons. Now, those RTX Studio laptops equipped with RTX 20-series GPUs can get in on editing HDR images, too.

Today’s Studio driver update also includes optimizations for several recent creative application updates: Magix VEGAS Pro v17, Autodesk Arnold, Allegorithmic Substance Painter 2019.2, Blender 2.80, Cinema 4D R21, and Otoy Octane Render 2019.2.


AI

ImageNet creators find blurring faces for privacy has a ‘minimal impact on accuracy’


The makers of ImageNet, one of the most influential datasets in machine learning, have released a version of the dataset that blurs people’s faces in order to support privacy experimentation. Authors of a paper on the work say their research is the first known effort to analyze the impact face blurring has on the accuracy of large-scale computer vision models. For this version, faces were detected automatically before being blurred. Altogether, the altered dataset obscures the faces of 562,000 people in more than a quarter-million images. The creators of the truncated, roughly 1.4-million-image version of the dataset used for competitions told VentureBeat they plan to retire the unblurred version and replace it with the face-blurred one.

“Experiments show that one can use the face-blurred version for benchmarking object recognition and for transfer learning with only marginal loss of accuracy,” the team wrote in an update published on the ImageNet website late last week, together with a research paper on the work. “An emerging problem now is how to make sure computer vision is fair and preserves people’s privacy. We are continually evolving ImageNet to address these emerging needs.”

Computer vision systems can be used for everything from recognizing car accidents on freeways to fueling mass surveillance, and as ongoing controversies over facial recognition have shown, images of the human face are deeply personal.

Following experiments with object detection and scene detection benchmark tests using the modified dataset, the team reported in the paper that blurring faces can reduce accuracy by 13% to 60% in some categories, but that the reduction has a “minimal impact on accuracy” overall. Categories involving objects worn or held close to people’s faces, like a harmonica or a mask, saw higher rates of classification error.

“Through extensive experiments, we demonstrate that training on face-blurred does not significantly compromise accuracy on both image classification and downstream tasks, while providing some privacy protection. Therefore, we advocate for face obfuscation to be included in ImageNet and to become a standard step in future dataset creation efforts,” the paper’s coauthors write.
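The detect-then-obfuscate approach the team describes can be sketched in miniature. The toy example below (not ImageNet’s actual pipeline, which uses automatic face detection and Gaussian blurring on full-color photos) applies a simple mean blur to a rectangular "face" region of a grayscale image represented as a nested list, leaving the rest of the image untouched:

```python
def blur_region(img, top, left, height, width, k=1):
    """Replace each pixel in the given rectangle with the mean of its
    (2k+1) x (2k+1) neighborhood, clamped at the image borders.
    A stand-in for the blurring applied to detected face boxes."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]          # copy; source stays intact
    for r in range(top, min(top + height, rows)):
        for c in range(left, min(left + width, cols)):
            ys = range(max(0, r - k), min(rows, r + k + 1))
            xs = range(max(0, c - k), min(cols, c + k + 1))
            vals = [img[y][x] for y in ys for x in xs]
            out[r][c] = sum(vals) // len(vals)
    return out

# 4x4 "image" with a bright 2x2 "face" in the middle
img = [[0, 0,   0,   0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0,   0,   0]]
blurred = blur_region(img, top=1, left=1, height=2, width=2)
```

After the call, the bright face pixels are averaged with their dark surroundings while pixels outside the rectangle keep their original values, which is the property the benchmarks above rely on: object context survives, identity detail does not.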

An assessment of the 1.4 million images included in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset found that 17% of the images contain faces, despite the fact that only three of 1,000 categories in the dataset mention people. In some categories, like “military uniform” and “volleyball,” 90% of the images included faces of people. Researchers also found reduced accuracy in categories rarely related to human faces, like “Eskimo dog” and “Siberian husky.”

“It is strange since most images in these two categories do not even contain human faces,” the paper reads.

Coauthors include researchers who released ImageNet in 2009, including Princeton University professor Jia Deng and Stanford University professor and former Google Cloud AI chief Fei-Fei Li. The original ImageNet paper has been cited tens of thousands of times since it was introduced at the Computer Vision and Pattern Recognition (CVPR) conference in 2009 and has since become one of the most influential research papers and datasets for the advancement of machine learning.

The ImageNet Large Scale Visual Recognition Challenge that took place from 2010 to 2017 is known for helping usher in the era of deep learning and leading to the spinoff of startups like Clarifai and MetaMind. Founded by Richard Socher, who helped Deng and Li assemble ImageNet, MetaMind was acquired by Salesforce in 2016. After helping establish the Einstein AI brand, Socher left his role as chief scientist at Salesforce last summer to launch a search engine startup.

The face-blurring version marks the second major ethical or privacy-related change to the dataset released 12 years ago. In a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference in 2020, creators of the ImageNet dataset removed a majority of categories associated with people because those categories were found to be offensive.

That paper attributes racist, sexist, and politically charged predictions associated with ImageNet to issues like a lack of diversity in the demographics represented in the dataset and the use of the WordNet hierarchy for the words used to select and label images. A 2019 analysis found that roughly 40% of people in ImageNet photos are women, and about 1% are people over 60. It also found an overrepresentation of men between the ages of 18 and 40 and an underrepresentation of people with dark skin.

A few months after that paper was published, MIT took down another computer vision dataset, 80 Million Tiny Images, which is over a decade old and also used WordNet, after racist and sexist labels and images were found in an audit by Vinay Prabhu and Abeba Birhane. Alongside an NSFW analysis of 80 Million Tiny Images, their paper examines common shortcomings of large computer vision datasets and considers solutions for the computer vision community going forward.

Analysis of ImageNet in the paper found instances of co-occurrence of people and objects in ImageNet categories involving musical instruments, since those images often include people even if the label itself does not mention people. It also suggests the makers and managers of large computer vision datasets take steps toward reform, including the use of techniques to blur the faces of people found in datasets.

On Monday, Birhane and Prabhu urged coauthors to cite ImageNet critics whose ideas are reflected in the face-obfuscation paper, such as the popular ImageNet Roulette. In a blog post, the duo detail multiple attempts to reach the ImageNet team, and a spring 2020 presentation by Prabhu at HAI that included Fei-Fei Li about the ideas underlying Birhane and Prabhu’s criticisms of large computer vision datasets.

“We’d like to clearly point out that the biggest shortcomings are the tactical abdication of responsibility for all the mess in ImageNet combined with systematic erasure of related critical work, that might well have led to these corrective measures being taken,” the blog post reads. Coauthor and Princeton University assistant professor Olga Russakovsky told WIRED that a citation will be included in an updated version of the paper. VentureBeat asked the coauthors for additional comment about the criticisms from Birhane and Prabhu. This story will be updated if we hear back.

In other work critical of ImageNet, a few weeks after 80 Million Tiny Images was taken down, MIT researchers analyzed the ImageNet data collection pipeline and found “systematic shortcomings that led to reductions in accuracy.” And a 2017 paper found that a majority of images included in the ImageNet dataset came from Europe and the United States, another example of poor representation of people from the Global South in AI.

ILSVRC is a subset of the larger ImageNet dataset, which contains over 14 million images across more than 20,000 categories. ILSVRC, ImageNet, and the recently modified version of ILSVRC were created with help from Amazon Mechanical Turk workers using photos scraped from Google Images.

In related news, a paper by researchers from Google, Mozilla Foundation, and the University of Washington analyzing datasets used for machine learning concludes that the machine learning research community needs to foster a culture change and recognize the privacy and property rights of individuals. In other news related to harm that can be caused by deploying AI, last fall, Stanford University and OpenAI convened experts from a number of fields to critique GPT-3. The group concluded that the creators of large language models like Google and OpenAI have only a matter of months to set standards and address the societal impact of deploying such language models.

