
Hugging Face takes step toward democratizing AI and ML



The latest generation of artificial intelligence (AI) models, also known as transformers, has already changed our daily lives: taking the wheel for us, completing our thoughts when we compose an email, or answering our questions in search engines.

However, right now, only the largest tech companies have the means and manpower to wield these massive models at consumer scale. To get a model into production, data scientists typically spend one to two weeks dealing with GPUs, containers, API gateways and the like, or must hand the work off to another team, which can cause delays. The time-consuming tasks associated with honing the powers of this technology are a key reason why 87% of machine learning (ML) projects never make it to production.

To address this challenge, New York-based Hugging Face, which aims to democratize AI and ML via open source and open science, has launched Inference Endpoints. The AI-as-a-service offering is designed to handle large enterprise workloads, including in regulated industries that are heavy users of transformer models, like financial services (e.g., air-gapped environments), healthcare services (e.g., HIPAA compliance) and consumer tech (e.g., GDPR compliance). The company claims that Inference Endpoints will enable more than 100,000 Hugging Face Hub users to go from experimentation to production in just a couple of minutes.

“Hugging Face Inference Endpoints lets you turn any model into your own API in a few clicks, so users can build AI-powered applications on top of scalable, secure and fully managed infrastructure, instead of spending weeks of tedious work reinventing the wheel building and maintaining ad-hoc infrastructure (containers, Kubernetes, the works),” said Jeff Boudier, product director at Hugging Face.


Saving time and making room for new possibilities

The new feature can be useful for data scientists, saving time they can instead spend improving their models and building new AI features. With their custom models integrated into apps, they can see the impact of their work more quickly.

For software developers, Inference Endpoints means they can build AI-powered features without needing machine learning expertise.

“We have over 70,000 off-the-shelf models available to do anything from article summarization to translation to speech transcription in any language, plus image generation with diffusers. As the cliché says, the limit is your imagination,” Boudier told VentureBeat.

So, how does it work? Users first select any of the more than 70,000 open-source models on the hub, or a private model hosted on their Hugging Face account. From there, they choose a cloud provider and region, and can also specify security settings, compute type and autoscaling. After that, they can deploy any machine learning model, from transformers to diffusers. Users can even build completely custom AI applications, matching lyrics to music or creating original videos from just text, for example. Compute use is billed by the hour and invoiced monthly.
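For readers who would rather script that workflow than click through it, recent versions of the huggingface_hub Python library expose the same steps programmatically. The sketch below is illustrative rather than official documentation: the endpoint name and model are arbitrary choices, and valid instance types and sizes vary by cloud provider and account.

```python
from huggingface_hub import create_inference_endpoint

# A minimal sketch of the deployment workflow described above. The endpoint
# name, model, and instance values are illustrative; check your provider's
# catalog for valid instance types and sizes.
endpoint = create_inference_endpoint(
    "sentiment-demo",                  # hypothetical endpoint name
    repository="distilbert-base-uncased-finetuned-sst-2-english",
    framework="pytorch",
    task="text-classification",
    vendor="aws",                      # pick the cloud provider...
    region="us-east-1",                # ...and the region
    type="protected",                  # security setting: requires a token
    accelerator="cpu",
    instance_size="x2",
    instance_type="intel-icl",
    min_replica=0,                     # autoscaling bounds
    max_replica=1,
)

endpoint.wait()  # block until the endpoint is provisioned and running
print(endpoint.client.text_classification("Deploying this took minutes."))
```

Since compute is billed by the hour, calling endpoint.delete() (or letting the endpoint scale to zero) when you are done stops the meter.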

“We were able to choose an off-the-shelf model that’s common for our customers to get started with, and set it so that it can be configured to handle over 100 requests per second with just a few button clicks,” said Gareth Jones, senior product manager at Pinecone, a company using Hugging Face’s new offering. “With the release of the Hugging Face Inference Endpoints, we believe there’s a new standard for how easy it can be to build your first vector embedding-based solution, whether it be a semantic search or question-answering system.”
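To make the vector-embedding use case Jones describes concrete, here is a minimal sketch that queries a deployed sentence-embedding endpoint over HTTPS and ranks documents by cosine similarity. The endpoint URL and token are placeholders, and the payload shape assumes the usual Inference Endpoints convention of posting a JSON body with an "inputs" field.

```python
import numpy as np
import requests

# Placeholders: substitute your own endpoint URL and access token.
ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"
HEADERS = {"Authorization": "Bearer hf_xxx", "Content-Type": "application/json"}

def embed(texts):
    """Send texts to a sentence-embedding endpoint; one vector per text comes back."""
    resp = requests.post(ENDPOINT_URL, headers=HEADERS, json={"inputs": texts})
    resp.raise_for_status()
    return np.array(resp.json())

docs = ["How do I reset my password?", "Shipping takes 3-5 business days."]
doc_vecs = embed(docs)
query_vec = embed(["I forgot my login credentials"])[0]

# Cosine similarity ranks every document against the query.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(docs[int(np.argmax(scores))])  # expected: the password-reset document
```

The same vectors could be upserted into a vector database such as Pinecone rather than compared in memory; the in-memory version just keeps the sketch self-contained.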

Hugging Face started its life as a chatbot and aims to become the GitHub of machine learning. Today, the platform offers 100,000 pre-trained models and 10,000 datasets for natural language processing (NLP), computer vision, speech, time-series, biology, reinforcement learning, chemistry and more.

With the launch of Inference Endpoints, the company hopes to bolster the adoption of the latest AI models in production among companies of all sizes.

“What is really novel and aligned with our mission as a company is that with Inference Endpoints even the smallest startup with no prior machine learning experience can bring the latest advancements in AI into their app or service,” said Boudier. 



Co-op adventure game ‘It Takes Two’ hits Switch on November 4th

It Takes Two was a breakout hit when it came out in 2021, and now the cooperative adventure game is coming to a fresh platform. It Takes Two is due to hit Switch on November 4th for $40, and pre-orders are open today. The game will take advantage of the Friend’s Pass feature from developer Hazelight and publisher EA, unlocking co-op play even if one person doesn’t own the game.

It Takes Two is a distinctly two-player experience, and on Switch it’ll be playable three ways: in couch co-op mode, with two Switches over a local wireless network, or with a friend online. It’s not playable cross-platform. The Friend’s Pass feature is already a thing for PC and console versions of It Takes Two, and it allows someone who doesn’t own the game to play with someone who does.

The Switch port was handled by Turn Me Up Games, the studio that brought Tony Hawk’s Pro Skater and the Borderlands: Legendary Collection to Nintendo’s latest console.

It Takes Two is also getting the silver-screen treatment, though its storyline is arguably the most distressing part of the game. Amazon Studios is adapting it into a movie, with The Rock rumored as a potential star.



Move over, Apple — Camo’s update takes on Continuity Camera

Reincubate Camo has come out swinging against Apple’s Continuity Camera technology with a slew of new controls you won’t find anywhere else. These include variable frame rates, intelligent zoom technology, and video stabilization improvements. Many of these go well beyond anything Apple offers in MacOS Ventura.

You may have heard of the Camo app. It lets you use your iPhone as a Mac webcam and has been a popular piece of software since its release in 2020. Apple may have borrowed a few of Camo’s key concepts when it showed off Continuity Camera at WWDC 2022 in June. Undeterred, Reincubate, the company behind Camo, wants to differentiate itself from Apple’s more basic tech. Update 1.8 gives you what Apple does not.


For starters, there’s a new variable frame rate setting that lets you adjust frame rates between 15 fps and 60 fps. Reincubate claims it is perfect for capturing smooth footage for streaming on YouTube and Twitch. You’ll see a new frame rate drop-down in the camera settings. Apple’s Continuity Camera does not let you change your frame rate, instead relying on the iPhone’s hardware to set it.

You’ll also get Smart Zoom with the Camo 1.8 update. This limits the amount of digital zoom when possible, so you can zoom and crop your scenes without losing image quality. Reincubate describes Smart Zoom in a blog post: “When cropping out part of a scene, Camo will now avoid using digital zoom wherever possible, and instead rely on a higher resolution source image from the camera’s sensor, mimicking lossless optical zoom.”
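To make the distinction concrete, here is a rough conceptual sketch in Python with OpenCV. This is not Camo’s actual implementation, and the file name and resolutions are assumptions; the point is that cropping the full-resolution sensor frame preserves detail that a crop-then-upscale digital zoom throws away.

```python
import cv2

# Conceptual illustration only, not Camo's code. Assume a 3840x2160 sensor
# frame on disk and a 1920x1080 output, zooming into the center half of it.
sensor = cv2.imread("sensor_frame_4k.jpg")  # hypothetical 3840x2160 capture
h, w = sensor.shape[:2]

# Digital zoom: start from a frame already downscaled to output size,
# crop the center half, then upscale 2x. Detail is lost in the upscale.
preview = cv2.resize(sensor, (1920, 1080))
digital = cv2.resize(preview[270:810, 480:1440], (1920, 1080))

# "Smart" zoom: crop the same region from the full-resolution source. The
# crop is already 1920x1080 pixels, so no upscaling is needed.
smart = sensor[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

cv2.imwrite("digital_zoom.jpg", digital)
cv2.imwrite("smart_zoom.jpg", smart)
```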

Video stabilization is another feature in the update, smoothing the image from shaky cameras without sacrificing image quality. A standing desk that wobbles as you type, or a laptop on an unsteady surface, can make video jump around wildly; image stabilization compensates for the shaking and keeps your face centered in the frame. Your coworkers won’t even know you have a mechanical keyboard!

Finally, Reincubate gives you a way to control the vibrancy of your image in a live stream. Vibrancy Control can enliven or darken your image, which is great for creating a more subtle atmosphere around your face. The update also lets you fine-tune lighting, white balance, and other granular adjustments you simply won’t find in Apple’s offering.

Updates such as Camo 1.8 are what give third-party apps an edge over Apple’s own offerings. You can check out our list of the best third-party apps for your Mac.



Teradata takes on Snowflake and Databricks with cloud-native platform



Database analytics giant Teradata has announced cloud-native database and analytics support. Teradata already had a cloud offering that ran on infrastructure-as-a-service (IaaS) platforms, enabling enterprises to run workloads across cloud and on-premises servers. The new service supports software-as-a-service (SaaS) deployment models that will help Teradata compete with companies like Snowflake and Databricks.

The company is launching two new cloud-native offerings. VantageCloud Lake extends the Teradata Vantage data lake to a more elastic cloud deployment model. Teradata ClearScape Analytics helps enterprises take advantage of new analytics, machine learning and artificial intelligence (AI) development workloads in the cloud. The combination of cloud-native database and analytics promises to streamline data science workflows, support ModelOps and improve reuse from within a single platform. 

Teradata was an early leader in advanced data analytics capabilities that grew out of a collaboration between the California Institute of Technology and Citibank in the late 1970s. The company optimized techniques for scaling analytics workloads across multiple servers running in parallel. Scaling across servers provided superior cost and performance properties compared to other approaches that required bigger servers. The company rolled out data warehousing and analytics on an as-a-service basis in 2011 with the introduction of the Teradata Vantage connected multicloud data platform.

“Our newest offerings are the culmination of Teradata’s three-year journey to create a new paradigm for analytics, one where superior performance, agility and value all go hand-in-hand to provide insight for every level of an organization,” said Hillary Ashton, chief product officer of Teradata.


Cloud-native competition

Teradata’s first cloud offerings ran on specially configured servers on cloud infrastructure. This allowed enterprises to scale applications and data across on-premise and cloud servers. However, the data and analytics scaled at the server level. If an enterprise needed more compute or storage, it had to provision more servers. 

This created an opening for new cloud data storage startups like Snowflake to take advantage of new architectures built on containers, meshes and orchestration techniques for more dynamic infrastructure. Enterprises took advantage of the latest cloud tooling to roll out new analytics at high speed. For example, Capital One rolled out 450 new analytics use cases after moving to Snowflake. 

Although these cloud-native competitors improved many aspects of scalability and flexibility, they lacked some aspects of governance and financial controls baked into legacy platforms. For example, after Capital One moved to the cloud, it had to develop an internal governance and management tier to enforce cost controls. Capital One also created a framework to streamline the user analytics journey by incorporating content management, project management and communication within a single tool. 

Old meets new

This is where the new Teradata offerings promise to shine, combining the kinds of architectures pioneered by cloud-native startups with the governance, cost controls and simplicity of a consolidated offering.

“Snowflake and Databricks are no longer the only answer for smaller data and analytics workloads, especially in larger organizations where shadow systems are a significant and growing issue, and scale may play into workloads management concerns,” Ashton said. 

The new offering also takes advantage of Teradata’s R&D into smart scaling, allowing users to scale based on actual resource utilization rather than simple static metrics. It also promises a lower total cost of ownership and direct support for more kinds of analytics processing. For example, ClearScape Analytics includes a query fabric, governance and financial visibility. This also promises to simplify predictive and prescriptive analytics.

ClearScape Analytics includes in-database time series functions that streamline the entire analytics lifecycle, from data transformation and statistical hypothesis tests to feature engineering and machine learning modeling. These capabilities are built directly into the database, improving performance and eliminating the need to move data. This can help reduce the cost and friction of analyzing a large volume of data from millions of product sales or IoT sensors. Data scientists can code analytics functions into prebuilt components that can be reused by other analytics, machine learning, or AI workloads. For example, a manufacturer could create an anomaly detection algorithm to improve predictive maintenance. 
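The appeal of in-database analytics is that the computation runs where the data lives. As a rough illustration of that idea (using plain SQL window functions, not any specific ClearScape function), the sketch below flags anomalous sensor readings through the teradatasql Python driver; the connection details, table, and column names are hypothetical.

```python
import teradatasql

# Illustrative only: connection details, table and column names are made up.
# The anomaly test runs inside the warehouse, so the raw readings never move.
with teradatasql.connect(host="tdhost", user="demo", password="demo") as con:
    cur = con.cursor()
    cur.execute("""
        SELECT sensor_id, reading_ts, reading,
               AVG(reading) OVER (PARTITION BY sensor_id ORDER BY reading_ts
                   ROWS BETWEEN 59 PRECEDING AND CURRENT ROW) AS rolling_avg,
               STDDEV_SAMP(reading) OVER (PARTITION BY sensor_id ORDER BY reading_ts
                   ROWS BETWEEN 59 PRECEDING AND CURRENT ROW) AS rolling_sd
        FROM iot_readings
        QUALIFY ABS(reading - rolling_avg) > 3 * rolling_sd
    """)
    for sensor_id, ts, value, avg, sd in cur.fetchall():
        print(f"anomaly: sensor {sensor_id} at {ts}: {value} (avg {avg:.1f})")
```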

Predictive models require more exploratory analysis and experimentation, and despite the investment in tools and time, most predictive models never make it into production, said Ashton. New ModelOps capabilities include support for auditing datasets, code tracking, model approval workflows, monitoring model performance and alerting when models underperform. This can help teams schedule retraining when models start to lose accuracy or show bias.
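Teradata’s ModelOps interface itself isn’t shown here, but the monitoring logic it describes can be sketched generically: score the model on a window of fresh labeled data and flag it for retraining when accuracy drops below a threshold. The threshold value and the scheduler hook below are assumptions for illustration.

```python
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumed service-level threshold, not a Teradata default

def needs_retraining(model, recent_features, recent_labels):
    """True when accuracy on a window of fresh labeled data falls below the floor."""
    accuracy = accuracy_score(recent_labels, model.predict(recent_features))
    return accuracy < ACCURACY_FLOOR

# Typical use inside a scheduled monitoring job (the hook below is hypothetical):
# if needs_retraining(model, X_recent, y_recent):
#     schedule_retrain(model_id="churn-v3")
```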

“What sets Teradata apart is that it can serve as a one-stop shop for enterprise-grade analytics, meaning companies don’t have to move their data,” Ashton said. “They can simply deploy and operationalize advanced analytics at scale via one platform.”

Ultimately, it is up to the market to decide if these new capabilities will allow the legacy data pioneer to keep pace or even gain an edge against new cloud data startups. 



Intel VP talks AI strategy as company takes on Nvidia



Intel is on an artificial intelligence (AI) mission that it considers very, very possible.  

The company is the world’s largest semiconductor chip manufacturer by revenue, and is best known for its CPU market dominance with its familiar “Intel Inside” campaign, reminding us all what resided inside our personal computers. However, in an age when AI chips are all the rage, the company finds itself chasing competitors, most notably Nvidia, which has a massive head start in AI processing with its GPUs.

There are significant benefits to catching up in this space. According to a report, the AI chip market was worth around $8 billion in 2020, but is expected to grow to nearly $200 billion by 2030. 

At Intel’s Vision event in May, the company’s new CEO, Pat Gelsinger, highlighted AI as central to the company’s future products, while predicting that AI’s need for higher performance levels of compute makes it a key driver for Intel’s overall strategy. 


Gelsinger said he envisioned four superpowers that spur innovation at Intel: pervasive connectivity, ubiquitous compute, AI and cloud-to-edge infrastructure.

That requires high-performance hardware-plus-software systems, including in tools and frameworks used to implement end-to-end AI and data pipelines. As a result, Intel’s strategy is “to build a stable of chips and open-source software that covers a broad range of computing needs as AI becomes more prevalent,” a recent Wall Street Journal article noted. 

“Each of these superpowers is impressive on its own, but when they come together, that’s magic,” Gelsinger said at the Vision event. “If you’re not applying AI to every one of your business processes, you’re falling behind. We’re seeing this across every industry.”

It is in that context that VentureBeat spoke recently with Wei Li, vice president and general manager of AI and analytics at Intel. He is responsible for AI and analytics software and hardware acceleration for deep learning, machine learning and big data analytics on Intel CPUs, GPUs, AI accelerators and XPUs with heterogeneous and distributed computing. 

Intel’s software and hardware connection

According to Li, it is Intel’s strong connection between software and hardware that makes the company stand out and ready to compete in the AI space. 

“The biggest problem we’re trying to solve is creating a bridge between data and insights,” he said. “The bridge needs to be wide enough to handle a lot of traffic, and the traffic needs to have speed and not get stuck.” 

That means AI needs software to perform efficiently and fast, with an entire ecosystem that enables data scientists to take large amounts of data and devise solutions, as well as hardware acceleration that provides the capacity to process the data efficiently. 

“On the hardware side, when we add specific acceleration inside hardware, we need to know what we accelerate,” Li said. “So we are doing a lot of co-design, where the software team works with the hardware team very closely.”

The two groups operate almost like a single team, he added, to understand the models, discover performance bottlenecks and to add hardware capacity. 

“It’s an outside-in approach, a tightly integrated co-design, to make sure the hardware is designed the right way,” he said, adding that the original GPU was not designed for AI but happened to have the right amount of compute and bandwidth. Since then, GPUs have evolved. 

“When we design GPUs nowadays, we look at AI as an important workload to drive the GPU design,” he said. “There are specific features inside the GPU that are only for AI. That is the advantage of being in a company where we have both software and hardware teams.” 

Intel’s goal is to scale its AI efforts, said Li, which he maintained is about developing an ecosystem rather than separate solutions. 

“It’s going to be how we lead and nurture an open AI software ecosystem,” he explained. “Intel has always been an open ecosystem that enables competition, which allows Intel’s technologies to get to market more quickly at scale.” 

Intel’s trained AI reference kits increase speed

Historically, Intel has done a lot of work on the software capacity side to get better performance – basically increasing the width of the bridge between data and insights. 

Last month, Intel released trained AI reference kits to the open-source community, which Li said is one of the steps the company is taking to increase the speed of crossing the bridge. 

“Traditionally, AI software was designed for specialists, for the most part,” he said. “But we want to target a much broader set of developers.” 

The AI models in the reference kits were designed, trained and tested for specific use cases, selected from among thousands of candidate models, and data scientists can customize and fine-tune them with their own data.

“You get a combination of ease of use because you’re starting from something almost pre-cooked, plus you get all the optimized software as part of the package so you can get your solution quickly,” Li explained. 
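One way to picture the “pre-cooked model plus optimized software” combination Li describes is with Intel’s Extension for PyTorch, which routes an off-the-shelf model’s inference through CPU-optimized kernels. This is a generic illustration rather than code from the reference kits themselves, and the model choice is an arbitrary example.

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Generic illustration, not a reference kit: load a pretrained model, then let
# Intel's PyTorch extension apply oneDNN-backed operator optimizations.
MODEL = "distilbert-base-uncased-finetuned-sst-2-english"  # arbitrary example
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

model.eval()
model = ipex.optimize(model)  # CPU-side operator fusion and layout tuning

inputs = tokenizer("A wide bridge between data and insights", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```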

Priorities over the next year 

In the coming year, one of Intel’s biggest AI priorities is on the software side.

“We will be spending more effort focusing on ease of use,” Li said. 

On the hardware side, he added, new products will focus heavily on performance, including the Sapphire Rapids Xeon server processor that will be released in 2023.

“It’s like a CPU with a GPU embedded inside because of the amount of compute capabilities you have,” said Li. “It’s a game changer to have all the acceleration inside the GPU.” 

In addition, Intel is focusing on the performance of its datacenter GPU, working with its customer Argonne National Laboratory, which in turn serves its own scientists and developers.

Biggest Intel AI challenges

Li said the biggest challenge his team faces is executing on Intel’s AI vision. 

“We really want to make sure we execute well so we can deliver on the right schedule and make sure we run fast,” he said. “We want to have a torrid pace, which is not easy as a big company.” 

However, Li does not blame external factors, such as the economy or inflation, for the challenges Intel faces.

“Everybody has headwinds, but I want to make sure we do the best we can with the things we have control over as a team,” he said. “So I’m pretty optimistic, particularly in the AI domain. It’s like it’s back to my graduate student days – you can really think big. Anything is possible.” 



Amazon iRobot play takes ambient intelligence efforts to next level



What is Amazon’s $1.7 billion acquisition of iRobot, the maker of the popular Roomba vacuum cleaner, really about?

At Amazon’s Alexa Live 2022 event in late July, there were clues when the company outlined its general strategy for enabling ambient intelligence – or making AI-powered technology available without the need for users to learn how to operate a service. 

“Some companies have a vision for technology that’s rooted in phone apps or in a VR headset,” Aaron Rubenson, VP of Amazon Alexa, told VentureBeat in July. “Our goal is to build technology that allows customers to spend more time looking at the world and interacting with people.” 

At that time, just a few weeks before announcing its acquisition of iRobot, Rubenson used Roomba as an example of ambient intelligence. 

Alexa got its start, he explained, by responding to users uttering a voice command to do something. The modern Alexa service goes beyond that, anticipating what a user might want through hunches and then acting on those hunches with routines.

One example of this concept, he added, is robotic vacuum maker iRobot, which analyzes usage patterns to recommend a routine that optimizes the cleaning process.

“One of the hallmarks of ambient intelligence is that it’s proactive,” Rubenson said. 

Given those comments, it’s no surprise that many believe Amazon’s vision of moving toward ambient intelligence is at the heart of the iRobot acquisition, and maintain that the same is true of previous acquisitions, including the video doorbell company Ring.

Roomba creates maps of homes

“Roomba creates a map of your internal space, which it kind of has to do in order to do its job,” said Ben Winters, counsel at the Electronic Privacy Information Center (EPIC) and leader of EPIC’s AI and Human Rights Project. “If you think of that in combination with a Ring doorbell, spending habits, home network activity, your Whole Foods order, it’s this next level of having an ability to know everything about not just what your habits are, but about your home.” 

According to Brad Porter, former Amazon VP of robotics and cofounder of Collaborative Robotics, the iRobot deal is less about the data Amazon might gain and more about its focus on robotics and consumer devices.

“While other big tech companies are heavily focused on the metaverse, Amazon is deeply focused on physical interaction with the real world through robotics – the handful of robotics investments in other big tech companies seem far less focused,” he said. “Amazon’s experience deploying robotics in the real world is already a competitive advantage that will likely increase with this acquisition.” 

Ethical AI issues around ambient intelligence

Some experts cite ethical issues around the drive toward ambient intelligence. Triveni Gandhi, responsible AI lead at AI platform Dataiku, said the constant “listening” and monitoring required by devices for data collection isn’t always clearly communicated and rarely offered as an “opt-in” choice. 

“How is this data stored securely, who has access to it, how is it, in turn, used to train and build other unrelated models or products?” she said. “The answers to these questions are often hard to find, and in fact many users unknowingly turn over the rights to their data without understanding the full ramifications of that.”

The second issue, Gandhi continued, is that ambient intelligence, especially in the enterprise, can create a feedback cycle that might prevent innovative approaches to new problems. 

“Automating background tasks is a useful aspect of everyday AI, but it works even more effectively when it is subject to monitoring and retraining,” she said. Ambient AI, while promising, “may create blind spots based on existing biases in data, which is why human oversight and assessment of model outputs is important,” she explained.

Amazon, iRobot and data privacy

However, Porter pointed out that large tech companies have more to lose if they lose customer trust around data privacy and, as a result, they have stronger, more mature safeguards around data privacy and data protection. 

“Without knowing the current quality of the safeguards iRobot has in place, but just based on Amazon’s typical pattern of post-acquisition investment, I expect Amazon will invest significantly in further strengthening the safeguards around any customer data iRobot collects,” he said. 

But EPIC’s Winters wasn’t so sure about Amazon’s data privacy efforts in the wake of the iRobot acquisition. 

“They know that they are creepy-sounding to a lot of people and they know they aren’t going to win those people over,” he said. “I don’t think they care that much about the sort of general consumer sentiment from people that are really concerned about Amazon – obviously you could throw out your Roomba, but the more of these acquisitions [related to] people living their lives, the less choice people have and the more different things [Amazon] could do with [them].” 



Twitter takes one more step toward giving us an edit button

Twitter is apparently working on a new tweet embed feature that indicates whether or not an embedded tweet has been edited, taking us one step closer to actually getting a proper edit button.

On Monday, Jane Manchun Wong tweeted a screenshot of the in-progress tweet embed feature. The screenshot features two versions of the same embedded tweet.

Embedded Tweets will show whether it’s been edited, or whether there’s a new version of the Tweet

When a site embeds a Tweet and it gets edited, the embed doesn’t just show the new version (replacing the old one). Instead, it shows an indicator there’s a new version pic.twitter.com/mAz5tOiyOl

— Jane Manchun Wong (@wongmjane) August 1, 2022

The tweet at the top of Wong’s screenshot appears to be a corrected, edited version of the tweet below it. The top embed (the edited tweet) features a message that says “Last edited 6:30 PM · Aug 1, 2022” right under the tweet’s text. The embed at the bottom of the screenshot appears to be the original tweet, containing a typo that is corrected in the version above, and it features a different message: “There’s a new version of this Tweet.”

Essentially, as Wong notes in the above tweet and in a later reply tweet (see below), the tweet embed feature Twitter is working on is a way for edited tweets to remain transparent about the changes made to them. That way, if a website does embed someone’s tweet in an article and that tweet gets edited later by the tweet author, the tweet embed won’t just automatically morph into the newly edited version of the tweet without context.

Readers could still see the original tweet with an “indicator,” as Wong puts it, that informs them that a newer version of the tweet exists. Or they could see an edited tweet with an indicator that notes when it was last edited.

It’s for the best — so that the Tweet author won’t be able to “rug pull” the site embedding the Tweet with something completely different

— Jane Manchun Wong (@wongmjane) August 1, 2022

This tweet embed feature (if Twitter ends up rolling it out for everyone) looks like a decisive answer to some of the concerns about the bird app finally getting an edit button: What happens if changes are made to a tweet that alters its meaning? How can we have the freedom to correct our tweets but also remain transparent about the changes that have been made? This tweet embed feature seems to answer those questions and so far it looks like a good answer.



iOS 16’s new Lockdown Mode takes iPhone security to the max

Apple has introduced an extra layer of security coming to iOS 16, called Lockdown Mode. The Cupertino, California-based company announced the new extreme cybersecurity feature on July 6 with the aim of protecting people at risk of being attacked by targeted mercenary spyware.

Lockdown Mode is an optional feature that not every iPhone user will need, but would most likely be used by politicians, activists, celebrities, and other public figures who fear they’re being targeted by spyware created by private companies. This includes the likes of NSO Group, which was sued last fall for using Pegasus to hack the phones of political figures worldwide — including the widow of the late Saudi dissident journalist Jamal Khashoggi and the prime minister of Spain, as well as dozens of journalists.

“While the vast majority of users will never be the victims of highly targeted cyberattacks, we will work tirelessly to protect the small number of users who are,” said Ivan Krstić, Apple’s head of Security Engineering and Architecture. “That includes continuing to design defenses specifically for these users, as well as supporting researchers and organizations around the world doing critically important work in exposing mercenary companies that create these digital attacks.”

When Lockdown Mode is enabled, it limits the iPhone’s functionality to harden it against attacks. It blocks most message attachment types other than images, disables link previews, blocks FaceTime calls from unknown contacts, and prevents wired connections to a computer or accessory while the iPhone is locked, among other things.

Apple is also making a $10 million grant to the Dignity and Justice Fund to bolster research into cybersecurity, as well as into investigating and preventing highly targeted cyberattacks. Additional research money will come from any damages awarded in the ongoing lawsuit against NSO Group.



Amazon takes on PS5 and Xbox scalpers with a new invite system

Amazon is trying to fend off the scalpers and bots that snag all of the PS5 and Xbox consoles before you can secure one. It’s rolling out an invite-based ordering option for high-demand products that are in low supply to help legitimate shoppers get their hands on the items.

The invite option is available now for the PS5 in the US and will be enabled for the Xbox Series X in the coming days. The company told Engadget it plans to use the system for more products and in other countries.

Requesting an invitation doesn’t cost anything and you don’t need to be a Prime member. When you visit the console’s product page, there’s now a “request invitation” button; you may need to click the “new and used” link to see it alongside the other ordering options.


Amazon will assess whether an account that requests an invitation is authentic by looking at things like the account creation date and purchase history. If it believes you are, indeed, a human and your invite request is granted, Amazon will send you an email with instructions on how to buy the product.

You’ll have a certain time period in which to complete your purchase before the invite expires and you’ll see a countdown on the product page. Amazon will dish out more invites for a hot-ticket item as it receives more stock.

Sony has a similar invite system on its own PlayStation Direct store, where it sells the PS5 in limited quantities. It would have been nice if Amazon had implemented its version before the consoles arrived in November 2020, but c’est la vie.

Although requesting an invitation won’t guarantee that you’ll be able to buy a PS5 or Xbox Series X from Amazon, it could help. What’s more, it might mean you don’t have to participate in the rush to secure one whenever there’s a restock.

There are other ways Amazon could fend off scalpers too, such as limiting the price of items for Marketplace sellers. At the time of writing, a third-party seller is offering the PS5 disc version on Amazon for $999 — double the console’s retail price. Others are selling it for around $800.



The EU takes aim at MagSafe and Surface Connect

The European Parliament, the European Council, and member nations have voted to make USB-C the standard charging port on most electronic devices, including phones, tablets, and cameras, by the fall of 2024. The move is meant to reduce electronic waste and cut the hassle of buying new devices, but it could pose problems for Apple’s and Microsoft’s charging technologies.

Per the vote and legislation, the EU says customers will “no longer need to buy a different charging device and cable each time they purchase a new device.” This addresses a common problem when switching between iPhone and Android devices, or even budget laptops with different types of charging ports. EU residents will instead enjoy a one-cable-fits-all approach with USB-C.

You’re probably familiar with how this settles the controversy over Lightning ports on iPhones, but on the computing side the story goes a bit deeper and leaves a question to be answered. For computers specifically, the ruling gives laptop brands 40 months to make the change over to USB-C. Yet Apple and Microsoft have been pushing their own proprietary charging solutions on new laptops and tablets.

Apple brought back the magnetic MagSafe charging port on 2021 MacBooks and even on the new MacBook Air M2 model. Microsoft, meanwhile, has the Surface Connect port, which is used to fast-charge tablets like the Surface Pro 8 and to charge power-hungry devices like the Surface Laptop Studio beyond the typical wattage that USB-C PD chargers offer.

The EU ruling might mean that Microsoft and Apple could have to stop offering these chargers out of the box on these devices in favor of USB-C. But there’s a catch.

You can already charge a Surface via USB-C, and you can do so with the MacBook, too. So the ruling might not exactly spell the end of MagSafe and Surface Connect. Instead, it could mean that these chargers become optional extras while USB-C becomes the new normal. Take this message from page 7 of the ruling as an example.

“As regards consumer convenience, the preferred option will ensure interoperability through a common interface and charging performance, reducing sales of standalone EPS and cables and promoting their reuse.”

Apple and Microsoft may already have a head start as the EU moves to make USB-C charging the norm on new devices. But look out, MagSafe and Surface Connect: the EU certainly has its eye on you.
