The Zoom installer let a researcher hack his way to root access on macOS

A security researcher has found a way that an attacker could leverage the macOS version of Zoom to gain control over the entire operating system.

Details of the exploit were released in a presentation given by Mac security specialist Patrick Wardle at the Def Con hacking conference in Las Vegas on Friday. Some of the bugs involved have already been fixed by Zoom, but the researcher also presented one unpatched vulnerability that still affects systems now.

The exploit works by targeting the installer for the Zoom application, which needs to run with special user permissions in order to install or remove the main Zoom application from a computer. Though the installer requires a user to enter their password on first adding the application to the system, Wardle found that an auto-update function then continually ran in the background with superuser privileges.

When Zoom issued an update, the updater function would install the new package after checking that it had been cryptographically signed by Zoom. But a bug in how the checking method was implemented meant that giving the updater any file with the same name as Zoom’s signing certificate would be enough to pass the test — so an attacker could substitute any kind of malware program and have it be run by the updater with elevated privilege.
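To illustrate the class of bug described here, below is a minimal Python sketch of a name-based "signature check" and how it can be bypassed. The function names and tool output are hypothetical, for illustration only; they are not Zoom's actual code, which has not been published.

```python
import re

TRUSTED_SIGNER = "Zoom Video Communications, Inc."

def naive_signature_check(tool_output: str) -> bool:
    # Flawed: searches the ENTIRE verification output for the signer's
    # name. Because the tool echoes the package's file path, a file whose
    # name contains the trusted string passes without a valid signature.
    return TRUSTED_SIGNER in tool_output

def strict_signature_check(tool_output: str) -> bool:
    # Safer: inspect only the certificate-chain lines, ignoring the
    # echoed file path entirely.
    for line in tool_output.splitlines():
        if re.match(r"\s*\d+\. Developer ID Installer: ", line):
            return TRUSTED_SIGNER in line
    return False

# Legitimate package: the signer name appears in the certificate chain.
good = ("Package 'Zoom.pkg':\n"
        "  1. Developer ID Installer: Zoom Video Communications, Inc.")
# Malicious package: unsigned, but the attacker embeds the trusted
# string in the filename itself.
evil = ("Package 'Zoom Video Communications, Inc..pkg':\n"
        "  status: no signature")

print(naive_signature_check(evil))   # True -- the naive check is fooled
print(strict_signature_check(evil))  # False -- the strict check is not
```

The fix is conceptually simple: validate the certificate chain itself rather than pattern-matching over output that includes attacker-controlled strings.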

The result is a privilege escalation attack, which assumes an attacker has already gained initial access to the target system and then employs an exploit to gain a higher level of access. In this case, the attacker begins with a restricted user account but escalates into the most powerful user type — known as a “superuser” or “root” — allowing them to add, remove, or modify any files on the machine.

Wardle is the founder of the Objective-See Foundation, a nonprofit that creates open-source security tools for macOS. Previously, at the Black Hat cybersecurity conference held in the same week as Def Con, Wardle detailed the unauthorized use of algorithms lifted from his open-source security software by for-profit companies.

Following responsible disclosure protocols, Wardle informed Zoom about the vulnerability in December of last year. To his frustration, he says an initial fix from Zoom contained another bug that meant the vulnerability was still exploitable in a slightly more roundabout way, so he disclosed this second bug to Zoom and waited eight months before publishing the research.

“To me that was kind of problematic because not only did I report the bugs to Zoom, I also reported mistakes and how to fix the code,” Wardle told The Verge in a call before the talk. “So it was really frustrating to wait, what, six, seven, eight months, knowing that all Mac versions of Zoom were sitting on users’ computers vulnerable.”

A few weeks before the Def Con event, Wardle says Zoom issued a patch that fixed the bugs that he had initially discovered. But on closer analysis, another small error meant the bug was still exploitable.

In the new version of the update installer, a package to be installed is first moved to a directory owned by the “root” user. Generally, this means that no user without root permission can add, remove, or modify files in that directory. But because of a subtlety of Unix systems (of which macOS is one), when an existing file is moved from another location into the root-owned directory, it retains the read-write permissions it previously had. So, in this case, it can still be modified by a regular user. And because it can be modified, a malicious user can still swap the contents of that file for a file of their own choosing and use it to become root.
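This permission-preserving behavior is easy to demonstrate. The sketch below uses temporary directories in place of a real root-owned directory (no root privileges are needed to run it), so it only shows the underlying mechanism: a file's mode bits survive a move, leaving it writable after it lands in its new home.

```python
import os
import shutil
import stat
import tempfile

# A "package" staged in a user-controlled location.
staging = tempfile.mkdtemp()
pkg = os.path.join(staging, "update.pkg")
with open(pkg, "w") as f:
    f.write("original update payload")
os.chmod(pkg, 0o666)  # world-writable, as a user-created file can be

# The installer moves the file into a privileged directory.
# (A temp dir stands in for the real root-owned directory here.)
secure_dir = tempfile.mkdtemp()
moved = shutil.move(pkg, secure_dir)

# The move preserves the file's original mode bits: it is still
# world-writable, so a regular user can rewrite its contents before
# the privileged installer executes it.
mode = stat.S_IMODE(os.stat(moved).st_mode)
print(oct(mode))  # 0o666 -- permissions survived the move
with open(moved, "w") as f:
    f.write("attacker-controlled payload")
```

The straightforward mitigation is for the privileged process to reset ownership and permissions (or copy rather than move) before trusting anything in the directory.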

While this bug is currently live in Zoom, Wardle says it’s very easy to fix and that he hopes that talking about it publicly will “grease the wheels” to have the company take care of it sooner rather than later.

In a statement to The Verge, Matt Nagel, Zoom’s security and privacy PR lead, said: “We are aware of the newly reported vulnerability in the Zoom auto updater for macOS and are working diligently to address it.”

Update August 12th, 11:09 PM ET: Article updated with response from Zoom.

Repost: Original Source and Author Link


Texas Instruments Blamed As Root of Global Chip Shortage


Explanations for the global chip shortage have been vague and varied, but a new report is shedding some light on the unexpected source of the shortage.

The report comes from DigiTimes, which claims that major tech companies are pointing to a singular cause for the global chip shortage — namely, the company Texas Instruments. That’s right, the company that makes the graphing calculators we all used in high school.

Texas Instruments does a lot more than make calculators these days. A quick look at the Texas Instruments website shows that it offers a gargantuan range of products, including PWM controllers, RGB LED display drivers, and more.

Most importantly, the company has made a name for itself as a manufacturer of analog chips, which help do things like regulate our computers’ voltage levels. These are small but vital parts of modern chip design that, according to some, are causing the holdup in production.

TSMC (Taiwan Semiconductor Manufacturing Company) is among the companies reported to be pointing the finger. TSMC is the world’s largest contract chipmaker, and it makes chips for companies like Apple, AMD, Nvidia, and many others outside the world of computing.

According to the report, TSMC says Texas Instruments, the market leader in these analog chips, is creating a bottleneck in providing access to these important components.

In a recent Reuters article covering the company’s quarterly earnings, the chief financial officer of Texas Instruments admitted that inventory levels were indeed low, causing the company to miss revenue goals.

While Texas Instruments’ production woes are likely not the sole cause of the global chip shortage, they are a microcosm of how delicate supply chains can be. No one wants to take the blame, of course, but clearly a production shortage of a single component can have wide-ranging effects on everything from the manufacturing of cars to your ability to buy an RTX graphics card or PlayStation 5 this holiday season.

Some of the biggest voices in the industry have claimed the chip shortage could last throughout 2022 and even well into 2023.




AI bias is prevalent but preventable — here’s how to root it out


The success of any AI application is intrinsically tied to its training data.  You don’t just need the right data quality and the right data volume; you also have to proactively ensure your AI engineers are not passing their own latent biases on to their creations. If engineers allow their own worldviews and assumptions to influence data sets — perhaps supplying data that is limited to only certain demographics or focal points — applications dependent on AI problem-solving will be similarly biased, inaccurate, and, well, not all that useful.

Simply put, we must continuously detect and eliminate human biases from AI applications for the technology to reach its potential. I expect bias scrutiny will only increase as AI continues its rapid transition from a relatively nascent technology into an utterly ubiquitous one, and human bias must be overcome for that transition to succeed. A 2018 Gartner report predicted that through 2030, 85% of AI projects will provide false results caused by bias built into the data or the algorithms, or present (latently or otherwise) in the teams managing those deployments. The stakes are high: faulty AI leads to serious reputational damage and costly failures for businesses that make decisions based on erroneous, AI-supplied conclusions.

Identifying AI bias

AI bias takes several forms. Cognitive biases originating from human developers influence machine learning models and training data sets; essentially, biases get hardcoded into algorithms. Incomplete data also produces biases, especially when information is omitted because of a cognitive bias. An AI trained and developed free from bias can still have its results tainted by deployment biases when put into action. Aggregation bias is another risk, occurring when small choices made across an AI project have a large collective impact on the integrity of results. In short, there are a lot of steps in any AI recipe where bias can get baked in.

Detecting and removing AI bias

To achieve trustworthy AI-dependent applications that will consistently yield accurate outputs across myriad use cases (and users), organizations need effective frameworks, toolkits, processes, and policies for recognizing and actively mitigating AI bias. Available open source tooling can assist in testing AI applications for specific biases, issues, and blind spots in data.

AI Frameworks. Frameworks designed to protect organizations from the risks of AI bias can introduce checks and balances that minimize undue influences throughout application development and deployment. Benchmarks for trusted, bias-free practices can be automated and ingrained into products using these frameworks.

Here are some examples:

  • The Aletheia Framework from Rolls-Royce provides a 32-step process for designing accurate and carefully managed AI applications.
  • Deloitte’s AI framework highlights six essential dimensions for implementing AI safeguards and ethical practices.
  • A framework from Naveen Joshi details cornerstone practices for developing trustworthy AI. It focuses on the need for explainability, machine learning integrity, conscious development, reproducibility, and smart regulations.

Toolkits. Organizations should also leverage available toolkits to recognize and eliminate bias present within machine learning models and identify bias patterns in machine learning pipelines. Here are some particularly useful toolkits:

  • AI Fairness 360 from IBM is an extensible (and open source) toolkit that enables examination, reporting, and mitigation of discrimination and bias in machine learning models.
  • IBM Watson OpenScale provides real-time bias detection and mitigation and enables detailed explainability to make AI predictions trusted and transparent.
  • Google’s What-If Tool offers visualization of machine learning model behavior, making it simple to test trained models against machine learning fairness metrics to root out bias.

Processes and policies. Organizations will likely need to introduce new processes purposely designed to remove bias from AI and increase trust in AI systems. These processes define bias metrics and regularly and thoroughly check data against those criteria. Policies should play a similar role, establishing governance to require strict practices and vigilant action in minimizing bias and addressing blind spots.
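As one concrete illustration of what such a process might check, here is a minimal sketch of a “disparate impact” metric compared against a policy threshold (the well-known 80% rule). The data, names, and threshold are hypothetical and for illustration only; they are not drawn from any particular framework or toolkit.

```python
def disparate_impact(outcomes, group_labels, protected_group):
    """Ratio of favorable-outcome rates: protected group vs. everyone else."""
    prot = [o for o, g in zip(outcomes, group_labels) if g == protected_group]
    rest = [o for o, g in zip(outcomes, group_labels) if g != protected_group]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(prot) / rate(rest) if rate(rest) else float("inf")

# Hypothetical model decisions (1 = favorable) and group membership.
outcomes = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

di = disparate_impact(outcomes, groups, protected_group="A")
POLICY_THRESHOLD = 0.8  # flag models that fall below the 80% rule
print(f"disparate impact: {di:.2f}, compliant: {di >= POLICY_THRESHOLD}")
# prints: disparate impact: 0.67, compliant: False
```

A process built around checks like this would run them automatically on every model release and block deployment (or trigger review) whenever a defined metric falls outside policy.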

Remember: AI trust is a business opportunity

Those taking steps to reduce bias in their AI systems can recharacterize this potential for crisis into an opportunity for competitive differentiation. Promoting anti-bias measures can set a business apart by establishing greater customer confidence and trust in AI applications. This is true today but will be even more so as AI proliferates. Transparency in the pursuit of unbiased AI is good for business.

Advanced new AI algorithms are bringing AI into new fields — from synthetic data generation to transfer learning, reinforcement learning, generative networks, and neural networks. Each of these exciting new applications will have their own susceptibility to bias influences, which must be addressed for these technologies to flourish.

With AI bias, the fault is not in the AI but in ourselves. Taking all available steps to remove human bias from AI enables organizations to produce applications that are more accurate, more effective, and more compelling to customers.

Shomron Jacob is Engineering Manager for Applied Machine Learning and AI at





Nintendo Switch Android root released – but why?

Today an unofficial root project was released for the Nintendo Switch, allowing the masses to access Android on the device. But why? Why would you load this sort of software when you have a Nintendo Switch in the first place? Today we’re taking a look at what it’ll take to access this software AND why you might want to do so – or avoid doing so.

The Nintendo Switch has been the subject of hack work since it was first released. When you have a device of such repute, the work is inevitable. It’s fun to attempt to dive into the guts of a device that’s played with by millions of users.

With the work done by XDA member bylaws (and associates), users can now access Android 10 (LineageOS 17.1 with Shield TV trees). They’ve made the process of initiating the hack on one’s own Nintendo Switch relatively easy, too. If you’d like to do so, head over to the XDA Developers forum at your own risk.

You may want to do this if you’re excited about running Android on your Nintendo Switch. Maybe you want to run Android TV on your device to access a load of games and apps not normally available on the Nintendo Switch. You’ll find optimized dock support, resolution scaling, an “optimized touch screen driver”, Shield TV remote app support, Hori Joy-Con support, OTA updates (for SHIELD, basically), and more.

BUT, know this: SHIELD-specific games do not work. The normal default keyboard cannot be used with a controller, sleep might not work at all, and some apps don’t like to work with the Joy-Con D-Pad whatsoever.

Basically all the problems of any unofficial hack are here, too. It’s a real pro/con sort of situation, and the worth of the hack will be based entirely on your specific set of wants and needs.

Of course, you may be able to play Angry Birds or a ROM of Tony Hawk’s Pro Skater if you really try hard – so who knows? Worth it? Let us know if you’re going to give it a try!
