Categories
Computing

The Next MacBook Air May Get White Bezels and Bright Colors

If you believe the latest leaks and renders, the next MacBook Air is about to take a lot of inspiration from the new iMac. The images come from leaker Jon Prosser’s Front Page Tech, and they show the new devices sporting the radical new design. According to Prosser, these are artist renderings he had made based on what he was shown by his own source.

The imagery shows several major changes to the current MacBook Air design, starting with your choice of one of seven available colors. The options appear to be Silver, Blue, Yellow, Orange, Pink, Purple, and Green. Sound familiar? Yep, these are the same options you get on the new iMacs. However, the depicted colors aren’t nearly as bold as what the iMac ended up sporting.

Before the iMac launched, Prosser leaked similar renders of its new color options. The actual colors of the iMac ended up being much bolder, with a different tone on the front side.

You don’t have to trust the source to see why these new color options are a likely possibility for the next MacBook Air. Apple rarely devises an entirely new design scheme without plans to roll it out on other products. Matching the MacBook Air with the 24-inch iMac feels like the right fit, especially if Apple is interested in distinguishing these products from the Pro-level options.

The bezel and keyboard color is also a major departure from the current MacBook Air. The renders show very thin, white bezels, as well as white keycaps. The imagery bears a lot of resemblance to the Razer Book 13, which features a silver chassis and white keyboard. The white bezels were one of the most divisive design elements of the new iMacs.

It’s common to see ultrathin bezels on future product renderings, and Prosser admits that the size of the bezels was largely a guess based on what he was shown. In this case, though, it wouldn’t be hard to believe. Other 13-inch laptops with 16:10 displays already feature bezels this thin, including the Dell XPS 13 and the Razer Book 13. The new iMac has a large bottom bezel, but that’s a design constraint of the all-in-one form factor, which forced Apple to stuff the components into the space below the screen to keep the back thin.

Of course, these renderings also show a much thinner chassis, which Prosser says is just tall enough to fit the USB-C port. Apple’s first M1 MacBook Air reused the same chassis as the Intel-based MacBook Air. That chassis was sized to hold fans even though the M1 model is completely fanless, so a thinner MacBook Air designed around the M1 is fairly believable. Prosser says the new MacBook Air, though, will come with the second-generation M2 chip.

We don’t know for sure when the next MacBook Air will come out. The next-generation M-chips have already gone into mass production, pointing to a launch in the next few months, possibly at the Worldwide Developers Conference in June, or perhaps in the fall.

The current MacBook Air was updated in October 2020 with the inclusion of the M1 chip, alongside the MacBook Pro and Mac Mini.


Categories
AI

Government audit of AI with ties to white supremacy finds no AI

In April 2020, news broke that Banjo CEO Damien Patton, once the subject of profiles by business journalists, was previously convicted of crimes committed with a white supremacist group. According to OneZero’s analysis of grand jury testimony and hate crime prosecution documents, Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee.

Amid growing public awareness about algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general’s office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.

“Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial Intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time,” reads a letter Utah State Auditor John Dougall released last week.

The incident, which VentureBeat previously referred to as part of a “fight for the soul of machine learning,” demonstrates why government officials must evaluate claims made by companies vying for contracts and how failure to do so can cost taxpayers millions of dollars. As the incident underlines, companies selling surveillance software can make false claims about their technologies’ capabilities or turn out to be charlatans or white supremacists, constituting a public nuisance or worse. The audit result also suggests that a lack of scrutiny can undermine public trust in AI and in the governments that deploy it.

Dougall carried out the audit with help from the Commission on Protecting Privacy and Preventing Discrimination, a group his office formed weeks after news of the company’s white supremacist associations and Utah state contract. Banjo had previously claimed that its Live Time technology could detect active shooter incidents, child abduction cases, and traffic accidents from video footage or social media activity. In the wake of the controversy, Banjo appointed a new CEO and rebranded under the name safeXai.

“The touted example of the system assisting in ‘solving’ a simulated child abduction was not validated by the AGO and was simply accepted based on Banjo’s representation. In other words, it would appear that the result could have been that of a skilled operator as Live Time lacked the advertised AI technology,” Dougall states in a seven-page letter sharing audit results.

According to Vice, which previously reported that Banjo used a secret company and fake apps to scrape data from social media, Banjo and Patton had gained support from politicians like U.S. Senator Mike Lee (R-UT) and Utah State Attorney General Sean Reyes. In a letter accompanying the audit, Reyes commended the results of the investigation and said the finding of no discrimination was consistent with the conclusion the state attorney general’s office reached because there simply wasn’t any AI to evaluate.

“The subsequent negative information that came out about Mr. Patton was contained in records that were sealed and/or would not have been available in a robust criminal background check,” Reyes said in a letter accompanying the audit findings. “Based on our first-hand experience and close observation, we are convinced the horrible mistakes of the founder’s youth never carried over in any malevolent way to Banjo, his other initiatives, attitudes, or character.”

Alongside those conclusions are a series of recommendations for Utah state agencies and employees involved in awarding such contracts. Advice for anyone considering AI contracts includes questions they should ask third-party vendors and the need to conduct an in-depth review of vendors’ claims and of the algorithms themselves.

“The government entity must have a plan to oversee the vendor and vendor’s solution to ensure the protection of privacy and the prevention of discrimination, especially as new features/capabilities are included,” reads one of the listed recommendations. Among other recommendations are the creation of a vulnerability reporting process and evaluation procedures, but no specifics were provided.

While some cities have put surveillance technology review processes in place, local and state adoption of private vendors’ surveillance technology is currently happening in a lot of places with little scrutiny. This lack of oversight could also become an issue for the federal government. The Government by Algorithm report Stanford University and New York University jointly published last year found that roughly half of algorithms used by federal government agencies come from third-party vendors.

The federal government is currently funding an initiative to create tech for public safety, like the kind Banjo claimed to have developed. The National Institute of Standards and Technology (NIST) routinely assesses the quality of facial recognition systems and has helped assess the role the federal government should play in creating industry standards. Last year, it introduced ASAPS, a competition in which the government is encouraging AI startups and researchers to create systems that can tell if an injured person needs an ambulance, whether the sight of smoke and flames requires a firefighter response, and whether police should be alerted in an altercation. These determinations would be based on a dataset incorporating data ranging from social media posts to 911 calls and camera footage. Such technology could save lives, but it could also lead to higher rates of contact with police, which can also cost lives. It could even fuel repressive surveillance states like the kind used in Xinjiang to identify and control Muslim minority groups like the Uyghurs.

Best practices for government procurement officers seeking contracts with third parties selling AI were introduced in 2018 by U.K. government officials, the World Economic Forum (WEF), and companies like Salesforce. Hailed as one of the first such guidelines in the world, the document recommends defining public benefit and risk and encourages open practices as a way to earn public trust.

“Without clear guidance on how to ensure accountability, transparency, and explainability, governments may fail in their responsibility to meet public expectations of both expert and democratic oversight of algorithmic decision-making and may inadvertently create new risks or harms,” the British-led report reads. The U.K. released official procurement guidelines in June 2020, but weeks later a grading algorithm scandal sparked widespread protests.

People concerned about the potential for things to go wrong have called on policymakers to implement additional legal safeguards. Last month, a group of current and former Google employees urged Congress to adopt strengthened whistleblower protections in order to give tech workers a way to speak out when AI poses a public harm. A week before that, the National Security Commission on Artificial Intelligence called on Congress to give federal government employees who work for agencies critical to national security a way to report misuse or inappropriate deployment of AI. That group also recommends tens of billions of dollars in investment to democratize AI and create an accredited university to train AI talent for government agencies.

In other developments at the intersection of algorithms and accountability, the documentary Coded Bias, which calls AI part of the battle for civil rights in the 21st century and examines government use of surveillance technology, started streaming on Netflix today.

Last year, the cities of Amsterdam and Helsinki created public algorithm registries so citizens know which government agency is responsible for deploying an algorithm and have a mechanism for accountability or reform if necessary. And as part of a 2019 symposium about common law in the age of AI, NYU professor of critical law Jason Schultz and AI Now Institute cofounder Kate Crawford called for businesses that work with government agencies to be treated as state actors and considered liable for harm the way government employees and agencies are.



Categories
AI

What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias

It’s a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

It’s not just Obama, either. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: “This image speaks volumes about the dangers of bias in AI.”

But what’s causing these outputs and what do they really tell us about AI bias?

First, we need to know a little bit about the technology being used here. The program generating these images is an algorithm called PULSE, which uses a technique known as upscaling to process visual data. Upscaling is like the “zoom and enhance” trope you see in TV and film, but, unlike in Hollywood, real software can’t just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.

In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you’re probably familiar with its work. It’s the algorithm responsible for making those eerily realistic human faces that you can see on websites like ThisPersonDoesNotExist.com; faces so realistic they’re often used to generate fake social media profiles.

A sample of faces created by StyleGAN, the algorithm that powers PULSE.
Image: The Verge

What PULSE does is use StyleGAN to “imagine” the high-res version of pixelated inputs. It does this not by “enhancing” the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user.

This means each low-resolution image can be upscaled in a variety of ways, the same way a single set of ingredients can make many different dishes. It’s also why you can use PULSE to see what Doom guy, or the hero of Wolfenstein 3D, or even the crying emoji look like at high resolution. It’s not that the algorithm is “finding” new detail in the image as in the “zoom and enhance” trope; it’s instead inventing new faces that, when downscaled, match the input data.
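
To make that mechanism concrete, below is a minimal, hypothetical sketch of the latent-search idea in PyTorch-style Python. The `generator` stands in for a pretrained StyleGAN-like model that maps a latent vector to a high-resolution face; the function name, loss, and hyperparameters are illustrative assumptions, not PULSE’s actual implementation.

```python
import torch
import torch.nn.functional as F

def pulse_style_upscale(low_res, generator, latent_dim=512, steps=500, lr=0.05):
    """Search a generator's latent space for a high-res face whose
    downscaled version matches the low-res input (illustrative sketch)."""
    # low_res: tensor of shape (1, 3, 32, 32) with values in [0, 1]
    latent = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        high_res = generator(latent)  # hypothetical model call, returns e.g. (1, 3, 1024, 1024)
        downscaled = F.interpolate(
            high_res, size=low_res.shape[-2:], mode="bilinear", align_corners=False
        )
        # The only constraint is agreement with the pixelated input, so many
        # different high-res faces are equally "correct" solutions.
        loss = F.mse_loss(downscaled, low_res)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return generator(latent)
```

Because the loss only measures agreement after downscaling, the kinds of faces the generator learned to produce dominate the result, which is how a model trained on a skewed dataset can turn a pixelated image of Obama into a white man.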

This sort of work has been theoretically possible for a few years now, but, as is often the case in the AI world, it reached a larger audience when an easy-to-run version of the code was shared online this weekend. That’s when the racial disparities started to leap out.

PULSE’s creators say the trend is clear: when used to scale up pixelated images, the algorithm more often generates faces with Caucasian features.

“It does appear that PULSE is producing white faces much more frequently than faces of people of color,” wrote the algorithm’s creators on Github. “This bias is likely inherited from the dataset StyleGAN was trained on […] though there could be other factors that we are unaware of.”

In other words, because of the data StyleGAN was trained on, when it’s trying to come up with a face that looks like the pixelated input image, it defaults to white features.

This problem is extremely common in machine learning, and it’s one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it’s white men who dominate AI research.

But exactly what the Obama example reveals about bias and how the problems it represents might be fixed are complicated questions. Indeed, they’re so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers.

On a technical level, some experts aren’t sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image, as shown below:

These faces were generated using “the same concept and the same StyleGAN model” but different search methods than PULSE, says Klingemann, who adds that we can’t really judge an algorithm from just a few samples. “There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally ‘correct,’” he told The Verge.

(Incidentally, this is also the reason why tools like this are unlikely to be of use for surveillance purposes. The faces created by these processes are imaginary and, as the above examples show, have little relation to the ground truth of the input. However, it’s not like huge technical flaws have stopped police from adopting technology in the past.)

But regardless of the cause, the outputs of the algorithm seem biased — something that the researchers didn’t notice before the tool became widely accessible. This speaks to a different and more pervasive sort of bias: one that operates on a social level.

Deborah Raji, a researcher in AI accountability, tells The Verge that this sort of bias is all too typical in the AI world. “Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems,” says Raji. “People of color are not outliers. We’re not ‘edge cases’ authors can just forget.”

The fact that some researchers seem keen to only address the data side of the bias problem is what sparked larger arguments about the Obama image. Facebook’s chief AI scientist Yann LeCun became a flashpoint for these conversations after tweeting a response to the image saying that “ML systems are biased when data is biased,” and adding that this sort of bias is a far more serious problem “in a deployed product than in an academic paper.” The implication being: let’s not worry too much about this particular example.

Many researchers, Raji among them, took issue with LeCun’s framing, pointing out that bias in AI is affected by wider social injustices and prejudices, and that simply using “correct” data does not deal with the larger injustices.

Others noted that even from the point of view of a purely technical fix, “fair” datasets can often be anything but. For example, a dataset of faces that accurately reflected the demographics of the UK would be predominantly white because the UK is predominantly white. An algorithm trained on this data would perform better on white faces than non-white faces. In other words, “fair” datasets can still create biased systems. (In a later thread on Twitter, LeCun acknowledged there were multiple causes for AI bias.)

Raji tells The Verge she was also surprised by LeCun’s suggestion that researchers should worry about bias less than engineers producing commercial systems, and that this reflected a lack of awareness at the very highest levels of the industry.

“Yann LeCun leads an industry lab known for working on many applied research problems that they regularly seek to productize,” says Raji. “I literally cannot understand how someone in that position doesn’t acknowledge the role that research has in setting up norms for engineering deployments.”

When contacted by The Verge about these comments, LeCun noted that he’d helped set up a number of groups, inside and outside of Facebook, that focus on AI fairness and safety, including the Partnership on AI. “I absolutely never, ever said or even hinted at the fact that research does not play a role is setting up norms,” he told The Verge.

Many commercial AI systems, though, are built directly from research data and algorithms without any adjustment for racial or gender disparities. Failing to address the problem of bias at the research stage just perpetuates existing problems.

In this sense, then, the value of the Obama image isn’t that it exposes a single flaw in a single algorithm; it’s that it communicates, at an intuitive level, the pervasive nature of AI bias. What it hides, however, is that the problem of bias goes far deeper than any dataset or algorithm. It’s a pervasive issue that requires much more than technical fixes.

As one researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm: “In case it needed to be said explicitly – This isn’t a call for ‘diversity’ in datasets or ‘improved accuracy’ in performance – it’s a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place.”

Update, Wednesday, June 24: This piece has been updated to include additional comment from Yann LeCun.




Categories
Security

White House now says 100 companies hit by SolarWinds hack, but more may be impacted

The US government has released updated figures on the number of companies and federal agencies it believes were impacted by the recent SolarWinds hack. “As of today, 9 federal agencies and about 100 private sector companies were compromised,” Deputy National Security Advisor Anne Neuberger said in a briefing, though she declined to name specific organizations. Although the hack was “likely of Russian origin,” Neuberger said the hackers launched their attack from inside the US.

The latest figures revealed are lower than the 250 federal agencies and businesses that were previously reported to have been infected, though Neuberger cautioned that the investigation is still in its “beginning stages” and that “additional compromises” may be found. In particular, the compromise of technology companies gives hackers potential footholds for future attacks. Up to 18,000 SolarWinds customers are thought to have originally received the malicious code, though hackers did not attempt to gain additional access to all of them.

The hack originally came to light late last year, when it emerged that hackers had compromised SolarWinds’ monitoring and management software, which is used by multiple government agencies and Fortune 500 companies, Bloomberg notes. Companies including Intel, Nvidia, Cisco, Belkin, and VMware have all reportedly seen computers on their networks infected, as well as the US Treasury, Commerce, State, Energy, and Homeland Security departments.

The scale of the attack means that it may be many months before the government completes its investigation. As part of the process, Neuberger said the government is planning an executive action to fix the security problems identified, and that “discussions are underway” about how to respond to the perpetrator.


Categories
Tech News

White House plans podcast-like weekly chats with President Biden

The Biden administration is bringing back weekly addresses from the president, but with a twist that may appeal to modern, younger audiences. According to the White House, the new weekly chats will have an informal podcast-like style, mimicking the sort of casual chats the public is used to hearing in popular audio shows.

On Saturday, the White House published the first of Biden’s planned weekly chats, shared in a video on its YouTube channel. White House Press Secretary Jen Psaki said that these chats will include ‘a variety of formats,’ some of them traditional presidential addresses, others more casual conversations with everyday Americans selected ahead of time.

In the first weekly chat (above), President Biden spoke with Californian Michele Voelkert about her struggles to get unemployment after getting laid off last year, as well as the effort to find a new job. The conversation also included talk about online school, which has replaced traditional schooling during the pandemic.

The idea behind these new digital, online weekly chats is that the average person will be able to engage with the content using the platforms they’re used to. The Biden team embraced digital and alternative formats over traditional methods due to the pandemic; it makes sense that the administration would continue with this more modern alternative.

Weekly presidential addresses have been something of a tradition, but an inconsistent one, with some presidents regularly engaging with the populace in this way and others abandoning it. President Obama was the most recent president to regularly conduct weekly addresses, a practice that persisted for only a short time during Trump’s term.
