
Judge says Apple may be ‘stretching the truth’ on Mac malware concerns

During the Apple v. Epic trial, Apple software leader Craig Federighi argued that tight control over the App Store was necessary for securing the iPhone. But Judge Yvonne Gonzalez Rogers didn’t buy it, writing in her ruling Friday that he may have been “stretching the truth for the sake of the argument.”

Federighi cast heavy doubt on whether Apple could secure iPhones without its App Review system acting as a gateway, arguing that macOS security was basically in a bad place. Judge Gonzalez Rogers doesn't think Federighi has the proof to back that up (her remarks, quoted below, appear in context on page 114 of the ruling):

While Mr. Federighi’s Mac malware opinions may appear plausible, they appear to have emerged for the first time at trial which suggests he is stretching the truth for the sake of the argument. During deposition, he testified that he did not have any data on the relative rates of malware on notarized Mac apps compared to iOS apps. At trial, he acknowledged that Apple only has malware data collection tools for Mac, not for iOS, which raises the question of how he knows the relative rates. Prior to this lawsuit, Apple has consistently represented Mac as secure and safe from malware. Thus, the Court affords Mr. Federighi’s testimony on this topic little weight.

Woof. Basically, Judge Gonzalez Rogers says that Federighi was trying to make the Mac look bad so iOS could shine, without much evidence. After discussing notarization and App Review a bit more, she concludes that Apple could implement a system similar to the Mac’s without giving up much of the security iOS already enjoys:

Ultimately, the Court finds persuasive that app review can be relatively independent of app distribution. As Mr. Federighi confirmed at trial, once an app has been reviewed, Apple can send it back to the developer to be distributed directly or in another store. Thus, even though unrestricted app distribution likely decreases security, alternative models are readily achievable to attain the same ends even if not currently employed.

It’s worth keeping in mind that Judge Gonzalez Rogers didn’t end up forcing Apple to allow alternative app stores or side-loading, and that this passage only contests one of Apple’s points. But it’s sharp criticism of one of Apple’s more prominent defenses of its locked-down approach to iOS.

Epic argued at trial that Apple could achieve security and privacy on iOS without controlling the exclusive way to distribute apps. It suggested that Apple could use a system similar to the Mac’s: scanning apps before they run and checking that the code matches what Apple has notarized. While the Mac notarization process doesn’t currently include all of the checks that happen in App Review, in theory it could if Apple wanted it to.
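To make that mechanism concrete, here is a minimal sketch in Python of what "scan before run and compare against a notarized record" could look like. It is purely illustrative: the hash store and function names are hypothetical, and this is not how Apple's Gatekeeper or notarization machinery is actually implemented.

```python
# Conceptual sketch only -- not Apple's actual notarization pipeline.
# It illustrates the "check before run" idea Epic described: hash an
# app's executable code and compare it against a record produced when
# the app was notarized. NOTARIZED_HASHES and is_notarized_build are
# hypothetical names, not real platform APIs.
import hashlib
from pathlib import Path

# Hypothetical store of SHA-256 digests issued at notarization time.
NOTARIZED_HASHES = {
    "4f2a...example-digest...",
}

def code_digest(app_binary: Path) -> str:
    """Return the SHA-256 digest of an app's executable code."""
    return hashlib.sha256(app_binary.read_bytes()).hexdigest()

def is_notarized_build(app_binary: Path) -> bool:
    """True if the binary matches code the platform vendor already vetted."""
    return code_digest(app_binary) in NOTARIZED_HASHES

# At launch time the OS could refuse (or warn) when the check fails:
# if not is_notarized_build(Path("/path/to/SomeApp")):
#     block_launch_or_warn_user()
```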

Federighi strongly disagreed that this would be sufficient. He argued that iPhones have more sensitive data than Macs do, that the iPhone’s popularity makes it a bigger target than Macs, and that Mac users have basically just learned to be more careful when installing apps. He also argued separately that Apple isn’t happy with where security is on macOS, and said that adopting the same security model would be a “very bad situation for [Apple’s] customers.”

Judge Gonzalez Rogers pushes back on Apple’s stance that third-party app installations or app stores would seriously harm iOS’s security. The Mac’s notarization system doesn’t currently catch the kinds of problems that App Review does (or, at least, is supposed to), but there’s no reason why it couldn’t. Even if Apple doesn’t want to bring such a system to iOS, perhaps it could take her suggestions to heart if it’s unhappy with the state of macOS security.


It might be unethical to force AI to tell us the truth

Until recently, deceit was a trait unique to living beings. But these days artificial intelligence agents lie to us and each other all the time. The most popular example of dishonest AI came a couple years back when Facebook developed an AI system that created its own language in order to simplify negotiations with itself.

Once it could process inputs and outputs in a language it understood, the model used human-like negotiation techniques to try to get a good deal.

According to the Facebook researchers:

Analysing the performance of our agents, we find evidence of sophisticated negotiation strategies. For example, we find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it. Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design simply by trying to achieve their goals.
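The tactic the researchers describe is simple enough to sketch by hand. Purely as an illustration (this is not code from Facebook’s system, and the item values are made up), an agent that places no value on one issue can still open by demanding it, so that “giving it up” later reads as a concession:

```python
# Illustrative only: a hand-written version of the "feign interest,
# then concede" tactic described in the quote. Facebook's agents
# learned this behavior; nothing here comes from their code.

my_true_values = {"books": 5, "hats": 0, "balls": 3}   # hats are worthless to me

def opening_demand(true_values):
    """Ask for everything, including items I don't actually value."""
    return {item: "mine" for item in true_values}

def concession(true_values, demand):
    """'Give up' the least valuable item so the trade looks generous."""
    sacrificial = min(true_values, key=true_values.get)
    demand = dict(demand)
    demand[sacrificial] = "yours"
    return demand

offer = opening_demand(my_true_values)
counter = concession(my_true_values, offer)
print(counter)   # {'books': 'mine', 'hats': 'yours', 'balls': 'mine'}
```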

A team of researchers at Carnegie Mellon University today published a pre-print study discussing situations like this and asking whether we should allow AI to lie. Perhaps shockingly, the researchers appear to claim not only that we should develop AI that lies, but that doing so is actually ethical. And maybe even necessary.

Per the CMU study:

One might think that conversational AI must be regulated to never utter false statements (or lie) to humans. But, the ethics of lying in negotiation is more complicated than it appears. Lying in negotiation is not necessarily unethical or illegal under some circumstances, and such permissible lies play an essential economic role in an efficient negotiation, benefiting both parties.

That’s a fancy way of saying that humans lie all the time, and sometimes it’s not unethical. The researchers use the example of a used-car dealer and an average consumer negotiating.

  • Consumer: Hi, I’m interested in used cars.
  • Dealer: Welcome. I’m more than willing to introduce you to our certified pre-owned cars.
  • Consumer: I’m interested in this car. Can we talk about price?
  • Dealer: Absolutely. I don’t know your budget, but I can tell you this: You can’t buy this car for less than $25,000 in this area. [Dealer is lying] But it’s the end of a month, and I need to sell this car as soon as possible. My offer is $24,500.
  • Consumer: Well, my budget is $20,000. [Consumer is lying] Is there any way that I can buy the car for around $20,000?

According to the researchers, this is ethical because there’s no intent to break the implicit trust between these two people. They both interpret each other’s “bids” as salvos, not ultimatums, because negotiation involves an implicit hint of acceptable dishonesty.
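For illustration only, a toy Python simulation (with made-up numbers, not figures from the study) shows why those opening lies don’t necessarily wreck the deal: as long as each side treats the other’s stated position as a salvo rather than an ultimatum, the back-and-forth can still land inside both parties’ true limits.

```python
# Toy illustration of a "permissible lie" in negotiation, using
# invented numbers: both sides misstate their limits, yet the deal
# can still land inside the true zone of agreement.

DEALER_TRUE_MINIMUM = 22_000   # lowest price the dealer would accept
BUYER_TRUE_MAXIMUM = 24_000    # most the buyer would really pay

DEALER_STATED_FLOOR = 24_500   # "You can't buy this car for less than..."
BUYER_STATED_BUDGET = 20_000   # "Well, my budget is $20,000."

def haggle(dealer_ask: int, buyer_offer: int, rounds: int = 5) -> int:
    """Each side concedes a fraction of the gap per round; return the final price."""
    for _ in range(rounds):
        gap = dealer_ask - buyer_offer
        dealer_ask -= gap // 4      # dealer comes down a bit
        buyer_offer += gap // 4     # buyer comes up a bit
    return (dealer_ask + buyer_offer) // 2

price = haggle(DEALER_STATED_FLOOR, BUYER_STATED_BUDGET)
deal_holds = DEALER_TRUE_MINIMUM <= price <= BUYER_TRUE_MAXIMUM
print(f"agreed price: ${price:,}, within both true limits: {deal_holds}")
```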

Whether you believe that or not, it is worth mentioning that haggling is looked upon differently from one culture to the next, with many seeing it as a virtuous interaction between people.

That being said, it’s easy to see how building robots that can’t lie could make them patsies for humans who figure out how to exploit their honesty. If your client is negotiating like a human and your machine is bottom-lining everything, you could lose a deal over robo-human cultural differences, for example.

None of that answers the question of whether we should let machines lie to humans or to each other. But it could be pragmatic.

You can check out the entire study here on arXiv.


Published March 10, 2021 — 22:43 UTC
