ChatGPT’s much-heralded Mac app was storing conversations as plain text


Sajuuk

Ars Tribunus Angusticlavius
9,621
As with Microsoft's Recall, I don't find this particularly damning.

Is it better to protect it? Maybe?

But everything you do is stored nonprotected on your computer. That's the default condition!

And you shouldn't be sending personal information to OpenAI anyway!
"But everything you do is stored nonprotected on your computer. That's the default condition!"

Literally the opposite of what macOS actually does, which is why OpenAI explicitly opted out of the default user sandbox.
 
Last edited:
Upvote
187 (197 / -10)

metavirus

Ars Praetorian
469
Subscriptor++
My company has been exploring how to get into AI slowly and responsibly, and this is yet another reason why we don’t trust any arrangement that would be direct with OpenAI. They are immature, unreliable, and nowhere near enterprise-grade or proven over the long haul. This instance in particular is absolutely shameful. Security by design is apparently only for mature companies out of their training diapers.
 
Upvote
84 (88 / -4)
"The app is only available as a direct download from OpenAI's website and is not available through Apple's App Store where more stringent security is required." -- i guess apple wants to maintain some distance, despite the announced integration path, but letting the OpenAI people loose on the brand could really degrade the security brand down the road. i'm quite surprised that some adults weren't assigned to oversee what OpenAI could -- borderline sideload -- onto Macs. or, if assigned, that they thought this was acceptable-quality code.
The issue is that Apple would require the security OpenAI didn't think was important. That's why they didn't submit it to the App Store. Apple wasn't involved in this decision per se, nor did it decide to maintain some distance.

I am curious if OpenAI will get attacked as much for this as Microsoft was with Recall.
 
Upvote
70 (75 / -5)

markgo

Ars Tribunus Militum
2,898
Subscriptor++
I have no idea why you’d disable the default sandbox. This was a business decision.
To gain access to the rest of the file system. It’s the primary reason apps opt out of the sandbox.

You can get access to individual files while sandboxed, but it requires a fancy little dance with platform-specific APIs and user interaction.

I’d disagree with the decision for a privacy sensitive app, but there are technical reasons it might be desirable.
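For anyone curious what that "fancy little dance" looks like, here is a rough Swift sketch of how a sandboxed Mac app gets durable access to a single user-chosen file: the user picks it via NSOpenPanel, and the app saves a security-scoped bookmark so it can reopen the file on later launches. The function name and the UserDefaults key are made up for illustration.

```swift
import AppKit

// Sketch: durable access to one user-chosen file from inside the sandbox.
// The user must pick the file themselves; the app cannot just open
// arbitrary paths the way an unsandboxed app can.
func chooseAndRememberFile() {
    let panel = NSOpenPanel()
    panel.canChooseFiles = true
    panel.allowsMultipleSelection = false
    guard panel.runModal() == .OK, let url = panel.url else { return }

    // Persist a security-scoped bookmark so access survives relaunch.
    if let bookmark = try? url.bookmarkData(options: .withSecurityScope,
                                            includingResourceValuesForKeys: nil,
                                            relativeTo: nil) {
        UserDefaults.standard.set(bookmark, forKey: "savedFileBookmark")
    }

    // Access must be bracketed by start/stop calls each time.
    if url.startAccessingSecurityScopedResource() {
        defer { url.stopAccessingSecurityScopedResource() }
        _ = try? Data(contentsOf: url)
    }
}
```

That per-file, user-mediated ceremony is exactly what an app avoids by opting out of the sandbox entirely.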
 
Upvote
51 (56 / -5)

Fatesrider

Ars Legatus Legionis
21,799
Subscriptor
It's not a bug and it's not really a major security concern. They just didn't implement the sandboxing feature on macOS, but that's the case for the majority of non-App Store apps, and that's how every Windows app behaves.

It's good that they improved security by implementing the sandbox, but the way the issue was covered by the press made it look like a much more serious issue than what it actually is.
They did not implement the sandbox.
OpenAI has now updated the app, and the local chats are now encrypted, though they are still not sandboxed.
So I think the press losing its mind over this is still fully justified.
 
Upvote
75 (77 / -2)

Reaperman2

Ars Tribunus Militum
1,601
As with Microsoft's Recall, I don't find this particularly damning.

Is it better to protect it? Maybe?

But everything you do is stored nonprotected on your computer. That's the default condition!
My browsers delete all cookies and history when closed, and my computer deletes all temp files when shutting down. And most of my history is never kept anyway because there are only like 5 sites I have javascript enabled on.

Adding features that save all that info without my approval is not okay, because I actually care about security.

You might as well say, "None of these privacy nightmares matter because nobody uses a strong password anyway." You'd be equally wrong, and still a bootlicker.
 
Upvote
47 (60 / -13)

Mr. Kite

Ars Scholae Palatinae
920
Subscriptor
Public Service Announcement:

While this is a highly fertile subject for a facepalm GIF, a double facepalm is strongly recommended here.

Apple should have caught this.
How does Apple catch stupid decisions in an app that is not distributed through their App Store? How? And why is it Apple's job in your mind?
 
Upvote
141 (143 / -2)

mmorales

Ars Praetorian
422
Subscriptor
Folks. Read and think.

Direct-loaded Mac apps (what would be called sideloaded on a phone) have full filesystem access, and Apple has no involvement in approving them. The security responsibility is on the developer, a responsibility OpenAI spectacularly failed to meet.

If and only if you go through the Mac App Store does Apple have approval authority, and they require sandboxing. This is part of why a consumer can have a bit more confidence in App Store software from a developer you don’t know.

Many of you are conflating this with the phone, where Apple does have approval authority over all apps and sandboxing is required.
 
Upvote
86 (90 / -4)

EVOO

Ars Scholae Palatinae
713
Public Service Announcement:

While this is a highly fertile subject for a facepalm GIF, a double facepalm is strongly recommended here.

Apple should have caught this.
Unlike iOS (for now), macOS will run apps from anywhere as long as the user approves it. Other than a signature check, there's nothing Apple can do if people just download an app from rando websites. I don't understand why you think Apple is somehow responsible.
 
Upvote
53 (56 / -3)
My company has been exploring how to get into AI slowly and responsibly and this is yet another reason why we don’t trust any arrangement that would be direct with OpenAI. They are immature, unreliable, and not anywhere near enterprise-grade; proven over the long haul. This instance in particular is absolutely shameful. Security by design is apparently for mature companies out of their training diapers.
But they're shifting the paradigm. Disrupting the status quo. Making the world a better place. Seriously how in 2024 is this kind of shit still going on. Nevermind, I know how....
 
Upvote
0 (9 / -9)

willson556

Seniorius Lurkius
3
Subscriptor++
1) It stores a key somewhere; same issue, since it has to read that key
Easy. On macOS, the Keychain Services API is perfect for this. Applications can store keys in there easily. Users have to explicitly grant access to any apps that wish to access an item they didn't create.
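A rough Swift sketch of what that looks like, storing a per-app encryption key as a generic password item via Keychain Services (the service and account strings here are invented for the example):

```swift
import Foundation
import Security

// Sketch: save an encryption key in the macOS Keychain.
// Other apps cannot read this item without explicit user consent.
func storeKey(_ key: Data) -> Bool {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.chatapp",   // hypothetical
        kSecAttrAccount as String: "chat-encryption-key",   // hypothetical
        kSecValueData as String: key
    ]
    return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
}

// Sketch: read the key back on a later launch.
func loadKey() -> Data? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.chatapp",
        kSecAttrAccount as String: "chat-encryption-key",
        kSecReturnData as String: true
    ]
    var result: AnyObject?
    guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess
    else { return nil }
    return result as? Data
}
```

So "where does the key live?" has a standard, OS-provided answer; there's no need to stash it next to the data it protects.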
 
Upvote
41 (41 / 0)
It's not a bug and it's not really a major security concern. They just didn't implement the sandboxing feature on macOS, but that's the case for the majority of non-App Store apps, and that's how every Windows app behaves.

It's good that they improved security by implementing the sandbox, but the way the issue was covered by the press made it look like a much more serious issue than what it actually is.
Imagine... a Chinese firm also trying to be a world AI champ caught with this... would you still feel the same? Perhaps being a ChatGPT fan has clouded your judgment?
 
Upvote
35 (37 / -2)

misterjim

Ars Praefectus
5,689
Subscriptor
This isn't some obscure bug, these guys ship product and just don't care about the user.

I don't see how they can be trusted to do anything they promise.
100%

The desktop implementations have been so half-assed that I have zero trust that any part of the codebase is done right.
 
Upvote
26 (27 / -1)