ChatGPT is treated like a search engine for Siri, and generates text and images for apps.
> Just days ago you could see comment after comment stating "Well, at least Apple is being cautious with AI.", "Well of course Apple is taking a 'wait-and-see approach', they're the adults in the room", etc.
> This does not bode well.

One of the liveblogs mentioned Sam Altman was in attendance, and my heart sank.
> I want a single master switch to permanently disable this. Just one toggle in the settings, and if ChatGPT is set to "off" there, then it will never go to that site. I'd rather have no answer than an AI-generated one. I really can do without a bullshit generator in my pocket.

I also feel strongly about using AI in most of the ways it's used. But I'm very stupid. My screen was acting up and I tried to fix it, so I used the best tool I have: my hammer. That didn't work out too well, obviously.
> How many times will it ask? Will the flag reset on every iOS update?

But how are they "inflicting" this on users? You can already run ChatGPT in a browser; this just gives you the option to use it when you're making inquiries. As for "how many times will they ask?", my understanding is they're not "asking" anything, they're just putting a button there that you can use if you want. Not sure what you mean by "will the flag reset"; unless there's a way to permanently turn off the option, there's nothing to reset.
Why would Apple inflict this on users when they claim to be the kings of trust and privacy?
> Can we turn it off completely? Because that's what I'm doing if it's possible.

That's my first thought as well. I want nothing to do with those data thieves.
> Fucking STOP IT
> No one wants this.
> No one asked for this.

Speak for yourself. I do want this. I don't want the OS to take screenshots and regularly send them to Apple, but this sounds useful. Don't turn it on if you don't want it.
> For all the good integrations I saw today, this one seems the most vexing.
> Their example of asking it for recipes was also an interesting choice. It seems to be one of the places AI consistently gets things wrong.

And one that can be answered perfectly well using a classic, AI-free web search.
> This is how I completely go to Linux, including Linux phones.

Nobody tell them.
> But how are they "inflicting" this on users? You can already run ChatGPT in a browser; this just gives you the option to use it when you're making inquiries. As for "how many times will they ask?", my understanding is they're not "asking" anything, they're just putting a button there that you can use if you want. Not sure what you mean by "will the flag reset"; unless there's a way to permanently turn off the option, there's nothing to reset.

The screenshot attached to this story is of the OS asking if you want to use ChatGPT...
> Just days ago you could see comment after comment stating "Well, at least Apple is being cautious with AI.", "Well of course Apple is taking a 'wait-and-see approach', they're the adults in the room", etc.
> This does not bode well.

I was one of those who said "hopefully it'll be a more thoughtful integration of ML than just cramming in a garbage-in-garbage-out chatbot like everyone else". Ultimately they decided "why not both?". At least it's more akin to a search engine option you can just ignore, just like I currently ignore all the AI chatbot options I could engage with. But it still feels a little lame for Apple to cave and just throw in the same Wrongness Engine as everyone else.
> For those alarmed by this, I'm not seeing where Siri offering up an option to enlist ChatGPT in your results has any downside. Just don't click on the permission? Unless I guess just seeing such a prompt is an enraging violation of some imagined line in the sand?

ChatGPT is the scapegoat du jour, and people here don't want anybody to have the choice to use it. Likewise, they'll judge anybody who does, or even those who have mixed feelings. You either hate OpenAI (but conspicuously Meta is never mentioned) or you STFU. That is the consensus here, astroturfed or otherwise.
Adding onto this:

> The screenshot attached to this story is of the OS asking if you want to use ChatGPT...
> Just days ago you could see comment after comment stating "Well, at least Apple is being cautious with AI.", "Well of course Apple is taking a 'wait-and-see approach', they're the adults in the room", etc.
> This does not bode well.

Everyone here seems to be panicking that this is some kind of unavoidable deep integration that hands your life over to ChatGPT.
> For those alarmed by this, I'm not seeing where Siri offering up an option to enlist ChatGPT in your results has any downside. Just don't click on the permission? Unless I guess just seeing such a prompt is an enraging violation of some imagined line in the sand?

I'm personally not alarmed or enraged, but I'm definitely skeptical of AI and disinterested in its integration into devices that I use constantly throughout the day.
> Prime example of dark patterns right there

1. I generally give Apple the benefit of the doubt, to the point I've had people insult me for not "properly" hating them.
> I'm personally not alarmed or enraged, but definitely skeptical of AI and disinterested in integration into devices that I use constantly throughout the day.

See Phat_tony's post before yours. Apple spent the vast majority of the "Apple Intelligence" section of the Keynote talking about specific, real-world implementations of what they've always called "machine learning." None of them had anything to do with invasive chatbots or sketchy info or data being shared in an insecure way. They stressed that most of the processing happens on device, and when the software determines that the input is beyond the scope of same, it is sent to Apple's purpose-built servers with publicly verifiable security. This isn't just some random "let's put AI chat on everything"; it's Apple leveraging their silicon/software integration to do some actually pretty cool new stuff. I'm hoping the Siri stuff works as well as they claimed, because that's an area where they are demonstrably behind the curve.
It's less about ChatGPT specifically and more that "AI" technology in general seems to be half-baked. As long as it's wrong a statistically meaningful part of the time, it doesn't add value.
Add to that ethical concerns about how LLM training has been done, often without consent or respect for copyright. OpenAI specifically licenses Reddit content, the way for which was paved by Reddit screwing over big parts of its ecosystem and community. Then there are more ambiguous concerns, like what is done with voice recordings, the queries I submit, or the personal data needed to answer questions.
All of that for something I expect to add minimal value to some non-critical functions.
> Prime example of dark patterns right there: Instead of "No" as the option, "Cancel" is intended to make users think that they have to use ChatGPT or they won't be able to continue with what they were trying to do, thus gaining a few additional unwitting users of the service and function so they can claim it's popular and people love it. (This is one of the more popular dark patterns.)
> And disabling it later is certain to be buried in settings under a name that doesn't reference ChatGPT, so it's not obvious, and it probably won't be global; instead, each feature that can use ChatGPT will have to be turned off one by one.

What's their motivation for doing that?
> See Phat_tony's post before yours. Apple spent the vast majority of the "Apple Intelligence" section of the Keynote talking about specific, real-world implementations of what they've always called "machine learning." None of them had anything to do with invasive chatbots or sketchy info or data being shared in an insecure way. They stressed that most of the processing happens on device, and when the software determines that the input is beyond the scope of same, it is sent to Apple's purpose-built servers with publicly verifiable security. This isn't just some random "let's put AI chat on everything"; it's Apple leveraging their silicon/software integration to do some actually pretty cool new stuff. I'm hoping the Siri stuff works as well as they claimed, because that's an area where they are demonstrably behind the curve.

None of it changes my opinion. Maybe Apple will do it better and have features where it does add value, but the track record of AI is against them, and they'll have to prove it before I give them credit for it.
> Am I the only one who is only looking forward to these articles for the information within to disable said features?

No, I think a LOT of us are.
> None of it changes my opinion. Maybe Apple will do it better and have places where it does add value, but the track record of AI is against them and they'll have to prove it before I give them credit for it.

Time will tell.
> ChatGPT is the scapegoat du jour and people here don't want anybody to have the choice to use it. Likewise they'll judge anybody who does or even those who have mixed feelings. You either hate OpenAI (but conspicuously Meta is never mentioned) or you STFU. That is the consensus here, astroturfed or otherwise.

This! The commentariat here is the new generation of pearl-clutching Luddites.