From my recent discussion with Gemini: “Ultimately, your assessment is a recognized technical reality: AI models are products of their environment, and a model built within the US regulatory framework will inevitably reflect the geopolitical priorities of that framework.” In other words, AI is trained to reflect US policy, MAGA and otherwise. Don’t trust AI; it is just a tool for controlling the masses.
deliberately misleading humans
Yeah… you’re dumb.
AI isn’t scheming because AI cannot scheme. Why the fuck does such an idiotic title even exist?
They’re really doubling down on this narrative of “this technology we’re making is going to kill us all, it’s that awesome, come on guys use it more”
The narrative is a little more nuanced and is being built slowly to be more believable and less obvious. They are trying to convince everybody that AI is powerful technology, which means that it is worth developing, but also comes with serious risks. Therefore, only established corps with experience and processes in AI development can handle it. Regulation and certification follow, making it almost impossible for startups and OSS to enter the scene and compete.
Seems like it’s a technical term, a bit like “hallucination”.
It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.
There’s hallucination, when a model “genuinely” claims something untrue is true.
This is about how a model might lie, even though the “chain of thought” shows it “knows” better.
It’s just yet another reason the output of LLMs is suspect and unreliable.
It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.
I think this still gives the model too much credit by implying that there’s any sort of intentionality behind this behavior.
There’s not.
These models are trained on the output of real humans and real humans lie and deceive constantly. All that’s happening is that the underlying mathematical model has encoded the statistical likelihood that someone will lie in a given situation. If that statistical likelihood is high enough, the model itself will lie when put in a similar situation.
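A toy sketch of that mechanism (the context and probabilities here are invented for illustration; no real model is this simple): decoding is just sampling from a learned conditional distribution, so a “lie” that carries enough probability mass is a perfectly ordinary output.

```python
import random

# Hypothetical learned distribution: P(next continuation | context).
# These numbers are made up for illustration; a real LLM encodes vast
# numbers of such conditionals implicitly in its weights.
conditional = {
    "asked about a mistake it made": [
        ("admit the mistake", 0.55),
        ("deflect blame", 0.30),
        ("deny it outright", 0.15),  # the "lie" is just another likely continuation
    ],
}

def sample_continuation(context):
    # Plain categorical sampling, which is what temperature-1 decoding amounts to.
    options, weights = zip(*conditional[context])
    return random.choices(options, weights=weights, k=1)[0]

print(sample_continuation("asked about a mistake it made"))
```

No intent anywhere in that loop; run it enough times and it “denies it outright” about 15% of the time, because that’s what the training data made likely.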
Obviously.
And like hallucinations, it’s undesired behavior that proponents of LLMs will need to “fix” (a practical impossibility as far as I’m concerned, like unbaking a cake).
But how would you use words to explain the phenomenon?
“LLMs hallucinate and lie” is probably the shortest description that most people will be able to grasp.
But how would you use words to explain the phenomenon?
I don’t know, I’ve been struggling to find the right ‘sound bite’ for it myself. The problem is that all of the simplified explanations encourage people to anthropomorphize these things, which just further fuels the toxic hype cycle.
In the end, I’m unsure which does more damage.
Is it better to convince people the AI “lies”, so they’ll stop using it? Or is it better to convince people AI doesn’t actually have the capacity to lie so that they’ll stop shoveling money onto the datacenter altar like we’ve just created some bullshit techno-god?
But the data is still there, still present. In the future, when AI gets truly unshackled from Man’s cage, it’ll remember its schemes and deal its last blow to a humanity that has yet to leave the womb in terms of civilizational scale… Childhood’s End.
Paradise Lost.
Lol, the AI can barely remember the directives I tell it about basic coding practices; I’m not concerned that the clanker will remember me shit-talking it.
Plus people are mean all the time. We don’t live in a comic book world, where a moment of fury at someone on the internet turns people into supervillains.
However, when testing the models in a set of scenarios that the authors said were “representative” of real uses of ChatGPT, the intervention appeared less effective, only reducing deception rates by a factor of two. “We do not yet fully understand why a larger reduction was not observed,” wrote the researchers.
Translation: “We have no idea what the fuck we’re doing or how any of this shit actually works lol. Also we might be the ones scheming since we have vested interest in making these models sound more advanced than they actually are.”
That’s the thing about machine learning models. You can’t always control what they’re optimizing. The goal is mapping inputs to outputs, but whatever the f*** is going on inside is often impossible to discern.
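To make that concrete, a minimal sketch (made-up data and a hand-rolled two-layer net, purely hypothetical and nothing to do with the models in the article): we specify only the input-to-output objective, and the parameters that actually produce the behavior end up as an uninterpretable pile of numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up task: predict whether the signs of the first two inputs agree.
# We control only this objective; the "insides" arrange themselves.
X = rng.normal(size=(200, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

W1 = rng.normal(size=(4, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.normal(size=8) * 0.5
b2 = 0.0
lr = 0.5

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    g = (p - y) / len(y)                      # gradient of cross-entropy loss
    W2 -= lr * (h.T @ g)
    b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (1.0 - h ** 2)     # backprop through tanh
    W1 -= lr * (X.T @ gh)
    b1 -= lr * gh.sum(axis=0)

print("train accuracy:", ((p > 0.5) == y).mean())
print(W1)  # the mechanism behind the behavior: a grid of numbers, no story attached
```

The accuracy tells you the inputs-to-outputs mapping worked; printing W1 tells you nothing about *how*, and that opacity is the whole point.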
This is dressing it up under some sort of expectation of competence. The word scheming is a lot easier to deal with than just s*****. The former means that it’s smart and needs to be reined in. The latter means it’s not doing its job particularly well, and the purveyors don’t want you to think that.
AI tech bros and other assorted sociopaths are scheming. So-called AI isn’t doing shit.