However, when testing the models in a set of scenarios that the authors said were “representative” of real uses of ChatGPT, the intervention appeared less effective, only reducing deception rates by a factor of two. “We do not yet fully understand why a larger reduction was not observed,” wrote the researchers.
Translation: “We have no idea what the fuck we’re doing or how any of this shit actually works lol. Also we might be the ones scheming since we have vested interest in making these models sound more advanced than they actually are.”
That’s the thing about machine learning models. You can’t always control what they’re optimizing. You train on inputs and outputs, but whatever the f*** is going on inside is often impossible to discern.
This dresses the failure up in an expectation of competence. The word “scheming” is a lot easier to deal with than just s*****. The former means it’s smart and needs to be reined in. The latter means it’s not doing its job particularly well, and the purveyors don’t want you to think that.