There’s also Ignosticism, whose adherents hold that the question is underspecified because “God” isn’t well-defined.
monotremata@lemmy.ca to
Selfhosted@lemmy.world•I prompt injected my CONTRIBUTING.md – 50% of PRs are botsEnglish
1·8 days ago
Yeah, agreed. I must have misunderstood your original comment.
monotremata@lemmy.ca to
Selfhosted@lemmy.world•I prompt injected my CONTRIBUTING.md – 50% of PRs are botsEnglish
31·9 days ago
I’m not sure I totally understand your comment, so bear with me if I’m agreeing with you without realizing it.
“let me prioritize PRs raised by humans” … but why? Why do that in the first place? If bots/LLMs/agents/GenAI genuinely worked they would not care if it was made or not by humans, it would just be quality submission to share.
Before LLMs, there was a kind of symmetry about pull requests. You could tell at a glance how much effort someone had put into creating the PR. High effort didn’t guarantee that the PR was high quality, but you could be sure you wouldn’t have to review a huge number of worthless PRs simply because the work required to make something that even looked plausibly decent was too much for it to be worth doing unless you were serious about the project.
Now, however, that’s changed. Anyone can create something that looks, at first glance, like it might be an actual bug fix, feature implementation, etc. just by having the LLM spit something out. It’s like the old adage about arguing online–the effort required to refute bullshit is exponentially higher than the effort required to generate it. So now you don’t need to be serious about advancing a project to create a plausible-looking PR. And that means that you can get PRs coming from people who are just trolls, people who have no interest in the project but just want to improve their ranking on github so they look better to potential employers, people who build competing closed-source projects and want to waste the time of the developers of open-source alternatives, people who want to sneak subtle backdoors into various projects (this was always a risk but used to require an unusual degree of resources, and now anyone can spam attempts to a bunch of projects), etc. And there’s no obvious way to tell all these things apart; you just have to do a code review, and that’s extremely labor-intensive.
So yeah, even if the LLMs were good enough to produce terrific code when well-guided, you wouldn’t be able to discern exactly what they’d been instructed to make the code do, and it could still be a big problem.
monotremata@lemmy.ca to
Ask Lemmy@lemmy.world•What is something that desperately needs to be standardized?English
6·9 days ago
But then you’ve got a space that’s 5’ 7 3/8" and you need a clearance of 7/32" on each end, so your piece should be…uh… 5’ 6 15/16" long. So much easier than metric, right?
In metric it would be 1711mm (or 1.711m) and you’d need to take 5.5mm off each end, so it’s 1700mm. (For the record, I picked random numbers in imperial and only did the metric conversion afterwards, I just lucked into the nice round number here.)
I dunno. You need the same number of sig figs in either system, but switching between a factor of 12 for the feet, base 10 for the inches, and the equivalent of binary fractions for the partial inches sure does take getting used to. I’ve finally gotten used to it enough that I can do it in my head, but I prefer to work in metric for most things.
I acknowledge that machinists just use thousandths of an inch, which does greatly improve working with that system, but it also introduces a third kind of measurement that can’t easily be interconverted with the other two. I dunno. It just feels like we’re doing way too much work propping up this archaic system when literally everyone else in the world is using something simpler and we could just be on the same system.
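The arithmetic in the comment above can be sketched in a few lines of Python. This is just an illustrative snippet (the function names are made up for this example); it uses the standard library’s `fractions.Fraction` so the binary fractions of an inch stay exact until the final conversion:

```python
from fractions import Fraction

MM_PER_INCH = Fraction(254, 10)  # 25.4 mm per inch, exact by definition

def imperial_to_mm(feet: int, inches: Fraction) -> Fraction:
    """Convert feet plus (possibly fractional) inches to millimetres, exactly."""
    total_inches = Fraction(feet * 12) + inches
    return total_inches * MM_PER_INCH

# The space from the comment: 5' 7 3/8"
space = imperial_to_mm(5, Fraction(7) + Fraction(3, 8))

# Clearance of 7/32" taken off each end
clearance = 2 * imperial_to_mm(0, Fraction(7, 32))

piece = space - clearance
print(float(space))  # 1711.325  (the "1711mm" in the comment)
print(float(piece))  # 1700.2125 (rounds to the "1700mm" in the comment)
```

Working in exact fractions sidesteps the mixed-base juggling the comment describes: the base-12 feet, base-10 inches, and binary partial inches all collapse into one rational number before the single multiply by 25.4.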
monotremata@lemmy.ca to
Technology@lemmy.world•Hisense TVs force owners to watch intrusive ads when switching inputs, visiting the home screen, or even changing channels — practice infuriates consumers, brand denies wrongdoingEnglish
1·20 days ago
Yeah. I think none of us really understands how valuable all our data really is.
monotremata@lemmy.ca to
Technology@lemmy.world•Hisense TVs force owners to watch intrusive ads when switching inputs, visiting the home screen, or even changing channels — practice infuriates consumers, brand denies wrongdoingEnglish
5·21 days ago
Most of the “commercial” TVs, the ones intended for businesses, don’t have this. They also don’t have streaming services and whatnot built in. They’re just a display with a few inputs, and maybe a tuner.

Yeah, this is what I was going to call out. Calling it “100% solvable by humans” and saying “if human scores were included, they would be at 100%” when 20–60% of humans solved each task seems kinda misleading. The AI scores are so low that I don’t think this kind of hyperbole is necessary; I assume there are some humans that scored 100%, but I would find it a lot more useful if they said something like “the worst-performing human in our sample was able to solve 45% of the tasks” or whatever. Given that the AIs are still scoring below 1%, that’s still pretty stark.