So… what would a horse girl be in this hypothetical world? A scientist? A deviant? A deviant scientist?
I’m a lonely smut writer in Portugal! Feel free to say hello! :3
- 0 Posts
- 9 Comments
MissesAutumnRains@lemmy.blahaj.zone to Technology@lemmy.world • Google's AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges. Google said in response that "unfortunately AI models are not perfect." (English)
3 · 1 day ago
For your first question, what you’re describing is a problem with education and staffing, not a problem with the tool itself. I’m not suggesting you keep around ‘one old man who hates AI’; my pitch is that you bar the use of AI for the human-level checks.
For your second, yes I saw the part about how news and media are representing AI in healthcare, but I don’t really see how news or media are relevant here. Could you explain this a bit for me?
I don’t intend to gloss over the issues with Generative AI/LLMs. I tried to be specific in separating ML from them in my original comment, where I said LLMs in their public-facing versions (ChatGPT, Claude, whatever) aren’t very useful.
The original comment I replied to asked “is “AI” even useful (etc)” but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI and that others can be employed to great effect. If that was unclear, that’s my bad but that was my intention.
The reason I don’t want to engage with a hypothetical is because I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills results in doctors never wanting to use AI, resulting in more deaths?” Neither hypothetical argument is really very helpful for the discussion. I promise you I’ve thought about this a lot (but again, I’m not an expert, nor am I in the field), but more importantly I have friends finishing doctorates in the bioinformatics field whom I get some insight from, and I’m, at least at this point, convinced of the benefits.
MissesAutumnRains@lemmy.blahaj.zone to Technology@lemmy.world
2 · 2 days ago
I read both articles you linked, but I’m not really seeing how they support your point. The first article seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field, and the second discussed biases within tools they selected for cancer diagnoses and a tool they used to reduce those biases. Am I misunderstanding what you’re saying somewhere?
Also, with regard to the reduction in diagnostic accuracy of diagnosticians using AI, I would need to see the specific article to be sure, but if it’s the one that was posted across reddit a few months back, I read through that one as well. It seemed to agree with a similar article about students writing papers with and without the use of ChatGPT (group A writes with it, group B writes without it, and afterwards both are asked to write without the LLM; group B’s essays were shown to be better. This is a hugely reductive description of the experiment, but it gets the idea across). Again, it makes sense that if you use a tool to facilitate an action, that tool replaces that skill and you get “rusty”. It does not mean that the mere existence of the tool would reduce skill in those who don’t use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it, which sorta defeats the purpose of them being a human check on the process, post-screening flag.
I can’t speak to your other points as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool that has use-cases and can be used to supplement current processes to improve outcomes. They can, and are, being improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.
MissesAutumnRains@lemmy.blahaj.zone to Technology@lemmy.world
86 · 2 days ago
Regarding the doctor’s signature thing, it seems premature to say a single flawed study invalidates the entire field and tech, especially when the tech was working as intended in that case and the failure was user error in the study.
And of course, like any tool it should be utilized thoughtfully. Any form of technology directly takes away from the skill previously utilized to get results. Flint and steel took away from the rubbing sticks together skill. The combustion engine took away from many different professional skills.
Consider that, in this case, we don’t have to replace diagnosis but could augment it instead. What if every hospital around the world could augment regular medical care with a single machine processing results? Every single check-up could include a quick cancer screening. If the machine flags you as ‘at risk’, a doctor could then see you for human diagnosis and validation. The skill of diagnosis is still needed and utilized, but now everyone can have regular screening instead of overwhelming an already overtaxed healthcare system.
Again, all I’m saying is that there are practical, useful use-cases for the technology; they’re just not what we’re doing with it.
Edit: as an afterthought, I’m no expert here. As far as I understand, LLMs are a type of ML, but ML encompasses a far broader category of ‘AI’. I’m mostly against LLMs for general use as they’re currently deployed. I’m advocating for ML as a whole, with thoughtful application.
MissesAutumnRains@lemmy.blahaj.zone to Technology@lemmy.world
585 · 2 days ago
Generative AI in its current, public-facing form? Probably not. It’s sort of like the invention of the internet: it CAN be used to facilitate learning, share information, and improve lives. Will it be used for that? No.
A friend of mine is training local LLMs to work in tandem for early detection of diseases. I saw a pitch recently about using AI to insulate moderators from the bulk of disturbing imagery (a job that essentially requires people to frequently look at death, CSAM, and violence and SIGNIFICANTLY ruins their mental health). There are plenty of GOOD ways to use it, but it’s a flawed tech that requires people to responsibly build it and responsibly use it, and it’s not being used that way.
Instead it’s being scaled up and pushed into every possible application both to justify the expenses and enrich terrible people, because we as a society incentivize that.
Edit: hugely belated, I misspoke here after checking with my friend. He’s using local models, but they aren’t LLMs. This is why I’m no expert. 😅
MissesAutumnRains@lemmy.blahaj.zone to Ask Lemmy@lemmy.world • What's the browser(s) you use?
2 · 3 days ago
I swapped to WaterFox for everything and I’ve been enjoying it so far.
MissesAutumnRains@lemmy.blahaj.zone to Technology@lemmy.world • Datacenters in space are a terrible, horrible, no good idea. (English)
1 · 5 days ago
I’m no expert, but I feel like a data center in space is a super niche use case. Bandwidth seems like it would be a major issue. Heat seems like it would as well. And as you said, jurisdiction would be a problem that many businesses wouldn’t necessarily want to contend with.
While the devices are difficult to get to physically, should an adversarial state actor send something up, we couldn’t stop them from accessing the hardware the way we could if it were within a country’s borders. They’re harder to reach for smaller adversaries and significantly easier for bigger ones, not to mention significantly harder for us to repair if something goes wrong.
I’m not saying data centers in space are a bad idea in general, but I am not seeing a huge benefit to them right now.
MissesAutumnRains@lemmy.blahaj.zone to Technology@lemmy.world • President Donald Trump bans Anthropic from use in government systems (English)
931 · 7 days ago
The claim that Anthropic is strong-arming the DoD and forcing it to do anything is laughable, when just the other day it was reported that the DoD was threatening this exact action if Anthropic didn’t comply with its demands.
It feels like a pretty low bar to clear to tell this orange fuck to piss off, but at least one of the AI companies has a spine.

Puritanism.