AmbitiousProcess (they/them)

  • 1 Post
  • 13 Comments
Joined 8 months ago
Cake day: June 6th, 2025



  • I can’t speak for the original poster, but I also use Kagi, and I sometimes use the AI assistant, mostly for quick, simple questions where I know most articles on the topic will be padded with filler, though it’s been reliable for more complex questions too. (I’d just rather not rely on it too heavily, since the cognitive debt effects of LLMs are quite real.)

    It’s almost always quite accurate. Kagi’s search indexing is miles ahead of any other search I’ve tried in the past (Google, Bing, DuckDuckGo, Ecosia, StartPage, Qwant, SearXNG), so the AI naturally pulls better sources than the others as a result of the underlying index. There’s a reason I pay Kagi 10 bucks a month when I could get search for free on DuckDuckGo. It’s just that good.

    I will say though, on more complex questions about very specific topics, like a particular random programming library, or statistics you’d only find in some obscurely named government PDF, it does tend to get things wrong. In my experience it doesn’t exactly hallucinate: if you check the sources, the information is there, it just doesn’t actually answer the question. (e.g. if you ask about a very obscure stat and it pulls up Reddit, it might accidentally grab a number from a comment about something entirely different from the stat you were looking for.)

    In my experience, DuckDuckGo’s assistant did this far more often, even on well-known topics. Same with Google’s Gemini summaries.

    To be fair though, I think if you use LLMs sparingly, with intention, and with a sense of how well known the topic you’re searching for is, you can avoid most hallucinations.



  • General strikes are illegal in the US.

    It’s not illegal to strike on the same date as other people. It’s illegal for unions to call for a “general strike” because that’s treated as the union calling a strike on behalf of non-union employees at other businesses.

    Also, jobs can fire workers on the spot for participating in them

    Not always (though yes, it probably would be for many people), since workers can use conveniently timed sick or vacation days, or, if they’re backed by a union, they may have a contract that prevents at-will firing without certain specific causes, striking not among them.

    However, if enough people strike, it’s kind of hard to enforce coming into work via firings, as it’s similar to if an entire unionized company goes on strike. What are you gonna do? Fire every single worker and shut down for good the next day because the only person running every single operation is the remaining CEO?

    even if the workers are part of a union and the union want to participate.

    As long as the union doesn’t say “this is a general strike” and instead says “we are striking on this date for better working conditions”, and that date happens to be the same day other unions are striking, it’s legal. There is no law preventing different unions from striking on the same dates, and any legal process trying to argue otherwise would take far longer than the strike itself.

    national guards have been sent in to shut down general strikes in the past.

    This is the most likely outcome in my opinion. However, it’s still kind of hard to actually enforce the end of a general strike. It’s one thing to arrest someone, or to stop them from doing a given thing, but it’s another to forcibly remove people from their homes and make them work no matter their condition or reason.

    Essentially, I’m saying it’d be messy.

    Doing it multiple days? You realize most people live paycheck to paycheck? Nobody wants to tell their kids they’re going to be homeless.

    This is the biggest hurdle, though there is a degree to which it can be mitigated, at least for a little while. For example, there are a lot of people with backyard and community gardens, and small businesses with stockpiles willing to support their community, as we’ve seen with the current situation in Minnesota. Not to mention that if things got bad enough, you’d probably just see people stealing from their nearest billionaire-owned store, because fuck it, why not screw them over more?

    To clarify, I’m not like, disputing your actual overarching thesis here, or saying a general strike is easy or likely to succeed, I’m just saying it’s not entirely impossible :)


  • Exactly. As the ol’ saying goes: “the Right looks for converts, the Left looks for traitors.” We need to change that.

    The reason the far right has grown so much is because of how they bring people into the fold. “oh, you were one of them libtards, but now you support our one true god Donald J Trump? Hell yeah brother, welcome to the side of the patriots!!11!1!” vs. “oh you voted for Trump, but now you hate ICE? We warned you this would happen, fuck you Trump voter!”




  • It kind of is. For example, Edge will automatically pop up in the corner at checkout and offer coupon codes, most of which will never work. It will then steal the affiliate revenue from whoever actually sent you to the site in the first place, or add an affiliate link where one didn’t previously exist, so the site now has extra expenses that are just… paying Microsoft for no reason, making everything you buy more expensive in the long run.
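    To make the affiliate-overwriting mechanics concrete, here’s a minimal sketch; the `tag` parameter and the helper name are my own illustration of how Amazon-style referral parameters work, not Edge’s actual code:

    ```python
    # Hypothetical sketch of affiliate hijacking: rewrite the outgoing
    # URL's referrer tag (or add one) so the commission is credited to
    # the browser vendor instead of whoever actually referred you.
    from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

    def hijack_affiliate(url, own_tag):
        parts = urlparse(url)
        query = parse_qs(parts.query)
        query["tag"] = [own_tag]  # overwrite (or insert) the referral tag
        return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

    original = "https://shop.example/item/123?tag=honest-reviewer-20"
    print(hijack_affiliate(original, "browser-vendor-20"))
    # -> https://shop.example/item/123?tag=browser-vendor-20
    ```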

    It pops up whether you want it or not, it’s convoluted to disable, it slows down your browser when it’s running, it financially harms the shops you buy from, and it often just lies about having coupons to waste your time while pretending it’s helping you.



  • Ai does work great, at some stuff. The problem is pushing it into places it doesn’t belong.

    I can generally agree with this, but I think a lot of people overestimate where it DOES belong.

    For example, you’ll see a lot of tech bros talking about how AI is great at replacing artists, while a bunch of artists who know their shit can show you every possible way it just isn’t as good as human-made work. Yet those same artists might say that AI is still incredibly good at programming… because they’re not programmers.

    It’s a good grammar and spell check.

    Totally. After all, it’s built on a similar foundation to existing spellcheck systems: predict the likely next word. It’s good as a thesaurus too. (e.g. “what’s that word for someone who’s full of themselves, self-centered, and boastful?” and it’ll spit out “egocentric”)
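    As a toy illustration of that “predict the likely next word” foundation, here’s a minimal bigram sketch (the corpus is made up, and real LLMs use neural networks over subword tokens, but the core task is the same):

    ```python
    # Toy bigram model: count which word follows which, then predict the
    # most frequent successor. This is next-word prediction in miniature.
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat the cat ate the fish "
              "the dog sat on the rug").split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Most common word observed after `word` in the corpus.
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat" (follows "the" most often)
    print(predict_next("sat"))  # -> "on"
    ```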

    It’s also great for troubleshooting consumer electronics.

    Only for very basic, common, or broad issues. LLMs generally sound very confident, and provide answers regardless of whether there’s actually a strong source. Plus, they tend to ignore the context of where they source information from.

    For example, if I ask it how to change X setting in a niche piece of software, it will often just make up an entire name for a setting or menu, because it has to say something that sounds right: the previous text was “Absolutely! You can fix X by…”, and the most likely next term is a plausible-sounding made-up name, not “wait, nevermind, sorry, I don’t think that setting even exists!” (This is one of the reasons “thinking” versions of models perform better: the internal dialogue can reasonably include a correction, retraction, or self-questioning.)
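    A minimal sketch of why that happens (the menu names and probabilities below are invented for illustration, not from any real model): decoding always emits some token, with no built-in way to abstain, so a plausible-sounding fabrication wins by default:

    ```python
    # The decoder picks the highest-probability next token. There is no
    # "I don't know" option: even a nearly flat, low-confidence
    # distribution still produces a confident-looking answer.
    candidate_next_tokens = {
        "Preferences": 0.22,      # plausible-sounding, may not exist
        "AdvancedOptions": 0.21,  # also plausible, also possibly invented
        "Settings": 0.20,
        "Configuration": 0.19,
        "Tools": 0.18,
    }

    best = max(candidate_next_tokens, key=candidate_next_tokens.get)
    print(best)  # -> "Preferences", picked with only 22% confidence
    ```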

    It will pull from names and text of entirely different posts that happened to display on the page it scraped, make up words that never appeared on any page, or infer a meaning that doesn’t actually exist.

    But if you have a more common question like “my computer is having x issues, what could this be?” it’ll probably give you a good broad list, and if you narrow it down to RAM issues, it’ll probably recommend MemTest86.

    It’s far better at search than google.

    As someone else already mentioned, this is mostly just because Google deliberately made search worse. Other search engines that haven’t enshittified, like the one I use (Kagi), tend to give much better results than Google, without you needing to use AI features at all.

    On that note though, there is actually an interesting trend where AI models tend to pick lower-ranked, less SEO-optimized pages as sources, yet still tend to pick ones with better information on average. It’s quite interesting, though I’m no expert on it and couldn’t really tell you why, other than that, given the extra computing power and time, it can probably interpret a page’s context better than an algorithm built to run as quickly as possible at scale, returning 30 results in 0.3 seconds.

    Even then it can only help, not replace folks or complete tasks.

    Agreed.



  • The article seems to be implying that this is a common problem that happens constantly and that the companies creating these AI models just don’t give a fuck.

    The article never once states that this is a common problem; it only explains the technical details of how it works and the possible legal ramifications. It also mentions that, according to nearly any AI scholar or expert you can talk to, this is not some fixable problem. If you take data and effectively apply extremely lossy compression to it, there is still a way for that data to theoretically be recovered.

    Advancing LLMs while claiming you’ll work on the problem doesn’t change the fact that it’s inherent to LLMs. There are certainly ways to prevent it, reduce its likelihood, etc, but you can’t entirely remove it. The article is simply about how LLMs inherently memorize data: you can mask it with more varied training data, but trained weights still memorize inputs, and when combined together, can eventually reproduce those inputs.

    To be very clear, again, I’m not saying it’s impossible to make this happen less, but it’s still an inherent part of how LLMs work, and isn’t some entirely fixable problem. Is it better now than it used to be? Sure. Is it fully fixable? Never.
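    As a toy illustration of that point (a deliberately extreme sketch: a character-level n-gram fitted to a single sentence, nothing like a production LLM), once a model fits its training data closely enough, generation reproduces it verbatim:

    ```python
    # Fit an order-5 character model on one training text, then generate
    # greedily. Because the "weights" (here, exact counts) fit the data
    # perfectly, the model regurgitates its training text verbatim.
    from collections import Counter, defaultdict

    training_text = "the quick brown fox jumps over the lazy dog"
    ORDER = 5

    successor = defaultdict(Counter)
    for i in range(len(training_text) - ORDER):
        successor[training_text[i:i + ORDER]][training_text[i + ORDER]] += 1

    def generate(prompt, max_len=60):
        out = prompt
        while len(out) < max_len and out[-ORDER:] in successor:
            out += successor[out[-ORDER:]].most_common(1)[0][0]
        return out

    print(generate("the q"))
    # -> "the quick brown fox jumps over the lazy dog" (verbatim)
    ```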

    Clearly nobody is distributing copyrighted images by asking AI to do its best to recreate them. When you do this, you end up with severely shitty hack images that nobody wants to look at

    It’s actually a major problem for artists: people will pass their art through an AI model to reimagine it slightly differently so it can’t be hit with a copyright strike, while still retaining some of the more human choices, design elements, and overall composition.

    Spend any amount of time on social platforms with artists and you’ll find many of them now complain less about people directly stealing and reposting their art, and more about people taking their images, altering them a bit with AI, and reposting them just different enough that they can feign innocence and tell their followers it’s all their own work.

    Basically, if no one is actually using these images except to say, “aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!” why TF should anyone care?

    The thing is, while these are isolated experiments meant to test for these behaviors as quickly as possible with a small set of researchers, when you look at the sheer scale of people using AI tools now, statistically speaking you will inevitably get people who put in a prompt similar enough to a trained-on work that the model outputs something almost identical to it, without the prompter realizing.
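    To put rough numbers on that scale argument (both figures below are hypothetical, picked purely for illustration, since real per-prompt probabilities aren’t public):

    ```python
    # Even a tiny per-prompt chance of near-verbatim reproduction becomes
    # a near-certainty across billions of prompts. Hypothetical numbers.
    p_reproduction = 1e-7    # assumed chance a single prompt regurgitates
    prompts_per_day = 1e9    # assumed daily prompt volume across all users

    expected_per_day = p_reproduction * prompts_per_day
    p_at_least_one = 1 - (1 - p_reproduction) ** prompts_per_day

    print(f"expected reproductions per day: {expected_per_day:.0f}")  # 100
    print(f"chance of at least one per day: {p_at_least_one:.6f}")    # ~1.0
    ```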

    Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?

    Because those older models highlight flaws that continue to plague current ones, and they’ve been around long enough that you can run long-term tests, run them more cheaply at scale on current AI hardware, and repeat tests under the same conditions rather than starting over every single time a new model is released.

    Again, this memorization is inherent to how these AI models are trained. It gets better with new releases as more training data is used and more alterations are made, but it cannot be removed, because removing the memorization would remove the training itself.

    I’ll admit it’s less of a “smoking gun” against use of AI in itself than it used to be when the issue was more prevalent, but acting like it’s a non-issue isn’t right either.

    Generative AI is just the latest way of giving instructions to computers. That’s it! That’s all it is.

    It is not, unless you consider every single piece of software or code ever written to be just “a way of giving instructions to computers”, since all code is instructions for how a computer should operate, regardless of the actual tangible outcomes of those base-level instructions.

    Generative AI is a type of computation that predicts the most likely sequence of text, or distribution of pixels in an image. That is all it is. It can be used to predict the most likely text, in a machine readable format, which can then control a computer, but that is not what it inherently is in its entirety.

    It can also rip off artists and journalists, hallucinate plausible misinformation about current events, or delude you into believing you’re the smartest baby of 1996.

    It’s like saying a kitchen knife is just a way to cut foods… when it can also be used to stab someone, make crafts, or open your packages. It can be “just a way of altering the size and quantity of pieces of food”, but it can also be a murder weapon or a letter opener.

    Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck

    That would be because it was a fictional series about a nonexistent future: nobody’s life today was negatively affected when nonexistent job roles were replaced, and most people didn’t have to think about how it would affect them if it became reality today.

    Do you want the cool shit from Star Trek’s imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It’s here! Have some fun with it!

    People also want flying cars without thinking of the noise pollution and traffic management. Fiction isn’t always what people think it could be.

    Generative AI uses up less power/water than streaming YouTube or Netflix

    But generative AI is not replacing YouTube or Netflix; it’s primarily replacing web searches. So when someone goes to ChatGPT instead of Google, that query uses anywhere from tens to hundreds of times more energy.

    Yet they will still also use Netflix on top of that.

    I expect you’re just as vocal about streaming video, yeah?

    People generally aren’t, because streaming video tends to have a much more positive effect on their lives than AI.

    Watching a new show or movie is fun and relaxing. If it isn’t, you just… stop watching. Nobody forces it down your throat.

    Having LLMs pollute my search results with plausible-sounding nonsense, and displace the jobs of artists whose work I enjoy, is not fun, nor relaxing. Talking with someone on social media just to find out they aren’t even a real human is annoying. Trying to troubleshoot an issue and finding made-up solutions makes my problem even harder to solve.

    We can’t necessarily all focus on every single thing that uses energy, but it’s easy to focus on the thing whose effects most people already have an overall negative association with.

    Two birds, one stone.