• 0 Posts
  • 15 Comments
Joined 8 months ago
Cake day: June 8th, 2025

  • You do all that? Then you, and you personally only, may have my sincere apologies and a world of admiration. And I’m sorry that it is happening to you.

    But I’m done with the benefit of the doubt. I’ll judge you collectively by the outcome. And until I see systemic change, your country is not trustworthy in my eyes.

    If trump died today, it wouldn’t be a sigh of relief for me. It would be proof that you have collectively broken your system so badly that the only thing that can stop a tyrant in charge is time.

    This is not the outcome of today. It hasn’t been happening only since yesterday. You ask hypotheticals about my country. As long as there is a single person not using all their purchasing power, all administrative pressure, and as much civil disobedience as they have at their disposal, your country is not taking its responsibility, and I’ll judge you collectively until proven otherwise. Feel free to do the same with me and my country.

    Not everyone can do everything. But literally everyone can cancel their fucking Disney+ subscription. And all trump supporters are doing very well, because, collectively, a bare minimum of inconvenience seems to be harder to overcome than the shame of betrayal.

    Posting “what else is new” memes is not resistance. It’s rolling over.


  • That’s a fair question. Am I sure? No. The problem is that nobody is publishing explicitly how expensive this all truly is (which is concerning on its own). But a lot of signs point in that direction. I think that “cost of inference” may be a fake metric on its own, designed to obscure the fact that you need multiple sub-queries to run one “thinking” query. But granted, I’m out of my depth, so I may be wrong here.

    AI companies are running two arms races here: one, who can create a better model; two, who can create a shittier, cheaper pseudo-model that can still pass as “AI” (and which is also too expensive).

    When you complain that those models can’t do anything useful and make terrible mistakes, they reply that you’re not using the truly smart one. When you complain that there is no way the smart one will ever scale because it’s crazy expensive, they tell you, “but look how much cheaper the ‘fast’ ones are”, never truly addressing the question, and bundling the two together so you are never sure which one you’re using.

    Does anyone successfully run these openly available models locally on their own machine? I’m seriously asking, since, as I understand it, that doesn’t work out as well as one would hope. That suggests the resources required are still massive.


  • Me too. Honestly. But right now it’s subsidized, and Proton loses money every time we do. And the cost will not go down with scale. Are you going to pay hundreds of dollars every month for it? In the end it’s a “slightly more convenient search engine/brainstorming tool that usually works”. It’s not like it’s worth nothing. But it’s not worth a fraction of the costs it generates and the resources it consumes.

    But it’s true, it’s private, it probably isn’t optimized to farm engagement over everything else, and it’s less likely to advise me toward suicide rather than toward talking with someone else about it. So yes, I prefer it over ChatGPT as well.



  • What are you talking about? Do you still believe in free lunches? There is an opportunity cost to everything. My paid plan includes a resource-heavy, going-nowhere technology that nobody was asking for, while the same resources could be spent on things that people do ask for, like a better gallery or better Linux support.

    I don’t want my postman to clean my windows even if he does it for free (for now). I just wonder who paid for his cleaning supplies while his mailbag is full of holes.