• 4 Posts
  • 176 Comments
Joined 3 months ago
Cake day: January 9th, 2026





  • I don’t feel safe doing so. Could a script run with escalated rights without asking me for a password? Is it displayed anywhere that such a process has started (a notification, for example, or at least a message in the terminal)? And even for applications I start directly, I want the password prompt to be explicit, so that I am always aware the app now has escalated root rights.

    I can understand your view of convenience and I am “guilty” of some convenience stuff too. But this goes a bit too far for my taste.
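    For anyone sharing this worry, there is a quick way to check it yourself — a minimal sketch, assuming a standard sudo setup (the exact output depends on your sudoers configuration):

    ```shell
    # List the sudo rules that apply to the current user.
    # Any line containing NOPASSWD means a script could escalate
    # to root without ever prompting for a password.
    sudo -l | grep NOPASSWD \
        && echo "passwordless escalation is possible" \
        || echo "a password is required for sudo"
    ```

    Note that `sudo -l` itself may ask for your password first, which is exactly the explicit behavior I want.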



  • I sometimes prefer Flatpak over the AUR, because I do not trust everyone on the AUR to run scripts with root rights on my system. At least Flatpaks are somewhat sandboxed (even if the sandbox is partly an illusion), and the programs are not installed or run with root rights. Sometimes the Flatpak is from the original developer and the AUR script is not. Or the AUR script is not updated as promptly as the day-one Flatpak updates. But Flatpaks do not integrate well into your system, and applications can look out of place too. There is a lot to consider, besides what you already mentioned.

    I use both, but prefer the AUR when the circumstances are right.
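    To see how much of an illusion the sandbox is for a given app, Flatpak can print the permissions it was granted, and you can tighten them per app. The app ID below is only an example; substitute your own:

    ```shell
    # Show the static sandbox permissions of an installed Flatpak.
    # org.mozilla.firefox is just an example app ID.
    flatpak info --show-permissions org.mozilla.firefox

    # Tighten the sandbox for that app, e.g. revoke full host
    # filesystem access (per-user override, reversible).
    flatpak override --user --nofilesystem=host org.mozilla.firefox
    ```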


  • Congrats on liberating your computer and yourself.

    Just a little advice on using the AUR: it is a user-driven repository of software, meaning anyone can upload packages to it. You are usually advised to read the AUR build script before installing it (most don’t, especially newcomers). So you should be very careful and only install trusted AUR packages. Maybe install the Flatpak instead of the AUR package if you can, but that depends on many factors.
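    Reading the build script before installing is easy to do by hand — a sketch of the manual workflow, where "somepkg" is a placeholder package name:

    ```shell
    # Fetch the AUR package sources without installing anything.
    # "somepkg" is a placeholder; use the real package name.
    git clone https://aur.archlinux.org/somepkg.git
    cd somepkg
    less PKGBUILD        # inspect the build script for anything suspicious
    makepkg -si          # build and install only after you have reviewed it
    ```

    Most AUR helpers also pause to show you the PKGBUILD diff before building, which is worth not skipping.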



  • I personally don’t see this service as changing the license of an existing project. If it reads the same thing and re-implements it from scratch, then it’s a new implementation with a new license. I see it as similar to how reverse engineering is done, for example. And with the approach of two different agents, I think this is okay, as it is a new implementation. I mean, this is something humans could do themselves too. The only question is: can they actually prove that neither agent was trained on the data it is reading and re-implementing (for the clean-room implementation)?

    The biggest problem to me is using AI tools in general, because of what they are trained on and how. But that is a different topic for another day.



  • The developers of systemd said they will never support that, so I think it’s safe for now. Also, why do you think systemd would “require” a government ID check? systemd only provides building blocks; it is the distribution / operating system that decides what to implement. So if an operating system did implement it, I would find a different operating system, regardless of whether it uses systemd or not. That is true for any other component too, not just systemd.