You’re not productive if you don’t use a lot of AI, says guy who makes all of his money selling AI hardware
I think that’s far too low tbh
The beauty is when those companies run out of human training data and start training on AI slop, just to generate even more AI slop.
This is probably already happening though.
It is, intentionally. Some of the training data is synthetic
I thought AI would save us money? If it adds 50% onto the cost of a salary and, by all studies, does not improve productivity output, then it’s not great.
Oh, he left out their plan to fire half of them and drive wages down by a third.
This may sound weird, but I think anecdotal evidence might be more informative than the productivity stats for now, until the industry settles on a new equilibrium.
Some engineers are more productive with AI, and some (maybe even most, still) are less productive. People are still putting in the effort to learn how to use it more effectively/productively (there is a learning curve), and some of the less productive are getting laid off.
It sucks, but that’s just how it is now.
Also, AI tooling is still evolving very rapidly. A lot of information and stats are only valid for maybe a few months.
It’s moments like these that make me think about the state of the world and my part in it. I may just be a random loser on the Internet, but I do know a lot more shit than some of the biggest multi-quad-spillion-dollar CEOs, apparently.
For example, it’s a long-established fact that tech CEOs know jack shit about measuring productivity, even when they’re obsessed with it. Yeah. One more example.
The AI bubble went from a trillion dollars’ worth of circlejerk investments to a self-sucking, circlejerking trillion-dollar Ponzi scheme. Jensen Huang is right: the moment an Nvidia employee stops sucking their own dick, or stops jerking off their coworkers and bosses simultaneously, the whole economy collapses.
It’s amazing they can just make these claims without literally any evidence and no major “media” organization asks for it. They are just propaganda for these companies.
I remember back when Nvidia’s PR team used to push this humble rags-to-riches story about Jensen. I guess even they would have a tough time doing that now that he’s gone mask-off.
Jesus that’s a lot of tokens.
Even if I was trying to do everything in my power to burn tokens for real work, I’d be hard fucking pressed to burn more than a couple thousand a month, and that’s just being wasteful.
You just need to use more context injection and more agents working in parallel on git worktrees or whatever, and then get really depressed because you now own an absolute fuckload of code you’re less familiar with.
Jensen Huang should suck my dick.
Only if he has the tokens to do so.
Luigi knows the solution for this.
Wow, that doesn’t sound like a pyramid scheme at all. At all.
Let me translate this for you, “My bonus depends on you showing our massive investment wasn’t a waste so I’m holding your jobs hostage until you make up busy work to pretend it was worthwhile.”
Wouldn’t an AI researcher naturally find generative AI disadvantageous because they are attempting to develop novel tools which could not exist in the training set in the first place?
Even novel solutions are usually built out of smaller common building blocks. E.g. many novel solutions surely use a database. You can make the LLM help you set up and use the database that your novel solution uses.
“No we don’t need databases anymore, only blockchains.” —Nvidia CEO a few years ago
And AI can help you migrate your database solutions to blockchain, utilizing 3000W worth of Nvidia co-processing power to validate your blockchain database that used to work on a 0.3W ARM processor.
The database was an arbitrary example. A more relevant example would be TensorFlow layers in a neural network. As I understand it, you can in some cases get a novel solution to a problem just by choosing a smart enough combination, with the right data.
ChatGPT absolutely knows how to help with the grunt work of setting up the TensorFlow configuration, following your directions.
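To make the "common building blocks" point concrete, here is a minimal sketch in pure Python (a stand-in for TensorFlow/Keras layers; the class name, sizes, and weights are all illustrative, not anyone's actual model) of how a "novel" network is often just a combination of standard dense layers:

```python
import random

# A "layer" is just a reusable building block: a dense layer with ReLU.
# (Pure-Python stand-in for something like tf.keras.layers.Dense.)
class Dense:
    def __init__(self, n_in, n_out, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.gauss(0, 0.1) for _ in range(n_out)] for _ in range(n_in)]
        self.b = [0.0] * n_out

    def __call__(self, x):
        # For each output unit: weighted sum of inputs, plus bias, then ReLU.
        return [max(0.0, sum(xi * wij for xi, wij in zip(x, col)) + bj)
                for col, bj in zip(zip(*self.w), self.b)]

# The "novel" part is often just the choice and combination of blocks.
model = [Dense(4, 16, seed=1), Dense(16, 8, seed=2), Dense(8, 2, seed=3)]

def forward(x, layers):
    for layer in layers:
        x = layer(x)
    return x

out = forward([1.0, 2.0, 3.0, 4.0], model)
print(len(out))  # 2
```

The grunt work an LLM can take over is exactly this scaffolding; the judgment about which blocks to stack, and in what shape, is still yours.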
you can in some cases get a novel solution to a problem just by choosing a smart enough combination, with the right data.
Smart, lucky, who can tell the difference?
If used by an expert developer, then the combinations are not just random “lucky” choices.
Or, if you take the machine learning approach, you just try all the combinations and use the one(s) that perform the best.
The world is not that simple. There are too many combinations to try. And you risk hitting local maxima, even if doing the gradient thing.
If you are capable of giving good directions…
I’m probably not arguing with you, and I’m not trying to regardless. You seem like you have tried this, watched it happen, gone “huh, neat!”, and then had it take the next step in whatever you were doing in the first place, only to find out you didn’t provide adequate requirements for your config.
only to find out you didn’t provide adequate requirements for your config.
Every software development project, ever.
Review your requirements before starting development. Review them again after each phase of development. Address inadequacies, conflicts, ambiguities whenever you find them.
AI is actually helpful in this process - not so much in knowing what to choose to do, but in pointing out the gaps and contradictions.
Well, yes, that is a central point.
I am a senior programmer. LLMs are amazing - I know exactly what I want, and I can ask for it and review it. My productivity has gone up at least 3-fold, with no decrease in quality, by using LLMs responsibly.
But it seems to me that some people on social media just can’t imagine using LLMs in this way. They just imagine that all LLM usage is vibe coding, using the output without understanding or review. Obviously you are very unlikely to create any fundamentally new solutions if you only use LLMs that way.
only to find out you didn’t provide adequate requirements for your config.
Senior programmer. I know exactly what I want. My requirements, communicated to the LLM, are precise and adequate.
What I find LLMs doing for my software development is filling in the gaps. Thoroughly documented requirements coverage, unit test coverage, traceability - oh, you want a step-by-step test procedure covering every requirement? No problem. Installer scripts and instructions. Especially the stuff we NEVER did back in the late 1980s/early 1990s - LLMs are really good at all of that.
Nothing they produce seems 100% good to go on the first pass. It always benefits from / usually requires multiple refinements which are a combination of filling in missing specifications, clarifying specifications which have been misunderstood, and occasionally instructing it in precisely how something is expected to be done.
A year ago, I was frustrated by having to repeat these specific refinement instructions on every new phase of a project. The LLM coding systems have significantly improved since then, with much better “MEMORY.md” and similar files capturing the important things so they don’t need to be repeated ALL THE TIME.
On the other hand, they still have their limits and in a larger recent project I have had to constantly redirect the agents to stop hardcoding every solution and make the solution data driven from a database.
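The hardcoded-vs-data-driven distinction the agents keep missing looks roughly like this. A minimal sketch, with an invented "discount" example (the table name, tiers, and rates are all hypothetical), using Python's built-in sqlite3:

```python
import sqlite3

# Hardcoded: every new rule is another code change for the agent to make.
def discount_hardcoded(tier):
    if tier == "gold":
        return 0.20
    elif tier == "silver":
        return 0.10
    return 0.0

# Data-driven: the rules live in a table; the code only does the lookup.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE discounts (tier TEXT PRIMARY KEY, rate REAL)")
db.executemany("INSERT INTO discounts VALUES (?, ?)",
               [("gold", 0.20), ("silver", 0.10)])

def discount(tier):
    row = db.execute("SELECT rate FROM discounts WHERE tier = ?",
                     (tier,)).fetchone()
    return row[0] if row else 0.0

print(discount("gold"))  # 0.2
```

Adding a "bronze" tier to the second version is one INSERT, not a code edit - which is exactly the behavior the agents tend to drift away from unless redirected.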
I was simply unable to convince Codex to split a patch into separate git commits in a meaningful way. There are things that just don’t work.
Still useful for lots of stuff. Just don’t use it blind.
Yes, this is why I point it out. I agree with you, but no part of this is actually common sense. It just feels like it.
That’s fair. I guess it could be no different than a scientist with some grand scheme handing his plans off to others to implement.
I think I was assuming that cutting edge AI research involves more math/theory than just… bootstrapping existing tech stacks and tweaking configs.
CEO suggests raising employee costs by fifty percent and is immediately fired.
Sorry, we don’t live in a sane world anymore.
What a fucking clown