And this gets worse over time because you still have to maintain it.
And as the cherry on top: https://www.techradar.com/pro/nearly-half-of-all-code-generated-by-ai-found-to-contain-security-flaws-even-big-llms-affected
I assumed nothing and evaluated it like I would any other tool. It’s OK for throwaway scripts, but if the script does anything non-trivial that could affect anything external, the time spent making sure nothing goes awfully wrong is at least as much as the time saved generating the script, at least in my domain.
Someone on Mastodon was saying that whether you consider AI coding an advantage completely depends on whether you think of prompting the AI and verifying its output as “work.” If that’s work to you, the AI offers no benefit. If it’s not, then you may think you’ve freed up a bunch of time and energy.
The problem for me, then, is that I enjoy writing code. I do not enjoy telling other people what to do or reviewing their code. So AI is a valueless proposition to me because I like my job and am good at it.
The real slowdown comes later, when you realize you don’t understand your own codebase because you relied too much on AI. Understanding it well enough requires discipline, which is lacking in the current IT world anyway. Either you rely entirely on AI or you monitor its every action, in which case you may be better off writing the code yourself. I don’t think this hybrid approach will pan out particularly well.
Any new tool or technique will slow ANYONE down until you familiarize yourself with it and get used to it.
This article might as well say the sky is blue and the grass is green. It isn’t news, and it’s quite obvious it will take a few uses to get decent with it, like any other new tool, software, etc.
Writing code with an AI as an experienced software developer is like writing code by instructing a junior developer.
Without the payoff of the next generation of developers learning.
Management: “Treat it like a junior dev”
… So where are we going to get senior devs if we’re not training juniors?
… That keeps making the same mistakes over and over again because it never actually learns from what you try to teach it.
Yep, the junior is capable of learning.
Wait till I get hired as a junior.
This is not really true.
The way you teach an LLM, outside of training your own, is with rules files and MCP tools. Record your architectural constraints, favored dependencies, and style guide in your rules files, and the output you get will be vastly improved. Give the agent access to more information with MCP tools and it will make more informed decisions. Update them whenever you run into issues and the vast majority of your repeated problems will be resolved.
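For example, a minimal sketch of what a rules file might contain. The filename and every convention below are assumptions; the exact format depends on your tool (e.g. `.cursorrules` or `CLAUDE.md`):

```
# Hypothetical rules file (e.g. .cursorrules or CLAUDE.md)

## Architecture
- Domain logic lives in core/ and must never import from adapters/.
- All external I/O goes through interfaces defined in ports/.

## Dependencies
- HTTP: use httpx, not requests.
- Never add a new dependency without flagging it first.

## Style
- Type hints and docstrings on every public function.
- Prefer small pure functions; no module-level mutable state.
```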
This is why you use a downloaded LLM and customize it; there are ways to fix these issues.
Unless you are retraining the model locally at your 23-acre data center in your garage after every interaction, it’s still not learning anything. You are just dumping more data into its temporary context.
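For contrast, here is a minimal sketch of what actually changing a model looks like: a training step that updates the weights. This assumes the Hugging Face transformers and PyTorch APIs, with gpt2 standing in for any local model:

```python
# Sketch: "learning" means a gradient step that mutates the weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One training example; labels=input_ids gives the causal-LM loss.
batch = tok("def parse_config(path):", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss

loss.backward()
opt.step()  # the weights themselves just changed; nothing else counts as learning
```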
What part of customize did you not understand?
And lots fit on personal computers, dude. Do you even know what different LLMs are out there…?
One for programming doesn’t need all the fluff of books and art, so now it’s a manageable size. LLMs are customizable to any degree; you can even use your own data library for the context data!
What part about how LLMs actually work do you not understand?
“Customizing” is just dumping more data into its context. You can’t actually change the root behavior of an LLM without rebuilding its model.
“Customizing” is just dumping more data into its context.
Yes, which would fix the incorrect coding issues. It’s not an LLM issue; it’s too much data. Or remove the context causing that issue. These require a little legwork and knowledge to make useful, like anything else.
You really don’t know how these work do you?
You do understand that the model weights and the context are not the same thing, right? They operate completely differently and have different purposes.
Trying to change the model’s behavior using instructions in the context is going to fail. That’s like trying to change how a word processor works by typing into the document. Sure, you can kind of get the formatting you want if you manhandle the data, but you haven’t changed how the application works.
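To make the distinction concrete, here is a minimal sketch, assuming the Hugging Face transformers API with gpt2 as a stand-in for any model:

```python
# Sketch: generation is a pure function of (frozen weights, context tokens).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no weight updates happen from here on

# "Customizing" via prompting just prepends tokens to the input...
rules = "Always use snake_case. Never use global variables.\n"
inputs = tok(rules + "Write a function that parses a config file.",
             return_tensors="pt")

# ...and the output depends only on the fixed weights plus that context.
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0]))

# The weights are byte-for-byte identical to before the call. Drop the
# rules string from the next prompt and the model behaves as if the
# "lesson" never existed; nothing was learned, only context was added.
```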
If it’s constantly making an error, fix the context data, dude. What about an LLM/AI makes you think this isn’t possible…? Lmfao, you just want to bitch about AI, not comprehend how they work.
Sounds like you have no clue what an LLM/AI actually is or is capable of.
https://medium.com/sciforce/step-by-step-guide-to-your-own-large-language-model-2b3fed6422d0
It’s not hard to keep a data library updated for context, and some are under a TB in size.
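In case it helps, here is a toy sketch of that “data library for context” pattern (retrieval-augmented prompting): relevant notes are looked up and pasted into the context on each call. Real setups use embedding search rather than word overlap, and every name below is made up:

```python
# Toy "data library": project notes retrieved into the prompt per request.
corpus = {
    "deps": "Use httpx for HTTP calls; requests is banned in this repo.",
    "style": "All public functions need type hints and docstrings.",
}

def retrieve(query: str) -> list[str]:
    # Naive retrieval: return every note sharing a word with the query.
    words = set(query.lower().split())
    return [note for note in corpus.values()
            if words & set(note.lower().split())]

query = "write an HTTP client helper"
prompt = "\n".join(retrieve(query)) + "\n\nTask: " + query
print(prompt)  # project knowledge is injected, but only for this one call
```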
Where are you getting your information from?
It seems you are still confusing context with training. Did you read that article and understand it?
Did you follow it yourself to build an LLM?
Why do you think it’s solely a training issue?
So, you did not? Ok