

I’ll admit my understanding isn’t deep, but here’s how I see it. Please correct me kindly where I’m wrong.
To keep it super simplified, the speed of processing a prompt will always depend on the hardware running it. That might be a data center serving thousands of people at the same time, giving you near-instant results, or it might be measly consumer hardware that takes longer to process your prompt and return a result.
Data centers take a lot of power to run, so they can strain the power grid if it isn’t able to cope, and drive up your power bill.
They take a lot of water to keep cool, and from what I understand they produce wastewater that needs to be treated again before it’s safe for consumption. Multi-billion-dollar corporations are well known for following environmental and safety standards.
They need a lot of space to build, destroying environments or taking away zoning, and all that AC produces a lot of noise pollution.
Contrast that with running on your local machine. Say you take a 5090 paired with some kind of high-end CPU; all of that still runs within the confines of your own home. It can’t reach the heights of consumption of the infrastructure the big corporations need to support using AI online.
If you’re using a model that a big corpo trained, it was more than likely trained in those big power-hungry data centers. That’s power already spent, so going forward I think it’s best that IF you want to use AI, you run it locally on the less power-hungry “infra”.
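To put a rough number on the local side, here’s a back-of-envelope sketch. Every figure in it (GPU wattage, system overhead, seconds per prompt, electricity price) is an illustrative assumption, not a measurement, so treat the result as ballpark only.

```python
# Rough per-prompt energy and cost for a local rig.
# All numbers below are illustrative assumptions, not measurements.

def wh_per_prompt(watts: float, seconds: float) -> float:
    """Energy in watt-hours to process one prompt at a given power draw."""
    return watts * seconds / 3600

# Assumed local rig: ~575 W for the GPU plus ~125 W for the CPU and
# the rest of the system, taking ~30 s to process one prompt.
energy_wh = wh_per_prompt(575 + 125, 30)

# Assumed residential electricity price of $0.15 per kWh.
cost_usd = energy_wh / 1000 * 0.15

print(f"{energy_wh:.2f} Wh per prompt")  # about 5.83 Wh
print(f"${cost_usd:.6f} per prompt")     # a small fraction of a cent
```

Of course this only counts inference at home; as noted above, the power spent training the model in a data center is already sunk either way.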



Thank you for explaining. I suppose the comparison to personal vehicles versus communal vehicles makes sense to me with regard to energy usage.