
I think we can agree that AI is here to stay, and it is having a profound impact on our day-to-day lives. Unlike previous hypes, like crypto or 3D TVs, this one is still going strong, with ever-increasing significance. Do you remember those, the 3D TVs? Everyone thought a 3D theater experience at home would become as common as having a TV set.
Well, crypto is still around, and there is still big money in it, but it feels like it has lost a lot of the relevance it had around Covid times.
AI, meanwhile, still seems to be on the rise, seeping more and more into all aspects of our lives. I think this is due to its wider range of uses.
The scariest thing about all this is how reckless AI development and usage seem to be. Everyone wants in on it. There is a lot of money moving around it, just as there was during the crypto boom. No wonder the hype grows. What makes it more serious is that AI already has a profound impact on our lives.
Think of all the AI-generated content that is now cheap and easy to create, used to misinform people, to make them more addicted to unhealthy services, or even to distort our perception of reality.
People have lost jobs in the transition to AI-based solutions, while hiring is done with the help of bots pre-screening CVs.
At the same time, AI can open up new job opportunities. There is a need for engineers who can create and tune the models, as well as for people who understand how prompting works and can make the models produce the desired results, something like a prompt engineer. And depending on how much we trust AI, there might be a need for people whose sole job is to supervise AI results and decisions.
I think the dream is that AI will revolutionize the way people work and live their private lives. What if machines could take over all the shitty jobs, so that humans could focus on tasks, jobs, and activities that are more meaningful, that we actually enjoy? That is the dream. Unfortunately, society and the global economy aren't set up in a way that could accommodate such a lifestyle globally.
I’m reading a book by David Graeber, titled Bullshit Jobs. It tackles exactly this problem. The preface mentions a prediction from the 1930s that, thanks to technological development, developed countries would achieve a 15-hour work week by the century’s end.
Well, where is it? Cue the John Travolta meme. There are interesting reasons why this has not happened yet; I might share some thoughts once I finish the book.
As with crypto, the energy requirements of these AI models are huge. A whole nuclear power plant is going to be started up just to power AI servers, and on top of that it will be operated by a for-profit company. This in itself might not be a bad thing, but I somehow don’t feel confident that all decisions regarding the operation of that plant will be made in the interest of civilians. I have nothing against nuclear power, nor do I fear another Chernobyl in general; technology has improved so much since then. But have we, as humans and decision-makers, matured alongside it?
The marketing around AI has become so annoying. Everything you buy is sold to you with some sort of AI label. You have fridges, vacuum cleaners, even air fryers with AI! Just because there is some algorithm controlling something inside, and maybe not even that, they call it AI-powered. It seems AI must be mentioned in every possible way in every product. And what do these AIs actually do? The same thing the timer on your air fryer told you before. It is just ridiculous.
The worst thing is that a lot of these products and their new AI features try to solve problems, or provide functionality, that no one actually asked for. Or the feature is just plain dumb.
Don’t get me wrong, I’m not completely against AI; I’m not calling for its abolition or for stopping its development. I believe it is an incredible tool. But like every other tool mankind has ever created, it can be used for good and for bad. Just like a good old hammer: it can be used to build and create, but it can also be used as a weapon. The same applies to AI.
There are amazing things that can be done with AI. It can improve so many things in our day-to-day lives.
In photo editing, Lightroom’s de-noise feature turns noisy pictures into amazing images. It is a great tool for professionals and amateurs alike, who may not be able to afford the best (and usually expensive) gear that would allow for better low-light photography.
Prompting for new ideas, brainstorming, or testing different solutions quickly is another great use of these generative models.
We can use AI for science. Have you heard of AlphaFold? It predicted the folded structures of hundreds of thousands of proteins, structures that can be used to cure diseases and a million other things. Finding the structure of a single protein used to take scientists years.
These are all amazing advancements in technology that humanity has been hoping for since the conception of artificial intelligence.
What I find lacking, or at least what I feel lags behind, is proper legislation and common sense. Decisions and laws need to be put in place fast, to direct AI’s future in a way that actually supports and enhances people’s lives while also protecting us from its misuses.
We also need common sense from ourselves, AI’s users. What do we use it for? How do we use it? Have we thought enough about the repercussions of what we generate, what we upload, and what we teach these models?
I know this is a difficult topic for both civilians and legislators, especially in a field changing so quickly. AI holds as many promises as it does dangers. Obviously we want to leverage what this incredible technology has to offer, but we need to tread carefully.
In my opinion, what makes this whole situation scary is that privately owned, for-profit companies are at the forefront of AI development. However naive we are, we need to know that they will make the decisions that profit their company, raise its valuation, and keep the investors satisfied. Whatever they say, their highest priority is not safety or the individual’s well-being. It is money.
I don’t trust these companies to make the right decisions, especially in light of recently uncovered scams: the Honey scandal, companies quietly removing parts of their EULAs without a trace or any communication of the changes, or simply not caring about security at all (e.g. DeepSeek’s unencrypted data storage and transfer).
And another big can of worms: how these models were built. Whose intellectual property were they trained on, without consent or proper compensation? The damage is already done.
This again proves that we need legislation and individual action, to prevent future abuse and to right some wrongs.
We need to think, individually and collectively, about this whole topic. Let’s take a breather and consider how we want to utilize this technology and how we can make it benefit all mankind. Then we need to take action: support causes that call for legislative action and make better-informed individual decisions.
Until next time, take care.
Mátyás
P.S. I used AI to generate the feature image of this piece. Is this wrong? Am I helping our future AI overlords?