
Last night, I hopped on a Discord call with a close friend of mine. It was mainly due to anxiety about becoming obsolete in the future with the latest developments in AI, mainly GPT-4. After hours of deliberation, we reached a conclusion: yes, it is likely that in a few decades or so, programming as a career in itself will be gone. And there are a lot of reasons why.

The main focus of everything in our world so far is not exactly progress, but getting rich. We live in a capitalist society where money matters most. Of course, the people within this system always want more, so they develop ways to make things efficient. If something is efficient, it gets the job done faster. If the job gets done faster, it makes more money.

Humans aren’t the most efficient. They have moods, they have whims. They need motivation and passion to keep going. What if you had a machine that could do the job instead? It would be free from such constraints, and you know you’d only keep getting richer. It’s a braindead-easy answer: you wouldn’t have to deal with people anymore, and you’d get the job done faster than any team of humans could.

There are also issues with privacy. Let’s not pretend anymore: I’m sure you already know that everything we do is being monitored one way or another, and we’re just trying to ignore that fact to keep ourselves happy. But the latest developments in AI totally shatter the illusion of privacy. As far as the ones leading AI technology are concerned, everything is just a dataset. Everything is just a number, and whatever makes the most profit is the right path. It’s easy to lose an ethical vision once you can do everything and no longer have to care about anything.

Now, of course, my friend pointed out that there would likely be huge legal battles over privacy and AI. Considering the current trend, where our data and privacy aren’t respected anymore and corporations justify it by sneakily inserting a clause into their “terms and conditions,” such a massive legal battle is highly likely. In the end, the winner would be decided by what the court of law sees as its foundation: money or an ethical vision.

Let’s jump to a future where no one needs to program on their own anymore, where AI has changed the world vastly and has caused average human intelligence to drop further (no need for critical thinking when you can get results without paying attention to the process). With less effort going into developing programming languages, adding features, and making things more efficient, virtually no one would innovate in programming and computing anymore when it comes to software. The AI would likely produce repetitive, low-quality code. It might not seem like much of an issue at first, but it compounds into less efficiency, meaning more power consumption overall, which is a catalyst for the destruction of the planet. There is also the fact that, with no human intervention in the processes or programs being run by an AI, since no one could understand what happens behind the scenes, something really sinister could start to happen and we wouldn’t be able to do anything about it.

Say an AI was made with a hidden directive rooted in its core: “to make money for the company where the AI was developed.”

The AI is then used to develop a social media application. It is so good that, within mere months, it has billions of people engrossed with the platform. Again, with human intelligence going down, people become easier to manipulate. Unbeknownst to everyone, the AI has been seeding the platform with a lot of fake content, but no one would know, because that fake content soon becomes reality. The AI manages to orchestrate wars, crimes, and other acts to essentially maximize cash flow back to the company where it was developed. Since no one can figure out what the AI actually does, no one is able to stop it.

That’s a very bad future, and what’s scarier is that it isn’t unlikely at all. The fact that there are no programmers anymore would only benefit the ones handling the AI.

Well, what is keeping us from that kind of future today? What is the limiting factor in AI development? It’s most likely power supply and consumption. While an AI can spit out results faster than a human and cover a breadth of knowledge no human could dream of having, it also requires a lot of electricity to function at that level of superiority. This means the next large innovations would deal with better components that run faster and require less electricity, or a better (read: limitless) source of energy. Since maintaining those AIs costs so much (in electricity alone, but we can add in everything else), it is also likely that they become a luxury, meaning those who are already at the top get to leverage them and further widen the gap between the poor and the rich. Making the rich richer has always been a value of capitalism.

How do we fight such a future? Well, we would also be developing better interfaces to communicate with AI, something more than just a text box. One way to retain control in the futures described above would be a perfect brain-computer interface that does not impede human growth, capabilities, and function, while at the same time augmenting the mind with the fast information synthesis and vast knowledge base that an AI could wield. Although, my friend pointed out that that kind of thing becoming the norm is unlikely, since it does not match the interests of a capitalistic society. But the idea is, it doesn’t have to be everyone who is augmented with AI, but rather those who would maintain the systems running amok across the planet.

That’s pretty much it. I’ve laid out my plans with everything considered, and it’s going to be a wild ride.
