There are numerous interpretations of the paradigm shift that has swept through the IT world fairly recently, and although my analytical mind tends to stay on the sidelines and watch the movement of the waves, I now deem it appropriate to drop my own opinion into the vast ocean of ideas. Since this is my first article on AI, I'll try to offset my biases with some objective data so as not to disappoint too many readers.
My first observation points to a famous human limitation: our inability to properly grasp exponential growth. My reference here is the physicist Albert A. Bartlett, who gave thousands of presentations explaining the deceptively simple intricacies of exponential functions, offering practical examples of their applications across human endeavors, domains, and time.
Bartlett, writing in 1976, was particularly concerned about the environmental and societal implications of exponential growth in human population and energy consumption. He believed physicists had a responsibility to help the public understand that seemingly modest growth rates lead to "astronomical numbers" when repeatedly doubled. What's fascinating is that some exponential trends he worried about (like world population growth) have since slowed, but his core insight remains devastatingly accurate, especially as we watch it play out in AI capabilities.
Being rather fond of the game of chess, I find it almost impossible not to borrow from one of the legends surrounding its invention:
Each square holds double the grains of the previous one; by square 64, you reach 18.4 quintillion grains.
Your brain will take you to a number, but definitely not to the correct number, which is actually eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred and fifteen (I had to copy this from somewhere, of course).
• This wheat would weigh about 1,199,000,000,000 metric tonnes
• That's roughly 1,600 times global annual wheat production
• The first half of the chessboard: just 279 tonnes
• The second half: virtually all of it
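For the skeptical reader, here is a minimal sketch that reproduces these figures. The grain weight of roughly 65 mg is my own assumption (the legend does not specify one); everything else is straightforward doubling.

```typescript
// Minimal sketch of the chessboard arithmetic. The ~65 mg per grain figure
// is an assumption; the article only quotes the resulting totals.
const totalGrains = 2n ** 64n - 1n;       // squares 1..64: 18,446,744,073,709,551,615 grains
const firstHalfGrains = 2n ** 32n - 1n;   // squares 1..32 only

const gramsPerGrain = 0.065;              // assumed average weight of one wheat grain
const toTonnes = (grains: bigint) => (Number(grains) * gramsPerGrain) / 1e6;

console.log(toTonnes(firstHalfGrains).toFixed(0));     // ≈ 279 tonnes (first half of the board)
console.log((toTonnes(totalGrains) / 1e9).toFixed(0)); // ≈ 1,199 billion tonnes (whole board)
```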
Ray Kurzweil coined the phrase "the second half of the chessboard" to describe this phenomenon: the point at which exponential growth becomes so dramatic that it dwarfs everything that came before. We are now living in the second half of the chessboard for AI capabilities. The earliest recorded mention of the chessboard problem comes from Ibn Khallikan in 1256, which shows that humans have been struggling with exponential intuition for nearly 800 years.
Your brain is simply not wired to think this way. Opinions may differ on why, but mine is that the evolutionary biology of our species never required a specific adaptation to an environment where exponential functions hide behind a bushy tree or lurk in crocodile-infested rivers.
I mention all of this in connection with Moore's law, the one engineers love to bring up in drunken debates at the nearest local bar about the future of humanity and technology.
Exponential growth in computing power enabling each technological leap
Moore's law, originally formulated by Gordon Moore in 1965, predicted that "the complexity for minimum component costs has increased at a rate of roughly a factor of two per year." He later revised this to doubling every two years in 1975. This observation has largely held true for decades, driving semiconductor industry innovation and enabling exponential improvements in digital electronics.
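To make the compounding concrete, here is a toy projection of the doubling-every-two-years rule. The 1971 starting point of roughly 2,300 transistors on the Intel 4004 is a commonly cited figure rather than something from this article, and the result is an order-of-magnitude illustration, not a real transistor count.

```typescript
// Toy illustration of "doubling every two years": count(t) = count(t0) * 2^((t - t0) / 2).
const mooreProjection = (baseCount: number, baseYear: number, year: number): number =>
  baseCount * 2 ** ((year - baseYear) / 2);

// Starting from ~2,300 transistors in 1971 (Intel 4004), the rule lands in the
// tens of billions by the 2020s, which is roughly where flagship chips actually are.
console.log(Math.round(mooreProjection(2300, 1971, 2021))); // ≈ 2300 * 2^25 ≈ 77 billion
```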
2022 Status Check:
• Nvidia CEO Jensen Huang: "Moore's Law is dead"
• Intel CEO Pat Gelsinger: "Moore's Law is still viable"
• Reality: Semiconductor advancement began slowing around 2010, but experts predict it may continue for another 10-20 years through quantum computing, AI-driven chip design, and novel materials research.
What's particularly relevant for our AI discussion is that Moore's Law contributed significantly to the computational foundation that made modern AI possible—the exponential growth in processing power, memory capacity, and digital sensor technology that we now take for granted.
Just as we study weather seasonality and CO2 variability across hundreds or thousands of years in order to predict ice ages, we have to analyze the exponentiality of technology not merely from the invention of the first transistor in 1947, but 10,000+ years into the past: when man domesticated animals and multiplied the energy leverage of his effort to survive, when he built the first cities and increased the interaction complexity of large communities, and when he built the printing press, exponentially increasing the ability to transfer cultural, religious, and mathematical information between distant geographical regions.
Understand all of this, apply some sort of exponential function correctly, and you arrive quite naturally at artificial intelligence in the state it is in today. It's not magic; it's mathematics meeting sufficient computational resources at exactly the right moment in history.
• From 2019 to 2024, AI model training compute increased by 100,000x
• The time between GPT-3 and GPT-4 was shorter than most software development cycles
• We went from "AI can't write code" to "AI can build entire applications" in under 18 months
To steer this intentional and verbose tangent back toward the point, let me recall what the AI status quo looked like in 2022, right before ChatGPT hit the market like a brick. Many companies were experimenting with various applications of AI, a large share of them still at the proof-of-concept stage, with huge demands for computing power, GPUs, data, and energy acting as serious bottlenecks.
Fast forward to today and the shift is visible to everyone: public awareness has increased, the capabilities of 2022 can now be replicated with much smaller models, and adoption has matured, with over 70% of organizations using AI in some capacity. AI is moving from the research, development, and experimentation phase into production and core operations.
What's particularly fascinating is how the democratization happened almost overnight. Remember when running a decent language model required a server farm? Now my grandmother's laptop can run locally-installed AI that would have required a PhD and a million-dollar budget just three years ago. That's exponential progress in accessibility, not just capability.
There are a ton of articles that prop up AI as the next internet or the next revolutionary technology, and just as many that tell you AI doesn't really exist.
It might not be the self-aware AI you see in the movies, but I believe there are truths on both sides of the aisle. Each of our individual opinions may shift as we progressively gather information and gain experience with the concept of AI.
It's always fun to ponder how one's personal perception of this environment has shifted over time:
I must confess, my shift from the left to the right did not happen smoothly; it pulsed violently from one side to the other in the short term, depending on the specific frustrations I had that week with the available tools. It is not my intention to let my personal biases influence you in any way.
Speaking of developer perceptions, there's a fascinating split in our community that roughly follows the classic technology adoption curve, but with some uniquely AI-flavored twists:
These are the folks who were training their own models before ChatGPT was a twinkle in OpenAI's eye. They understand both the potential and the limitations intimately.
Cautiously integrating AI tools into their workflow, seeing real productivity gains but maintaining healthy skepticism about the hype.
Convinced that AI is either useless or will steal their jobs. Often change their tune after one successful debugging session with AI assistance.
These poor souls swing between "AI will solve everything" and "AI is completely useless" depending on whether their last prompt worked or hallucinated wildly.
Generative AI (GenAI) has been on a popularity uptrend, especially in the last 6-8 months, and the models are functioning better and better while requiring fewer resources. The impact on the ability to write code is significant, and software engineers' lives will never be the same.
You've heard it all before, but here is nonetheless my perspective on how AI is impacting programming:
AI is really good at repetitive, boring tasks; it lets you abuse the stress ball on your desk a little less. Of all the areas where this shows up in my personal work, this category has increased my productivity the most.
Conflicting packages or zero-day vulnerabilities in your node_modules folder? Want to increase the readability of your ugly code? Stuck writing incredibly boring, repetitive unit tests? Say no more: the current tools do it all.
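To give a flavor of what I mean by boring and repetitive, here is the kind of table-driven test I now happily delegate to an AI assistant and merely review afterwards. The slugify function and its cases are invented for illustration, not taken from any real project.

```typescript
// Illustration only: a table-driven unit test of the sort an AI assistant
// can churn out in seconds. The slugify function is a made-up example.
import { strictEqual } from "node:assert";

const slugify = (s: string): string =>
  s.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");

const cases: Array<[input: string, expected: string]> = [
  ["Hello World", "hello-world"],
  ["  Already--slugged  ", "already-slugged"],
  ["Ünïcödé & symbols!", "n-c-d-symbols"],
];

for (const [input, expected] of cases) {
  strictEqual(slugify(input), expected, `slugify(${JSON.stringify(input)})`);
}
console.log("all slugify cases pass");
```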
To be frank, my first spike of interest in integrating AI into my day-to-day activity was ignited when I randomly stumbled upon a Reddit post a few months ago describing how to use a couple of AI tools to identify hidden bugs in code.
That Reddit post is now obsolete, but I've personally seen my bugfixing success rate climb from roughly 20-30% six months ago to roughly 60-70% today, depending on how reproducible the issue is.
This one is hit-and-miss for me, but here are some things AI can help you estimate, especially when given enough context:
Compound this with faithful reports of the team's spent time: interface your tools with a ticketing system where your team consistently tracks effort, have them generate reports comparing time spent on a feature against the initial estimates, and within a few iterations the tendency of AI to overshoot timelines will lessen significantly. I know, an app that does all of this is desperately needed. (Do you know of one?)
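To sketch the feedback loop I'm imagining (the ticket shape, names, and numbers here are all invented, and the actual ticketing integration is left out), the core is simply comparing estimates against tracked effort and feeding the drift back into the next estimation pass:

```typescript
// Hypothetical sketch: compute how far actual effort drifts from initial estimates,
// so the overshoot factor can be fed back as context for future AI-assisted estimates.
interface Ticket {
  id: string;
  estimateHours: number;
  actualHours: number;
}

// Invented sample data; in practice this would come from your ticketing system.
const tickets: Ticket[] = [
  { id: "FEAT-101", estimateHours: 8, actualHours: 13 },
  { id: "FEAT-102", estimateHours: 5, actualHours: 6 },
  { id: "FEAT-103", estimateHours: 16, actualHours: 22 },
];

const overshoot =
  tickets.reduce((sum, t) => sum + t.actualHours / t.estimateHours, 0) / tickets.length;

console.log(`average overshoot factor: ${overshoot.toFixed(2)}x`); // 1.40x for this sample
```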
I've not yet applied any agentic methodologies to my code review process, but the examples I've seen lead me to believe that AI is very good at code review! Keep an eye out for a future article of mine once I get to explore it fully.
This one is precious to me, because managing and prioritizing technical debt directly impacts the way I deal with small projects that need to be done in record time. There is always that one project that needs a ton of features but forces you to cut corners in order to meet the deadline.
Managing technical debt with AI in this scenario helped me significantly lower my stress levels, although in my opinion that stress will never disappear entirely. Always do a cost versus benefit analysis of what you choose to sacrifice in the short run and what it will cost you to pay back in the long run.
Here's an impact I've noticed that doesn't get talked about enough: AI as the ultimate technical mentor. When learning a new framework or debugging in an unfamiliar codebase, AI can provide contextual explanations that are often more patient and detailed than Stack Overflow answers from 2013.
Junior developers can now get instant feedback on their code patterns, and senior developers can quickly grok new technologies without diving into 300-page documentation.
But let's be honest about the flip side: AI can also make you lazy. There's a real risk of becoming dependent on AI assistance for tasks you should understand fundamentally. It's like GPS navigation—incredibly useful, but use it exclusively and you lose your sense of direction.
At this point you may be asking: "Will this increase my development speed or decrease it?"
Your instinct is to say it will speed you up, but let me provide some nuance and some evidence to the contrary.
Recent studies provide some fascinating and sobering insights:
Study Results (16 experienced developers, 246 tasks):
• Developers took 19% longer to complete issues when using AI tools
• Expected AI to speed them up by 24%, but experienced slowdown
• Even after experiencing slowdown, still believed AI sped them up by 20%
• Tools used: Cursor Pro with Claude 3.5/3.7 Sonnet models
This suggests our perception of AI helpfulness might be influenced by psychological factors rather than actual productivity gains.
This study challenges a lot of assumptions about AI productivity gains, but it's worth noting the limitations: small sample size (16 developers), potential learning effects not fully explored, and results may not generalize to all software development contexts. The complexity of measuring AI's impact across different settings is becoming apparent.
Although these studies should be taken with a grain of salt, the reality is that market perception is heavily influenced by the current hype, as evidenced by headlines like "large company fires X engineers and replaces them with AI".
Should we be worried about the "non-technicals" boasting that they can transform overnight into competent vibe coders and build apps on their own with AI alone, leaving us poor software engineers in the dirt?
What they see:
• Beautiful UI
• Smooth interactions
• "Simple" features
What's actually needed underneath:
• Database architecture & optimization
• Security & authentication systems
• API design & integration
• Performance monitoring & debugging
• Scalability & load balancing
• Error handling & recovery
• Testing & quality assurance
• Deployment & DevOps
• Data validation & sanitization
• Cross-platform compatibility
The classic misconception: "How hard can it be to build an app?"
Most probably not: AI is a tool like any other. Just as a salesman cannot pick up a shovel and dig ditches efficiently and properly, an ex-Wall Street investor will not take the reins of some AI agent and smoothly create the app he needs.
Can that investor properly learn to code in anywhere from six months to two years and finally build that app? Answering that fully requires some follow-up questions. Is your investor friend building a small app that suggests 10 Spanish words for you to learn every day, with some gamification involved, or is he looking to hit the market with the next Duolingo?
The first option merits the answer: of course, provided he builds proper pattern recognition along the way through a fair amount of debugging, some security implementation, a little performance analysis, and a generous amount of logging.
The second option will still compel him to redirect his attention from your friendly ChatGPT to me instead. Yes, me... the lowly engineer who knows that in order to build "Triolingo" you will need to build, in parallel, an AI platform similar to the Birdbrain system, which creates personalized learning experiences for your future one million users by analyzing user data and identifying their learning needs.
He will also need a bunch of other Me's who already have experience scaling and A/B testing complex systems, plus a lot of Them-too's who can capture the nuances of many languages, so that the app doesn't flag perfectly valid native phrases as grammar mistakes, and so on.
Let's not even talk about the requirements of maintaining such a system, one of them being the context and visibility needed for debugging, without which tracking the near-infinite stream of errors becomes impossible. Take Duolingo as a real-world example: achieving their current scale required enormous human effort and coordination, orchestrating 200+ microservices, deploying multiple times a day, maintaining 99.9% uptime, and keeping 50%+ of their engineers actively monitoring systems so that millions of users can learn languages seamlessly. Note: check out Sentry.io's article about how Duolingo mitigated the risks related to this (ahem... neither Sentry nor Duolingo sponsors my article, by the way).
While the boundless enthusiasm of that crowd is admirable, the reality isn't all sunshine and rainbows. Major AI platforms have experienced noticeable performance degradations and intermittent service disruptions, highlighting the gap between promise and operational reality. Meanwhile, top-tier startup accelerators have shifted their acceptance criteria dramatically, with acceptance rates dropping to mere fractions of a percent while prioritizing domain expertise over AI-first pitches, suggesting a market correction toward substance over hype.
Truth: Some companies did fire engineers citing AI replacement
Also Truth: Many of these same companies quietly re-hired engineers 6-12 months later
Reality Check: Most "AI replacements" were actually disguised cost-cutting measures during economic uncertainty
Even at the individual consultancy level, open any social media thread about AI and the highlights will flood you with boasts like: "I quit my $250k-per-year job and now I'm running 2 SaaS products and 1 mobile app making $50k/month while travelling the globe on a perma-vacation..."
These might make you raise an eyebrow, or even involuntarily slam your mouse against the keyboard while murmuring about the stupidity of it all, but I have learned never to underestimate the ability of others to sell you pie in the sky.
There is a knack to adapting to new paradigms, and we are definitely in one right now. The question isn't whether AI will change how we work (it already has), but how we adapt to work with it rather than being replaced by it.
Be careful with the secondary effects of using AI as a developer: there's a parallel with the moment GPS took over driving from visual orientation. We gained incredible navigation capabilities but lost our innate sense of direction. The same risk exists with AI-assisted coding.
Before GPS: We memorized routes, understood geography, could navigate by landmarks
After GPS: We can get anywhere but are lost without our phones
Programming Parallel: AI can solve problems instantly, but are we losing our problem-solving muscles?
We software engineers possess several critical advantages in this AI-transformed landscape that position us uniquely for success:
Software development remains one of the few fields where continuous learning isn't optional—it's survival. We're already adapted to rapid change, new frameworks, and evolving best practices. AI is just another tool in our ever-expanding toolkit.
So despair not, my esteemed colleague, your days have not yet passed. The world still needs you; it is merely under the temporary illusion that its dependency upon you has reached its zenith. You will adapt, you will strive for excellence, and you will prevail.
We are not the poor fellows that manually lit the street gas lanterns worrying about the new thing called electricity, nor the ones that cleaned the horse dung from the streets cursing through their teeth each time they saw a wheeled monstrosity called the automobile.
The craze will settle into normalcy, the hype will mature into utility, and we'll continue doing what we've always done: solving problems, building systems, and pushing the boundaries of what's possible.
Besides, who else will teach the machines that 'undefined is not a function' isn't actually a philosophical statement?
Disclaimer: Neither Sentry nor Duolingo sponsors this article. All opinions and insights are the author's own, based on personal experience and publicly available information.