
The AI Craze

A humble personal insight in TODAY's context
📅 Published: September 29, 2025
📖 Reading time: ~12 minutes | ~3,200 words

There are numerous interpretations of the significant paradigm shift that has swept the IT world fairly recently, and although my analytical mind tends to stay on the sidelines to process the movement of the waves, I now deem it appropriate to drop my own opinion into the vast ocean of ideas. Since this is my first article on AI, I'll try to offset my biases with some objective data so as not to disappoint too many readers.

Exponential Blindspot

My first observation points to the famous human limitation of not being able to properly grasp exponential growth. My reference takes us to the famous physicist Albert A. Bartlett, who gave thousands of presentations explaining the deceptively simple intricacies of the exponential function, offering practical examples of its applications across human endeavors, domains, and time.

"The greatest shortcoming of the human race is man's inability to understand the exponential function." — Albert A. Bartlett, 1976, The Physics Teacher

Bartlett, writing in 1976, was particularly concerned about the environmental and societal implications of exponential growth in human population, resource consumption, and energy use. He believed physicists had a responsibility to help the public understand that seemingly modest growth rates lead to "astronomical numbers" when repeatedly doubled. What's fascinating is that some exponential trends he worried about (like the rate of world population growth) have since slowed, but his core insight remains devastatingly accurate, especially as we witness it in AI capabilities.

Being sympathetic to the game of chess, it's almost impossible for me not to borrow from one of the legends surrounding its invention:

If a chessboard were to have grains of wheat placed on its squares such that 1 grain was placed on the first square, 2 on the second, 4 on the third, and so on, doubling the number of grains on each subsequent square, how many grains of wheat would be on the chessboard by the final square?

Exponential Growth Visualization

[Figure: grains per square, doubling from 1 on the first square to roughly 9.2 quintillion on the 64th. Each square doubles the previous; by square 64, the board holds about 18.4 quintillion grains in total.]

Your brain will take you to a number, but definitely not to the correct one, which is actually eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred fifteen (I had to copy this from somewhere, of course).

2⁶⁴ - 1 = 18,446,744,073,709,551,615 grains

The Impossible Scale

This wheat would weigh 1,199,000,000,000 metric tons
That's 1,600 times more than global annual wheat production
The first half of the chessboard: just 279 tonnes
The second half: virtually all of it
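
If you'd rather not trust my copy-paste skills, a few lines of Python reproduce the numbers above; note that the ~65 mg per grain of wheat is my own assumption, chosen simply because it lands on the tonnage quoted here.

```python
# Chessboard wheat legend, checked the lazy way.
total_grains = sum(2**square for square in range(64))    # 1 + 2 + 4 + ... + 2^63
assert total_grains == 2**64 - 1                          # 18,446,744,073,709,551,615

first_half = sum(2**square for square in range(32))       # squares 1 through 32 only
grain_mass_kg = 0.000065                                   # ~65 mg per grain (my assumption)

print(f"Total grains:         {total_grains:,}")
print(f"Total mass:           {total_grains * grain_mass_kg / 1000:,.0f} tonnes")
print(f"First half of board:  {first_half * grain_mass_kg / 1000:,.0f} tonnes")
```

Running it confirms the asymmetry: the first 32 squares amount to a few hundred tonnes, while the full board lands at roughly 1.2 trillion tonnes.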

Ray Kurzweil coined the concept of the "second half of the chessboard" to describe this phenomenon—where exponential growth becomes so dramatic that it dwarfs everything that came before. We're now living in the second half of the chessboard for AI capabilities. The first recorded mention of this problem dates back to 1256 by Ibn Khallikan, showing that humans have been struggling with exponential intuition for nearly 800 years.

Your brain is simply not wired to think this way. The reasons can be debated, but my opinion is that the evolutionary biology of our species never required a specific adaptation to an environment where exponential functions hide behind a bushy tree or lurk in crocodile-infested rivers.

Moore's Law and the Great Acceleration

I mention all of this in connection with Moore's law, the one engineers love to bring up in drunken debates about the future of humanity and technology at your nearest local bar.

Moore's Law: 1970-2025 Exponential Journey

[Timeline: 1970, first microprocessor; 1990, the Internet; 2007, the iPhone; 2012, deep learning; 2022, ChatGPT. Exponential growth in computing power enabled each technological leap.]

Moore's law, originally formulated by Gordon Moore in 1965, predicted that "the complexity for minimum component costs has increased at a rate of roughly a factor of two per year." He later revised this to doubling every two years in 1975. This observation has largely held true for decades, driving semiconductor industry innovation and enabling exponential improvements in digital electronics.

⚡ The Moore's Law Debate

2022 Status Check:
Nvidia CEO Jensen Huang: "Moore's Law is dead"
Intel CEO Pat Gelsinger: "Moore's Law is still viable"
Reality: Semiconductor advancement began slowing around 2010, but experts predict it may continue for another 10-20 years through quantum computing, AI-driven chip design, and novel materials research.

What's particularly relevant for our AI discussion is that Moore's Law contributed significantly to the computational foundation that made modern AI possible—the exponential growth in processing power, memory capacity, and digital sensor technology that we now take for granted.
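
To put a rough number on that doubling, here is a quick back-of-the-envelope sketch; the starting point (the Intel 4004's ~2,300 transistors in 1971) and the uninterrupted two-year doubling are my simplifying assumptions, not a precise model.

```python
# Naive Moore's law projection: double the transistor count every two years.
start_year, start_transistors = 1971, 2_300   # Intel 4004 (approximate figure)
end_year = 2025

doublings = (end_year - start_year) / 2        # ~27 doublings
projected = start_transistors * 2**doublings

print(f"Doublings since {start_year}: {doublings:.0f}")
print(f"Projected transistor count in {end_year}: {projected:,.0f}")
# Prints roughly 3 x 10^11. Today's flagship chips report on the order of 10^11
# transistors, the same ballpark but below the naive projection, which fits the
# post-2010 slowdown mentioned in the debate box above.
```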

Just as forecasting weather seasonality and CO2 variability across hundreds or thousands of years is needed to predict ice ages, we have to analyze the exponentiality of technology thoroughly, not merely since the first transistor was invented in 1947, but 10,000+ years into the past: when man domesticated animals and multiplied the energy leverage of his effort to survive, when he built the first cities and increased the interaction complexity within large communities, and when he built the printing press, exponentially increasing the ability to transfer cultural, religious, and mathematical information between distant geographical regions.

Understanding all of this and then having to correctly apply some sort of exponential function will take you most definitely to the natural evolution of artificial intelligence in the state that it is today. It's not magic—it's mathematics meeting sufficient computational resources at exactly the right moment in history.

The Acceleration is Real

From 2019 to 2024, AI model training compute increased by 100,000x
The time between GPT-3 and GPT-4 was shorter than most software development cycles
We went from "AI can't write code" to "AI can build entire applications" in under 18 months

Turning this intentional and verbose tangent back toward the point, I recall how the AI status quo looked in 2022, right before ChatGPT hit the entire market like a brick. Many companies were experimenting with various applications of AI, a large share of them still at the proof-of-concept stage, requiring huge amounts of computing power and GPUs, with data and power being serious bottlenecks.

Fast forward to today and we can all see the shift toward increased public awareness: the performance of 2022-era capabilities can now be replicated with much smaller models, and adoption has matured, with over 70% of organizations using AI in some capacity. AI is moving from the research, development, and experimentation phase into production and core operations.

What's particularly fascinating is how the democratization happened almost overnight. Remember when running a decent language model required a server farm? Now my grandmother's laptop can run locally-installed AI that would have required a PhD and a million-dollar budget just three years ago. That's exponential progress in accessibility, not just capability.
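
To make "locally-installed AI" concrete, here is a minimal sketch using the Hugging Face transformers library; the model and prompt are illustrative picks on my part, chosen only because a model this small runs comfortably on an ordinary laptop CPU.

```python
# Minimal local text-generation sketch (illustrative model choice, not a recommendation).
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small enough for a laptop CPU
result = generator("Exponential growth means", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```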

Public Perception

There are tons of articles that prop up AI as the next internet or the next revolutionary technology, and tons of others that tell you AI doesn't really exist.

It might not be the self-aware AI you see in the movies, but I believe there are truths released into this universe residing on both sides of the aisle. Each of our individual opinions might shift as we progressively gather information and gain experience dealing with the concept of AI.

It's always fun to ponder about one's personal perception throughout time regarding this environment:

AI Attitude Spectrum: Personal Journey 2022-2025

1. AI is a psyop / doesn't really exist
2. AI is completely useless
3. Useless, might be useful in the future
4. AI not that useful: hallucinations everywhere!
5. AI has some use but is hard / frustrating to use at present
6. Some use, but the hype is insane
7. AI is useful
8. No need to know coding anymore, AI will do everything
9. AI is the solution to all my problems in life

[Figure: the spectrum marks my initial position in 2022 toward the left end and my current position in 2025 further to the right.]

I must confess, my shift from left to right did not happen smoothly; it pulsed violently from one place to another in the short term, depending on the specific frustrations I had that week with the available tools. It is not my intention for my personal biases to influence you in any way.

"The most honest thing I can tell you about AI is this: it's simultaneously more capable and more limited than most people think. It's like having a brilliant intern who never sleeps but occasionally hallucinates entire programming languages."

Developer Perceptions: A Reality Check

Speaking of developer perceptions, there's a fascinating split in our community that roughly follows the classic technology adoption curve, but with some uniquely AI-flavored twists:

🚀 The Early Adopters (15%)

These are the folks who were training their own models before ChatGPT was a twinkle in OpenAI's eye. They understand both the potential and the limitations intimately.

⚖️ The Pragmatic Majority (60%)

Cautiously integrating AI tools into their workflow, seeing real productivity gains but maintaining healthy skepticism about the hype.

🛡️ The Holdouts (20%)

Convinced that AI is either useless or will steal their jobs. Often change their tune after one successful debugging session with AI assistance.

🎢 The Oscillators (5%)

These poor souls swing between "AI will solve everything" and "AI is completely useless" depending on whether their last prompt worked or hallucinated wildly.

Impact of AI on Software Development

Generative AI (GenAI) has been on a popularity uptrend, especially in the last 6-8 months, and the models are beginning to function better and better with fewer required resources. The impact on the ability to write code is significant, and software engineers' lives will never be the same.

You've heard it all before, but here is nonetheless my perspective of how AI is impacting programming:

🤖 AUTOMATION

AI is really good at repetitive, boring tasks, which means you don't have to abuse the stress ball on your desk quite as much. Of all the areas where this shows up in my personal work, this category proved to increase my productivity the most.

Conflicting packages or zero-day vulnerabilities in your node_modules folder? Wanting to increase readability of your ugly code? Writing incredibly boring repetitive unit tests? Say no more, the current tools do it all.

🐛 BUGFIXING

To be frank, this ignited my first spike of interest in integrating AI into my day-to-day activity: a few months ago I randomly stumbled upon a Reddit post describing how to identify hidden bugs within code by leveraging a couple of AI tools.

That Reddit post is now obsolete, but I've personally seen my bugfixing efficiency increase from a ~20-30% success yield six months ago to ~60-70%, depending on the reproducibility context.

⏱️ ESTIMATIONS

For me this is hit & miss, but AI can genuinely help with estimation, especially when given enough context:

  • Your specs are squeaky clean and clear
  • Tech stack is already known
  • You have a good idea about team experience level

Compound this with faithful reports of the team's time spent: interface your tools with a ticketing system where your team consistently tracks effort, have these tools compare the time spent on a feature against the initial estimates, and within a few iterations AI's tendency to overshoot timelines will lessen significantly (a rough sketch of this feedback loop follows below). I know a future app that does this is desperately needed. (Do you know of one?)
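
To illustrate, here is a minimal sketch of that feedback loop with entirely made-up numbers; the ticket history, field layout, and correction logic are my own assumptions, not a description of any real ticketing integration.

```python
# Calibrating fresh estimates against the team's historical estimate-vs-actual bias.
from statistics import mean

# (initial estimate in hours, actual tracked effort in hours) for closed tickets
history = [(8, 13), (5, 6), (20, 31), (3, 4), (13, 18)]

# Average overshoot/undershoot factor across past work.
bias = mean(actual / estimated for estimated, actual in history)

def calibrated_estimate(raw_estimate_hours: float) -> float:
    """Scale a fresh (AI- or human-produced) estimate by the historical bias factor."""
    return raw_estimate_hours * bias

print(f"Historical bias factor: {bias:.2f}x")
print(f"A raw 10h estimate becomes ~{calibrated_estimate(10):.1f}h after calibration")
```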

👁️ CODE REVIEW

I've not yet applied any agentic methodologies to my code review process, but the examples I've seen lead me to believe that AI is very good at code review! Keep an eye out for a future article of mine when I get to explore it fully.

⚡ TECHNICAL DEBT

This is precious to me, because managing and prioritizing technical debt directly impacted the way I deal with small projects that need to be done in record time. There is always that one project that needs a ton of features, where you have to cut corners in order to meet deadlines.

Managing my technical debt in this scenario with AI helped me significantly lower my stress levels, although in my opinion this is a thing that will never disappear. Always do a cost vs. benefit analysis of what you choose to sacrifice in the short run versus the costs you will have to pay in the long run.

🧠 LEARNING & ONBOARDING

Here's an impact I've noticed that doesn't get talked about enough: AI as the ultimate technical mentor. When learning a new framework or debugging in an unfamiliar codebase, AI can provide contextual explanations that are often more patient and detailed than Stack Overflow answers from 2013.

Junior developers can now get instant feedback on their code patterns, and senior developers can quickly grok new technologies without diving into 300-page documentation.

⚠️ The Double-Edged Sword

But let's be honest about the flip side: AI can also make you lazy. There's a real risk of becoming dependent on AI assistance for tasks you should understand fundamentally. It's like GPS navigation—incredibly useful, but use it exclusively and you lose your sense of direction.

The Million Dollar Question

At this point you may be asking "will this increase my development speed or decrease it"?

Your tendency is to say it will increase it, but let me provide some nuance and give you some evidence to the contrary.

Recent studies provide some fascinating and sobering insights:

📊 METR Study: The Productivity Paradox

Study Results (16 experienced developers, 246 tasks):
• Developers took 19% longer to complete issues when using AI tools
• Expected AI to speed them up by 24%, but experienced slowdown
• Even after experiencing slowdown, still believed AI sped them up by 20%
• Tools used: Cursor Pro with Claude 3.5/3.7 Sonnet models

This suggests our perception of AI helpfulness might be influenced by psychological factors rather than actual productivity gains.

This study challenges a lot of assumptions about AI productivity gains, but it's worth noting the limitations: small sample size (16 developers), potential learning effects not fully explored, and results may not generalize to all software development contexts. The complexity of measuring AI's impact across different settings is becoming apparent.

Although these studies should be taken with a grain of salt, the reality is that market perception is heavily influenced by the current hype, and the evidence lies in headlines like "large company fires X engineers, replacing them with AI".

The "Non-Technical" Invasion Myth

Should we be worried about the "non-technicals" boasting that they can transform overnight into competent vibe coders and build apps on their own with AI alone, leaving us poor software engineers in the dirt?

🏔️ The Development Complexity Iceberg

What Users See (10%)

• Beautiful UI
• Smooth interactions
• "Simple" features

What Engineers Build (90%)

• Database architecture & optimization
• Security & authentication systems
• API design & integration
• Performance monitoring & debugging
• Scalability & load balancing
• Error handling & recovery
• Testing & quality assurance
• Deployment & DevOps
• Data validation & sanitization
• Cross-platform compatibility

The classic misconception: "How hard can it be to build an app?"

Most probably not. AI is a tool like any other. Just as a salesman cannot efficiently and properly take a shovel and dig ditches, neither will an ex-Wall Street investor take the reins of some AI agent and smoothly create the app that he needs.

Can that investor properly learn to code anywhere from 6 months to 2 years and finally build that app? Completely answering that question would need some follow-up ones. Is your investor friend building a small app that suggests 10 words in Spanish for you to learn every day with some gamification involved or is he looking to score on the market with the next Duolingo?

The first option merits the answer: of course, provided he also develops proper pattern recognition, does a fair amount of debugging, implements some security, performs a little performance analysis, and adds a generous amount of logging.

The second option will still compel him to redirect his attention from your friendly ChatGPT to me instead. Yes, me... the lowly engineer that knows that in order to build "Triolingo" you will need to build in parallel an AI platform similar to the Birdbrain system which creates personalized learning experiences for your future 1 million users by analyzing user data and identifying their learning needs.

Then, he will need a bunch of other Me's who already have experience with scalability and with A/B testing complex systems, plus a lot of Them-too's who can capture the nuances of many languages and avoid issues like grammar checks flagging phrases that are perfectly valid to native speakers, and so on.

Let's not even talk about the requirements of maintaining such a system, one of them being the context and visibility for debugging without which tracking the seemingly infinite number of errors becomes impossible. Take Duolingo as a real-world example: achieving their current scale required unprecedented levels of human effort and coordination, orchestrating 200+ microservices, multiple daily deployments, maintaining 99.9% uptime, and ensuring 50%+ of engineers actively monitor systems to keep millions of users learning languages seamlessly. Note: check out Sentry.io's article about how Duolingo mitigated all the risks related to this (ahem... neither Sentry nor Duolingo sponsors my article, by the way).

Market Reality Check

While their boundless enthusiasm is admirable, the reality isn't all sunshine and rainbows. Recent infrastructure observations reveal concerning reliability patterns in AI systems—major platforms experiencing noticeable performance degradations and intermittent service disruptions that highlight the gap between promise and operational reality. Meanwhile, top-tier startup accelerators have shifted their acceptance criteria dramatically, with acceptance rates dropping to mere fractions of a percent while prioritizing domain expertise over AI-first approaches, suggesting a market correction toward substance over hype.

The Real Numbers

Truth: Some companies did fire engineers citing AI replacement
Also Truth: Many of these same companies quietly re-hired engineers 6-12 months later
Reality Check: Most "AI replacements" were actually disguised cost-cutting measures during economic uncertainty

  • There is some truth to it: a lot of redundant software engineering jobs existed thanks to the IT bubble
  • Some companies realized it was a mistake and re-hired some of those fired engineers after seeing the real level of AI usefulness within their own company. This happened under conditions in which cutting costs drew praise from investors and mistakes in this direction did not cause too many unforeseen consequences.

Even at the individual consultancy level, open any social media thread about AI and the highlights will flood you with boasts like: "I quit my $250k per year job and now I'm running 2 SaaS products and 1 mobile app making $50k/month while travelling the globe on a perma-vacation..."

These might make you raise an eyebrow or even involuntarily beat your mouse over the keyboard murmuring about the stupidity of all of this, but I learned to never underestimate the ability of others to sell you pie in the sky.

📈 Market Corrections

  • Companies rehiring developers after AI "experiments" failed
  • AI bubble showing signs of deflation in some sectors
  • Reality setting in about AI limitations in complex systems

💼 Industry Adjustments

  • Problems with solvency of AI companies, currently offset only by fleeting enthusiasm of investors and state-level promotion of AI adoption
  • Infrastructure reliability concerns surfacing as AI systems scale, with uptime metrics revealing gaps between promised and delivered performance
  • Hiring cheap labor with nasty effects - Builder.ai scandal exposed: 700 Indian engineers pretending to be AI
  • Startup accelerator pivot from AI-first to expertise-first acceptance criteria, reflecting market maturation

The New Normal

There is this thing when adapting to new paradigms—and we're definitely in one right now. The question isn't whether AI will change how we work (it already has), but how we adapt to work with it rather than being replaced by it.

Avoiding the GPS Effect

Be careful with the secondary effects of using AI as a developer: there's a parallel with the time GPS took over from visual orientation while driving. We gained incredible navigation capabilities but lost our innate sense of direction. The same risk exists with AI-assisted coding.

🧭 The Navigation Analogy

Before GPS: We memorized routes, understood geography, could navigate by landmarks
After GPS: We can get anywhere but are lost without our phones
Programming Parallel: AI can solve problems instantly, but are we losing our problem-solving muscles?

The Advantage We Have

We software engineers possess several critical advantages in this AI-transformed landscape that position us uniquely for success:

  • Continue our everyday betterment and learning: few domains require you to learn and assimilate new information every day for the rest of your career, and this is actually an advantage
  • Focus on higher-order skills—system design, architecture, user experience, business logic
  • Build complementary skills—AI prompt engineering, model fine-tuning, and AI system integration

Software development remains one of the few fields where continuous learning isn't optional—it's survival. We're already adapted to rapid change, new frameworks, and evolving best practices. AI is just another tool in our ever-expanding toolkit.

Conclusion

So despair not, my esteemed colleague, your days have not yet passed. You are still required by the world, the only thing is that the world is under the temporary illusion that the dependency upon you has reached its zenith. You will adapt, strive to reach excellence and you will prevail.

We are not the poor fellows that manually lit the street gas lanterns worrying about the new thing called electricity, nor the ones that cleaned the horse dung from the streets cursing through their teeth each time they saw a wheeled monstrosity called the automobile.

The craze will settle into normalcy, the hype will mature into utility, and we'll continue doing what we've always done: solving problems, building systems, and pushing the boundaries of what's possible.

Besides, who else will teach the machines that 'undefined is not a function' isn't actually a philosophical statement?

References

Inability of human beings to understand the exponential function

Chess Invention Story

Moore's Law

AI Development Studies

Industry Case Studies

Disclaimer: Neither Sentry nor Duolingo sponsors this article. All opinions and insights are the author's own, based on personal experience and publicly available information.


About the Author

Adrian Tanca
Chief Technical Officer at Avangarde Software

Passionate about the intersection of human creativity and technological innovation.