Let’s get one thing straight. If you’ve been listening to the so-called “tech futurists” and clueless CEOs, you’ve probably been fed the same line: AI is going to replace every entry-level developer.
It’s a hot take for a podcast, a great headline for a tech bro’s newsletter, and an even better way to get a company’s stock price to jump. But it’s also a complete, unadulterated lie. And it’s time to call it out for what it is.
AI is not a replacement for junior developers. It’s not a silver bullet that eliminates the need for human talent. It’s a tool. It’s an assistant. Anyone who tells you otherwise either doesn’t understand software development or is trying to sell you something.
C#’s Garbage Collector is a powerful tool for automatic memory management, but it doesn’t solve every resource management problem. While it efficiently cleans up unreferenced managed objects, there are specific resources that require a more deterministic approach to ensure they are released promptly. This is where the using statement comes in.
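Under the hood, a C# `using` block compiles down to a try/finally that calls `Dispose()` on exit. As an illustration only, here is that same pattern sketched in TypeScript; `TempResource` and `withResource` are made-up names standing in for a file handle, database connection, or any other resource that needs deterministic cleanup.

```typescript
// Illustrative sketch of the try/finally pattern that C#'s `using`
// statement generates. `TempResource` is a made-up stand-in for a
// file handle, DB connection, or similar.
class TempResource {
  public disposed = false;
  read(): string {
    if (this.disposed) throw new Error("resource already disposed");
    return "data";
  }
  dispose(): void {
    this.disposed = true; // release the resource promptly and deterministically
  }
}

function withResource<T>(res: TempResource, work: (r: TempResource) => T): T {
  try {
    return work(res); // equivalent to the body of a C# `using` block
  } finally {
    res.dispose();    // runs even if `work` throws -- deterministic cleanup
  }
}

const r = new TempResource();
const value = withResource(r, (res) => res.read());
```

The point is the `finally`: cleanup happens at a known moment, regardless of exceptions, instead of whenever the garbage collector gets around to it.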
Building APIs that handle large datasets can be a challenge. A common approach is to collect all the data into a list, convert it to JSON, and then send it all at once. But what happens when that dataset is massive? Your API might freeze up while it’s building the response, and you could end up with a huge memory footprint. Fortunately, ASP.NET Core provides a great solution for this problem: IAsyncEnumerable<T>.
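The idea behind `IAsyncEnumerable<T>` is that items are produced and serialized one at a time instead of being buffered into one giant list. TypeScript's async generators express the same concept; the sketch below is illustrative, and `loadBatch` is a made-up stand-in for a paged database query.

```typescript
// Sketch of the streaming idea behind C#'s IAsyncEnumerable<T>, using
// a TypeScript async generator. `loadBatch` is a made-up stand-in for
// a paged database query; nothing is buffered into one big array.
async function loadBatch(offset: number, size: number): Promise<number[]> {
  const total = 10; // simulate a 10-row "table"
  const rows: number[] = [];
  for (let i = offset; i < Math.min(offset + size, total); i++) rows.push(i);
  return rows;
}

async function* streamRows(batchSize: number): AsyncGenerator<number> {
  for (let offset = 0; ; offset += batchSize) {
    const batch = await loadBatch(offset, batchSize);
    if (batch.length === 0) return;
    yield* batch; // each row reaches the consumer as soon as it's read
  }
}

// The consumer (in ASP.NET Core, the JSON serializer) pulls one item
// at a time instead of waiting for the full result set.
const seen: number[] = [];
for await (const row of streamRows(4)) seen.push(row);
```

Memory stays proportional to one batch rather than the whole dataset, which is exactly what you want when the result set is massive.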
Yes, I’m saying it: most unit tests feel like a waste of time.
You write loads of them. They break when you refactor. They rarely catch real bugs. When you change implementations, you delete the tests. And when your build fails, it’s usually because a mocked method didn’t behave as expected — not because your code actually broke.
So why do we keep writing them?
Because while most unit tests are a waste of time, some aren’t — and knowing the difference is what separates a codebase held together by guesswork from one that’s confidently shippable.
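As a made-up illustration of the kind that isn't a waste (the `applyDiscount` function and its rules are invented for this example): the tests worth keeping pin down the observable behaviour of real branching logic, with no mocks to drift out of sync when you refactor.

```typescript
// Made-up example of a unit test that earns its keep: it exercises
// real branching logic through its public behaviour, mock-free.
function applyDiscount(total: number, code: string): number {
  if (total < 0) throw new Error("total must be non-negative");
  switch (code) {
    case "SAVE10": return Math.round(total * 0.9 * 100) / 100;
    case "HALF":   return total >= 100 ? total / 2 : total; // big orders only
    default:       return total; // unknown code is a no-op, not an error
  }
}

// Behaviour-focused assertions: these survive a rewrite of the
// implementation because they only state inputs and expected outputs.
const cases: Array<[number, string, number]> = [
  [200, "SAVE10", 180],
  [99,  "HALF",   99],  // threshold not met: no discount
  [100, "HALF",   50],
  [50,  "BOGUS",  50],
];
const allPass = cases.every(
  ([total, code, want]) => applyDiscount(total, code) === want
);
```

Swap the `switch` for a lookup table tomorrow and every one of those assertions still holds, which is precisely why they catch real regressions instead of implementation churn.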
Ubuntu’s Snap packaging system was introduced with the promise of universal Linux applications, easy updates, and robust security through confinement. While these goals are admirable, the reality of Snap for many desktop users has been a source of frustration, leading to a growing sentiment that “Ubuntu Snap sucks.” This isn’t just a matter of preference; it’s rooted in several fundamental design choices that hobble user experience, resource efficiency, and even security.
Everybody has heard of LeetCode-style interview tests, and given their prominence in tech news recently, you won’t be surprised to learn that this is another story highlighting their inherent flaws.
Recently, after a remote interview on Teams, I was given a set of three LeetCode-style interview questions. I was told to take “no more than 3 hours total” and to “write it using TypeScript”.
Thirty-nine minutes later, I had completed and submitted all three tests, and all three passed every test case. So now you ask: what’s the problem?
The headlines have been stark: tech giant after tech giant announcing significant layoffs. While the immediate impact on those made redundant is undeniable and deeply upsetting, the fallout from these decisions extends far beyond those who receive the dreaded news. The industry-wide repercussions are creating a climate of increased pressure, stifled career movement, and potentially lower earning potential for a vast number of tech professionals.
Let’s unpack the less visible, yet equally significant, ways tech redundancies are impacting everyone else.
The hum of the AI co-pilot has become a familiar soundtrack in the world of software development. These intelligent tools, promising increased efficiency and code generation prowess, have been embraced with open arms by many. But what happens when this reliance morphs into over-dependence? What are the potential pitfalls of blindly trusting algorithms we don’t fully comprehend, especially when they occasionally – or even frequently – get it wrong? And perhaps most worryingly, what becomes of the core skills that define a truly capable software developer?
We’ve all been there. You’ve tweaked a loop, maybe used a more efficient LINQ method, and patted yourself on the back for “optimising” your C# code. The profiler might even show a slight improvement. But then, the application still feels… sluggish. That’s because the initial steps in optimisation can be deceptively easy, leading to what we’ll call The Optimisation Lie: the belief that a few superficial changes equate to truly well-optimised software.
The truth is, while applying basic optimisations in C# is often straightforward, achieving significant and sustainable performance gains requires a much deeper understanding and a more strategic approach.
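To make the distinction concrete, here is an invented illustration (in TypeScript, not from the article): a superficially tidy loop versus a structural change. Swapping a per-element array scan for a `Set` turns an O(n·m) intersection into O(n+m), the kind of algorithmic improvement that dwarfs loop tweaks.

```typescript
// Illustrative (not from the article): real gains usually come from
// algorithmic changes, not superficial polish.

// Superficially fine, but still O(n * m): a linear scan per element.
function intersectNaive(a: number[], b: number[]): number[] {
  return a.filter((x) => b.includes(x));
}

// Structural change: Set lookups are O(1), so the pass is O(n + m).
function intersectFast(a: number[], b: number[]): number[] {
  const lookup = new Set(b);
  return a.filter((x) => lookup.has(x));
}

const left = [1, 2, 3, 4, 5];
const right = [4, 5, 6, 7];
const sameResult =
  JSON.stringify(intersectNaive(left, right)) ===
  JSON.stringify(intersectFast(left, right));
```

Both versions return the same answer; only one of them keeps returning it quickly when the inputs grow to millions of items. Finding that difference requires profiling and understanding, not pattern-matching on “efficient-looking” code.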
If you’ve been in software development for more than five minutes, you’ve probably had these acronyms beaten into your head:
SOLID: that five-headed monster of principles that’s supposed to make your code amazing.
DRY: because apparently typing the same logic twice will summon demons or something.
Don’t get me wrong, these principles exist for good reasons! They’ve saved countless developers from nightmare codebases. But here’s the kicker: they’re guidelines, not commandments handed down from the mountain.
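Here’s a made-up miniature (in TypeScript) of why treating DRY as a guideline matters: two rules that happen to look identical today but change for different reasons tomorrow.

```typescript
// Made-up miniature of why DRY is a guideline, not a commandment.
// These two rules look identical today...
function isValidUsername(s: string): boolean {
  return s.length >= 3 && s.length <= 20;
}
function isValidProjectName(s: string): boolean {
  return s.length >= 3 && s.length <= 20;
}
// ...but they change for different reasons. A shared isValidName()
// would couple them: the day project names are allowed 50 characters,
// you'd be adding flags to, or forking, the "reusable" helper anyway.
// Duplicating two lines is cheaper than the wrong abstraction.

const ok = isValidUsername("alice") && isValidProjectName("my-project");
```

Merging them isn’t wrong in every codebase; the point is that it’s a judgment call about why the code is similar, not a reflex triggered by the similarity itself.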