Why Copilot is Making Programmers Worse at Programming

Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools have obvious benefits in terms of productivity, there’s a growing concern that they may also have unintended consequences for the quality of code and the skill set of programmers.

Erosion of Core Programming Skills

One of the most significant risks of relying on tools like Copilot is the gradual erosion of fundamental programming skills. In the past, learning to code involved hands-on problem-solving, debugging, and a deep understanding of how code works at various levels—from algorithms to low-level implementation details.

AI assistants, while useful, often allow developers to bypass these steps. For instance, rather than deeply understanding the underlying structure of algorithms or learning how to write efficient loops and recursion, programmers can now just accept auto-generated code snippets. Over time, this could lead to developers who can’t effectively solve problems without an AI’s assistance because they never fully developed the problem-solving and critical-thinking skills necessary for programming.
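To make that concrete, here is a small, hypothetical illustration (not actual Copilot output): two Fibonacci implementations that both “work”, but only a developer who understands loops and recursion will notice that the first blows up exponentially.

```python
# Hypothetical illustration, not real Copilot output: two correct-looking
# Fibonacci functions with very different performance characteristics.

def fib_naive(n: int) -> int:
    # Naive recursion: correct, but makes on the order of 2^n calls as n grows.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)


def fib_iterative(n: int) -> int:
    # A simple loop: the same result in O(n) time and O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


if __name__ == "__main__":
    print(fib_naive(10), fib_iterative(10))  # both print 55
    # fib_naive(40) is already noticeably slow; fib_iterative(40) is instant.
```

Spotting that difference is exactly the kind of judgement that atrophies when every snippet is accepted as-is.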

Over-Reliance on Auto-Generated Code

With tools like Copilot, developers can quickly produce working code without fully understanding its mechanics. This fosters a kind of “code dependency”, where developers lean too heavily on AI-generated solutions without checking them for correctness, efficiency, or maintainability.

One key issue with auto-generated code is that it may not always be the optimal solution for a specific problem. Without proper vetting, programmers might accept inefficient, buggy, or insecure code that works at first glance but causes problems in the long run. This reliance reduces the incentive to refactor or even review the code, which could harm the codebase and team productivity over time.
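As a hypothetical example of code that “works at first glance” (the function and table names here are invented for illustration), consider a lookup that passes a quick manual test but is wide open to SQL injection:

```python
import sqlite3

# Hypothetical example: the query below returns the right rows in a quick test,
# but interpolating user input straight into SQL leaves it open to injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"  # injectable
    return conn.execute(query).fetchall()

# The vetted version uses a parameterised query. The difference is easy to miss
# unless the developer actually reads and understands the generated code.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```

Both versions satisfy a casual “does it run?” check; only a review informed by security fundamentals catches the difference.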

Lack of Ownership and Responsibility

AI-assisted code generation can lead to a phenomenon where developers become detached from the code they “write”. When a developer writes every line of code manually, they take full responsibility for its behaviour, whether it’s functional, secure, or efficient. In contrast, when AI generates significant portions of code, it’s easy to shift that sense of responsibility onto the AI assistant.

This lack of ownership can make developers complacent, thinking, “The AI generated it, so it must be correct”. But AI-generated code isn’t immune to errors, bugs, or even security vulnerabilities. Ultimately, it’s the developer’s job to review, understand, and refine the code, but with the convenience of Copilot, that diligence might fade.

Reduced Learning Opportunities

One of the most crucial aspects of becoming a great programmer is the continuous learning process. Every bug encountered, every design decision made, and every algorithm researched is an opportunity to learn something new. However, by providing quick solutions, Copilot and other LLMs may shortcut the learning process, giving developers answers without requiring them to dig deeper into why certain solutions work or don’t work.

When a tool hands you the solution immediately, you’re less likely to seek alternative approaches, experiment with different methods, or fully understand the trade-offs of different implementations. This convenience, while beneficial in the short term, reduces the number of learning experiences that contribute to long-term skill development.

Narrowed Creative Thinking

Programming is as much about creativity as it is about logic. A skilled programmer can approach a problem from multiple angles and come up with a variety of solutions, weighing the pros and cons of each. By offering suggestions that are often based on existing code or patterns, AI-driven tools can limit a developer’s exploration of novel approaches or innovative solutions.

While Copilot can suggest “what works”, it may also promote more conventional or popular patterns at the expense of encouraging out-of-the-box thinking. The fear is that AI tools might cause programming to become a more mechanical process of accepting suggestions rather than a creative pursuit that pushes boundaries.

Dependency on Proprietary Tools

Another downside of AI-driven tools is the dependency they create on proprietary platforms like GitHub Copilot. When developers start using these tools, they become increasingly reliant on them to generate, troubleshoot, or optimise their code. This creates a problem when the tools fail, change their terms of service, or become too expensive.

Additionally, this dependency on AI tools can isolate developers from broader programming communities and open-source tools that encourage peer collaboration and knowledge sharing, further hindering the growth of their skills.

False Sense of Expertise

Perhaps one of the most concerning effects of AI code generation is that it can create a false sense of expertise among developers. A developer might feel proficient in programming because they can quickly generate working code with the help of Copilot, even if they don’t fully understand the code.

This can be particularly dangerous when developers move into more complex areas of software engineering, like performance optimisation, concurrency, or security, where a deeper understanding of the code is critical. Without that foundational knowledge, developers can make mistakes that are costly in both time and money.
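A contrived sketch of the kind of concurrency mistake that looks fine without that foundation: a shared counter updated from several threads with a non-atomic read-modify-write, which silently loses increments.

```python
import sys
import threading

sys.setswitchinterval(1e-6)  # force frequent thread switches to expose the race

counter = 0

def increment_many(times: int) -> None:
    # The read and the write are separate steps, so another thread can update
    # the counter in between and this thread's increment overwrites it.
    global counter
    for _ in range(times):
        current = counter
        counter = current + 1

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 400000; typically prints less, and varies between runs
```

The code compiles, runs, and usually “works” on small inputs, which is precisely why a developer with a false sense of expertise will ship it.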

In Short

If you, as the developer, do not understand the code, do not understand how you arrived at it, and cannot solve the problem yourself, then copying and pasting code from an LLM and being spoon-fed the answer is not going to make you a better programmer. It’s going to make you reliant on the robot, and you will never be able to do anything the robot can’t already do.
