Do We Still Need To Know How To Code?
I am seeing an increasing number of discussions suggesting that Artificial Intelligence (AI) will soon make coding skills obsolete. With the rapid advancement of tools like GitHub Copilot and Gemini, some argue that traditional programming is no longer necessary. This post explores whether coding skills remain relevant or if we are moving toward a future of purely natural-language development.
Stating the Problem #
The core of the current debate often centers on a recurring question in developer communities like Quora:
Will the rise of AI make understanding programming languages obsolete, similar to how we no longer need to understand machine code?
To address this, we must break down the relationship between abstraction and oversight:
- Abstraction Levels: Just as we moved from Assembly to Python, AI acts as a new layer of abstraction. However, abstraction does not eliminate the need for logic; it simply changes the syntax.
- The Oversight Requirement: While AI generates code snippets, human developers must still provide the "reasoning" to ensure the code meets specific requirements and integrates into a larger system without security flaws.
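The abstraction point can be made concrete: the same logic survives each jump in abstraction, only the syntax changes. A small illustration (both functions are invented for this sketch, not from the post):

```typescript
// Lower-level style: explicit loop, manual index and accumulator management.
function sumLoop(values: number[]): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// Higher-level style: identical logic expressed through the reduce abstraction.
// The reasoning (accumulate every element) is unchanged; only the syntax moved.
function sumReduce(values: number[]): number {
  return values.reduce((total, v) => total + v, 0);
}
```

An AI prompt is one more rung on this same ladder: it compresses the syntax further, but the accumulate-every-element reasoning still has to come from somewhere.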
How AI Works in Programming (Good Prompts Matter) #
Effective AI-assisted programming depends entirely on the specificity of the prompt. A vague prompt yields a vague (and often broken) result.
- Vague Prompt: "Create an application that converts two colors in any format supported by the color.js library and output the result."
- The Issue: This lacks a target language, a platform (Web vs. CLI), a UI framework, and error-handling requirements.
By contrast, a refined, "engineering-focused" prompt provides the AI with a deterministic path:
Create a web application using React and TypeScript with Vite as the build tool. Use the color.js library to convert colors between supported color spaces. Include input fields for color entry, a dropdown for output format selection, and a visual swatch of the converted result.
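To illustrate the determinism a precise prompt buys, here is the kind of focused, typed helper such a prompt should yield. This sketch hand-rolls a hex-to-RGB conversion instead of calling color.js, and the function name is my own:

```typescript
interface Rgb {
  r: number;
  g: number;
  b: number;
}

// Convert a "#rrggbb" hex string to an RGB triple, rejecting malformed input
// instead of guessing — exactly the error handling the vague prompt never asked for.
function hexToRgb(hex: string): Rgb {
  const match = /^#([0-9a-f]{6})$/i.exec(hex);
  if (!match) {
    throw new Error(`Invalid hex color: ${hex}`);
  }
  const value = parseInt(match[1], 16);
  return {
    r: (value >> 16) & 0xff,
    g: (value >> 8) & 0xff,
    b: value & 0xff,
  };
}
```

Note that the error case exists only because the prompt (or the reviewer) demanded it; nothing in "convert two colors" forces an AI to handle bad input.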
Why Programming Knowledge Is Still Important #
Even the most detailed prompt can produce "hallucinations": plausible-sounding but incorrect code.
Correctness Is Not Guaranteed #
AI tools do not guarantee functional correctness. They may invent a library method that doesn't exist, introduce subtle logic errors that look correct at first glance, or call an older, unsupported library method. Google Cloud defines AI hallucinations as confident but false responses generated by LLMs.
In programming, this manifests as:
- Incorrect Predictions: Using a deprecated API or a non-existent parameter.
- False Positives / Negatives: Identifying a secure code block as a threat or a valid logic flow as a bug.
- Security Vulnerabilities: Writing "functional" code that lacks proper sanitization or authentication.
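The sanitization point is easy to demonstrate. Both functions below "work" in a happy-path demo, but only the second survives hostile input (a contrived sketch, not taken from any generated code):

```typescript
// Looks functional: interpolates user input straight into markup.
// An AI will happily generate this, and it passes every friendly demo.
function unsafeGreeting(name: string): string {
  return `<p>Hello, ${name}</p>`;
}

// Escape the characters HTML treats specially before interpolating.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function safeGreeting(name: string): string {
  return `<p>Hello, ${escapeHtml(name)}</p>`;
}
```

A reviewer without programming knowledge sees two functions that both render "Hello, Alice" and cannot tell which one is an XSS vector.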
Testing and Debugging #
A common misconception is that if an AI writes the code, the AI should also write the tests. However, this creates a "closed-loop" failure: if the AI misunderstood your requirements when writing the code, it will likely bake that same misunderstanding into the tests it generates.
Debugging the Tests Themselves #
AI-generated tests are code, which means they are subject to the same hallucinations and logic errors as the application itself. You must be able to audit a test to ensure it actually tests the right thing. Common issues include:
- Tautological Tests: The AI writes a test that passes simply because it asserts `true === true` or mirrors the exact logic of a flawed function, creating a false sense of security.
- Brittle Mocking: AI often over-mocks dependencies, meaning the test passes in isolation but the application fails when connected to a real database or API.
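Here is what a tautological test looks like in practice: the assertion restates the implementation, so it can never catch the bug (both functions are contrived examples):

```typescript
// A flawed function: should sum an array, but skips the first element.
function buggySum(values: number[]): number {
  let total = 0;
  for (let i = 1; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// Tautological: the "expected" value mirrors the flawed loop,
// so the test passes even though the function is wrong.
function tautologicalTest(): boolean {
  const input = [1, 2, 3];
  let expected = 0;
  for (let i = 1; i < input.length; i++) {
    expected += input[i];
  }
  return buggySum(input) === expected;
}

// Meaningful: asserts against an independently known answer (1 + 2 + 3 = 6).
function meaningfulTest(): boolean {
  return buggySum([1, 2, 3]) === 6;
}
```

The tautological test passes and the meaningful one fails; only the second tells you anything about the code.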
The "Golden Test" Strategy #
To build a resilient system, you should adopt a hybrid approach and manually write Core Logic Tests (or "Golden Tests").
- Protect the Business Logic: Write hand-coded tests for your "happy path" and critical edge cases before you prompt the AI for code. If the AI-generated code fails your manual tests, you know the code is wrong.
- Validation through Contrast: Use your hand-written tests as a source of truth. If an AI generates a new feature, your existing suite acts as a deterministic barrier, catching any regressions the AI might have introduced.
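A golden suite can be as simple as a table of hand-picked inputs and independently verified outputs, written before any prompting. The `slugify` contract below is a hypothetical stand-in for your business logic:

```typescript
// The contract: trim, lowercase, strip punctuation, collapse whitespace to hyphens.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "")
    .replace(/\s+/g, "-");
}

// Golden cases: written and verified by a human, never AI-generated.
const goldenCases: Array<[input: string, expected: string]> = [
  ["Hello World", "hello-world"],
  ["Don't Panic!", "dont-panic"],
  ["  Spaced  Out  ", "spaced-out"],
];

// Any regenerated implementation must still pass every golden case.
function runGoldenTests(fn: (s: string) => string): boolean {
  return goldenCases.every(([input, expected]) => fn(input) === expected);
}
```

When you later ask an AI to rewrite `slugify` for performance, `runGoldenTests` is the deterministic barrier: a regression fails loudly regardless of how plausible the new code looks.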
The Debugging Loop #
When a bug inevitably appears, the "prompt-to-fix" cycle is only effective if you can provide the AI with technical context. Without programming knowledge, you cannot interpret a stack trace or a memory leak. Knowing how to code allows you to isolate the bug using breakpoints or logs and communicate with precision. Instead of telling the AI "it doesn't work," you can explain exactly which hook is missing a dependency.
Managing Non-Determinism: The Infrastructure of Intelligence #
LLMs are non-deterministic, meaning they can produce different outputs for the same input. While "vibes-based" development works for a hobby project, professional software requires the AI to be wrapped in a deterministic shell. This is where specialized platforms provide the operational logic required to turn a creative "brain" into a reliable "system."
Why Guardrails Are Mandatory #
In a professional environment, "pretty good" is a liability. If you are building a billing system or a medical diagnostic tool, you cannot allow the AI to improvise the process. These platforms provide three critical guardrails:
- Predictability (Constraint Engines): Tools like AgentMap and DSPy act as the "lanes" on a highway. They force the AI to choose from pre-defined, valid paths. By converting open-ended prompts into structured logic, they ensure that if a user asks for a "refund," the system triggers the refund script every single time, rather than sometimes deciding to offer a "credit" instead.
- Resilience (Durable Execution): AI calls are prone to failure—rate limits, network timeouts, or garbled JSON responses. Temporal provides "durable execution" by recording every step of a workflow. If a server crashes or an API times out, the system "remembers" exactly where it was and resumes. This prevents the AI from repeating expensive work or, worse, losing its place in a high-stakes transaction.
- Connectivity (The Nervous System): An LLM is a brain in a jar. It cannot naturally "talk" to your internal SQL database or send a Slack message. Platforms like n8n and Augment Code provide the plumbing—the hard-coded nodes and semantic maps—that allow the AI to interact with the real world securely. They ensure the AI acts only through approved interfaces, preventing it from "hallucinating" access to systems it shouldn't touch.
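At its core, the "lanes on a highway" idea means never executing free-text model output directly: the answer is mapped onto a closed set of approved actions, and anything outside that set is rejected. A minimal sketch (the action names and handlers are invented for illustration; real constraint engines do far more):

```typescript
// The closed set of actions the system is allowed to perform.
type Action = "refund" | "escalate" | "close_ticket";

const handlers: Record<Action, (ticketId: string) => string> = {
  refund: (id) => `refund issued for ${id}`,
  escalate: (id) => `ticket ${id} escalated to a human`,
  close_ticket: (id) => `ticket ${id} closed`,
};

// Type guard: narrows an arbitrary string to a known Action.
function isAction(value: string): value is Action {
  return value in handlers;
}

// The LLM's raw text is treated as untrusted input, never as code.
function dispatch(llmOutput: string, ticketId: string): string {
  const proposed = llmOutput.trim().toLowerCase();
  if (!isAction(proposed)) {
    // Outside the lanes: refuse rather than improvise.
    throw new Error(`Unapproved action: ${proposed}`);
  }
  return handlers[proposed](ticketId);
}
```

If the model "decides" to offer a store credit, the dispatcher throws instead of inventing a new business process, which is exactly the every-single-time guarantee described above.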
Infrastructure vs. Orchestration: The LangChain Ecosystem #
Building production-grade AI requires developers to understand the LangChain ecosystem. You must distinguish between AI Orchestration (LangChain, LangGraph) and System Infrastructure (Temporal, n8n).
- LangChain and LangGraph (Orchestration): These "AI-First" tools manage the "thinking" process. They govern how an agent breaks down a prompt, searches a vector database, and reasons through a loop. LangGraph, in particular, handles complex multi-turn conversations and agentic state transitions.
- LangSmith (Observability): This diagnostic tool traces and debugs your AI chains. It reveals where a prompt failed or where an LLM's logic diverged.
Why Specialized Tools Still Matter #
While LangGraph manages the "state" of a conversation, it cannot replace durable infrastructure like Temporal.
- Persistence of Execution: LangGraph manages the logic of the next step, but Temporal ensures the survival of that step. If the server loses power, LangGraph checkpoints might help you resume the chat, but Temporal ensures the system finishes high-stakes business transactions—like charging a credit card—exactly once.
- Connectivity Scale: n8n provides hundreds of robust, hard-coded connectors for business apps. These connectors often outperform the general-purpose "tools" found in the LangChain ecosystem.
- Production Guardrails: In production code, developers often embed LangChain inside an activity managed by Temporal or as a node within an n8n workflow. This architecture separates probabilistic reasoning (LangChain) from deterministic execution (Infrastructure).
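The separation in that last bullet can be sketched without any particular platform: the probabilistic call lives inside a deterministic shell that validates its output and retries on failure. This is a toy version of what a Temporal activity formalizes; the retry count and validator are invented for illustration:

```typescript
// The probabilistic part: any async call that may fail or return garbage.
type ModelCall = () => Promise<string>;

// The deterministic shell: bounded retries plus output validation.
// Only output that passes the validator ever escapes.
async function reliably(
  call: ModelCall,
  isValid: (output: string) => boolean,
  maxAttempts = 3,
): Promise<string> {
  let lastError = "no attempts made";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const output = await call();
      if (isValid(output)) {
        return output;
      }
      lastError = `invalid output on attempt ${attempt}`;
    } catch (err) {
      lastError = String(err);
    }
  }
  throw new Error(`All ${maxAttempts} attempts failed: ${lastError}`);
}
```

In a real deployment the retry bookkeeping would be owned by the workflow engine, so a crashed process resumes where it left off instead of restarting from zero.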
Without these distinctions, an AI agent is just a demo. With them, it becomes an enterprise-grade software product.
Conclusion #
AI tools are transforming the role of the programmer from a "writer" to a "supervisor" and "architect." While you may spend less time typing boilerplate code, your foundational knowledge of logic, security, and system design is more important than ever. You don't just need to know how to prompt; you need to know how to build the infrastructure that keeps that prompt in check.