
Are Cars Dependent on Computer Programming? More Than Just Code Behind the Wheel

[Ed. note: While we take some time to rest up over the holidays and prepare for next year, we are re-publishing our top ten posts for the year. Please enjoy our favorite work this year and we’ll see you in 2024.]

The buzz around Artificial Intelligence (AI) is deafening, filled with tales of groundbreaking advancements. Amidst this excitement, a wave of concern washes over us in the software development world: could AI render our jobs obsolete? The vision painted is one where business executives and product managers bypass programmers entirely, directly instructing AI to build software to their exact specifications. Having spent 15 years translating vague ideas into functional software, I find it hard to fully subscribe to this anxiety.

While coding presents its challenges, the technical hurdles are rarely the biggest roadblocks. Syntax, logic, and programming techniques, once mastered, become relatively straightforward – most of the time. The real complexity lies in defining what the software should actually do. The most demanding aspect of software creation isn’t writing lines of code; it’s crafting clear, unambiguous requirements. And these crucial software requirements remain firmly in the human domain.

This article delves into the intricate relationship between software requirements and the code itself, exploring what AI truly needs to deliver meaningful results. We’ll also examine how this applies to complex systems we rely on every day – like modern cars and their increasing dependence on computer programming.

It’s Not a Bug, It’s a Feature… Wait, It’s Definitely a Bug

Early in my career, I joined a project already in motion, tasked with boosting the team’s output. The software’s purpose was to configure custom products on e-commerce platforms. My specific assignment was to generate dynamic terms and conditions. These terms were conditional, varying based on the product type and the customer’s location due to different state legal requirements.

At one point, I stumbled upon what seemed like a flaw. A user could select a product type, generating the correct terms, but later in the process, they could switch to a different product type while retaining the initially generated terms. This directly contradicted a core feature outlined and signed off on in the business requirements.

Naive and fresh-faced, I approached the client with a question: “Should we remove the option that allows users to override the correct terms and conditions?” The response I received is etched in my memory. With unwavering certainty, he declared, “That will never happen.”

This was a seasoned executive, deeply familiar with the company’s operations, specifically chosen to oversee this software project. The ability to override the default terms was actually requested by this same individual. Who was I, a junior developer, to question such authority, especially from a client funding our work? I dismissed my concern and moved on.

Months later, mere weeks before the software launch, a client-side tester reported a defect and assigned it to me. Reading the defect details, I couldn’t help but laugh. The very issue I had flagged – overriding default terms, the scenario deemed impossible – was happening. And guess who was tasked with fixing it? And who bore the initial blame?

The fix was simple, and the bug’s impact was minimal. However, this experience became a recurring theme throughout my software career. Conversations with fellow software engineers confirmed I wasn’t alone. The problems grew larger, more complex, and costlier, but the root cause often remained the same: unclear, inconsistent, or simply incorrect requirements.

Diagram illustrating the software development problem where expectation is different from reality due to requirement issues.

AI Today: Chess Masters Versus the Real World of Self-Driving Cars

Artificial intelligence, a concept that has been around for decades, has recently surged into public consciousness with highly publicized advancements, sparking both excitement and anxiety. While AI has demonstrated remarkable success in certain fields, its capabilities are often misunderstood, particularly when we consider complex, real-world applications like self-driving cars and the intricate computer programming they rely on.

One early triumph of AI lies in the game of chess. Since the 1980s, AI has been applied to chess, and it’s now widely accepted that AI algorithms surpass human chess-playing abilities. This isn’t surprising when you consider that chess operates within a FINITE set of parameters. The game begins with 32 pieces on a 64-square board, governed by well-defined, universally accepted rules, and a singular, clear objective: checkmate. Each turn presents a finite number of possible moves. Chess, at its core, is a rules-based system, perfectly suited for AI. AI systems excel at calculating the consequences of every move, selecting the optimal strategy to capture pieces, gain positional advantage, and ultimately win.
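To make the "finite" point concrete, a few lines of Python using the open-source python-chess package can enumerate every legal move from any position. This is only a toy illustration of why the game is so amenable to rules-based systems:

import chess  # third-party package: pip install python-chess

board = chess.Board()                  # the standard starting position
first_moves = list(board.legal_moves)  # every move the rules allow right now
print(len(first_moves))                # 20 legal first moves for White

board.push(first_moves[0])             # play one of them
print(len(list(board.legal_moves)))    # Black's options: again a small, enumerable set

At every turn the full set of options can be listed, scored, and searched. Nothing about a real road offers that luxury.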

However, the landscape shifts dramatically when we consider self-driving cars. Automakers have long promised autonomous vehicles, and while some cars now possess self-driving capabilities, these are often conditional and require human oversight. In many situations, the “self-driving” feature is more of an advanced driver-assistance system, demanding the driver’s attention and potential intervention.

Similar to chess-playing AI, self-driving cars rely heavily on rules-based engines to make decisions. Yet, unlike the clearly defined rules of chess, the rules for navigating every conceivable real-world driving scenario are far from definitive. Driving involves countless split-second judgments – avoiding pedestrians, maneuvering around obstacles, navigating complex intersections. These judgments, often intuitive for humans, are critical for safety: they can be the difference between a safe arrival and a trip to the hospital.

In technology, the gold standard for reliability is often described in terms of “nines of availability”—aiming for 99.999% or even 99.9999% uptime for critical systems. Achieving the first 99% of availability is relatively straightforward; it still permits more than 87 hours of downtime per year. However, each additional “9” dramatically increases the complexity and cost. Reaching 99.9999% availability, which translates to mere seconds of downtime per year, demands exponentially greater planning, effort, and resources.

Availability    Downtime per year
99%             87.6 hours
99.9%           8.76 hours
99.99%          About 52.6 minutes
99.999%         About 5.3 minutes
99.9999%        About 31.5 seconds
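The arithmetic behind that table is trivial; a few lines of Python are enough to reproduce it. This is just a back-of-the-envelope check, not anything resembling production monitoring code:

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds, ignoring leap years

for label, availability in [
    ("99%", 0.99),
    ("99.9%", 0.999),
    ("99.99%", 0.9999),
    ("99.999%", 0.99999),
    ("99.9999%", 0.999999),
]:
    downtime_seconds = (1 - availability) * SECONDS_PER_YEAR
    print(f"{label:>8}  {downtime_seconds / 3600:9.3f} hours  ({downtime_seconds:,.1f} seconds)")

The numbers are easy to compute; what gets exponentially harder is the engineering required to stay inside them.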

Even as AI driving systems improve, the inherent risk of accidents and fatalities remains. Human drivers, of course, are also fallible. The crucial question becomes: what level of risk is acceptable for autonomous vehicles? Governments and the public will likely demand a safety standard at least as good as, if not significantly better than, human driving.

The immense challenge in achieving this level of safety stems from the vastly greater number of variables in driving compared to chess. These variables are, crucially, NOT FINITE. While the first 95% or 99% of driving scenarios might be predictable and manageable, the remaining edge cases are incredibly complex and diverse. Consider unexpected events: other drivers’ unpredictable actions, road closures, construction zones, accidents, sudden weather changes, or even faded road markings. Training an AI model to recognize and respond appropriately to these anomalies is extraordinarily difficult. Each edge case, while potentially sharing some characteristics with others, is often unique, making it incredibly challenging for AI to generalize and react flawlessly every time. This inherent complexity highlights why, while cars depend on computer programming more than ever, achieving true autonomy is a monumental task.

AI Can Generate Code, But Not Necessarily Software: The Human Element Remains Key

Creating and maintaining software shares far more similarities with driving than with playing chess. Software development involves a multitude of variables and often relies on nuanced judgment calls rather than rigid rules. While there’s a desired outcome when building software, it’s rarely as singular and clearly defined as winning a chess game. Software is rarely “finished”; it’s a living entity, constantly evolving with new features, bug fixes, and updates. Unlike chess, where a game concludes with a win or loss, software is an ongoing process.

In software development, we strive to impose structure and predictability through technical specifications. Ideally, these specs meticulously detail expected user interactions and program workflows – “for an e-sandwich purchase: user clicks this button, system creates this data structure, this service runs.” However, reality often deviates from this ideal. More often than not, developers are given wishlists disguised as feature specs, napkin sketches of interfaces, and ambiguous requirements documents, leaving them to fill in the gaps and make critical design decisions.
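To make that concrete, here is a toy sketch of what an unambiguous requirement can look like when it is written down as an executable check. Every name in it (SandwichOrder, place_order, the terms_version field) is invented for illustration, not taken from any real project:

from dataclasses import dataclass

@dataclass
class SandwichOrder:
    product_type: str
    state: str  # determines which terms and conditions apply

@dataclass
class OrderReceipt:
    order_id: int
    terms_version: str

def place_order(order: SandwichOrder) -> OrderReceipt:
    """Stub for the service being specified; the real implementation goes here."""
    raise NotImplementedError

def test_purchase_attaches_state_specific_terms():
    # The requirement, stated as a check the finished system must pass:
    # an order placed from California must carry the California terms.
    order = SandwichOrder(product_type="e-sandwich", state="CA")
    receipt = place_order(order)
    assert receipt.terms_version.startswith("CA-")

The point is not the code itself; it is that writing the check forces someone to decide, up front, exactly what “the correct terms” means.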

Adding to the challenge, requirements frequently change or are even disregarded mid-project. Recently, I was asked to consult on a project aimed at providing COVID-19 related health information in regions with unreliable internet access. The proposed solution was an SMS-based survey application. Initially, I was enthusiastic about the project’s potential impact.

However, as the team described their vision, red flags emerged. Asking a retail customer to rate their shopping experience on a 1-10 scale via SMS is straightforward. Conducting a multi-step survey with multiple-choice questions about COVID-19 symptoms via text message is significantly more complex. While I didn’t outright reject the idea, I raised numerous potential points of failure and emphasized the need for clearly defined protocols for handling incoming responses for each question. How would comma-separated numbers representing answers be processed? What would happen if a response didn’t match any of the provided options?
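Even a rough sketch of the response handling shows how many decisions are hiding inside those questions. Something like the following – where the function names, question counts, and reply messages are all invented for illustration, not what the team proposed – is about the minimum you would need:

def parse_answers(body, num_questions, options_per_question):
    """Return the selected option numbers, or None if the reply can't be used."""
    parts = [p.strip() for p in body.split(",")]
    if len(parts) != num_questions:
        return None  # too few or too many answers
    answers = []
    for part in parts:
        if not part.isdigit():
            return None  # free text, emoji, or anything that isn't a plain number
        value = int(part)
        if not 1 <= value <= options_per_question:
            return None  # a number that doesn't match any offered option
        answers.append(value)
    return answers

def handle_reply(body):
    answers = parse_answers(body, num_questions=3, options_per_question=4)
    if answers is None:
        return ("Sorry, we couldn't read that. Please reply with 3 numbers "
                "between 1 and 4, separated by commas.")
    # ...store the answers and advance the survey here...
    return "Thanks! Your answers have been recorded."

Every one of those early returns is a product decision someone has to make, not a technical detail the code can settle on its own.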

After careful consideration of these challenges, the team reached a crucial decision: proceeding with the SMS survey in its current form was too risky. This, in my view, was a successful outcome. Pushing forward without addressing these potential data integrity and user experience issues would have been far more wasteful and potentially harmful.

This experience underscores a critical point: is the vision of AI-driven software creation simply to empower stakeholders to directly instruct a computer to build, for example, an SMS-based survey? Will AI proactively ask the probing questions about error handling, data validation, and user missteps that experienced developers would? Will it anticipate the myriad ways users might interact with the system incorrectly and devise robust solutions?

To generate functional, reliable software with AI, you need to possess a clear, precise understanding of your desired outcome and be able to articulate it in meticulous detail. Even when developing software for personal use, unforeseen complexities often emerge only when coding begins.

Over the past decade, the software industry has largely shifted from the waterfall methodology, with its emphasis on exhaustive upfront planning, to agile development. Waterfall aims to define every requirement before a single line of code is written, while agile embraces flexibility and iterative adjustments throughout the development process.

The history of software development is littered with waterfall project failures. Stakeholders often believed they knew exactly what they wanted and could document it perfectly, only to be deeply disappointed with the delivered product. Agile methodologies emerged as a direct response to these shortcomings, acknowledging the inherent uncertainty and evolving nature of software requirements.

AI might find its niche in rewriting existing software, porting legacy systems to modern hardware or programming languages. Numerous organizations still rely on software written in COBOL, a language with a shrinking pool of skilled programmers. If the requirements are perfectly defined and unchanging, AI could potentially generate code faster and cheaper than human teams. AI might excel at automating the coding process for well-understood software – software whose functionality has already been thoroughly defined and refined by humans.

In essence, AI might be ideally suited for the waterfall approach to software development – ironically nicknamed the “death march” due to its high failure rate when attempted by humans. The weakness of waterfall isn’t the coding phase itself; it’s the exhaustive upfront requirements definition – the phase that demands deep understanding, foresight, and the ability to anticipate the unpredictable. Artificial intelligence is undeniably powerful, but it cannot read minds, nor can it inherently tell you what you should want or anticipate every possible user need and edge case. Therefore, while cars and countless other systems increasingly depend on computer programming, the human element – the ability to define problems, envision solutions, and adapt to evolving needs – remains indispensable in the creation of truly effective software.
