The real problem, however, lies in managing complexity.
Many applications and tools start out simple, doing 80%, maybe even 90%, of what users want. But then version 1.1 adds a few features, version 1.2 adds even more, and by version 3.0 the originally elegant user interface has become a mess. The elegance is gone.
The problem of complexity extends beyond user interfaces. Today we also have to worry about secure programming and cloud deployment, concerns that didn't exist before. Requirements like security tend to complicate code, and complexity itself tends to mask security issues.
Adding security as an afterthought usually doesn’t work. Instead, security must be designed and managed with the rest of the software.
Which brings us to the main topic: more and more code is now written with generative AI tools such as GitHub Copilot, Code Interpreter, and Google Codey. But these tools don't care about complexity, and it still falls to humans to understand and debug the code they produce.
The problem of complexity also goes beyond individual functions and methods. Many professional programmers work on large systems made up of thousands of functions and lines of code.
How many people understand the overall structure and architecture of such a system?
What about the complexity of legacy code that may outlive its developers?
As computer systems grow more complex, software architecture becomes increasingly important. Reducing the complexity of modern software systems is not yet a task for generative AI; it remains a task for humans.
Many developers believe that minimizing the number of lines of code is the key to simplicity. In practice, however, this often produces cryptic incantations that cram multiple ideas onto a single line, making the code harder to read. The side effect is lower quality.
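To illustrate the point (a hypothetical sketch, not an example from the original article): both functions below compute the same word frequencies, but the one-liner fuses tokenizing, counting, and sorting into a single expression, while the longer version names each step.

```python
from collections import Counter

# Dense: fewer lines, but filtering, counting, and sorting are
# crammed into one expression that must be decoded all at once.
def top_words_dense(text, n=3):
    return sorted({w: text.lower().split().count(w) for w in set(text.lower().split())}.items(), key=lambda kv: -kv[1])[:n]

# Clearer: the same logic unpacked into named steps.
def top_words_clear(text, n=3):
    words = text.lower().split()   # normalize and tokenize
    counts = Counter(words)        # count occurrences
    return counts.most_common(n)   # take the n most frequent

print(top_words_dense("the cat sat on the mat the cat"))
print(top_words_clear("the cat sat on the mat the cat"))
```

The dense version "wins" on line count, yet the three-line version is the one a maintainer can read, test, and modify without mentally re-deriving it.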
Of course, Mike Loukides is not arguing that generative AI has no role in software development. Generative AI certainly has its uses. His point is: don't get so caught up in automatic code generation that you forget to manage complexity.
Large language models won't help with that, at least not now, and perhaps not ever. Still, if AI frees up time for humans to understand and tackle the problems of complexity, it can certainly be beneficial.
Will large language models ever be able to write million-line enterprise programs? That day will probably come. But someone will still have to write the prompts that tell them what to build. Loukides concludes that whoever does so will face the challenge of understanding and managing the complexity of the problem, and that this is the enduring fate of programming.
I wonder whether we'll eventually see a generative AI that can manage complexity itself, writing programs that grow to enterprise scale from nothing more than an end goal given as a prompt.

