How LLMs help and don’t help developing software
1 The promised land of vibe coding
Everyone says large language models make developers faster.
I used to believe that too, until I spent a week rebuilding a simple image pipeline with ChatGPT.
What I found wasn’t just slower progress. I found a new kind of slowness. The kind that exposes how shallow your understanding really is.
This is a story about how LLMs help and don’t help us develop software; where they speed up flow, and where they quietly erode focus, judgment, and attention.
1.1 Enter the test subject: my 4-year-old Epaper picture frame
1.1.1 A long time ago
Some years ago I developed an Epaper-based picture frame.
The limited bit depth (3-bit) makes gray-scale images look flat. I used a dithering approach to get sharper images.
The core of that project was hardware embedded engineering, so I kept the software side light and relied on a GIMP batch processing pipeline.
GIMP ships an implementation of Floyd-Steinberg dithering.
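For context, the core error-diffusion loop is only a handful of lines. Here is a textbook 1-bit sketch (not GIMP's implementation, which layers tone mapping, error limits, and caching on top):

```javascript
// Minimal 1-bit Floyd-Steinberg dithering on a grayscale image.
// Textbook sketch only; GIMP's version adds tone mapping, error
// limits, and histogram caching on top of this core loop.
function floydSteinberg1bit(pixels, width, height) {
  // Work on a float copy so diffused errors keep their precision.
  const buf = Float32Array.from(pixels);
  const out = new Uint8Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const old = buf[i];
      const nw = old < 128 ? 0 : 255; // quantize to 1 bit
      out[i] = nw;
      const err = old - nw;
      // Distribute the quantization error to unvisited neighbors.
      if (x + 1 < width) buf[i + 1] += err * 7 / 16;
      if (y + 1 < height) {
        if (x > 0) buf[i + width - 1] += err * 3 / 16;
        buf[i + width] += err * 5 / 16;
        if (x + 1 < width) buf[i + width + 1] += err * 1 / 16;
      }
    }
  }
  return out;
}
```

Simple as it looks, the devil lives in the details around this loop, as the rest of this story shows.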

I had tried before to automate the process with a simple image-conversion script, but in my previous attempts I could never get exactly the same result as GIMP.
1.1.2 The age of AI, Copilot & Co
By 2025, practically every software developer has used AI-assisted programming. Some think it is useful; others are not entirely convinced.
I currently develop an AI-powered Meal Planner. I use ChatGPT a lot in the development as it allows me to outsource manual coding of trivial tasks, so I can focus on architecture tasks or more complex algorithms.
Lately, the responses have become better. So much so that I thought I could just redo the entire pipeline in a browser window. We are all vibe coding now, aren’t we?
The Truth: in the end, I managed it.
See the fully working Demo to play with.
1.1.3 LLMs make us faster!
That was my unequivocal belief before I set out on this journey. Boy, was I wrong.
Using an LLM to do this side project led to many failed attempts and dead ends. With the change to GPT5, the LLM has become more confident in proposing false solutions.
This leads me to the core question I want to explore in this post: “Where do LLMs actually help us develop software, and where do they mislead us?”
Read on, for the seven lessons I am going to share with you.
2 Lesson #1: LLMs make hard problems look easy
When I started recreating GIMP’s Floyd-Steinberg dithering, I thought it would be a breeze. After all, the algorithm is well documented and fairly simple. What could go wrong?

Yet my results looked noisier and flatter. A quick Google search revealed that others had reported a similar mismatch (discussion here).
Next comes what every curious engineer with the power of a mighty LLM at their fingertips would do: ask ChatGPT to reimplement GIMP’s code.
At first glance, the code seemed straightforward: plain C, albeit heavily reliant on raw pointers.
src_buf = g_malloc (width * src_bpp);
dest_buf = g_malloc (width * dest_bpp);
next_row = g_new (gint, width + 2);
prev_row = g_new0 (gint, width + 2);
ChatGPT immediately produced a fully working version,
…only the results were wrong.
Because the full code is too long for the context window, I tried to be clever: breaking it into smaller parts, adding missing functions.
Digging deeper, I found that the code relies on far more functions: tone mapping, error-diffusion limits, histogram caching. And those rely on even more functions.
None of which were in the initial prompt.
While I was adding more and more context, I never got the same result. What should have been an afternoon’s work had already stretched over two days.
Then I realized the problem was not the LLM or GIMP’s pointer code. The issue is, as always, the person in front of the machine: me.
Of course, I knew LLMs are good at pretending to be fluent, even when they are not. The problem was that I could not recognize ChatGPT’s illiteracy with this particular dithering algorithm.
Beginner Lesson
LLMs can make hard problems look easy. Fluent coding isn’t the same as real understanding. LLMs expand the search space, not the understanding space.
Intermediate Lesson
Expand your own understanding space
- Understand an unknown solution and own it conceptually
- Ask the model to explain its reasoning; don’t accept logic that feels incomplete
- Explore boundaries with contrastive examples to see where it breaks.
- Externalize your learning: keep a logbook, sketch diagrams, and track conversation branches.
Mastery Lesson
Cultivate your intuition of when an LLM can be trusted, even in unfamiliar domains. That will make you truly faster.
3 Lesson #2: Focus on the value add
I probably should have been satisfied with my 1-bit Floyd-Steinberg pipeline. Let’s recall: the real goal is to have my pictures on the wall in a dynamic picture frame.
But here’s the catch: going through my pictures and sorting them is actually far more effort than the conversion for the picture frame. The optimization of code paths, dithering algo testing and chasing marginal speedups did not add value to the actual problem.
That is the quiet but biggest danger of LLMs for mid-career developers: you can solve so many problems that you forget which ones are worth solving.
ChatGPT & Co lower the barrier to a 50% solution: quick, plausible, the kind traditionally used to secure further funding. Those half-baked solutions create new problems: bugs, improvements to make, experiments to run.
From a meta perspective, it’s not so different from how low-impact tasks survive inside large organizations: easy to start, hard to stop.
Beginner Lesson
Learn to spot when ChatGPT suggests optimizations that don’t matter.
Intermediate Lesson
Adopt a product mindset. Clearly formulate the goal and what you want to achieve. Define what “done” means. Hold that line when the LLM tempts you with shiny detours.
Mastery Lesson
The tool amplifies habits. Develop good habits, drop bad ones.
- Calibrate to value: write down “The value of this work is … because it improves …”. Revisit this statement after each hour.
- Curiosity within boundaries: use constraints, time boxes, iteration caps
- Mode awareness: define the mode: learning (speed, breadth, discovery) or production (depth, polish, delivery)
- Reflection logbook: what helped, what kind of question was misleading? Reread your chats or transcripts.
- Debug your thinking: ask yourself if you are framing the problem wrong.
- Cultivate Strategic Boredom: stop when it is enough
4 Lesson #3: Correct initial framing beats repeated prompting
Frustrated, I was about to give up. Then I stumbled across a new approach: the Teddy-Beau Algorithm.
The algorithm creates multiple differently exposed versions of the image, then applies patterned dithering to each, and finally fuses the most contrasting regions. As a result, details stand out while maintaining the textured dither effect.
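In spirit, the approach looks roughly like this. The following is a loose sketch of the idea, not the author's code: the 2x2 Bayer matrix, the exposure factors, and the per-pixel fusion rule are all simplifying assumptions of mine.

```javascript
// Loose sketch of the multi-exposure dithering idea. NOT the original
// algorithm: matrix size, exposures, and fusion rule are assumptions.
const BAYER2 = [0, 2, 3, 1]; // 2x2 ordered-dither thresholds (0..3)

// Re-expose the image by a factor, then apply patterned dithering.
function orderedDither(pixels, width, height, exposure) {
  const out = new Uint8Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const v = pixels[y * width + x] * exposure;           // re-expose
      const t = (BAYER2[(y % 2) * 2 + (x % 2)] + 0.5) * 64; // thresholds 32..224
      out[y * width + x] = v > t ? 255 : 0;
    }
  }
  return out;
}

// Fuse several dithered versions into one image.
function fuseVersions(pixels, width, height, exposures) {
  const versions = exposures.map(e => orderedDither(pixels, width, height, e));
  const out = new Uint8Array(width * height);
  for (let i = 0; i < pixels.length; i++) {
    // Crude stand-in for the contrast-based fusion described in the
    // original article: keep the version closest to the source pixel.
    let best = 0;
    for (let k = 1; k < versions.length; k++) {
      if (Math.abs(versions[k][i] - pixels[i]) <
          Math.abs(versions[best][i] - pixels[i])) best = k;
    }
    out[i] = versions[best][i];
  }
  return out;
}
```

A call like `fuseVersions(gray, w, h, [0.6, 1.0, 1.4])` would fuse a shadow, midtone, and highlight rendering into one 1-bit image.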

The demo code is written in JavaScript. As the author points out, it is not optimized and takes quite long.
So naturally, I asked my LLM buddy for help: “How to optimize this for performance?”
ChatGPT responded like an eager intern:
- Flatten 2D arrays to 1D.
- Avoid expensive array methods like push.
- Use typed arrays for pre-allocation.
- Try Uint8ClampedArray for automatic clamping.
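These suggestions boil down to one indexing trick. A sketch of what that flattening looks like (the row-major pixel layout is my assumption):

```javascript
// Flattening a 2D pixel grid into one pre-allocated typed array:
// address (x, y) as y * width + x instead of grid[y][x].
// Uint8ClampedArray additionally clamps writes into the 0..255 range.
function flatten(grid) {
  const height = grid.length;
  const width = grid[0].length;
  const flat = new Uint8ClampedArray(width * height); // one allocation, no push()
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      flat[y * width + x] = grid[y][x]; // out-of-range values are clamped
    }
  }
  return flat;
}
```

For example, `flatten([[300, -5]])` yields `[255, 0]` thanks to the automatic clamping, which is exactly what a dithering pipeline wants.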
One more thing. Lately, I have been exploring how to push ML workloads to the client to reduce compute cost and preserve data privacy. This is made possible using a technique called WebGPU.
I doubled down and added the usage of WebGPU to my requirements. The dream of every product manager focusing on buzzwords alone: GPU acceleration, client-side compute, data privacy.
I asked for a straightaway optimization with fingers crossed.
The first result? A gray picture.
Then came hours of debugging, chasing error messages like a true VibeCoder.
Eventually, I gave up.
The realization: I didn’t need faster code.
I needed a clearer architecture: what to optimize, in what order, and why.
Beginner Lesson
Don’t ask: “Make this faster”! Define what faster means; or explore this with the help of the LLM.
Intermediate lesson
Be an architect. Specify technical constraints. Define performance goals, and make the trade-offs explicit.
Mastery Lesson
Use the LLM to map the landscape, not to sprint through it. Explore what-ifs. Guard against LLM’s instinct to jump to code too quickly. Keep it in design mode.
5 Lesson #4: Do not succumb to the illusion of progress
I started from scratch, with an architect’s mindset. First, I asked to include the existing code on a web page that allows modifying parameters with sliders. That worked nicely. Then I told ChatGPT that we are going step by step in the transformation towards a WebGPU version.
The Plan:
- remove splice
- flatten to 1D with typed arrays
- pre-allocate arrays
- replace Laplacian of Gaussian with Difference of Gaussian
- SIMD-friendly loops: no map, only plain for loops
- worker thread
- use WebGL
- use WebGPU
The incremental work went smoothly and then WebGPU delivered the final wow effect: from 500 ms down to 20 ms for a 1MP image. I was ecstatic. Maybe this algorithm could even handle video!
But when I looked closer, the pictures weren’t pleasing.
The contrast was wrong; the textures felt flat. The fast version looked worse than the slow one.
It turned out that a Difference of Gaussians is not the same as a Laplacian of Gaussian, and that the whole histogram calculation had been changed along the way.
In chasing performance without reflection, I had altered the architecture.
In hindsight, the obvious solution: go slow and use tests. But in that moment, momentum felt like mastery.
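The DoG-for-LoG swap is easy to underestimate: a Difference of Gaussians only approximates the Laplacian of Gaussian, and only for a particular sigma ratio (around 1.6). The kernels below are generic textbook constructions, not the project's actual filters:

```javascript
// A Difference of Gaussians (DoG) only approximates a Laplacian of
// Gaussian (LoG), and only for a specific ratio of the two sigmas.
// Swapping one for the other silently changes the filter response.
function gauss1d(sigma, radius) {
  const k = [];
  let sum = 0;
  for (let i = -radius; i <= radius; i++) {
    const v = Math.exp(-(i * i) / (2 * sigma * sigma));
    k.push(v);
    sum += v;
  }
  return k.map(v => v / sum); // normalize to unit sum
}

// DoG kernel: narrow Gaussian minus wide Gaussian.
function dog1d(sigma, ratio, radius) {
  const g1 = gauss1d(sigma, radius);
  const g2 = gauss1d(sigma * ratio, radius);
  return g1.map((v, i) => v - g2[i]);
}

// Analytic (unnormalized) 1D LoG samples, for shape comparison only.
function log1d(sigma, radius) {
  const k = [];
  const s2 = sigma * sigma;
  for (let i = -radius; i <= radius; i++) {
    k.push(((i * i) / s2 - 1) * Math.exp(-(i * i) / (2 * s2)) / s2);
  }
  return k;
}
```

Printing `dog1d(1, 1.6, 6)` next to `log1d(1, 6)` shows the shapes roughly tracking each other (with opposite sign); with any other ratio, or a changed histogram step downstream, the outputs drift.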
Beginner Lesson
“Working” code does not imply “correct” code. Pure vibe coding hides understanding behind motion.
Intermediate Lesson
Resist the illusion of speed. Go slow and write tests. Verify that the tests are correctly written by the LLM. Validate progress with objective, measurable evidence.
Mastery Lesson
Let the model amplify your rigor, not bypass it. Ask for test scaffolds, validation metrics. Let the LLM be your QA nightmare. True velocity comes from confidence in correctness.
6 Lesson #5: Allow failures. Do not suffer from the sunk cost fallacy.
I probably should have stopped.
But after getting so close, it felt wrong to quit. “All I need are a few simple tests,” I told myself. I started working on 3x3 images and actually managed to progress quickly through the codebase.
However, I underestimated the issues that arise in complex floating-point algorithms. The algorithm is doing several passes for the actual dithering and combines Bayer-based dithering with error diffusion (the original article explains this in more detail).
That means any rounding errors can propagate through the image.
On small samples, the errors made no visible difference. On full-size images, the algorithm fell apart.
In fact, I never got 100% equality on a 1MP image with the highest settings for iterative processing, even after days chasing tiny differences caused by inequality signs, truncation, and clamping.
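A toy illustration of why the small samples lied to me (the update rule is invented for illustration, not taken from the algorithm): simulating single precision with `Math.fround` against double precision produces per-step differences that are individually negligible, yet each result feeds the next, and any threshold comparison sitting near the boundary can flip a pixel that then diffuses onward.

```javascript
// Invented per-step update, just to illustrate error accumulation in
// a diffusion chain: each result feeds the next, so precision choices
// compound across the whole image instead of staying local.
function diffuseChain(steps, round) {
  let err = 0;
  for (let i = 0; i < steps; i++) {
    err = round(err * (7 / 16) + 0.1); // 7/16 echoes a Floyd-Steinberg weight
  }
  return err;
}

const f32 = n => diffuseChain(n, Math.fround); // simulate single precision
const f64 = n => diffuseChain(n, x => x);      // full double precision
```

On a 3x3 test image the chains are short and the drift is invisible; on a megapixel image the same drift is long enough to cross quantization thresholds.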
At some point, I realized I wasn’t debugging anymore — I was defending my investment.
The LLM kept offering “helpful” directions, and I kept following, the way I once followed overconfident colleagues early in my career.
They sounded sure. So did the model.
But confidence isn’t correctness.
Sometimes, the real progress is in allowing failure.
Beginner Lesson
When you keep solving the same issue, that is not progress, that is pure grind. Know when to stop.
Intermediate Lesson
LLMs remove friction in syntax and reference lookup. But they also remove the pauses that help us think. Deliberately reintroduce those pauses: use time boxes or limits on the amount of code changed.
Mastery Lesson
Use the LLM to structure your reflection. Ask it for summaries, dead ends, and the hypotheses that were tried.
7 Lesson #6: Technology fixation hides the real problem
Eventually, I got the algorithm’s results to match exactly.

But the speedups I’d worked so hard for had almost vanished. In single-pass runs, performance improved from 380 ms to 350 ms. Only in the multi-pass runs did it look better: 4.7 s down to 2 s.
Still, I wasn’t done. I wanted to use WebGPU.
So, I turned to ChatGPT once again.
It happily produced WebGPU code. Due to the immature state of WebGPU and limited sources, the code was incomplete and buggy, but plausible. I fixed syntax errors, adjusted shader parameters, and eventually got something to run.
The result: an educational detour into shaders, pipelines, and GPU execution (Yes, I am a GPU engineer now :-)).
Performance improvements were good: with 2 passes, from 2 s down to 1 s; with 6 passes, from 25 s down to 2.8 s.
Again, the visual result was worse than the CPU version’s.

That’s when I finally stopped myself.
Somewhere along the way, I had forgotten that looking better was the reason I had selected the algorithm.
Beginner Lesson
Speed gains are meaningless if they don’t serve the goal. Always ask: What does this achieve? What impact does it have on the outcome?
Intermediate Lesson
AI tools make every technical path feel accessible. And in part that is true. But every new route has hidden costs. For every new route you take, define what success means. If the gain doesn’t improve the purpose, skip it.
Mastery Lesson
Again, focus on the landscape. Let curiosity drive exploration, but set limits with an intent. AI’s biggest cost isn’t in tokens; it’s your attention.
8 Lesson #7: AI mirrors your thinking, including your flaws
Lesson 6 was about chasing performance. This lesson is quite similar but focuses on features.
Using an LLM chat, everything feels easy. You start with a clear goal, then you drift and start exploring aspects which feel productive but aren’t.
It’s a lot like browsing the internet: you end up finding things you never searched for.
The issue with an LLM is that it reinforces your belief in wrong ideas. That’s why many say AI assistants only work well “in the hands of an expert.”
I agree only partly. An expert would only need the AI for very mundane tasks, like code completion. It’s the non-expert facing a new domain who gains the most, but only if they manage to rein in the wandering mind and the meandering that come with it.
AI is an amplifier, not a guide. It doesn’t tell you when your reasoning is off; it makes your detour smoother. To quote I, Robot: you need to “ask the right questions”.
In traditional software teams, that role falls to senior engineers and technical managers. They define the what and the why of the product.
With LLMs, you need to play the roles yourself to be successful. You’re not just writing code; you’re managing a conversation that can spiral without direction.
Back to our real problem: displaying images on Epaper. When I read through the Inkplate code and API, I noticed that it also supports a 3-bit mode. A quick modification to my 1-bit Python script, and a simple 3-bit Floyd-Steinberg result almost looks like 8-bit grayscale.
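The switch from 1 bit to 3 bits is essentially one line in the quantizer. A sketch, assuming evenly spaced gray levels (the panel's actual levels may differ):

```javascript
// Generalizing the 1-bit quantizer to n bits: snap each pixel to the
// nearest of 2^n evenly spaced gray levels. Even spacing is an
// assumption; the panel's real levels may differ.
function quantize(value, bits) {
  const levels = (1 << bits) - 1; // 1 step for 1-bit, 7 steps for 3-bit
  return Math.round((value / 255) * levels) * (255 / levels);
}
```

With `bits = 1` this is the familiar black-or-white snap; with `bits = 3` the diffused error per pixel shrinks by a factor of seven, which is why the result looks so much smoother.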
Then why not use the Teddy-Beau algorithm with 3 bits?


Comparing the two 3-bit versions, I actually like the 3-bit Floyd-Steinberg more than the 3-bit Teddy-Beau. So what was wrong this time, you might wonder, when everything looks good?
Epaper has a non-linear color curve. What looked perfect on a monitor looked wrong on the device.
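A tone-curve pre-compensation before dithering is the usual fix for this, sketched here with a plain power-law curve and a made-up exponent; the Epaper's true response would have to be measured:

```javascript
// Hypothetical pre-compensation for a non-linear display response:
// push values through an inverse power curve before dithering, so the
// panel's own curve lands them back where the eye expects.
// GAMMA = 2.2 is a placeholder, not a measured Epaper value.
const GAMMA = 2.2;

function precompensate(value) {
  return 255 * Math.pow(value / 255, 1 / GAMMA);
}
```

Note how the midtones get lifted: with this placeholder curve, a pixel at 128 maps to roughly 186 before dithering, compensating for a panel that renders midtones too dark.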
Finally, I chose the Teddy-Beau algorithm with 1 bit, which from 2 meters away looks a lot better than it does on a computer screen.
Beginner Lesson
LLMs rarely correct wrong assumptions unless prompted to do so.
Intermediate Lesson
Don’t expect the LLM to know your true goal. Define context, constraints, and success criteria yourself. The system prompt is your friend.
Mastery Lesson
Treat the LLM as a mirror, not a mentor. Its responses reflect your framing, clarity, and discipline. You’re not its student. You’re its manager!
9 Conclusion: The real work is thinking
After a week of chasing algorithmic performance gains, I realized the project’s real contribution wasn’t in the image pipeline. Instead, it was in understanding how humans and machines can think together.
Rather than exposing the limits of its reasoning, the LLM revealed the limits of mine. Every lesson in this article mapped to a deeper skill:
- Conceptual clarity to increase your understanding space
- Prioritization to focus on value, not optimization
- Problem definition to frame correctly before prompting
- Discipline to follow a stable process
- Self-awareness to allow failures
- Purpose alignment to avoid focusing on technology alone
- Judgment to intentionally direct human-machine interaction
In short, AI is not a shortcut to mastery. It’s a mirror, reflecting your strengths, weaknesses, and habits on steroids.
The promise of LLMs isn’t speed; it’s awareness.
They expose how we think, where we skip steps, and how easily we confuse momentum for mastery.
Working with an LLM is no longer about writing code faster.
It’s about developing a clearer mind.
In the end, building software, and yourself, means learning to manage not just a tool, but your own attention.
That’s the real craft of this new era: knowing when to move fast, and when to slow down on purpose.
A hammer is only as precise as the hand (and mind) that wields it.
The algorithms can be compared here. Source Code can be found here.