DEV Community

Incomplete Developer

OpenSpec (Spec-Driven Development) Failed My Experiment — Instructions.md Was Simpler and Faster

There’s a lot of discussion right now about how developers should work with AI coding tools.

Over the past year we’ve seen the rise of two very different philosophies:

1. Vibe Coding — just prompt the AI and iterate quickly
2. Spec-Driven Development — enforce structure so AI understands requirements

Frameworks like OpenSpec are trying to formalize the second approach.

Instead of giving AI simple prompts, the workflow looks something like this:

  • generate a proposal
  • review specifications
  • approve tasks
  • allow the AI agent to execute the plan
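To make that overhead concrete, the workflow revolves around a set of on-disk artifacts that all have to be generated and reviewed before any code is written. The layout below is an illustrative sketch of the kind of structure a spec-driven change produces — the directory and file names here are assumptions for illustration, not guaranteed to match OpenSpec's exact conventions:

```text
openspec/
  changes/
    redesign-frontend/     # one folder per proposed change
      proposal.md          # why the change is needed
      tasks.md             # the task list the developer approves
      specs/
        listing-ui/
          spec.md          # requirements the agent must satisfy
```

Every file in that tree is something the developer reads and signs off on before the agent touches the codebase.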

In theory, this should produce better and more reliable code.

So I decided to test it on a real project.



The Experiment

I’m building a car classifieds web application.

The backend and basic functionality already exist, and the front-end is built using .NET Razor Pages.

The problem is the UI.

It works, but it looks very basic and a bit lifeless.

My goal was simple:

Use modern AI coding tools to generate a more premium-looking front-end design.

This seemed like a perfect opportunity to test Spec-Driven Development with OpenSpec.


Attempt 1: OpenSpec + GPT-5.3 Codex

The first attempt followed the full Spec-Driven Development workflow.

Using OpenSpec and GPT-5.3 Codex inside VS Code, the process looked like this:

  1. Generate a proposal
  2. Review the generated specification
  3. Approve the list of tasks
  4. Allow the AI agent to execute the plan

These steps alone took quite a bit of time.

One key feature of Spec-Driven Development is that the developer must review everything before execution.

This supposedly ensures that the AI clearly understands the requirements.

After about two hours of work and a lot of tokens, I finally had the result.

The new front-end looked almost identical to the original.

Not exactly the premium UI redesign I was hoping for.


Attempt 2: OpenSpec + GitHub Copilot + Claude Haiku 4.5

For the second attempt, I tried something slightly different.

Instead of GPT-5.3 Codex, I used:

  • GitHub Copilot
  • Claude Haiku 4.5

The idea was to keep the OpenSpec workflow but reduce token costs.

From a tooling perspective, this actually felt better.

Copilot’s integration with VS Code made the workflow smoother.

But the result?

Still disappointing.

More time reviewing tasks.
More agent execution cycles.
Still no meaningful improvement to the UI.

At this point the experiment had already consumed several hours.


Attempt 3: Skip the Framework

For the final attempt, I tried something much simpler.

I removed OpenSpec entirely and created a file called:

Instructions.md

No proposal stage.
No task planning.
No spec review.

Just clear instructions.

The first test was a small bug:

The image uploader was not working when updating a listing.
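An Instructions.md for a fix like this can stay very small. The file below is a hypothetical sketch of what such instructions might look like — the page names and acceptance criteria are assumptions for illustration, not the actual file contents from the experiment:

```markdown
# Task: Fix image uploader on the listing edit page

## Context
- ASP.NET Razor Pages front-end; listings have an image gallery.
- Uploading images works when creating a listing but fails when updating one.

## Instructions
1. Reproduce the bug on the edit-listing page.
2. Find why the upload fails only in the update flow.
3. Fix it without changing the create flow.

## Acceptance criteria
- Images can be added and replaced when editing an existing listing.
- Creating a new listing with images still works as before.
```

The whole "spec" fits on one screen, and the agent gets context, scope, and a definition of done in a single file.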

I wrote the instructions and executed the AI agent.

The bug was fixed quickly.

Token usage was minimal.

Execution time was dramatically shorter.


Trying the UI Redesign Again

Next I gave the AI the original challenge:

Create a more modern, premium-looking front-end.

The result still wasn’t perfect.

But the difference was huge:

  • the process was faster
  • the cost was much lower
  • iteration was much easier

Instead of waiting for proposals and reviewing large spec documents, I could simply refine the instructions and try again.
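In practice, "refine and try again" meant editing a line or two of Instructions.md rather than regenerating a proposal. A hypothetical refinement might look like this (the specific wording is an assumption, shown only to illustrate the iteration loop):

```markdown
## Instructions (revised)
- Redesign the listing cards as a responsive grid with larger cover
  images and a single accent color.
- Touch only the .cshtml markup and site CSS; leave the page models
  and handlers unchanged.
```

Each iteration costs one small edit and one agent run, instead of a full proposal-review-approve cycle.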


The Hidden Cost of Spec-Driven Development

Spec-Driven Development is often presented as the solution to vibe coding chaos.

And there’s some logic to that idea.

AI models absolutely do misunderstand vague prompts.

But something that doesn’t get discussed enough is the cost of the framework itself.

Spec-Driven workflows introduce:

  • proposal generation
  • specification review
  • task planning
  • multi-step agent execution

Each of these steps consumes:

  • developer time
  • AI tokens
  • attention

If the output is still poor, the overhead becomes difficult to justify.


How Long Would a Human Take?

Out of curiosity, I asked an AI model how long a developer might take to build the redesigned UI by hand.

The estimate was 2–3 weeks.

Personally, I think a competent developer could do it in about five days.

Which raises an interesting question:

If a developer can build the feature in under a week, does it make sense to spend hours orchestrating complex AI workflows?


A Simpler Workflow Might Be Better

This experiment made me rethink something.

Maybe the future of AI-assisted development isn’t heavy frameworks.

Maybe it’s lighter workflows with better instructions.

Something like:

  • small tasks
  • clear instructions
  • rapid iteration
  • developer oversight

Instead of long orchestration pipelines.


Final Thoughts

Frameworks like OpenSpec are exploring an important idea:

How do we manage AI agents in large software projects?

That’s a real problem that will need solutions.

But in this small experiment, the results were clear.

The structured Spec-Driven Development workflow introduced a lot of overhead without delivering better results.

A simple Instructions.md approach was faster, cheaper, and easier to iterate on.

As AI development tools continue evolving, it will be interesting to see whether developers move toward:

  • structured frameworks or
  • lightweight instruction-based workflows

The answer may ultimately be somewhere in between.


Video Series

Part 1 – Starting the Project

Part 2 – OpenSpec Solution Scaffolding

Part 3 – Data Access Layer

Part 4 – Creating UI Front End
