Curious track · 5 min read

How galdr works with your AI

You don't change how you code. You add a small, file-first layer that makes your AI agents smarter, more consistent, and less likely to silently ship broken work.

The big picture

Galdr adds a .galdr/ folder to your project. Inside it, Markdown and YAML files define your project's goals, tasks, constraints, and decisions. Every time an AI agent starts a session in your project, it reads these files first.

# your project structure

my-project/
├── .galdr/
│   ├── PROJECT.md       ← mission, goals, constraints
│   ├── TASKS.md         ← what needs doing
│   ├── BUGS.md          ← open bugs (bugs run before tasks)
│   ├── DECISIONS.md     ← why things are the way they are
│   ├── PLAN.md          ← milestones and phases
│   └── CONSTRAINTS.md   ← hard rules agents must follow
├── src/
└── ...
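What goes inside these files is up to you. As a rough sketch (galdr's actual file format may differ, and the task IDs, fields, and wording here are invented for illustration), a TASKS.md might look like:

```markdown
<!-- TASKS.md — hypothetical format; IDs and fields are illustrative -->
## [ ] T-012: Add rate limiting to /api/login
Priority: high
Acceptance: five failed attempts return 429 for 15 minutes

## [🔍] T-011: Move session storage to Redis
Implemented; awaiting independent review in a fresh session
```

The point is that the queue lives in a file the agent reads every session, not in your head.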

The session loop

Every galdr workflow follows the same pattern. You use a command to tell the agent what mode to run in. The agent reads context, executes work, and outputs a structured result.

1. Session start — agent loads context

When you open a new chat with Claude, Cursor, or Codex, galdr's session-start rule triggers. The agent reads PROJECT.md, TASKS.md, BUGS.md, and CONSTRAINTS.md before doing anything else.

It now knows your mission, what's in progress, what's blocked, and what it must never do. This takes about 5 seconds and replaces the 20-minute explanation you used to give.
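For example, CONSTRAINTS.md might hold rules like these (the entries are invented; only the file name comes from the structure above):

```markdown
<!-- CONSTRAINTS.md — illustrative entries only -->
- Never push directly to main; all changes go through review
- Never mark a task [✅] in the same session that implemented it
- No new dependencies without a matching DECISIONS.md entry
```

Because the file is read at session start, the rules don't depend on you remembering to restate them.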

2. You issue a command

Instead of freeform prompting, you use galdr commands. Commands are just at-mentions that load a structured workflow file.

@g-go-code

This tells the agent: pick the highest-priority bug or task, implement it, and stop. No hallucinated scope creep. No "while I'm here, I'll also..."

3. Agent implements and marks [🔍]

The agent works through its queue. When it finishes an item, it marks the task [🔍] awaiting-verification — never [✅] complete.

This is intentional. The agent that implements cannot verify its own work. The [🔍] status is a signal: "I believe this is done, but I'm not the authority on that."
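Put together, the task lifecycle looks roughly like this (a sketch inferred from the steps described here, not galdr's official notation):

```markdown
[ ] open  →  [🔍] awaiting-verification  →  [✅] complete
                  │
                  └── reviewer finds an issue → new BUGS.md entry → back to [ ]
```

Only the reviewer's session makes the final transition to [✅].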

4. A new session verifies

You open a fresh agent session — or use a different AI tool — and run:

@g-go-review

This reviewer agent sees the [🔍] task and approaches it fresh. It checks the code, tests the acceptance criteria, looks for regressions. If everything passes, it marks the task [✅] complete. If it finds issues, it creates a bug and the cycle continues.
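A bug filed by the reviewer might look like this (a hypothetical format; the IDs and fields are invented for illustration):

```markdown
<!-- BUGS.md — hypothetical entry created during review -->
## [ ] B-007: Sessions never expire after the T-011 Redis migration
Found while verifying T-011; acceptance criterion 2 fails
Repro: log in, wait past the TTL, session is still valid
Effect: T-011 stays [🔍] until this bug is closed
```

Since bugs run before tasks, the next @g-go-code session picks this up first.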

Two agents, one result

The power of this system isn't in any single step — it's in the two-session confirmation gate. Here's what that catches in practice:

| Without galdr | With galdr |
| --- | --- |
| Agent ships code it hallucinated working | Reviewer finds the hallucination before it merges |
| Half-finished tasks marked done | [🔍] gate blocks premature completion |
| Agent ignores constraint it forgot | CONSTRAINTS.md read at start; violations blocked |
| No record of why a choice was made | DECISIONS.md append-only audit trail |
| "It works on my machine" — closes the task | Reviewer tests independently, different context |
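An append-only DECISIONS.md can stay as terse as one dated line per choice (the format below is an assumption for illustration, not galdr's spec):

```markdown
<!-- DECISIONS.md — append-only; dates and entries are illustrative -->
2025-03-02: Sessions live in Redis, not Postgres: built-in TTL, lower read latency
2025-03-09: No ORM; hand-written SQL keeps queries reviewable by agents
```

Entries are never edited or deleted, so the file doubles as the audit trail the table mentions.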

What you actually change about how you work

  • You start sessions with @g-status instead of re-explaining the project
  • You use @g-go-code to implement and @g-go-review to verify
  • New tasks go in TASKS.md, not in your head or a sticky note
  • Architecture choices get a one-liner in DECISIONS.md
  • That's it. Everything else is the same.