
What is galdr?

If you use Claude, Cursor, Codex, or Gemini to write code, you've already hit the three problems galdr solves. This tutorial names them — and shows you the fix.

Problem 1: Your AI forgets everything

You spend 20 minutes explaining your architecture to Claude. It writes great code. You close the chat. Next session — it's gone. You explain it again. Every session, from scratch.

This isn't a bug — it's how context windows work. But it means every AI coding session starts cold. Your decisions, your constraints, your half-finished task list — all invisible to the agent.

Galdr's answer:

A .galdr/ folder in your repo that contains your PROJECT.md, TASKS.md, DECISIONS.md, and CONSTRAINTS.md. Every agent session starts by reading these files. Your AI has memory across sessions — because memory lives in files, not in the chat.
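The session-start step can be pictured as a tiny loader that concatenates whatever context files exist into one prompt prefix. This is a minimal sketch, not galdr's actual implementation; the file names come from the article, everything else is illustrative:

```python
from pathlib import Path

# The four context files named in the article.
CONTEXT_FILES = ["PROJECT.md", "TASKS.md", "DECISIONS.md", "CONSTRAINTS.md"]

def load_galdr_context(repo_root: str) -> str:
    """Concatenate the .galdr/ context files into a single prompt prefix.

    Missing files are simply skipped, so a half-populated .galdr/
    folder still yields a usable context block.
    """
    galdr = Path(repo_root) / ".galdr"
    sections = []
    for name in CONTEXT_FILES:
        path = galdr / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

Because the memory is just files on disk, "restoring" a session is nothing more than reading them back in.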

Problem 2: AI doesn't know your whole ecosystem

Most real projects are not a single repo. You have a web app, an API, a CLI tool, a shared library. When Claude is working on the web app, it has no idea the API repo exists — or that you just broke its contract last week.

Even within a single repo, subsystem boundaries blur. Changes to one area silently break another. Your AI agent makes the change, marks the task done, and moves on. Nobody checked.

Galdr's answer:

Subsystem specs define ownership and boundaries. Dependency graphs track what depends on what. For multi-repo ecosystems, the PCAC system (Parent-Child-Agent-Coordinator) lets projects exchange tasks and messages without agents needing to know the whole topology.
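A file-based message exchange like PCAC's can be sketched as dropping a structured message into another project's inbox. The field names and JSON format here are assumptions for illustration, not galdr's actual message schema:

```python
import json
import time
import uuid
from pathlib import Path

def send_task(outbox: str, sender: str, recipient: str, title: str) -> Path:
    """Write a task message as a file in the coordinator's outbox.

    The sender only needs to know the recipient's name, not the
    full ecosystem topology; routing is the coordinator's job.
    """
    msg = {
        "id": str(uuid.uuid4()),
        "from": sender,        # e.g. the web-app project
        "to": recipient,       # e.g. the api project
        "title": title,
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    path = Path(outbox) / f"{msg['id']}.json"
    path.write_text(json.dumps(msg, indent=2))
    return path
```

Because messages are plain files, they commit, diff, and travel with the repo like everything else in galdr.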

Problem 3: The agent that writes code can't verify it

You ask Claude to implement a feature. It does. You ask "does this work?" It says yes — because it wrote the implementation, it will rationalise that the implementation is correct. This is confirmation bias, and it's baked into every AI model.

This is why "vibe coding" subtly fails: the AI that builds the thing also declares the thing done. Nobody ever disagreed with it.

Galdr's answer:

Adversarial quality gates. The agent that implements code marks a task [🔍] awaiting-verification, never [✅] complete. A separate agent session (a "reviewer" agent) must verify the work independently. Two agents, one of them sceptical by design, must agree before a task is closed.
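The gate can be expressed as a small state machine over the task markers: the implementer can only advance a task to awaiting-verification, and only a reviewer can close it. A minimal sketch, using the markers from the article (the open-task marker and role names are assumptions):

```python
# Allowed status transitions, keyed by (current status, acting role).
# The implementer can never produce [✅] directly.
TRANSITIONS = {
    ("[ ]", "implementer"): "[🔍]",   # implement → awaiting-verification
    ("[🔍]", "reviewer"): "[✅]",     # independent verify → complete
}

def advance(status: str, role: str) -> str:
    """Advance a task's status, refusing any transition not in the table."""
    try:
        return TRANSITIONS[(status, role)]
    except KeyError:
        raise ValueError(f"{role} may not advance a {status} task")
```

The table encodes the whole policy: an implementer asking to close its own [🔍] task simply has no legal transition.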

So what exactly is galdr?

Galdr is a file-first task management framework that wraps around your AI coding tools. It doesn't replace Claude, Cursor, or Codex — it gives them persistent context, structured tasks, quality gates, and a protocol for working together.

Everything in galdr is a Markdown or YAML file in a .galdr/ folder. No accounts, no databases, no cloud. It commits with your code, travels with your repo, and works with whatever AI you already use.

📋 TASKS.md: Single source of truth for what needs doing

📖 PROJECT.md: Mission, goals, constraints your AI reads at session start

🔍 Quality gates: Implement → await-verification → verify (two sessions, one result)

🛠️ Skills + agents: 30+ reusable skill files: bugs, tasks, features, crawl, more

Why "galdr"?

In Norse mythology, galdur (Old Norse: galdr) is the spoken word as magic — incantations that shape reality. When your AI agents work from well-formed instructions, constraints, and context, they produce reliable, repeatable results. Galdr is the structure that makes that possible. Song magic for your codebase.

Ready to see it in action?

The next tutorial walks through exactly how galdr works with your AI — the session start loop, the task cycle, and the quality gate in practice.

How galdr works with your AI →