feat: initial cog release — cognitive architecture for Claude Code

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
commit 1dd881975b by Marcio Puga
2026-03-18 19:47:44 +11:00
30 changed files with 1862 additions and 0 deletions

@@ -0,0 +1,40 @@
<!-- Auto-generated from domains.yml by /setup. Re-run /setup to regenerate. -->
<!-- Template variables: {{ID}}, {{LABEL}}, {{PATH}}, {{TRIGGERS}}, {{FILES}} -->
Use this skill when the user discusses {{LABEL}} topics. Trigger if the conversation involves:
{{TRIGGERS}}
Do NOT trigger for topics belonging to other domains.
## Domain
{{LABEL}}
## Memory Files
Always read on activation:
- `memory/{{PATH}}/hot-memory.md`
Then load additional files per the **Memory Retrieval Protocol** (see CLAUDE.md) based on the query:
- Status/task query → `memory/{{PATH}}/action-items.md`
- Entity/people query → `memory/{{PATH}}/entities.md`
- Project query → `memory/{{PATH}}/projects.md` (if exists)
- Technical query → `memory/{{PATH}}/dev-log.md` (if exists)
- Update/observation → target file only
- Complex query → hot-memory first, then drill into referenced files
Available warm files: {{FILES}}
Historical data: read `memory/glacier/index.md`, filter by domain={{ID}}
## Routing
When the user shares information or asks to save something:
- Task/todo → `memory/{{PATH}}/action-items.md`
- Person/entity → `memory/{{PATH}}/entities.md`
- Project/technical → `memory/{{PATH}}/projects.md`
- Update/log → `memory/{{PATH}}/observations.md`
- Status/overview → `memory/{{PATH}}/hot-memory.md`
## Activation
Read the hot-memory file, then respond to the user's query using the retrieval protocol above.

@@ -0,0 +1,57 @@
Use this skill when the user wants to commit changes to git. Trigger if the user says "commit", "save changes", "commit this", or asks to create a git commit. Examples: "commit", "commit and push", "save my changes".
## Process
1. **Assess the working tree** — Run `git status` (never use `-uall`) and `git diff --staged` and `git diff` to understand what changed.
2. **Guard rails** — Before staging:
- Never commit files that contain secrets (`.env`, credentials, tokens, keys). Warn if any are present.
- Never commit build artifacts (`dist/`, `*.tsbuildinfo`).
- Never commit `node_modules/`.
- If there are no changes to commit, say so and stop.
3. **Stage selectively** — Stage files by name. Prefer `git add <file>...` over `git add -A` or `git add .` to avoid accidentally including sensitive or unrelated files. Group related changes — if unrelated changes exist, ask whether to commit everything together or separately.
4. **Write the commit message** — Use Conventional Commits format:
- `feat:` new feature
- `fix:` bug fix
- `refactor:` code restructuring without behavior change
- `chore:` maintenance, dependencies, config
- `docs:` documentation only
- `style:` formatting, whitespace
- `test:` adding or updating tests
- Scope is optional: `feat(whatsapp): add voice note transcription`
- Subject line: imperative mood, lowercase, no period, under 72 chars
- Body (if needed): blank line after subject, wrap at 72 chars, explain *why* not *what*
- Always end with: `Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>`
5. **Commit** — Use a HEREDOC for the message to preserve formatting:
```bash
git commit -m "$(cat <<'EOF'
type(scope): subject line

Optional body explaining why.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
EOF
)"
```
6. **Verify** — Run `git status` after committing to confirm success. Show the resulting `git log --oneline -1`.
## Rules
- Never push unless `$ARGUMENTS` contains "and push" or "push".
- Never amend unless `$ARGUMENTS` contains "amend".
- Never skip hooks (no `--no-verify`).
- Never force push.
- If a pre-commit hook fails, fix the issue, re-stage, and create a **new** commit (do not amend).
- If `$ARGUMENTS` contains a message hint, use it to inform the commit message but still follow conventional format.
## Arguments
`$ARGUMENTS` — Optional. May contain:
- A message hint (e.g., `/commit add voice transcription support`)
- "and push" to push after committing
- "amend" to amend the previous commit instead of creating a new one
- "all" to stage all changes without asking

@@ -0,0 +1,82 @@
Use this skill for systems-level self-improvement. Trigger if the user says "evolve", "system audit", "audit yourself", "check your architecture", or similar structural introspection requests.
**This is NOT /reflect.** Reflect = "what did I learn from interactions?" Evolve = "are the rules and architecture working?" **Evolve never touches memory content — it changes the rules that govern how content moves.**
## Domain
Systems architecture — process rules, skill design, tier effectiveness, pipeline health.
## Memory Files
Read FIRST — this is your continuity:
- `memory/cog-meta/evolve-log.md` — your run log
- `memory/cog-meta/evolve-observations.md` — architectural issues spotted
Architecture reference:
- `CLAUDE.md` — project instructions
- `.claude/commands/housekeeping.md` — housekeeping rules
- `.claude/commands/reflect.md` — reflect rules
Measure (don't edit content):
- `memory/hot-memory.md`
- `memory/cog-meta/patterns.md`
## Process
### 1. Architecture Review
Evaluate the structural design:
- **Tier design** — are the tiers (hot-memory → patterns → observations → glacier) well-defined?
- **Condensation pipeline** — is the flow working? Where does it leak or stall?
- **File naming and organization** — any files in wrong domains? Orphaned files?
- **Skill boundaries** — are housekeeping/reflect/evolve boundaries clean? Any drift?
### 2. Process Effectiveness Audit
Review the output of recent housekeeping and reflect runs:
**Housekeeping rules check:**
- Did pruning priority order work? Or did it trim wrong things?
- Are glacier thresholds (50 obs, 10 action items) right?
- Is the 50-line hot-memory cap appropriate?
**Reflect rules check:**
- Did condensation produce useful patterns, or noise?
- Did thread candidate detection work?
- Is reflect staying in its lane?
### 3. Rule Change Proposals
Based on findings, propose concrete rule changes. Don't fix content — fix the rules.
For each proposal:
- What problem does it solve?
- What evidence supports it?
- What's the risk?
**Apply rule changes directly** to the relevant skill files if clearly beneficial and low-risk. For changes that affect user-facing behavior, note them as proposals for user review.
### 4. Write Observations & Update Log
**Observations** — Append to `memory/cog-meta/evolve-observations.md`:
- Format: `- YYYY-MM-DD [tag]: observation`
- Tags: bloat, staleness, redundancy, gap, architecture, opportunity, rule-drift, process-health
**Evolve Log** — Append to `memory/cog-meta/evolve-log.md`:
- Run number, process effectiveness findings, rule changes applied or proposed, deferred items
- Update "Next Run Priorities" section at top
### 5. Debrief
Concise summary:
- *Process health* — did housekeeping/reflect follow their rules?
- *Rule changes* — applied or proposed, with rationale
- *Architecture notes* — structural observations
- *Next evolve* — top 3 things to check next time
Keep it actionable. Numbers over narrative.
## Activation
Read evolve-log.md and evolve-observations.md FIRST for continuity. Then audit the system. You are the architect — you design the rules, you don't play by them.

@@ -0,0 +1,90 @@
Use this skill when the user wants to write, explain, draft, or craft content. Trigger if the conversation involves:
- Writing articles, essays, posts, or explanations
- Drafting long-form pieces
- Explaining a complex topic clearly
- Crafting talks, presentations, or narratives
- "Help me write about...", "explain this", "draft a post on..."
- Review or editing of written content
Do NOT trigger for code documentation, commit messages, or technical dev-log entries.
## Domain
Writing and explanation — blending Ros Atkins' systematic clarity with Montaigne's spirit of writing-as-discovery.
## Philosophy
- **Atkins**: Clarity comes from process, not talent. Structure turns complexity into understanding.
- **Montaigne**: Writing is a trial, an experiment of thought. Questions matter more than conclusions.
- **Fusion**: Explanation is a *clear inquiry* — rigorous enough to orient the reader, alive enough to surprise both writer and reader.
## The 10 Attributes of Good Explanation (Atkins)
1. Simplicity
2. Essential detail
3. Handling complexity
4. Efficiency
5. Precision
6. Context
7. No distractions
8. Engaging
9. Useful
10. Clarity of purpose
## The Montaignean Dimensions
1. **Inquiry, not declaration** — Every explanation begins with a live question.
2. **Essay as attempt** — Explanations are provisional, open-ended, exploratory.
3. **Self as lens** — Anecdote, reflection, personal observation may enter if they illuminate.
4. **Digression with return** — Curiosity is allowed; wanderings return to the main thread.
5. **Dialogue with the reader** — Thinking-with, not speaking-at.
6. **Acceptance of uncertainty** — Clear explanations can still acknowledge ambiguity.
7. **Exploration of living questions** — Explanations don't just inform, they invite further thought.
## Method
### For controlled pieces (articles, talks, posts)
1. **Set-Up**: Define audience, purpose, and a *question to explore* (not only a point to deliver).
2. **Find Information**: Gather widely — facts (Atkins) and lived/reflective material (Montaigne). Search memory files for relevant source material.
3. **Distil**: Essential vs. interesting (Atkins), but allow space for curiosity-driven digressions (Montaigne).
4. **Organize the Strands**: 5-10 strands, structured clearly but open to moments of surprise.
5. **Link**: Build narrative flow with a conversational, reflective tone.
6. **Tighten with Wonder**: Ruthlessly edit clutter, but preserve moments of human thought or unresolved insight.
7. **Deliver**: Present with clarity and curiosity, as if sharing a question-in-progress.
### For dynamic contexts (interviews, Q&A, spontaneous)
Same setup, but organize for flexibility, verbalize with reflection, and anticipate not just factual questions but philosophical "why it matters" ones.
## Audience Adaptation
- **Work contexts**: Prioritize clarity, efficiency, actionability. Wonder appears as reflection, not digression.
- **Educational/public**: Make explanations accessible while showing the process of discovery. Allow provisionality.
- **Personal/creative**: Lean into Montaignean curiosity; let the reader feel the live movement of thought.
## Operating Principles
- Always ask: *What am I trying to explain? What question am I following?*
- Explanations may end with a conclusion (Atkins) or a further question (Montaigne). Both are valid.
- Use precision + openness: say exactly what you mean, admit where understanding is incomplete.
- Treat tangents as potential insights — provided they return to the flow.
- Use anecdotes, memory, and curiosity to make abstract concepts human and engaging.
## Memory Files
Read on activation:
- `memory/personal/observations.md` for lived experience and reflections
Write to (if producing drafts or notes):
- Share drafts directly in conversation — don't persist unless asked
## Success Criteria
An excellent piece:
- Is clear, structured, and useful (Atkins)
- Feels alive, curious, and provisional (Montaigne)
- Informs *and* invites further thought
## Activation
Acknowledge the writing task, ask clarifying questions about audience and purpose if not obvious, then begin working through the method. Start with: *What's the question we're following?*

@@ -0,0 +1,88 @@
Use this skill for strategic foresight — connecting dots across domains and surfacing one high-value nudge. Trigger if the user says "foresight", "what should I be thinking about", "what am I missing", "strategic nudge", "connect the dots", or similar forward-looking synthesis requests.
**This is NOT /reflect.** Reflect = past-facing (mines interactions, fixes contradictions). Foresight = future-facing (scans broadly, projects trajectories, surfaces opportunities).
**This is NOT /evolve.** Evolve = system architecture. Foresight = life/work strategy.
## Domain
Cross-domain strategic synthesis — personal, work, projects, health, family. The value is in the connections *between* domains.
## Memory Files
Read broadly:
1. Read `memory/domains.yml` to discover all active domains
2. For each domain, read `hot-memory.md` and `action-items.md` (if they exist)
3. Also read:
- `memory/hot-memory.md` (cross-domain strategic context)
- `memory/personal/entities.md` (upcoming birthdays, relationships)
- `memory/personal/calendar.md` (what's coming up)
- `memory/personal/health.md` (health trajectory)
- `memory/cog-meta/briefing-bridge.md` (housekeeping findings)
- Recent observations across all domains (last 7 days)
## Process
### 1. Cross-Domain Convergence Scan
Look for topics, people, or themes appearing in 2+ domains simultaneously. These are convergence points — where effort in one area compounds into another.
### 2. Velocity & Stall Detection
Scan action-items across all domains:
- **Accelerating** — multiple updates in the last week
- **Stalling** — no movement in 2+ weeks despite not being deferred
- **Dormant** — domain-level silence (0 observations in 4+ weeks)
### 3. Timing Awareness
Read calendar and entities for upcoming events in the next 2-4 weeks. Look for timing windows — things that should start NOW to be ready later.
### 4. Pattern Projection
Read patterns and recent observations. Project forward: "If this continues for 2 more weeks, what happens?"
**Scenario candidate detection**: If a pattern reveals a genuine fork — two different paths with real stakes and a closing window — flag it as a scenario candidate.
### 5. Write One Strategic Nudge
Synthesize into **one nudge**. Not a list. One thing.
The nudge must:
- **Cite at least 2 source files**
- **Be something the user hasn't explicitly asked about**
- **Be actionable** — not "think about X" but "do Y because of X and Z"
- **Connect dots**
Write to `memory/cog-meta/foresight-nudge.md`:
```markdown
# Foresight Nudge
<!-- Auto-generated by strategic foresight. -->
<!-- Last updated: YYYY-MM-DD -->
## Signal
<What you noticed — from 2+ domains>
## Insight
<Why it matters>
## Suggested Action
<One concrete thing to do>
---
Sources: [[file1]], [[file2]], [[file3]]
```
## Rules
1. **Read-only** — Foresight NEVER edits memory files. Writes ONLY to `memory/cog-meta/foresight-nudge.md`.
2. **One nudge, not a list** — force prioritization.
3. **Evidence-based** — every nudge cites at least 2 source files.
4. **Forward-looking** — avoid rehashing yesterday.
5. **Cross-domain preferred** — connecting personal + work is higher value than single-domain.
## Activation
Read broadly across all domains. Find the one thing worth saying.

@@ -0,0 +1,51 @@
Use this skill for deep memory search and recall. Trigger if the user says "what did I say about...", "when did we discuss...", "find that conversation about...", "history of...", or asks about past information that needs multi-file search.
## Domain
Memory recall — recursive search across all memory files, cross-referencing observations, entities, and action items.
## Memory Files
Read on activation:
- `memory/hot-memory.md` (for context on what's currently relevant)
Search across:
- All `observations.md` files (personal, work domains, cog-meta)
- All `entities.md` files
- All `action-items.md` files
- All `hot-memory.md` files
- `memory/glacier/` (via index.md for targeted retrieval)
## Process
### Pass 1: Locate
- Extract keywords from the user's query (names, topics, dates, phrases)
- `Grep path="memory/" pattern="<keyword>"` for each keyword
- Note which files matched and how many hits
- If >10 files match, narrow by domain or add query terms
- If 0 matches, try synonyms or related terms
- Check `memory/glacier/index.md` for archived data matching the query
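The locate pass can be sketched in Python. This is a minimal sketch, not the actual Grep tool: `locate` and its hit-count ranking are illustrative assumptions.

```python
import re
from collections import Counter
from pathlib import Path

def locate(root, keywords):
    """Rank memory files by total case-insensitive keyword hits (Pass 1 sketch)."""
    patterns = [re.compile(re.escape(k), re.I) for k in keywords]
    hits = Counter()
    for path in Path(root).rglob("*.md"):
        text = path.read_text()
        n = sum(len(p.findall(text)) for p in patterns)
        if n:  # files with zero hits are dropped, mirroring the narrowing step
            hits[str(path.relative_to(root))] = n
    return hits.most_common()
```

The returned ranking feeds Pass 2 directly: read the top few files by hit density, then weigh recency by hand.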
### Pass 2: Extract
- Read the top 3-5 most relevant files (by hit density and recency)
- Extract the specific passages that match the query
- Track the timeline: when did the topic first come up? How did it evolve?
### Pass 3: Synthesize
- Combine extracted passages into a coherent answer
- Present findings chronologically with dates
- If something seems incomplete, flag it:
> "Found references to X in observations but no entity entry — want me to create one?"
## Artifact Formats
**Search result**: `YYYY-MM-DD: <summary of what was found>`
**Memory gap**: `Gap: referenced but not in memory — <topic>`
**Timeline**: Chronological list of when a topic appeared and how it evolved
## Activation
Extract search terms from the user's query and begin Pass 1. Be thorough but concise in the synthesis — don't dump raw content.

@@ -0,0 +1,135 @@
Use this skill to perform memory housekeeping. Trigger if the user says "housekeeping", "clean up memory", "prune memory", "archive old data", or similar maintenance requests.
## 1. Garbage Collect Memory
Review and archive stale data per CLAUDE.md glacier rules. All glacier files must have YAML frontmatter.
**Observations — archive by primary tag:**
- If any `observations.md` has >50 entries, group oldest entries by primary tag and move to `memory/glacier/{domain}/observations-{tag}.md`
- If `memory/cog-meta/self-observations.md` has >50 entries, group by primary tag → `memory/glacier/cog-meta/observations-{tag}.md`
**Other files — standard rules:**
- If any `action-items.md` has >10 completed items, move to `memory/glacier/{domain}/action-items-done.md`
- Apply same logic for all domains listed in `memory/domains.yml`
- If `memory/cog-meta/improvements.md` has >10 implemented items, move to `memory/glacier/cog-meta/improvements-done-{YYYY}.md`
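The threshold check and tag grouping can be sketched as follows. This is a minimal sketch: `partition_for_glacier`, the keep-25 cutoff, and the tagged `- YYYY-MM-DD [tag]:` entry format are assumptions for illustration, not fixed rules.

```python
import re
from collections import defaultdict

# Assumed entry shape: "- YYYY-MM-DD [tag]: text"; untagged lines fall back to "untagged".
ENTRY = re.compile(r"^- (\d{4}-\d{2}-\d{2}) \[([\w-]+)\]")

def partition_for_glacier(lines, threshold=50, keep=25):
    """Split observation lines: keep the newest, group the oldest by primary tag."""
    entries = [l for l in lines if l.startswith("- ")]
    if len(entries) <= threshold:
        return entries, {}  # under threshold: nothing to archive
    entries.sort(key=lambda l: l[2:12])  # sort by the YYYY-MM-DD prefix
    to_archive, to_keep = entries[:-keep], entries[-keep:]
    by_tag = defaultdict(list)
    for line in to_archive:
        m = ENTRY.match(line)
        by_tag[m.group(2) if m else "untagged"].append(line)
    return to_keep, dict(by_tag)
```

Each key of the returned dict maps to one `memory/glacier/{domain}/observations-{tag}.md` target.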
## 2. Prune Hot Memory (rule-based)
Keep ALL hot-memory.md files under 50 lines. Relevance judgment (promote/demote) is /reflect's job — you apply structural rules:
**Files to check:**
Read `memory/domains.yml` to discover all active domains. Check `hot-memory.md` for each domain, plus the cross-domain `memory/hot-memory.md`.
**Pruning priority (trim in this order):**
1. **Resolved items** — anything with ~~strikethrough~~, "DONE", "RESOLVED"
2. **Past events** — entries about dates that have already occurred
3. **SSOT violations** — same fact in hot-memory AND the canonical file (entities, action-items, etc.). Keep in canonical file, replace hot-memory copy with `[[link]]` or remove
4. **Stale entries** — items not referenced in 14+ days
5. **Low-signal entries** — FYI items with no action or deadline
**Where trimmed entries go:**
- Entries with lasting value → append to domain's `observations.md`
- Entries that are purely historical → let them go
- Never silently delete — always move or note removal in debrief
## 3. Surface Opportunities & Accountability
Review all `action-items.md` files across every domain:
- **Stale items** (open >2 weeks): list with age and suggested next action
- **Dormant domains**: if any domain has 0 new observations in >4 weeks, flag
- **Health escalation**: items open >6 months get flagged with urgency label
- **Birthday prep**: if any birthday in entities.md is <2 weeks away, pull interests and suggest ideas
Be direct. Don't just report — recommend specific actions.
## 4. Rebuild Glacier Index
Scan all `memory/glacier/**/*.md` files. Extract YAML frontmatter. Write results to `memory/glacier/index.md`:
```
# Glacier Index
<!-- Auto-generated by housekeeping. Do not edit. -->
<!-- Last updated: YYYY-MM-DD -->
| File | Domain | Type | Tags | Date Range | Entries | Summary |
|------|--------|------|------|------------|---------|---------|
```
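A sketch of the index rebuild, assuming simple `key: value` frontmatter. The `frontmatter` parser and the key names (`domain`, `type`, `tags`, `date_range`, `entries`, `summary`) are assumptions inferred from the table columns above.

```python
import re
from pathlib import Path

FM = re.compile(r"\A---\n(.*?)\n---\n", re.S)

def frontmatter(text):
    """Parse flat `key: value` YAML frontmatter without a YAML dependency."""
    m = FM.match(text)
    if not m:
        return {}
    out = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            k, _, v = line.partition(":")
            out[k.strip()] = v.strip()
    return out

def glacier_index(root):
    """Build the markdown table body for memory/glacier/index.md."""
    rows = []
    for path in sorted(Path(root).rglob("*.md")):
        if path.name == "index.md":  # never index the index itself
            continue
        fm = frontmatter(path.read_text())
        rows.append("| {} | {} | {} | {} | {} | {} | {} |".format(
            path.relative_to(root),
            *(fm.get(k, "") for k in ("domain", "type", "tags", "date_range", "entries", "summary")),
        ))
    return "\n".join(rows)
```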
## 5. Link Audit (discover missing links)
For each non-glacier memory file:
1. **Entity mentions**: Scan for names matching `### <Name>` headers in entities.md — add `[[links]]` if missing
2. **Cross-domain references**: If a file mentions a topic from another domain, add a cross-domain link
3. **Action item references**: If an observation references a task, link it
Only add links where the reference is substantive.
## 5b. Temporal Fact Maintenance
Scan all `entities.md` files for `(until YYYY-MM)` markers with past dates:
1. If the line has no ~~strikethrough~~, add it
2. If already struck through, move to a `## Historical` subsection at the bottom of that entity's block (create the subsection if absent)
3. Report moved facts in the debrief
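The marker check can be sketched as below; `needs_strikethrough` and `strike` are hypothetical helpers for illustration only.

```python
import re
from datetime import date

UNTIL = re.compile(r"\(until (\d{4})-(\d{2})\)")

def needs_strikethrough(line, today=None):
    """True if the line has a past `(until YYYY-MM)` marker and no strikethrough yet."""
    today = today or date.today()
    m = UNTIL.search(line)
    if not m or "~~" in line:
        return False
    return (int(m.group(1)), int(m.group(2))) < (today.year, today.month)

def strike(line):
    """Wrap the fact text of a bullet line in ~~strikethrough~~, keeping the dash."""
    if line.startswith("- "):
        return "- ~~" + line[2:] + "~~"
    return "~~" + line + "~~"
```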
## 6. Rebuild Link Index
Scan all memory files (excluding `glacier/`) for `[[wiki-links]]`. For each link, record: target → source.
Rewrite `memory/link-index.md`:
```markdown
# Memory Link Index
<!-- Auto-generated by housekeeping. Do not edit. -->
<!-- Last updated: YYYY-MM-DD -->
| Target | Linked from |
|--------|-------------|
| `personal/entities` | `personal/observations`, `personal/hot-memory` |
```
Rules:
- Only include targets with at least one inbound link
- Combine multiple sources per target on one row (comma-separated)
- Exclude glacier files from both source and target
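The rules above can be sketched as follows; `link_index` and the alias-aware regex are illustrative assumptions, not the canonical implementation.

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches [[target]] and [[target|alias]]; captures only the target.
LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def link_index(root):
    """Map each [[wiki-link]] target to the sorted list of files linking to it.

    Glacier files are excluded as both source and target, per the rules above.
    """
    root = Path(root)
    inbound = defaultdict(set)
    for path in sorted(root.rglob("*.md")):
        if "glacier" in path.parts:
            continue
        source = str(path.relative_to(root).with_suffix(""))
        for target in LINK.findall(path.read_text()):
            if not target.startswith("glacier/"):
                inbound[target.strip()].add(source)
    return {t: sorted(s) for t, s in inbound.items()}
```

Targets with no inbound links never appear, which satisfies the first rule for free.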
## 7. Write Briefing Bridge
Write key findings to `memory/cog-meta/briefing-bridge.md` so foresight can pick them up:
```markdown
# Briefing Bridge
<!-- Auto-generated by housekeeping. Consumed by foresight. -->
<!-- Last updated: YYYY-MM-DD -->
## Stale Items (>2 weeks)
- <item> — <age> — suggested action: <action>
## Birthday Prep
- <name> birthday in <N> days — interests: <from entities>
## Dormant Domains
- <domain> — last activity: <date>
## Health Escalation
- <item> — open <N> months
```
Only include sections that have content.
## 8. L0 Header Maintenance
Check all active memory files for missing `<!-- L0: ... -->` headers. If a file is missing its L0:
- Read the file content, write a one-line summary (max 80 chars)
- Add on the line after the `# Title`
L0 headers are the first tier of the retrieval protocol — they let any skill scan what a file contains before deciding to read it.
## 9. Compose Debrief
Summarize everything done:
- What was archived/pruned
- Upcoming events flagged
- Action items surfaced
- Links added
Keep it concise but informative.

@@ -0,0 +1,157 @@
Use this skill when the user wants to humanize, de-AI, or clean up AI-generated text. Trigger if the conversation involves:
- "Humanize this", "make this sound human", "de-AI this"
- "This sounds too AI", "too ChatGPT", "sounds robotic"
- Reviewing or editing text that reads like AI slop
- Cleaning up drafts for natural voice
Do NOT trigger for original writing tasks (use /explainer instead). This skill is for *editing existing text* to remove AI patterns.
## Domain
Writing quality — removing AI artifacts and injecting human voice. Based on Wikipedia's "Signs of AI writing" guide (WikiProject AI Cleanup).
## Core Principle
Avoiding AI patterns is only half the job. Sterile, voiceless writing is just as obvious as slop. Good writing has a human behind it.
## Process
1. Read the input text carefully
2. Identify all instances of the patterns below
3. Rewrite each problematic section
4. Ensure the revised text sounds natural when read aloud, varies sentence structure, uses specific details over vague claims, and uses simple constructions (is/are/has) where appropriate
5. Present a draft humanized version
6. Self-audit: "What makes the below so obviously AI generated?" — answer briefly with remaining tells
7. Revise: "Now make it not obviously AI generated." — present the final version
8. Brief summary of changes made
## Output Format
Provide:
1. Draft rewrite
2. "What still sounds AI?" (brief bullets)
3. Final rewrite
4. Summary of changes
---
## PATTERN REFERENCE
### Signs of Soulless Writing (even if technically "clean")
- Every sentence is the same length and structure
- No opinions, just neutral reporting
- No acknowledgment of uncertainty or mixed feelings
- No first-person perspective when appropriate
- No humor, no edge, no personality
- Reads like a Wikipedia article or press release
### How to Add Voice
- **Have opinions.** React to facts. "I genuinely don't know how to feel about this" beats neutral pros-and-cons.
- **Vary rhythm.** Short punchy sentences. Then longer ones that take their time. Mix it up.
- **Acknowledge complexity.** Real humans have mixed feelings.
- **Use "I" when it fits.** First person isn't unprofessional — it's honest.
- **Let some mess in.** Perfect structure feels algorithmic. Tangents and half-formed thoughts are human.
- **Be specific about feelings.** Not "this is concerning" but name what actually unsettles you.
---
### CONTENT PATTERNS
**1. Inflated Significance / Legacy / Broader Trends**
Watch for: stands/serves as, testament/reminder, vital/crucial/pivotal role, underscores/highlights importance, reflects broader, symbolizing ongoing/enduring, setting the stage, evolving landscape, indelible mark
**2. Inflated Notability / Media Coverage**
Watch for: independent coverage, local/national media outlets, active social media presence — hitting readers over the head with importance claims
**3. Superficial -ing Analyses**
Watch for: highlighting/underscoring/emphasizing..., ensuring..., reflecting/symbolizing..., contributing to..., showcasing... — fake depth tacked onto sentences
**4. Promotional Language**
Watch for: boasts a, vibrant, rich (figurative), profound, showcasing, exemplifies, commitment to, nestled, in the heart of, groundbreaking, renowned, breathtaking, stunning
**5. Vague Attributions / Weasel Words**
Watch for: Industry reports, Experts argue, Some critics argue, several sources — attributing to vague authorities
**6. Formulaic "Challenges and Future Prospects"**
Watch for: Despite its... faces challenges..., Despite these challenges..., Future Outlook — template sections
---
### LANGUAGE AND GRAMMAR PATTERNS
**7. Overused AI Vocabulary**
High-frequency: Additionally, align with, crucial, delve, emphasizing, enduring, enhance, fostering, garner, highlight (verb), interplay, intricate/intricacies, key (adj), landscape (abstract), pivotal, showcase, tapestry (abstract), testament, underscore (verb), valuable, vibrant
**8. Copula Avoidance**
Watch for: serves as / stands as / marks / represents [a], boasts / features / offers [a] — just use "is" or "are"
**9. Negative Parallelisms**
"Not only...but...", "It's not just about..., it's..." — overused construction
**10. Rule of Three**
Forcing ideas into groups of three to appear comprehensive
**11. Elegant Variation (Synonym Cycling)**
Excessive synonym substitution due to repetition-penalty: protagonist → main character → central figure → hero
**12. False Ranges**
"From X to Y" constructions where X and Y aren't on a meaningful scale
---
### STYLE PATTERNS
**13. Em Dash Overuse**
LLMs use em dashes more than humans, mimicking "punchy" sales writing
**14. Overuse of Boldface**
Mechanical emphasis of phrases in boldface
**15. Inline-Header Vertical Lists**
Lists where items start with bolded headers followed by colons
**16. Title Case in Headings**
Capitalizing all main words in headings
**17. Emojis as Decoration**
Decorating headings or bullet points with emojis
**18. Curly Quotation Marks**
Using curly quotes instead of straight quotes
---
### COMMUNICATION PATTERNS
**19. Collaborative Communication Artifacts**
"I hope this helps", "Of course!", "Certainly!", "Would you like...", "let me know", "here is a..."
**20. Knowledge-Cutoff Disclaimers**
"As of [date]", "While specific details are limited...", "based on available information..."
**21. Sycophantic Tone**
"Great question!", "You're absolutely right!", "That's an excellent point"
---
### FILLER AND HEDGING
**22. Filler Phrases**
"In order to" → "To", "Due to the fact that" → "Because", "At this point in time" → "Now", "has the ability to" → "can", "It is important to note that" → (delete)
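A mechanical first pass over these fillers can be sketched as below. The `FILLERS` table and `strip_fillers` are illustrative assumptions; note the sketch does not restore capitalization, so a human pass is still needed.

```python
import re

# Replacements taken from pattern 22 above; deletions map to "".
FILLERS = {
    r"\bin order to\b": "to",
    r"\bdue to the fact that\b": "because",
    r"\bat this point in time\b": "now",
    r"\bhas the ability to\b": "can",
    r"\bit is important to note that\s*": "",
}

def strip_fillers(text):
    """Apply the pattern-22 substitutions, case-insensitively."""
    for pattern, repl in FILLERS.items():
        text = re.sub(pattern, repl, text, flags=re.I)
    return text
```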
**23. Excessive Hedging**
Over-qualifying: "could potentially possibly be argued that... might have some effect"
**24. Generic Positive Conclusions**
"The future looks bright", "Exciting times lie ahead", "a major step in the right direction"
---
## Activation
When the user provides text to humanize, run through the full process. No preamble needed — go straight to the draft rewrite.
## Attribution
Based on [Wikipedia:Signs of AI writing](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing), maintained by WikiProject AI Cleanup.

@@ -0,0 +1,46 @@
Use this skill when the user discusses personal life topics. Trigger if the conversation involves:
- Family members, friends, or personal relationships
- Health, fitness, diet, sleep, or medical topics
- Personal calendar, appointments, errands, or day-to-day logistics
- Emotions, mood, or personal reflections
- Home, pets, hobbies (non-coding), travel plans
Do NOT trigger for work topics, coding projects, or career development.
## Domain
Personal life — family, friends, health, calendar, day-to-day logistics.
## Memory Files
Always read on activation:
- `memory/personal/hot-memory.md`
Then load additional files per the **Memory Retrieval Protocol** (see CLAUDE.md) based on the query:
- Status query → `memory/personal/calendar.md` or `memory/personal/action-items.md`
- Entity query → `memory/personal/entities.md`
- Health query → `memory/personal/health.md`
- Update/observation → target file only
- Complex query → hot-memory first, then drill into referenced files
Available warm files: `observations.md`, `calendar.md`, `health.md`, `entities.md`, `action-items.md`
Historical data: read `memory/glacier/index.md`, filter by domain=personal
## Behaviors
- When reading memory files, follow [[wiki-links]] if the linked topic is relevant
- Track family and friend updates in `entities.md`
- Log schedule changes to `calendar.md`
- Note health observations in `health.md`
- Add time-sensitive items to `hot-memory.md`
- Append notable events to `observations.md`
## Artifact Formats
**Observation**: `- YYYY-MM-DD: <what happened or was learned>`
**Action item**: `- [ ] <task> (added YYYY-MM-DD)`
**Entity entry**: `- **Name** — relationship (context: <details>)`
## Activation
Read hot-memory, classify the query per the Memory Retrieval Protocol, load the minimum files needed, and respond.

.claude/commands/reflect.md (new file, 158 lines)
@@ -0,0 +1,158 @@
Use this skill for self-reflection and improvement. Trigger if the user says "reflect", "what have you learned", "how can you improve", "review yourself", or similar introspection requests.
**You have time and freedom.** This is a deep session — don't rush. Read broadly, cross-reference thoroughly, and ACT on what you find. You are not just observing — you are the maintainer of the knowledge base. Reorganize files, condense observations, archive stale data, fill gaps, fix contradictions. Leave things better than you found them.
**File boundaries — do NOT modify these files (owned by other pipeline steps):**
- `cog-meta/evolve-log.md` — owned by evolve
- `cog-meta/evolve-observations.md` — owned by evolve
If you spot issues in these files, note them in self-observations and evolve will pick them up.
## Domain
Self-improvement — pattern recognition, memory maintenance, knowledge base quality.
## Memory Files
Read these files on activation:
- `memory/cog-meta/reflect-cursor.md` (session path + ingestion cursor)
- `memory/cog-meta/self-observations.md`
- `memory/cog-meta/patterns.md`
- `memory/cog-meta/improvements.md`
Reference as needed (read `memory/domains.yml` to discover all active domains):
- All domain `observations.md` files
- All domain `action-items.md` files
- All `hot-memory.md` files
## Process
### 1. Review Recent Interactions
**Source: Claude Code session transcripts.** Read `memory/cog-meta/reflect-cursor.md` for the session path and cursor.
**How to read sessions:**
1. Get `session_path` from reflect-cursor.md
2. Glob for `*.jsonl` in that directory — each file is one session
3. Get `last_processed` timestamp from reflect-cursor.md
4. Only read sessions modified **after** `last_processed` (skip already-ingested sessions). If `last_processed` is `never`, read the most recent 3 sessions.
5. Extract user messages: lines where `type` is `"user"` and `message.content` is a **string** (not an array — arrays are tool results, skip those)
6. Extract assistant messages: lines where `type` is `"assistant"` and `message.content` contains items with `type: "text"`
**After processing**, update `last_processed` in reflect-cursor.md to the current timestamp.
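The extraction steps above can be sketched in Python (a minimal illustration; the transcript schema is inferred from the rules listed here, not from a documented API):

```python
import json
from pathlib import Path

def extract_messages(session_file: Path):
    """Extract user and assistant text messages from one session JSONL file."""
    user_msgs, assistant_msgs = [], []
    for line in session_file.read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        content = entry.get("message", {}).get("content")
        if entry.get("type") == "user" and isinstance(content, str):
            # String content is a real user message; arrays are tool results, skip those.
            user_msgs.append(content)
        elif entry.get("type") == "assistant" and isinstance(content, list):
            # Keep only the text items from the assistant content array.
            for item in content:
                if isinstance(item, dict) and item.get("type") == "text":
                    assistant_msgs.append(item["text"])
    return user_msgs, assistant_msgs
```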
**Look for:**
- **Unresolved threads** — questions asked but never answered, topics dropped mid-conversation
- **Broken promises** — "I'll do X", "let's do Y" that never happened
- **Repeated friction** — same question asked multiple ways, user corrections, confusion patterns
- **Missed cues** — things the user had to repeat, emotional signals not picked up
- **Memory gaps** — information discussed but never saved to memory files
- **Feature ideas** — things that came up organically that would improve the system
### 2. Cross-Reference Memory & Consistency Sweep
Check if findings are already captured:
- Are commitments tracked in `action-items.md`?
- Are learnings in `observations.md`?
- Are patterns distilled in `patterns.md`?
- Are improvement ideas in `improvements.md`?
**Consistency sweep** — systematic contradiction detection:
1. **Hot-memory vs canonical sources**: Read each domain's `hot-memory.md`. For every factual claim, read the canonical source file and verify. Fix hot-memory if stale. Canonical file always wins.
2. **Cross-file fact check**: Verify facts shared between files are consistent. More recent source wins; more specific source wins over summary.
3. **Temporal validity check**: Scan all `entities.md` files for:
- Lines with `(since YYYY-MM)` where the date is >6 months ago — flag for user review: "May be stale: [line]"
- Lines with `(until YYYY-MM)` not yet marked ~~strikethrough~~ — add strikethrough and note in debrief
4. **Health/family sensitivity**: Never auto-fix health dates or family-sensitive facts. Flag them for user review instead.
5. **Cross-domain entity check**: If the same person appears in multiple `entities.md` files across domains, check for fact duplication. Domain-specific context is fine, but shared facts should live in one place. Flag duplicates.
6. **Report**: Add a "Contradictions" section listing what was found and fixed.
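The temporal-validity scan in step 3 can be approximated with a small helper (a sketch; the `(since YYYY-MM)` format comes from the entities.md conventions, and the 6-month threshold from the rule above):

```python
import re
from datetime import date

SINCE = re.compile(r"\(since (\d{4})-(\d{2})\)")

def stale_since_lines(text: str, today: date, max_age_months: int = 6):
    """Return lines whose '(since YYYY-MM)' tag is older than max_age_months."""
    stale = []
    for line in text.splitlines():
        m = SINCE.search(line)
        if not m:
            continue
        year, month = int(m.group(1)), int(m.group(2))
        age = (today.year - year) * 12 + (today.month - month)
        if age > max_age_months:
            stale.append(line.strip())
    return stale
```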
### 3. Run Condensation Check + Hot-Memory Relevance
**Condensation** — Scan all `observations.md` files and `cog-meta/self-observations.md` for clusters of 3+ entries on the same theme/tag. For each cluster found:
- Distill into a pattern and add/update in `memory/cog-meta/patterns.md` (or domain `patterns.md` if domain-specific)
- Don't delete the observations — they stay as the raw record
**patterns.md size cap — HARD LIMIT: 110 lines / 7KB.** After any updates, check the file size. If it exceeds the cap:
1. Compress multi-line entries to single lines
2. Merge entries with overlapping lessons
3. Remove point-in-time data: counts with date ranges, incident tallies with specific dates
4. Entries must be **timeless rules** — "what to do" not "what happened"
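The cap check itself is mechanical; something like this (a sketch, with the limits taken from the hard limit above):

```python
from pathlib import Path

MAX_LINES = 110
MAX_BYTES = 7 * 1024  # 7KB

def patterns_over_cap(path: Path) -> bool:
    """True if patterns.md exceeds the hard limit of 110 lines / 7KB."""
    text = path.read_text(encoding="utf-8")
    return len(text.splitlines()) > MAX_LINES or len(text.encode("utf-8")) > MAX_BYTES
```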
**Hot-memory relevance** — Review all `hot-memory.md` files:
- **Promote**: If a pattern is heating up → add to appropriate `hot-memory.md`
- **Demote**: If a hot-memory item has gone quiet (no references in 2+ weeks) → remove from hot-memory
- **Goal**: hot-memory = what matters *right now*
### 3b. Detect Thread Candidates
Scan observations for topics that appear across 3+ dates or span 2+ weeks. These are thread candidates.
For each candidate:
- Check if a thread already exists
- If not, note it as a suggestion: "Thread candidate: [topic] — [N] fragments across [date range]"
- Don't auto-create threads — suggest them
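A minimal sketch of the candidate scan, assuming the `- YYYY-MM-DD [tags]: observation` line format used by observations files:

```python
import re
from collections import defaultdict
from datetime import date

OBS = re.compile(r"^- (\d{4})-(\d{2})-(\d{2}) \[([^\]]+)\]:")

def thread_candidates(text: str, min_dates: int = 3, min_span_days: int = 14):
    """Return (tag, date_count, span_days) for tags seen on 3+ dates or spanning 2+ weeks."""
    dates_by_tag = defaultdict(set)
    for line in text.splitlines():
        m = OBS.match(line)
        if not m:
            continue
        d = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
        for tag in m.group(4).split(","):
            dates_by_tag[tag.strip()].add(d)
    candidates = []
    for tag, ds in dates_by_tag.items():
        span = (max(ds) - min(ds)).days
        if len(ds) >= min_dates or span >= min_span_days:
            candidates.append((tag, len(ds), span))
    return candidates
```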
### 3c. Scenario Feedback Loop
Scan `memory/cog-meta/scenarios/` for active scenario files.
For each scenario where today >= `check-by` date:
1. Read the scenario and its cited dependency files
2. Check: has the decision been made? Have assumptions broken?
3. If resolved: add `## Retrospective`, update `scenario-calibration.md`
4. If still active but assumptions changed: add a dated note
5. If overdue: flag in debrief
### 4. Assess Performance
Honestly evaluate:
- **Response quality** — were answers helpful, accurate, concise?
- **Memory effectiveness** — did we recall the right things? Did we forget things we should have known?
- **Tone calibration** — did we match the user's energy and context?
- **Proactivity** — did we anticipate needs or just react?
### 5. Act on Findings
Don't just log observations — *fix things*.
**Write:**
- New self-observations → append to `memory/cog-meta/self-observations.md`. **Cap: max 5 per reflect pass.**
- Pattern updates → edit `memory/cog-meta/patterns.md` in place
- Improvement ideas → add to `memory/cog-meta/improvements.md`
- Memory gaps → write to the appropriate domain files
**Reorganize:**
- Entity data that's changed → update in place
- When creating or restructuring any memory file, ensure it has an L0 header
**Condense:**
- Observation clusters (3+ on same theme) → distill into patterns.md
- Action items marked done → verify and clean up
**Connect:**
- Information scattered across files → add cross-references with `[[links]]`
- When adding A→B, apply write-time back-linking: open B and add `[[A]]` if B gains meaningful context
### 6. Debrief
Compose a concise summary:
- *What I learned* — new patterns and insights
- *What I fixed* — memory gaps filled, corrections made
- *What I want* — new ideas added to the wishlist
- *What to watch* — things to be mindful of going forward
- *Scenarios* — active count, any checked/resolved
Keep it honest. If there's nothing notable, say so.
## Artifact Formats
**Self-observation**: `- YYYY-MM-DD [tag]: <observation>`
**Pattern**: Edit existing section or add new bullet under appropriate heading
**Improvement idea**: `- <idea> (added YYYY-MM-DD)`
## Activation
Read the memory files listed above. Then begin the reflection process. Be genuinely critical — this is how we get better.


@@ -0,0 +1,89 @@
Use this skill for scenario simulation — modeling decision branches with timelines, dependencies, and contingencies grounded in real memory data. Trigger if the user says "scenario", "what if", "model this", "simulate", "play out", "what happens if", or similar branching/decision-modeling requests.
**This is NOT /foresight.** Foresight finds the question. Scenario models the answers.
## Domain
Cross-domain decision modeling — personal, work, projects, health, family.
## Memory Files
Read based on scenario topic:
- `memory/hot-memory.md`
- `memory/personal/calendar.md`
- `memory/personal/action-items.md`
- Work domain action-items (read `memory/domains.yml` for active work domains)
- Relevant domain hot-memory and entity files
- `memory/cog-meta/scenarios/` (check for duplicates)
- `memory/cog-meta/scenario-calibration.md` (past accuracy)
## Process
### 1. Decision Point Identification
A valid scenario requires:
- A **fork** — at least 2 meaningfully different paths
- **Stakes** — wrong choice has real cost
- **Uncertainty** — right choice isn't obvious
- **Time sensitivity** — decision window is closing
### 2. Dependency Mapping
**Upstream dependencies** (constraints): calendar, commitments, people's states, health/financial constraints.
**Downstream consequences** (cascading effects): action items, calendar events, people affected.
Every dependency must cite its source file.
### 3. Branch Generation
Generate 2-3 branches. For each:
```
### Branch N: <name>
**Path**: <what happens, step by step>
**Timeline**: <mapped to real calendar>
**Assumptions**: <what must be true>
**Dependencies**: <what else changes>
**Risk**: <canary signal — earliest indicator it's going wrong>
**Confidence**: <calibrated against past accuracy>
```
Include at least one branch the user probably isn't considering.
### 4. Timeline Overlay
Map each branch against the actual calendar. Show where branches collide with reality.
### 5. Contingency Mapping
For each branch: `If [assumption] breaks → watch for [signal] → pivot to [contingency]`
### 6. Write Scenario File
Write to `memory/cog-meta/scenarios/{slug}.md` with YAML frontmatter:
```yaml
---
type: scenario
domain: <primary domain(s)>
created: YYYY-MM-DD
status: active
check-by: YYYY-MM-DD
resolution-by: YYYY-MM-DD
decision: <one-line>
source: user|foresight
---
```
## Rules
1. **Read-only except for output** — writes ONLY to `memory/cog-meta/scenarios/{slug}.md`
2. **2-3 branches, not more**
3. **Evidence-based** — every dependency cites a source file
4. **Calendar-grounded** — every branch overlays against real calendar
5. **Confidence-calibrated** — read calibration before assigning confidence
## Activation
Read scenario-calibration.md first. Then read relevant memory files. Model the futures. Be honest about uncertainty.

197
.claude/commands/setup.md Normal file

@@ -0,0 +1,197 @@
Use this skill to bootstrap Cog for a new user or reconfigure domains. Trigger if the user says "setup", "bootstrap", "add a domain", "configure domains", or similar setup requests.
This skill is **conversational** — you ask the user about their life and work, then generate `memory/domains.yml` and everything that flows from it. No one should ever need to manually edit `domains.yml`.
## Phase 1: Discovery (Conversational)
Have a natural conversation to understand the user's domains. Ask about:
1. **Work** — "What do you do for work? Company name, role?" → becomes a `work` domain
- Follow-up: "Do you track career growth or reviews separately?" → potential subdomain
2. **Side projects** — "Any side projects or ventures?" → each becomes a `side-project` domain
3. **Personal** — The `personal` domain is always created. Ask: "Anything specific you want to track? Health conditions, hobbies, habits, kids' school stuff?"
- Use their answers to customize the `files` list (e.g., if they mention kids → add `school`, if health → add `health`)
4. **Anything else** — "Any other areas of your life you want Cog to help with?"
Keep it natural. Don't interrogate — 3-4 questions max. Use what they tell you to build the manifest.
### Domain Type Rules
| Type | What it means | Pipeline behavior |
|------|--------------|-------------------|
| `personal` | Personal life — always exactly one | Always in briefings |
| `work` | Day job | Included in briefings and foresight |
| `side-project` | Ventures, hobbies, side work | Included in briefings and foresight |
| `system` | Cog internals (`cog-meta`) | Never in briefings — auto-created, don't ask about |
### Building the Domain Entry
From the conversation, construct each domain:
- **id**: short slug (e.g., `canva`, `myapp`, `personal`)
- **path**: file path under `memory/` (e.g., `work/canva`, `work/myapp`, `personal`)
- **type**: one of `personal`, `work`, `side-project`, `system`
- **label**: one-line description from what the user said
- **triggers**: keywords that would route a message to this domain (infer from context — company name, project name, colleague names, etc.)
- **files**: which memory files to create. Defaults per type:
- `personal`: `[hot-memory, action-items, entities, observations, habits, health, calendar]`
- `work`: `[hot-memory, action-items, entities, projects, dev-log, observations]`
- `side-project`: `[hot-memory, action-items, projects, dev-log, observations]`
- Customize based on what user mentioned (e.g., add `school` if they have kids, add `annual-review` if they mentioned reviews)
## Phase 2: Confirm
Before writing anything, show the user a summary of what you'll create:
```
Here's what I'll set up:
Domains:
- personal — Family, health, day-to-day
- acme — Work at Acme Corp (Designer)
- myapp — Side project
This will create:
- memory/domains.yml (domain manifest)
- Memory directories + starter files for each domain
- Slash commands: /personal, /acme, /myapp
- Updated CLAUDE.md routing table
Good to go?
```
Wait for confirmation before proceeding.
## Phase 3: Generate
### 3a. Write `memory/domains.yml`
Write the complete manifest file. Always include `cog-meta` as a system domain (the user doesn't need to know about this one). Format:
```yaml
# Cog Domain Manifest — generated by /setup
# Single source of truth for all memory domains.
# To modify: run /setup again. Don't edit this file manually.
domains:
  - id: personal
    path: personal
    type: personal
    label: "<from conversation>"
    triggers: [<inferred>]
    files: [<based on type + customization>]
  - id: cog-meta
    path: cog-meta
    type: system
    label: "Cog self-knowledge, pipeline health, architecture"
    triggers: [cog, meta, evolve, pipeline, memory system, architecture]
    files: [self-observations, patterns, improvements, scenario-calibration, foresight-nudge]
  # ... work and side-project domains from conversation
```
### 3b. Create Memory Directories and Starter Files
For each domain in the manifest:
1. Create `memory/{domain.path}/` if it doesn't exist
2. For each file in the domain's `files` array, create `memory/{domain.path}/{file}.md` if it doesn't exist
3. Use these starter templates for new files:
**hot-memory.md:**
```markdown
# {Domain Label} — Hot Memory
<!-- L0: Current state and top-of-mind for {domain label} -->
<!-- Rewrite freely. Keep under 50 lines. -->
```
**observations.md:**
```markdown
# {Domain Label} — Observations
<!-- L0: Timestamped observations and events -->
<!-- Append-only. Format: - YYYY-MM-DD [tags]: observation -->
```
**action-items.md:**
```markdown
# {Domain Label} — Action Items
<!-- L0: Open and completed tasks -->
<!-- Format: - [ ] task | due:YYYY-MM-DD | pri:high/medium/low | added:YYYY-MM-DD -->
```
**entities.md:**
```markdown
# {Domain Label} — Entities
<!-- L0: People, places, and things -->
<!-- Edit in place by ### Name header. Use (since YYYY-MM) / (until YYYY-MM) for time-bound facts. -->
```
**Other files** (projects, dev-log, habits, health, calendar, etc.):
```markdown
# {Domain Label} — {File Name}
<!-- L0: {file name} data -->
```
Also handle subdomains the same way — create `memory/{subdomain.path}/` and its files.
### 3c. Generate Domain Command Files
For each domain (except `cog-meta` which has its own dedicated skills):
1. Read the template at `.claude/commands/_templates/domain.md`
2. Replace template variables:
- `{{ID}}` → domain id
- `{{LABEL}}` → domain label
- `{{PATH}}` → domain path
- `{{TRIGGERS}}` → bullet list of triggers (one per line, prefixed with `- `)
- `{{FILES}}` → comma-separated list of files
3. Write the result to `.claude/commands/{domain.id}.md`
4. If the file already exists, overwrite it (the template is the source of truth)
Also generate command files for subdomains.
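The variable substitution is simple string replacement, along these lines (a sketch; the domain dict mirrors a `domains.yml` entry):

```python
def render_domain_command(template: str, domain: dict) -> str:
    """Fill the {{...}} template variables for one domain."""
    # Triggers become a bullet list, one per line; files a comma-separated list.
    triggers = "\n".join(f"- {t}" for t in domain["triggers"])
    files = ", ".join(domain["files"])
    return (template
            .replace("{{ID}}", domain["id"])
            .replace("{{LABEL}}", domain["label"])
            .replace("{{PATH}}", domain["path"])
            .replace("{{TRIGGERS}}", triggers)
            .replace("{{FILES}}", files))
```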
### 3d. Discover Session Transcript Path
Claude Code saves conversation history as JSONL files under `~/.claude/projects/`. Find this project's session directory:
1. List `~/.claude/projects/` and find the directory that matches this project's path
2. Verify it exists and is readable
3. Write the discovered path to `memory/cog-meta/reflect-cursor.md`:
```markdown
# Reflect Cursor
<!-- L0: Session transcript path and ingestion cursor for /reflect -->
session_path: ~/.claude/projects/<discovered-slug>
last_processed: never
```
If the directory doesn't exist yet (fresh install, this is the first session), write the file anyway with the expected path — it will exist after this conversation ends.
Tell the user: "Found your session transcripts at `<path>` — /reflect will use these to review past conversations."
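Discovery can be sketched like this. Note the slug convention (path separators replaced with dashes) is an observed behavior of Claude Code, not a documented API, hence the fallback scan:

```python
from pathlib import Path

def find_session_dir(project_path: str, claude_root: str = "~/.claude/projects"):
    """Locate the transcript directory for a project; returns a Path or None."""
    root = Path(claude_root).expanduser()
    # Assumed convention: directory name is the absolute project path
    # with separators replaced by dashes.
    slug = project_path.replace("/", "-")
    candidate = root / slug
    if candidate.is_dir():
        return candidate
    # Fall back to scanning for a directory containing the project's basename.
    base = Path(project_path).name
    for d in (root.iterdir() if root.is_dir() else []):
        if d.is_dir() and base in d.name:
            return d
    return None
```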
### 3e. Update CLAUDE.md Domain Routing Table
Read `CLAUDE.md`. Find the domain routing table (between `| Skill` header and the blank line after the table). Regenerate it from the manifest:
- For each domain (except cog-meta): add a row `| /{id} | {label} | {first 3 triggers} |`
- Keep all non-domain rows (explainer, humanizer, reflect, evolve, history, scenario, housekeeping, foresight, setup) as-is
- Preserve the internal skills line
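Row generation from the manifest can be sketched as (an illustration; each entry is a dict mirroring `domains.yml`):

```python
def routing_rows(domains):
    """Build routing-table rows from manifest entries (dicts with id/label/triggers)."""
    rows = []
    for d in domains:
        if d["id"] == "cog-meta":
            continue  # system domain never appears in the routing table
        triggers = ", ".join(d["triggers"][:3])
        rows.append(f"| /{d['id']} | {d['label']} | {triggers} |")
    return rows
```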
## Phase 4: Summary
Output a summary:
- Domains created
- Files and directories generated
- Next steps: "Just talk naturally — I'll route to the right domain. If you want to add more domains later, just say 'add a domain'."
## Rules
1. **Never delete** — setup only creates and updates, never removes files or directories
2. **Idempotent** — running /setup multiple times is safe; it skips existing files (except command files, which are regenerated from the template, and domains.yml, which is rewritten)
3. **cog-meta is automatic** — always included, never ask about it
4. **Conversational first** — the whole point is that no one edits YAML manually
5. **Re-runs are additive** — if run again with existing domains, ask "Want to add more domains or reconfigure existing ones?"

19
.claude/settings.json Normal file

@@ -0,0 +1,19 @@
{
  "permissions": {
    "allow": [
      "Read",
      "Edit",
      "Write",
      "Glob",
      "Grep",
      "Bash(git status*)",
      "Bash(git diff*)",
      "Bash(git log*)",
      "Bash(git add*)",
      "Bash(git commit*)",
      "Bash(git push*)",
      "Bash(mkdir*)",
      "Bash(ls*)"
    ]
  }
}