4 Commits

Author              SHA1         Message                                     Date
Danilo Reyes        d448e0f6c8   reviewing                                   2026-01-30 16:42:29 -06:00
Danilo Reyes        5da9abf1b7   init                                        2026-01-30 16:31:02 -06:00
NixOS Builder Bot   93411591ef   Weekly flake update: 2026-01-30 09:47 UTC   2026-01-30 03:47:10 -06:00
Danilo Reyes        832054d987   prem2resolve                                2026-01-28 06:30:10 -06:00
                                 (CI: Weekly NixOS Build & Cache / build-and-cache (push), successful in 16m28s)
32 changed files with 2845 additions and 44 deletions

.gitignore

@@ -2,7 +2,15 @@
config.el
*.qcow2
result
.codex/
# Prevent accidentally committing unencrypted secrets
**/secrets/*.yaml.dec
**/*-decrypted.*
**/temp-secrets.*
# Editor/OS artifacts
.DS_Store
Thumbs.db
.vscode/
.idea/
*.swp
*.tmp
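
A quick way to confirm the new secret patterns behave as intended is to probe them with `git check-ignore` before relying on them. A minimal sketch; the probed paths are hypothetical, not files in this repository:

```bash
# -v prints the .gitignore rule that matched each path; the command
# exits non-zero if none of the paths are ignored.
git check-ignore -v \
  hosts/secrets/wireguard.yaml.dec \
  notes-decrypted.md \
  temp-secrets.env
# Expected output attributes each path to one of the new patterns, e.g.
# .gitignore:N:**/secrets/*.yaml.dec	hosts/secrets/wireguard.yaml.dec
```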

.specify/memory/constitution.md

@@ -0,0 +1,50 @@
# [PROJECT_NAME] Constitution
<!-- Example: Spec Constitution, TaskFlow Constitution, etc. -->
## Core Principles
### [PRINCIPLE_1_NAME]
<!-- Example: I. Library-First -->
[PRINCIPLE_1_DESCRIPTION]
<!-- Example: Every feature starts as a standalone library; Libraries must be self-contained, independently testable, documented; Clear purpose required - no organizational-only libraries -->
### [PRINCIPLE_2_NAME]
<!-- Example: II. CLI Interface -->
[PRINCIPLE_2_DESCRIPTION]
<!-- Example: Every library exposes functionality via CLI; Text in/out protocol: stdin/args → stdout, errors → stderr; Support JSON + human-readable formats -->
### [PRINCIPLE_3_NAME]
<!-- Example: III. Test-First (NON-NEGOTIABLE) -->
[PRINCIPLE_3_DESCRIPTION]
<!-- Example: TDD mandatory: Tests written → User approved → Tests fail → Then implement; Red-Green-Refactor cycle strictly enforced -->
### [PRINCIPLE_4_NAME]
<!-- Example: IV. Integration Testing -->
[PRINCIPLE_4_DESCRIPTION]
<!-- Example: Focus areas requiring integration tests: New library contract tests, Contract changes, Inter-service communication, Shared schemas -->
### [PRINCIPLE_5_NAME]
<!-- Example: V. Observability, VI. Versioning & Breaking Changes, VII. Simplicity -->
[PRINCIPLE_5_DESCRIPTION]
<!-- Example: Text I/O ensures debuggability; Structured logging required; Or: MAJOR.MINOR.BUILD format; Or: Start simple, YAGNI principles -->
## [SECTION_2_NAME]
<!-- Example: Additional Constraints, Security Requirements, Performance Standards, etc. -->
[SECTION_2_CONTENT]
<!-- Example: Technology stack requirements, compliance standards, deployment policies, etc. -->
## [SECTION_3_NAME]
<!-- Example: Development Workflow, Review Process, Quality Gates, etc. -->
[SECTION_3_CONTENT]
<!-- Example: Code review requirements, testing gates, deployment approval process, etc. -->
## Governance
<!-- Example: Constitution supersedes all other practices; Amendments require documentation, approval, migration plan -->
[GOVERNANCE_RULES]
<!-- Example: All PRs/reviews must verify compliance; Complexity must be justified; Use [GUIDANCE_FILE] for runtime development guidance -->
**Version**: [CONSTITUTION_VERSION] | **Ratified**: [RATIFICATION_DATE] | **Last Amended**: [LAST_AMENDED_DATE]
<!-- Example: Version: 2.1.1 | Ratified: 2025-06-13 | Last Amended: 2025-07-16 -->

.specify/scripts/bash/check-prerequisites.sh

@@ -0,0 +1,166 @@
#!/usr/bin/env bash
# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for the Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
# --json Output in JSON format
# --require-tasks Require tasks.md to exist (for implementation phase)
# --include-tasks Include tasks.md in AVAILABLE_DOCS list
# --paths-only Only output path variables (no validation)
# --help, -h Show help message
#
# OUTPUTS:
# JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
# Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
# Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
set -e
# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--require-tasks)
REQUIRE_TASKS=true
;;
--include-tasks)
INCLUDE_TASKS=true
;;
--paths-only)
PATHS_ONLY=true
;;
--help|-h)
cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]
Consolidated prerequisite checking for the Spec-Driven Development workflow.
OPTIONS:
--json Output in JSON format
--require-tasks Require tasks.md to exist (for implementation phase)
--include-tasks Include tasks.md in AVAILABLE_DOCS list
--paths-only Only output path variables (no prerequisite validation)
--help, -h Show this help message
EXAMPLES:
# Check task prerequisites (plan.md required)
./check-prerequisites.sh --json
# Check implementation prerequisites (plan.md + tasks.md required)
./check-prerequisites.sh --json --require-tasks --include-tasks
# Get feature paths only (no validation)
./check-prerequisites.sh --paths-only
EOF
exit 0
;;
*)
echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
exit 1
;;
esac
done
# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
if $JSON_MODE; then
# Minimal JSON paths payload (no validation performed)
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
"$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
else
echo "REPO_ROOT: $REPO_ROOT"
echo "BRANCH: $CURRENT_BRANCH"
echo "FEATURE_DIR: $FEATURE_DIR"
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "TASKS: $TASKS"
fi
exit 0
fi
# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
echo "Run /speckit.specify first to create the feature structure." >&2
exit 1
fi
if [[ ! -f "$IMPL_PLAN" ]]; then
echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.plan first to create the implementation plan." >&2
exit 1
fi
# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task list." >&2
exit 1
fi
# Build list of available documents
docs=()
# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
docs+=("contracts/")
fi
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
docs+=("tasks.md")
fi
# Output results
if $JSON_MODE; then
# Build JSON array of documents
if [[ ${#docs[@]} -eq 0 ]]; then
json_docs="[]"
else
json_docs=$(printf '"%s",' "${docs[@]}")
json_docs="[${json_docs%,}]"
fi
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
else
# Text output
echo "FEATURE_DIR:$FEATURE_DIR"
echo "AVAILABLE_DOCS:"
# Show status of each potential document
check_file "$RESEARCH" "research.md"
check_file "$DATA_MODEL" "data-model.md"
check_dir "$CONTRACTS_DIR" "contracts/"
check_file "$QUICKSTART" "quickstart.md"
if $INCLUDE_TASKS; then
check_file "$TASKS" "tasks.md"
fi
fi
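
The JSON contract above is straightforward to consume from other tooling. A minimal sketch, assuming `jq` is installed and the script is run from a feature branch:

```bash
# Capture the payload once, then pull fields out of it.
json=$(./check-prerequisites.sh --json --include-tasks)

feature_dir=$(echo "$json" | jq -r '.FEATURE_DIR')

# AVAILABLE_DOCS is a JSON array of file names relative to FEATURE_DIR.
echo "$json" | jq -r '.AVAILABLE_DOCS[]' | while read -r doc; do
  echo "found: $feature_dir/$doc"
done
# Caveat: the JSON is assembled with printf and does not escape quotes,
# so a path containing a double quote would produce invalid JSON.
```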

.specify/scripts/bash/common.sh (executable)

@@ -0,0 +1,156 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts
# Get repository root, with fallback for non-git repositories
get_repo_root() {
if git rev-parse --show-toplevel >/dev/null 2>&1; then
git rev-parse --show-toplevel
else
# Fall back to script location for non-git repos
local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
(cd "$script_dir/../../.." && pwd)
fi
}
# Get current branch, with fallback for non-git repositories
get_current_branch() {
# First check if SPECIFY_FEATURE environment variable is set
if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
echo "$SPECIFY_FEATURE"
return
fi
# Then check git if available
if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
git rev-parse --abbrev-ref HEAD
return
fi
# For non-git repos, try to find the latest feature directory
local repo_root=$(get_repo_root)
local specs_dir="$repo_root/specs"
if [[ -d "$specs_dir" ]]; then
local latest_feature=""
local highest=0
for dir in "$specs_dir"/*; do
if [[ -d "$dir" ]]; then
local dirname=$(basename "$dir")
if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
local number=${BASH_REMATCH[1]}
number=$((10#$number))
if [[ "$number" -gt "$highest" ]]; then
highest=$number
latest_feature=$dirname
fi
fi
fi
done
if [[ -n "$latest_feature" ]]; then
echo "$latest_feature"
return
fi
fi
echo "main" # Final fallback
}
# Check if we have git available
has_git() {
git rev-parse --show-toplevel >/dev/null 2>&1
}
check_feature_branch() {
local branch="$1"
local has_git_repo="$2"
# For non-git repos, we can't enforce branch naming but still provide output
if [[ "$has_git_repo" != "true" ]]; then
echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
return 0
fi
if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
echo "Feature branches should be named like: 001-feature-name" >&2
return 1
fi
return 0
}
get_feature_dir() { echo "$1/specs/$2"; }
# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
local repo_root="$1"
local branch_name="$2"
local specs_dir="$repo_root/specs"
# Extract numeric prefix from branch (e.g., "004" from "004-whatever")
if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
# If branch doesn't have numeric prefix, fall back to exact match
echo "$specs_dir/$branch_name"
return
fi
local prefix="${BASH_REMATCH[1]}"
# Search for directories in specs/ that start with this prefix
local matches=()
if [[ -d "$specs_dir" ]]; then
for dir in "$specs_dir"/"$prefix"-*; do
if [[ -d "$dir" ]]; then
matches+=("$(basename "$dir")")
fi
done
fi
# Handle results
if [[ ${#matches[@]} -eq 0 ]]; then
# No match found - return the branch name path (will fail later with clear error)
echo "$specs_dir/$branch_name"
elif [[ ${#matches[@]} -eq 1 ]]; then
# Exactly one match - perfect!
echo "$specs_dir/${matches[0]}"
else
# Multiple matches - this shouldn't happen with proper naming convention
echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
echo "Please ensure only one spec directory exists per numeric prefix." >&2
echo "$specs_dir/$branch_name" # Return something to avoid breaking the script
fi
}
get_feature_paths() {
local repo_root=$(get_repo_root)
local current_branch=$(get_current_branch)
local has_git_repo="false"
if has_git; then
has_git_repo="true"
fi
# Use prefix-based lookup to support multiple branches per spec
local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")
cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}
check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
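
A minimal sketch of how callers are expected to consume these helpers; the branch names and paths are hypothetical:

```bash
#!/usr/bin/env bash
source .specify/scripts/bash/common.sh

# get_feature_paths emits shell assignments on stdout, so callers
# eval it once and then read the variables directly.
eval "$(get_feature_paths)"
echo "Feature dir: $FEATURE_DIR"

# Prefix-based lookup: both branches resolve to the same spec
# directory, because only the three-digit prefix is matched.
find_feature_dir_by_prefix "$REPO_ROOT" "004-fix-bug"
find_feature_dir_by_prefix "$REPO_ROOT" "004-add-feature"
```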

.specify/scripts/bash/create-new-feature.sh

@@ -0,0 +1,297 @@
#!/usr/bin/env bash
set -e
JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
arg="${!i}"
case "$arg" in
--json)
JSON_MODE=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
# Check if the next argument is another option (starts with --)
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
SHORT_NAME="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
BRANCH_NUMBER="$next_arg"
;;
--help|-h)
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
i=$((i + 1))
done
FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
exit 1
fi
# Function to find the repository root by searching for existing project markers
find_repo_root() {
local dir="$1"
while [ "$dir" != "/" ]; do
if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
echo "$dir"
return 0
fi
dir="$(dirname "$dir")"
done
return 1
}
# Function to get highest number from specs directory
get_highest_from_specs() {
local specs_dir="$1"
local highest=0
if [ -d "$specs_dir" ]; then
for dir in "$specs_dir"/*; do
[ -d "$dir" ] || continue
dirname=$(basename "$dir")
number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
done
fi
echo "$highest"
}
# Function to get highest number from git branches
get_highest_from_branches() {
local highest=0
# Get all branches (local and remote)
branches=$(git branch -a 2>/dev/null || echo "")
if [ -n "$branches" ]; then
while IFS= read -r branch; do
# Clean branch name: remove leading markers and remote prefixes
clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')
# Extract feature number if branch matches pattern ###-*
if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then
number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done <<< "$branches"
fi
echo "$highest"
}
# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
local specs_dir="$1"
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
git fetch --all --prune 2>/dev/null || true
# Get highest number from ALL branches (not just matching short name)
local highest_branch=$(get_highest_from_branches)
# Get highest number from ALL specs (not just matching short name)
local highest_spec=$(get_highest_from_specs "$specs_dir")
# Take the maximum of both
local max_num=$highest_branch
if [ "$highest_spec" -gt "$max_num" ]; then
max_num=$highest_spec
fi
# Return next number
echo $((max_num + 1))
}
# Function to clean and format a branch name
clean_branch_name() {
local name="$1"
echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}
# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if git rev-parse --show-toplevel >/dev/null 2>&1; then
REPO_ROOT=$(git rev-parse --show-toplevel)
HAS_GIT=true
else
REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
if [ -z "$REPO_ROOT" ]; then
echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
exit 1
fi
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"
# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
local description="$1"
# Common stop words to filter out
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
# Convert to lowercase and split into words
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
local meaningful_words=()
for word in $clean_name; do
# Skip empty words
[ -z "$word" ] && continue
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -q "\b${word^^}\b"; then
# Keep short words if they appear as uppercase in original (likely acronyms)
meaningful_words+=("$word")
fi
fi
done
# If we have meaningful words, use first 3-4 of them
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
else
# Fallback to original logic if no meaningful words found
local cleaned=$(clean_branch_name "$description")
echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
fi
}
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
# Use provided short name, just clean it up
BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
# Generate from description with smart filtering
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$HAS_GIT" = true ]; then
# Check existing branches on remotes
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
else
# Fall back to local directory check
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
fi
fi
# Force base-10 interpretation to prevent octal conversion (e.g., 010 → 8 in octal, but should be 10 in decimal)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
# Calculate how much we need to trim from suffix
# Account for: feature number (3) + hyphen (1) = 4 chars
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))
# Truncate suffix at word boundary if possible
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
# Remove trailing hyphen if truncation created one
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi
if [ "$HAS_GIT" = true ]; then
git checkout -b "$BRANCH_NAME"
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"
TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"
if $JSON_MODE; then
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SPEC_FILE: $SPEC_FILE"
echo "FEATURE_NUM: $FEATURE_NUM"
echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi
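
For illustration, a hypothetical run of this script in a fresh repository with no existing feature branches or specs:

```bash
# "add" is a stop word and is filtered out; "user", "authentication",
# and "system" survive, and numbering starts at 001.
.specify/scripts/bash/create-new-feature.sh --json 'Add user authentication system'
# -> {"BRANCH_NAME":"001-user-authentication-system",
#     "SPEC_FILE":"<repo>/specs/001-user-authentication-system/spec.md",
#     "FEATURE_NUM":"001"}
```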

.specify/scripts/bash/setup-plan.sh

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -e
# Parse command line arguments
JSON_MODE=false
ARGS=()
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--help|-h)
echo "Usage: $0 [--json]"
echo " --json Output results in JSON format"
echo " --help Show this help message"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
done
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"
# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
cp "$TEMPLATE" "$IMPL_PLAN"
echo "Copied plan template to $IMPL_PLAN"
else
echo "Warning: Plan template not found at $TEMPLATE"
# Create a basic plan file if template doesn't exist
touch "$IMPL_PLAN"
fi
# Output results
if $JSON_MODE; then
printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
"$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "SPECS_DIR: $FEATURE_DIR"
echo "BRANCH: $CURRENT_BRANCH"
echo "HAS_GIT: $HAS_GIT"
fi
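
In a checkout initialised with --no-git, the branch detection in common.sh falls back to the SPECIFY_FEATURE environment variable, so this script can be driven explicitly. A sketch with a hypothetical feature name:

```bash
# Point the tooling at an existing feature, then set up its plan.
export SPECIFY_FEATURE=002-oauth-integration
.specify/scripts/bash/setup-plan.sh --json
# -> {"FEATURE_SPEC":".../specs/002-oauth-integration/spec.md", ..., "HAS_GIT":"false"}
```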

.specify/scripts/bash/update-agent-context.sh

@@ -0,0 +1,799 @@
#!/usr/bin/env bash
# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
# - Verifies git repository structure and branch information
# - Checks for required plan.md files and templates
# - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
# - Parses plan.md files to extract project metadata
# - Identifies language/version, frameworks, databases, and project types
# - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
# - Creates new agent context files from templates when needed
# - Updates existing agent files with new project information
# - Preserves manual additions and custom configurations
# - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
# - Generates language-specific build/test commands
# - Creates appropriate project directory structures
# - Updates technology stacks and recent changes sections
# - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
# - Handles agent-specific file paths and naming conventions
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, IBM Bob, or Amazon Q Developer CLI
# - Can update single agents or all existing agent files
# - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|qoder|amp|shai|q|bob
# Leave empty to update all existing agent files
set -e
# Enable strict error handling
set -u
set -o pipefail
#==============================================================================
# Configuration and Global Variables
#==============================================================================
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"
# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md"
Q_FILE="$REPO_ROOT/AGENTS.md"
BOB_FILE="$REPO_ROOT/AGENTS.md"
# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""
#==============================================================================
# Utility Functions
#==============================================================================
log_info() {
echo "INFO: $1"
}
log_success() {
echo "$1"
}
log_error() {
echo "ERROR: $1" >&2
}
log_warning() {
echo "WARNING: $1" >&2
}
# Cleanup function for temporary files
cleanup() {
local exit_code=$?
rm -f /tmp/agent_update_*_$$
rm -f /tmp/manual_additions_$$
exit $exit_code
}
# Set up cleanup trap
trap cleanup EXIT INT TERM
#==============================================================================
# Validation Functions
#==============================================================================
validate_environment() {
# Check if we have a current branch/feature (git or non-git)
if [[ -z "$CURRENT_BRANCH" ]]; then
log_error "Unable to determine current feature"
if [[ "$HAS_GIT" == "true" ]]; then
log_info "Make sure you're on a feature branch"
else
log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
fi
exit 1
fi
# Check if plan.md exists
if [[ ! -f "$NEW_PLAN" ]]; then
log_error "No plan.md found at $NEW_PLAN"
log_info "Make sure you're working on a feature with a corresponding spec directory"
if [[ "$HAS_GIT" != "true" ]]; then
log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
fi
exit 1
fi
# Check if template exists (needed for new files)
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_warning "Template file not found at $TEMPLATE_FILE"
log_warning "Creating new agent files will fail"
fi
}
#==============================================================================
# Plan Parsing Functions
#==============================================================================
extract_plan_field() {
local field_pattern="$1"
local plan_file="$2"
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
head -1 | \
sed "s|^\*\*${field_pattern}\*\*: ||" | \
sed 's/^[ \t]*//;s/[ \t]*$//' | \
grep -v "NEEDS CLARIFICATION" | \
grep -v "^N/A$" || echo ""
}
parse_plan_data() {
local plan_file="$1"
if [[ ! -f "$plan_file" ]]; then
log_error "Plan file not found: $plan_file"
return 1
fi
if [[ ! -r "$plan_file" ]]; then
log_error "Plan file is not readable: $plan_file"
return 1
fi
log_info "Parsing plan data from $plan_file"
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
# Log what we found
if [[ -n "$NEW_LANG" ]]; then
log_info "Found language: $NEW_LANG"
else
log_warning "No language information found in plan"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
log_info "Found framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
log_info "Found database: $NEW_DB"
fi
if [[ -n "$NEW_PROJECT_TYPE" ]]; then
log_info "Found project type: $NEW_PROJECT_TYPE"
fi
}
format_technology_stack() {
local lang="$1"
local framework="$2"
local parts=()
# Add non-empty parts
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
# Join with proper formatting
if [[ ${#parts[@]} -eq 0 ]]; then
echo ""
elif [[ ${#parts[@]} -eq 1 ]]; then
echo "${parts[0]}"
else
# Join multiple parts with " + "
local result="${parts[0]}"
for ((i=1; i<${#parts[@]}; i++)); do
result="$result + ${parts[i]}"
done
echo "$result"
fi
}
#==============================================================================
# Template and Content Generation Functions
#==============================================================================
get_project_structure() {
local project_type="$1"
if [[ "$project_type" == *"web"* ]]; then
echo "backend/\\nfrontend/\\ntests/"
else
echo "src/\\ntests/"
fi
}
get_commands_for_language() {
local lang="$1"
case "$lang" in
*"Python"*)
echo "cd src && pytest && ruff check ."
;;
*"Rust"*)
echo "cargo test && cargo clippy"
;;
*"JavaScript"*|*"TypeScript"*)
echo "npm test \\&\\& npm run lint"
;;
*)
echo "# Add commands for $lang"
;;
esac
}
get_language_conventions() {
local lang="$1"
echo "$lang: Follow standard conventions"
}
create_new_agent_file() {
local target_file="$1"
local temp_file="$2"
local project_name="$3"
local current_date="$4"
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_error "Template not found at $TEMPLATE_FILE"
return 1
fi
if [[ ! -r "$TEMPLATE_FILE" ]]; then
log_error "Template file is not readable: $TEMPLATE_FILE"
return 1
fi
log_info "Creating new agent context file from template..."
if ! cp "$TEMPLATE_FILE" "$temp_file"; then
log_error "Failed to copy template file"
return 1
fi
# Replace template placeholders
local project_structure
project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
local commands
commands=$(get_commands_for_language "$NEW_LANG")
local language_conventions
language_conventions=$(get_language_conventions "$NEW_LANG")
# Perform substitutions with error checking using safer approach
# Escape special characters for sed by using a different delimiter or escaping
local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')
# Build technology stack and recent change strings conditionally
local tech_stack
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
elif [[ -n "$escaped_lang" ]]; then
tech_stack="- $escaped_lang ($escaped_branch)"
elif [[ -n "$escaped_framework" ]]; then
tech_stack="- $escaped_framework ($escaped_branch)"
else
tech_stack="- ($escaped_branch)"
fi
local recent_change
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
elif [[ -n "$escaped_lang" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang"
elif [[ -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_framework"
else
recent_change="- $escaped_branch: Added"
fi
local substitutions=(
"s|\[PROJECT NAME\]|$project_name|"
"s|\[DATE\]|$current_date|"
"s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
"s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
"s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
)
for substitution in "${substitutions[@]}"; do
if ! sed -i.bak -e "$substitution" "$temp_file"; then
log_error "Failed to perform substitution: $substitution"
rm -f "$temp_file" "$temp_file.bak"
return 1
fi
done
# Convert \n sequences to actual newlines. Command substitution strips
# trailing newlines, so embed the newline literally and escape it for
# the sed replacement.
newline='
'
sed -i.bak2 "s/\\\\n/\\${newline}/g" "$temp_file"
# Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2"
return 0
}
update_existing_agent_file() {
local target_file="$1"
local current_date="$2"
log_info "Updating existing agent context file..."
# Use a single temporary file for atomic update
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
# Process the file in one pass
local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
local new_tech_entries=()
local new_change_entry=""
# Prepare new technology entries
if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
fi
# Prepare new change entry
if [[ -n "$tech_stack" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
fi
# Check if sections exist in the file
local has_active_technologies=0
local has_recent_changes=0
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
has_active_technologies=1
fi
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
has_recent_changes=1
fi
# Process file line by line
local in_tech_section=false
local in_changes_section=false
local tech_entries_added=false
local changes_entries_added=false
local existing_changes_count=0
local file_ended=false
while IFS= read -r line || [[ -n "$line" ]]; do
# Handle Active Technologies section
if [[ "$line" == "## Active Technologies" ]]; then
echo "$line" >> "$temp_file"
in_tech_section=true
continue
elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
# Add new tech entries before closing the section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
in_tech_section=false
continue
elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
# Add new tech entries before empty line in tech section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
continue
fi
# Handle Recent Changes section
if [[ "$line" == "## Recent Changes" ]]; then
echo "$line" >> "$temp_file"
# Add new change entry right after the heading
if [[ -n "$new_change_entry" ]]; then
echo "$new_change_entry" >> "$temp_file"
fi
in_changes_section=true
changes_entries_added=true
continue
elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
echo "$line" >> "$temp_file"
in_changes_section=false
continue
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
# Keep only first 2 existing changes
if [[ $existing_changes_count -lt 2 ]]; then
echo "$line" >> "$temp_file"
existing_changes_count=$((existing_changes_count + 1))
fi
continue
fi
# Update timestamp
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
else
echo "$line" >> "$temp_file"
fi
done < "$target_file"
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
# If sections don't exist, add them at the end of the file
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
echo "" >> "$temp_file"
echo "## Active Technologies" >> "$temp_file"
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
echo "" >> "$temp_file"
echo "## Recent Changes" >> "$temp_file"
echo "$new_change_entry" >> "$temp_file"
changes_entries_added=true
fi
# Move temp file to target atomically
if ! mv "$temp_file" "$target_file"; then
log_error "Failed to update target file"
rm -f "$temp_file"
return 1
fi
return 0
}
#==============================================================================
# Main Agent File Update Function
#==============================================================================
update_agent_file() {
local target_file="$1"
local agent_name="$2"
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
log_error "update_agent_file requires target_file and agent_name parameters"
return 1
fi
log_info "Updating $agent_name context file: $target_file"
local project_name
project_name=$(basename "$REPO_ROOT")
local current_date
current_date=$(date +%Y-%m-%d)
# Create directory if it doesn't exist
local target_dir
target_dir=$(dirname "$target_file")
if [[ ! -d "$target_dir" ]]; then
if ! mkdir -p "$target_dir"; then
log_error "Failed to create directory: $target_dir"
return 1
fi
fi
if [[ ! -f "$target_file" ]]; then
# Create new file from template
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
if mv "$temp_file" "$target_file"; then
log_success "Created new $agent_name context file"
else
log_error "Failed to move temporary file to $target_file"
rm -f "$temp_file"
return 1
fi
else
log_error "Failed to create new agent file"
rm -f "$temp_file"
return 1
fi
else
# Update existing file
if [[ ! -r "$target_file" ]]; then
log_error "Cannot read existing file: $target_file"
return 1
fi
if [[ ! -w "$target_file" ]]; then
log_error "Cannot write to existing file: $target_file"
return 1
fi
if update_existing_agent_file "$target_file" "$current_date"; then
log_success "Updated existing $agent_name context file"
else
log_error "Failed to update existing agent file"
return 1
fi
fi
return 0
}
#==============================================================================
# Agent Selection and Processing
#==============================================================================
update_specific_agent() {
local agent_type="$1"
case "$agent_type" in
claude)
update_agent_file "$CLAUDE_FILE" "Claude Code"
;;
gemini)
update_agent_file "$GEMINI_FILE" "Gemini CLI"
;;
copilot)
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
;;
cursor-agent)
update_agent_file "$CURSOR_FILE" "Cursor IDE"
;;
qwen)
update_agent_file "$QWEN_FILE" "Qwen Code"
;;
opencode)
update_agent_file "$AGENTS_FILE" "opencode"
;;
codex)
update_agent_file "$AGENTS_FILE" "Codex CLI"
;;
windsurf)
update_agent_file "$WINDSURF_FILE" "Windsurf"
;;
kilocode)
update_agent_file "$KILOCODE_FILE" "Kilo Code"
;;
auggie)
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
;;
roo)
update_agent_file "$ROO_FILE" "Roo Code"
;;
codebuddy)
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
;;
qoder)
update_agent_file "$QODER_FILE" "Qoder CLI"
;;
amp)
update_agent_file "$AMP_FILE" "Amp"
;;
shai)
update_agent_file "$SHAI_FILE" "SHAI"
;;
q)
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
;;
bob)
update_agent_file "$BOB_FILE" "IBM Bob"
;;
*)
log_error "Unknown agent type '$agent_type'"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|shai|q|bob|qoder"
exit 1
;;
esac
}
update_all_existing_agents() {
local found_agent=false
# Check each possible agent file and update if it exists
if [[ -f "$CLAUDE_FILE" ]]; then
update_agent_file "$CLAUDE_FILE" "Claude Code"
found_agent=true
fi
if [[ -f "$GEMINI_FILE" ]]; then
update_agent_file "$GEMINI_FILE" "Gemini CLI"
found_agent=true
fi
if [[ -f "$COPILOT_FILE" ]]; then
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
found_agent=true
fi
if [[ -f "$CURSOR_FILE" ]]; then
update_agent_file "$CURSOR_FILE" "Cursor IDE"
found_agent=true
fi
if [[ -f "$QWEN_FILE" ]]; then
update_agent_file "$QWEN_FILE" "Qwen Code"
found_agent=true
fi
if [[ -f "$AGENTS_FILE" ]]; then
update_agent_file "$AGENTS_FILE" "Codex/opencode"
found_agent=true
fi
if [[ -f "$WINDSURF_FILE" ]]; then
update_agent_file "$WINDSURF_FILE" "Windsurf"
found_agent=true
fi
if [[ -f "$KILOCODE_FILE" ]]; then
update_agent_file "$KILOCODE_FILE" "Kilo Code"
found_agent=true
fi
if [[ -f "$AUGGIE_FILE" ]]; then
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
found_agent=true
fi
if [[ -f "$ROO_FILE" ]]; then
update_agent_file "$ROO_FILE" "Roo Code"
found_agent=true
fi
if [[ -f "$CODEBUDDY_FILE" ]]; then
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
found_agent=true
fi
if [[ -f "$SHAI_FILE" ]]; then
update_agent_file "$SHAI_FILE" "SHAI"
found_agent=true
fi
if [[ -f "$QODER_FILE" ]]; then
update_agent_file "$QODER_FILE" "Qoder CLI"
found_agent=true
fi
if [[ -f "$Q_FILE" ]]; then
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
found_agent=true
fi
if [[ -f "$BOB_FILE" ]]; then
update_agent_file "$BOB_FILE" "IBM Bob"
found_agent=true
fi
# If no agent files exist, create a default Claude file
if [[ "$found_agent" == false ]]; then
log_info "No existing agent files found, creating default Claude file..."
update_agent_file "$CLAUDE_FILE" "Claude Code"
fi
}
print_summary() {
echo
log_info "Summary of changes:"
if [[ -n "$NEW_LANG" ]]; then
echo " - Added language: $NEW_LANG"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
echo " - Added framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
echo " - Added database: $NEW_DB"
fi
echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]"
}
#==============================================================================
# Main Execution
#==============================================================================
main() {
# Validate environment before proceeding
validate_environment
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
# Parse the plan file to extract project information
if ! parse_plan_data "$NEW_PLAN"; then
log_error "Failed to parse plan data"
exit 1
fi
# Process based on agent type argument
local success=true
if [[ -z "$AGENT_TYPE" ]]; then
# No specific agent provided - update all existing agent files
log_info "No agent specified, updating all existing agent files..."
if ! update_all_existing_agents; then
success=false
fi
else
# Specific agent provided - update only that agent
log_info "Updating specific agent: $AGENT_TYPE"
if ! update_specific_agent "$AGENT_TYPE"; then
success=false
fi
fi
# Print summary
print_summary
if [[ "$success" == true ]]; then
log_success "Agent context update completed successfully"
exit 0
else
log_error "Agent context update completed with errors"
exit 1
fi
}
# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi
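
The parser keys off the bold field lines that plan-template.md produces. A minimal sketch of a plan.md fragment that extract_plan_field would pick up, with hypothetical values, followed by the two invocation styles:

```bash
# Lines in plan.md that extract_plan_field matches ("**Field**: value"):
#
#   **Language/Version**: Python 3.11
#   **Primary Dependencies**: FastAPI
#   **Storage**: PostgreSQL
#   **Project Type**: web
#
# Values of "NEEDS CLARIFICATION" or "N/A" are filtered out and treated
# as absent.
.specify/scripts/bash/update-agent-context.sh claude   # update only CLAUDE.md
.specify/scripts/bash/update-agent-context.sh          # update all existing agent files
```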

.specify/templates/agent-file-template.md

@@ -0,0 +1,28 @@
# [PROJECT NAME] Development Guidelines
Auto-generated from all feature plans. Last updated: [DATE]
## Active Technologies
[EXTRACTED FROM ALL PLAN.MD FILES]
## Project Structure
```text
[ACTUAL STRUCTURE FROM PLANS]
```
## Commands
[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]
## Code Style
[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
## Recent Changes
[LAST 3 FEATURES AND WHAT THEY ADDED]
<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->

.specify/templates/checklist-template.md

@@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]
**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]
**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.
<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
The /speckit.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md
DO NOT keep these sample items in the generated checklist file.
============================================================================
-->
## [Category 1]
- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item
## [Category 2]
- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category
## Notes
- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference

.specify/templates/plan-template.md

@@ -0,0 +1,104 @@
# Implementation Plan: [FEATURE]
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
[Extract from feature spec: primary requirement + technical approach from research]
## Technical Context
<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in an advisory capacity to
guide the iteration process.
-->
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
[Gates determined based on constitution file]
## Project Structure
### Documentation (this feature)
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->
```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/
tests/
├── contract/
├── integration/
└── unit/
# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│ ├── models/
│ ├── services/
│ └── api/
└── tests/
frontend/
├── src/
│ ├── components/
│ ├── pages/
│ └── services/
└── tests/
# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]
ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```
**Structure Decision**: [Document the selected structure and reference the real
directories captured above]
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |

.specify/templates/spec-template.md

@@ -0,0 +1,115 @@
# Feature Specification: [FEATURE NAME]
**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"
## User Scenarios & Testing *(mandatory)*
<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->
### User Story 1 - [Brief Title] (Priority: P1)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 2 - [Brief Title] (Priority: P2)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 3 - [Brief Title] (Priority: P3)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
[Add more user stories as needed, each with an assigned priority]
### Edge Cases
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->
- What happens when [boundary condition]?
- How does system handle [error scenario]?
## Requirements *(mandatory)*
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->
### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
*Example of marking unclear requirements:*
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
### Key Entities *(include if feature involves data)*
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]
## Success Criteria *(mandatory)*
<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->
### Measurable Outcomes
- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]

.specify/templates/tasks-template.md

@@ -0,0 +1,251 @@
---
description: "Task list template for feature implementation"
---
# Tasks: [FEATURE NAME]
**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
Examples of foundational tasks (adjust based on your project):
- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 1
- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - [Title] (Priority: P2)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 2
- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - [Title] (Priority: P3)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 3
- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
**Checkpoint**: All user stories should now be independently functional
---
[Add more user story phases as needed, following the same pattern]
---
## Phase N: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
### Within Each User Story
- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members
---
## Parallel Example: User Story 1
```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"
# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup + Foundational together
2. Once Foundational is done:
- Developer A: User Story 1
- Developer B: User Story 2
- Developer C: User Story 3
3. Stories complete and integrate independently
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps a task to a specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence

AGENTS.md Normal file

@@ -0,0 +1,31 @@
# NixOS Development Guidelines (AI Agents)
Auto-generated from feature plans. Last updated: 2026-01-30
## Active Technologies
- Documentation set (AI-facing constitution and playbooks) in Markdown (001-ai-docs)
## Project Structure
```text
docs/ # Constitution and playbooks for AI guidance
specs/001-ai-docs/ # Planning artifacts (plan, research, tasks, data model, contracts)
```
## Commands
- Primary work is authoring markdown; no build/test commands required beyond manual validation.
## Code Style & Rules
- Follow repo conventions: no blank lines between code blocks; comments only when non-obvious; factor duplication into shared helpers/functions in examples.
- Constitution is authoritative for AI guidance; if human docs conflict, update both with the recorded resolution.
- Keep language business-level and technology-agnostic in AI-facing docs.
## Recent Changes
- 001-ai-docs: Documentation-focused stack; added docs/ for constitution/playbooks and specs/001-ai-docs/ for planning outputs.
<!-- MANUAL ADDITIONS START -->
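A tiny hedged illustration of the style rules above (hypothetical helper and paths; the point is the tight layout and the factored duplication):
```nix
let
  mkMount = device: {
    inherit device;
    fsType = "ext4";
    options = [ "noatime" ];
  };
in {
  fileSystems."/data" = mkMount "/dev/disk/by-label/data";
  fileSystems."/media" = mkMount "/dev/disk/by-label/media";
}
```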
<!-- MANUAL ADDITIONS END -->


@@ -2,6 +2,7 @@
config,
lib,
pkgs,
inputs,
...
}:
let
@@ -54,8 +55,9 @@ in
vscode
nextcloud-client
warp
handbrake
;
inherit (inputs.prem2resolve.packages.x86_64-linux) prem2resolve;
};
extraGroups = [
"audio"

docs/constitution.md Normal file

@@ -0,0 +1,52 @@
# AI Constitution for the NixOS Repository
## Scope and Audience
- Audience: AI assistants and contributors needing an authoritative description of repository rules, structure, and workflows.
- Scope: Repo-wide conventions, module categories, host roles, secrets handling, proxy rules, documentation locations, and maintenance triggers.
- Authority: This constitution is the source of truth for AI. If human-facing docs differ, update both with the recorded resolution in `specs/001-ai-docs/research.md`.
## Repository Overview
- Architecture: Flake-based repo using `flake-parts` with inputs for pkgs (stable/unstable), stylix, home-manager, sops-nix, and service overlays. Common modules are composed through `parts/core.nix` and `parts/hosts.nix`.
- Module auto-import: `modules/modules.nix` auto-imports `.nix` files under `modules/apps`, `modules/dev`, `modules/scripts`, `modules/servers`, `modules/services`, `modules/shell`, and `modules/network`, excluding `librewolf.nix`. Factories live in `modules/factories/` (`mkserver`, `mkscript`), and shared options are in `modules/nix` and `modules/users`.
- Hosts and toggles: Host definitions live in `hosts/<name>/configuration.nix` with host-specific toggles in `hosts/<name>/toggles.nix`. The `my` namespace carries toggles for apps/dev/scripts/services/shell, feature flags like `enableProxy` and `enableContainers`, and per-host `interfaces` and `ips` maps.
- Main server and proxies: `my.mainServer` selects the host that should serve traffic by default (default `miniserver`; overridden to `server` in `hosts/server/toggles.nix`). Reverse proxies use helpers in `parts/core.nix` (`proxy`, `proxyReverse`, `proxyReverseFix`, `proxyReversePrivate`) and pick IPs from `my.ips` plus the hostName/ip set by `mkserver` options.
- Secure hosts and secrets: `my.secureHost` gates SOPS secrets. Secure hosts load secrets from `secrets/*.yaml` and wireguard definitions; non-secure hosts (e.g., `hosts/emacs`) skip secret-dependent services. Default SOPS file is `secrets/secrets.yaml` via `config/base.nix`.
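A minimal sketch of how secureHost gating composes with the secrets map, assuming the option names described above; the service and secret names are hypothetical, not repo code:
```nix
{ config, lib, ... }:
{
  # Hypothetical service module: secrets resolve only on secure hosts.
  config = lib.mkIf (config.my.services.example.enable && config.my.secureHost) {
    sops.secrets.example-env = {
      sopsFile = ../../secrets/env.yaml; # service env vars belong in env.yaml per the map
      owner = "example";
    };
    systemd.services.example.serviceConfig.EnvironmentFile =
      config.sops.secrets.example-env.path;
  };
}
```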
## Coding Conventions
- No blank lines between code blocks; keep markdown examples tight.
- Minimize comments; prefer clear naming and shared helpers (`modules/factories/mkserver.nix`, `modules/factories/mkscript.nix`) to avoid duplication.
- Use business-level, technology-agnostic language in AI docs; reserve implementation detail for module code.
## Terminology and Naming Standards
- Module: A Nix module under `modules/<category>/<name>.nix` auto-imported into the system.
- Factory: Shared option constructors in `modules/factories/` (use `mkserver` for server modules, `mkscript` for script units).
- Options: Settings under the `my` namespace (e.g., `my.services.<service>`, `my.scripts.<script>`).
- Toggles: Enablement maps in `hosts/<name>/toggles.nix` controlling categories (apps/dev/shell/scripts/services/servers/units) and features (`enableProxy`, `enableContainers`).
- Servers: Reverse-proxied services under `modules/servers/`, normally created with `mkserver` options.
- Scripts: Units defined via `mkscript` with `enable`, `install`, `service`, `users`, `timer`, and `package` fields.
- Playbooks: Workflow guides under `docs/playbooks/` for repeatable tasks.
- Reference map: Navigation index under `docs/reference/index.md` for paths and responsibilities.
## Secrets Map and secureHost Behavior
- Secrets files: `secrets/certs.yaml`, `secrets/env.yaml`, `secrets/gallery.yaml`, `secrets/homepage.yaml`, `secrets/keys.yaml`, `secrets/wireguard.yaml`, `secrets/secrets.yaml`, plus `secrets/ssh/` for host keys.
- Placement rules: Keep secrets aligned to their file purpose (certificates → `certs.yaml`; environment/service env vars → `env.yaml`; media/gallery creds → `gallery.yaml`; homepage widgets → `homepage.yaml`; SSH/private keys → `keys.yaml`; WireGuard peers → `wireguard.yaml`; misc defaults → `secrets.yaml`).
- secureHost gating: Only hosts with `my.secureHost = true` load SOPS secrets and WireGuard interfaces. Hosts with `secureHost = false` must avoid secret-dependent services and skip SOPS entries.
## Module Categories and Active Hosts
- Module categories: apps, dev, scripts, servers, services, shell, network, users, nix, patches. Factories sit in `modules/factories/` and are imported explicitly.
- Active hosts: `workstation`, `server`, `miniserver`, `galaxy`, `emacs`. Host roles and secure status are defined in `hosts/<name>/configuration.nix` and toggles in `hosts/<name>/toggles.nix`.
## Precedence and Conflict Resolution
- Precedence: This constitution is authoritative for AI. Human docs must be updated to match. If conflicts are found, align human docs to the constitution and log the resolution in `specs/001-ai-docs/research.md`.
- Conflict handling steps: identify the divergent rule, cite the source files, decide the authoritative rule per this constitution, update both the source file and the relevant doc, and record the decision and timestamp.
## Maintenance Triggers and Update Process
- Triggers: New factory/helper, new module category, new host, new toggle set, new proxy rule, new secret category/file, change to `my.mainServer` or `my.ips`, stylix scheme changes, or new auto-import filters.
- Update flow: (1) Amend the relevant module or toggle files; (2) Update `docs/constitution.md` for rules/terminology changes; (3) Update playbooks under `docs/playbooks/` affected by the change; (4) Update `docs/reference/index.md` for navigation paths; (5) Note the decision in `specs/001-ai-docs/research.md` and refresh `quickstart.md` if discoverability shifts.
- Validation: Confirm discoverability within two clicks (constitution → reference map/playbook), secrets map completeness, and alignment with success criteria SC-001–SC-004.
## Quick Reference and Navigation
- Constitution: `docs/constitution.md` (this file)
- Reference map: `docs/reference/index.md` (paths, hosts, secrets, proxies, stylix)
- Playbooks: `docs/playbooks/*.md` (add module/server/script/host toggle/secret, plus template)
- Planning artifacts: `specs/001-ai-docs/` (plan, research, data-model, quickstart, contracts)


@@ -0,0 +1,18 @@
# Playbook: Add a Host Toggle
- Name: Add or adjust host toggles
- Purpose: Enable categories, services, or features per host in `hosts/<name>/toggles.nix`.
- Prerequisites: Identify host role (see Hosts and Roles), secureHost setting, and whether proxies/containers are required.
- Inputs: Toggle category (apps/dev/scripts/services/servers/units), users list, proxy/container flags, mainServer override, network interface names.
- Steps:
1. Open `hosts/<name>/toggles.nix` and adjust category maps using helper patterns (`enableList` with `mkEnabled`, `mkEnabledWithUsers`, or `mkEnabledIp`); see the sketch after this playbook.
2. Set feature flags such as `enableProxy`, `enableContainers`, and `mainServer` when the host should own proxied services.
3. Add service toggles under `servers` with proxy/ip data as needed; align IPs to `my.ips` (e.g., `mkEnabledIp` for remote hosts).
4. Ensure `interfaces` entries exist for network-facing services and match `my.interfaces` defaults unless intentionally overridden.
5. Reconcile toggle changes with secrets and secureHost: avoid enabling secret-backed services on hosts with `secureHost = false`.
- Validation:
- Toggle sets align with host capabilities and `my.secureHost`.
- Proxy- or container-dependent services have `enableProxy`/`enableContainers` enabled.
- IP/interface values match `docs/reference/index.md` entries.
- Outputs: Updated host toggle file reflecting new enablement and infrastructure flags.
- References: `docs/constitution.md` (Hosts and toggles, Main server and proxies), `docs/reference/index.md` (Hosts and Roles, Proxy rules, Network maps)
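A hedged sketch of the toggle shapes such edits produce, assuming the helpers above expand to per-entry attrsets; values are illustrative, not repo code:
```nix
{
  my = {
    mainServer = "server";   # this host owns proxied services
    enableProxy = true;
    enableContainers = true;
    scripts.cleanup.enable = true;
    servers = {
      gallery = { enable = true; enableProxy = true; };   # local, routed via mainServer
      jellyfin = { enable = true; ip = "192.168.1.20"; }; # remote, aligned to my.ips (mkEnabledIp)
    };
  };
}
```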


@@ -0,0 +1,18 @@
# Playbook: Add a NixOS Module
- Name: Add a module under `modules/<category>/`
- Purpose: Introduce a new module following auto-import and toggle conventions.
- Prerequisites: Identify target host(s) and toggle category; confirm `my.secureHost` if secrets are involved.
- Inputs: Module name, category (apps/dev/scripts/servers/services/shell/network), required options, secret needs, proxy requirements if server-facing.
- Steps:
1. Choose the category path from `docs/reference/index.md` and create `modules/<category>/<name>.nix` (auto-import picks it up; avoid names filtered out such as `librewolf.nix`); a skeleton follows this playbook.
2. Define options under `my.<category>` or reuse factories (`mkserver` for servers, `mkscript` for scripts) instead of hand-rolled patterns.
3. If the module needs secrets, guard references with `lib.mkIf config.my.secureHost` and map them to the correct secrets file (see secrets map).
4. For networked services, align host selection with `my.mainServer` and `my.ips`; enable reverse proxy via `enableProxy` when applicable.
5. Wire toggles for target hosts in `hosts/<host>/toggles.nix`, ensuring users/groups and containers/proxy flags are set.
- Validation:
- Module loads without extra imports (auto-import applies).
- Toggle wiring matches intended hosts; secureHost gating present for secrets.
- Proxy and port choices align with `my.mainServer`, `my.ips`, and firewall rules.
- Outputs: New module file and updated host toggles if required.
- References: `docs/constitution.md` (Module Categories, Secrets Map, Main server and proxies), `docs/reference/index.md` (Module Directories, Proxy rules, Secrets Map)
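A minimal module skeleton consistent with these steps; the option path and package are hypothetical:
```nix
# modules/apps/example.nix — auto-imported, no manual wiring needed
{ config, lib, pkgs, ... }:
{
  options.my.apps.example.enable = lib.mkEnableOption "example app";
  config = lib.mkIf config.my.apps.example.enable {
    environment.systemPackages = [ pkgs.hello ];
  };
}
```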


@@ -0,0 +1,17 @@
# Playbook: Add a Script Unit
- Name: Add a script via `mkscript`
- Purpose: Ship a script package with optional user service and timer.
- Prerequisites: Identify target users (`my.toggleUsers.scripts` defaults), secureHost status if the script needs secrets, and whether a timer/service is required.
- Inputs: Script name, package derivation, description, timer schedule, users list, service needs.
- Steps:
1. Add a definition under `my.scripts.<name>` in `modules/scripts/<name>.nix` using `mkscript` options (`enable`, `install`, `service`, `users`, `timer`, `package`, `description`); a sketch follows this playbook.
2. Ensure the package exposes the executable name used by the service/timer.
3. For user scoping, set `users` to a single user or list; defaults come from `my.toggleUsers.scripts`.
4. If secrets are required, guard references with `lib.mkIf config.my.secureHost` and map them to the appropriate secrets file.
5. Enable the script toggle in `hosts/<host>/toggles.nix` under `scripts` or `units`, and ensure timers/services are expected on that host.
- Validation:
- Script installs for intended users; systemd user service/timer activates only when `enable` and `service` are true.
- secureHost gating present for any secrets; no orphaned timers.
- Outputs: New script module and updated host toggles if needed.
- References: `docs/constitution.md` (Terminology, Secrets Map), `docs/reference/index.md` (Module Directories, Secrets Map, Hosts and Roles)
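A hedged sketch of a script unit using the documented `mkscript` fields; the exact option types live in `modules/factories/mkscript.nix`, so treat the values as illustrative:
```nix
# modules/scripts/cleanup.nix — hypothetical script unit
{ pkgs, ... }:
{
  my.scripts.cleanup = {
    enable = true;
    install = true;      # expose the package to the listed users
    service = true;      # generate the systemd user service
    users = [ "alice" ]; # defaults come from my.toggleUsers.scripts
    timer = "daily";     # schedule for the generated timer (format assumed)
    description = "Prune stale downloads";
    package = pkgs.writeShellScriptBin "cleanup" ''
      find ~/Downloads -mtime +30 -delete
    '';
  };
}
```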


@@ -0,0 +1,17 @@
# Playbook: Add a Secret Entry
- Name: Add or update a secret
- Purpose: Place secrets in the correct SOPS file with secureHost gating.
- Prerequisites: Target host(s) must have `my.secureHost = true`; identify secret type and consumer service/module.
- Inputs: Secret name, target file (certs/env/gallery/homepage/keys/wireguard/secrets), owner/group if file material is written, consuming module path.
- Steps:
1. Choose the correct secrets file from the map in `docs/constitution.md` and add the entry there (YAML, encrypted via sops-nix).
2. If a private key or file path is required, specify `owner`, `group`, and target path consistent with the consuming module.
3. In the consuming module, reference the secret under `config.sops.secrets.<name>` and guard with `lib.mkIf config.my.secureHost` (see the sketch after this playbook).
4. For WireGuard entries, update `secrets/wireguard.yaml` and corresponding interface configuration under the target host.
5. Avoid adding secrets for hosts with `secureHost = false`; instead route the workload to a secure host or skip enablement.
- Validation:
- Secret lives in the correct file and encrypts with SOPS; file ownership matches service user where applicable.
- Module references are gated by `secureHost` and align with host toggles.
- Outputs: Updated secrets file and gated module references.
- References: `docs/constitution.md` (Secrets Map and secureHost), `docs/reference/index.md` (Secrets Map, Hosts and Roles)
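A hedged sketch of the consuming side, assuming a `gallery-admin` key added to `secrets/gallery.yaml`; names are illustrative:
```nix
{ config, lib, ... }:
{
  config = lib.mkIf config.my.secureHost {
    sops.secrets.gallery-admin = {
      sopsFile = ../../secrets/gallery.yaml; # media/gallery creds per the map
      owner = "gallery";
      group = "gallery";
    };
    # Consumers read the decrypted path via config.sops.secrets.gallery-admin.path
  };
}
```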


@@ -0,0 +1,18 @@
# Playbook: Add a Server Module with mkserver
- Name: Add a reverse-proxied server module
- Purpose: Stand up a server using `modules/factories/mkserver.nix` with correct proxy and host routing.
- Prerequisites: Target host must have `my.enableProxy = true` and container support if needed; confirm `my.secureHost` for secrets.
- Inputs: Service name, desired subdomain, port, proxy type (standard/fix/private), cron needs, secrets/env vars.
- Steps:
1. Create `modules/servers/<name>.nix` and import `mkserver` options to define `enable`, `enableProxy`, `port`, `host`, `hostName`, `url`, `ip`, `enableSocket`, and `certPath` as needed. A sketch follows this playbook.
2. Default host routing uses `my.mainServer` and `my.ips`; override `hostName`/`ip` only when the service must live elsewhere.
3. For reverse proxy behavior, select helper from `parts/core.nix`: `proxyReverse` (standard), `proxyReverseFix` (preserve host headers/websockets), or `proxyReversePrivate` (mutual TLS).
4. Place secrets/env references in the appropriate file from the secrets map and guard with `lib.mkIf config.my.secureHost`.
5. Enable the service toggle in `hosts/<host>/toggles.nix` under `servers` (and `enableProxy` if not already set); add any firewall/static ports needed.
- Validation:
- Service resolves to the expected URL and IP per `my.ips` and `my.mainServer`.
- Proxy helper matches the protocol needs; SSL settings align with cert sources.
- Secrets load only on secure hosts; firewall assertions pass.
- Outputs: New server module with mkserver options and updated host toggles/firewall settings.
- References: `docs/constitution.md` (Main server and proxies, Secrets Map), `docs/reference/index.md` (Proxy rules, Module Directories, Secrets Map, Hosts and Roles)
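A hedged sketch of such a module; the mkserver import shape and proxy wiring are summarized in comments because their exact signatures live in `modules/factories/mkserver.nix` and `parts/core.nix`:
```nix
# modules/servers/wiki.nix — hypothetical reverse-proxied service
{ config, lib, pkgs, ... }:
{
  # Options (enable/enableProxy/port/host/hostName/url/ip/...) come from mkserver.
  config = lib.mkIf config.my.servers.wiki.enable {
    systemd.services.wiki = {
      wantedBy = [ "multi-user.target" ];
      serviceConfig.ExecStart =
        "${pkgs.gollum}/bin/gollum --port ${toString config.my.servers.wiki.port}";
    };
    # When enableProxy is set, wire nginx with proxyReverse (standard),
    # proxyReverseFix (headers/websockets), or proxyReversePrivate (mutual TLS).
  };
}
```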


@@ -0,0 +1,15 @@
# Playbook Template
- Name:
- Purpose:
- Prerequisites: (toggles, hosts, secureHost, required secrets)
- Inputs: (paths, options, credentials, ports)
- Steps:
1.
2.
3.
- Validation:
-
-
- Outputs:
- References: `docs/constitution.md` (relevant sections) and `docs/reference/index.md` (paths/hosts/proxies/secrets)

docs/reference/index.md Normal file

@@ -0,0 +1,64 @@
# Reference Map
## Module Directories
- apps → `modules/apps/` (desktop/workstation apps, auto-imported)
- dev → `modules/dev/` (language toolchains and dev shells, auto-imported)
- scripts → `modules/scripts/` (script units built via `mkscript`, auto-imported)
- servers → `modules/servers/` (reverse-proxied services built via `mkserver`)
- services → `modules/services/` (supporting services like syncthing, wireguard)
- shell → `modules/shell/` (shell customizations and CLI tooling)
- network → `modules/network/` (networking rules, firewall helpers)
- users → `modules/users/` (user-related options)
- nix → `modules/nix/` (Nix configuration and helpers)
- patches → `patches/` (patch artifacts referenced by modules)
- factories → `modules/factories/` (`mkserver.nix`, `mkscript.nix` shared helpers)
## Auto-Import Rules
- Source: `modules/modules.nix` uses `inputs.self.lib.autoImport` to load `.nix` files from module directories.
- Filter: Excludes `librewolf.nix`; all other `.nix` files in target dirs are loaded automatically.
- Implication: Place new modules in the correct category directory with a `.nix` filename; no manual import wiring required unless adding a new factory.
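The filter above is roughly equivalent to this hedged sketch; the real helper is `inputs.self.lib.autoImport`, so treat this as an illustrative reimplementation:
```nix
{ lib, ... }:
let
  autoImport = dir:
    map (name: dir + "/${name}")
      (builtins.filter
        (name: lib.hasSuffix ".nix" name && name != "librewolf.nix")
        (builtins.attrNames (builtins.readDir dir)));
in {
  imports = autoImport ./apps ++ autoImport ./servers; # repeated for each category dir
}
```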
## Hosts and Roles
- Configs: `hosts/<name>/configuration.nix` with toggles in `hosts/<name>/toggles.nix`.
- Active hosts: `workstation`, `server`, `miniserver`, `galaxy`, `emacs`.
- Roles:
- workstation: developer desktop; provides build power for distributed builds.
- server: primary services host (overrides `my.mainServer = "server"` and enables proxies/containers).
- miniserver: small-footprint server; default `mainServer` in shared options.
- galaxy: small server variant using nixpkgs-small.
- emacs: VM profile, `my.secureHost = false` for secret-free usage.
- Network maps: `my.ips` and `my.interfaces` declared in `modules/modules.nix`; host toggles may override.
## Proxy, Firewall, and Networking
- Proxy enablement: `my.enableProxy` toggles Nginx reverse proxy; assertions require at least one `my.servers.*.enableProxy` when enabled.
- Proxy helpers: use `parts/core.nix` helpers (`proxy`, `proxyReverse`, `proxyReverseFix` for header preservation, `proxyReversePrivate` for mutual TLS). `mkserver` supplies `host`, `ip`, `url`, and `enableProxy` defaults per service.
- Main server selection: `my.mainServer` chooses where services live by default; `mkserver` sets `isLocal` based on this and picks IPs from `my.ips`.
- Firewall generation: `inputs.self.lib.generateFirewallPorts` combines static ports, additional ports, and service ports from `my.servers` (excluding native firewall services). Use `my.network.firewall` settings and `getServicesWithNativeFirewall` to derive open ports.
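A hedged, illustrative reduction matching that description; the real logic is `inputs.self.lib.generateFirewallPorts`, and the option and attribute names here are assumptions:
```nix
{ config, lib, ... }:
let
  proxied = lib.filterAttrs
    (_: s: (s.enable or false) && !(s.useNativeFirewall or false)) # attr name assumed
    config.my.servers;
in {
  networking.firewall.allowedTCPPorts =
    config.my.network.firewall.staticPorts        # assumed option name
    ++ config.my.network.firewall.additionalPorts # assumed option name
    ++ lib.mapAttrsToList (_: s: s.port) proxied;
}
```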
## Secrets Map
- Files and purposes:
- `secrets/certs.yaml` → certificates and TLS material.
- `secrets/env.yaml` → environment variables for services (e.g., lidarr-mb-gap).
- `secrets/gallery.yaml` → media/gallery credentials.
- `secrets/homepage.yaml` → homepage widget secrets.
- `secrets/keys.yaml` → SSH/private keys and key ownership.
- `secrets/wireguard.yaml` → WireGuard peers and private keys.
- `secrets/secrets.yaml` → default SOPS file (general secrets, fallback when unspecified).
- `secrets/ssh/` → host SSH keys and related artifacts.
- secureHost: Only hosts with `my.secureHost = true` consume SOPS entries and WireGuard interfaces. Keep secret references behind `lib.mkIf config.my.secureHost`.
## Stylix and Theming
- Stylix module: `config/stylix.nix` and stylix inputs in `flake.nix` apply theming. Host toggle `my.stylix.enable` controls activation (see host toggles).
- Schemes and assets: Imported via Stylix inputs; wallpapers/fonts sourced from external flakes (`wallpapers`, `fonts`).
## Playbooks and Templates
- Playbook template: `docs/playbooks/template.md`
- Workflows: `docs/playbooks/add-module.md`, `add-server.md`, `add-script.md`, `add-host-toggle.md`, `add-secret.md`
- Constitution link-back: `docs/constitution.md` sections on terminology, proxies, secrets, and maintenance.
## Quick Audit Checklist
- Module coverage: All categories (apps, dev, scripts, servers, services, shell, network, users, nix, patches) have corresponding entries and auto-import rules.
- Host coverage: Active hosts listed with roles and secureHost status; `mainServer` noted.
- Proxy rules: `enableProxy` usage, proxy helper selection, and `my.ips` mappings documented.
- Secrets map: Every secrets file and secureHost gating captured; new secret types aligned to file purposes.
- Discoverability: Paths reachable within two clicks from `docs/constitution.md`.

flake.lock generated

@@ -20,11 +20,11 @@
]
},
"locked": {
"lastModified": 1767024902,
"narHash": "sha256-sMdk6QkMDhIOnvULXKUM8WW8iyi551SWw2i6KQHbrrU=",
"lastModified": 1769428758,
"narHash": "sha256-0G/GzF7lkWs/yl82bXuisSqPn6sf8YGTnbEdFOXvOfU=",
"owner": "hyprwm",
"repo": "aquamarine",
"rev": "b8a0c5ba5a9fbd2c660be7dd98bdde0ff3798556",
"rev": "def5e74c97370f15949a67c62e61f1459fcb0e15",
"type": "github"
},
"original": {
@@ -324,6 +324,24 @@
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_4"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"fonts": {
"flake": false,
"locked": {
@@ -404,11 +422,11 @@
]
},
"locked": {
"lastModified": 1768949235,
"narHash": "sha256-TtjKgXyg1lMfh374w5uxutd6Vx2P/hU81aEhTxrO2cg=",
"lastModified": 1769580047,
"narHash": "sha256-tNqCP/+2+peAXXQ2V8RwsBkenlfWMERb+Uy6xmevyhM=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "75ed713570ca17427119e7e204ab3590cc3bf2a5",
"rev": "366d78c2856de6ab3411c15c1cb4fb4c2bf5c826",
"type": "github"
},
"original": {
@@ -463,11 +481,11 @@
]
},
"locked": {
"lastModified": 1766946335,
"narHash": "sha256-MRD+Jr2bY11MzNDfenENhiK6pvN+nHygxdHoHbZ1HtE=",
"lastModified": 1769284023,
"narHash": "sha256-xG34vwYJ79rA2wVC8KFuM8r36urJTG6/csXx7LiiSYU=",
"owner": "hyprwm",
"repo": "hyprgraphics",
"rev": "4af02a3925b454deb1c36603843da528b67ded6c",
"rev": "13c536659d46893596412d180449353a900a1d31",
"type": "github"
},
"original": {
@@ -495,11 +513,11 @@
"xdph": "xdph"
},
"locked": {
"lastModified": 1769284856,
"narHash": "sha256-slXgC5fwTk9E+kkm6+Oy16laDFo+whNXZKsmf4eigN8=",
"lastModified": 1769694617,
"narHash": "sha256-h8+Wqc4x68mN2qOLX45HsO6Z4eQOfrdtSKiSzcBrCVg=",
"owner": "hyprwm",
"repo": "Hyprland",
"rev": "c65c7614bc573c3f0150e31a31187057f48813df",
"rev": "c92fb5e85f4a5fd3a0f5ffb5892f6a61cfe1be2b",
"type": "github"
},
"original": {
@@ -595,11 +613,11 @@
]
},
"locked": {
"lastModified": 1764612430,
"narHash": "sha256-54ltTSbI6W+qYGMchAgCR6QnC1kOdKXN6X6pJhOWxFg=",
"lastModified": 1767983607,
"narHash": "sha256-8C2co8NYfR4oMOUEsPROOJ9JHrv9/ktbJJ6X1WsTbXc=",
"owner": "hyprwm",
"repo": "hyprlang",
"rev": "0d00dc118981531aa731150b6ea551ef037acddd",
"rev": "d4037379e6057246b408bbcf796cf3e9838af5b2",
"type": "github"
},
"original": {
@@ -726,11 +744,11 @@
]
},
"locked": {
"lastModified": 1767473322,
"narHash": "sha256-RGOeG+wQHeJ6BKcsSB8r0ZU77g9mDvoQzoTKj2dFHwA=",
"lastModified": 1769202094,
"narHash": "sha256-gdJr/vWWLRW85ucatSjoBULPB2dqBJd/53CZmQ9t91Q=",
"owner": "hyprwm",
"repo": "hyprwire",
"rev": "d5e7d6b49fe780353c1cf9a1cf39fa8970bd9d11",
"rev": "a45ca05050d22629b3c7969a926d37870d7dd75c",
"type": "github"
},
"original": {
@@ -788,11 +806,11 @@
]
},
"locked": {
"lastModified": 1769394251,
"narHash": "sha256-IkL7t/k1kbCG3LHPhZD32c80m4QHFgCZ8bVTqN79kEM=",
"lastModified": 1769740349,
"narHash": "sha256-Tbk4SF5XhM9fnrDtPl4wy3ItkjRMcBTVuA26ThzLVcs=",
"owner": "fufexan",
"repo": "nix-gaming",
"rev": "2805bc370151d38eba406f5e3bfd111b02a13bcd",
"rev": "cd0a8141f410a6532a76546df2665a4e3c93b69b",
"type": "github"
},
"original": {
@@ -903,11 +921,11 @@
},
"nixpkgs-small": {
"locked": {
"lastModified": 1769373857,
"narHash": "sha256-IVRjQyPlY4jagm2nzROHobD8lVff4m++swbJ4Q1+kTs=",
"lastModified": 1769724120,
"narHash": "sha256-6DBBx8SJSOU/RPSoy2kWBzRRjxZR2quC5ema5TJ1zVg=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "145fdb350cfaa360ff356edf6e85430b231dd5c6",
"rev": "8ec59ed5093c2a742d7744e9ecf58f358aa4a87d",
"type": "github"
},
"original": {
@@ -919,11 +937,11 @@
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1769170682,
"narHash": "sha256-oMmN1lVQU0F0W2k6OI3bgdzp2YOHWYUAw79qzDSjenU=",
"lastModified": 1769461804,
"narHash": "sha256-msG8SU5WsBUfVVa/9RPLaymvi5bI8edTavbIq3vRlhI=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "c5296fdd05cfa2c187990dd909864da9658df755",
"rev": "bfc1b8a4574108ceef22f02bafcf6611380c100d",
"type": "github"
},
"original": {
@@ -935,11 +953,11 @@
},
"nixpkgs_2": {
"locked": {
"lastModified": 1769089682,
"narHash": "sha256-9yA/LIuAVQq0lXelrZPjLuLVuZdm03p8tfmHhnDIkms=",
"lastModified": 1769598131,
"narHash": "sha256-e7VO/kGLgRMbWtpBqdWl0uFg8Y2XWFMdz0uUJvlML8o=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "078d69f03934859a181e81ba987c2bb033eebfc5",
"rev": "fa83fd837f3098e3e678e6cf017b2b36102c7211",
"type": "github"
},
"original": {
@@ -978,11 +996,11 @@
]
},
"locked": {
"lastModified": 1769419322,
"narHash": "sha256-V9L2d2nulWyP/s5EQrt0aDbNOfOR1ZNI5apb+mUWsrc=",
"lastModified": 1769764253,
"narHash": "sha256-lkjNGrUfTG1RR1AjvDqaYJcWsEkOhUz0w/U8tD0sjmk=",
"owner": "nix-community",
"repo": "nur",
"rev": "88fca3ca8ff051e269b40f9fc55b851802344702",
"rev": "db595036b2efc5f9de5053e6c5bdbf730ffe6f70",
"type": "github"
},
"original": {
@@ -1026,11 +1044,11 @@
]
},
"locked": {
"lastModified": 1767281941,
"narHash": "sha256-6MkqajPICgugsuZ92OMoQcgSHnD6sJHwk8AxvMcIgTE=",
"lastModified": 1769069492,
"narHash": "sha256-Efs3VUPelRduf3PpfPP2ovEB4CXT7vHf8W+xc49RL/U=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "f0927703b7b1c8d97511c4116eb9b4ec6645a0fa",
"rev": "a1ef738813b15cf8ec759bdff5761b027e3e1d23",
"type": "github"
},
"original": {
@@ -1039,6 +1057,27 @@
"type": "github"
}
},
"prem2resolve": {
"inputs": {
"flake-utils": "flake-utils_2",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1769712701,
"narHash": "sha256-R5IOg12d7lJwaA3qmC7pQBNgUbmCDo5NHSCY/O95VQA=",
"ref": "refs/heads/main",
"rev": "f7b65d0b8f3d010ce6da4afe9dbd16b864260ae0",
"revCount": 5,
"type": "git",
"url": "https://git.lebubu.org/vibe-coded/prem2resolve.git"
},
"original": {
"type": "git",
"url": "https://git.lebubu.org/vibe-coded/prem2resolve.git"
}
},
"qbit_manage": {
"flake": false,
"locked": {
@@ -1071,6 +1110,7 @@
"nixpkgs-unstable": "nixpkgs-unstable",
"nixtendo-switch": "nixtendo-switch",
"nur": "nur",
"prem2resolve": "prem2resolve",
"qbit_manage": "qbit_manage",
"sops-nix": "sops-nix",
"stylix": "stylix",
@@ -1085,11 +1125,11 @@
]
},
"locked": {
"lastModified": 1769314333,
"narHash": "sha256-+Uvq9h2eGsbhacXpuS7irYO7fFlz514nrhPCSTkASlw=",
"lastModified": 1769469829,
"narHash": "sha256-wFcr32ZqspCxk4+FvIxIL0AZktRs6DuF8oOsLt59YBU=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "2eb9eed7ef48908e0f02985919f7eb9d33fa758f",
"rev": "c5eebd4eb2e3372fe12a8d70a248a6ee9dd02eff",
"type": "github"
},
"original": {
@@ -1111,7 +1151,7 @@
"nixpkgs"
],
"nur": "nur_2",
"systems": "systems_4",
"systems": "systems_5",
"tinted-foot": "tinted-foot",
"tinted-kitty": "tinted-kitty",
"tinted-schemes": "tinted-schemes",
@@ -1119,11 +1159,11 @@
"tinted-zed": "tinted-zed"
},
"locked": {
"lastModified": 1769202415,
"narHash": "sha256-6XUTQjam/xLFGeJFj9XCo+12XSxd2iBxkAdLe6IJqiw=",
"lastModified": 1769472288,
"narHash": "sha256-RdnbroWsujYh1MaMhDpP5QM+bRIGG6smz987v1fli+U=",
"owner": "danth",
"repo": "stylix",
"rev": "296aa01b461af5146612cd26cc115c1d3e5ed4ae",
"rev": "c2c4a3ad52c096db1c8dde97d3d21451613f000c",
"type": "github"
},
"original": {
@@ -1210,6 +1250,21 @@
"type": "github"
}
},
"systems_5": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"tinted-foot": {
"flake": false,
"locked": {


@@ -22,6 +22,10 @@
url = "git+https://git.lebubu.org/jawz/scripts.git";
inputs.nixpkgs.follows = "nixpkgs";
};
prem2resolve = {
url = "git+https://git.lebubu.org/vibe-coded/prem2resolve.git";
inputs.nixpkgs.follows = "nixpkgs";
};
lidarr-mb-gap = {
url = "git+https://git.lebubu.org/vibe-coded/lidarr-mb-gap.git";
inputs.nixpkgs.follows = "nixpkgs";


@@ -0,0 +1,34 @@
# Specification Quality Checklist: AI-Facing Repository Constitution
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2026-01-30
**Feature**: specs/001-ai-docs/spec.md
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- All checklist items validated and passing.


@@ -0,0 +1,26 @@
# Documentation Contract: AI-Facing Repository Constitution
## Constitution Document
- **Purpose**: Authoritative AI guide for repo rules, architecture, conventions, and precedence.
- **Required Sections**:
- Scope and audience (AI tools)
- Repo overview (flake-parts, auto-imported modules, hosts/toggles, mainServer/proxy rules)
- Coding conventions (no blank lines between blocks, minimal comments, factor duplication via functions/factories)
- Secrets map (certs/env/gallery/homepage/keys/wireguard/secrets) and secureHost behavior
- Active hosts and module categories coverage
- Precedence and conflict resolution (constitution over human docs for AI)
- Maintenance triggers and update process
- Quick-reference index linking to repo paths and playbooks
## Use-case Playbooks
- **Purpose**: Step-by-step workflows for common tasks (add module/server/script/host/secrets).
- **Required Fields**: Name, prerequisites (toggles/hosts/secureHost), inputs, steps, outputs, references to helpers (e.g., mkserver, auto-import), secrets placement, validation checklist.
- **Location**: `/docs` (linked) or `/specs/001-ai-docs` as interim until promoted.
## Reference Map
- **Purpose**: Index for navigation.
- **Required Fields**: Category → repo path(s), notes on auto-import filters, proxy type mapping, firewall rules generation, stylix schemes location, active hosts.
## Validation
- **Checklist Alignment**: Must satisfy feature success criteria (coverage completeness, discoverability ≤2 steps, secrets map completeness).
- **Review**: Manual review against requirements and this contract; record conflicts and resolutions.


@@ -0,0 +1,22 @@
# Data Model: AI-Facing Repository Constitution
## Entities
### Constitution
- **Role**: Source of truth for AI about repository rules, architecture, conventions, precedence.
- **Key Fields**: scope (modules, hosts, secrets, proxies), conventions (no blank lines, comment discipline, factor duplication), precedence (constitution over human docs for AI), update triggers, references index, success criteria links.
- **Relationships**: Links to Use-case Playbooks and Reference Map; references repo paths.
### Use-case Playbook
- **Role**: Workflow guide for repeatable actions (add module/server/script/host/secrets).
- **Key Fields**: use-case name, required inputs, steps, expected outputs, related helpers (e.g., mkserver), secrets mapping, host/toggle implications.
- **Relationships**: References Constitution for rules and Reference Map for file locations.
### Reference Map
- **Role**: Index mapping core concerns to repo paths for navigation and validation.
- **Key Fields**: category (apps, dev, scripts, servers, services, shell, network, users, nix, patches), hosts list (emacs, server, workstation, miniserver, galaxy), secrets files, proxy rules, auto-import rules, stylix/schemes, audit checklist entries, navigation links to constitution/playbooks.
- **Relationships**: Anchors citations used by Constitution and Playbooks.
## Constraints and States
- **Lifecycle**: Constitution updates trigger playbook/index updates; playbooks evolve with new workflows; reference map updates when repo structure changes.
- **Validation Rules**: Constitution must enumerate all categories/hosts; secrets map must be complete; discoverability within two steps; precedence must be explicit.

specs/001-ai-docs/plan.md Normal file

@@ -0,0 +1,74 @@
# Implementation Plan: AI-Facing Repository Constitution
**Branch**: `001-ai-docs` | **Date**: 2026-01-30 | **Spec**: specs/001-ai-docs/spec.md
**Input**: Feature specification from `/specs/001-ai-docs/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
Document an AI-facing constitution and use-case playbooks that explain repository rules, architecture, and workflows (e.g., adding modules/hosts/scripts/secrets), with clear information architecture for `/docs` and `/specs`. Approach: synthesize current repo conventions (flake-parts, auto-imports, toggles, secrets map, proxy rules), define authoritative precedence (constitution for AI), and provide quick-start guidance plus reference mapping for AI tools.
## Technical Context
<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in advisory capacity to guide
the iteration process.
-->
**Language/Version**: N/A (documentation artifacts)
**Primary Dependencies**: N/A (static markdown)
**Storage**: Repo files under `/docs` and `/specs`
**Testing**: Manual validation against checklist and repo conventions
**Target Platform**: AI assistants (Cursor/MCP) consuming repository docs
**Project Type**: Documentation set
**Performance Goals**: Docs discoverable within ≤2 navigation steps as per spec
**Constraints**: Follow repo conventions (no blank lines between code blocks, minimal comments, factor duplication into shared functions in examples)
**Scale/Scope**: Covers all module categories and active hosts; includes constitution, reference map, and workflow playbooks
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
- Constitution in `.specify/memory/constitution.md` is a placeholder; default gate is adherence to repo conventions captured in the feature spec (no duplicate code without functions/factories, no blank lines between blocks, minimal comments).
- Plan activities are documentation-only and respect these conventions. Gate: PASS (no violations).
- Post-design re-check: PASS (generated docs/templates only).
## Project Structure
### Documentation (this feature)
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
### Source Code (repository root)
```text
docs/ # Human/AI-facing constitutional docs (to be added)
specs/
└── 001-ai-docs/ # Feature-specific planning and outputs
├── plan.md
├── research.md
├── data-model.md
├── quickstart.md
└── contracts/ # Documentation contracts/templates for AI use-cases
```
**Structure Decision**: Documentation-first; all outputs live under `specs/001-ai-docs/` with future AI-facing constitution placed in `docs/`. No application code changes in this plan.
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |


@@ -0,0 +1,7 @@
# Quickstart: AI-Facing Repository Constitution
1) Start with the constitution at `docs/constitution.md` for scope, terminology, proxies, secrets map, and precedence (constitution overrides human docs for AI).
2) Navigate with the reference map at `docs/reference/index.md` (module directories, hosts/roles, auto-import filters, proxy helpers, secrets map, stylix).
3) Run the correct playbook from `docs/playbooks/` (`add-module`, `add-server`, `add-script`, `add-host-toggle`, `add-secret`) and keep secureHost gating in mind.
4) Validate outcomes using the audit checklist in `docs/reference/index.md` (coverage of categories/hosts, secrets completeness, proxy rules, discoverability ≤2 steps).
5) If rules or structure change: update `docs/constitution.md`, refresh affected playbooks and `docs/reference/index.md`, then record the decision and date in `specs/001-ai-docs/research.md`.


@@ -0,0 +1,21 @@
# Research Findings: AI-Facing Repository Constitution
## Decision 1: Constitution is authoritative for AI
- **Decision**: AI-facing constitution overrides conflicting human docs for AI guidance; conflicts trigger updates to both with recorded resolution.
- **Rationale**: Ensures consistent automated actions and avoids drift between AI behavior and repo rules.
- **Alternatives considered**: (a) Defer to human docs and mirror later (rejected: increases drift); (b) Split authority by domain (rejected: adds ambiguity for AI tools).
## Decision 2: Documentation IA
- **Decision**: Place the living constitution in `/docs`; keep feature-specific planning/playbooks under `specs/001-ai-docs/` with index links in constitution.
- **Rationale**: Clear separation of durable source-of-truth vs feature-level artifacts; aligns with spec expectations.
- **Alternatives considered**: (a) Keep all AI docs in `/specs` (rejected: mixes planning with durable guidance); (b) Distribute docs per module directory (rejected: harder AI discoverability).
## Decision 3: Validation approach
- **Decision**: Validate docs manually against repo conventions and the specification's success criteria (discoverability ≤2 steps, full module/host coverage, secrets map completeness).
- **Rationale**: No automated tests for markdown; checklist-based review matches spec.
- **Alternatives considered**: (a) Add automated linting now (rejected: out-of-scope for this documentation feature); (b) No validation (rejected: risks drift and omissions).
## Decision 4: Navigation and conflict logging
- **Decision**: Anchor discoverability through `docs/constitution.md` with links to `docs/reference/index.md` and `docs/playbooks/`; record any conflict resolutions in `specs/001-ai-docs/research.md` with date and source references.
- **Rationale**: Keeps navigation within two steps for AI and centralizes historical decisions for reconciliation.
- **Alternatives considered**: (a) Distribute links per playbook only (rejected: harder to enforce two-step navigation); (b) Log conflicts in ad-hoc comments (rejected: loses traceability for AI tools).

specs/001-ai-docs/spec.md Normal file

@@ -0,0 +1,94 @@
# Feature Specification: AI-Facing Repository Constitution
**Feature Branch**: `001-ai-docs`
**Created**: 2026-01-30
**Status**: Draft
**Input**: User description: "I want to start by creating comprehensive documentation targeted to AI, this documentation should include the context of how this repository works the logic and rules (both written and unwritten), and the location of where use cases such as adding a new module, would go. The end goal will be to build a mcp with this documentation, but not yet."
## Clarifications
### Session 2026-01-30
- Q: How to handle conflicts between human-facing docs and the AI-facing constitution? → A: Constitution is authoritative for AI; if human docs disagree, update both with a recorded resolution.
## User Scenarios & Testing *(mandatory)*
### User Story 1 - AI reads the constitution (Priority: P1)
An AI assistant loads a single constitution document to understand repository-wide rules, conventions, and architecture so it can answer questions and generate correct changes without human context.
**Why this priority**: Without a trustworthy root document, AI-driven edits risk violating repo rules or duplicating code.
**Independent Test**: Provide only the constitution to an AI and ask for the steps to add a new server module; the AI should return steps that follow documented rules (e.g., use mkserver, no blank lines, secureHost gating).
**Acceptance Scenarios**:
1. **Given** the constitution is available, **When** an AI is asked to summarize how reverse proxies are chosen, **Then** it cites the documented proxy types and mainServer/IP rules.
2. **Given** the constitution is available, **When** an AI is asked where secrets for a new service go, **Then** it names the correct secrets file based on the documented mapping.
---
### User Story 2 - AI knows where to place use cases (Priority: P2)
An AI can locate and follow guidance on where to document workflows such as adding a module, host, or script, including which folder to use and what schema to follow.
**Why this priority**: Clear placement rules prevent scattered or conflicting AI-authored docs and keep automation predictable.
**Independent Test**: Ask an AI, “Where do I document adding a new NixOS server module?”; it should point to the specified `/docs` or `/specs` location and expected template sections.
**Acceptance Scenarios**:
1. **Given** the documentation set, **When** an AI searches for “add module” guidance, **Then** it finds the designated playbook location and the required fields to fill.
2. **Given** a new use case like “add new host toggle,” **When** the AI checks the structure, **Then** it identifies which section/template to extend and what metadata to capture.
---
### User Story 3 - AI maintains alignment over time (Priority: P3)
An AI or contributor can update the constitution and use-case docs when repo rules change, with explicit triggers and review checkpoints to keep AI guidance in sync with code.
**Why this priority**: Prevents drift between living code (e.g., new factories, proxy rules) and AI-consumable docs.
**Independent Test**: After a rule change (e.g., new secrets file category), the doc update path lists the trigger, target doc, and reviewer checklist without needing additional instructions.
**Acceptance Scenarios**:
1. **Given** a new helper is added to `parts/core.nix`, **When** the AI follows the maintenance rules, **Then** it updates the constitution and cross-references within one documented cycle.
2. **Given** conflicting rules are detected, **When** the AI follows conflict-resolution guidance, **Then** it defers to the constitution's precedence rules and records the resolution location.
---
### Edge Cases
- When human-facing docs and the AI-facing constitution disagree, the constitution is authoritative for AI; reconcile by updating both docs and recording the resolution.
- What to do when a new module category appears that is not yet mapped in the doc structure.
- How to handle secureHost-off hosts where secret-driven guidance should be skipped or flagged.
- What to do when auto-imported modules are added accidentally (guidance on validation and rollback paths).
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: Document a single source-of-truth constitution that explains repo architecture (flake-parts, auto-imported modules, hosts/toggles, secureHost, mainServer-driven proxy IP selection) and coding conventions (no blank lines between blocks, comment discipline, factor duplication into functions/factories).
- **FR-002**: Define terminology and naming standards for this repo (NixOS module vs factory, server options vs service config, toggle sets, host toggles, script units, dev shells) with canonical references to current files.
- **FR-003**: Specify the documentation IA: what lives in `/docs` vs `/specs`, where AI-facing use-case playbooks go, and how to reference them from the constitution.
- **FR-004**: Capture unwritten operational rules: secret file roles, secureHost gating, proxy type selection rules, mainServer meaning, uid/gid reproducibility goals, and host activity status.
- **FR-005**: Provide standard playbook outlines for common workflows (adding NixOS module, server module using mkserver, script unit, host toggle, secrets entry) including required metadata and links to relevant helpers.
- **FR-006**: Include maintenance triggers and process: when code changes (new factory/helper, new secret category, new toggle), which docs must be updated, and how conflicts are resolved with the constitution as the AI authority (update both human docs and constitution with the recorded decision).
- **FR-007**: Add a quick-reference index that maps core concerns to files/dirs (e.g., auto-import rules, proxies, firewall generation, stylix schemes, user toggles, secrets locations).
- **FR-008**: Ensure all required sections are written in business-level, technology-agnostic language suitable for AI ingestion, free of implementation steps.
### Key Entities *(include if feature involves data)*
- **Constitution**: The AI-facing source-of-truth document describing repo-wide rules, architecture, conventions, and precedence.
- **Use-case Playbooks**: Structured guides for repeatable workflows (add module/server/script/host/secrets), each with required fields and links back to the constitution.
- **Reference Map**: Index of repository paths and their responsibilities (modules categories, factories, hosts, secrets, proxies, toggles) used for navigation and validation.
## Success Criteria *(mandatory)*
### Measurable Outcomes
- **SC-001**: An AI with only these docs can describe the correct steps and file locations to add a new server module in under 2 minutes of reading time, matching existing patterns.
- **SC-002**: The constitution explicitly enumerates 100% of current module categories (apps, dev, scripts, servers, services, shell, network, users, nix, patches) and active hosts (emacs, server, workstation) with their roles.
- **SC-003**: Guidance includes the full secrets file map (certs/env/gallery/homepage/keys/wireguard/secrets) and secureHost behavior with no omissions when audited against the repository.
- **SC-004**: Playbook locations and required fields are discoverable via the documented index in ≤2 navigation steps from the top of the spec.

specs/001-ai-docs/tasks.md Normal file

@@ -0,0 +1,137 @@
# Tasks: AI-Facing Repository Constitution
**Input**: Design documents from `/specs/001-ai-docs/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
**Tests**: Tests are optional; not requested in the spec. No explicit test tasks included.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- Documentation lives under `docs/` (constitution and playbooks) and `specs/001-ai-docs/` (planning artifacts and interim docs).
---
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Prepare documentation scaffolding and locations.
- [X] T001 Create `docs/` constitution folder structure per plan (docs/)
- [X] T002 Create `docs/` playbooks subfolder for workflows (docs/playbooks/)
- [X] T003 Create `docs/` reference-map subfolder for indexes (docs/reference/)
- [X] T004 Link feature artifacts index in specs/001-ai-docs/quickstart.md to future docs/ locations (specs/001-ai-docs/quickstart.md)
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Shared artifacts all user stories rely on.
- [X] T005 Draft constitution outline with required sections and precedence statement (docs/constitution.md)
- [X] T006 Populate secrets map structure and secureHost rules placeholder (docs/constitution.md)
- [X] T007 Add reference map skeleton covering modules, hosts, secrets, proxies, stylix (docs/reference/index.md)
- [X] T008 Add playbook template covering required fields (docs/playbooks/template.md)
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel.
---
## Phase 3: User Story 1 - AI reads the constitution (Priority: P1) 🎯 MVP
**Goal**: Single constitution document for AI with repo rules, architecture, conventions, and precedence.
**Independent Test**: With only docs/constitution.md, an AI can answer proxy selection, secrets placement, and module-add steps within repo rules.
### Implementation for User Story 1
- [X] T009 [US1] Fill constitution scope, audience, repo overview (flake-parts, auto-imports, hosts/toggles, mainServer/proxy rules) (docs/constitution.md)
- [X] T010 [US1] Document coding conventions (no blank lines between blocks, comment discipline, factor duplication via functions/factories) (docs/constitution.md)
- [X] T011 [US1] Define canonical terminology and naming standards for modules, factories, options, toggles, servers, scripts, playbooks (docs/constitution.md)
- [X] T012 [US1] Document secrets map and secureHost behavior (certs/env/gallery/homepage/keys/wireguard/secrets) (docs/constitution.md)
- [X] T013 [US1] Enumerate module categories and active hosts with roles (docs/constitution.md)
- [X] T014 [US1] Add precedence/conflict resolution section (constitution over human docs for AI) (docs/constitution.md)
- [X] T015 [US1] Add maintenance triggers and update process (docs/constitution.md)
- [X] T016 [US1] Add quick-reference index linking to playbooks and reference map (docs/constitution.md)
**Checkpoint**: Constitution stands alone for AI queries.
---
## Phase 4: User Story 2 - AI knows where to place use cases (Priority: P2)
**Goal**: Playbooks and reference map guide AI to the right locations/templates for workflows.
**Independent Test**: AI can locate the correct doc and fields for “add module/server/script/host/secrets” without extra context.
### Implementation for User Story 2
- [X] T017 [US2] Write playbook for adding a NixOS module (docs/playbooks/add-module.md)
- [X] T018 [US2] Write playbook for adding a server module using mkserver (docs/playbooks/add-server.md)
- [X] T019 [US2] Write playbook for adding a script unit (docs/playbooks/add-script.md)
- [X] T020 [US2] Write playbook for adding a host toggle and host wiring (docs/playbooks/add-host-toggle.md)
- [X] T021 [US2] Write playbook for secrets entry mapping to files (docs/playbooks/add-secret.md)
- [X] T022 [US2] Fill reference map entries for module dirs, auto-import filters, proxy types, firewall generation, stylix schemes, hosts (docs/reference/index.md)
- [X] T023 [US2] Link playbooks and reference map from constitution index (docs/constitution.md)
**Checkpoint**: Playbooks and reference map make placement and schema discoverable.
---
## Phase 5: User Story 3 - AI maintains alignment over time (Priority: P3)
**Goal**: Process to keep AI docs synced with repo changes and resolve conflicts.
**Independent Test**: After a rule change, the docs update path is clear (trigger → target docs → reviewer checklist) without extra guidance.
### Implementation for User Story 3
- [X] T024 [US3] Add maintenance triggers list (new factory/helper, new secret category, new toggle, new host/module dir) (docs/constitution.md)
- [X] T025 [US3] Add update workflow/checklist for constitution, playbooks, reference map (docs/constitution.md)
- [X] T026 [US3] Add conflict-resolution steps and logging guidance in constitution (docs/constitution.md)
- [X] T027 [US3] Add quick audit checklist to reference map for drift detection (docs/reference/index.md)
- [X] T028 [US3] Update quickstart.md with maintenance flow summary (specs/001-ai-docs/quickstart.md)
**Checkpoint**: Maintenance and conflict resolution are documented and repeatable.
---
## Phase N: Polish & Cross-Cutting Concerns
**Purpose**: Final alignment and validation across documents.
- [X] T029 [P] Cross-check constitution, playbooks, reference map against SC-001–SC-004 and FR-008 (docs/constitution.md, docs/playbooks/, docs/reference/index.md)
- [X] T030 [P] Ensure discoverability ≤2 navigation steps from top-level docs (docs/)
- [X] T031 [P] Validate business-level, tech-agnostic language per FR-008 across all docs (docs/)
- [X] T032 [P] Update specs/001-ai-docs/research.md with any decisions made during authoring (specs/001-ai-docs/research.md)
- [X] T033 [P] Update specs/001-ai-docs/data-model.md if entities/fields changed (specs/001-ai-docs/data-model.md)
- [X] T034 Final proofread and format pass (docs/)
---
## Dependencies & Execution Order
### Phase Dependencies
- Setup → Foundational → User Stories (P1 → P2 → P3) → Polish
### User Story Dependencies
- US1 has no story dependency (MVP). US2 builds on US1 artifacts (constitution index). US3 builds on US1/US2.
### Parallel Opportunities
- Setup tasks T001–T004 can run in parallel.
- Foundational tasks T005–T008 can run in parallel.
- Within US1, T009–T016 are mostly sequential; subsections allow minor parallelization if coordinated.
- US2 playbooks (T017–T021) are parallelizable; the reference map (T022) depends on prior structure; linking (T023) comes last.
- US3 tasks T024–T027 can largely run in parallel; the quickstart update (T028) follows the others.
- Polish tasks T029–T033 run in parallel; T034 comes last.
## Implementation Strategy
- Deliver MVP with US1 (constitution) first.
- Add placement guidance (US2) next.
- Add maintenance/conflict process (US3), then polish.