Initialize project structure with foundational files including .gitignore, README, and specification templates. Establish project constitution outlining core principles for code quality, testing, user experience, and performance. Add initial feature specification for Reference Board Viewer application.

Danilo Reyes
2025-11-01 21:49:14 -06:00
parent 75492c3b61
commit 43bd1aebf0
15 changed files with 1718 additions and 436 deletions

View File

@@ -2,6 +2,17 @@
Auto-generated from all feature plans. Last updated: [DATE]
## Constitutional Principles
This project follows a formal constitution (`.specify/memory/constitution.md`). All development work MUST align with these principles:
1. **Code Quality & Maintainability** - Clear, maintainable code with proper typing
2. **Testing Discipline** - ≥80% coverage, automated testing required
3. **User Experience Consistency** - Intuitive, accessible interfaces
4. **Performance & Efficiency** - Performance-first design with bounded resources
Reference the full constitution for detailed requirements and enforcement mechanisms.
## Active Technologies
[EXTRACTED FROM ALL PLAN.MD FILES]
@@ -20,6 +31,24 @@ Auto-generated from all feature plans. Last updated: [DATE]
[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
### Constitutional Requirements
All code MUST meet these standards (per Principle 1):
- Linter passing (zero errors/warnings)
- Type hints on all public APIs
- Clear single responsibilities (SRP)
- Explicit constants (no magic numbers)
- Comments explaining "why" not "what"
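A minimal sketch of code meeting these standards (the constant and function are hypothetical examples, not project code):

```python
from typing import Final

# Explicit named constant instead of a magic number.
MAX_VISIBLE_TITLES: Final[int] = 5

def truncate_titles(titles: list[str], limit: int = MAX_VISIBLE_TITLES) -> list[str]:
    """Return at most `limit` titles.

    Why a bound: the calling UI renders a fixed-height list, so results
    are capped here once rather than in every caller.
    """
    return titles[:limit]
```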
## Testing Standards
Per Constitutional Principle 2:
- Minimum 80% test coverage required
- Unit tests for all public functions
- Integration tests for component interactions
- Edge cases and error paths explicitly tested
- Tests are deterministic, isolated, and fast (<1s unit, <10s integration)
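For illustration, a unit test meeting these standards might look like the sketch below (`slugify` is a hypothetical function, not part of the codebase): deterministic (no clock or network), isolated (no shared state), and fast.

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and collapse non-alphanumeric runs into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_happy_path() -> None:
    assert slugify("Reference Board Viewer") == "reference-board-viewer"

def test_slugify_edge_cases() -> None:
    # Edge cases and error paths tested explicitly: empty and symbol-only input.
    assert slugify("") == ""
    assert slugify("!!!") == ""
```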
## Recent Changes
[LAST 3 FEATURES AND WHAT THEY ADDED]

View File

@@ -1,8 +1,8 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]
**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]
**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.
@@ -20,6 +20,15 @@
============================================================================
-->
## Constitutional Compliance Check
Before proceeding, verify alignment with constitutional principles:
- [ ] **Code Quality (Principle 1):** Design maintains/improves maintainability
- [ ] **Testing (Principle 2):** ≥80% coverage plan established
- [ ] **UX Consistency (Principle 3):** User impact documented and positive
- [ ] **Performance (Principle 4):** Performance budget and complexity analyzed
## [Category 1]
- [ ] CHK001 First checklist item with clear action
@@ -32,6 +41,16 @@
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category
## Pre-Merge Validation
Per constitutional requirements:
- [ ] All tests passing (≥80% coverage maintained)
- [ ] Linter/type checker passing (zero errors)
- [ ] Code review approved with principle verification
- [ ] Documentation updated
- [ ] Performance benchmarks met (if applicable)
## Notes
- Check items off as completed: `[x]`

View File

@@ -0,0 +1,81 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync
---
## User Input
```text
[User's request for constitutional changes]
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
Follow this execution flow:
1. Load the existing constitution template at `.specify/memory/constitution.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
   **IMPORTANT**: The user might require fewer or more principles than the template provides. If a specific number is given, respect it while following the template's general structure, and update the document accordingly.
2. Collect/derive values for placeholders:
- If user input (conversation) supplies a value, use it.
- Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
- For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
- `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
- MAJOR: Backward incompatible governance/principle removals or redefinitions.
- MINOR: New principle/section added or materially expanded guidance.
- PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose your reasoning before finalizing.
3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (leave no bracketed tokens except template slots the project has deliberately chosen not to define yet; explicitly justify any that remain).
   - Preserve the heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has a succinct name line, a paragraph (or bullet list) capturing its non-negotiable rules, and an explicit rationale where it is not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.
4. Consistency propagation checklist (convert prior checklist into active validations):
- Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify that no outdated references (e.g., agent-specific names such as CLAUDE) remain where generic guidance is required.
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
- Version change: old → new
- List of modified principles (old title → new title if renamed)
- Added sections
- Removed sections
- Templates requiring updates (✅ updated / ⚠ pending) with file paths
- Follow-up TODOs if any placeholders intentionally deferred.
6. Validation before final output:
- No remaining unexplained bracket tokens.
- Version line matches report.
- Dates ISO format YYYY-MM-DD.
   - Principles are declarative, testable, and free of vague language (replace a bare "should" with MUST, or with SHOULD plus an explicit rationale, where appropriate).
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
8. Output a final summary to the user with:
- New version and bump rationale.
- Any files flagged for manual follow-up.
- Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
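The mechanical parts of this flow — placeholder scanning (step 1), the semantic-versioning bump (step 2), and the final validation gate (step 6) — can be sketched as small helpers. All names here are hypothetical illustrations, not part of the command:

```python
import re
from datetime import date

PLACEHOLDER = re.compile(r"\[([A-Z][A-Z0-9_]*)\]")

def find_placeholders(text: str) -> list[str]:
    """Step 1: list every [ALL_CAPS_IDENTIFIER] token in document order."""
    return PLACEHOLDER.findall(text)

def bump(version: str, kind: str) -> str:
    """Step 2: increment a MAJOR.MINOR.PATCH version per the rules above."""
    major, minor, patch = (int(p) for p in version.split("."))
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    if kind == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump kind: {kind}")

def validate(text: str) -> list[str]:
    """Step 6: return problems found; an empty list means the gate passes."""
    problems: list[str] = []
    leftovers = find_placeholders(text)
    if leftovers:
        problems.append(f"unexplained placeholder tokens: {leftovers}")
    # Assumes dates appear as ISO-shaped YYYY-MM-DD strings anywhere in the text.
    for raw in re.findall(r"\b(\d{4}-\d{2}-\d{2})\b", text):
        try:
            date.fromisoformat(raw)
        except ValueError:
            problems.append(f"invalid date: {raw}")
    return problems
```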
Formatting & Style Requirements:
- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.
If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
--- End Command ---

View File

@@ -1,104 +1,97 @@
# Implementation Plan: [FEATURE]
# Plan: [FEATURE_NAME]
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Created:** [YYYY-MM-DD]
**Status:** [Draft | Active | Completed | Obsolete]
**Owner:** [OWNER_NAME]
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Overview
## Summary
Brief description of what this plan aims to achieve and why it's important.
[Extract from feature spec: primary requirement + technical approach from research]
## Objectives
## Technical Context
- [ ] Primary objective 1
- [ ] Primary objective 2
- [ ] Primary objective 3
<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in an advisory capacity to
guide the iteration process.
-->
## Constitution Alignment Check
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
Before proceeding, verify alignment with constitutional principles:
## Constitution Check
- **Code Quality & Maintainability:** How will this maintain/improve code quality?
- [ ] Design follows single responsibility principle
- [ ] Clear module boundaries defined
- [ ] Dependencies justified and documented
- **Testing Discipline:** What testing strategy will ensure correctness?
- [ ] Unit test coverage plan (≥80%)
- [ ] Integration test scenarios identified
- [ ] Edge cases documented
- **User Experience Consistency:** How does this impact users?
- [ ] UI/API changes follow existing patterns
- [ ] Error handling is user-friendly
- [ ] Documentation plan complete
- **Performance & Efficiency:** What are the performance implications?
- [ ] Performance budget established
- [ ] Algorithmic complexity analyzed
- [ ] Resource usage estimated
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
## Scope
[Gates determined based on constitution file]
### In Scope
- What will be built/changed
- Explicit boundaries
## Project Structure
### Out of Scope
- What will NOT be addressed
- Deferred items for future work
### Documentation (this feature)
## Technical Approach
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
High-level technical strategy and architectural decisions.
### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->
### Key Components
1. Component A: Purpose and responsibilities
2. Component B: Purpose and responsibilities
3. Component C: Purpose and responsibilities
```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/
### Dependencies
- Internal dependencies (other modules/services)
- External dependencies (libraries, APIs, services)
tests/
├── contract/
├── integration/
└── unit/
### Risks & Mitigations
| Risk | Impact | Probability | Mitigation Strategy |
|------|--------|-------------|---------------------|
| Risk 1 | High/Med/Low | High/Med/Low | How we'll address it |
# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│ ├── models/
│ ├── services/
│ └── api/
└── tests/
## Implementation Phases
frontend/
├── src/
│ ├── components/
│ ├── pages/
│ └── services/
└── tests/
### Phase 1: [Name] (Est: X days)
- Milestone 1
- Milestone 2
# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]
### Phase 2: [Name] (Est: X days)
- Milestone 3
- Milestone 4
ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```
## Success Criteria
**Structure Decision**: [Document the selected structure and reference the real
directories captured above]
Clear, measurable criteria for completion:
- [ ] All tests passing with ≥80% coverage
- [ ] Performance benchmarks met
- [ ] Documentation complete
- [ ] Code review approved
- [ ] Production deployment successful
## Complexity Tracking
## Open Questions
> **Fill ONLY if Constitution Check has violations that must be justified**
- [ ] Question 1 that needs resolution
- [ ] Question 2 that needs research
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
## References
- Link to specs
- Related plans
- External documentation

View File

@@ -1,115 +1,181 @@
# Feature Specification: [FEATURE NAME]
# Specification: [FEATURE_NAME]
**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"
**Version:** [X.Y.Z]
**Created:** [YYYY-MM-DD]
**Last Updated:** [YYYY-MM-DD]
**Status:** [Draft | Review | Approved | Implemented]
**Owner:** [OWNER_NAME]
## User Scenarios & Testing *(mandatory)*
## Purpose
<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->
Clear statement of what this specification defines and its business/technical value.
### User Story 1 - [Brief Title] (Priority: P1)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
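Each Given/When/Then scenario maps directly onto a test. A sketch in pytest style with a toy system under test (all names hypothetical):

```python
def create_account(email: str, existing: set[str]) -> str:
    """Toy system under test: register an email or report a conflict."""
    return "conflict" if email in existing else "created"

def test_given_no_account_when_registering_then_created() -> None:
    existing: set[str] = set()                              # Given: initial state
    outcome = create_account("user@example.com", existing)  # When: action
    assert outcome == "created"                             # Then: expected outcome

def test_given_taken_email_when_registering_then_conflict() -> None:
    existing = {"user@example.com"}
    assert create_account("user@example.com", existing) == "conflict"
```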
---
### User Story 2 - [Brief Title] (Priority: P2)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 3 - [Brief Title] (Priority: P3)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
[Add more user stories as needed, each with an assigned priority]
### Edge Cases
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->
- What happens when [boundary condition]?
- How does system handle [error scenario]?
## Requirements *(mandatory)*
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->
## Requirements
### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
#### FR1: [Requirement Name]
**Priority:** [Critical | High | Medium | Low]
**Description:** Detailed description of the requirement.
*Example of marking unclear requirements:*
**Acceptance Criteria:**
- [ ] Criterion 1 (testable condition)
- [ ] Criterion 2 (testable condition)
- [ ] Criterion 3 (testable condition)
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
**Constitutional Alignment:**
- Testing: [How this will be tested per Principle 2]
- UX Impact: [User-facing implications per Principle 3]
- Performance: [Performance considerations per Principle 4]
### Key Entities *(include if feature involves data)*
#### FR2: [Requirement Name]
[Repeat structure above]
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]
### Non-Functional Requirements
## Success Criteria *(mandatory)*
#### NFR1: Performance
Per Constitutional Principle 4:
- Response time: [target, e.g., <200ms for p95]
- Throughput: [target, e.g., >1000 req/s]
- Resource limits: [memory/CPU bounds]
- Scalability: [expected load ranges]
<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->
#### NFR2: Quality
Per Constitutional Principle 1:
- Code coverage: ≥80% (Principle 2 requirement)
- Linting: Zero errors/warnings
- Type safety: Full type hints on public APIs
- Documentation: All public APIs documented
### Measurable Outcomes
#### NFR3: User Experience
Per Constitutional Principle 3:
- Accessibility: WCAG 2.1 AA compliance
- Error handling: User-friendly messages
- Consistency: Follows existing design patterns
- Response feedback: <200ms or progress indicators
- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
#### NFR4: Maintainability
Per Constitutional Principle 1:
- Complexity: Cyclomatic complexity <10 per function
- Dependencies: Explicit versioning, security audit
- Modularity: Clear separation of concerns
## Design
### Architecture Overview
[Diagram or description of system components and their interactions]
### Data Models
```python
# Example data structures with type hints
from typing import List, Optional

class ExampleModel:
    """Clear docstring explaining purpose."""

    field1: str
    field2: int
    field3: Optional[List[str]]
```
### API/Interface Specifications
#### Endpoint/Method: [Name]
```python
def example_function(param1: str, param2: int) -> ResultType:
    """
    Clear description of what this does.

    Args:
        param1: Description of parameter
        param2: Description of parameter

    Returns:
        Description of return value

    Raises:
        ValueError: When validation fails
    """
    pass
```
**Error Handling:**
- Error case 1: Response/behavior
- Error case 2: Response/behavior
### Testing Strategy
#### Unit Tests
- Component A: [Test scenarios]
- Component B: [Test scenarios]
- Edge cases: [List critical edge cases]
#### Integration Tests
- Integration point 1: [Test scenario]
- Integration point 2: [Test scenario]
#### Performance Tests
- Benchmark 1: [Target metric]
- Load test: [Expected traffic pattern]
## Implementation Considerations
### Performance Analysis
- Algorithmic complexity: [Big-O analysis]
- Database queries: [Query plans, indexes needed]
- Caching strategy: [What, when, invalidation]
- Bottleneck prevention: [Known risks and mitigations]
### Security Considerations
- Authentication/Authorization requirements
- Input validation requirements
- Data protection measures
### Migration Path
If this changes existing functionality:
- Backward compatibility strategy
- User migration steps
- Rollback plan
## Dependencies
### Internal Dependencies
- Module/Service A: [Why needed]
- Module/Service B: [Why needed]
### External Dependencies
```text
# New dependencies to add (with justification)
package-name==X.Y.Z # Why: specific reason for this dependency
```
## Rollout Plan
1. **Development:** [Timeline and milestones]
2. **Testing:** [QA approach and environments]
3. **Staging:** [Validation steps]
4. **Production:** [Deployment strategy - canary/blue-green/etc]
5. **Monitoring:** [Key metrics to watch]
## Success Metrics
Post-deployment validation:
- [ ] All acceptance criteria met
- [ ] Performance benchmarks achieved
- [ ] Zero critical bugs in first week
- [ ] User feedback collected and positive
- [ ] Test coverage ≥80% maintained
## Open Issues
- [ ] Issue 1 requiring resolution
- [ ] Issue 2 needing decision
## Appendix
### References
- Related specifications
- External documentation
- Research materials
### Change Log
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0.0 | YYYY-MM-DD | Name | Initial specification |

View File

@@ -1,251 +1,148 @@
---
# Tasks: [FEATURE/AREA_NAME]
description: "Task list template for feature implementation"
---
**Created:** [YYYY-MM-DD]
**Last Updated:** [YYYY-MM-DD]
**Sprint/Milestone:** [IDENTIFIER]
# Tasks: [FEATURE NAME]
## Overview
Brief context for this task list and its relationship to plans/specs.
**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
## Task Categories
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
Tasks are organized by constitutional principle to ensure balanced development:
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
### 🏗️ Implementation Tasks (Principle 1: Code Quality)
- [ ] **[TASK-001]** Task title
- **Description:** What needs to be done
- **Acceptance:** How to verify completion
- **Estimate:** [S/M/L/XL or hours]
- **Dependencies:** [Other task IDs]
- **Quality checklist:**
- [ ] Follows style guide (linter passes)
- [ ] Type hints added
- [ ] No code duplication
- [ ] Comments explain "why" not "what"
- [ ] **[TASK-002]** Next task...
### 🧪 Testing Tasks (Principle 2: Testing Discipline)
## Format: `[ID] [P?] [Story] Description`
- [ ] **[TEST-001]** Write unit tests for [Component]
- **Coverage target:** ≥80% for new code
- **Test scenarios:**
- [ ] Happy path
- [ ] Edge case 1
- [ ] Edge case 2
- [ ] Error handling
- **Estimate:** [S/M/L/XL]
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
- [ ] **[TEST-002]** Integration tests for [Feature]
- **Scope:** [Component interactions to validate]
- **Performance target:** <10s execution time
## Path Conventions
- [ ] **[TEST-003]** Regression test for [Bug #X]
- **Bug reference:** [Link to issue]
- **Reproduction steps:** [Documented]
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure
### 👤 User Experience Tasks (Principle 3: UX Consistency)
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
- [ ] **[UX-001]** Design/implement [UI Component]
- **Design system alignment:** [Pattern/component to follow]
- **Accessibility checklist:**
- [ ] Keyboard navigable
- [ ] Screen reader compatible
- [ ] Color contrast WCAG AA
- [ ] Focus indicators visible
- **Estimate:** [S/M/L/XL]
## Phase 1: Setup (Shared Infrastructure)
- [ ] **[UX-002]** Error message improvement for [Feature]
- **Current message:** [What users see now]
- **Improved message:** [Clear, actionable alternative]
- **Context provided:** [Where, why, what to do]
**Purpose**: Project initialization and basic structure
- [ ] **[UX-003]** User documentation for [Feature]
- **Target audience:** [End users/API consumers/admins]
- **Format:** [README/Wiki/API docs/Tutorial]
- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools
### ⚡ Performance Tasks (Principle 4: Performance & Efficiency)
---
- [ ] **[PERF-001]** Optimize [Operation/Query]
- **Current performance:** [Baseline metric]
- **Target performance:** [Goal metric]
- **Approach:** [Algorithm change/caching/indexing/etc]
- **Estimate:** [S/M/L/XL]
## Phase 2: Foundational (Blocking Prerequisites)
- [ ] **[PERF-002]** Add performance benchmark for [Feature]
- **Metric:** [Response time/throughput/memory]
- **Budget:** [Threshold that triggers alert]
- **CI integration:** [How it blocks bad merges]
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
- [ ] **[PERF-003]** Profile and fix [Bottleneck]
- **Profiling tool:** [Tool to use]
- **Suspected issue:** [Hypothesis]
- **Verification:** [How to confirm fix]
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
### 🔧 Infrastructure/DevOps Tasks
Examples of foundational tasks (adjust based on your project):
- [ ] **[INFRA-001]** Setup [Tool/Service]
- **Purpose:** [Why this is needed]
- **Configuration:** [Key settings]
- **Documentation:** [Where to document setup]
- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management
- [ ] **[INFRA-002]** CI/CD pipeline enhancement
- **Addition:** [What check/stage to add]
- **Constitutional alignment:** [Which principle this enforces]
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
### 📋 Technical Debt Tasks
---
- [ ] **[DEBT-001]** Refactor [Component]
- **Current problem:** [What makes this debt]
- **Proposed solution:** [Refactoring approach]
- **Impact:** [What improves after fix]
- **Estimate:** [S/M/L/XL]
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
- [ ] **[DEBT-002]** Update dependencies
- **Packages:** [List outdated packages]
- **Risk assessment:** [Breaking changes?]
- **Testing plan:** [How to verify upgrade]
**Goal**: [Brief description of what this story delivers]
## Task Estimation Guide
**Independent Test**: [How to verify this story works on its own]
- **S (Small):** <2 hours, single file, no dependencies
- **M (Medium):** 2-4 hours, multiple files, minor dependencies
- **L (Large):** 4-8 hours, multiple components, significant testing
- **XL (X-Large):** >8 hours, consider breaking down further
## Completion Checklist
### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️
Before closing any task, verify:
- [ ] Code changes committed with clear message
- [ ] Tests written and passing (≥80% coverage for new code)
- [ ] Linter/type checker passing
- [ ] Documentation updated
- [ ] Code review completed
- [ ] Constitutional principles satisfied
- [ ] Deployed to staging/production
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
## Blocked Tasks
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
Track tasks waiting on external dependencies:
### Implementation for User Story 1
- **[TASK-XXX]** Task title
- **Blocked by:** [Reason/dependency]
- **Resolution needed:** [Action to unblock]
- **Owner of blocker:** [Person/team]
- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations
## Completed Tasks
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
Move completed tasks here with completion date:
---
- **[TASK-000]** Example completed task (2025-11-01)
## Phase 4: User Story 2 - [Title] (Priority: P2)
## Notes & Decisions
**Goal**: [Brief description of what this story delivers]
Document important decisions or context that affects multiple tasks:
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 2
- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - [Title] (Priority: P3)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 3
- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
**Checkpoint**: All user stories should now be independently functional
---
[Add more user story phases as needed, following the same pattern]
---
## Phase N: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
### Within Each User Story
- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members
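The phase rules above form a small dependency DAG, so a valid execution order can be derived mechanically; a sketch using the standard library (phase names as used in this template):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Edges mirror the rules above: Foundational blocks every user story,
# Polish waits on all stories, and US1-US3 are mutually independent.
deps = {
    "Foundational": {"Setup"},
    "US1": {"Foundational"},
    "US2": {"Foundational"},
    "US3": {"Foundational"},
    "Polish": {"US1", "US2", "US3"},
}

ts = TopologicalSorter(deps)
ts.prepare()
while ts.is_active():
    ready = ts.get_ready()   # everything in this batch can run in parallel
    print(sorted(ready))
    ts.done(*ready)
# Prints: ['Setup'], ['Foundational'], ['US1', 'US2', 'US3'], ['Polish']
```

The third batch is exactly the "all user stories can start in parallel" window called out above.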
---
## Parallel Example: User Story 1
```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"
# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```
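In plain shell, "launch together" can be sketched by backgrounding both test commands and waiting on each (a minimal stand-in for a task runner; the pytest paths in the usage comment are hypothetical placeholders from the template):

```shell
#!/usr/bin/env bash
# run_parallel CMD1 CMD2 — launch both commands concurrently,
# wait for both to finish, and fail if either failed.
run_parallel() {
  bash -c "$1" & local p1=$!
  bash -c "$2" & local p2=$!
  local rc=0
  wait "$p1" || rc=1
  wait "$p2" || rc=1
  return "$rc"
}

# Hypothetical usage for User Story 1:
#   run_parallel "pytest tests/contract/test_boards.py" \
#                "pytest tests/integration/test_view_board.py"
```

Waiting on each PID individually (rather than a bare `wait`) is what lets the failure of either test suite propagate to the caller.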
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup + Foundational together
2. Once Foundational is done:
- Developer A: User Story 1
- Developer B: User Story 2
- Developer C: User Story 3
3. Stories complete and integrate independently
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same-file conflicts, and cross-story dependencies that break independence
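Because task lines follow a fixed shape (`- [ ] T012 [P] [US1] description`), the traceability mapping can be checked mechanically; a sketch of parsing the ID, parallel flag, and story label (the regex is an assumption about the format shown in this template, not a published spec):

```python
import re
from typing import Optional

# Matches "- [ ] T012 [P] [US1] Create model ..."; [P] and [USn] are optional.
TASK_RE = re.compile(
    r"^- \[(?P<done>[ xX])\] (?P<id>T\d+)"
    r"(?: \[(?P<parallel>P)\])?"
    r"(?: \[(?P<story>US\d+)\])?"
    r" (?P<desc>.+)$"
)

def parse_task(line: str) -> Optional[dict]:
    """Return the task's fields, or None for non-task lines (headings, notes)."""
    m = TASK_RE.match(line)
    if not m:
        return None
    return {
        "id": m["id"],
        "done": m["done"] != " ",
        "parallel": m["parallel"] is not None,
        "story": m["story"],  # None for setup/polish tasks
        "desc": m["desc"],
    }
```

A script built on this could, for example, list every `[P]` task per story to plan the parallel batches described above.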
- **[2025-11-02]** Decision about [topic]: [What was decided and why]