Workflow objective#
This blueprint automates the repeatable, administrative overhead of Agile sprint ceremonies and daily rituals: reviewing and grooming the backlog, prioritizing stories for the sprint, breaking down large stories, estimating effort, aggregating standup updates, preparing sprint review materials, and synthesizing retrospective feedback into action items.
The design is explicit about where human judgment is required and where it is not. Sprint planning decisions — which stories to commit to, what the team's actual capacity is, what dependencies create risk — require PM and engineering judgment. The workflow automates the information gathering and preparation that makes those judgment calls faster and better-informed.
Preconditions#
- Backlog management tool is in use (Jira, Linear, Shortcut, GitHub Issues, or equivalent) with API access.
- Stories in the backlog have minimum required fields: title, description, and rough sizing or story points.
- Team capacity calculation method is documented (e.g., hours per sprint per developer, points per sprint).
- Sprint cadence is defined and consistent (1-week, 2-week, or other).
- A retrospective format is established and the team is committed to acting on retrospective outputs.
- Communication channels for standup updates and sprint notifications are defined (Slack, Teams, email).
Workflow steps#
- Backlog review: The backlog agent reviews all stories above the ready threshold, flags stories lacking sufficient detail for sprint planning, and produces a groomed backlog summary for the PM.
- Prioritization: The prioritization agent scores stories against the current sprint's stated objectives using defined criteria (user impact, business value, confidence, effort), then produces a ranked list of sprint candidates.
- Story breakdown: The breakdown agent takes large stories (above a defined point or complexity threshold) and proposes decomposition into smaller, independently deliverable stories that fit within one sprint.
- Estimation support: The estimation agent generates a first-pass effort estimate for stories without estimates, based on similar completed stories in the backlog history and story description analysis.
- Sprint planning: The planning agent proposes a sprint plan by combining the prioritized stories with team capacity. The PM and engineering lead review, adjust, and commit. The final sprint scope is written back to the backlog tool.
- Standup automation: The standup agent aggregates async status updates from the team, identifies blockers and risks, and generates a daily digest for the PM and team — reducing or eliminating synchronous standup time.
- Sprint review preparation: The review prep agent pulls completed stories, summarizes what was delivered, prepares demo notes, and identifies stories that were not completed with reasons.
- Retrospective synthesis: After the retrospective, the retrospective agent processes team feedback inputs, groups themes, and generates a structured action item list with owners assigned from the discussion.
Copy-ready workflow template#
Sprint cadence: [1-WEEK / 2-WEEK / 4-WEEK]
Sprint start day: [DAY OF WEEK]
Team size: [NUMBER OF DEVELOPERS + PM + DESIGN]
Sprint capacity: [STORY POINTS OR HOURS per sprint]
Backlog tool: [JIRA / LINEAR / SHORTCUT / GITHUB ISSUES / OTHER]
Communication tool: [SLACK / TEAMS / EMAIL]
--- STEP 1: BACKLOG REVIEW ---
Schedule: 3 days before sprint planning
Agent task: Review all backlog items with status = READY or REFINED
For each story, check:
- Has title and description: YES / NO
- Has acceptance criteria: YES / NO
- Has story point estimate: YES / NO
- Dependencies identified: YES / NO
- Design assets linked (if UI work): YES / NO
Flag stories missing 2+ fields as NEEDS_GROOMING
Output:
- groomed_stories[]: stories meeting all criteria
- needs_grooming[]: stories with field gaps, with specific gaps listed
- grooming_summary: count of stories in each state
Route to: [PM NAME] via [CHANNEL] for grooming review
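The field checks in Step 1 can be sketched as a single pass over the backlog. This is a minimal sketch, assuming an in-memory `Story` record — the field names are hypothetical stand-ins for the checklist above, not any specific backlog tool's schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Story:
    # Hypothetical record; fields mirror the grooming checklist,
    # not a real Jira/Linear schema.
    title: str
    description: str = ""
    acceptance_criteria: List[str] = field(default_factory=list)
    points: Optional[int] = None
    dependencies_identified: bool = False
    is_ui_work: bool = False
    design_assets: List[str] = field(default_factory=list)

def grooming_gaps(story: Story) -> List[str]:
    """Return the checklist fields this story is missing."""
    gaps = []
    if not (story.title and story.description):
        gaps.append("title_and_description")
    if not story.acceptance_criteria:
        gaps.append("acceptance_criteria")
    if story.points is None:
        gaps.append("story_points")
    if not story.dependencies_identified:
        gaps.append("dependencies")
    if story.is_ui_work and not story.design_assets:
        gaps.append("design_assets")
    return gaps

def review_backlog(stories: List[Story]) -> dict:
    groomed, needs_grooming = [], []
    for s in stories:
        gaps = grooming_gaps(s)
        if gaps:
            # Stories missing 2+ fields get the NEEDS_GROOMING flag.
            needs_grooming.append(
                {"story": s.title, "gaps": gaps, "flagged": len(gaps) >= 2})
        else:
            groomed.append(s.title)
    return {
        "groomed_stories": groomed,
        "needs_grooming": needs_grooming,
        "grooming_summary": {"groomed": len(groomed),
                             "needs_grooming": len(needs_grooming)},
    }
```

In practice the `Story` fields would be mapped from your backlog tool's API response before this check runs.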
--- STEP 2: PRIORITIZATION ---
Schedule: 2 days before sprint planning
Input: groomed_stories[] from Step 1
Sprint objective: [SPRINT GOAL OR OKR THIS SPRINT CONTRIBUTES TO]
Score each story on:
- User impact (1-5): how directly this addresses a user problem
- Business alignment (1-5): alignment with sprint objective
- Confidence (1-5): clarity of requirements and solution
- Effort (inverted 1-5): 5 = lowest effort relative to team capacity
Priority score = user_impact + business_alignment + confidence + effort_inverse
Output:
- prioritized_backlog[]: stories ranked by priority score, highest first
- sprint_candidates[]: top stories that fit within sprint capacity
- deferred_stories[]: stories that scored below the cut line
- prioritization_notes: flags for stories with tied scores or special cases
Route to: [PM NAME] + [ENGINEERING LEAD] for review
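The scoring formula and capacity cut in Step 2 reduce to a sort plus a greedy fill. A minimal sketch, assuming stories arrive as dicts carrying the four 1-5 criteria and a point size:

```python
from typing import Dict, List

def priority_score(story: Dict) -> int:
    """Sum the four 1-5 criteria. effort_inverse is already inverted
    (5 = lowest effort), so totals range from 4 to 20."""
    total = 0
    for key in ("user_impact", "business_alignment", "confidence", "effort_inverse"):
        value = story[key]
        if not 1 <= value <= 5:
            raise ValueError(f"{key} must be 1-5, got {value}")
        total += value
    return total

def rank_candidates(stories: List[Dict], capacity: int) -> Dict:
    """Rank by score, then greedily admit stories until capacity is spent."""
    ranked = sorted(stories, key=priority_score, reverse=True)
    candidates, deferred, used = [], [], 0
    for s in ranked:
        if used + s["points"] <= capacity:
            candidates.append(s["title"])
            used += s["points"]
        else:
            deferred.append(s["title"])
    return {
        "prioritized_backlog": [s["title"] for s in ranked],
        "sprint_candidates": candidates,
        "deferred_stories": deferred,
    }
```

The greedy fill is deliberately simple: ties and near-misses surface in `prioritization_notes` for the humans to resolve, per the decision-node guidance later in this page.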
--- STEP 3: STORY BREAKDOWN ---
Schedule: 2 days before sprint planning
Trigger: any story in sprint_candidates[] with size > [POINT THRESHOLD] or
complexity indicator = LARGE or XL
For each oversized story:
- Identify natural decomposition points in the acceptance criteria
- Propose 2-4 sub-stories, each independently deliverable
- Each sub-story: title, description, acceptance criteria (1-3), size estimate
- Flag any sub-story that still exceeds [POINT THRESHOLD] for engineering review
Output: breakdown_proposals[] — one per oversized story
Route to: [PM NAME] + [ENGINEERING LEAD] for approval before adding to sprint candidates
--- STEP 4: ESTIMATION SUPPORT ---
Schedule: 2 days before sprint planning
Trigger: any story in sprint_candidates[] with no story point estimate
For each unestimated story:
- Search completed sprint history for similar stories by:
- Keyword similarity in title and description
- Story type (feature / bug / tech debt / research)
- Affected component or service
- Return: 3 most similar historical stories with their actual points
- Generate first-pass estimate with rationale
- Flag estimate confidence: HIGH / MEDIUM / LOW
Output: estimation_suggestions[] with historical references
Route to: engineering team for review and override before sprint planning
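The historical matching in Step 4 can be approximated with keyword overlap plus flat boosts for matching story type and component. This is a sketch of the ranking and median logic only — a production matcher would likely use embeddings, and the boost weights (0.2) and confidence thresholds here are illustrative assumptions:

```python
import statistics
from typing import Dict, List

def _tokens(text: str) -> set:
    return set(text.lower().split())

def similarity(story: Dict, past: Dict) -> float:
    """Jaccard keyword overlap on title+description, boosted for a matching
    story type or component. Values can exceed 1.0; used only for ranking."""
    a = _tokens(story["title"] + " " + story["description"])
    b = _tokens(past["title"] + " " + past["description"])
    score = len(a & b) / len(a | b) if a | b else 0.0
    if past.get("type") == story.get("type"):
        score += 0.2
    if past.get("component") == story.get("component"):
        score += 0.2
    return score

def suggest_estimate(story: Dict, history: List[Dict], top_n: int = 3) -> Dict:
    scored = sorted(history, key=lambda p: similarity(story, p), reverse=True)
    top = scored[:top_n]
    if not top:
        return {"estimate": None, "confidence": "LOW", "references": []}
    best = similarity(story, top[0])
    confidence = "HIGH" if best >= 0.6 else "MEDIUM" if best >= 0.3 else "LOW"
    return {
        # First-pass estimate: median of the closest matches' actual points.
        "estimate": statistics.median(p["points"] for p in top),
        "confidence": confidence,
        "references": [(p["title"], p["points"]) for p in top],
    }
```

The returned references are what engineering reviews in the override step — the estimate is never committed without them.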
--- STEP 5: SPRINT PLANNING ---
Schedule: Sprint planning session
Input: prioritized stories, team capacity, estimates, breakdown proposals
Pre-planning prep (automated):
- Generate sprint planning agenda from template
- Prepare story summary sheet: title, priority score, estimate, acceptance criteria count
- Flag stories with unresolved dependencies
- Calculate: if team commits to stories 1-N, remaining capacity = [X] points
Human decision (PM + engineering lead):
- Review story list, adjust priorities based on new information
- Discuss risks and dependencies
- Confirm or adjust estimates for complex stories
- Commit to sprint scope
Post-planning (automated):
- Update sprint board with committed stories: status → IN_SPRINT
- Create sprint in backlog tool: start date, end date, goal text
- Send sprint kickoff notification to team: {sprint_goal, stories_committed,
key_dependencies}
- Create standup thread or channel for sprint
Sprint commit record:
- sprint_id: [AUTO-GENERATED]
- sprint_goal: [PM-DEFINED GOAL]
- stories_committed: [LIST WITH POINTS]
- total_points: [SUM]
- capacity_utilized: [% OF TOTAL CAPACITY]
- stories_deferred: [LIST WITH REASON]
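The pre-planning capacity calculation ("if team commits to stories 1-N, remaining capacity = X") is simple running arithmetic. A minimal sketch, assuming stories are passed as (title, points) pairs in priority order:

```python
from typing import Dict, List, Tuple

def commitment_ledger(stories: List[Tuple[str, int]], capacity: int) -> List[Dict]:
    """After each story in priority order, record cumulative points and
    remaining capacity so the planning session can see where the line falls."""
    ledger, total = [], 0
    for title, points in stories:
        total += points
        ledger.append({
            "through": title,
            "committed_points": total,
            "remaining_capacity": capacity - total,
            "over_capacity": total > capacity,
        })
    return ledger
```

During planning, the PM and engineering lead read this ledger top-down and pick the commit point; the `over_capacity` flag marks where the greedy total first exceeds the sprint budget.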
--- STEP 6: STANDUP AUTOMATION ---
Schedule: Daily at [TIME], running days 1 through [SPRINT LENGTH - 1]
Collection window: [TIME] to [TIME]
Collect updates from each team member via [CHANNEL]:
Prompt sent to each person:
"Daily update for [DATE]:
1. What did you complete since last standup?
2. What are you working on today?
3. Any blockers or help needed?"
Aggregation rules:
- Compile all responses into single daily digest
- Flag any message containing blocker keywords:
[blocked, waiting, stuck, need help, dependency, delayed]
- Flag any story mentioned that is not in the current sprint
- Identify stories with no update for 2+ consecutive days
Output: daily_digest
- Team updates by person
- Blockers identified: [LIST with who reported it]
- At-risk stories: [LIST with no-update flag]
- Story progress summary: [COUNT] completed, [COUNT] in progress, [COUNT] not started
Distribute to: [PM NAME] + [ENGINEERING LEAD] + [OPTIONAL: full team]
Channel: [CHANNEL]
Escalation: if any blocker has no resolution update after [HOURS],
re-alert [PM NAME] directly
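The aggregation rules in Step 6 can be sketched as plain keyword and key matching. The `STORY-` key prefix is a hypothetical convention (substitute your tool's issue-key format), and a real implementation would pull mentions from the backlog API rather than raw text:

```python
from typing import Dict, List, Set

# Blocker keywords from the rule above.
BLOCKER_KEYWORDS = ["blocked", "waiting", "stuck", "need help", "dependency", "delayed"]

def build_digest(updates: Dict[str, str],
                 sprint_stories: Set[str],
                 no_update_days: Dict[str, int]) -> Dict:
    """updates: person -> raw update text; sprint_stories: keys committed to
    the sprint; no_update_days: story key -> days since its last mention."""
    blockers, out_of_sprint = [], []
    for person, text in updates.items():
        lowered = text.lower()
        if any(kw in lowered for kw in BLOCKER_KEYWORDS):
            blockers.append({"person": person, "update": text})
        for token in text.split():
            key = token.strip(".,!?;:")
            # Flag story keys mentioned that are not in the current sprint.
            if key.startswith("STORY-") and key not in sprint_stories:
                out_of_sprint.append({"person": person, "story": key})
    # Stories silent for 2+ consecutive days are at risk.
    at_risk = [k for k, days in no_update_days.items() if days >= 2]
    return {"blockers": blockers, "out_of_sprint": out_of_sprint, "at_risk": at_risk}
```

Keyword matching over-triggers by design — a false-positive blocker flag costs the PM a glance, while a missed blocker costs a day.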
--- STEP 7: SPRINT REVIEW PREPARATION ---
Schedule: 1 day before sprint end
Input: sprint board data — stories by status
Pull from backlog tool:
- completed_stories[]: status = DONE
- in_progress_stories[]: status = IN_PROGRESS
- not_started_stories[]: status = TODO (was committed to sprint)
For completed stories, generate:
- Delivery summary: what was built, one paragraph per major story
- Demo notes: key interactions or flows to demonstrate for each story
- Metrics impact: if acceptance criteria included measurable outcomes,
note what can be validated
For incomplete stories, generate:
- Reason classification: BLOCKED / SCOPE_EXPANDED / UNDERESTIMATED / DEPRIORITIZED
- Carry-forward recommendation: YES / NO with rationale
- Impact on sprint goal: does incompletion affect the sprint goal?
Output: review_prep_package
- Sprint goal: [ORIGINAL GOAL]
- Goal achieved: YES / PARTIAL / NO
- Delivery summary: story-by-story
- Demo run-of-show: ordered list of what to show
- Incomplete stories analysis
- Carry-forward recommendations
Route to: [PM NAME] for review before sprint review meeting
--- STEP 8: RETROSPECTIVE SYNTHESIS ---
Schedule: After retrospective meeting
Input: team feedback collected during retrospective (responses from
async form, Miro/FigJam board, or meeting notes)
Collection format (send before or during retro):
For each team member:
1. What went well this sprint? (freetext)
2. What could be improved? (freetext)
3. One specific action you would suggest for next sprint? (freetext)
Synthesis process:
- Group similar responses by theme
- Count mentions per theme (frequency signal)
- Identify highest-frequency themes in "went well" and "improve" categories
- Extract all specific action suggestions
Output: retro_synthesis
- Top 3 themes: went well
- Top 3 themes: improve
- Action items proposed: [LIST from team suggestions]
- Suggested sprint action (1-2 items to commit to next sprint)
- Parking lot: items raised but not actioned this retro
Route to: [PM NAME] for action item assignment and tracking
Add committed action items to: [BACKLOG TOOL] as tech debt or process items
with owner assigned
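The synthesis process in Step 8 — group by theme, count mentions, rank — can be sketched with a keyword map. In the workflow an AI model does the semantic grouping; the `THEME_KEYWORDS` map below is a hypothetical stand-in that only demonstrates the counting and ranking that follows it:

```python
from collections import Counter
from typing import Dict, List, Tuple

# Hypothetical keyword -> theme map, standing in for AI-driven grouping.
THEME_KEYWORDS = {
    "review": "code review turnaround",
    "deploy": "deployment friction",
    "scope": "scope changes mid-sprint",
    "pairing": "pairing and collaboration",
}

def synthesize(responses: List[Tuple[str, str]]) -> Dict:
    """responses: (category, text) pairs, category in {'went_well', 'improve'}."""
    counts = {"went_well": Counter(), "improve": Counter()}
    parking_lot = []
    for category, text in responses:
        lowered = text.lower()
        matched = False
        for keyword, theme in THEME_KEYWORDS.items():
            if keyword in lowered:
                counts[category][theme] += 1  # frequency signal per theme
                matched = True
        if not matched:
            parking_lot.append(text)  # raised but not themed this retro
    return {
        "top_went_well": [t for t, _ in counts["went_well"].most_common(3)],
        "top_improve": [t for t, _ in counts["improve"].most_common(3)],
        "parking_lot": parking_lot,
    }
```

The frequency counts are the signal the PM uses when picking the one or two committed actions — a theme mentioned by most of the team outranks a theme mentioned once.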
Step-level detail: agent roles and success criteria#
| Step | Agent Role | Key Tools | Success Criteria |
|------|------------|-----------|------------------|
| 1 — Backlog review | Backlog analyzer | Jira/Linear API | All stories assessed, grooming gaps identified |
| 2 — Prioritization | Scoring engine | Backlog API, scoring config | Ranked list produced 2 days before planning |
| 3 — Story breakdown | Story decomposer | AI model, backlog API | All oversized stories have breakdown proposals |
| 4 — Estimation | Historical matcher | Backlog history, AI model | Unestimated stories have first-pass estimates with references |
| 5 — Sprint planning | Planning coordinator | Backlog tool, communication platform | Sprint committed and board updated within 1 hour of planning session |
| 6 — Standup | Update aggregator | Slack/Teams, backlog API | Daily digest delivered before [TIME], blockers flagged same-day |
| 7 — Review prep | Delivery summarizer | Backlog API, AI model | Review package ready 24 hours before review meeting |
| 8 — Retro synthesis | Feedback analyzer | AI model, backlog tool | Action items written and assigned within 24 hours of retro |
Decision nodes and human checkpoints#
Prioritization override (Step 2). The priority scoring is a starting point for the planning conversation, not a binding commitment. PMs and engineering leads frequently override the score-based ranking based on factors the algorithm does not capture: team knowledge of a particular area, customer commitments, or emerging strategic context. The score should be visible during the planning conversation, but the human decision takes precedence.
Estimation ownership (Step 4). AI-generated estimates are reference points, not commitments. Engineering owns the estimate. If the AI estimate is significantly different from the team's read, that discrepancy is itself useful — it usually means the story description is missing something important or the similarity match was incorrect.
Retro action selection (Step 8). The retrospective synthesis produces more action suggestions than any team can address in one sprint. The PM and team should select one to two committed actions for the next sprint, not attempt to address all suggestions simultaneously.
FAQ#
Can this workflow run without API access to our backlog tool?#
A reduced version can work with structured exports (CSV or spreadsheet data) as input for the prioritization and estimation steps. However, Steps 5, 6, and 7 depend on live board data. If your backlog tool does not have an API, prioritize getting API access — most modern tools (Jira, Linear, Shortcut, GitHub Issues) support this and it significantly increases what the workflow can automate.
How do we handle the standup for a remote team across multiple time zones?#
Async standup collection works better for distributed teams than synchronous standup automation. Set the collection window to a period that covers all time zones (e.g., 6:00 AM to 11:00 AM PT for a US-EMEA team), collect written responses, and publish the digest at a consistent time each day. The PM reviews the digest at the start of their day; engineers across time zones contribute on their own schedule.
What if team members do not respond to the standup prompt?#
Build non-response handling into Step 6: if a team member has not responded within [HOURS] of the collection deadline, send a single reminder. If still no response, mark their status as NO_UPDATE in the digest and flag the absence to the PM. This creates a light accountability mechanism without requiring the PM to chase individuals manually.
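The non-response handling above is a small state check. A minimal sketch, assuming the reminder delay and the sets of who has responded and who has already been nudged are tracked by the workflow; call it once after the reminder delay elapses and again when building the digest:

```python
from datetime import datetime, timedelta
from typing import List, Set, Tuple

def standup_followups(roster: List[str],
                      responded: Set[str],
                      reminded: Set[str],
                      now: datetime,
                      deadline: datetime,
                      reminder_after: timedelta) -> Tuple[List[str], List[str]]:
    """Decide who gets the single reminder and who is marked NO_UPDATE."""
    remind, no_update = [], []
    for person in roster:
        if person in responded:
            continue
        if person in reminded:
            # Already nudged once and still silent: flag in the digest.
            no_update.append(person)
        elif now >= deadline + reminder_after:
            remind.append(person)
    return remind, no_update
```

Keeping the reminder to exactly one send is what makes this a light accountability mechanism rather than automated nagging.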
Related resources#
- Parent page: AI Agent Templates
- Related template: Product Requirements AI Prompt Template
- Related template: Product AI Agent Launch Checklist
- Cross-playbook: Task Decomposition
- Cross-playbook: How to Measure AI Agent ROI