## Workflow objective
This blueprint automates the repeatable, data-intensive steps of month-end financial reporting: pulling actuals from the ERP, calculating budget vs. actual variances, identifying material variances, generating a narrative explaining the results, and routing the draft package for FP&A review before board or leadership distribution.
The workflow keeps humans in control of the two things that require judgment: reviewing the narrative for accuracy and context before it goes to leadership, and approving any commentary that will appear in board-level packages.
## Preconditions
- ERP system with an API or structured data export (NetSuite, SAP, QuickBooks, Xero, or equivalent).
- Budget data stored in a format the agent can read (ERP budget module, Excel with consistent structure, or Airtable/Google Sheets).
- Chart of accounts documented with materiality levels defined per account or account group.
- Prior period reports available for trend comparison.
- Distribution list defined and approved for each report type (internal vs. board-level).
- FP&A reviewer designated for the month-end approval gate.
## Workflow steps
- Month-end close trigger: Workflow initiates when the close is marked complete in the ERP, or on a scheduled date/time trigger (e.g., the third business day after month-end).
- Actuals data pull: Data agent queries the ERP for the period's actuals across all relevant account groups — revenue, cost of goods sold, operating expenses, headcount costs, and capital expenditures.
- Budget vs. actual calculation: Analysis agent calculates variance (amount and percentage) for each account line against the current-period budget and the prior period actuals. Applies materiality threshold to identify significant variances.
- Material variance identification: Classification agent reviews the variance list and tags each material variance with a preliminary root cause category: volume variance, price/rate variance, timing difference, forecast error, or one-time item.
- AI narrative generation: Narrative agent generates a draft management commentary section explaining: overall performance vs. budget, the top 3-5 material variances with context, and any notable trends vs. prior period.
- FP&A review checkpoint: Draft variance analysis and AI-generated narrative are routed to the FP&A reviewer via email and Slack. Reviewer edits, approves, or requests regeneration with additional context.
- Board or leadership distribution: Approved report package is formatted for the designated distribution channel — internal finance meeting deck, board reporting package, or executive dashboard — and delivered to the distribution list.
- Archiving and audit trail: Final approved report, the underlying data snapshot, and the AI generation log are archived together in a version-controlled location for audit access.
## Copy-ready workflow template
Trigger: Month-end close flag set to TRUE in ERP, OR scheduled trigger on [DATE/TIME].
Step 1: Validate close readiness.
- Check: all sub-ledgers posted (AR, AP, payroll, fixed assets)
- Check: bank reconciliation status = COMPLETE for all accounts
- If validation fails: notify accounting team → pause workflow
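The Step 1 gate can be sketched as a pure check over system flags. This is a minimal sketch: the function name, the `POSTED`/`COMPLETE` status strings, and the input dictionaries are illustrative assumptions, not a real ERP API.

```python
# Illustrative close-readiness check. Status strings and sub-ledger
# names are assumptions; map them to your ERP's actual flags.
SUBLEDGERS = ["AR", "AP", "PAYROLL", "FIXED_ASSETS"]

def validate_close_readiness(subledger_status: dict, bank_rec_status: dict):
    """Return (ready, issues). Inputs map names to status strings."""
    issues = []
    for sl in SUBLEDGERS:
        if subledger_status.get(sl) != "POSTED":
            issues.append(f"sub-ledger not posted: {sl}")
    for account, status in bank_rec_status.items():
        if status != "COMPLETE":
            issues.append(f"bank reconciliation incomplete: {account}")
    return (len(issues) == 0, issues)
```

If `ready` is false, the workflow pauses and the `issues` list goes to the accounting team notification.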
Step 2: Pull actuals.
- Query ERP for period: [MONTH/YEAR]
- Data required per account:
{gl_code, account_name, category, period_actuals, ytd_actuals}
- Output: actuals_dataset[]
Step 3: Pull budget and prior period.
- Query budget source for same account list
- Query ERP for prior period actuals (same period last year + prior month)
- Output: budget_dataset[], prior_period_dataset[]
Step 4: Calculate variances.
- For each account:
period_var_amount = actuals - budget
period_var_pct = (actuals - budget) / abs(budget) * 100
prior_period_var = actuals - prior_period_actuals
- Apply materiality filter:
material = abs(period_var_amount) > [THRESHOLD] OR abs(period_var_pct) > [%]
- Output: variance_table[] with material flag
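The Step 4 formulas can be made concrete as follows. This sketch follows the template's OR materiality logic; the default thresholds are placeholders, and it adds one guard the pseudocode leaves implicit: a zero-budget line has no defined percentage variance.

```python
def variance_row(gl_code, actuals, budget, prior,
                 abs_threshold=25_000, pct_threshold=5.0):
    """Compute one row of variance_table[]. Thresholds are illustrative."""
    var_amount = actuals - budget
    # Guard against a zero budget line, where a percentage is undefined.
    var_pct = (var_amount / abs(budget) * 100) if budget else None
    material = (abs(var_amount) > abs_threshold
                or (var_pct is not None and abs(var_pct) > pct_threshold))
    return {"gl_code": gl_code, "var_amount": var_amount,
            "var_pct": var_pct, "prior_var": actuals - prior,
            "material": material}
```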
Step 5: Classify material variances.
- For each material variance, classify as:
VOLUME_VARIANCE | PRICE_RATE_VARIANCE | TIMING_DIFFERENCE |
FORECAST_ERROR | ONE_TIME_ITEM | NEEDS_INVESTIGATION
- Classification basis: transaction detail, GL description fields,
historical patterns
- Output: classified_variances[]
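Because the classification comes from an AI model, it pays to coerce the model's free-text label onto the fixed taxonomy rather than trust it verbatim. A sketch, assuming the six categories above; the normalization rules are illustrative:

```python
from enum import Enum

class VarianceClass(Enum):
    VOLUME_VARIANCE = "VOLUME_VARIANCE"
    PRICE_RATE_VARIANCE = "PRICE_RATE_VARIANCE"
    TIMING_DIFFERENCE = "TIMING_DIFFERENCE"
    FORECAST_ERROR = "FORECAST_ERROR"
    ONE_TIME_ITEM = "ONE_TIME_ITEM"
    NEEDS_INVESTIGATION = "NEEDS_INVESTIGATION"

def coerce_classification(model_output: str) -> VarianceClass:
    """Map a free-text model label onto the fixed taxonomy, falling
    back to NEEDS_INVESTIGATION for anything unrecognized."""
    label = model_output.strip().upper().replace(" ", "_").replace("/", "_")
    try:
        return VarianceClass(label)
    except ValueError:
        return VarianceClass.NEEDS_INVESTIGATION
```

The fallback matters: an unrecognized label should route a variance to human investigation, not silently drop it from the report.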
Step 6: Generate narrative.
- Input: variance_table, classified_variances, prior_period_dataset,
company_context (from knowledge base)
- Output narrative sections:
- Executive summary: 2-3 sentences on overall performance
- Revenue performance: [period] vs budget and prior period
- Operating expense performance: key drivers
- Material variances: top [N] explained with root cause
- Outlook note: any known items affecting next period
- Tone: [REPORTING TONE — e.g., "factual, no judgment, numbers-first"]
- Output: draft_narrative
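One way to assemble the Step 6 inputs into a generation prompt is sketched below. The function name, the prompt wording, and the variance-row keys (`gl_code`, `var_amount`, `material`) are illustrative assumptions carried over from the Step 4 sketch, not a fixed API.

```python
def build_narrative_prompt(variance_table, classified, company_context,
                           tone="factual, no judgment, numbers-first",
                           top_n=5):
    """Assemble the narrative-generation prompt. `classified` maps
    gl_code -> root cause label; section list mirrors the template."""
    material = [v for v in variance_table if v["material"]]
    top = sorted(material, key=lambda v: abs(v["var_amount"]), reverse=True)[:top_n]
    lines = [
        f"Write management commentary in a {tone} tone.",
        "Sections: executive summary; revenue performance; "
        "operating expense performance; material variances; outlook note.",
        f"Company context: {company_context}",
        "Top material variances (amount vs budget, classification):",
    ]
    for v in top:
        label = classified.get(v["gl_code"], "NEEDS_INVESTIGATION")
        lines.append(f"- {v['gl_code']}: {v['var_amount']:+,} ({label})")
    return "\n".join(lines)
```

Ranking by absolute dollar variance keeps the "top [N]" selection deterministic rather than leaving it to the model.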
Step 7: Approval gate.
- Package: variance_table + classified_variances + draft_narrative
- Route to: [FP&A REVIEWER NAME] via Slack + email
- Options: APPROVE / REVISE (with notes) / REQUEST_REGENERATION
- If REVISE: append reviewer notes → return to Step 6 with context
- Timeout: [HOURS] → escalate to [BACKUP REVIEWER]
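The Step 7 gate reduces to a small state decision: which action to take given the reviewer's response (or lack of one). A sketch, assuming the decision strings above; the timeout default and escalation target are placeholders.

```python
from datetime import datetime, timedelta

def route_review(sent_at, now, decision=None, timeout_hours=24):
    """Resolve the approval gate. `decision` is None while waiting."""
    if decision == "APPROVE":
        return "DISTRIBUTE"
    if decision in ("REVISE", "REQUEST_REGENERATION"):
        return "REGENERATE"          # back to Step 6, notes appended
    if now - sent_at > timedelta(hours=timeout_hours):
        return "ESCALATE_TO_BACKUP"  # notify [BACKUP REVIEWER]
    return "WAIT"
```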
Step 8: Format and distribute.
- If distribution = "internal": format as Google Slides template and
share with [INTERNAL DISTRIBUTION LIST]
- If distribution = "board": format as board package template and
share with [BOARD DISTRIBUTION LIST]
- If distribution = "dashboard": update [DASHBOARD TOOL] data source
Step 9: Archive.
- Save to: [ARCHIVE LOCATION] / [YEAR] / [MONTH] /
- report_final_[PERIOD].pdf
- actuals_data_snapshot_[PERIOD].csv
- ai_generation_log_[PERIOD].json
- Set retention policy: [YEARS per audit requirements]
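The Step 9 naming convention can be generated rather than hand-typed, which keeps the archive layout consistent month over month. A sketch, assuming a `YYYY-MM` period string; the root is the `[ARCHIVE LOCATION]` placeholder.

```python
from pathlib import Path

def archive_paths(root, period):
    """Build the three archive paths for a period like '2024-10'."""
    year, month = period.split("-")
    base = Path(root) / year / month
    return {
        "report": base / f"report_final_{period}.pdf",
        "data": base / f"actuals_data_snapshot_{period}.csv",
        "log": base / f"ai_generation_log_{period}.json",
    }
```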
## Decision nodes and escalation paths
Close readiness gate (Step 1): This gate prevents the workflow from running on incomplete data. A report generated before all sub-ledgers are posted will contain incorrect figures. The gate should check specific system flags in the ERP — not just rely on a calendar trigger.
Materiality threshold calibration (Step 4): Set two thresholds: an absolute dollar amount (e.g., $25,000) and a percentage threshold (e.g., 5%). How you combine them matters. The template's OR logic flags any line that breaches either test, which catches small accounts with large percentage swings as well as large accounts with significant dollar variances. An AND combination suppresses noise from large accounts: a $30,000 variance on a $600,000 budget (5%) meets both tests and is material, while a $30,000 variance on a $5M budget (0.6%) breaches only the dollar test and would be filtered out. Pick the combination, and calibrate the thresholds, based on how many flagged lines your reviewer can realistically investigate each month.
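The two combinations can be compared directly on the example figures. A sketch; the `combine` parameter and inclusive percentage test are illustrative choices, not part of the template.

```python
def is_material(var_amount, budget, abs_threshold=25_000, pct_threshold=5.0,
                combine="OR"):
    """Dual-threshold materiality test. The percentage test is inclusive
    so a variance exactly at the threshold still counts."""
    pct = abs(var_amount) / abs(budget) * 100
    dollar_hit = abs(var_amount) > abs_threshold
    pct_hit = pct >= pct_threshold
    return (dollar_hit or pct_hit) if combine == "OR" else (dollar_hit and pct_hit)
```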
Narrative regeneration with context (Step 6 → Step 7 loop): When a reviewer requests regeneration, the most common failure is regenerating without adding new context. Build the loop to append reviewer notes to the narrative generation prompt. Without this, the agent regenerates the same narrative, frustrating reviewers.
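The fix is mechanical: accumulate every round of reviewer notes and append them to the generation prompt. A minimal sketch; the actual model call is elided.

```python
def regenerate_prompt(base_prompt, reviewer_notes_history):
    """Append all accumulated reviewer notes so each regeneration
    sees the full context, not just the latest round."""
    prompt = base_prompt
    for i, note in enumerate(reviewer_notes_history, start=1):
        prompt += f"\n\nReviewer feedback (round {i}): {note}"
    return prompt
```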
Board vs. internal distribution (Step 8): These require different formats and different levels of detail. Board packages typically require: executive summary only, chart-heavy format, no raw variance tables. Internal finance packages typically include: full variance detail, GL-level data, methodology notes. Configure format templates separately.
## Integration points
| Step | System | Action |
|------|--------|--------|
| 2-3 — Data pull | ERP (NetSuite / SAP / Xero) | API query for actuals and budget |
| 5 — Classification | AI model | Variance root cause analysis |
| 6 — Narrative | AI model | Management commentary generation |
| 7 — Approval | Slack + Email | FP&A reviewer notification |
| 8 — Distribution | Google Slides / PowerPoint / Dashboard | Report formatting and delivery |
| 9 — Archive | Google Drive / SharePoint | Version-controlled storage |
## Governance guidance
- The AI-generated narrative should always be clearly marked as "AI-Assisted Draft — Reviewed by [REVIEWER NAME]" in the final report. This is important for audit trail integrity and for recipients to understand the provenance of the commentary.
- Maintain a log of AI generation parameters (model version, prompt version, data snapshot date) alongside each report. When narrative quality issues arise, this log enables root cause analysis.
- Review classification accuracy quarterly. Compare the agent's variance classifications against the final FP&A-reviewed classifications and calculate accuracy rate. Classification accuracy below 75% indicates the classification prompt needs refinement.
## FAQ
### How do we handle one-time items that the agent does not know about?
Add a "management context" input step before narrative generation. The FP&A team enters a brief note about known one-time items ("Q3 includes $200K severance for restructuring announced October 1") and the agent incorporates this context into the narrative. Without this step, the agent will misclassify one-time items as forecast errors.
### What if the ERP does not have an API?
Use a structured data export step: the ERP exports a standardized CSV at month-end, which is uploaded to a shared drive location the agent monitors. Less elegant than a direct API connection but functionally equivalent. Add a data validation step to check for common export issues — missing headers, date format inconsistencies, blank rows.
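The validation step can be a small parser run before the data enters Step 2. A sketch, assuming the CSV headers match the Step 2 schema; real exports may name columns differently, and date-format checks are omitted here.

```python
import csv
import io

REQUIRED_HEADERS = {"gl_code", "account_name", "category",
                    "period_actuals", "ytd_actuals"}

def validate_export(csv_text):
    """Return a list of problems found in a month-end CSV export:
    missing required headers and fully blank rows."""
    reader = csv.DictReader(io.StringIO(csv_text))
    errors = []
    missing = REQUIRED_HEADERS - set(reader.fieldnames or [])
    if missing:
        errors.append(f"missing headers: {sorted(missing)}")
    for n, row in enumerate(reader, start=2):  # line 1 is the header
        if not any((v or "").strip() for v in row.values()):
            errors.append(f"blank row at line {n}")
    return errors
```

An empty return list means the file is safe to hand to the data agent; otherwise the errors go back to the accounting team along with the paused-workflow notification.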
### Can this workflow replace the FP&A function?
No. This workflow automates the data retrieval, calculation, and first-draft narrative steps — roughly 4-8 hours of work per month for a typical finance team. The FP&A function's value in interpretation, judgment, stakeholder communication, and strategic context is not automated by this workflow. The approval gate at Step 7 is where that value is applied.
## Related resources
- Parent page: AI Agent Templates
- Related template: Finance Reconciliation Agent Prompt Template
- Related template: Finance AI Agent Compliance and Audit Checklist
- Cross-playbook: AI Agent Finance Examples
- Cross-playbook: What Are AI Agents?
- Cross-playbook: Build an AI Agent with LangChain