
AI Agent E-Commerce Examples: 6 Workflows

Six practical AI agent examples for e-commerce covering product recommendation, inventory management, customer service returns, dynamic pricing, abandoned cart recovery, and review analysis. Each example includes architecture details and Python code snippets you can adapt for production.

By AI Agents Guide Team • February 28, 2026

Table of Contents

  1. Example 1: Product Recommendation Agent
  2. Example 2: Inventory Management Agent
  3. Example 3: Customer Service Returns Agent
  4. Example 4: Dynamic Pricing Agent
  5. Example 5: Abandoned Cart Recovery Agent
  6. Example 6: Product Review Analysis Agent
  7. Choosing the Right E-Commerce Agent Pattern
  8. Getting Started

AI Agent E-Commerce Examples: Real-World Automation Workflows

E-commerce operations involve hundreds of repetitive decisions every day — which products to recommend to which customers, how to price items as demand shifts, which support tickets to escalate, how much inventory to reorder, and when to reach out to a shopper who left without buying. AI agents excel at these decision-dense workflows because they can incorporate more variables than any human operator while running continuously at scale.

These six examples cover the highest-impact e-commerce AI agent patterns across the customer lifecycle. Each includes realistic Python code and architecture notes you can adapt for your own store.

For strategic context on where agents add most value in retail, see AI Agent E-Commerce Use Cases and compare implementation approaches in LangChain Agent Examples.


Example 1: Product Recommendation Agent

Use Case: Generate personalized product recommendations for each shopper based on browsing history, purchase history, cart contents, and real-time session behavior — going far beyond simple "customers also bought" rules.

Architecture: Customer profile builder + semantic vector search over product catalog + LLM reranking + explanation generator. The agent queries a vector index with a customer intent embedding, retrieves the top 20 candidates, then asks the LLM to select and rank the five best matches with personalized reasons.

Key Implementation:

from openai import OpenAI
from pinecone import Pinecone
import json

client = OpenAI()
pc = Pinecone(api_key="PINECONE_API_KEY")
product_index = pc.Index("product-catalog")

def get_customer_context(customer_id: str) -> dict:
    # Production: query your customer data platform
    return {
        "recent_views": ["hiking-boots-trail-runner", "waterproof-jacket-gore-tex"],
        "purchases": [{"product": "trail-running-socks", "date": "2026-01-15"}],
        "cart": ["trekking-poles-carbon"],
        "preferred_brands": ["Patagonia", "Merrell", "Black Diamond"],
        "price_sensitivity": "medium",
    }

def recommend_products(customer_id: str) -> list[dict]:
    ctx = get_customer_context(customer_id)
    intent_query = f"outdoor hiking trail running gear {' '.join(ctx['preferred_brands'])}"

    embedding = client.embeddings.create(
        input=intent_query, model="text-embedding-3-small"
    ).data[0].embedding

    candidates = product_index.query(
        vector=embedding, top_k=20, filter={"in_stock": True}, include_metadata=True
    )
    candidate_list = [{"id": m.id, **m.metadata} for m in candidates.matches]

    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "Select top 5 products from candidates that best match the customer profile. Return JSON: {\"recommendations\": [{\"product_id\": str, \"rank\": int, \"reason\": str}]}"},
            {"role": "user", "content": f"Customer: {json.dumps(ctx)}\nCandidates: {json.dumps(candidate_list)}"}
        ]
    )
    return json.loads(response.choices[0].message.content)["recommendations"]

recs = recommend_products("customer_847392")
for r in recs:
    print(f"#{r['rank']}: {r['product_id']} — {r['reason']}")

Outcome: Personalized recommendations with human-readable explanations that can be surfaced in the UI ("Because you viewed trail running gear..."). Typical lift over rule-based systems: 15–35% improvement in recommendation click-through rate.


Example 2: Inventory Management Agent

Use Case: Monitor inventory levels across SKUs, predict stockout risk based on sales velocity and supplier lead times, trigger purchase orders at optimal timing, and flag slow-moving inventory for markdowns.

Architecture: Scheduled daily agent + pandas inventory snapshot + LLM decision maker + PO generator. The agent evaluates each SKU against its lead time and safety buffer, then produces a prioritized action list rather than just raw metrics.

Key Implementation:

from anthropic import Anthropic
import pandas as pd
from datetime import datetime, timedelta

client = Anthropic()

def get_inventory_snapshot() -> pd.DataFrame:
    # Production: query your inventory management system
    data = [
        {"sku": "SHOE-TRAIL-X-10", "units": 8, "avg_daily_sales": 2.3, "lead_time_days": 14, "reorder_qty": 50},
        {"sku": "JACKET-RAIN-M-L", "units": 145, "avg_daily_sales": 0.8, "lead_time_days": 21, "reorder_qty": 30},
        {"sku": "POLES-CARBON-PR", "units": 23, "avg_daily_sales": 1.1, "lead_time_days": 18, "reorder_qty": 25},
    ]
    df = pd.DataFrame(data)
    df["days_of_supply"] = (df["units"] / df["avg_daily_sales"]).round(1)
    df["reorder_needed"] = df["days_of_supply"] < (df["lead_time_days"] + 7)
    return df

def generate_inventory_actions(df: pd.DataFrame) -> str:
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=1200,
        messages=[{"role": "user", "content": f"""
Analyze this inventory snapshot and output a prioritized action list.
For each SKU recommend: REORDER (qty + urgency), MARKDOWN (discount % for >90 days supply),
EXPEDITE (critical <lead_time days supply), or MONITOR (no action, brief reason).

Data: {df.to_dict("records")}
Today: {datetime.now().strftime("%Y-%m-%d")}
"""}]
    )
    return response.content[0].text

inventory = get_inventory_snapshot()
print(generate_inventory_actions(inventory))

# Auto-flag critical items
critical = inventory[inventory["days_of_supply"] < inventory["lead_time_days"]]
for _, row in critical.iterrows():
    print(f"CRITICAL: {row['sku']} — only {row['days_of_supply']} days supply vs {row['lead_time_days']} day lead time")

Outcome: Stockout prevention through proactive reordering and automated markdown identification for slow movers. Typical result: 15–25% reduction in stockout events and improved inventory turnover. The LLM action list gives buyers context for each decision rather than just a raw threshold breach alert.


Example 3: Customer Service Returns Agent

Use Case: Handle return and refund requests autonomously — verifying eligibility, generating return labels, processing refunds — with escalation to a human agent for complex cases or repeated contact.

Architecture: OpenAI Agents SDK with tool calling + order lookup tool + return initiation tool + refund processing tool + human handoff. The agent verifies order ownership before taking any action and enforces return policy programmatically.

Key Implementation:

import asyncio
from agents import Agent, Runner, function_tool, handoff

@function_tool
def lookup_order(order_id: str, customer_email: str) -> dict:
    """Look up order status and delivery date for a verified customer."""
    # Production: query your OMS with identity verification
    return {
        "order_id": order_id, "status": "delivered",
        "delivery_date": "2026-02-10", "days_since_delivery": 18,
        "items": [{"sku": "SHOE-TRAIL-X-10", "qty": 1, "price": 149.99, "returnable": True}]
    }

@function_tool
def initiate_return(order_id: str, item_sku: str, reason: str) -> dict:
    """Initiate a return for an eligible item within the 30-day policy window."""
    return {
        "return_id": "RTN-84729",
        "label_url": "https://returns.example.com/label/RTN-84729",
        "refund_timeline": "3-5 business days after we receive the item"
    }

# Stand-in for a live-agent queue; in production this handoff would route
# to your human support system rather than another LLM agent.
human_agent = Agent(
    name="Human Support Specialist", model="gpt-4o",
    instructions="Handle escalated cases requiring human judgment."
)

returns_agent = Agent(
    name="Returns Agent", model="gpt-4o",
    instructions="""Handle return and refund requests. Verify order ownership first.
    Return policy: 30 days from delivery, unworn items in original packaging.
    Escalate after 2 failed resolution attempts or customer dissatisfaction.""",
    tools=[lookup_order, initiate_return],
    handoffs=[handoff(human_agent, tool_description_override="Escalate complex or unresolved cases")]
)

async def handle_return_request(message: str) -> str:
    result = await Runner.run(returns_agent, input=message)
    return result.final_output

response = asyncio.run(handle_return_request(
    "Order #ORD-84729 (jane@example.com) — the shoes don't fit, I want to return them."
))
print(response)

Outcome: 70–80% of return requests resolved autonomously with proper policy enforcement. Human agents handle only edge cases. Average resolution time for auto-handled returns: under 60 seconds end to end.
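The agent's instructions state the 30-day policy, but the architecture also calls for enforcing it programmatically. A sketch of a deterministic gate the `initiate_return` tool could call before generating a label (the `window_days` default mirrors the policy in the agent's instructions; the function name is illustrative):

```python
def is_return_eligible(days_since_delivery: int, returnable: bool,
                       window_days: int = 30) -> bool:
    """Deterministic policy gate: the LLM decides whether to *attempt* a
    return, but code decides whether the item is actually eligible."""
    return returnable and days_since_delivery <= window_days
```

Putting the check in the tool rather than the prompt means a jailbroken or confused agent still cannot issue an out-of-policy label.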



Example 4: Dynamic Pricing Agent

Use Case: Optimize product prices in real time based on competitor pricing, inventory levels, demand signals, and estimated price elasticity — while staying within business policy guardrails and routing large changes to human approval.

Architecture: Data aggregation layer (competitor prices, inventory, demand) + LLM decision agent + hard policy guardrails in code + structured output for approval workflow integration.

Key Implementation:

from anthropic import Anthropic
from pydantic import BaseModel
import json

client = Anthropic()

class PricingDecision(BaseModel):
    recommended_price: float
    price_change_pct: float
    confidence: str  # high / medium / low
    rationale: str
    requires_approval: bool

def get_pricing_context(product_id: str) -> dict:
    # Production: parallel calls to competitor scraper, inventory API, demand model
    return {
        "product_name": "Trail Running Shoes Model X",
        "current_price": 149.99, "cost": 58.00,
        "min_price": 89.99, "max_price": 199.99,
        "inventory_days_of_supply": 22,
        "conversion_rate": 0.035,
        "competitor_prices": {"CompetitorA": 159.99, "CompetitorB": 144.99},
        "price_elasticity": -1.8,
        "seasonality_index": 1.3,
    }

def make_pricing_decision(product_id: str) -> PricingDecision:
    ctx = get_pricing_context(product_id)
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=500,
        messages=[{"role": "user", "content": f"""
Recommend a price adjustment. Stay within [min_price, max_price].
Flag requires_approval=true if change >15%.

Strategy: maximize revenue. Strong demand (CR>0.04) + low stock → consider increase.
Significant competitor undercut → consider matching within 5%.

Data: {json.dumps(ctx)}
Return JSON: {{"recommended_price": float, "confidence": str, "rationale": str, "requires_approval": bool}}
"""}]
    )
    # Claude may wrap the JSON in prose; extract the outermost object
    text = response.content[0].text
    data = json.loads(text[text.find("{"):text.rfind("}") + 1])
    change_pct = (data["recommended_price"] - ctx["current_price"]) / ctx["current_price"] * 100
    data["requires_approval"] = data["requires_approval"] or abs(change_pct) > 15
    return PricingDecision(price_change_pct=round(change_pct, 2), **data)

decision = make_pricing_decision("trail-runner-x")
print(f"${decision.recommended_price:.2f} ({decision.price_change_pct:+.1f}%) — {decision.rationale}")
print(f"Needs approval: {decision.requires_approval}")

Outcome: Data-driven pricing with explicit rationale and automatic escalation for large changes. Structured output integrates directly with your existing approval workflow. Expected revenue improvement: 3–8% through better price positioning relative to competitors and demand signals.
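The architecture lists "hard policy guardrails in code", which the snippet above only partially shows (the approval flag). A sketch of a deterministic clamp applied to the model's output before any price goes live; `min_margin_pct` is an illustrative policy knob, not taken from the example data:

```python
def enforce_price_guardrails(price: float, min_price: float, max_price: float,
                             cost: float, min_margin_pct: float = 20.0) -> float:
    """Clamp an LLM-recommended price to hard business limits. The model's
    rationale is advisory; this function is the last word before a price
    change is published."""
    # Never price below cost plus the minimum margin, whatever the model says
    floor = max(min_price, cost * (1 + min_margin_pct / 100))
    return round(min(max(price, floor), max_price), 2)
```

In the pipeline above this would run on `decision.recommended_price` before writing to the catalog, so prompt errors or model drift can never push a price outside policy.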


Example 5: Abandoned Cart Recovery Agent

Use Case: Identify why a customer abandoned their cart — price concern, shipping cost, product uncertainty — and generate a personalized recovery message that addresses that specific friction rather than sending a generic discount blast.

Architecture: Cart abandonment event trigger + customer context enrichment + friction inference agent + personalized message generator + send-time optimizer. The agent diagnoses the most likely abandonment reason before drafting the outreach.

Key Implementation:

from anthropic import Anthropic
from pydantic import BaseModel
from typing import Literal

client = Anthropic()

class CartRecoveryMessage(BaseModel):
    primary_friction: Literal["price", "shipping_cost", "product_uncertainty", "distraction", "comparison_shopping"]
    subject_line: str
    message_body: str
    offer: str | None  # e.g. "Free shipping" or "10% off" — None if no offer needed
    send_delay_hours: int  # How many hours after abandonment to send

def generate_recovery_message(cart_event: dict) -> CartRecoveryMessage:
    import json
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=800,
        messages=[{"role": "user", "content": f"""
Analyze this cart abandonment and generate a personalized recovery message.

Cart event: {json.dumps(cart_event)}

Instructions:
- Infer the most likely abandonment reason from customer and cart signals
- Write a subject line that addresses that friction specifically
- Keep message body under 60 words, conversational, not salesy
- Only include an offer if price/shipping is the inferred friction
- Never offer a discount to customers who have a history of abandoning just to get discounts
- Recommended send delay: 1h for impulse items, 24h for considered purchases

Return structured JSON matching the schema.
"""}]
    )
    # Extract the JSON object from the model's reply (it may include prose)
    text = response.content[0].text
    start, end = text.find("{"), text.rfind("}") + 1
    return CartRecoveryMessage(**json.loads(text[start:end]))

cart_event = {
    "customer_id": "cust_4821",
    "cart_value": 189.98,
    "items": [{"sku": "BOOT-HIKE-W-8", "price": 189.98, "reviews": 4.6, "review_count": 312}],
    "shipping_cost_shown": 14.99,
    "customer_ltv": 420,
    "previous_purchases": 3,
    "time_on_product_page_seconds": 142,
    "viewed_competitor_in_session": True,
}

msg = generate_recovery_message(cart_event)
print(f"Friction: {msg.primary_friction}")
print(f"Subject: {msg.subject_line}")
print(f"Send after: {msg.send_delay_hours}h")
print(f"Offer: {msg.offer or 'None'}")
print(f"\n{msg.message_body}")

Outcome: Recovery messages that address the actual reason for abandonment rather than blasting everyone with a 10% discount. This approach improves recovery rates while protecting margin — discount offers are only generated when the agent infers price or shipping friction.
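The send-time optimizer from the architecture reduces, in this sketch, to turning `send_delay_hours` into a dispatch timestamp; `enqueue_email` is a hypothetical stand-in for your email provider's scheduling API:

```python
from datetime import datetime, timedelta

def schedule_send(msg, abandoned_at: datetime) -> datetime:
    """Compute the dispatch time for a recovery message generated above."""
    send_at = abandoned_at + timedelta(hours=msg.send_delay_hours)
    # enqueue_email(subject=msg.subject_line, body=msg.message_body,
    #               send_at=send_at)  # hypothetical ESP scheduling call
    return send_at
```

Keeping the delay decision in the LLM output but the scheduling in plain code means you can cap or override delays centrally without touching the prompt.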


Example 6: Product Review Analysis Agent

Use Case: Process product reviews in batch to extract themes, identify recurring product defects, prioritize improvements for the product team, and generate on-brand responses to negative reviews — all without a human reading each review individually.

Architecture: Review batch loader + LLM theme extraction + issue prioritization agent + response generator. A lightweight model handles classification and a more capable model generates responses that require voice fidelity.

Key Implementation:

from openai import OpenAI
from pydantic import BaseModel
from typing import List
import json

client = OpenAI()

class ReviewInsights(BaseModel):
    overall_sentiment: str
    top_positive_themes: List[str]
    top_negative_themes: List[str]
    product_issues: List[dict]  # [{issue, frequency, severity}]
    improvement_priorities: List[str]

def analyze_reviews(reviews: list[dict]) -> ReviewInsights:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "Extract structured review insights. Return JSON with keys: overall_sentiment, top_positive_themes (list), top_negative_themes (list), product_issues (list of {issue, frequency, severity}), improvement_priorities (ordered list)."},
            {"role": "user", "content": f"Analyze {len(reviews)} reviews: {json.dumps(reviews)}"}
        ]
    )
    return ReviewInsights(**json.loads(response.choices[0].message.content))

def respond_to_review(review: dict) -> str:
    tone_guide = "Thank warmly, reinforce what they loved." if review["rating"] >= 4 else \
        "Acknowledge issue empathetically, apologize, offer private resolution path. Under 80 words. Human, not corporate."
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Write a brand response. Rating: {review['rating']}/5. Review: {review['text']}\nInstructions: {tone_guide}"}]
    )
    return response.choices[0].message.content

sample_reviews = [
    {"rating": 5, "text": "Best trail shoes I've owned. Incredible grip, comfortable right out of the box."},
    {"rating": 2, "text": "Sole separated after 3 months of moderate use. Disappointed at this price."},
    {"rating": 4, "text": "Great grip but runs small — size up half a size."},
    {"rating": 1, "text": "Fell apart on first real hike. Stitching came undone at the toe box."},
]

insights = analyze_reviews(sample_reviews)
print(f"Top issues: {insights.top_negative_themes}")
print(f"Priorities: {insights.improvement_priorities}")

for review in [r for r in sample_reviews if r["rating"] <= 3]:
    print(f"\nResponse to {review['rating']}-star review:")
    print(respond_to_review(review))

Outcome: Product teams receive structured defect reports and improvement priorities without reading thousands of reviews manually. Marketing has brand-consistent responses ready for negative reviews within hours of posting. The combination accelerates both product improvement cycles and reputation management.
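The sample above fits in a single call; for the thousands of reviews mentioned here you would chunk the input so each call stays within context limits. A sketch, where the batch size of 50 is an assumption to tune for your review lengths:

```python
from typing import Callable, Iterator

def chunked(items: list, size: int = 50) -> Iterator[list]:
    """Yield fixed-size batches so each LLM call stays within context limits."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def analyze_in_batches(reviews: list[dict], analyzer: Callable,
                       batch_size: int = 50) -> list:
    """Apply a per-batch analyzer (e.g. analyze_reviews above) to each chunk.
    Merging the per-batch insights into one report is a follow-up
    summarization step."""
    return [analyzer(batch) for batch in chunked(reviews, batch_size)]
```

Passing the analyzer in as a callable keeps the batching logic independent of the model vendor and easy to test.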


Choosing the Right E-Commerce Agent Pattern

The six examples above span the customer lifecycle: discovery (recommendations), conversion (pricing, abandoned cart), service (returns), operations (inventory), and feedback (reviews). Start with the returns agent (Example 3) because it has the fastest measurable ROI — every auto-resolved ticket saves direct labor cost with no infrastructure investment beyond your existing OMS API.

Add recommendation personalization (Example 1) next for revenue impact. Pricing and inventory agents (Examples 2, 4) require more data infrastructure but deliver significant margin improvements at scale once your data pipelines are in place.

Getting Started

The AI agent tutorial for e-commerce walks through setting up the returns agent end to end. For recommendation systems, start with a vector database and embed your product catalog before adding the LLM reranking layer. The OpenAI Agents SDK examples cover the handoff pattern used in Example 3 in more depth.
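For the "embed your product catalog" step, here is a minimal batch-upsert sketch using the same OpenAI and Pinecone clients as Example 1; the `product_text` flattening and the metadata fields are assumptions you would adapt to your own schema:

```python
def product_text(p: dict) -> str:
    """Flatten a product record into the text that gets embedded."""
    return f"{p['name']}. {p.get('description', '')}".strip()

def embed_catalog(client, index, products: list[dict],
                  batch_size: int = 100) -> None:
    """client: an openai.OpenAI instance; index: a Pinecone index handle.
    Embeds products in batches and upserts them with filterable metadata
    (the in_stock field is what Example 1's query filter relies on)."""
    for i in range(0, len(products), batch_size):
        batch = products[i:i + batch_size]
        resp = client.embeddings.create(
            input=[product_text(p) for p in batch],
            model="text-embedding-3-small",
        )
        index.upsert(vectors=[
            {"id": p["sku"], "values": d.embedding,
             "metadata": {"name": p["name"], "in_stock": p.get("in_stock", True)}}
            for p, d in zip(batch, resp.data)
        ])
```

Run this whenever the catalog changes (or nightly); the recommendation agent then only ever queries the index at request time.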

For multi-step workflows that span several of these patterns, review AI Agent Workflow Examples for orchestration patterns that connect recommendation, pricing, and inventory signals into a unified decision loop.

