# Mastra: TypeScript AI Agent Framework Profile
Mastra is a TypeScript-native framework for building AI agents, workflows, and RAG pipelines. Built by the team behind Gatsby (now part of Netlify), Mastra targets the large population of JavaScript and TypeScript developers who want to build AI agents without switching to Python. The framework provides a comprehensive set of primitives — agents, workflows, memory, vector search, integrations — in a single coherent TypeScript package.
## Overview
The AI agent framework space has been dominated by Python tools: LangChain, CrewAI, LlamaIndex, Agno. This is a real barrier for the many teams whose production web applications are built on Node.js, Next.js, and TypeScript. Developers adding AI features to TypeScript applications face a choice: call Python-centric frameworks across an API boundary, or work with less mature JavaScript alternatives.
Mastra was created to fill this gap. The founding team's background in Gatsby gave them deep experience building developer frameworks that prioritize DX (developer experience), and they applied this philosophy to AI agent tooling. Mastra launched in 2024 and reached over 12,000 GitHub stars within months, validating the demand for a TypeScript-first option.
The framework is designed to work naturally within the TypeScript/Node.js ecosystem, including first-class integration with Next.js, Vercel deployment, and the broader npm ecosystem.
## Core Architecture

### Agents
Mastra agents are defined with a system prompt, tools, and model configuration:
```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { searchTool, emailTool } from "./tools";

const researchAgent = new Agent({
  name: "Research Agent",
  instructions: `You are a research assistant. Use search tools to find
current information and summarize findings clearly.`,
  model: openai("gpt-4o"),
  tools: { searchTool, emailTool },
});

// Run the agent
const response = await researchAgent.generate(
  "Summarize the latest developments in MCP protocol adoption"
);
```
Mastra supports any model available through the AI SDK (OpenAI, Anthropic, Google, Mistral, and more).
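Agents also support streaming. A minimal sketch, assuming the `researchAgent` defined above and Mastra's `stream()` method, which returns an AI SDK-style result with a `textStream` iterable:

```typescript
import { researchAgent } from "./agents"; // the agent defined above

// Stream tokens as they are generated instead of waiting for the full response
const stream = await researchAgent.stream(
  "Summarize the latest developments in MCP protocol adoption"
);
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```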
### Workflows
For multi-step, deterministic processes, Mastra provides a workflow system with typed steps:
```typescript
import { createWorkflow, createStep } from "@mastra/core/workflow";
import { z } from "zod";

const extractStep = createStep({
  id: "extract",
  inputSchema: z.object({ url: z.string() }),
  outputSchema: z.object({ content: z.string() }),
  execute: async ({ context }) => {
    // fetchAndExtract is an application-level helper (fetch + HTML cleanup)
    const content = await fetchAndExtract(context.inputData.url);
    return { content };
  },
});

const summarizeStep = createStep({
  id: "summarize",
  inputSchema: z.object({ content: z.string() }),
  outputSchema: z.object({ summary: z.string() }),
  execute: async ({ context, mastra }) => {
    const agent = mastra.getAgent("summarizerAgent");
    const result = await agent.generate(context.inputData.content);
    return { summary: result.text };
  },
});

const contentPipeline = createWorkflow({
  name: "Content Processing Pipeline",
  steps: [extractStep, summarizeStep],
});
```

Workflows provide deterministic execution, error handling, retries, and observability that free-form agent generation doesn't offer.
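Invoking a workflow looks roughly like the following. The run API has shifted between Mastra releases, so treat the method names (`createRun`/`start`) as assumptions to verify against the current docs:

```typescript
import { contentPipeline } from "./workflows"; // the workflow defined above

// Assumed invocation pattern: create a run, then start it with typed input
const run = contentPipeline.createRun();
const result = await run.start({
  inputData: { url: "https://example.com/article" },
});
console.log(result); // per-step outputs, including the final summary
```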
### Memory
Mastra's memory system stores conversation history and user context:
```typescript
import { Agent } from "@mastra/core/agent";
import { anthropic } from "@ai-sdk/anthropic";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/memory/stores";

const memory = new Memory({
  store: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    lastMessages: 10, // always include the 10 most recent messages
    semanticRecall: {
      enabled: true,
      topK: 3, // plus the 3 most semantically relevant older messages
    },
  },
});

const agentWithMemory = new Agent({
  name: "Customer Service Agent",
  instructions: "You are a helpful customer service agent.",
  model: anthropic("claude-3-5-sonnet-20241022"),
  memory,
});
```
Semantic recall means the agent retrieves not just recent messages but semantically relevant past interactions — useful for agents that need to remember facts stated earlier in long conversations.
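In practice, memory is scoped to a user and a conversation thread. A sketch, assuming Mastra's `resourceId`/`threadId` options on `generate()` (check the current docs for exact option names):

```typescript
import { agentWithMemory } from "./agents"; // the agent defined above

// The same user and thread ids retrieve the same stored history
const reply = await agentWithMemory.generate("Where did we leave off?", {
  resourceId: "user-123",  // stable identifier for the user
  threadId: "thread-456",  // identifier for this conversation
});
console.log(reply.text);
```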
### RAG and Vector Search
Mastra includes built-in RAG tooling with support for multiple vector store backends:
```typescript
import { MastraVector } from "@mastra/vector-pg"; // PostgreSQL + pgvector

const vectorStore = new MastraVector({
  connectionString: process.env.DATABASE_URL,
});

// Index documents: embeddedChunks is an array of pre-embedded document chunks
await vectorStore.upsert({
  indexName: "company-docs",
  vectors: embeddedChunks,
});

// Query: embed() is an application helper that turns text into a query vector
const results = await vectorStore.query({
  indexName: "company-docs",
  queryVector: await embed("How do I reset my password?"),
  topK: 5,
});
```
Supported vector backends include PostgreSQL with pgvector, Pinecone, Qdrant, Weaviate, and others.
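The `embeddedChunks` array in the example above has to be produced somewhere. A sketch using the AI SDK's `embedMany`; the chunking strategy and the record shape passed to `upsert` are illustrative assumptions:

```typescript
import { openai } from "@ai-sdk/openai";
import { embedMany } from "ai"; // Vercel AI SDK

const documentText = "full document text, loaded elsewhere";

// Naive fixed-size chunking for illustration; real pipelines usually
// split on headings or sentences, often with overlap between chunks
const chunks = documentText.match(/[\s\S]{1,1000}/g) ?? [];

const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: chunks,
});

// Record shape is an assumption; check the vector store's expected format
const embeddedChunks = embeddings.map((vector, i) => ({
  id: `chunk-${i}`,
  vector,
  metadata: { text: chunks[i] },
}));
```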
### Integrations
Mastra ships with pre-built integrations for common services:
- GitHub: Repository operations, issue management, pull request workflows
- Slack: Message sending, channel management, user lookups
- Google: Gmail, Calendar, Drive operations
- HubSpot/Salesforce: CRM data access
- Stripe: Payment and subscription data
These integrations expose typed TypeScript interfaces that make it easy to build agents with access to real services.
## Observability
Mastra includes built-in OpenTelemetry integration and an observability dashboard:
- Trace every agent run from input to output
- See which tools were called and their results
- Track workflow step execution and timing
- Monitor token usage and costs
The dashboard is particularly useful during development for understanding why an agent produced a particular response.
## Deployment
Mastra runs as a standard Node.js HTTP server, deployable anywhere Node.js runs:
- Vercel: First-class support with optimized middleware configuration
- Cloudflare Workers: Edge deployment support
- AWS/GCP/Azure: Standard container deployment
- Self-hosted: Any Node.js-compatible server
```typescript
import { Mastra } from "@mastra/core";

const mastra = new Mastra({
  agents: { researchAgent, customerServiceAgent },
  workflows: { contentPipeline },
  vectors: { vectorStore },
});

// Exposes REST endpoints for all registered agents and workflows
await mastra.startServer({ port: 3000 });
```
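A client can then call the generated endpoints over plain HTTP. The path below is an assumption based on common Mastra route conventions; verify it against the routes your server actually registers:

```typescript
// Hypothetical endpoint path derived from the agent name registered above
const res = await fetch("http://localhost:3000/api/agents/researchAgent/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Summarize MCP adoption" }],
  }),
});
const { text } = await res.json();
console.log(text);
```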
## Strengths
- TypeScript-native: Type safety, IDE autocomplete, and integration with TypeScript tooling are not afterthoughts; they are primary design goals. TypeScript developers feel at home immediately.
- Full-stack integration: Mastra is designed to work inside Next.js applications, not as a separate Python service. AI agent code and web application code can share types, utilities, and business logic.
- Batteries included: Agents, workflows, memory, RAG, integrations, and observability in one package. Not having to assemble multiple incompatible libraries is a real productivity advantage.
- Backed by experienced framework builders: The Gatsby team's framework design experience shows in Mastra's developer experience quality.
## Limitations
- Newer than Python alternatives: LangChain, CrewAI, and similar Python frameworks have larger ecosystems, more community examples, and more production case studies.
- TypeScript ecosystem constraints: Some AI research tools and experimental models are available only as Python libraries. Mastra cannot use these natively without a Python service bridge.
- Still-evolving API: As of early 2026, some Mastra APIs are still stabilizing. Teams building on Mastra should be prepared for occasional breaking changes.
## Ideal Use Cases
- Next.js AI applications: Building AI features into web applications without the overhead of a separate Python microservice.
- TypeScript-only engineering teams: Organizations where all production code is TypeScript and Python expertise is limited.
- Serverless and edge deployments: Applications running on Vercel, Cloudflare, or similar platforms.
- Full-stack developers: Individual developers building complete AI applications who want a single coherent framework.
## How It Compares
Mastra vs LangChain.js: LangChain.js is a TypeScript port of a framework designed in Python, while Mastra was designed TypeScript-first. The difference shows in developer experience: Mastra's API feels like idiomatic TypeScript rather than translated Python.
Mastra vs OpenAI Agents SDK: The OpenAI Agents SDK has excellent TypeScript support but is tied to OpenAI's models and ecosystem. Mastra is model-agnostic.
Mastra vs AI SDK (Vercel): Vercel's AI SDK provides streaming UI components and model-agnostic generation. Mastra provides the full agent framework infrastructure on top of similar primitives. They complement each other; Mastra uses the AI SDK for model connections.
## Bottom Line
Mastra has established the strongest position in the TypeScript agent framework space, combining developer experience quality with a comprehensive feature set. For TypeScript teams building production AI applications, Mastra removes the Python-vs-JavaScript trade-off that has constrained the ecosystem.
Best for: TypeScript and Node.js developers building AI agents and workflows, teams deploying on Vercel or Cloudflare, and full-stack developers who want AI agent infrastructure in their primary language.
## Frequently Asked Questions
**Can Mastra use local models like Ollama?** Yes. Mastra uses the Vercel AI SDK for model connections, which supports Ollama and other OpenAI-compatible local model servers.
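A sketch of pointing an agent at Ollama through the AI SDK's OpenAI-compatible provider; the model name is whatever you have pulled locally:

```typescript
import { Agent } from "@mastra/core/agent";
import { createOpenAI } from "@ai-sdk/openai";

// Point the OpenAI-compatible provider at a local Ollama server
const ollama = createOpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
  apiKey: "ollama", // Ollama ignores the key, but the SDK requires one
});

const localAgent = new Agent({
  name: "Local Agent",
  instructions: "You run entirely on local hardware.",
  model: ollama("llama3.1"),
});
```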
**Does Mastra support multi-agent workflows?** Yes. Mastra supports agents calling other agents as tools, enabling multi-agent architectures where specialized agents collaborate on complex tasks.
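One way to wire this up is to wrap a specialist agent in a tool that a coordinator agent can call. A sketch, assuming a `summarizerAgent` defined elsewhere and Mastra's `createTool` helper:

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import { summarizerAgent } from "./agents"; // specialist agent defined elsewhere

// Expose the specialist agent as a tool another agent can invoke
const summarizeTool = createTool({
  id: "summarize",
  description: "Summarize a block of text using the summarizer agent",
  inputSchema: z.object({ text: z.string() }),
  execute: async ({ context }) => {
    const result = await summarizerAgent.generate(context.text);
    return { summary: result.text };
  },
});
```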
**Is Mastra compatible with LangChain tools?** Not directly — the tool interfaces differ. However, wrapping a LangChain tool in a Mastra-compatible interface is straightforward TypeScript.
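A sketch of such a wrapper, using a LangChain community tool's standard `invoke()` interface; the specific tool (`SerpAPI`) is just an example:

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import { SerpAPI } from "@langchain/community/tools/serpapi";

const serp = new SerpAPI();

// Adapter: forward the Mastra tool call to the LangChain tool's invoke()
const webSearch = createTool({
  id: "web-search",
  description: serp.description,
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ context }) => {
    const output = await serp.invoke(context.query);
    return { output };
  },
});
```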
**Can I use Mastra in a monorepo alongside my Next.js application?** Yes. This is the recommended pattern. Mastra packages are standard npm packages that integrate naturally into TypeScript monorepo setups with tools like Turborepo.