---
title: "Tool Use"
description: "Enable your chatbot to interact with external APIs and functions using AI SDK Tools. Define tools, handle function calls, and return results to the LLM."
canonical_url: "https://vercel.com/academy/ai-sdk/tool-use"
md_url: "https://vercel.com/academy/ai-sdk/tool-use.md"
docset_id: "vercel-academy"
doc_version: "1.0"
last_updated: "2026-04-11T17:31:57.060Z"
content_type: "lesson"
course: "ai-sdk"
course_title: "Builders Guide to the AI SDK"
prerequisites: []
---

<agent-instructions>
Vercel Academy — structured learning, not reference docs.
Lessons are sequenced.
Adapt commands to the human's actual environment (OS, package manager, shell, editor) — detect from project context or ask, don't assume.
The lesson shows one path; if the human's project diverges, adapt concepts to their setup.
Preserve the learning goal over literal steps.
Quizzes are pedagogical — engage, don't spoil.
Quiz answers are included for your reference.
</agent-instructions>

# Tool Use

Your chatbot has personality ([system prompts](./system-prompts)) and a beautiful UI ([Elements](./ai-elements)), but it lacks real-time knowledge. It doesn't know today's weather, can't check prices, or access current data.

Tools let your AI call functions to fetch data, perform calculations, or interact with external APIs. They bridge the gap between the AI's static knowledge and the dynamic real world.

**Note: Building on Elements**

We'll add tool calling to our [Elements-powered chat interface](./ai-elements). Building on the [basic chatbot](./basic-chatbot) and [system prompts](./system-prompts) lessons, we'll extend our chat with real-world data access. The professional UI will make tool invocations visible and interactive!

## The Problem: LLM Limitations

Base LLMs operate within constraints:

- **Knowledge Cutoff:** Lack real-time info (weather, news, stock prices). LLMs are trained on a static dataset, so they typically only have data from before their knowledge cutoff date.
- **Inability to Act:** Cannot directly interact with external systems (APIs, databases). LLMs produce text. They don't have capabilities beyond that.

Asking "What's the weather in San Francisco?" fails because the model lacks live data access. However capable the model is, it's always a snapshot of the past.

Thankfully, this problem can be solved with "tool calling," which gives your model the ability to run code based on your conversation context. The results of those function calls are then fed back into the prompt context to generate a final response.

## Calling Tools with the AI SDK (Function Calling)

Tools allow the model to access functions based on conversation context. Think of them as a hotline the LLM can pick up: it calls a pre-defined function, and the results come back inline.

### Here's the Flow:

1. **User Query:** Asks a question requiring external data/action.
2. **Model Identifies Need:** Matches query to tool `description`.
3. **Model Generates Tool Call:** Outputs structured request to call specific tool with inferred parameters.
4. **SDK Executes Tool:** API route receives call, SDK invokes `execute` function.
5. **Result Returned:** `execute` function runs (e.g., calls weather API), returns data.
6. **Model Generates Response:** Tool result is automatically fed back to model for final text response.

If you've used a coding environment like Cursor, you've seen this flow in action. That's how Cursor and similar tools interact with your codebase.

Remember that tools grant LLMs access to real-time data and action capabilities, dramatically expanding chatbot usefulness.
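The six-step flow above can be sketched in plain TypeScript with no SDK at all. Every name below (`fakeModel`, `runConversation`, the canned weather string) is illustrative, not a real AI SDK API:

```typescript
// A minimal, SDK-free sketch of the tool-calling loop described above.
type ToolCall = { tool: "getWeather"; input: { city: string } };
type ModelTurn = { toolCall?: ToolCall; text?: string };

// Steps 4-5: the tools our runtime can execute on the model's behalf.
const tools = {
  getWeather: ({ city }: { city: string }) => `18°C and cloudy in ${city}`,
};

// A stand-in for the LLM: without a tool result it requests a call;
// with one in context, it writes the final answer (step 6).
function fakeModel(userQuery: string, toolResult?: string): ModelTurn {
  if (!toolResult) {
    return { toolCall: { tool: "getWeather", input: { city: "San Francisco" } } };
  }
  return { text: `Right now it's ${toolResult}.` };
}

function runConversation(userQuery: string): string {
  let turn = fakeModel(userQuery); // steps 1-3
  while (turn.toolCall) {
    // Steps 4-5: execute the tool and feed the result back to the model.
    const result = tools[turn.toolCall.tool](turn.toolCall.input);
    turn = fakeModel(userQuery, result);
  }
  return turn.text ?? "";
}
```

The AI SDK runs this same loop for you: you only supply the tool definitions; the SDK handles detecting tool calls, executing them, and streaming the parts back to the client.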

To see this in action you'll build a tool to check the weather.

## Step 1: Define `getWeather` Tool

Create a new file `app/api/chat/tools.ts` to define our weather tool.

**Note: Server Actions or Route Handlers?**

Tool endpoints in this lesson live under `/app/api/chat` because we need a
reusable HTTP surface that the `useChat` hook (and anything else) can `fetch`.
The AI SDK defaults to that Route Handler path, so keep it in place for chat
flows even if you reuse the same mutation logic elsewhere. When your UI is the
only caller and the mutation is form-driven, Server Actions keep things
ergonomic (secure secrets, automatic revalidation, no endpoint). If other
clients (mobile apps, webhooks, cron jobs) hit the same logic, move it into a
Route Handler or share a module between both surfaces.
The Next.js docs on [Updating Data](https://nextjs.org/docs/app/getting-started/updating-data),
[Route Handlers](https://nextjs.org/docs/app/getting-started/route-handlers),
and the [Backend-for-Frontend guide](https://nextjs.org/docs/app/guides/backend-for-frontend)
lay out the trade-offs, and the [AI SDK Next.js quickstart](https://ai-sdk.dev/docs/getting-started/nextjs-app-router)
documents the default `/app/api/chat` contract.

1. **Start with the basic structure:**

```typescript title="app/api/chat/tools.ts"
import { tool } from 'ai';
import { z } from 'zod';

export const getWeather = tool({
  // TODO: Add a clear description for the AI to understand when to use this tool
  description: '',

  // TODO: Define the input schema using Zod
  // The tool needs a 'city' parameter (string)
  inputSchema: z.object({
    // Add schema here
  }),

  // TODO: Implement the execute function
  // This function runs when the AI calls the tool
  execute: async ({ city }) => {
    // Implementation goes here
  },
});
```

2. **Add the description to help the AI understand when to use this tool:**

```typescript title="app/api/chat/tools.ts" {2}
export const getWeather = tool({
  description: `Get the current weather conditions and temperature for a specific city.`,

  // Still TODO: inputSchema and execute
});
```

The description is what the AI reads to decide if this tool matches the user's request.

**Note: Prompt Engineering for Tools**

The `description` field is crucial - it's how the AI understands when to use your tool. Be specific and clear:

- ✅ Good: "Get current weather for a specific city. Use when users ask about weather, temperature, or conditions."
- ❌ Bad: "Weather tool"

The AI uses semantic matching between the user's query and your description to decide which tool to call.

3. **Define the input schema - what parameters the tool needs:**

```typescript title="app/api/chat/tools.ts" {4-6}
export const getWeather = tool({
  description: `Get the current weather conditions and temperature for a specific city.`,

  inputSchema: z.object({
    city: z.string().describe('The city name for weather lookup'),
  }),

  // Still TODO: execute function
});
```

The AI will extract the city name from the user's message and pass it to your tool.
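Models are usually good at normalizing conversational names ("SF" → "San Francisco") before passing them as parameters, but you can also normalize defensively inside the tool itself. A small sketch — the alias map here is a made-up example, not part of the SDK:

```typescript
// Defensive normalization for city inputs the model extracts.
// The alias map is illustrative; extend it for your own tool.
const cityAliases: Record<string, string> = {
  sf: "San Francisco",
  nyc: "New York",
  la: "Los Angeles",
};

function normalizeCity(raw: string): string {
  const cleaned = raw.trim().toLowerCase();
  // Known alias? Use the canonical name; otherwise pass through trimmed.
  return cityAliases[cleaned] ?? raw.trim();
}
```

You could call `normalizeCity(city)` as the first line of your `execute` function so lookups like the coordinate map below behave consistently.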

**Note: 💡 Need Help Designing Tool Schemas?**

Unsure about what parameters your tool should accept or how to structure them? Try this:

```markdown title="Prompt: Designing Effective Tool Input Schemas"
<context>
I'm building a tool for my Vercel AI SDK chatbot using the `tool()` helper with Zod schemas.
My tool will: [describe what your tool does]
Target use cases: [describe when users would invoke this tool]
</context>

<tool-purpose>
Tool name: getWeather
Purpose: Fetch current weather conditions and temperature for a specified location
External API: Open-Meteo weather API (free, no key needed)
</tool-purpose>

<current-schema-draft>
inputSchema: z.object({
  city: z.string().describe('The city name for weather lookup'),
})
</current-schema-draft>

<questions>
1. **Parameter granularity:** Should I just accept "city" or also "country" to handle ambiguous city names (e.g., Paris, France vs Paris, Texas)?

2. **Optional parameters:** Should I add optional fields like:
   - `units` (celsius/fahrenheit)?
   - `includeHourly` (boolean for detailed forecast)?
   Or keep it simple with just required fields?

3. **Validation:** Should I use `.refine()` to validate city names, or trust the AI to extract valid inputs?

4. **Description quality:** My current description is "The city name for weather lookup" - is this specific enough for the AI to:
   - Extract the right parameter from conversational queries?
   - Handle variations like "What's it like in SF?" → city: "San Francisco"?

5. **Edge cases:** How should my schema handle:
   - Misspelled city names?
   - Cities with special characters (São Paulo)?
   - Coordinates instead of city names (some users might provide lat/lon)?
</questions>

<example-user-queries>
- "What's the weather in San Francisco?"
- "Is it raining in NYC?"
- "Tell me about the temperature in Tokyo today"
- "Weather forecast for London, UK"

Should my schema handle all of these, or should I keep it simple and rely on the AI to normalize inputs?

Recommend a schema design with rationale for each decision (parameter choices, validation, edge case handling).
</example-user-queries>
```

This will help you design robust, flexible tool schemas that handle real-world usage patterns!

4. **Implement the execute function with a simple weather API:**

```typescript title="app/api/chat/tools.ts" {8-35}
export const getWeather = tool({
  description: `Get the current weather conditions and temperature for a specific city.`,

  inputSchema: z.object({
    city: z.string().describe('The city name for weather lookup'),
  }),

  execute: async ({ city }) => {
    // For demo: use a simple city-to-coordinates mapping
    // In production, you'd use a geocoding API
    const cityCoordinates: Record<string, { lat: number; lon: number }> = {
      'san francisco': { lat: 37.7749, lon: -122.4194 },
      'new york': { lat: 40.7128, lon: -74.006 },
      london: { lat: 51.5074, lon: -0.1278 },
      tokyo: { lat: 35.6762, lon: 139.6503 },
      paris: { lat: 48.8566, lon: 2.3522 },
    };

    const coords = cityCoordinates[city.toLowerCase()] ||
                   cityCoordinates['new york']; // Default fallback

    // Call the free Open-Meteo weather API (no key needed!)
    const response = await fetch(
      `https://api.open-meteo.com/v1/forecast?` +
      `latitude=${coords.lat}&longitude=${coords.lon}&` +
      `current=temperature_2m,weathercode&timezone=auto`
    );

    const weatherData = await response.json();

    return {
      city,
      temperature: weatherData.current.temperature_2m,
      weatherCode: weatherData.current.weathercode,
    };
  },
});
```

**Note: What just happened?**

You built a complete tool in 4 progressive steps:

1. **Description**: Tells the AI when to use this tool
2. **Input Schema**: Defines what parameters the AI should extract
3. **Execute Function**: The actual code that runs when called
4. **Return Value**: Structured data the AI can use in its response

The Open-Meteo API is free and requires no API key - perfect for demos!
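The `weathercode` value the tool returns is a numeric WMO code, which isn't very readable on its own. A small lookup can translate it into a label before it reaches the model. This sketch covers only a handful of common codes — check the Open-Meteo docs for the full table:

```typescript
// Translate a WMO weather code (Open-Meteo's `weathercode`) into a
// human-readable label. Partial table for illustration only.
const weatherCodeLabels: Record<number, string> = {
  0: "Clear sky",
  1: "Mainly clear",
  2: "Partly cloudy",
  3: "Overcast",
  45: "Fog",
  61: "Light rain",
  63: "Moderate rain",
  65: "Heavy rain",
  71: "Light snow",
  95: "Thunderstorm",
};

function describeWeatherCode(code: number): string {
  // Fall back to the raw code so unknown values are still visible.
  return weatherCodeLabels[code] ?? `Unknown conditions (code ${code})`;
}
```

You could return `conditions: describeWeatherCode(weatherData.current.weathercode)` alongside the raw code, giving the model a friendlier string to work into its response.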

## Step 2: Connect the Tool to Your API Route

Now update your API route to use this tool. Modify `app/api/chat/route.ts`:

```typescript title="app/api/chat/route.ts" {2,14}
import { streamText, convertToModelMessages } from "ai";
import { getWeather } from "./tools";

export const maxDuration = 30;

export async function POST(req: Request) {
	try {
		const { messages } = await req.json();

		const result = streamText({
			model: "openai/gpt-5-mini", // Fast model handles tool calling efficiently for real-time interactions
			system: "You are a helpful assistant.",
			messages: convertToModelMessages(messages),
			tools: { getWeather },
		});

		return result.toUIMessageStreamResponse();
	} catch (error) {
		console.error("Chat API error:", error);

		return new Response(
			JSON.stringify({
				error: "Failed to process chat request",
				details: error instanceof Error ? error.message : "Unknown error",
			}),
			{
				status: 500,
				headers: { "Content-Type": "application/json" },
			},
		);
	}
}
```

Key changes:

- Import the `getWeather` tool from `./tools`
- Add `tools: { getWeather }` to register it with the AI

Your chatbot now has access to the weather tool! Try asking "What's the weather in Tokyo?" - but since your UI only renders text parts so far, the tool activity won't show up yet. Let's fix that next.

## Step 3: Handle Tool Calls in the UI

With tools enabled, messages now have different `parts` - some are text, some are tool calls. We need to handle both types.

First, update your message rendering to check the part type. Remember our current code just shows text? Let's evolve it:

```typescript title="app/(5-chatbot)/chat/page.tsx"
// Current code - only handles text:
{message.role === "assistant" ? (
  <Response>
    {message.parts
      ?.filter((part) => part.type === "text")
      .map((part) => part.text)
      .join("")}
  </Response>
) : (
  // user messages...
)}
```

Now let's handle both text AND tool calls. We'll use a switch statement to handle different part types:

```typescript title="app/(5-chatbot)/chat/page.tsx" {3-23}
// Updated code - handles multiple part types:
{message.role === "assistant" ? (
  message.parts?.map((part, i) => {
    switch (part.type) {
      case "text":
        return (
          <Response key={`${message.id}-${i}`}>
            {part.text}
          </Response>
        );
      case "tool-getWeather":  // Tool parts are named "tool-TOOLNAME"
        // For now, show raw JSON to see what we're working with
        return (
          <div key={`${message.id}-${i}`} className="text-xs font-mono p-2 bg-gray-100 rounded">
            Weather Tool Called:
            <pre>Input: {JSON.stringify(part.input, null, 2)}</pre>
            <pre>Output: {JSON.stringify(part.output, null, 2)}</pre>
          </div>
        );
      default:
        return null;
    }
  })
) : (
  // user messages stay the same...
)}
```

**Test it now:** Ask "What's the weather in San Francisco?" and you'll see:

- Your message appears
- Raw tool call data showing the city parameter
- The temperature and weather data returned
- The AI's final response using that data

![Screenshot of the chat UI showing the raw tool call data](https://ezs2ytwtdks5l2we.public.blob.vercel-storage.com/ai-sdk-course-tool-call-san-francisco-raw-data-json.png)

This raw view helps you understand the tool calling flow!

## Step 4: Make It Beautiful with Elements

Now that you understand the raw data, let's replace that JSON dump with beautiful Elements components. First, add the Tool imports to your existing imports:

```typescript title="app/(5-chatbot)/chat/page.tsx" {2-8}
import { Response } from "@/components/ai-elements/response";
import {
  Tool,
  ToolContent,
  ToolHeader,
  ToolInput,
  ToolOutput,
} from "@/components/ai-elements/tool";
import {
  PromptInput,
```

Then replace your raw JSON display with the Elements components:

```typescript title="app/(5-chatbot)/chat/page.tsx" {11-21}
switch (part.type) {
  case "text":
    return (
      <Response key={`${message.id}-${i}`}>
        {part.text}
      </Response>
    );
  case "tool-getWeather":
    // Replace the raw JSON with Elements components
    return (
      <Tool key={part.toolCallId || `${message.id}-${i}`}>
        <ToolHeader type={part.type} state={part.state} />
        <ToolContent>
          <ToolInput input={part.input} />
          <ToolOutput
            output={JSON.stringify(part.output, null, 2)}
            errorText={part.errorText}
          />
        </ToolContent>
      </Tool>
    );
  default:
    return null;
}
```

**Test it:** Ask "What's the weather in San Francisco?" again. Now instead of raw JSON, you'll see:

- A beautiful tool card with the tool name and status
- Formatted input parameters showing the city
- Nicely displayed output data with the temperature and weather code

![Screenshot of the chat UI showing the beautiful tool card with the tool name and status](https://ezs2ytwtdks5l2we.public.blob.vercel-storage.com/ai-sdk-tool-call-weather-ai-elements.png)

The Elements components automatically handle loading states, errors, and formatting - much better than raw JSON!

## Step 5: Test the Complete Implementation

Start your dev server:

```bash
pnpm dev
```

Navigate to <http://localhost:3000/chat> and ask: "What's the weather in San Francisco?"

You should now see:

1. **Your message** - "What's the weather in San Francisco?"
2. **Tool execution card** - Shows the weather API call with input city and output data

**Note: Why No Natural Language Response?**

Notice you only see the tool output - no AI explanation of the weather data. By default, the AI stops after executing a tool and returns the raw results.

To get the AI to provide a natural language response that synthesizes the tool data (like "The weather in San Francisco is 19°C and cloudy"), you need to enable multi-step conversations. We'll cover this in the next lesson!

**Side Quest: Define a Complex Tool**

```typescript title="app/(5-chatbot)/api/chat/tools.ts"
import { z } from 'zod';

export const flightBookingParameters = z.object({
  trip: z.object({
    origin: z.string().describe('Origin airport code (e.g., LAX, JFK)'),
    destination: z
      .string()
      .describe('Destination airport code (e.g., LHR, NRT)'),
    departureDate: z
      .string()
      .regex(/^\d{4}-\d{2}-\d{2}$/, 'YYYY-MM-DD')
      .describe('Departure date in YYYY-MM-DD format'),
    returnDate: z
      .string()
      .regex(/^\d{4}-\d{2}-\d{2}$/, 'YYYY-MM-DD')
      .describe('Return date in YYYY-MM-DD format (for round trips)')
      .optional(),
  }),
  passengers: z
    .array(
      z.object({
        type: z.enum(['adult', 'child', 'infant']).describe('Passenger type'),
        count: z
          .number()
          .int()
          .min(1)
          .max(9)
          .describe('Number of this passenger type'),
      }),
    )
    .min(1)
    .describe('Passenger list with at least one entry'),
  preferences: z
    .object({
      cabinClass: z
        .enum(['economy', 'premium', 'business', 'first'])
        .describe('Preferred cabin class')
        .optional(),
      directFlightsOnly: z
        .boolean()
        .describe('Whether to only show direct flights')
        .optional(),
    })
    .optional(),
});
```
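One caveat with the schema above: the `YYYY-MM-DD` regex only checks shape, not validity — "2025-13-40" passes it. A stricter check (a sketch you could wire into a Zod `.refine()`) parses the string and verifies it round-trips through a real calendar date:

```typescript
// The shape-only regex accepts impossible dates like "2025-13-40".
// This stricter check parses the parts and verifies they round-trip.
function isValidIsoDate(value: string): boolean {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(value)) return false;
  const [y, m, d] = value.split("-").map(Number);
  const date = new Date(Date.UTC(y, m - 1, d));
  // Date silently rolls overflowed parts (month 13, day 40...) into the
  // next month/year, so comparing back catches invalid calendar dates.
  return (
    date.getUTCFullYear() === y &&
    date.getUTCMonth() === m - 1 &&
    date.getUTCDate() === d
  );
}
```

Attached via `.refine(isValidIsoDate, 'Invalid calendar date')`, this would reject dates like February 30th that the regex alone lets through.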

**Side Quest: Production-Ready Error Handling**

**Note: 💡 Need Help with Error Handling Strategies?**

Unsure how to implement robust error handling in your tools? Try this:

```markdown title="Prompt: Production-Grade Tool Error Handling"
<context>
I'm building production-ready tools for my Vercel AI SDK chatbot.
My current tools call external APIs (weather, flight booking, etc.) and need resilient error handling.
I'm working in TypeScript with async/await patterns.
</context>

<current-implementation>
execute: async ({ city }) => {
  const coords = cityCoordinates[city.toLowerCase()];
  const response = await fetch(`https://api.open-meteo.com/v1/forecast?latitude=${coords.lat}...`);
  const weatherData = await response.json();
  return {
    city,
    temperature: weatherData.current.temperature_2m,
    weatherCode: weatherData.current.weathercode,
  };
}
</current-implementation>

<problems>
1. **No timeout handling:** If the API is slow, the tool hangs indefinitely
2. **No validation:** Assumes `city` exists in `cityCoordinates` - crashes if not
3. **No response checking:** Doesn't verify `response.ok` before parsing JSON
4. **No retry logic:** Transient network failures kill the tool call
5. **No caching:** Same city requested multiple times = redundant API calls
</problems>

<questions>
1. **Timeout strategy:** Should I use `AbortSignal.timeout()` or a custom timeout wrapper? What's a reasonable timeout (3s? 5s? 10s)?

2. **Graceful degradation:** When an error occurs, should I:
   - Return a partial result with error details?
   - Return a user-friendly error message?
   - Let the AI explain the failure to the user?

3. **Retry logic:** For transient failures:
   - How many retries are reasonable (2? 3?)?
   - Should I use exponential backoff (100ms, 200ms, 400ms)?
   - Which HTTP status codes warrant retries (429, 500, 503)?

4. **Input validation:** Should I:
   - Validate city names against a whitelist before hitting the API?
   - Use Zod `.refine()` to check for valid inputs?
   - Trust the AI to provide valid inputs and just handle API errors?

5. **Caching:** Should I:
   - Cache successful responses for X minutes (5? 15? 30?)?
   - Use a simple in-memory Map or integrate Redis?
   - Cache based on exact input match or normalize inputs first?

6. **Error messages:** How detailed should errors be for the LLM?
   - Technical: "API returned 503 Service Unavailable"
   - User-friendly: "Weather service is temporarily down"
   - Actionable: "Unable to fetch weather for [city]. Try a major city name."
</questions>

<specific-scenario>
User asks: "What's the weather in Atlantis?" (non-existent city)

Current behavior: Crashes with "Cannot read property 'lat' of undefined"
Desired behavior: Return friendly error the AI can explain to user

Show me the error handling code that would solve this, with explanations of each technique used.
</specific-scenario>
```

This will help you implement production-grade error handling patterns!

```typescript title="error-handling-tool.ts"
execute: async ({ city }) => {
  try {
    // `url` is the Open-Meteo request built from the city's coordinates,
    // as in the getWeather tool above
    const response = await fetch(url, {
      signal: AbortSignal.timeout(5000) // 5 second timeout
    });

    if (!response.ok) {
      throw new Error(`API error: ${response.status}`);
    }

    return await response.json();
  } catch (error) {
    console.error('Tool execution failed:', error);

    // Return structured error for AI to explain
    return {
      error: 'Unable to fetch weather data',
      city,
      suggestion: 'Please try another city or check back later'
    };
  }
}
```
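For the retry question in the prompt above, one common pattern is exponential backoff around the fetch. Here's a sketch — the retry count and delays are illustrative defaults to tune for your API, not SDK settings:

```typescript
// Retry an async operation with exponential backoff.
// Defaults (3 retries, 200ms base delay) are illustrative only.
async function withRetry<T>(
  operation: () => Promise<T>,
  { retries = 3, baseDelayMs = 200 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt === retries) break;
      // Wait 200ms, 400ms, 800ms... between attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Inside `execute`, you'd wrap the fetch: `const response = await withRetry(() => fetch(url, { signal: AbortSignal.timeout(5000) }));` — keeping the surrounding try/catch so exhausted retries still surface as a structured error the model can explain.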

## Key Takeaways

You've given your chatbot superpowers with tool calling:

- **Tools extend AI capabilities** - Access real-time data, perform calculations, call APIs
- **The `tool` helper** defines what a tool can do with a description, an input schema, and an execute function
- **Tool registration via `tools` property** - Makes tools available to the model
- **Elements UI displays everything beautifully** - Professional presentation of both text and tool activity

## Further Reading (Optional)

Strengthen your tool-calling implementation with these security-focused resources:

- [LLM Function Calling Security (OpenAI Docs)](https://platform.openai.com/docs/guides/function-calling/security-considerations)\
  Official guidance on hardening function calls (parameter validation, auth, rate limits).
- [OWASP Top 10 for LLM Applications](https://owasp.org/www-project-top-10-for-large-language-model-applications/)\
  Community-maintained list of the most critical security risks when deploying LLMs.
- [Prompt Injection Payloads Encyclopedia (PIPE)](https://github.com/jthack/PIPE)\
  A living catalogue of real-world prompt-injection vectors to test against.
- [NVIDIA NeMo Guardrails Security Guidelines](https://docs.nvidia.com/nemo/guardrails/latest/security/guidelines.html)\
  Practical design principles for safely granting LLMs access to external tools/APIs.
- [Function Calling Using LLMs — Martin Fowler](https://martinfowler.com/articles/function-call-LLM.html)\
  Architectural walkthrough of building a secure, extensible tool-calling agent.
- [Step-by-Step Guide to Securing LLM Applications (Protect AI)](https://protectai.com/blog/step-by-step-guide-to-securing-llm-applications)\
  Lifecycle-based checklist covering training, deployment and runtime hardening.

## Up Next: Multi-Step Conversations & Generative UI

Your model can now call a single tool and provide responses. But what if you need multiple tools in one conversation? Or want to display rich UI components instead of just text?

The next lesson explores **Multi-Step Conversations** where the AI can chain multiple tool calls together, and **Generative UI** to render beautiful interactive components directly in the chat.


---

[Full course index](/academy/llms.txt) · [Sitemap](/academy/sitemap.md)
