• Sanity is now available on the Vercel Marketplace

    Sanity is now available on the Vercel Marketplace as a native CMS integration. Teams can now install, configure, and manage Sanity directly from the Vercel dashboard, eliminating manual API token setup and environment variable configuration.

    This integration keeps CMS setup inside your existing Vercel workflow instead of requiring a separate dashboard for provisioning and account management.

    Get started with the integration

    Define your content schema, set up the client, and start fetching content. Schemas define your content structure in code, specifying document types and their fields.

    src/sanity/schemaTypes/postType.ts
    import { defineField, defineType } from "sanity";

    export const postType = defineType({
      name: "post",
      title: "Post",
      type: "document",
      fields: [
        defineField({ name: "title", type: "string" }),
        defineField({ name: "slug", type: "slug", options: { source: "title" } }),
        defineField({ name: "publishedAt", type: "datetime" }),
        defineField({ name: "body", type: "array", of: [{ type: "block" }] }),
      ],
    });

    Define a post document type with title, slug, published date, and rich text body fields

    Register your schema types in an index file so Sanity can load them.

    src/sanity/schemaTypes/index.ts
    import { postType } from "./postType";
    export const schemaTypes = [postType];

    Export all schema types for Sanity Studio to use
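
    For reference, a minimal Studio config that registers these types might look like the sketch below. The structureTool plugin and file paths here are illustrative assumptions, not something the Marketplace integration prescribes.

    sanity.config.ts
    import { defineConfig } from "sanity";
    import { structureTool } from "sanity/structure";
    import { schemaTypes } from "./src/sanity/schemaTypes";

    export default defineConfig({
      projectId: process.env.NEXT_PUBLIC_SANITY_PROJECT_ID!, // provisioned by the integration
      dataset: "production",
      // Load every schema type exported from the index file
      schema: { types: schemaTypes },
      plugins: [structureTool()],
    });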

    The Sanity client connects your application to your content. The Marketplace integration provisions the project ID as an environment variable automatically.

    src/sanity/lib/client.ts
    import { createClient } from "next-sanity";

    export const client = createClient({
      projectId: process.env.NEXT_PUBLIC_SANITY_PROJECT_ID,
      dataset: "production",
      apiVersion: "2024-01-01",
      useCdn: false,
    });

    Create a reusable client configured with your project's environment variables

    With the client configured, you can fetch content using GROQ (Graph-Relational Object Queries), Sanity's query language for requesting exactly the fields you need.

    src/app/page.tsx
    import { client } from "@/sanity/lib/client";

    const POSTS_QUERY = `*[_type == "post"] | order(publishedAt desc)[0...12]{
      _id, title, slug, publishedAt
    }`;

    export default async function HomePage() {
      const posts = await client.fetch(POSTS_QUERY);
      return (
        <ul>
          {posts.map((post) => (
            <li key={post._id}>{post.title}</li>
          ))}
        </ul>
      );
    }

    Fetch the 12 most recent posts and render them as a list
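
    If you want type safety on the result, you can pass a type argument to client.fetch. The Post shape below is an illustrative match for the fields the query projects, not something the integration generates for you.

    // Illustrative type for the projected fields
    type Post = {
      _id: string;
      title: string;
      slug: { current: string };
      publishedAt: string;
    };

    const posts = await client.fetch<Post[]>(POSTS_QUERY);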

    That's all you need to go from install to fetching content. Install Sanity from the Vercel Marketplace to get started, or deploy the Next.js + Sanity Personal Website template to start from a working example.

    Marketplace Team

  • Simplified file retrieval from Vercel Sandbox environments

    The Vercel Sandbox SDK now includes two new methods, downloadFile() and readFileToBuffer(), that make file retrieval simple.

    When you run code in a Vercel Sandbox, that code can generate files like a CSV report, a processed image, or a PDF invoice. These files are created inside isolated VMs, so they need to be retrieved across a network boundary. Until now, this required manual stream handling with custom piping.

    Download a file

    If you want to download a generated report from your sandbox to your local machine, you can use downloadFile() to seamlessly stream the contents.

    const dstPath = await sandbox.downloadFile(
      { path: 'generated-file.csv', cwd: '/vercel/sandbox' },
      { path: 'generated-file.csv', cwd: '/tmp' }
    );

    Read file contents to buffer

    Both methods handle the underlying stream operations automatically. For example, if your sandbox runs a script that generates a chart as a PNG, you can pull it out with a single call to readFileToBuffer(); no manual stream wiring is needed.

    const buffer = await sandbox.readFileToBuffer({ path: 'chart.png' });
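
    Putting it together, an end-to-end flow might look like the following sketch. The chart-generating script and file names are illustrative assumptions.

    import { Sandbox } from '@vercel/sandbox';
    import { writeFile } from 'node:fs/promises';

    // Create a sandbox and run a (hypothetical) script that writes chart.png
    const sandbox = await Sandbox.create();
    await sandbox.runCommand({ cmd: 'python', args: ['make_chart.py'] });

    // Pull the generated PNG across the network boundary as a Buffer
    const buffer = await sandbox.readFileToBuffer({ path: 'chart.png' });
    await writeFile('local-chart.png', buffer);

    await sandbox.stop();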

    Learn more about the Sandbox SDK or explore the updated documentation.

    Laurens Duijvesteijn, Rob Herley

  • Use Claude Opus 4.6 on AI Gateway

    Anthropic's latest flagship model, Claude Opus 4.6, is now available on AI Gateway. Built to power agents that handle real-world work, Opus 4.6 excels across the entire development lifecycle. Opus 4.6 is also the first Opus model to support the extended 1M token context window.

    The model introduces adaptive thinking, a new parameter that lets the model decide when and how much to reason. This approach enables more efficient responses while maintaining quality across programming, analysis, and creative tasks, with performance equal to or better than extended thinking. Opus 4.6 can also interleave thinking and tool calls within a single response.

    To use the model, set model to anthropic/claude-opus-4.6. The following example also uses adaptive thinking and the effort parameter.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'anthropic/claude-opus-4.6',
      prompt: `Trace this race condition through the event loop, identify all
        affected code paths, and implement a fix with proper test coverage.`,
      providerOptions: {
        anthropic: {
          thinking: { type: 'adaptive' },
          effort: 'max',
        },
      },
    });
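
    From there, you can consume the response incrementally, for example through the AI SDK's textStream iterable:

    // Print the model's text output as it streams in
    for await (const chunk of result.textStream) {
      process.stdout.write(chunk);
    }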

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

    The AI Gateway model leaderboard ranks the most used models over time by total token volume across all traffic through the Gateway, and is updated regularly.

  • Build logs now support interactive links

    URLs in build logs are now interactive. Navigate directly to internal and external resources without manually copying and pasting. External links open in a new tab.

    This removes extra steps when investigating build issues or following documentation links.

    Learn more about accessing build logs.

  • Parallel's Web Search and tools are live on Vercel

    You can now use Parallel's LLM-optimized web search and other tools across Vercel.

    AI Gateway

    Unlike provider-specific web search tools that only work with certain models, Parallel's web search tool works universally across all providers. This means you can add web search capabilities to any model without changing your implementation.

    To use it through the AI SDK, set parallel_search: gateway.tools.parallelSearch() in tools.

    import { gateway, streamText } from 'ai';

    const result = streamText({
      model: 'moonshotai/kimi-k2.5', // Works with any model
      prompt: 'What are the best new restaurants in San Francisco?',
      tools: {
        parallel_search: gateway.tools.parallelSearch(),
      },
    });

    Parallel web search extracts relevant excerpts from web pages, making it ideal for agentic tasks and real-time information retrieval. For more control, you can also configure the tool to use specific parameters.

    import { gateway, streamText } from 'ai';

    const result = streamText({
      model: 'moonshotai/kimi-k2.5',
      prompt: 'What new AI model releases have been announced this month?',
      tools: {
        parallel_search: gateway.tools.parallelSearch({
          mode: 'one-shot',
          maxResults: 10,
          sourcePolicy: {
            includeDomains: [
              'openai.com',
              'anthropic.com',
              'deepseek.com',
              'moonshot.ai',
              'deepmind.google',
            ],
            afterDate: '2026-01-01',
          },
        }),
      },
    });

    For agentic workflows, use mode: 'agentic' to get concise, token-efficient search results that work well in multi-step reasoning.

    Time-sensitive queries can control cache freshness with maxAgeSeconds, while domain-specific search lets you restrict results to trusted sources or exclude noisy domains.
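
    As a sketch combining these options (the exact placement of maxAgeSeconds in the tool configuration is an assumption based on the description above; check the docs for the precise shape):

    import { gateway, streamText } from 'ai';

    const result = streamText({
      model: 'moonshotai/kimi-k2.5',
      prompt: 'Find the latest announcements from the major AI labs.',
      tools: {
        parallel_search: gateway.tools.parallelSearch({
          mode: 'agentic', // concise, token-efficient results for multi-step reasoning
          maxAgeSeconds: 3600, // accept cached results up to one hour old
        }),
      },
    });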

    Parallel web search requests are charged at the same rate as the Parallel API: $5 per 1,000 requests (each including up to 10 results), with additional results beyond 10 charged at $1 per 1,000. Read the docs for more information and details on how to use the tool.

    AI SDK

    The AI SDK supports Parallel as a tool for both web search and extraction. To use it, install the Parallel tool package:

    pnpm install @parallel-web/ai-sdk-tools
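
    The package's exact exports aren't covered here, so the following is a purely hypothetical sketch; the import name and tool properties are assumptions, and the package docs are the source of truth.

    import { streamText } from 'ai';
    // Hypothetical import: actual export names may differ
    import { createParallelTools } from '@parallel-web/ai-sdk-tools';

    const parallel = createParallelTools({ apiKey: process.env.PARALLEL_API_KEY });

    const result = streamText({
      model: 'moonshotai/kimi-k2.5',
      prompt: 'Summarize recent coverage of serverless pricing changes.',
      tools: {
        // Hypothetical web search and page extraction tools
        web_search: parallel.search,
        extract: parallel.extract,
      },
    });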

    View the docs for more details on how to utilize the tools.

    Vercel Marketplace

    You can use all Parallel products (Search, Extract, Task, FindAll, and Monitoring) in the Vercel Agent Marketplace with centralized billing through Vercel and a single API key. To get started, go to the Parallel integration and connect your account, or deploy the Next.js template to see Parallel's web research APIs integrated with Vercel in action.

    Get started with Parallel for your AI applications through AI Gateway, the AI SDK tool package, or Vercel Marketplace.

  • Parallel joins the Vercel Agent Marketplace

    Parallel is now available on the Vercel Agent Marketplace with native integration support.

    Parallel provides web tools and agents designed for LLM-powered applications, including Search, Extract, Tasks, FindAll, and Monitoring capabilities. The Vercel integration provides a single API key that works across all Parallel products, with billing handled directly through your Vercel account.

    For developers building AI features on Vercel, Parallel enables agents to access the open web for tasks like answering questions, monitoring changes, and extracting structured data. Since Parallel returns results optimized for LLM consumption, your agents can resolve tasks with fewer round trips and reduced cost and latency.

    import Parallel from "parallel-web";

    const client = new Parallel({ apiKey: process.env.PARALLEL_API_KEY });

    async function main() {
      const search = await client.beta.search({
        objective: "When was the United Nations established? Prefer UN's websites.",
        search_queries: [
          "Founding year UN",
          "Year of founding United Nations",
        ],
        max_results: 10,
        excerpts: { max_chars_per_result: 10000 },
      });
      console.log(search.results);
    }

    main().catch(console.error);

    Execute your first API call in minutes

    Install Parallel from the Marketplace or deploy the Next.js template to see Parallel's web research APIs integrated with Vercel in action.

    Marketplace Team

  • AI Gateway and one-click deploys now available on TRAE

    ByteDance's coding agent TRAE now integrates both AI Gateway and direct Vercel deployments, bringing unified AI access and instant production shipping to over 1.6 million monthly active developers. Teams can now access hundreds of models through a single API key and deploy applications directly to Vercel from the TRAE interface.

    AI Gateway provides unified access to models from Anthropic, OpenAI, Google, xAI, DeepSeek, Z.AI, MiniMax, Moonshot AI, and more without managing multiple provider accounts.

    The integration includes automatic failover that routes around provider outages, zero markup on AI tokens, and unified observability to monitor both deployments and AI usage. Meanwhile, the Vercel deployment integration handles authorization automatically and returns live URLs immediately after clicking Deploy.

    SOLO Mode

    Setting up Vercel deployment

    In SOLO mode, click the + tab and choose Integrations to connect your Vercel account. When your project is ready, click Deploy in the chat panel to ship directly to production.

    Once linked, all projects can immediately deploy to Vercel and are also visible in your Vercel dashboard.

    Setting up AI Gateway

    In Integrations, choose Vercel AI Gateway as your AI Service and add your API key from the Vercel AI Gateway dashboard. Select any model and start coding with automatic failover, low latency, and full observability.

    IDE Mode

    TRAE's IDE mode supports AI Gateway as a model provider with access to the full range of available models alongside direct deployment capabilities.

    Configuration

    • Click the model list dropdown in Builder chat and select Add Model

    • Choose Vercel AI Gateway for Provider

    • Select your model or choose Other Models and enter the creator/model slug

    • Add your API key

    You can switch models with a single configuration change while maintaining unified billing through Vercel. This creates a complete development experience where teams write code with any AI model, then ship to production with one click from the same interface.

    Get started with AI Gateway or explore the documentation to learn more.

  • Turbo build machines by default for new Pro projects

    Turbo build machines are now the default for all new Pro projects and projects upgrading from Hobby to the Pro plan.

    Turbo build machines were introduced in October for all paid plans, delivering 30 vCPUs and 60 GB of memory for faster build performance.

    Teams adopting Turbo build machines have seen significant build time improvements:

    • up to 30% faster for builds under 2 minutes

    • up to 50% faster for builds that take 2-10 minutes

    • up to 70% faster for builds over 10 minutes

    Learn more in the documentation or customize your build machine in settings.