---
title: "Observability"
description: "Implement observability and logging to monitor your SvelteKit application in production and debug issues effectively."
canonical_url: "https://vercel.com/academy/svelte-on-vercel/svelte-observability"
md_url: "https://vercel.com/academy/svelte-on-vercel/svelte-observability.md"
docset_id: "vercel-academy"
doc_version: "1.0"
last_updated: "2026-04-11T19:42:17.185Z"
content_type: "lesson"
course: "svelte-on-vercel"
course_title: "Svelte on Vercel"
prerequisites: []
---

<agent-instructions>
Vercel Academy — structured learning, not reference docs.
Lessons are sequenced.
Adapt commands to the human's actual environment (OS, package manager, shell, editor) — detect from project context or ask, don't assume.
The lesson shows one path; if the human's project diverges, adapt concepts to their setup.
Preserve the learning goal over literal steps.
Quizzes are pedagogical — engage, don't spoil.
Quiz answers are included for your reference.
</agent-instructions>

# Observability


When something breaks in production, "it's not working" isn't enough information. You need to know which endpoint failed, how long it took, what the input was, and whether the error is transient or permanent. Structured logging gives you that context.

## Outcome

Add structured logging to the ski-alerts API endpoints so you can trace requests and debug issues in production.

## Fast Track

1. Use `console.log`, `console.warn`, and `console.error` with structured JSON data
2. Add request timing to API endpoints
3. Log at the right level: info for success, warn for retryable errors, error for fatal failures

## Log Levels Matter

```
console.log()   → INFO  → Normal operations, request timing, step completions
console.warn()  → WARN  → Retryable errors, degraded state, slow responses
console.error() → ERROR → Fatal errors, unrecoverable failures, data corruption
```

Vercel captures all three levels and displays them in the **Logs** tab of your project dashboard. You can filter by level, search by text, and correlate logs with specific deployments.
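As a quick sketch of that mapping, a tiny helper can pick the console method from an outcome. The `levelFor` name and the outcome shape are illustrative, not part of the course code:

```typescript
type Level = 'log' | 'warn' | 'error';

// Map an outcome to a console method, mirroring the table above:
// success → INFO, retryable failure → WARN, anything else → ERROR.
function levelFor(outcome: { ok: boolean; retryable?: boolean }): Level {
  if (outcome.ok) return 'log';
  return outcome.retryable ? 'warn' : 'error';
}

// All three methods land in Vercel's Logs tab, filterable by level.
console[levelFor({ ok: false, retryable: true })]('[Evaluate] Weather fetch failed', { attempt: 1 });
```

Centralizing the choice like this keeps levels consistent, but inlining `console.warn` at the call site (as the course code does) is just as valid.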

## Existing Logging in the Workflow

The workflow steps from Section 3 already have some logging:

```typescript title="workflows/evaluate-alerts.ts"
// Step completion (INFO)
console.log('[Workflow] Round complete', {
  round: recheckCount + 1,
  evaluated: allResults.length,
  triggered: triggered.length
});

// Retryable error (WARN)
console.warn(`[Evaluate] Weather fetch failed for ${resort.name}, attempt ${attempt}`, error);

// Fatal error (ERROR)
console.error(`[Evaluate] Fatal: Resort not found: ${resortId}`);
```

This is good but incomplete. You can't correlate logs from the same request or measure total processing time from the logs alone.

## Hands-on exercise 4.2

Let's add structured logging with request context and timing to the API endpoints:

**Requirements:**

1. Add a request ID to the workflow endpoint so you can correlate enqueued runs
2. Add a request ID and timing to the evaluate endpoint
3. Log the start and end of each evaluate request with duration
4. Use consistent prefixes (`[Workflow]`, `[Evaluate]`) for easy filtering

**Implementation hints:**

- Generate a request ID with `crypto.randomUUID()` and include it in every log for that request
- Use `Date.now()` at start and end to calculate duration
- Log input summary (number of alerts, which resorts) at the start of each request
- Log output summary (results, triggered count, errors) at the end
- Keep the logged objects small: don't log entire alert arrays, just counts and IDs

## Try It

1. **Send a workflow request:**
   ```bash
   $ curl -X POST http://localhost:5173/api/workflow \
     -H "Content-Type: application/json" \
     -d '{"alerts": [{"id": "a1", "resortId": "mammoth", "condition": {"type": "conditions", "match": "powder"}, "originalQuery": "test", "createdAt": "2025-01-01", "triggered": false}]}'
   ```

2. **Check the server logs for structured output:**

   ```
   [Workflow] Enqueued { requestId: "abc-123", alertCount: 1, resorts: ["mammoth"] }
   [Workflow] Round complete { round: 1, evaluated: 1, triggered: 0 }
   ```

   The first line comes from the route handler. The second comes from the workflow itself (added in Section 3).

3. **Test the evaluate endpoint with structured logging:**

   ```bash
   $ curl -X POST http://localhost:5173/api/evaluate \
     -H "Content-Type: application/json" \
     -d '{"alerts": [{"id": "a1", "resortId": "mammoth", "condition": {"type": "conditions", "match": "powder"}, "originalQuery": "test", "createdAt": "2025-01-01", "triggered": false}]}'
   ```

   Logs should show start and end with duration:

   ```
   [Evaluate] Started { requestId: "def-456", alertCount: 1, resorts: ["mammoth"] }
   [Evaluate] Completed { requestId: "def-456", duration: 523, evaluated: 1, triggered: 0 }
   ```

## Commit

```bash
git add -A
git commit -m "feat(observability): add structured logging with request IDs"
git push
```

## Done-When

- [ ] Workflow endpoint logs each enqueued run with a request ID and alert summary
- [ ] Evaluate endpoint logs start and end of each request with duration
- [ ] Each request has a unique request ID in all its log lines
- [ ] Log levels match severity (log for success, warn for retryable, error for fatal)

## Solution

The workflow route handler (`/api/workflow`) just calls `start()` and returns a run ID. It doesn't do the actual work, so there's not much to log there. The interesting logging happens in two places: the workflow file (where steps run) and the evaluate endpoint (which does synchronous evaluation).

**1. Add request tracking to the workflow route handler:**

```typescript title="src/routes/api/workflow/+server.ts" {5-6,14-18}
import { json } from '@sveltejs/kit';
import { start } from 'workflow/api';
import evaluateAlerts from '../../../../workflows/evaluate-alerts';
import type { RequestHandler } from './$types';

export const POST: RequestHandler = async ({ request }) => {
  const requestId = crypto.randomUUID().slice(0, 8);
  const { alerts } = await request.json();

  if (!alerts || !Array.isArray(alerts)) {
    return json({ error: 'alerts array required' }, { status: 400 });
  }

  console.log('[Workflow] Enqueued', {
    requestId,
    alertCount: alerts.length,
    resorts: [...new Set(alerts.map((a: { resortId: string }) => a.resortId))]
  });

  const run = await start(evaluateAlerts, [{ alerts }]);

  return json({
    requestId,
    runId: run.runId,
    status: 'started'
  });
};
```

The request ID lets you correlate this log line with whatever the workflow steps produce. Short and sweet. The actual evaluation logging already exists in `workflows/evaluate-alerts.ts` from Section 3.

**2. Add structured logging to the evaluate endpoint:**

The `/api/evaluate` endpoint does synchronous weather fetching. This is where timing and error tracking matter most:

```typescript title="src/routes/api/evaluate/+server.ts" {2-3,9-13,27-32}
export const POST: RequestHandler = async ({ request }) => {
  const requestId = crypto.randomUUID().slice(0, 8);
  const startTime = Date.now();
  const { alerts } = (await request.json()) as { alerts: Alert[] };

  // ... validation and grouping unchanged ...

  console.log('[Evaluate] Started', {
    requestId,
    alertCount: alerts.length,
    resorts: [...alertsByResort.keys()]
  });

  // ... existing evaluation loop ...
  // Add { requestId } to warn/error calls within the loop

  console.log('[Evaluate] Completed', {
    requestId,
    duration: Date.now() - startTime,
    evaluated: results.length,
    triggered: results.filter((r) => r.triggered).length
  });

  return json({ /* ... unchanged ... */ });
};
```
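The "add `{ requestId }` to warn/error calls within the loop" comment can look like this in practice. This sketch stubs the weather fetch; `fetchWeather` here is a stand-in for the fetch logic from earlier sections, not its real signature:

```typescript
const requestId = 'def-456'; // in the real handler: crypto.randomUUID().slice(0, 8)

// Stand-in for the real weather fetch from earlier sections.
async function fetchWeather(resortId: string): Promise<{ conditions: string }> {
  if (resortId === 'unknown') throw new Error('resort not found');
  return { conditions: 'powder' };
}

async function evaluateResort(resortId: string) {
  try {
    const weather = await fetchWeather(resortId);
    return { resortId, triggered: weather.conditions === 'powder' };
  } catch (error) {
    // Same requestId as the Started/Completed lines, so this failure is correlatable.
    console.warn('[Evaluate] Weather fetch failed', { requestId, resortId });
    return { resortId, triggered: false };
  }
}
```

With the ID on every line, filtering the Logs tab for `def-456` shows the whole request: start, any fetch failures, and completion with duration.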

## Troubleshooting

**Warning: Logs don't appear in the Vercel dashboard**

Vercel logs are per-deployment. Make sure you're looking at the correct deployment in the **Logs** tab, not a previous one. Logs also take a few seconds to appear after the request completes.

**Warning: requestId shows up in some log lines but not others**

Make sure `requestId` is defined at the top of the handler before any log calls. If you generate it after the first `console.log`, that first line won't have it. Define it right after `Date.now()`.

## Advanced: Vercel Log Drain

For production monitoring, you can forward Vercel logs to external services. Go to your project's **Settings → Log Drains** to configure integrations with Datadog, Axiom, or other observability platforms. See the [Vercel Log Drains docs](https://vercel.com/docs/observability/log-drains) for setup instructions.


---

[Full course index](/academy/llms.txt) · [Sitemap](/academy/sitemap.md)
