
Server-Sent Events: The Humble Hero Between REST and WebSockets

Oscar van der Leij

A platform engineer at a growing SaaS company was troubleshooting their notification system on a Tuesday morning when she noticed something odd. Their WebSocket implementation was rock solid, with zero complaints. But their infrastructure costs? Through the roof. Thousands of persistent connections, each one a tiny vampire sucking resources from their server fleet. Meanwhile, their REST-based polling solution for less critical updates was hammering the API with requests every few seconds, creating a thunderstorm of unnecessary traffic. She found herself wondering: wasn't there something in between?

The challenge many architects face is the false dichotomy between request-response patterns and full-duplex communication. We reach for REST when we need simplicity, and WebSockets when we need real-time bidirectional communication. But there's a vast middle ground: scenarios where the server needs to push updates to clients, but the client rarely needs to talk back. Stock tickers, live dashboards, progress notifications, server logs streaming to a browser: these are one-way streets masquerading as two-way highways. That's where Server-Sent Events (SSE) quietly excels, sitting in the corner like the reliable friend who doesn't need constant attention but always shows up when needed.

The Diplomatic Courier

Server-Sent Events can be seen as a diplomatic courier service. With traditional REST, you're constantly sending messengers (HTTP requests) to check if there's any news: "Any updates?" "How about now?" "Anything yet?" It's exhausting and inefficient. WebSockets, on the other hand, are like installing a dedicated telephone line. Powerful, but you're paying for that infrastructure whether you're actively talking or not.

SSE is different. It's like subscribing to a courier service where the server sends you updates whenever there's news, but you don't need to maintain an expensive two-way communication channel. The connection stays open, the server pushes data when it has something to say, and your client listens. Simple. Elegant. HTTP-native.

At its core, SSE is built on standard HTTP. A client makes a request, and the server responds with a text/event-stream content type and keeps the connection open. The server can then send events as formatted text whenever it wants. The browser's built-in EventSource API handles reconnection automatically, and you get a clean, straightforward interface for receiving updates.

The Technical Deep Dive

SSE operates on a beautifully simple protocol. When a client establishes a connection, the server responds with specific headers and then streams events in a text-based format. Each event follows a simple structure:

event: message_type
data: {"key": "value"}
id: unique_event_id

The blank line at the end signals the completion of an event. The browser's EventSource automatically parses these, handles reconnections, and even tracks the last received event ID for resuming interrupted streams.
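To make the framing concrete, here is a minimal sketch of a parser for this wire format, mirroring what EventSource does internally. It is simplified: a real parser also honours the retry: field and joins multi-line data: fields with newlines.

```csharp
using System;
using System.Collections.Generic;

public record SseEvent(string Id, string EventType, string Data);

public static class SseParser
{
    // Parses a raw text/event-stream chunk into discrete events.
    // Simplified sketch: ignores the "retry:" field and does not join
    // multi-line "data:" fields the way a spec-compliant parser would.
    public static List<SseEvent> Parse(string stream)
    {
        var events = new List<SseEvent>();
        string id = "", type = "message", data = "";

        foreach (var line in stream.Split('\n'))
        {
            if (line.StartsWith(":")) continue;               // comment / keep-alive line, ignored
            else if (line.StartsWith("id:")) id = line[3..].Trim();
            else if (line.StartsWith("event:")) type = line[6..].Trim();
            else if (line.StartsWith("data:")) data += line[5..].Trim();
            else if (line.Length == 0 && data.Length > 0)     // blank line dispatches the event
            {
                events.Add(new SseEvent(id, type, data));
                type = "message";                              // type resets; id persists per spec
                data = "";
            }
        }
        return events;
    }
}
```

Feeding it the example above, Parse("id: 1\nevent: metrics\ndata: {\"cpu\": 42}\n\n") yields a single event with type metrics and the JSON payload as its data.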

The Protocol Specifications

Feature                   SSE                     WebSockets              Long Polling
Direction                 Server to client        Bidirectional           Server to client (per request)
Protocol                  HTTP                    WebSocket (ws://)       HTTP
Auto-reconnect            Built-in                Manual                  Manual
Message format            Text (UTF-8)            Text or binary          Any
Browser support           Excellent (except IE)   Excellent               Universal
Proxy/firewall friendly   Yes                     Sometimes problematic   Yes
Connection overhead       Low                     Medium                  High

When SSE Shines

Not every real-time problem needs the same solution. SSE hits its sweet spot in scenarios where data flows predominantly in one direction: from the server to the client. If your architecture has a server producing a continuous stream of updates that clients need to consume, SSE is often the simplest and most resource-efficient choice. It slots naturally into existing HTTP infrastructure, which means fewer surprises with firewalls, proxies, and load balancers. Consider SSE your go-to solution when:

  • Updates flow primarily from server to client: Live feeds, notifications, monitoring dashboards
  • You want HTTP compatibility: Works through proxies, load balancers, and CDNs without special configuration
  • Automatic reconnection matters: The browser retries natively, and the server can tune the retry delay with the retry: field
  • You're already invested in HTTP infrastructure: Authentication, compression, caching all work as expected
  • Resource efficiency is important: Lower overhead than WebSockets for one-way communication

When to Look Elsewhere

SSE is a sharp tool, but it is not the right tool for every job. Its one-directional nature, which is its greatest strength in the right context, becomes a liability when clients need to send frequent or complex messages back to the server. Before committing to SSE, check whether any of these conditions apply to your use case. If they do, a different protocol will serve you better and save you from working against the grain of the technology. Skip SSE when:

  • You need true bidirectional communication: Chat applications, collaborative editing, real-time gaming
  • Binary data is essential: WebSockets handle binary more efficiently
  • Internet Explorer support is required: Though if you're still supporting IE, we need to have a different conversation
  • Sub-second latency is critical: WebSockets have slightly lower latency due to their frame-based protocol

Rolling Up Our Sleeves

Let's build a practical example: a real-time server health monitoring dashboard. The server will push CPU, memory, and request metrics to connected clients.

We'll create a single ASP.NET Core Web API project that also serves the HTML client as a static file. By the end you'll have a working SSE stream you can test in any browser by just running dotnet run.

Project Structure

HealthMonitor/
├── HealthMonitor.Api/
│   ├── Controllers/
│   │   └── MetricsController.cs          # SSE streaming endpoint
│   ├── Models/
│   │   └── ServerMetrics.cs              # Metrics record
│   ├── Services/
│   │   ├── IAuthService.cs               # Auth abstraction
│   │   ├── MockAuthService.cs            # Dev/test implementation
│   │   └── FakeAuthenticationHandler.cs  # Dev auth middleware
│   ├── Program.cs                        # Bootstrap + static files
│   └── HealthMonitor.Api.csproj          # References client folder
├── HealthMonitor.Client/
│   └── index.html                        # Browser dashboard
└── HealthMonitor.Tests/
    └── SseLoadTest.cs                    # Concurrent load test

Create the projects with:

mkdir HealthMonitor && cd HealthMonitor
dotnet new webapi -n HealthMonitor.Api --no-openapi
dotnet new xunit -n HealthMonitor.Tests
mkdir HealthMonitor.Client

Step 1: Define the model

Start with the data model so the controller can reference it cleanly:

// HealthMonitor.Api/Models/ServerMetrics.cs
namespace HealthMonitor.Api.Models;

public record ServerMetrics
{
    public int CpuUsage { get; init; }
    public int MemoryUsage { get; init; }
    public int RequestsPerSecond { get; init; }
    public DateTime Timestamp { get; init; }
}

Step 2: Set up the streaming endpoint

This is the complete controller file including auth and reconnection support (covered in Steps 3 and 5):

// HealthMonitor.Api/Controllers/MetricsController.cs
using System.Security.Claims;
using System.Text;
using System.Text.Json;
using HealthMonitor.Api.Models;
using HealthMonitor.Api.Services;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace HealthMonitor.Api.Controllers;

[ApiController]
[Route("api/[controller]")]
public class MetricsController(IAuthService authService) : ControllerBase
{
    [Authorize]
    [HttpGet("stream")]
    public async Task StreamMetrics(
        [FromHeader(Name = "Last-Event-ID")] string? lastEventId,
        CancellationToken cancellationToken)
    {
        var userId = User.FindFirst(ClaimTypes.NameIdentifier)?.Value;
        if (!await authService.CanViewMetrics(userId))
        {
            Response.StatusCode = StatusCodes.Status403Forbidden;
            return;
        }

        Response.ContentType = "text/event-stream";
        Response.Headers.Append("Cache-Control", "no-cache");
        // Note: no Connection header here; keep-alive is implicit in
        // HTTP/1.1, and connection-specific headers are forbidden in HTTP/2.

        // Resume from last received event if the client reconnected
        // Resume from last received event if the client reconnected
        var eventId = int.TryParse(lastEventId, out var id) ? id : 0;

        // Reuse a single options instance instead of allocating one per event
        var jsonOptions = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

        while (!cancellationToken.IsCancellationRequested)
        {
            try
            {
                var metrics = await GatherSystemMetrics();

                // Format as SSE event
                var eventData = $"id: {++eventId}\n" +
                                $"event: metrics\n" +
                                $"data: {JsonSerializer.Serialize(metrics, jsonOptions)}\n\n";

                var bytes = Encoding.UTF8.GetBytes(eventData);
                await Response.Body.WriteAsync(bytes, cancellationToken);
                await Response.Body.FlushAsync(cancellationToken);

                await Task.Delay(TimeSpan.FromSeconds(2), cancellationToken);
            }
            catch (Exception ex) when (ex is OperationCanceledException ||
                                       ex is IOException)
            {
                // Client disconnected, exit gracefully
                break;
            }
        }
    }

    private Task<ServerMetrics> GatherSystemMetrics() =>
        Task.FromResult(new ServerMetrics
        {
            CpuUsage = Random.Shared.Next(10, 90),
            MemoryUsage = Random.Shared.Next(40, 80),
            RequestsPerSecond = Random.Shared.Next(100, 1000),
            Timestamp = DateTime.UtcNow
        });
}

Key implementation details:

  • Content-Type header: Must be text/event-stream for SSE
  • Cache-Control: Prevents proxies from buffering the stream
  • Flushing: Critical for ensuring events reach the client immediately
  • Cancellation handling: Gracefully handle client disconnections
  • Event ID: Tracks position in the stream so reconnecting clients can resume
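One detail the controller above does not cover: intermediaries such as reverse proxies and load balancers often drop connections that stay silent too long. The SSE format reserves lines beginning with a colon as comments, which EventSource silently ignores, so a server can send periodic heartbeats. A sketch of such a loop (the helper name and the 15-second interval are illustrative; pick an interval below your proxy's idle timeout):

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class SseHeartbeat
{
    // Writes an SSE comment line (": ping") at a fixed interval. Comment
    // lines are ignored by EventSource but keep idle connections alive
    // through proxies with aggressive idle timeouts.
    public static async Task RunAsync(Stream body, TimeSpan interval, CancellationToken ct)
    {
        var ping = Encoding.UTF8.GetBytes(": ping\n\n");
        while (!ct.IsCancellationRequested)
        {
            await Task.Delay(interval, ct);
            await body.WriteAsync(ping, ct);
            await body.FlushAsync(ct);
        }
    }
}
```

In the controller, you could run this alongside the metrics loop (for example with Task.WhenAny) so the stream never goes quiet between real events.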

Step 3: Add authentication

The controller uses [Authorize], so the project needs an auth scheme registered. For development, a fake handler auto-authenticates every request so you can test without a real identity provider.

Services/IAuthService.cs is the abstraction the controller depends on:

// HealthMonitor.Api/Services/IAuthService.cs
namespace HealthMonitor.Api.Services;

public interface IAuthService
{
    Task<bool> CanViewMetrics(string? userId);
}

Services/MockAuthService.cs is the development implementation (replace with real logic in production):

// HealthMonitor.Api/Services/MockAuthService.cs
namespace HealthMonitor.Api.Services;

public class MockAuthService : IAuthService
{
    public Task<bool> CanViewMetrics(string? userId)
    {
        // Allow all users for development/testing. Replace with real auth logic in production.
        return Task.FromResult(true);
    }
}

Services/FakeAuthenticationHandler.cs satisfies [Authorize] without a real login flow during development:

// HealthMonitor.Api/Services/FakeAuthenticationHandler.cs
using System.Security.Claims;
using System.Text.Encodings.Web;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Options;

namespace HealthMonitor.Api.Services;

public class FakeAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
    public FakeAuthenticationHandler(
        IOptionsMonitor<AuthenticationSchemeOptions> options,
        ILoggerFactory logger,
        UrlEncoder encoder)
        : base(options, logger, encoder) // the ISystemClock overload is obsolete in .NET 8+
    {
    }

    protected override Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        var claims = new[] { new Claim(ClaimTypes.NameIdentifier, "dev-user") };
        var identity = new ClaimsIdentity(claims, Scheme.Name);
        var principal = new ClaimsPrincipal(identity);
        var ticket = new AuthenticationTicket(principal, Scheme.Name);

        return Task.FromResult(AuthenticateResult.Success(ticket));
    }
}

Step 4: Wire it together

Program.cs registers all services, sets up the fake auth scheme, and serves the HealthMonitor.Client/ folder as static files so the dashboard loads at the root URL:

// HealthMonitor.Api/Program.cs
using Microsoft.Extensions.FileProviders;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddSingleton<HealthMonitor.Api.Services.IAuthService, HealthMonitor.Api.Services.MockAuthService>();

builder.Services.AddAuthentication("DevAuth")
    .AddScheme<Microsoft.AspNetCore.Authentication.AuthenticationSchemeOptions,
               HealthMonitor.Api.Services.FakeAuthenticationHandler>("DevAuth", _ => { });

builder.Services.AddAuthorization();

var app = builder.Build();

// Serve HealthMonitor.Client/ (sibling folder) as root static files
var clientRoot = Path.GetFullPath(Path.Combine(builder.Environment.ContentRootPath, "..", "HealthMonitor.Client"));
var clientFileProvider = new PhysicalFileProvider(clientRoot);

app.UseDefaultFiles(new DefaultFilesOptions
{
    FileProvider = clientFileProvider,
    DefaultFileNames = new List<string> { "index.html" }
});
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = clientFileProvider,
    RequestPath = ""
});

app.UseHttpsRedirection();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();

app.Run();

Update HealthMonitor.Api.csproj to copy the client folder to the build output:

<!-- HealthMonitor.Api/HealthMonitor.Api.csproj -->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

  <ItemGroup>
    <Content Include="..\HealthMonitor.Client\**\*" CopyToOutputDirectory="PreserveNewest" />
  </ItemGroup>
</Project>

Step 5: Handle reconnection

The browser reconnects automatically and sends the Last-Event-ID header. The controller in Step 2 already reads it via [FromHeader(Name = "Last-Event-ID")] and initialises eventId from it, so the stream resumes from the correct position. If you have a persistent event store, you can replay missed events before entering the live loop by querying events with an ID greater than lastEventId.
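If you want to actually replay missed events rather than just continue the counter, a small in-memory ring buffer is enough for a single-server setup. The sketch below is illustrative (the class name, capacity, and tuple shape are assumptions, and a multi-server deployment would need a shared store such as Redis instead):

```csharp
using System.Collections.Generic;
using System.Linq;

// A minimal in-memory event store for replaying missed SSE events after a
// reconnect. Keeps only the most recent `capacity` events; anything older
// is lost, which is acceptable for metrics but not for critical data.
public class EventBuffer<T>
{
    private readonly Queue<(int Id, T Payload)> _events = new();
    private readonly int _capacity;

    public EventBuffer(int capacity) => _capacity = capacity;

    public void Add(int id, T payload)
    {
        _events.Enqueue((id, payload));
        while (_events.Count > _capacity)
            _events.Dequeue(); // drop the oldest event
    }

    // Everything the client missed: events with an ID above Last-Event-ID.
    public IEnumerable<(int Id, T Payload)> After(int lastEventId) =>
        _events.Where(e => e.Id > lastEventId);
}
```

In StreamMetrics, you would write each event returned by After(eventId) before entering the live loop, so a reconnecting client catches up first and then resumes real-time streaming.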

Step 6: Create the browser dashboard

This file lives in the sibling HealthMonitor.Client/ folder. The API project references and serves it as a static file (configured in Step 4), so the EventSource URL is relative to the same origin, with no hardcoded port and no CORS issues:

<!-- HealthMonitor.Client/index.html -->
<!DOCTYPE html>
<html lang="en">
<body>
  <p>CPU: <span id="cpu">--</span></p>
  <p>Memory: <span id="memory">--</span></p>
  <p>Req/s: <span id="rps">--</span></p>

  <script>
    const eventSource = new EventSource('/api/metrics/stream', {
        withCredentials: true
    });

    eventSource.addEventListener('metrics', (event) => {
        const metrics = JSON.parse(event.data);
        updateDashboard(metrics);
        console.log(`Event ID: ${event.lastEventId}`);
    });

    eventSource.addEventListener('error', (error) => {
        console.error('SSE error:', error);
        // EventSource will automatically attempt to reconnect
    });

    function updateDashboard(metrics) {
        document.getElementById('cpu').textContent = `${metrics.cpuUsage}%`;
        document.getElementById('memory').textContent = `${metrics.memoryUsage}%`;
        document.getElementById('rps').textContent = metrics.requestsPerSecond;
    }
  </script>
</body>
</html>

Now run dotnet run from HealthMonitor.Api/. Check the console output for the assigned port (e.g. Now listening on: http://localhost:5241) and open that URL. The dashboard will load and start streaming metrics immediately.

Step 7: Test and monitor

Manual test with curl:

curl -N -H "Accept: text/event-stream" http://localhost:<PORT>/api/metrics/stream

Load test HealthMonitor.Tests/SseLoadTest.cs:

// HealthMonitor.Tests/SseLoadTest.cs
using System.IO;
using System.Net.Http;
using Xunit;

namespace HealthMonitor.Tests;

public class SseLoadTest
{
    [Fact]
    public async Task Should_Handle_Multiple_Concurrent_Connections()
    {
        var tasks = Enumerable.Range(0, 100)
            .Select(_ => ConnectAndReceiveEvents())
            .ToArray();

        await Task.WhenAll(tasks);
    }

    private async Task ConnectAndReceiveEvents()
    {
        using var client = new HttpClient();
        client.Timeout = TimeSpan.FromMinutes(5);

        using var response = await client.GetAsync(
            "http://localhost:<PORT>/api/metrics/stream", // replace <PORT> with the port shown in dotnet run output
            HttpCompletionOption.ResponseHeadersRead);

        using var stream = await response.Content.ReadAsStreamAsync();
        using var reader = new StreamReader(stream);

        var eventCount = 0;
        while (eventCount < 10) // Receive 10 events then exit
        {
            var line = await reader.ReadLineAsync();
            if (line?.StartsWith("data:") == true)
                eventCount++;
        }
    }
}

SSE connections are persistent. Unlike a normal request that completes in milliseconds, each connection holds a response stream open indefinitely. A server that handles one connection fine can fall over under hundreds of concurrent ones if it blocks threads or exhausts the connection pool. This test opens 100 connections simultaneously, each reading 10 events before exiting, to verify the server scales the way SSE promises it should. Run it before deploying. Make sure HealthMonitor.Api is already running, then from the HealthMonitor.Tests/ folder:

dotnet test

The diagram below shows the full connection lifecycle: the initial handshake, the event stream, a client disconnect, and the automatic reconnection with the Last-Event-ID header that lets the server resume from where it left off.

Client Browser                                 API Server
     | EventSource() / GET /events  ------------>  |
     | <----------  200 OK (text/event-stream)     |
     | <----------  data: {...}  (event streaming) |
     X  network loss, connection closed            |
     | automatic retry, Last-Event-ID: 42  ----->  |
     | <----------  resume stream                  |

The Victory Lap

Server-Sent Events deserve a prominent place in your architectural toolkit:

  • Embrace the simplicity: SSE gives you server push with HTTP's familiar semantics and infrastructure compatibility
  • Choose the right tool: Use SSE for server-to-client updates, WebSockets for bidirectional communication, REST for request-response
  • Leverage built-in features: Automatic reconnection and event ID tracking come free with the browser's EventSource API
  • Scale with confidence: SSE connections are lighter weight than WebSockets and far more efficient than frequent polling

Here's your homework: Look at your current architecture. Where are you polling for updates? Where are you maintaining WebSocket connections just to push occasional notifications? Could SSE simplify your life and reduce your infrastructure costs?

And perhaps most importantly: When was the last time you questioned whether the "standard" solution was actually the right solution for your specific use case?
