OG Image Generation on the Edge

I work at Cloudflare, and I’m always looking for excuses to explore the platform’s primitives in weird ways. When I rebuilt my personal site, OG images seemed like the perfect opportunity: a real(ish) problem that could be solved by combining Workers, Browser Rendering, R2, and the Cache API. Fair warning: this is experimental, not battle-tested. I’m sharing what I learned, not prescribing a production-ready solution.

Yes, I'm biased. But at least I'm having fun.

TL;DR

Embed your OG image design as a hidden template element on your page. A Cloudflare Worker visits the page, extracts the template, and screenshots it. Same HTML, same CSS, same fonts. No template drift.

A rough reference implementation is available on GitHub. You can deploy it to your own Cloudflare account in a few minutes if you want to try this approach yourself.

mattrothenberg/cf-og

Cloudflare Worker for screenshotting OG images from your page templates.

The Key Insight

Most OG image solutions work like this: you define your template separately, pass data to it, and get an image back. Satori wants JSX. Other services want you to use their template builder. The template always lives somewhere else.

This approach is different. The OG template is embedded directly in your deployed page as a hidden <template> element. Same HTML. Same CSS. Same Tailwind classes. Same fonts. When you want an OG image, a worker visits your actual page, extracts the template, and screenshots it.

This means there’s no template drift. When you update your design system, your OG images update automatically. No separate build step, no special syntax, no “OG-compatible” subset of CSS to learn.

I was heavily inspired by OGKit, built by Peter Suhm. Peter figured out that OG images could just be hidden web content waiting to be screenshotted, and that insight was the spark for this whole system.

Seriously, go follow him. He's cooking.

Why Not Satori?

Satori is the popular choice for OG images, and for good reason. It’s fast and runs anywhere JavaScript does. Vercel’s @vercel/og is built on it, and it powers countless sites.

But Satori requires you to maintain a separate design system:

  • Limited CSS support: no grid, limited flexbox, specific font handling
  • JSX-to-SVG translation: your components need to be written with Satori’s subset in mind
  • Font loading complexity: you need to bundle fonts or fetch them at runtime
  • Separate templates: your OG designs live in a different place than your site’s actual pages

For simple text-on-gradient images, Satori is great. But I wanted to use WebGL shaders, the same Tailwind utilities, and the same fonts as the rest of my site. I didn’t want to maintain a parallel design system just for OG images.

What You Can Build

Because the screenshot captures a real browser, you can use anything that renders in Chrome: canvas graphics, CSS patterns, custom fonts, whatever. Here are some examples (click to open the full size image):

Grain gradient — WebGL shader
Dither pattern — WebGL shader
Physics spheres — React Three Fiber + Rapier
Minimal typography — pure CSS

These are live images served by the OG worker. The grain and dither examples use canvas to generate noise patterns, the physics example renders a React Three Fiber scene, and the minimal example is pure CSS. All four use the same Tailwind utilities and fonts as the rest of the site.


How It Works

The system has four main pieces:

Template

An inert HTML element that holds your OG design. Same CSS, same components — just hidden until needed.

Worker

A Cloudflare Worker that receives screenshot requests, checks caches, and orchestrates the whole flow.

Durable Object

A singleton that holds a persistent browser instance, deduplicates concurrent requests, and manages failure cooldowns.

Cache

Two-tier caching with Edge Cache API for speed and R2 for durability. Repeat requests are instant.

The Template Element

The key building block is HTML’s <template> element. Content inside a <template> is inert: it doesn’t render, scripts don’t execute, images don’t load. It’s just sitting there, waiting to be used.

Here’s the OGTemplate component:

---
interface Props {
  width?: number;
  height?: number;
}

const { width = 1200, height = 630 } = Astro.props;
---

<template
  data-og-template
  data-og-width={String(width)}
  data-og-height={String(height)}
>
  <div style={`width:${width}px;height:${height}px;overflow:hidden;`}>
    <slot />
  </div>
</template>

The data-og-* attributes tell the worker what dimensions to use. The slot receives your actual OG design — regular HTML and CSS that can use all your site’s styles.
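For example, a page might wrap its OG design like this. This is a hypothetical sketch assuming a `title` prop, not the exact markup from my site:

```astro
---
import OGTemplate from "../components/OGTemplate.astro";
const { title } = Astro.props;
---

<OGTemplate>
  <div class="flex h-full items-center bg-zinc-950 p-16">
    <h1 class="text-7xl font-bold text-white">{title}</h1>
  </div>
</OGTemplate>
```

Whatever goes in the slot renders with the page's full stylesheet, so there's nothing OG-specific to learn.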

Template Extraction

When the worker visits your page, it runs a script that extracts the template and replaces the page content:

// Runs in the browser context via page.evaluate()
(() => {
  const tpl = document.querySelector('template[data-og-template]');
  if (!tpl) return null;

  // Read dimensions from data attributes
  const width = parseInt(tpl.getAttribute('data-og-width') || '1200', 10);
  const height = parseInt(tpl.getAttribute('data-og-height') || '630', 10);

  // Replace body with template content
  document.body.innerHTML = '';
  document.body.style.cssText = `margin:0;padding:0;width:${width}px;height:${height}px;overflow:hidden;`;
  document.body.appendChild(tpl.content.cloneNode(true));

  return { width, height };
})()

This is the magic moment: the page transforms from your regular content into a perfectly-sized OG image canvas. The worker then resizes the viewport to match and takes a screenshot.

Wiring Up Meta Tags

To use the OG worker, you point your og:image meta tag at the worker URL, passing your page URL as a parameter:

<meta
  property="og:image"
  content="https://og.mattrothenberg.com/?url=https://mattrothenberg.com/notes/edge-og-images"
/>

When a social platform fetches this URL, the worker screenshots your page’s template and returns the image. In Astro, I use a helper component to generate this URL:

---
// OGMeta.astro
interface Props {
  worker: string;  // https://og.mattrothenberg.com
  site: string;    // https://mattrothenberg.com
  path?: string;   // /notes/my-post
}

const { worker, site, path = Astro.url.pathname } = Astro.props;
const ogUrl = `${worker}/?url=${encodeURIComponent(site + path)}`;
---

<meta property="og:image" content={ogUrl} />
<meta property="og:image:width" content="1200" />
<meta property="og:image:height" content="630" />
<meta name="twitter:image" content={ogUrl} />

The Screenshot Flow

Here’s the complete request flow through the worker:
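In rough TypeScript it looks something like the sketch below. Every name here is a stand-in I made up, not the repo's actual API; the real worker deals in `Request`/`Response` objects rather than raw buffers:

```typescript
// Hypothetical outline of the worker's request flow (all names are mine).
type Deps = {
  isAllowed: (url: string) => boolean;
  getCached: (key: string) => Promise<ArrayBuffer | null>;
  render: (url: string) => Promise<ArrayBuffer>; // routes to the Durable Object
  store: (key: string, img: ArrayBuffer) => Promise<void>;
};

async function handleRequest(
  url: string,
  key: string,
  deps: Deps
): Promise<ArrayBuffer | null> {
  if (!deps.isAllowed(url)) return null;  // 1. allowlist check
  const hit = await deps.getCached(key);
  if (hit) return hit;                    // 2. edge cache / R2 hit
  const img = await deps.render(url);     // 3. screenshot via the DO
  await deps.store(key, img);             // 4. write back for next time
  return img;
}
```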

The browser rendering step uses Cloudflare’s Browser Rendering API, which gives you a Puppeteer-compatible interface to a headless Chrome instance running on their edge network.

A naive implementation launches a fresh browser for every screenshot, then closes it. This works, but it’s wasteful — launching a browser is the expensive part, and concurrent requests for the same image would each spin up their own browser.

The solution is a Durable Object (OGRenderer) that holds a persistent browser instance and reuses it across requests. The worker routes screenshot requests to a singleton DO:

// In the worker's fetch handler
const id = env.OG_RENDERER.idFromName("singleton");
const stub = env.OG_RENDERER.get(id);
const doRes = await stub.fetch(doUrl.toString());

The DO does three things on top of the core screenshot logic:

  1. Browser reuse — getBrowser() returns the existing browser if it’s still alive, or launches a new one if not. No more cold start per request.
  2. Request deduplication — concurrent requests for the same cache key await the same in-flight Promise instead of each taking their own screenshot.
  3. Failure cooldown — if a screenshot fails, that cache key gets a 60-second cooldown to prevent retry storms from burning through Browser Rendering quota.
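The deduplication in step 2 can be sketched as a small helper around a map of in-flight promises. This is my reconstruction, not the repo's code; `dedupe` and `inFlight` are names I made up:

```typescript
// Concurrent callers for the same key share one promise instead of
// each triggering the expensive screenshot work.
const inFlight = new Map<string, Promise<ArrayBuffer>>();

async function dedupe(
  key: string,
  work: () => Promise<ArrayBuffer>
): Promise<ArrayBuffer> {
  const existing = inFlight.get(key);
  if (existing) return existing; // piggyback on the in-flight request

  const p = work().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

The failure cooldown hangs off the same map: when `work()` rejects, the key gets stamped with a retry-after time instead of being retried immediately.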

The generate() method contains the actual screenshotting logic. It creates a new page (cheap) rather than a new browser (expensive):

private async generate({ url }: { url: string }): Promise<ArrayBuffer> {
  const browser = await this.getBrowser();
  const page = await browser.newPage();

  try {
    await page.setViewport({ width: 1200, height: 630, deviceScaleFactor: 2 });
    await page.goto(url, { waitUntil: "domcontentloaded", timeout: 15000 });

    // Extract and render the template
    const dimensions = await page.evaluate(OG_TEMPLATE_EXTRACT_SCRIPT);
    if (!dimensions) throw new Error(`No OG template found on page: ${url}`);

    await page.setViewport({
      width: dimensions.width,
      height: dimensions.height,
      deviceScaleFactor: 1,
    });

    // Wait for the component to signal it's done rendering
    await this.waitForReady(page);

    const screenshot = await page.screenshot();
    return screenshot;
  } finally {
    await page.close(); // Close the page, not the browser
  }
}

Note the finally block closes the page, not the browser. The browser stays alive for the next request.

Readiness Signal

The original implementation waited a fixed number of requestAnimationFrame callbacks before taking the screenshot — essentially “wait N frames and hope everything has rendered.” Too few frames and heavy WebGL scenes would get captured mid-render; too many and simple templates would waste time.

Components now explicitly signal when they’re done rendering by setting window.__OG_READY__ = true. The screenshotter polls for this signal:

private async waitForReady(page: Page): Promise<void> {
  try {
    await page.waitForFunction('window.__OG_READY__ === true', { timeout: 25000 });
  } catch {
    // Signal never set — fall back to 10-frame RAF wait for simple templates
    await page.evaluate(`new Promise(resolve => {
      let f = 0;
      const w = () => { f++; f < 10 ? requestAnimationFrame(w) : resolve(); };
      requestAnimationFrame(w);
    })`);
  }
}

For Three.js scenes that need many frames before they’re fully rendered, a ReadinessSignal component uses useFrame to count 30 rendered frames before signaling:

function ReadinessSignal() {
  const frameCount = useRef(0);
  const signaled = useRef(false);

  useFrame(() => {
    if (signaled.current) return;
    frameCount.current++;
    if (frameCount.current >= 30) {
      signaled.current = true;
      (window as any).__OG_READY__ = true;
    }
  });

  return null;
}

Simpler components that render synchronously can signal immediately in a useEffect:

useEffect(() => {
  (window as any).__OG_READY__ = true;
}, []);

Caching & Performance

Here’s the honest part: this approach is slow — at least on a cold start. The first request after the worker has been idle spins up a new browser instance, which takes a few seconds. Subsequent requests reuse the warm browser via the Durable Object, so screenshots are noticeably faster. Either way, this isn’t acceptable for blocking requests, which is why aggressive caching is non-negotiable.

Like, 8-10 seconds slow. You'd notice.

The caching strategy is what makes this viable:

Tier 1: Edge Cache (Cache API)

  • Stored at Cloudflare’s edge locations
  • Sub-millisecond reads for repeat requests

Tier 2: R2 Storage

  • Persistent object storage
  • Survives edge cache evictions
  • Automatically backfills the edge cache on read

async function getFromCache({ env, key, requestUrl, options }) {
  // 1. Try edge cache first (fastest)
  const cache = caches.default;
  const cacheRequest = new Request(new URL(`/cache/${key}`, requestUrl));
  const cached = await cache.match(cacheRequest);
  if (cached) return cached;

  // 2. Try R2 (persistent)
  const r2Object = await env.OG_CACHE.get(key);
  if (r2Object) {
    const response = new Response(r2Object.body, {
      headers: {
        "Content-Type": "image/png",
        "Cache-Control": `public, max-age=${cacheTtl}`,
      },
    });

    // Backfill edge cache (non-blocking)
    cache.put(cacheRequest, response.clone()).catch(() => {});

    return response;
  }

  return null; // Cache miss, need to generate
}

The cache key is a SHA-256 hash of just the URL — one URL always maps to exactly one cached image. Is the tradeoff worth it? For me, yes. I value design flexibility over raw generation speed, and the caching makes the slowness invisible in normal use.


Security & Admin

Locking Down the Screenshotter

Giving an HTTP endpoint the ability to screenshot arbitrary URLs is a recipe for abuse if you’re not careful. The worker has a few guardrails to keep things boring.

Please don't pwn me.

First, there’s an ALLOWED_ORIGINS list. The worker checks the target URL’s origin against a configured allowlist before doing anything else. If the domain isn’t on the list, the request is rejected immediately. No screenshot, no cache lookup, nothing.
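A minimal sketch of that check, with made-up names; the real worker's version may differ:

```typescript
// Reject any target URL whose origin is not explicitly allowlisted.
const ALLOWED_ORIGINS = new Set(["https://mattrothenberg.com"]);

function isAllowed(target: string): boolean {
  try {
    return ALLOWED_ORIGINS.has(new URL(target).origin);
  } catch {
    return false; // not a parseable URL: reject
  }
}
```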

Second, target URLs are normalized to just the origin and pathname. Query parameters get stripped entirely, which prevents attackers from busting the cache by appending random strings to otherwise-identical URLs. If you need to regenerate an image (say, after a design change), that happens through the admin UI’s purge functionality, not through URL manipulation.
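The normalization itself is tiny; a sketch, with `normalizeTarget` being my name for it:

```typescript
// Reduce a target URL to origin + pathname, dropping query string and
// fragment so appended junk can't create new cache entries.
function normalizeTarget(raw: string): string {
  const u = new URL(raw);
  return u.origin + u.pathname;
}
```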

Third, before spinning up a Browser Rendering session, the worker does a cheap preflight fetch of the target URL. If the page doesn’t return a 200 or the HTML doesn’t contain a data-og-template attribute, the request bails out immediately. This keeps the expensive part — launching a headless browser — behind a lightweight validation step that costs almost nothing.
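The decision itself boils down to a one-liner once you have the preflight response in hand; a sketch under my own naming, assuming the worker checks the raw HTML for the attribute string:

```typescript
// Only proceed to Browser Rendering if the page loaded and actually
// contains an OG template to screenshot.
function preflightOk(status: number, html: string): boolean {
  return status === 200 && html.includes("data-og-template");
}
```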

Admin UI

To manage the cache, I built a small admin interface that shows all cached images with their metadata. You can browse, inspect, and purge images without touching the R2 console.

The admin UI is a React SPA bundled into a single HTML file that the worker serves at /admin. It uses the same R2 bucket listing API that you’d use in a dashboard, wrapped in a minimal interface.

Protecting the Admin with Cloudflare Access

Instead of building auth into the worker, I used Cloudflare Access to handle it at the edge.

Access enforces authentication before requests even reach your worker, so the worker stays focused on its actual job.


Local Development

For development, you don’t want to wait for the worker to screenshot every change. The OGTemplate component includes an inline script that activates when you add ?og-preview to any URL:

if (new URLSearchParams(location.search).has('og-preview')) {
  document.addEventListener('DOMContentLoaded', function() {
    var tpl = document.querySelector('template[data-og-template]');
    if (tpl) {
      var w = parseInt(tpl.getAttribute('data-og-width') || '1200', 10);
      var h = parseInt(tpl.getAttribute('data-og-height') || '630', 10);
      document.body.innerHTML = '';
      document.body.style.cssText = `margin:0;padding:0;width:${w}px;height:${h}px;overflow:hidden;`;
      document.body.appendChild(tpl.content.cloneNode(true));
    }
  });
}

Visit localhost:4321/blog/my-post?og-preview and you’ll see exactly what the OG image will look like. Instant feedback, no deploy required.


The Result

Almost every page on this site has an OG image that’s just HTML and Tailwind. When I update my design system, the OG images update too. No separate templates to maintain, no design drift.

It's a work in progress. Don't check.

The tradeoffs are real: cold starts are slow, and you need Cloudflare’s Browser Rendering (which has usage limits and costs). But for a personal site where most images are cached and design flexibility matters, it’s been the right choice.

mattrothenberg/cf-og

Cloudflare Worker for screenshotting OG images from your page templates.