Edge computing brings computation closer to users, reducing latency and improving perceived performance. By running code at the edge, in data centers distributed around the world, applications can respond to users far faster than a single central origin can.
What is Edge Computing?
Edge computing executes code at the edge of the network, close to end users. Instead of requests traveling to a central server, processing happens in distributed edge locations worldwide, reducing round-trip latency from hundreds of milliseconds to tens of milliseconds.
Benefits of Edge Computing
- Low Latency: 50-200ms reduction in response times
- Global Scale: Serve users from nearest location
- Reduced Bandwidth: Process data closer to source
- Cost Efficiency: Less data transfer to origin servers
- Resilience: Distributed infrastructure reduces single points of failure
Cloudflare Workers
Cloudflare Workers run JavaScript, TypeScript, or WASM on Cloudflare's global edge network across 300+ cities worldwide. They execute in lightweight V8 isolates rather than containers, which keeps cold starts to a few milliseconds.
Basic Worker Example
// worker.js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    // Geo-location metadata Cloudflare attaches to every request
    const country = request.cf.country;
    const city = request.cf.city;

    // Custom logic
    if (url.pathname === '/api/hello') {
      return new Response(JSON.stringify({
        message: `Hello from ${city}, ${country}`,
        timestamp: Date.now(),
      }), {
        headers: { 'Content-Type': 'application/json' },
      });
    }

    // Pass through to origin and annotate the response
    const response = await fetch(request);
    const newResponse = new Response(response.body, response);
    newResponse.headers.set('X-Edge-Country', country);
    return newResponse;
  },
};
Advanced Edge Routing
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    const country = request.cf.country;

    // A/B testing at the edge (keyed by client IP for illustration);
    // persist the assignment so the variant is sticky across requests
    const userId = request.headers.get('cf-connecting-ip');
    const variant = (await env.A_B_TEST.get(userId)) ||
      (Math.random() > 0.5 ? 'A' : 'B');
    ctx.waitUntil(env.A_B_TEST.put(userId, variant));

    // Route based on location. Note: 'EU' is a continent code, not a
    // country code, so the second branch checks request.cf.continent.
    if (country === 'US') {
      url.hostname = 'us-origin.example.com';
    } else if (request.cf.continent === 'EU') {
      url.hostname = 'eu-origin.example.com';
    }

    // Cache with the Cloudflare Cache API
    const cacheKey = new Request(url.toString(), request);
    const cache = caches.default;
    let response = await cache.match(cacheKey);
    if (!response) {
      response = await fetch(cacheKey);
      response = new Response(response.body, response);
      response.headers.set('Cache-Control', 'public, max-age=3600');
      ctx.waitUntil(cache.put(cacheKey, response.clone()));
    }

    // Expose the assigned variant to the client without polluting the cache
    const withVariant = new Response(response.body, response);
    withVariant.headers.set('X-AB-Variant', variant);
    return withVariant;
  },
};
Vercel Edge Functions
Vercel Edge Functions run on the Edge Runtime, which is built on Web Standards APIs. They're automatically deployed to Vercel's global edge network.
Edge API Routes
// app/api/edge/route.ts
import { NextRequest } from 'next/server';

export const runtime = 'edge';

export async function GET(request: NextRequest) {
  // geo is populated by Vercel on NextRequest (not on the plain Request
  // type); in newer Next.js versions use geolocation() from @vercel/functions
  const { geo } = request;
  const country = geo?.country || 'unknown';
  const city = geo?.city || 'unknown';

  // Personalization based on location
  const timezone = geo?.timezone || 'UTC';
  const localTime = new Date().toLocaleString('en-GB', {
    timeZone: timezone,
  });

  return Response.json({
    message: `Hello from ${city}, ${country}`,
    localTime,
    edge: true,
  });
}

// With streaming
export async function POST(request: Request) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for (let i = 0; i < 10; i++) {
        controller.enqueue(encoder.encode(`Chunk ${i}\n`));
        await new Promise(resolve => setTimeout(resolve, 100));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain' },
  });
}
Edge Caching Strategies
Cache-Control Headers
// Cache static assets for 1 year
response.headers.set(
  'Cache-Control',
  'public, max-age=31536000, immutable'
);

// Cache API responses at the edge for 1 hour, serve stale for up to a day
response.headers.set(
  'Cache-Control',
  'public, s-maxage=3600, stale-while-revalidate=86400'
);

// No caching for dynamic, per-user content
response.headers.set(
  'Cache-Control',
  'private, no-cache, no-store, must-revalidate'
);
Edge KV and Durable Objects
Cloudflare Workers KV provides low-latency, eventually consistent key-value storage replicated across the edge network, while Durable Objects offer strongly consistent, stateful computation coordinated through a single instance. Together they suit real-time features, session management, and distributed counters.
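As a minimal sketch, here is a Worker that counts page visits in KV. The `COUNTERS` binding name is an assumption and would need to be declared in wrangler.toml; because KV is eventually consistent, counts are approximate under concurrent writes (a Durable Object is the right tool when the count must be exact).

```javascript
// Sketch: per-path visit counter backed by Workers KV. The COUNTERS
// binding name is an assumption. KV returns null for missing keys and
// stores string values.
const worker = {
  async fetch(request, env) {
    const key = new URL(request.url).pathname;
    const current = parseInt((await env.COUNTERS.get(key)) || '0', 10);
    const next = current + 1;
    await env.COUNTERS.put(key, String(next));
    return new Response(JSON.stringify({ path: key, visits: next }), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};

// In a deployed Worker this object is the default export:
// export default worker;
```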
Use Cases
Personalization
Customize content based on user location, language preferences, or A/B test variants—all at the edge before reaching origin servers.
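A sketch of this idea: pick presentation defaults from the country code the edge already provides (`request.cf.country` on Cloudflare, `geo.country` on Vercel). The mapping table below is illustrative, not exhaustive.

```javascript
// Sketch: choose locale and currency from the visitor's country code.
// The table is illustrative only; a real deployment would cover more
// countries and respect explicit user preferences first.
const LOCALE_DEFAULTS = {
  US: { locale: 'en-US', currency: 'USD' },
  GB: { locale: 'en-GB', currency: 'GBP' },
  DE: { locale: 'de-DE', currency: 'EUR' },
  JP: { locale: 'ja-JP', currency: 'JPY' },
};

function personalize(country, price) {
  const { locale, currency } =
    LOCALE_DEFAULTS[country] || { locale: 'en-US', currency: 'USD' };
  // Format the price in the visitor's likely locale and currency
  const formatted = new Intl.NumberFormat(locale, {
    style: 'currency',
    currency,
  }).format(price);
  return { locale, formatted };
}
```

An edge handler would call this with the request's country code and render or rewrite the response accordingly, all before the request reaches origin.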
Authentication and Authorization
Validate JWT tokens, check permissions, and route authenticated users—all at the edge, reducing load on origin servers.
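As an illustration, an edge function can reject malformed or expired tokens before they ever reach origin. The sketch below only decodes the payload and checks `exp`; a real deployment must also verify the signature (for example with the Web Crypto API or a library such as jose) before trusting any claims.

```javascript
// Sketch: lightweight JWT screening at the edge. Decodes the payload
// and checks expiry only -- signature verification is still required
// before trusting the claims.
function decodeJwtPayload(token) {
  const parts = token.split('.');
  if (parts.length !== 3) return null;
  try {
    // JWT payloads are base64url-encoded JSON; convert to base64 for atob
    const b64 = parts[1].replace(/-/g, '+').replace(/_/g, '/');
    return JSON.parse(atob(b64));
  } catch {
    return null;
  }
}

function isTokenUsable(token, nowSeconds = Math.floor(Date.now() / 1000)) {
  const payload = decodeJwtPayload(token);
  if (!payload) return false;
  // exp is seconds since the Unix epoch (RFC 7519)
  return typeof payload.exp === 'number' && payload.exp > nowSeconds;
}
```

An edge handler would run this check first and return a 401 immediately on failure, so only plausibly valid tokens cost origin CPU time.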
Real-time APIs
Edge functions with low latency are perfect for real-time features like live scores, chat systems, or notifications.
Data Transformation
Transform data formats, filter responses, or aggregate data at the edge before sending to clients, reducing bandwidth and processing on origin.
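A sketch of response slimming at the edge: keep only the fields a client actually needs before the payload crosses the network. The field names here are hypothetical.

```javascript
// Sketch: trim an origin payload down to a whitelist of fields before
// sending it to the client. Field names are hypothetical examples.
function slimProducts(products, fields = ['id', 'name', 'price']) {
  return products.map(product =>
    Object.fromEntries(
      fields
        .filter(field => field in product)
        .map(field => [field, product[field]])
    )
  );
}
```

In a Worker, you would `await response.json()`, pass the array through `slimProducts`, and return `Response.json(...)` with the slimmed result, cutting bandwidth and hiding internal fields in one step.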
Performance Optimization
- Minimize external API calls (cache when possible)
- Use streaming for large responses
- Optimize bundle size (smaller = faster cold starts)
- Leverage edge caching aggressively
- Use geo-routing for optimal origin selection
Limitations to Consider
- Execution Time: edge platforms cap CPU and/or total execution time (limits range from roughly 50ms to 30s depending on platform and plan)
- Memory: Limited memory (128MB-256MB typically)
- No Persistent Storage: Use external storage or edge KV
- Cold Starts: Minimal but can occur
- Runtime Differences: Some Node.js APIs not available
Real-World Example
We implemented edge functions for a global e-commerce platform:
- Geo-based pricing and currency conversion at the edge
- Product recommendations based on user location
- Edge caching for product listings (95% cache hit rate)
- Authentication and rate limiting at the edge
- Result: 70% reduction in origin server load, 60% faster response times
Best Practices
- Keep functions lightweight and fast
- Cache aggressively when data doesn't change frequently
- Use edge functions for read-heavy operations
- Handle errors gracefully (fallback to origin if needed)
- Monitor performance and costs
- Test from different geographic locations
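The "handle errors gracefully" point above can be sketched as a small wrapper that tries the fast edge path first and falls back when it fails. The function names in the usage comment are illustrative.

```javascript
// Sketch: attempt the fast edge path; on any failure, degrade to a
// slower but reliable path instead of failing the whole request.
async function withFallback(primary, fallback) {
  try {
    return await primary();
  } catch (err) {
    // Log and fall back rather than surfacing the edge failure
    console.warn('edge path failed, falling back:', err.message);
    return await fallback();
  }
}

// Illustrative usage (readFromEdgeCache / fetchFromOrigin are hypothetical):
// const data = await withFallback(
//   () => readFromEdgeCache(key),
//   () => fetchFromOrigin(key),
// );
```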
Conclusion
Edge computing is transforming web application architecture. By bringing computation closer to users, you can achieve dramatic performance improvements. Start with simple use cases like personalization or caching, then expand to more complex features. Remember to measure latency improvements and optimize for your specific use cases.