Server-side rendering (SSR) can dramatically improve your application’s perceived performance by delivering meaningful content before JavaScript loads—but poorly optimized SSR can actually hurt more than it helps. Between long server render times, waterfall data fetching, and expensive hydration, there are plenty of ways to shoot yourself in the foot. Let’s explore practical techniques for building SSR that’s genuinely fast: streaming responses, smart caching strategies, optimized data fetching patterns, and hydration that doesn’t block the main thread.
The SSR Performance Challenge
SSR adds a crucial step to your rendering pipeline: your server needs to generate HTML before sending anything to the client. This introduces new performance bottlenecks that don’t exist in client-only applications:
- Time-to-first-byte (TTFB) delays while your server fetches data and renders
- Waterfall data fetching that blocks HTML generation
- Heavy hydration that freezes the page once JavaScript loads
- Cache invalidation complexity across server and client
The good news? React 19 and modern frameworks give us powerful tools to solve these problems systematically.
Streaming SSR Overview
Streaming SSR lets you send HTML as it’s generated rather than waiting for the complete page. This dramatically reduces TTFB and improves perceived performance.
Key streaming benefits:
- Faster TTFB - Send initial HTML immediately
- Progressive loading - Stream content as data becomes available
- Better UX - Users see content while slower sections load
- Improved Core Web Vitals - Better LCP, plus better interactivity (INP) when paired with selective hydration
For detailed streaming implementation patterns, see Streaming SSR Optimization.
// Quick streaming example
import { renderToReadableStream } from 'react-dom/server';

export async function handleRequest(request: Request) {
  const stream = await renderToReadableStream(<App />, {
    bootstrapScripts: ['/client.js'],
    onError(error) {
      console.error('SSR Error:', error);
    },
  });

  return new Response(stream, {
    headers: { 'Content-Type': 'text/html' },
  });
}

Strategic Suspense Boundaries
Place Suspense boundaries around components that might delay the initial render:
function ProductPage({ productId }: { productId: string }) {
return (
<div>
{/* ✅ Static content streams immediately */}
<Header />
<nav>
<SearchBar />
</nav>
{/* ✅ Product details stream when data arrives */}
<Suspense fallback={<ProductDetailsSkeleton />}>
<ProductDetails productId={productId} />
</Suspense>
{/* ✅ Reviews stream independently */}
<Suspense fallback={<ReviewsSkeleton />}>
<ProductReviews productId={productId} />
</Suspense>
<Footer />
</div>
);
}
async function ProductDetails({ productId }: { productId: string }) {
// This data fetch doesn't block the header/nav from streaming
const product = await getProduct(productId);
return (
<section>
<h1>{product.name}</h1>
<img src={product.imageUrl} alt={product.name} />
<p>{product.description}</p>
</section>
);
}

Smart Data Fetching Patterns
The biggest SSR performance killer is sequential data fetching. Your components shouldn’t create data waterfalls—fetch what you need in parallel, and cache aggressively.
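To make the waterfall concrete, here’s a minimal sketch using the same `getProduct` and `getProductReviews` helpers assumed throughout this article: sequential `await`s serialize the requests, while `Promise.all` starts them together:

// ❌ Waterfall: the reviews request can't start until the product request resolves
async function getProductPageDataSlow(productId: string) {
  const product = await getProduct(productId);
  const reviews = await getProductReviews(productId);
  return { product, reviews };
}

// ✅ Parallel: both requests start immediately, so total time ≈ the slowest request
async function getProductPageDataFast(productId: string) {
  const [product, reviews] = await Promise.all([
    getProduct(productId),
    getProductReviews(productId),
  ]);
  return { product, reviews };
}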
Preload Critical Data
Start data fetching early in your request handler, before React even begins rendering:
export async function handleRequest(request: Request) {
const url = new URL(request.url);
const productId = url.searchParams.get('productId');
// ✅ Start critical data fetching immediately
const dataPromises = {
product: productId ? getProduct(productId) : null,
user: getUserFromSession(request),
categories: getCategories(), // Usually static/cached
};
// Start rendering while data fetches continue
const stream = await renderToReadableStream(
<App
productPromise={dataPromises.product}
userPromise={dataPromises.user}
categoriesPromise={dataPromises.categories}
/>,
{ bootstrapScripts: ['/client.js'] },
);
return new Response(stream, {
headers: { 'Content-Type': 'text/html' },
});
}

Use React’s use() Hook for Promise Unwrapping
React 19’s use() hook lets components consume promises directly, eliminating the need for complex state management:
function ProductDetails({ productPromise }: { productPromise: Promise<Product> }) {
// ✅ Suspends until promise resolves
const product = use(productPromise);
return (
<div>
<h1>{product.name}</h1>
<img src={product.imageUrl} alt={product.name} loading="lazy" />
<p>{product.description}</p>
</div>
);
}
function ProductPage({ productPromise }: { productPromise: Promise<Product> }) {
return (
<Suspense fallback={<ProductSkeleton />}>
<ProductDetails productPromise={productPromise} />
</Suspense>
);
}

Parallel Data Fetching with Error Boundaries
Fetch independent data in parallel and handle failures gracefully:
async function getPageData(productId: string) {
// ✅ Fetch everything in parallel
const [productResult, reviewsResult, recommendationsResult] = await Promise.allSettled([
getProduct(productId),
getProductReviews(productId),
getRecommendations(productId),
]);
return {
product: productResult.status === 'fulfilled' ? productResult.value : null,
reviews: reviewsResult.status === 'fulfilled' ? reviewsResult.value : [],
recommendations:
recommendationsResult.status === 'fulfilled' ? recommendationsResult.value : [],
errors: [productResult, reviewsResult, recommendationsResult]
.filter((result) => result.status === 'rejected')
.map((result) => result.reason),
};
}
function ProductPage({ data }: { data: Awaited<ReturnType<typeof getPageData>> }) {
if (!data.product) {
throw new Error('Product not found');
}
return (
<div>
<ProductDetails product={data.product} />
{data.reviews.length > 0 && <ProductReviews reviews={data.reviews} />}
{data.recommendations.length > 0 && <Recommendations items={data.recommendations} />}
</div>
);
}

Caching Strategies That Actually Work
Effective SSR caching operates at multiple levels: HTTP responses, rendered components, and data fetches. The key is invalidating each level appropriately.
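Before building an in-memory cache like the one shown below, it’s often enough to let a CDN or reverse proxy cache whole responses via standard `Cache-Control` directives; a minimal sketch (the specific values are illustrative, not recommendations):

// Sketch: mark a rendered response as cacheable by shared caches (CDN / reverse proxy).
// The s-maxage and stale-while-revalidate values here are illustrative.
export function withCdnCaching(response: Response): Response {
  response.headers.set(
    'Cache-Control',
    // Shared caches may reuse this response for 60s, then serve it stale for up to
    // 5 minutes while revalidating in the background.
    'public, s-maxage=60, stale-while-revalidate=300',
  );
  return response;
}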
HTTP-Level Response Caching
Cache complete HTML responses for pages that don’t change frequently:
const cache = new Map<string, { html: string; timestamp: number }>();
export async function renderWithCache(url: string, ttl = 300000 /* 5 minutes */) {
const cacheKey = url;
const cached = cache.get(cacheKey);
// ✅ Serve from cache if still fresh
if (cached && Date.now() - cached.timestamp < ttl) {
return new Response(cached.html, {
headers: {
'Content-Type': 'text/html',
'X-Cache': 'HIT',
},
});
}
// Render fresh HTML
const stream = await renderToReadableStream(<App url={url} />);
const html = await streamToString(stream);
// Cache the complete HTML
cache.set(cacheKey, { html, timestamp: Date.now() });
return new Response(html, {
headers: {
'Content-Type': 'text/html',
'X-Cache': 'MISS',
},
});
}
async function streamToString(stream: ReadableStream): Promise<string> {
const reader = stream.getReader();
const decoder = new TextDecoder();
let html = '';
while (true) {
const { done, value } = await reader.read();
if (done) break;
html += decoder.decode(value, { stream: true });
}
return html;
}

Component-Level Caching
Cache expensive component renders independently of the full page:
const componentCache = new Map<string, React.ReactElement>();
function CachedExpensiveComponent({ data, ttl = 60000 }: { data: ComplexData; ttl?: number }) {
const cacheKey = `expensive-${data.id}-${data.version}`;
// Check for cached component
if (componentCache.has(cacheKey)) {
return componentCache.get(cacheKey)!;
}
// Render expensive component
const rendered = (
<div>
{/* Expensive calculations/rendering here */}
<ComplexChart data={data.chartData} />
<DataVisualization metrics={data.metrics} />
</div>
);
// Cache with TTL cleanup
componentCache.set(cacheKey, rendered);
setTimeout(() => componentCache.delete(cacheKey), ttl);
return rendered;
}

Data Fetching with Deduplication
Prevent duplicate API calls when multiple components need the same data:
const dataCache = new Map<string, Promise<any>>();
export function cachedFetch<T>(url: string, options: RequestInit = {}, ttl = 300000): Promise<T> {
const cacheKey = `${url}-${JSON.stringify(options)}`;
// Return existing promise if already fetching
if (dataCache.has(cacheKey)) {
return dataCache.get(cacheKey)!;
}
// Create new fetch promise
const promise = fetch(url, options)
.then((response) => {
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
return response.json();
})
.finally(() => {
// Remove from cache after TTL
setTimeout(() => dataCache.delete(cacheKey), ttl);
});
dataCache.set(cacheKey, promise);
return promise;
}
// Usage in components
async function ProductDetails({ productId }: { productId: string }) {
// ✅ Multiple components can call this safely - only one actual fetch occurs
const product = await cachedFetch<Product>(`/api/products/${productId}`);
return <div>{product.name}</div>;
}

Optimizing Hydration Performance
Hydration is where SSR applications often stumble—the server-rendered HTML becomes interactive, but this process can block the main thread for hundreds of milliseconds.
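As a point of reference, hydration is started on the client with `hydrateRoot` from `react-dom/client`; the `/client.js` bundle referenced in the streaming examples is assumed to contain an entry point roughly like this:

// client.tsx — hypothetical hydration entry point for the examples above
import { hydrateRoot } from 'react-dom/client';
import { App } from './App';

// renderToReadableStream rendered <App /> as the whole document, so we hydrate
// `document` itself; with a dedicated container you would pass that element instead.
hydrateRoot(document, <App />);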
Selective Hydration with Suspense
Don’t hydrate everything at once. Use Suspense boundaries to hydrate components progressively:
function App() {
return (
<div>
{/* ✅ Critical interactive elements hydrate first */}
<Header />
<SearchForm />
{/* ✅ Less critical components hydrate when ready */}
<Suspense fallback={null}>
<CommentSection />
</Suspense>
<Suspense fallback={null}>
<RecommendedProducts />
</Suspense>
{/* ✅ Heavy components can hydrate last */}
<Suspense fallback={null}>
<InteractiveMap />
</Suspense>
</div>
);
}

Reduce JavaScript Bundle Size
Less JavaScript means faster hydration. Code-split aggressively and lazy-load non-critical components:
import { lazy, Suspense, useState } from 'react';
// ✅ Load heavy components only when needed
const InteractiveChart = lazy(() => import('./InteractiveChart'));
const VideoPlayer = lazy(() => import('./VideoPlayer'));
const CommentEditor = lazy(() => import('./CommentEditor'));
function ProductPage() {
const [showChart, setShowChart] = useState(false);
const [showComments, setShowComments] = useState(false);
return (
<div>
<ProductInfo />
{showChart && (
<Suspense fallback={<ChartSkeleton />}>
<InteractiveChart />
</Suspense>
)}
{showComments && (
<Suspense fallback={<CommentsSkeleton />}>
<CommentEditor />
</Suspense>
)}
<button onClick={() => setShowChart(true)}>Show Analytics</button>
<button onClick={() => setShowComments(true)}>Add Comment</button>
</div>
);
}

Avoid Hydration Mismatches
Server-client mismatches force React to re-render everything from scratch. Common culprits include timestamps, random IDs, and browser-specific APIs:
// ❌ Will cause hydration mismatch
function CurrentTime() {
return <div>Current time: {new Date().toLocaleString()}</div>;
}
// ✅ Consistent between server and client
function CurrentTime() {
const [time, setTime] = useState<string | null>(null);
useEffect(() => {
setTime(new Date().toLocaleString());
}, []);
// Show static content during hydration
if (time === null) {
return <div>Loading current time...</div>;
}
return <div>Current time: {time}</div>;
}
// ❌ Random IDs cause mismatches
function generateId() {
return Math.random().toString(36);
}
// ✅ Stable IDs using useId hook
function StableComponent() {
const id = useId(); // Same on server and client
return (
<div>
<label htmlFor={id}>Username:</label>
<input id={id} type="text" />
</div>
);
}

Real-World Implementation Strategy
Here’s a practical approach for implementing optimized SSR in your application:
Measure First, Optimize Second
Establish performance baselines before making changes:
// Add performance timing to your SSR handler
export async function handleSSRRequest(request: Request) {
const start = performance.now();
try {
const response = await renderPageWithStreaming(request);
const renderTime = performance.now() - start;
console.log(`SSR render time: ${renderTime.toFixed(2)}ms`);
// Add timing header for monitoring
response.headers.set('X-Render-Time', renderTime.toString());
return response;
} catch (error) {
const errorTime = performance.now() - start;
console.error(`SSR error after ${errorTime.toFixed(2)}ms:`, error);
throw error;
}
}

Progressive Enhancement Approach
Start with a solid SSR foundation, then layer on optimizations:
export class SSROptimizer {
private cacheEnabled = false;
private streamingEnabled = false;
constructor(
private options: {
enableCache?: boolean;
enableStreaming?: boolean;
cacheMaxAge?: number;
} = {},
) {
this.cacheEnabled = options.enableCache ?? false;
this.streamingEnabled = options.enableStreaming ?? false;
}
async render(url: string): Promise<Response> {
// Start with basic SSR; bind so `this` survives assignment to a local variable
let renderFunction = this.basicSSR.bind(this);
// Add streaming if enabled
if (this.streamingEnabled) {
renderFunction = this.streamingSSR.bind(this);
}
// Add caching if enabled (takes precedence when both flags are set)
if (this.cacheEnabled) {
renderFunction = this.cachedSSR.bind(this);
}
return renderFunction(url);
}
private async basicSSR(url: string): Promise<Response> {
const html = await renderToString(<App url={url} />);
return new Response(html, {
headers: { 'Content-Type': 'text/html' },
});
}
private async streamingSSR(url: string): Promise<Response> {
const stream = await renderToReadableStream(<App url={url} />);
return new Response(stream, {
headers: { 'Content-Type': 'text/html' },
});
}
private async cachedSSR(url: string): Promise<Response> {
// Implementation with caching layer (see renderWithCache above)
// Placeholder: fall back to streaming until the cache lookup is wired in
return this.streamingSSR(url);
}
}

Monitor and Iterate
Set up monitoring to track the metrics that matter:
interface SSRMetrics {
renderTime: number;
dataFetchTime: number;
cacheHitRate: number;
hydrationTime: number;
timeToFirstByte: number;
}
export function trackSSRMetrics(cacheHits: number, totalRequests: number): SSRMetrics {
const metrics = {
renderTime: 0,
dataFetchTime: 0,
cacheHitRate: 0,
hydrationTime: 0,
timeToFirstByte: 0,
};
// Measure server-side rendering time
const renderStart = performance.now();
// ... render logic
metrics.renderTime = performance.now() - renderStart;
// Track cache performance (hit/request counts are collected by your caching layer and passed in)
metrics.cacheHitRate = totalRequests > 0 ? (cacheHits / totalRequests) * 100 : 0;
// Client-side hydration timing
if (typeof window !== 'undefined') {
const hydrationStart = performance.now();
// ... hydration logic
metrics.hydrationTime = performance.now() - hydrationStart;
}
return metrics;
}

Common Pitfalls and How to Avoid Them
Overeager Data Fetching
Don’t fetch data you won’t use. Use conditional data fetching based on user permissions, feature flags, or URL parameters:
// ❌ Always fetches all data
async function getPageData(userId: string) {
const [user, posts, comments, analytics] = await Promise.all([
getUser(userId),
getPosts(userId),
getComments(userId),
getAnalytics(userId), // Only needed for admin users
]);
return { user, posts, comments, analytics };
}
// ✅ Conditional data fetching
async function getPageData(userId: string, currentUser?: User) {
const promises: Promise<any>[] = [getUser(userId), getPosts(userId)];
// Only fetch comments if user has permission
if (currentUser?.canViewComments) {
promises.push(getComments(userId));
}
// Only fetch analytics for admin users
if (currentUser?.isAdmin) {
promises.push(getAnalytics(userId));
}
const results = await Promise.all(promises);
return {
user: results[0],
posts: results[1],
comments: currentUser?.canViewComments ? results[2] : null,
analytics: currentUser?.isAdmin ? results[results.length - 1] : null,
};
}

Cache Invalidation Complexity
Keep cache invalidation simple with consistent patterns:
export class CacheManager {
private cache = new Map<string, any>();
private tags = new Map<string, Set<string>>(); // tag -> cache keys
set(key: string, value: any, tags: string[] = []) {
this.cache.set(key, value);
// Associate cache entry with tags
for (const tag of tags) {
if (!this.tags.has(tag)) {
this.tags.set(tag, new Set());
}
this.tags.get(tag)!.add(key);
}
}
// Invalidate all cache entries with specific tags
invalidateByTags(tags: string[]) {
for (const tag of tags) {
const keys = this.tags.get(tag);
if (keys) {
for (const key of keys) {
this.cache.delete(key);
}
this.tags.delete(tag);
}
}
}
get(key: string) {
return this.cache.get(key);
}
}
// Usage
const cacheManager = new CacheManager();
// Cache with relevant tags
cacheManager.set('user-123', userData, ['user', 'user-123']);
cacheManager.set('posts-user-123', userPosts, ['posts', 'user-123']);
// Invalidate all user-related cache when user updates
cacheManager.invalidateByTags(['user-123']);
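To keep invalidation predictable, trigger it from the write path. A hypothetical mutation handler (the `updateUserInDatabase` call stands in for your own persistence layer) might look like:

// Hypothetical mutation handler: invalidate tagged entries right after the write succeeds
export async function handleUpdateUser(userId: string, changes: Partial<User>) {
  await updateUserInDatabase(userId, changes); // placeholder for your persistence layer

  // Drops every cache entry tagged with this user (profile, their posts, etc.)
  cacheManager.invalidateByTags([`user-${userId}`]);
}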