<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ananthakrishnan - Blogs]]></title><description><![CDATA[Ananthakrishnan - Blogs]]></description><link>https://blog.akananthu.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 12:41:49 GMT</lastBuildDate><atom:link href="https://blog.akananthu.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[View Transitions — Rethinking How UI Changes Are Animated]]></title><description><![CDATA[We’ve had animations on the web for a long time.
CSS transitions, CSS animations, JS-based animation libraries
They all try to solve the same problem:

Move UI from one state → to another state.

But they]]></description><link>https://blog.akananthu.com/view-transitions-rethinking-how-ui-changes-are-animated</link><guid isPermaLink="true">https://blog.akananthu.com/view-transitions-rethinking-how-ui-changes-are-animated</guid><category><![CDATA[CSS]]></category><category><![CDATA[React]]></category><category><![CDATA[Frontend Development]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Sat, 04 Apr 2026 02:07:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/f98c4e70-1dfd-48be-8e36-0be26cf51e1f.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’ve had animations on the web for a long time.</p>
<p>CSS transitions<br />CSS animations<br />JS-based animation libraries</p>
<p>They all try to solve the same problem:</p>
<blockquote>
<p>Move UI from one state → to another state.</p>
</blockquote>
<p>But they all share the same limitation:</p>
<p>They animate <strong>elements</strong>.</p>
<p>Not <strong>views</strong>.</p>
<hr />
<h2>First, What Is a “Transition”?</h2>
<p>Before jumping into APIs, let’s look at what a transition actually is.</p>
<p>A transition is not:</p>
<blockquote>
<p>“fade this div”</p>
</blockquote>
<p>A transition is:</p>
<blockquote>
<p>“move the user from one UI state to another, without breaking continuity”</p>
</blockquote>
<p>Example:</p>
<ul>
<li><p>Clicking a product → opens product page</p>
</li>
<li><p>Switching tabs → content changes</p>
</li>
<li><p>Filtering a list → items reorder</p>
</li>
</ul>
<p>Without transitions:</p>
<p>UI <strong>jumps</strong></p>
<p>With transitions:</p>
<p>UI <strong>flows</strong></p>
<hr />
<h2>The Core Problem With Traditional Animations</h2>
<p>Let’s take a simple route change.</p>
<p>You go from:</p>
<pre><code class="language-plaintext">Home → Product Page
</code></pre>
<p>What happens?</p>
<ul>
<li><p>Old DOM is removed</p>
</li>
<li><p>New DOM is inserted</p>
</li>
</ul>
<p>There is <strong>no connection between them</strong>.</p>
<p>So developers try to fake it:</p>
<ul>
<li><p>clone elements</p>
</li>
<li><p>manually position</p>
</li>
<li><p>animate opacity/position</p>
</li>
<li><p>sync timings</p>
</li>
</ul>
<p>This becomes:</p>
<pre><code class="language-plaintext">complex
fragile
hard to maintain
</code></pre>
<p>This is why most apps either:</p>
<ul>
<li><p>don’t animate route changes</p>
</li>
<li><p>or use heavy libraries</p>
</li>
</ul>
<hr />
<h2>Enter View Transitions</h2>
<p>The View Transition API changes the abstraction.</p>
<p>Instead of animating elements manually, you tell the browser:</p>
<blockquote>
<p>“I’m about to change the UI. You handle the transition.”</p>
</blockquote>
<hr />
<h2>What It Actually Does</h2>
<p>At a high level, the browser:</p>
<ol>
<li><p>Takes a snapshot of the <strong>old UI</strong></p>
</li>
<li><p>Applies your DOM update</p>
</li>
<li><p>Takes a snapshot of the <strong>new UI</strong></p>
</li>
<li><p>Animates between them</p>
</li>
</ol>
<p>This happens at the <strong>browser compositor level</strong><br />—not inside your React tree or CSS layout.</p>
<p><strong>That’s the key difference.</strong></p>
<hr />
<h2>Important: What View Transition API Is NOT</h2>
<p>Let’s clear the biggest misconception:</p>
<blockquote>
<p>View Transition API is NOT an extension of CSS transitions</p>
</blockquote>
<p>It does not animate:</p>
<pre><code class="language-plaintext">width
height
position
opacity
</code></pre>
<p>directly on DOM elements.</p>
<p>Instead:</p>
<blockquote>
<p>It creates visual snapshots and animates them using keyframes at the compositor level</p>
</blockquote>
<p>This is why:</p>
<ul>
<li><p>it feels smoother</p>
</li>
<li><p>it avoids layout thrashing</p>
</li>
<li><p>it works across page navigations</p>
</li>
</ul>
<hr />
<h2>Why This Matters</h2>
<p>Traditional animation:</p>
<pre><code class="language-plaintext">DOM → Layout → Paint → Composite
</code></pre>
<p>View Transition:</p>
<pre><code class="language-plaintext">Snapshot → Composite Animation
</code></pre>
<p>No layout recalculation during animation.</p>
<p>Result:</p>
<pre><code class="language-plaintext">✔ smoother animations
✔ less jank
✔ better performance
</code></pre>
<hr />
<h2>The Core API</h2>
<p>The entry point is simple:</p>
<pre><code class="language-javascript">document.startViewTransition(() =&gt; {
  // your UI update here
})
</code></pre>
<p>This tells the browser:</p>
<blockquote>
<p>“Capture before and after, then animate between them.”</p>
</blockquote>
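<p>Browser support is still uneven, so the call is usually wrapped in a feature check. A minimal sketch (the helper name is an illustrative assumption, not part of the API):</p>
<pre><code class="language-javascript">// Run a DOM update with a view transition when the API exists,
// and fall back to an instant, non-animated update when it does not.
function updateWithTransition(update) {
  if (!document.startViewTransition) {
    update(); // no animation, but the UI still changes
    return;
  }
  document.startViewTransition(update);
}
</code></pre>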
<hr />
<h2>What Happens Internally?</h2>
<p>When you call it:</p>
<ol>
<li><p>Browser captures current frame</p>
</li>
<li><p>Runs your update function</p>
</li>
<li><p>Captures new frame</p>
</li>
<li><p>Creates pseudo elements:</p>
</li>
</ol>
<pre><code class="language-css">::view-transition-old(root)
::view-transition-new(root)
</code></pre>
<ol start="5">
<li>Animates between them</li>
</ol>
<p>You can control this with CSS.</p>
<hr />
<h2>Basic Example</h2>
<pre><code class="language-javascript">document.startViewTransition(() =&gt; {
  setPage('product')
})
</code></pre>
<p>Then in CSS:</p>
<pre><code class="language-css">::view-transition-old(root) {
  animation: fadeOut 0.3s ease;
}

::view-transition-new(root) {
  animation: fadeIn 0.3s ease;
}

/* The keyframes referenced above */
@keyframes fadeOut {
  to { opacity: 0; }
}

@keyframes fadeIn {
  from { opacity: 0; }
}
</code></pre>
<hr />
<h2>This Changes How You Think About Animations</h2>
<p>Before:</p>
<pre><code class="language-plaintext">animate individual components
</code></pre>
<p>Now:</p>
<pre><code class="language-plaintext">animate entire UI state changes
</code></pre>
<p>This is a <strong>mental shift</strong>.</p>
<hr />
<h2>Shared Element Transitions</h2>
<p>This is where it gets powerful.</p>
<p>You can link elements across states:</p>
<pre><code class="language-css">.card {
  view-transition-name: product-card;
}
</code></pre>
<p>Now the browser understands:</p>
<blockquote>
<p>“This element in old view = same element in new view”</p>
</blockquote>
<p>And animates it seamlessly.</p>
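<p>A sketch of a list → detail flow. The <code>renderProductPage</code> function is an illustrative assumption standing in for whatever swaps the views; the only real requirement is that the clicked card and the detail hero share one <code>view-transition-name</code>:</p>
<pre><code class="language-javascript">// Tag the clicked card just before the update so only this card
// participates in the shared-element transition.
function openProduct(card) {
  card.style.viewTransitionName = 'product-card';
  document.startViewTransition(function () {
    // renderProductPage() is assumed to mount the detail view, whose
    // hero element also carries view-transition-name: product-card.
    renderProductPage();
  });
}
</code></pre>
<p>The browser then morphs the card snapshot into the hero snapshot instead of cross-fading the whole page.</p>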
<hr />
<h2>SPA vs MPA</h2>
<p>The API works in both:</p>
<h3>SPA (React, Next.js)</h3>
<pre><code class="language-javascript">document.startViewTransition(() =&gt; {
  navigate('/product')
})
</code></pre>
<h3>MPA (normal links)</h3>
<p>Browser handles navigation transitions automatically (where supported).</p>
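<p>For cross-document navigations the opt-in is a CSS at-rule rather than a JS call. In browsers that have shipped it (Chromium-based at the time of writing), this is enough to animate between same-origin pages:</p>
<pre><code class="language-css">/* Opt the page in; the browser animates same-origin navigations. */
@view-transition {
  navigation: auto;
}
</code></pre>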
<hr />
<h2>Why This API Exists</h2>
<p>Historically:</p>
<ul>
<li><p>SPA transitions → complex JS</p>
</li>
<li><p>MPA transitions → almost impossible</p>
</li>
</ul>
<p>View Transitions unify both.</p>
<p>They reduce:</p>
<pre><code class="language-plaintext">manual DOM work
animation bugs
sync issues
</code></pre>
<hr />
<h2>React’s Experimental <code>&lt;ViewTransition&gt;</code></h2>
<p>React is trying to make this even easier.</p>
<pre><code class="language-javascript">// Experimental: React's experimental channel currently exports this as
// unstable_ViewTransition; the final name may change.
import { ViewTransition } from 'react';

&lt;ViewTransition&gt;
  &lt;Page /&gt;
&lt;/ViewTransition&gt;
</code></pre>
<hr />
<h2>What React Is Doing Under the Hood</h2>
<p>React:</p>
<ul>
<li><p>automatically assigns <code>view-transition-name</code></p>
</li>
<li><p>groups elements</p>
</li>
<li><p>triggers transitions during state updates</p>
</li>
</ul>
<p>You don’t manually call <code>startViewTransition</code>.</p>
<hr />
<h2>Important Behavior</h2>
<p>React triggers transitions only when updates happen inside:</p>
<pre><code class="language-plaintext">startTransition()
Suspense
useDeferredValue
</code></pre>
<p>Not every state update triggers animation.</p>
<hr />
<h2>Types of Transitions in React</h2>
<p>React categorizes transitions:</p>
<pre><code class="language-plaintext">enter
exit
update
shared
</code></pre>
<p>Each can be customised.</p>
<hr />
<h2>Example</h2>
<pre><code class="language-javascript">&lt;ViewTransition enter="slide-in" exit="fade-out"&gt;
  &lt;ProductCard /&gt;
&lt;/ViewTransition&gt;
</code></pre>
<hr />
<h2>Why This Is Powerful</h2>
<p>Instead of:</p>
<pre><code class="language-plaintext">manually syncing animations across components
</code></pre>
<p>You get:</p>
<pre><code class="language-plaintext">automatic view-level transitions
</code></pre>
<hr />
<h2>Limitations (Important)</h2>
<p>This API is powerful, but not magic.</p>
<h3>1. It’s still evolving</h3>
<ul>
<li><p>React version is experimental</p>
</li>
<li><p>Some browsers are still catching up.</p>
</li>
</ul>
<hr />
<h3>2. Debugging is harder</h3>
<p>Because animations happen:</p>
<pre><code class="language-plaintext">outside your DOM
inside compositor
</code></pre>
<hr />
<h3>3. Not everything animates perfectly</h3>
<ul>
<li><p>layout shifts can break continuity</p>
</li>
<li><p>shared elements need correct naming</p>
</li>
</ul>
<hr />
<h2>The Real Mental Model</h2>
<p>If you remember one thing:</p>
<blockquote>
<p>View Transitions don’t animate elements<br />They animate <strong>snapshots of UI states</strong></p>
</blockquote>
<hr />
<h2>When Should You Use It?</h2>
<p>Use it when:</p>
<pre><code class="language-plaintext">route transitions
page navigation
list → detail views
UI state switches
</code></pre>
<p>Avoid it when:</p>
<pre><code class="language-plaintext">micro animations (hover, button click)
fine-grained control needed
</code></pre>
<hr />
<h2>Conclusion</h2>
<p>For years, web animations were:</p>
<blockquote>
<p>“move this element from A to B”</p>
</blockquote>
<p>Now the browser is finally giving us:</p>
<blockquote>
<p>“move the entire experience from A to B”</p>
</blockquote>
<p>That’s a big shift.</p>
]]></content:encoded></item><item><title><![CDATA[Docker Deep Dive — What Containers Actually Are]]></title><description><![CDATA[In the previous article, we discussed two important ideas:
Containerization, Orchestration
Containerization packages applications.
Orchestration runs those containers at scale.
Tools like Docker and Kub]]></description><link>https://blog.akananthu.com/docker-deep-dive-what-containers-actually-are</link><guid isPermaLink="true">https://blog.akananthu.com/docker-deep-dive-what-containers-actually-are</guid><category><![CDATA[Docker]]></category><category><![CDATA[docker images]]></category><category><![CDATA[docker container]]></category><category><![CDATA[Devops]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Sun, 15 Mar 2026 12:18:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/4a11bf02-e5bf-44b9-84f0-4888874a3199.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the <a href="https://ananthakrishnan.hashnode.dev/containerization-vs-orchestration">previous article</a>, we discussed two important ideas:</p>
<p><strong>Containerization</strong><br /><strong>Orchestration</strong></p>
<p>Containerization packages applications.</p>
<p>Orchestration runs those containers at scale.</p>
<p>Tools like <strong>Docker</strong> and <strong>Kubernetes</strong> power this system.</p>
<p>But what actually happens when you run:</p>
<pre><code class="language-shell">docker run nginx
</code></pre>
<p>Does Docker start a virtual machine?</p>
<p>Does it boot an operating system?</p>
<p>Not exactly.</p>
<p>To understand Docker properly, we need to understand one key idea:</p>
<p><strong>Containers are just Linux processes.</strong></p>
<p>Everything else in Docker is built around that fact.</p>
<hr />
<h2>What Docker Really Is</h2>
<p>Docker is a <strong>containerization platform</strong>.</p>
<p>It helps developers:</p>
<pre><code class="language-plaintext">Build containers
Ship containers
Run containers
</code></pre>
<p>Docker introduced three core building blocks:</p>
<pre><code class="language-plaintext">Dockerfile
Docker Image
Docker Container
</code></pre>
<p>These three components form the foundation of how Docker works.</p>
<p>But before understanding them, we need to understand what a <strong>container actually is</strong>.</p>
<hr />
<h2>Containers Are Just Linux Processes</h2>
<p>A container is <strong>not a virtual machine</strong>.</p>
<p>It is simply a <strong>Linux process running with isolation</strong>.</p>
<p>You can think of a container as:</p>
<pre><code class="language-plaintext">Application + Runtime + Dependencies
</code></pre>
<p>All running as an isolated process on the host system.</p>
<p>Under the hood, Docker simply asks the Linux kernel to start a process.</p>
<p>But with two important mechanisms applied:</p>
<pre><code class="language-plaintext">Linux Namespaces
Linux cgroups
</code></pre>
<p>Namespaces create isolation.</p>
<p>cgroups control resource limits.</p>
<p>Together they form the foundation of containers.</p>
<hr />
<h2>The Role of Linux Namespaces</h2>
<p>Linux <strong>namespaces</strong> are responsible for <strong>isolation</strong>.</p>
<p>They make a process believe it is running in its own environment.</p>
<p>In other words, namespaces create the <strong>illusion of a separate system</strong>.</p>
<p>A container might think it has:</p>
<pre><code class="language-plaintext">its own process tree
its own network interfaces
its own filesystem
its own hostname
</code></pre>
<p>But in reality, everything is still running on the same host machine.</p>
<p>The Linux kernel simply isolates what each process can see.  </p>
<hr />
<h2>Types of Linux Namespaces</h2>
<p>Linux provides several different namespaces.</p>
<p>Each namespace isolates a different part of the system.</p>
<p>Docker combines multiple namespaces together to create containers.</p>
<hr />
<h3>1. PID Namespace (Process Isolation)</h3>
<p>The <strong>PID namespace</strong> isolates process IDs.</p>
<p>Each container gets its own <strong>process tree</strong>.</p>
<p>Inside a container, if you run:</p>
<pre><code class="language-shell">ps aux
</code></pre>
<p>You might see something like:  </p>
<pre><code class="language-shell">PID 1    node server.js
PID 7    nginx
</code></pre>
<p>But on the <strong>host machine</strong>, the same processes might appear as:  </p>
<pre><code class="language-shell">PID 12834    node server.js
PID 12840    nginx
</code></pre>
<p>Inside the container, the application sees itself as <strong>PID 1</strong>, which normally represents the main system process.</p>
<p>This makes the container believe it is running on its own system.</p>
<p>Each container therefore has its own <strong>independent process tree</strong>.  </p>
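<p>You can see these namespaces from userland on any Linux machine: every process links to the namespaces it belongs to under <code>/proc</code> (the inode numbers will differ on your system):</p>
<pre><code class="language-shell"># Each entry is one namespace the current shell belongs to.
ls /proc/self/ns

# The link target names the type and inode, e.g. pid:[4026531836].
# Two processes in the same namespace show the same inode.
readlink /proc/self/ns/pid
</code></pre>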
<hr />
<h3>2. Network Namespace</h3>
<p>The <strong>network namespace</strong> isolates networking.</p>
<p>Each container gets its own network stack including:</p>
<pre><code class="language-plaintext">IP address
Network interfaces
Routing table
Port space
</code></pre>
<p>Example:</p>
<pre><code class="language-plaintext">Container A → 172.17.0.2
Container B → 172.17.0.3
</code></pre>
<p>Even if both containers run the same port, they do not conflict.</p>
<p>For example:</p>
<pre><code class="language-plaintext">Container A → port 3000
Container B → port 3000
</code></pre>
<p>Because they exist in <strong>separate network namespaces</strong>, both applications can run on the same port.</p>
<p>Docker internally connects containers through a virtual bridge network.  </p>
<hr />
<h3>3. Mount Namespace (Filesystem Isolation)</h3>
<p>The <strong>mount namespace</strong> isolates the filesystem.</p>
<p>Each container sees its own filesystem.</p>
<p>This filesystem is built from <strong>Docker image layers</strong>.</p>
<p>For example a container might see:</p>
<pre><code class="language-plaintext">/
├ bin
├ usr
├ app
└ node_modules
</code></pre>
<p>Containers cannot access host files unless explicitly mounted.</p>
<p>This is why Docker volumes exist.</p>
<hr />
<h3>4. UTS Namespace (Hostname Isolation)</h3>
<p>The <strong>UTS namespace</strong> controls the system hostname and domain name.</p>
<p>Each container can have its own hostname.</p>
<p>Example:</p>
<pre><code class="language-shell">docker run --hostname myapp nginx
</code></pre>
<p>Inside the container:</p>
<pre><code class="language-shell">hostname
→ myapp
</code></pre>
<p>Even though the host machine may have a completely different hostname.</p>
<hr />
<h3>5. IPC Namespace (Inter-Process Communication)</h3>
<p>The <strong>IPC namespace</strong> isolates communication mechanisms between processes.</p>
<p>These include:</p>
<pre><code class="language-plaintext">shared memory
message queues
semaphores
</code></pre>
<p>Containers cannot share these resources with other containers unless explicitly configured.</p>
<p>This prevents unintended communication between processes in different containers.</p>
<hr />
<h3>6. User Namespace</h3>
<p>The <strong>user namespace</strong> maps users inside a container to different users on the host system.</p>
<p>This improves security.</p>
<p>For example:</p>
<p>Inside a container, a process may run as:</p>
<pre><code class="language-shell">root (UID 0)
</code></pre>
<p>But on the host machine, that same process might actually map to:</p>
<pre><code class="language-shell">UID 100000
</code></pre>
<p>So even though the process appears to have <strong>root privileges inside the container</strong>, it does <strong>not actually have root access on the host system</strong>.</p>
<p>This prevents containers from gaining full system privileges.</p>
<hr />
<h2>Namespace Summary</h2>
<pre><code class="language-plaintext">Namespace   What it isolates

PID         Processes
Network     Networking stack
Mount       Filesystem
UTS         Hostname
IPC         Process communication
User        User IDs
</code></pre>
<p>Together, these namespaces create the illusion that a container is running on its own machine.</p>
<hr />
<h2>What Happens When a Container Starts</h2>
<p>When you run a command like:</p>
<pre><code class="language-shell">docker run nginx
</code></pre>
<p>Docker does not start a virtual machine.</p>
<p>Instead, Docker asks the Linux kernel to start a <strong>new process</strong> with isolated namespaces.</p>
<p>The kernel creates something like this:</p>
<pre><code class="language-plaintext">Container A
 ├ PID namespace
 ├ Network namespace
 ├ Mount namespace
 ├ UTS namespace
 ├ IPC namespace
 └ User namespace
</code></pre>
<p>The container process runs inside these isolated views of the system.</p>
<p>Each container gets its <strong>own set of namespaces</strong>.</p>
<p>Example system:</p>
<pre><code class="language-plaintext">Host Machine
 ├ Container A
 │   ├ PID namespace A
 │   ├ Network namespace A
 │   └ Mount namespace A
 │
 └ Container B
     ├ PID namespace B
     ├ Network namespace B
     └ Mount namespace B
</code></pre>
<p>Because of this isolation:</p>
<p>Container A <strong>cannot see</strong></p>
<pre><code class="language-plaintext">Processes of Container B
Network interfaces of Container B
Filesystem of Container B
</code></pre>
<p>All containers share the same kernel, but each sees a different <strong>isolated view of the system</strong>.</p>
<hr />
<h2>What Namespaces Do NOT Do</h2>
<p>Namespaces isolate resources.</p>
<p>But they <strong>do not limit resource usage</strong>.</p>
<p>For example, one container might consume:</p>
<pre><code class="language-plaintext">100% CPU
20 GB RAM
</code></pre>
<p>Even though it is isolated, it can still exhaust system resources.</p>
<p>This is where <strong>cgroups</strong> come in.</p>
<hr />
<h2>What Are cgroups?</h2>
<p><strong>cgroups (Control Groups)</strong> limit how much system resources a process can use.</p>
<p>They control things like:</p>
<pre><code class="language-plaintext">CPU usage
Memory usage
Disk I/O
Network bandwidth
</code></pre>
<p>Docker uses cgroups to enforce resource limits.</p>
<p>Example:</p>
<pre><code class="language-shell">docker run -m 512m myapp
</code></pre>
<p>This limits the container to <strong>512 MB of RAM</strong>.</p>
<p>The Linux kernel enforces this using cgroups.</p>
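<p>cgroup membership is also visible from userland (Linux-only; the exact layout depends on whether the host runs cgroup v1 or v2):</p>
<pre><code class="language-shell"># Every process records which cgroups it belongs to.
cat /proc/self/cgroup

# On a cgroup v2 host, a limit like the 512 MB above ends up in the
# unified hierarchy, readable as .../memory.max for the container scope.
</code></pre>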
<hr />
<h2>The Real Definition of a Container</h2>
<p>A container is essentially:</p>
<pre><code class="language-plaintext">Linux Process + Namespaces (isolation) + cgroups (resource limits)
</code></pre>
<p>Docker simply makes it easier to create and manage this setup.</p>
<hr />
<h2>Why Containers Start Instantly</h2>
<p>Virtual machines must:</p>
<pre><code class="language-plaintext">boot an operating system
initialize hardware
start system services
</code></pre>
<p>Containers do none of this.</p>
<p>Starting a container simply means:</p>
<pre><code class="language-plaintext">create namespaces
apply cgroup limits
start the process
</code></pre>
<p>This is why containers start in <strong>seconds or even milliseconds</strong>.</p>
<hr />
<h2>Docker Images</h2>
<p>Docker images are <strong>read-only templates</strong> used to create containers.</p>
<p>They contain:</p>
<pre><code class="language-plaintext">application code
runtime
libraries
dependencies
</code></pre>
<p>Images use a <strong>layered filesystem</strong>.</p>
<p>Example layers:</p>
<pre><code class="language-plaintext">Base OS layer
Node runtime layer
Dependencies layer
Application code layer
</code></pre>
<p>Each instruction in a Dockerfile creates a new layer.</p>
<p>Docker caches these layers, which makes builds much faster.</p>
<hr />
<h2>A Common Misconception About Images</h2>
<p>Many people believe containers <strong>must always be created from images</strong>.</p>
<p>In Docker workflows this is usually true.</p>
<p>But technically, containers do not require images.</p>
<p>Remember:</p>
<p>A container is just a <strong>process with namespaces and cgroups</strong>.</p>
<p>Images simply provide a convenient way to package the filesystem required for that process.</p>
<hr />
<h2>Dockerfile</h2>
<p>A <strong>Dockerfile</strong> is the blueprint used to build Docker images.</p>
<p>Example:</p>
<pre><code class="language-dockerfile">FROM node:20

WORKDIR /app

COPY package.json .

RUN npm install

COPY . .

CMD ["node", "index.js"]
</code></pre>
<p>This tells Docker:</p>
<pre><code class="language-plaintext">Start from a Node.js base image
Create a working directory
Copy package.json
Install dependencies
Copy application code
Start the server
</code></pre>
<p>Build process:</p>
<pre><code class="language-plaintext">Dockerfile → Image → Container
</code></pre>
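<p>Layer caching is why the instruction order above matters: <code>package.json</code> is copied before the rest of the source, so routine code edits leave the dependency layer untouched. The same idea, annotated:</p>
<pre><code class="language-dockerfile"># Rebuilt only when package.json itself changes:
COPY package.json .
RUN npm install

# Everyday code edits invalidate only this layer and the ones after it,
# so the npm install layer is served from cache on most builds.
COPY . .
</code></pre>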
<hr />
<h2>Container Networking</h2>
<p>Containers often need to communicate with each other.</p>
<p>Docker creates virtual networks to enable this.</p>
<p>Example system:</p>
<pre><code class="language-plaintext">Container A → API
Container B → Database
Container C → Cache
</code></pre>
<p>Docker automatically assigns IP addresses and allows containers to communicate using service names.</p>
<p>Under the hood this uses:</p>
<pre><code class="language-plaintext">network namespaces
virtual ethernet interfaces
linux bridges
</code></pre>
<hr />
<h2>Docker Volumes</h2>
<p>Containers are <strong>ephemeral</strong>.</p>
<p>If a container is deleted, its data disappears.</p>
<p>To persist data, Docker uses <strong>volumes</strong>.</p>
<p>A volume is a directory on the host filesystem mounted into the container.</p>
<p>Example:</p>
<pre><code class="language-plaintext">Host directory → /var/lib/docker/volumes/mydata
Container path → /data
</code></pre>
<hr />
<h2>Types of Docker Volumes</h2>
<h3>Bind Mounts</h3>
<p>You specify the exact host directory.</p>
<p>Example:</p>
<pre><code class="language-shell">docker run -v /home/user/data:/data myapp
</code></pre>
<hr />
<h3>Anonymous Volumes</h3>
<p>Docker creates a volume automatically without a name.</p>
<p>These are harder to manage.</p>
<hr />
<h3>Named Volumes</h3>
<p>Docker manages the volume with a specific name.</p>
<p>Example:</p>
<pre><code class="language-shell">docker volume create mydata
docker run -v mydata:/data myapp
</code></pre>
<p>Named volumes are easier to reuse and maintain.</p>
<hr />
<h2>Putting It All Together</h2>
<p>When you run:</p>
<pre><code class="language-shell">docker run nginx
</code></pre>
<p>Docker does roughly this:</p>
<pre><code class="language-plaintext">pull image
create namespaces
apply cgroups
configure networking
mount volumes
start the process
</code></pre>
<p>What you see as a container…</p>
<p>Is really just a <strong>Linux process running with isolation and resource limits</strong>.</p>
<hr />
<p>Now we understand how Docker actually works internally.</p>
<p>We covered:</p>
<pre><code class="language-plaintext">Linux namespaces
cgroups
Docker images
Docker volumes
container networking
</code></pre>
<p>But another problem still remains.</p>
<p>How do we run <strong>hundreds or thousands of containers across multiple machines</strong>?</p>
<p>How are they scheduled, scaled, and monitored?</p>
<p>That is where <strong>Kubernetes</strong> comes in.</p>
]]></content:encoded></item><item><title><![CDATA[Containerization vs Orchestration]]></title><description><![CDATA[Why Both Exist?
Modern applications rarely run as a single program anymore.
A typical backend system today might look like this:
API Service
Authentication Service
Payment Service
Notification Service]]></description><link>https://blog.akananthu.com/containerization-vs-orchestration</link><guid isPermaLink="true">https://blog.akananthu.com/containerization-vs-orchestration</guid><category><![CDATA[Docker]]></category><category><![CDATA[containerization]]></category><category><![CDATA[Devops]]></category><category><![CDATA[deployment]]></category><category><![CDATA[Orchestration]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Sun, 15 Mar 2026 10:58:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/04d6d516-2ab1-4b37-9bfe-ae549c29413c.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Why Both Exist?</h3>
<p>Modern applications rarely run as a single program anymore.</p>
<p>A typical backend system today might look like this:</p>
<pre><code class="language-plaintext">API Service
Authentication Service
Payment Service
Notification Service
Database
Cache
</code></pre>
<p>Each component runs independently.</p>
<p>Sometimes even in different programming languages.</p>
<pre><code class="language-plaintext">User service → Node.js
Payment service → Java
Analytics service → Python
</code></pre>
<p>This architecture is powerful.</p>
<p>But it introduces a different problem.</p>
<p>Not a coding problem.</p>
<p>A <strong>deployment problem</strong>.</p>
<h3>A Simple Mental Model</h3>
<p>Think of modern infrastructure like running a restaurant.</p>
<p>You don't just have one person doing everything.</p>
<p>You have specialists.</p>
<pre><code class="language-plaintext">Chef
Cashier
Delivery
Inventory
Cleaning
</code></pre>
<p>Each person handles a specific task.</p>
<p>Now imagine managing this restaurant manually.</p>
<p>You would need to:</p>
<ul>
<li><p>hire people</p>
</li>
<li><p>replace them if they leave</p>
</li>
<li><p>add more during busy hours</p>
</li>
<li><p>coordinate their work</p>
</li>
</ul>
<p>Software systems face the same challenge.</p>
<p>Different services must run together.</p>
<p>They must scale when traffic increases.</p>
<p>And they must recover if something fails.</p>
<p>This is where <strong>containers</strong> and <strong>orchestration</strong> enter the picture.</p>
<hr />
<h3>The Problem That Started Everything</h3>
<p>Before containers existed, applications were deployed directly on servers.</p>
<p>A typical situation looked like this.</p>
<p>Developer machine:</p>
<pre><code class="language-plaintext">Node 18
PostgreSQL 14
Redis 6
Ubuntu
</code></pre>
<p>Production server:</p>
<pre><code class="language-plaintext">Node 16
PostgreSQL 12
Redis 5
Amazon Linux
</code></pre>
<p>The result was predictable.</p>
<p>The application worked locally.</p>
<p>But failed in production.</p>
<p>This became such a common issue that developers turned it into a running joke:</p>
<blockquote>
<p>"It works on my machine."</p>
</blockquote>
<p>Applications depend heavily on their <strong>environment</strong>.</p>
<p>Change the environment, and things can break.</p>
<p>So the industry looked for a solution.</p>
<hr />
<h3>The First Attempt: Virtual Machines</h3>
<p>The first major solution was <strong>Virtual Machines</strong>.</p>
<p>Instead of installing applications directly on servers, developers packaged them inside virtual machines.</p>
<p>A server might look like this:</p>
<pre><code class="language-plaintext">Server
 ├ VM1 → Ubuntu + App1
 ├ VM2 → Ubuntu + App2
 └ VM3 → Ubuntu + App3
</code></pre>
<p>Each VM contained everything needed to run the application.</p>
<pre><code class="language-plaintext">Operating System
System Libraries
Runtime
Application
</code></pre>
<p>This solved the environment mismatch problem.</p>
<p>But it created another issue.</p>
<p>Virtual machines were <strong>heavy</strong>.</p>
<p>Each VM included an entire operating system.</p>
<p>Typical VM images were several gigabytes in size.</p>
<p>Starting them could take minutes.</p>
<p>Running dozens of them quickly became inefficient.</p>
<p>So the industry looked for something lighter.</p>
<hr />
<h3>Enter Containers</h3>
<p>Containers took a different approach.</p>
<p>Instead of packaging an entire operating system, containers package <strong>only what the application needs</strong>.</p>
<p>A container usually includes:</p>
<pre><code class="language-plaintext">Application code
Runtime (Node, Python, etc.)
Libraries
Dependencies
</code></pre>
<p>But it does <strong>not</strong> include a full operating system.</p>
<p>This makes containers dramatically smaller.</p>
<p>And much faster to start.</p>
<p>But this raises an interesting question.</p>
<p>If containers don't have their own operating system…</p>
<p><strong>how do they run?</strong></p>
<hr />
<h3>The Host Kernel</h3>
<p>To understand this, we need to talk about the <strong>kernel</strong>.</p>
<p>Every operating system has a kernel.</p>
<p>The kernel is the core part of the OS responsible for managing:</p>
<pre><code class="language-plaintext">Processes
Memory
File systems
Networking
Hardware access
</code></pre>
<p>In a virtual machine, each VM runs its own kernel.</p>
<pre><code class="language-plaintext">Hardware
 └ Hypervisor
     └ VM
         └ Guest OS Kernel
             └ Application
</code></pre>
<p>Containers work differently.</p>
<p>Instead of running their own kernels, containers <strong>share the kernel of the host operating system</strong>.</p>
<pre><code class="language-plaintext">Hardware
 └ Host Operating System (Kernel)
     └ Container Runtime (Docker)
         ├ Container
         ├ Container
         └ Container
</code></pre>
<p>Each container still behaves like an isolated environment.</p>
<p>But under the hood, they are all using the <strong>same kernel</strong>.</p>
<p>This is what makes containers so lightweight.</p>
<p>They do not need to boot an entire operating system.</p>
<p>As a result:</p>
<pre><code class="language-plaintext">VM size → several GB
Container size → often under a few hundred MB
</code></pre>
<p>Startup times also improve dramatically.</p>
<pre><code class="language-plaintext">VM → minutes
Container → seconds
</code></pre>
<p>Containers solved the environment problem.</p>
<p>If a container runs on one machine, it will behave the same way on another.</p>
<p>But this introduced another challenge.</p>
<p>Running <strong>one container</strong> is easy.</p>
<p>Running <strong>hundreds of containers</strong> is not.</p>
<hr />
<h3>A Quick Comparison</h3>
<p>Virtual Machines vs Containers</p>
<pre><code class="language-plaintext">Virtual Machine
• Full operating system
• Large images (GBs)
• Slow startup (minutes)
• Strong isolation

Container
• Shares the host kernel
• Small images (MBs)
• Fast startup (seconds)
• Lighter isolation
</code></pre>
<p>Containers were perfect for modern applications.</p>
<p>But they created a new operational challenge.</p>
<hr />
<h3>The Next Problem</h3>
<p>Modern systems rarely run a single container.</p>
<p>A typical application might include several services:</p>
<pre><code class="language-plaintext">API Service
User Service
Payment Service
Email Service
Database
Cache
</code></pre>
<p>Each service runs in its own container.</p>
<p>Now imagine managing this in production.</p>
<p>Questions start appearing quickly.</p>
<p>What happens if:</p>
<ul>
<li><p>a container crashes?</p>
</li>
<li><p>traffic suddenly increases?</p>
</li>
<li><p>a service needs more instances?</p>
</li>
<li><p>a new version needs to be deployed?</p>
</li>
</ul>
<p>Running a few containers manually is manageable.</p>
<p>Running dozens across multiple machines is not.</p>
<p>This is where orchestration comes in.</p>
<hr />
<h3>Orchestration</h3>
<p>Orchestration is the automated management of containers.</p>
<p>Instead of manually starting and monitoring containers, an orchestration system handles it for you.</p>
<p>It takes care of things like:</p>
<pre><code class="language-plaintext">Scaling containers
Restarting failed containers
Load balancing
Service discovery
Rolling deployments
Networking between services
</code></pre>
<p>In simple terms:</p>
<p>Containerization <strong>packages applications</strong>.</p>
<p>Orchestration <strong>runs and manages them at scale</strong>.</p>
<hr />
<h3>Kubernetes</h3>
<p>The most widely used orchestration system today is <strong>Kubernetes</strong>.</p>
<p>Kubernetes manages clusters of machines and distributes containers across them.</p>
<p>A simplified cluster might look like this:</p>
<pre><code class="language-plaintext">Cluster
 ├ Worker Node
 │   ├ Container
 │   └ Container
 ├ Worker Node
 │   ├ Container
 │   └ Container
 └ Worker Node
     ├ Container
     └ Container
</code></pre>
<p>Instead of manually controlling containers, you describe the <strong>desired state</strong>.</p>
<p>For example:</p>
<pre><code class="language-plaintext">replicas: 3
</code></pre>
<p>This tells Kubernetes:</p>
<p><code>Always keep three instances of this container running.</code></p>
<p>If one container crashes, Kubernetes automatically creates a replacement.</p>
<p>If traffic increases, it can scale the system.</p>
<p>If you deploy a new version, Kubernetes can update containers gradually without downtime.</p>
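<p>In practice, that desired state lives in a manifest. A sketch of a Kubernetes Deployment (the service name and image are illustrative):</p>
<pre><code class="language-plaintext">apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 3              # desired state: always three instances
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api
          image: myregistry/api-service:1.0   # illustrative image
          ports:
            - containerPort: 3000
</code></pre>
<p>Apply it with <code>kubectl apply -f deployment.yaml</code>, and Kubernetes keeps working to make reality match the file.</p>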
<hr />
<h3>Containerization vs Orchestration</h3>
<p>These two ideas solve different layers of the same problem.</p>
<p>Containerization focuses on <strong>packaging applications</strong>.</p>
<p>Example tool:</p>
<pre><code class="language-plaintext">Docker
</code></pre>
<p>Orchestration focuses on <strong>managing containers at scale</strong>.</p>
<p>Example tool:</p>
<pre><code class="language-plaintext">Kubernetes
</code></pre>
<p>A typical workflow looks like this:</p>
<pre><code class="language-plaintext">Developer writes code
        ↓
Docker packages it into a container
        ↓
Image pushed to registry
        ↓
Kubernetes deploys containers
        ↓
Cluster runs and manages them
</code></pre>
<p>Containerization creates the building blocks.</p>
<p>Orchestration manages those blocks in production.</p>
<hr />
<h3>What Comes Next</h3>
<p>This article introduced two core ideas behind modern infrastructure:</p>
<p><strong>Containerization</strong><br /><strong>Orchestration</strong></p>
<p>But we have only scratched the surface.</p>
<p>In the upcoming articles, we will go deeper into:</p>
<ul>
<li><p>how Docker images actually work</p>
</li>
<li><p>why containers start in seconds</p>
</li>
<li><p>Linux namespaces and cgroups</p>
</li>
<li><p>container networking</p>
</li>
<li><p>how Kubernetes manages thousands of containers</p>
</li>
</ul>
<p>Once these pieces start connecting together, modern infrastructure stops looking like magic.</p>
<p>And starts making a lot more sense.</p>
]]></content:encoded></item><item><title><![CDATA[SSR vs SSG vs ISR — Part 3: The Hybrid Reality]]></title><description><![CDATA[If you read Part 1 and Part 2 carefully, you already know the “pure” forms:

Server Side Rendering (SSR) does work during every request.

Static Site Generation (SSG) does work once at build time.

In]]></description><link>https://blog.akananthu.com/ssr-vs-ssg-vs-isr-part-3-the-hybrid-reality</link><guid isPermaLink="true">https://blog.akananthu.com/ssr-vs-ssg-vs-isr-part-3-the-hybrid-reality</guid><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Wed, 04 Mar 2026 12:16:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/d95a7931-7d51-4711-8697-aec77597fe98.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you read <a href="https://ananthakrishnan.hashnode.dev/ssr-vs-ssg-vs-isr-the-request-ssr">Part 1</a> and <a href="https://ananthakrishnan.hashnode.dev/ssr-vs-ssg-vs-isr-part-2-the-build-ssg-isr">Part 2</a> carefully, you already know the “pure” forms:</p>
<ul>
<li><p><strong>Server Side Rendering (SSR)</strong> does work during every request.</p>
</li>
<li><p><strong>Static Site Generation (SSG)</strong> does work once at build time.</p>
</li>
<li><p><strong>Incremental Static Regeneration (ISR)</strong> does work at build time and occasionally in the background.</p>
</li>
</ul>
<p>In most real-world applications with dynamic data, these strategies aren’t used in isolation.</p>
<p>In reality, <strong>production code uses all three</strong> deliberately, based on <em>data consistency requirements</em>, <em>performance goals</em>, and <em>user expectations</em>.</p>
<p>This part is less about definitions and more about <strong>architectural thinking</strong> and <strong>practical usage</strong>.</p>
<p>Let’s dig in.</p>
<h3>The E-Commerce Product Page That Broke My Brain</h3>
<p>Imagine this route:</p>
<pre><code class="language-plaintext">/products/airpods-pro
</code></pre>
<p>In <a href="https://ananthakrishnan.hashnode.dev/ssr-vs-ssg-vs-isr-the-request-ssr">Part 1</a> and <a href="https://ananthakrishnan.hashnode.dev/ssr-vs-ssg-vs-isr-part-2-the-build-ssg-isr">Part 2</a>, we treated these rendering modes in isolation.</p>
<p>But a real product page is not a textbook example.</p>
<p>In production, a product page typically contains:</p>
<ul>
<li><p>Static product metadata (title, description, images)</p>
</li>
<li><p>Pricing that changes frequently</p>
</li>
<li><p>Discounts or promotions</p>
</li>
<li><p>Stock availability that can change in seconds</p>
</li>
<li><p>Reviews that grow over time</p>
</li>
<li><p>Personalisation (e.g., user-specific pricing)</p>
</li>
</ul>
<p>Now ask yourself:</p>
<blockquote>
<p>Should the entire product page be SSR?</p>
<p>Should it be SSG?</p>
<p>Should it be ISR?</p>
</blockquote>
<p>If you answered <strong>one of the above</strong>, you’re still thinking in terms of knobs — not goals.</p>
<p>Let’s shift that.</p>
<hr />
<h2>Performance + Freshness = Partial Responsibility</h2>
<p>In production, people care about:</p>
<ul>
<li><p><strong>First Contentful Paint (FCP)</strong></p>
</li>
<li><p><strong>Largest Contentful Paint (LCP)</strong></p>
</li>
<li><p><strong>Data freshness</strong></p>
</li>
<li><p><strong>Scalability</strong></p>
</li>
<li><p><strong>Cost</strong></p>
</li>
<li><p><strong>SEO</strong></p>
</li>
</ul>
<p>And any sane team will divide information based on <em>how frequently it changes</em>.</p>
<p>Some parts change once a month.</p>
<p>Others change every second.</p>
<p>Treating them the same is wasteful.</p>
<p>So the real architecture looks like this:</p>
<pre><code class="language-plaintext">Product Page
├── Static Content (SSG/ISR)
├── Frequent Dynamic Data (Client Fetch / SSR API)
└── Personalised Content (Client Fetch / SSR API)
</code></pre>
<p>Let’s unpack this through a real sequence.</p>
<hr />
<h2>Static Layout — Pre-rendered for Speed and SEO</h2>
<p>The parts that don’t change often — product title, description, images — go through:</p>
<ul>
<li><p><strong>SSG</strong> if they’re truly static</p>
</li>
<li><p><strong>ISR</strong> if they need occasional refresh</p>
</li>
<li><p>Build time or background regeneration</p>
</li>
</ul>
<p>Example (App Router):</p>
<pre><code class="language-javascript">export const revalidate = 60  // only if product info changes periodically

async function fetchProductData(slug) {
  // getProduct: the app’s data-layer helper (DB or CMS call)
  return await getProduct(slug)
}

export default async function Page({ params }) {
  const product = await fetchProductData(params.slug)

  return (
    &lt;div&gt;
      &lt;h1&gt;{product.name}&lt;/h1&gt;
      &lt;img src={product.image} /&gt;
      &lt;p&gt;{product.description}&lt;/p&gt;
      {/* The rest will come later */}
    &lt;/div&gt;
  )
}
</code></pre>
<p>This section benefits from:</p>
<ul>
<li><p><strong>Static rendering</strong> → very fast initial HTML</p>
</li>
<li><p><strong>Improved SEO</strong> → search bots see full description</p>
</li>
<li><p><strong>Low cost</strong> → no work on every request</p>
</li>
</ul>
<p>In DevTools:</p>
<ul>
<li><p>HTML loads with minimal TTFB</p>
</li>
<li><p>No blocking work for static content</p>
</li>
</ul>
<p><strong>This alone increases perceived performance.</strong></p>
<hr />
<h2>Behind the Scenes — Why Static Alone Is Not Enough</h2>
<p>But in an e-commerce site, static content is never enough.</p>
<p>Because prices and stock change constantly.</p>
<p>If your static HTML says:</p>
<pre><code class="language-plaintext">Price: ₹19,999
Stock: Available
</code></pre>
<p>But right now:</p>
<pre><code class="language-plaintext">Price: ₹18,499
Stock: Sold out
</code></pre>
<p>Then you have <em>technical SEO correctness issues</em>, <em>conversion loss</em>, or worse — <em>legal issues</em> for incorrect pricing.</p>
<p>That’s when you stop generating HTML and start calling APIs.</p>
<hr />
<h2>Client-Side Fetching: Fresh Data, Fast Static</h2>
<p>Once the static HTML is delivered, we fill in the dynamic pieces on the client:</p>
<pre><code class="language-javascript">"use client"

import { useEffect, useState } from "react"

function ProductDynamic({ slug }) {
  const [data, setData] = useState(null)
  const [loading, setLoading] = useState(true)

  useEffect(() =&gt; {
    async function fetchLiveData() {
      const res = await fetch(`/api/products/${slug}/live-data`)
      const json = await res.json()
      setData(json)
      setLoading(false)
    }

    fetchLiveData()
  }, [slug])

  if (loading) return &lt;p&gt;Loading latest price...&lt;/p&gt;

  return (
    &lt;&gt;
      &lt;p&gt;Price: ₹{data.price}&lt;/p&gt;
      &lt;p&gt;Stock: {data.stock}&lt;/p&gt;
    &lt;/&gt;
  )
}

export default ProductDynamic;
</code></pre>
<ul>
<li><p>Fresh data</p>
</li>
<li><p>No blocking HTML generation</p>
</li>
<li><p>Instant response</p>
</li>
<li><p>Separation of concerns</p>
</li>
</ul>
<p>And this fetch happens in the browser:</p>
<pre><code class="language-plaintext">Browser → GET /api/products/airpods-pro/live-data
</code></pre>
<p>Which means:</p>
<ul>
<li><p>Price and stock are up to date</p>
</li>
<li><p>API returns correct values from DB</p>
</li>
<li><p>No server rendering needed for the entire page</p>
</li>
<li><p>Only relevant data is fetched</p>
</li>
</ul>
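<p>On the server side, that endpoint can be a small App Router route handler. This is only a sketch: the file path <code>app/api/products/[slug]/live-data/route.js</code> and the <code>getLiveProductData</code> helper are assumptions, not code from the actual demo:</p>
<pre><code class="language-javascript">// Sketch: app/api/products/[slug]/live-data/route.js (path assumed)

// Hypothetical data-layer helper standing in for a real database read
async function getLiveProductData(slug) {
  return { slug, price: 18499, stock: "Available" }
}

// Route handlers can respond with the standard Web Response API
export async function GET(request, { params }) {
  const live = await getLiveProductData(params.slug)
  return Response.json(live)
}
</code></pre>
<p>Because only this endpoint touches the database, the static page stays cacheable while price and stock stay live.</p>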
<p>You’ll see this in DevTools:</p>
<ul>
<li><p>HTML loads instantly</p>
</li>
<li><p>Fetch/XHR calls fill in dynamic content after HTML</p>
</li>
</ul>
<p>This is how hybrid actually feels.</p>
<hr />
<h2>The Revalidation API — The Missing Piece</h2>
<p>Static content can become stale, and waiting for a timer is often not acceptable.</p>
<p>Imagine:</p>
<ul>
<li><p>A flash sale just started</p>
</li>
<li><p>A discount just went live</p>
</li>
<li><p>Inventory suddenly dropped</p>
</li>
</ul>
<p>Waiting 60 seconds for ISR to regenerate is not great.</p>
<p>So production systems often use:</p>
<blockquote>
<p><strong>On-Demand Revalidation API</strong></p>
</blockquote>
<p>Next.js provides this for exactly this purpose.</p>
<p>Instead of waiting for a 60-second revalidate window, we <strong>tell Next.js exactly when to regenerate a page</strong>.</p>
<p>Here’s how it looks:</p>
<pre><code class="language-javascript">// app/api/revalidate/route.js
import { NextResponse } from "next/server"
import { revalidatePath } from "next/cache"

export async function POST(req) {
  const { slug } = await req.json()

  revalidatePath(`/products/${slug}`)  // regenerate just that page

  return NextResponse.json({ revalidated: true })
}
</code></pre>
<p>Now when the admin updates the price:</p>
<ol>
<li><p>The admin panel updates the DB</p>
</li>
<li><p>It calls:</p>
<pre><code class="language-javascript">await fetch('/api/revalidate', {
  method: 'POST',
  body: JSON.stringify({ slug })
})
</code></pre>
</li>
<li><p>Next.js runs the revalidation function, deletes the cached HTML for that path, and the next incoming request triggers regeneration of the page.</p>
</li>
<li><p>All future requests serve fresh static HTML</p>
</li>
</ol>
<p>With this:</p>
<ul>
<li><p>No waiting for timer</p>
</li>
<li><p>No stale pages for long windows</p>
</li>
<li><p>Precise control over regeneration</p>
</li>
</ul>
<p>This is how production sites keep static pages fresh <strong>without blocking users</strong>.</p>
<hr />
<h2>Why Production Sites Use This Hybrid Pattern</h2>
<p>Let’s revisit the key trade-offs:</p>
<h3>SSR</h3>
<ul>
<li><p>Always fresh</p>
</li>
<li><p>Expensive per request</p>
</li>
<li><p>Higher TTFB</p>
</li>
<li><p>Slow at scale</p>
</li>
</ul>
<h3>SSG</h3>
<ul>
<li><p>Very fast</p>
</li>
<li><p>Pre-rendered</p>
</li>
<li><p>Great SEO</p>
</li>
<li><p>Static until next build</p>
</li>
</ul>
<h3>ISR</h3>
<ul>
<li><p>Static most of the time</p>
</li>
<li><p>Fresh occasionally</p>
</li>
<li><p>Low cost</p>
</li>
<li><p>Good compromise</p>
</li>
</ul>
<h3>Hybrid (SSG/ISR + Client Fetch + Revalidation)</h3>
<ul>
<li><p>Static for layout</p>
</li>
<li><p>Fresh dynamic data</p>
</li>
<li><p>On-demand regeneration</p>
</li>
<li><p>Scales well</p>
</li>
<li><p>Fast responses</p>
</li>
</ul>
<p>This pattern aligns with production requirements:</p>
<ul>
<li><p>Fast initial load</p>
</li>
<li><p>Correct content</p>
</li>
<li><p>Scalable infrastructure</p>
</li>
<li><p>SEO-friendly output</p>
</li>
</ul>
<p>When a new price is set:</p>
<ol>
<li><p>Layout was static</p>
</li>
<li><p>Dynamic fetch pulls in real pricing</p>
</li>
<li><p>Revalidate API regenerates static output</p>
</li>
<li><p>No stale content lives long</p>
</li>
</ol>
<p>This is not hypothetical.</p>
<p>This is how teams actually build commerce platforms today.</p>
<hr />
<h2>Observability in DevTools</h2>
<p>When exploring this hybrid pattern:</p>
<h3>HTML (Doc)</h3>
<ul>
<li><p>Built static</p>
</li>
<li><p>Instant load</p>
</li>
<li><p>Minimal TTFB</p>
</li>
</ul>
<h3>Fetch / XHR</h3>
<ul>
<li><p>Client requests dynamic data</p>
</li>
<li><p>Price and stock arrive separately</p>
</li>
</ul>
<h3>Revalidation Trigger</h3>
<ul>
<li><p>Revalidate API call</p>
</li>
<li><p>You won’t see it in Doc timing</p>
</li>
<li><p>But you’ll see the updated HTML on the next request</p>
</li>
</ul>
<p>This separation is the architectural signature of hybrid systems.</p>
<hr />
<h2>Final Reflection</h2>
<p>The real confusion around SSR, SSG, and ISR doesn’t come from the concepts themselves.</p>
<p>It comes from the fact that <strong>no real production system can be explained with only one strategy.</strong></p>
<p>It’s not:</p>
<pre><code class="language-plaintext">“This page is SSR”
</code></pre>
<p>or</p>
<pre><code class="language-plaintext">“This page is SSG”
</code></pre>
<p>It’s:</p>
<pre><code class="language-plaintext">“This page’s layout is static,
 this page’s dynamic content is fetched,
 and we regenerate only when needed.”
</code></pre>
<p>That is the hybrid reality.</p>
<p>And it’s only visible when you stop thinking in definitions,</p>
<p>and start thinking about:</p>
<ul>
<li><p>timelines</p>
</li>
<li><p>execution models</p>
</li>
<li><p>when work happens</p>
</li>
<li><p>how content stays fresh</p>
</li>
</ul>
<p>Which is exactly why we built this series.</p>
<h3>The End.</h3>
]]></content:encoded></item><item><title><![CDATA[SSR vs SSG vs ISR — Part 2: The Build (SSG & ISR)]]></title><description><![CDATA[In Part 1, I observed SSR the only way that made sense to me — through the Network tab.
Every refresh cost ~800ms.The browser waited.The server worked.The timestamp changed.
It was obvious what was ha]]></description><link>https://blog.akananthu.com/ssr-vs-ssg-vs-isr-part-2-the-build-ssg-isr</link><guid isPermaLink="true">https://blog.akananthu.com/ssr-vs-ssg-vs-isr-part-2-the-build-ssg-isr</guid><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Wed, 04 Mar 2026 11:31:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/7593bbcf-8ddb-4a97-a21c-3751afb7effb.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In <a href="https://ananthakrishnan.hashnode.dev/ssr-vs-ssg-vs-isr-the-request-ssr">Part 1</a>, I observed SSR the only way that made sense to me — through the Network tab.</p>
<p>Every refresh cost ~800ms.<br />The browser waited.<br />The server worked.<br />The timestamp changed.</p>
<p>It was obvious what was happening.</p>
<p>The expensive work was happening during the request.</p>
<p>So for Part 2, I wanted to answer a different question:</p>
<blockquote>
<p>What happens if we move that work somewhere else?</p>
</blockquote>
<hr />
<h2>Moving the Work to the Build</h2>
<p>For SSG, I created two routes:</p>
<pre><code class="language-plaintext">/pages-ssg
/app-ssg
</code></pre>
<p>They look exactly like the SSR example.</p>
<p>Same artificial 800ms delay.</p>
<p>Same <code>Generated At</code> timestamp.</p>
<p>Same UI.</p>
<p>The only difference is when the data is fetched.</p>
<p>In the Pages Router, I used <code>getStaticProps</code>.</p>
<p>In the App Router, I didn’t add anything special.</p>
<p>And that part surprised me.</p>
<p>Because in the App Router, static rendering is the default.</p>
<p>You don’t opt into SSG.</p>
<p>You opt out of it.</p>
<p>If your route doesn’t use dynamic APIs like <code>cookies()</code> or <code>headers()</code>,</p>
<p>and you don’t force dynamic rendering,</p>
<p>Next.js quietly pre-renders it at build time.</p>
<p>That’s the baseline.</p>
<hr />
<h2>Something Broke — And That’s When It Clicked</h2>
<p>Initially, I reused the same fetch call from the SSR example:</p>
<pre><code class="language-javascript">await fetch("http://localhost:3000/api/product")
</code></pre>
<p>It worked perfectly in SSR.</p>
<p>Then I ran:</p>
<pre><code class="language-plaintext">npm run build
</code></pre>
<p>And it failed.</p>
<p>It wasn’t a subtle failure either.</p>
<p>It just couldn’t connect.</p>
<p>At first, that felt strange.</p>
<p>But then it became obvious.</p>
<p>During build time, there is no server running at <code>localhost:3000</code>.</p>
<p>There are no API routes listening.</p>
<p>There is nothing to respond.</p>
<p>SSG doesn’t execute inside a running Next.js server.</p>
<p>It executes during compilation.</p>
<p>That’s a completely different phase.</p>
<p>So I removed the API call and extracted the logic into a function:</p>
<pre><code class="language-typescriptreact">const getProduct = async () =&gt; {
  await new Promise((resolve) =&gt; setTimeout(resolve, 800));
  return {
    name: "Sample Product",
    price: Math.floor(Math.random() * 1000),
    generatedAt: new Date().toISOString(),
  };
};
</code></pre>
<p>Now the same function runs:</p>
<ul>
<li><p>During request time in SSR</p>
</li>
<li><p>During build time in SSG</p>
</li>
<li><p>During regeneration in ISR</p>
</li>
</ul>
<p>Same logic.</p>
<p>Different execution phase.</p>
<p>That distinction matters more than any definition.</p>
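<p>Concretely, the wiring looks like this. A sketch only — in the repo the two wrappers live in separate routes, and the helper is the same function rewritten without arrow syntax:</p>
<pre><code class="language-javascript">// Shared data logic, identical across strategies
async function getProduct() {
  await new Promise(function (resolve) { setTimeout(resolve, 800) })
  return {
    name: "Sample Product",
    price: Math.floor(Math.random() * 1000),
    generatedAt: new Date().toISOString(),
  }
}

// SSR: runs on every request
export async function getServerSideProps() {
  return { props: { product: await getProduct() } }
}

// SSG / ISR: runs at build time, and again on each ISR regeneration
export async function getStaticProps() {
  return { props: { product: await getProduct() }, revalidate: 10 }
}
</code></pre>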
<hr />
<h2>Observing SSG in Production</h2>
<p>After building:</p>
<pre><code class="language-plaintext">npm run build
npm start
</code></pre>
<p>I opened DevTools again.</p>
<p>Network → Doc → Timing.</p>
<p>Then refreshed <code>/pages-ssg</code>.</p>
<p>And this is what I saw:</p>
<img src="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/c533d62b-4904-46d3-91fc-b857c77fefde.png" alt="DevTools Timing for the SSG page: no server wait before the response" style="display:block;margin:0 auto" />

<p>There was no 800ms wait.</p>
<p>The timestamp didn’t change.</p>
<p>The server wasn’t doing anything during the request.</p>
<p>The expensive work had already happened.</p>
<p>SSG doesn’t make your request faster.</p>
<p>It removes work from the request entirely.</p>
<p>The browser just downloads a pre-generated HTML file.</p>
<p>That’s it.</p>
<h3>Dev Mode Lies</h3>
<p>If you test SSG in development mode, it won’t look like this.</p>
<p>In development, Next.js prioritises fast refresh and developer experience.</p>
<p>It doesn’t aggressively optimise static output.</p>
<p>So if you want to understand how SSG really behaves:</p>
<p>Always test in production mode.</p>
<p>That was one of the biggest sources of confusion for me.</p>
<h3>Then Comes ISR</h3>
<p>ISR felt like a compromise.</p>
<p>Not fully static.<br />Not fully dynamic.</p>
<p>In Pages Router:</p>
<pre><code class="language-javascript">export async function getStaticProps() {
  console.log("ISR: Generating static props...");
  return {
    props: {
      product: {
        name: "Sample ISR Product",
        price: Math.floor(Math.random() * 1000),
        generatedAt: new Date().toISOString(),
      },
    },
    revalidate: 10, // regenerate at most once every 10 seconds
  };
}
</code></pre>
<p>In App Router:</p>
<pre><code class="language-javascript">export const revalidate = 10
</code></pre>
<p>That’s it.</p>
<p>No <code>force-static</code>.</p>
<p>No <code>force-dynamic</code>.</p>
<p>Just a revalidation window.</p>
<p>And again, no call to <code>localhost:3000/api/product</code>.</p>
<p>Because the initial render still happens at build time.</p>
<p>If there’s no server running during build, there’s nothing to call.</p>
<p>That constraint forces you to think differently about where your data comes from.</p>
<hr />
<h2>What ISR Actually Does</h2>
<p>Right after build, ISR behaves exactly like SSG.</p>
<p>Fast.</p>
<p>Static.</p>
<p>Stable timestamp.</p>
<img src="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/6b24abc8-996d-4059-8901-579659234070.png" alt="ISR page right after build: served statically with a stable timestamp" style="display:block;margin:0 auto" />

<p>Then I waited 10 seconds.</p>
<p>Refreshed.</p>
<p>And something interesting happened.</p>
<img src="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/4841909a-84fe-4aad-9d00-c5642e8070b8.png" alt="ISR page refreshed after the revalidate window: old HTML served while regeneration runs in the background" style="display:block;margin:0 auto" />

<p>The first request after the revalidate window did not block for 800ms.</p>
<p>It still served the old HTML immediately.</p>
<p>But in the background, regeneration started.</p>
<p>Only after regeneration completed did the timestamp and price update on subsequent requests.</p>
<p>That’s the subtle but powerful difference.</p>
<p>ISR does not render per request.</p>
<p>It renders per interval.</p>
<p>And during regeneration, it serves stale content instead of blocking.</p>
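<p>As a timeline, with <code>revalidate: 10</code>:</p>
<pre><code class="language-plaintext">t = 0s       build           → HTML v1 generated
t = 0–10s    requests        → serve v1 instantly
t = 12s      first request   → serve v1 (stale), start regenerating v2
t ≈ 12.8s    regeneration done (the 800 ms work, off the request path)
t = 13s+     requests        → serve v2 instantly
</code></pre>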
<p>It shifts compute away from users most of the time.</p>
<hr />
<h2>Reframing Everything</h2>
<p>At this point, the terminology stopped mattering to me.</p>
<p>What mattered was this:</p>
<ul>
<li><p>SSR → Work happens during request.</p>
</li>
<li><p>SSG → Work happens during build.</p>
</li>
<li><p>ISR → Work happens during build and occasionally in the background.</p>
</li>
</ul>
<p>That’s the entire model.</p>
<p>Everything else — performance, scalability, freshness — flows from that decision.</p>
<p>Once you see it in the Network tab,</p>
<p>it stops being abstract.</p>
<p>You can measure it.</p>
<p>You can reason about it.</p>
<p>And most importantly —</p>
<p>You can choose intentionally.</p>
<hr />
<p>In Part 3, we’ll leave the controlled experiment and look at real applications.</p>
<p>Because in production, almost nothing is purely SSR or purely SSG.</p>
<p>And that’s where the real architectural decisions begin.</p>
]]></content:encoded></item><item><title><![CDATA[SSR vs SSG vs ISR - Part 1: The Request (SSR)]]></title><description><![CDATA[When I started this experiment , I thought SSR was simple:

“SSR renders on every request.”

But when I tried to observe it in practice, something clicked:
All the diagrams in blogs and docs tell you ]]></description><link>https://blog.akananthu.com/ssr-vs-ssg-vs-isr-the-request-ssr</link><guid isPermaLink="true">https://blog.akananthu.com/ssr-vs-ssg-vs-isr-the-request-ssr</guid><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Wed, 04 Mar 2026 08:37:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/ac0e9edd-0537-4307-b1f3-ff96b4e65c1c.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I started this <a href="https://github.com/akanandhu/SSRxSSGxISR">experiment</a> , I thought SSR was simple:</p>
<blockquote>
<p>“SSR renders on every request.”</p>
</blockquote>
<p>But when I tried to observe it in practice, something clicked:</p>
<p>All the diagrams in blogs and docs tell you what,<br />but none tell you <em>why it feels slow</em> or <em>what the browser actually waits for</em>.</p>
<p>So I set up a simple route in my test project:</p>
<pre><code class="language-plaintext">/pages-ssr   // For Pages Router SSR
/app-ssr     // For App Router SSR
</code></pre>
<p>Both do the same thing:</p>
<ul>
<li><p>simulate an <strong>800 ms delay</strong></p>
</li>
<li><p>fetch “data”</p>
</li>
<li><p>return a timestamp</p>
</li>
<li><p>render a simple UI</p>
</li>
</ul>
<p>Everything else remains constant.</p>
<p>Same UI.<br />Same data.<br />Same delay.</p>
<p>Only the <em>rendering strategy</em> changes.</p>
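<p>For reference, the Pages Router route boils down to this (a sketch of the repo’s code; exact names may differ):</p>
<pre><code class="language-javascript">// pages-ssr: everything below runs on the server, on every request
export async function getServerSideProps() {
  // simulate the 800 ms data fetch
  await new Promise(function (resolve) { setTimeout(resolve, 800) })
  return { props: { generatedAt: new Date().toISOString() } }
}
</code></pre>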
<h3>SSR — Not Theory, But What Happens</h3>
<p>Here’s what I observed in DevTools:</p>
<ol>
<li><p>Open Chrome DevTools → Network</p>
</li>
<li><p>Filter by <strong>Doc</strong></p>
</li>
<li><p>Reload <code>/pages-ssr</code></p>
</li>
<li><p>Click the document request</p>
</li>
<li><p>Go to the <strong>Timing</strong> tab</p>
</li>
<li><p>Look at <strong>Waiting for server response</strong></p>
</li>
</ol>
<p>This is the important part:</p>
<img src="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/d2c2a27a-3f88-4f78-b0c6-31e1b516476e.png" alt="DevTools Timing tab showing ~800 ms of Waiting for server response" style="display:block;margin:0 auto" />

<p>You’ll see:</p>
<ul>
<li><p><strong>Waiting ~800 ms</strong></p>
</li>
<li><p>Content download ~negligible</p>
</li>
<li><p>Timestamp changes on every refresh</p>
</li>
</ul>
<p>The browser <em>waits</em> until the server is done doing everything — data fetch, render, HTML generation — before it ever gets a byte of HTML.  </p>
<p>That’s what SSR really means.</p>
<p>The server is doing work <em>synchronously</em> before responding.<br />No pre-rendered file.<br />No caching.<br />Just work on every request.</p>
<p>In older blog posts you’d often hear:</p>
<blockquote>
<p>“SSR increases TTFB.”</p>
</blockquote>
<p>But Time to First Byte (TTFB) didn’t become meaningful to me until I saw it in the timing breakdown:</p>
<p>📌 <strong>The server doesn’t send HTML until rendering completes.</strong><br />📌 <strong>The browser genuinely waits.</strong><br />📌 <strong>Most of that wait is server compute.</strong></p>
<p>That’s the real mechanism.</p>
<h2>Why SSR Isn’t Just a Buzzword</h2>
<p>To understand why this matters, consider what SSR <em>promises</em>:</p>
<ul>
<li>Fresh data on every request</li>
</ul>
<p>And what it <em>costs</em>:</p>
<ul>
<li><p>You pay the rendering cost each time</p>
</li>
<li><p>You pay it before the browser gets HTML</p>
</li>
<li><p>That cost shows up as TTFB</p>
</li>
</ul>
<p>In my experiment:</p>
<pre><code class="language-plaintext">Waiting (TTFB): ~800 ms
Content Download: ~0.5 ms
</code></pre>
<p>Look at that green bar in the timing tab. That is total server compute.</p>
<img src="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/8160e9f8-276a-4f10-b3c4-a4939f078345.png" alt="DevTools Timing tab: the green Waiting (TTFB) bar spans ~800 ms" style="display:block;margin:0 auto" />

<p>No network, no JS. Just server work before response.</p>
<p>That’s SSR.</p>
<p>If you refresh 10 times, the timestamp changes every time and the ~800 ms wait repeats, because every request triggers a new render.</p>
<h2>Pages Router vs App Router — Same Outcome, Different API</h2>
<p>In the Pages Router, we force SSR using:</p>
<pre><code class="language-javascript">export async function getServerSideProps() { … }
</code></pre>
<p>In the App Router, we achieve SSR using:</p>
<pre><code class="language-javascript">export const dynamic = "force-dynamic";
</code></pre>
<p>Different syntax.</p>
<p>Same real behavior.</p>
<p>From the browser’s perspective:</p>
<ul>
<li><p>HTML is generated per request</p>
</li>
<li><p>The timing breakdown is the same</p>
</li>
<li><p>TTFB reflects server work before response</p>
</li>
</ul>
<p>The difference is just how Next.js expresses it.</p>
<h2>What I Learned About SSR</h2>
<p>Here’s the important insight:</p>
<p>SSR is not about React.</p>
<p>SSR is about <em>when</em> HTML is generated.</p>
<p>You can use it when:</p>
<ul>
<li><p>You must guarantee fresh data every time</p>
</li>
<li><p>You cannot cache at any level</p>
</li>
<li><p>Every user needs up-to-the-millisecond correctness</p>
</li>
</ul>
<p>But you pay for it by:</p>
<ul>
<li><p>Blocking the request</p>
</li>
<li><p>Increasing TTFB</p>
</li>
<li><p>Executing server compute per request</p>
</li>
</ul>
<p>That’s the real SSR execution model — and now we can <em>measure</em> it.</p>
<hr />
<p>In the next <strong>Part</strong>, we’ll flip the model completely.</p>
<p>We’ll move the rendering cost out of the request and into the build.</p>
<p>And that change — from per-request to pre-generated — will change everything.</p>
]]></content:encoded></item><item><title><![CDATA[SSR vs SSG vs ISR —  Part 0: Same Same But Different]]></title><description><![CDATA[This Is Not Another Definition Post
I’ve read the official documentation.I’ve read blog posts.I’ve seen the diagrams:

SSR → renders on every request

SSG → renders at build time

ISR → static but rev]]></description><link>https://blog.akananthu.com/ssr-vs-ssg-vs-isr-what-actually-happens</link><guid isPermaLink="true">https://blog.akananthu.com/ssr-vs-ssg-vs-isr-what-actually-happens</guid><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Sat, 28 Feb 2026 18:51:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65ec0b64d124c1c86ca0fb08/b767510c-8b2c-43d3-8641-f6828c399389.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>This Is Not Another Definition Post</h2>
<p>I’ve read the official documentation.<br />I’ve read blog posts.<br />I’ve seen the diagrams:</p>
<ul>
<li><p><strong>SSR</strong> → renders on every request</p>
</li>
<li><p><strong>SSG</strong> → renders at build time</p>
</li>
<li><p><strong>ISR</strong> → static but revalidates</p>
</li>
</ul>
<p>I understood the definitions.</p>
<p>But I still had confusion.</p>
<p>Not about what they are called —<br />but about how they actually work and when to choose one over the other.</p>
<p>Questions like:</p>
<ul>
<li><p>Why exactly does SSR increase TTFB?</p>
</li>
<li><p>Why does SSG feel instant?</p>
</li>
<li><p>What is ISR really doing behind the scenes?</p>
</li>
<li><p>What does Next.js use by default?</p>
</li>
</ul>
<p>Instead of reading more explanations, I decided to observe the behaviour directly.</p>
<p>So I built a small controlled experiment.</p>
<p><a href="https://github.com/akanandhu/SSRxSSGxISR">Link to Repository</a></p>
<h2>What This Project Actually Is</h2>
<p>This <a href="https://github.com/akanandhu/SSRxSSGxISR">repository</a> is not a real application.</p>
<p>It is a small demo app built to make the differences between SSR, SSG, and ISR directly measurable.</p>
<p>Each route in the project:</p>
<ul>
<li><p>Simulates an 800ms data fetch</p>
</li>
<li><p>Displays a <code>Generated At</code> timestamp</p>
</li>
<li><p>Uses exactly one rendering strategy</p>
</li>
</ul>
<p>Everything else remains constant.</p>
<ul>
<li><p>Same UI</p>
</li>
<li><p>Same data</p>
</li>
<li><p>Same artificial delay</p>
</li>
</ul>
<p>The only variable that changes is the rendering model.</p>
<p>This allows us to observe the impact of each strategy purely at the network and execution level.</p>
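<p>As a rough sketch (the helper and its exact shape are illustrative, not copied from the repo), the shared data fetch behind every route looks something like this:</p>

```javascript
// Illustrative per-route data helper (names are hypothetical): every
// rendering strategy awaits the same artificial delay, so any timing
// difference we observe comes from *when* this runs, not *how long*.
async function getData() {
  // Simulate a slow upstream API call
  await new Promise((resolve) => setTimeout(resolve, 800));
  // The timestamp reveals when the HTML was generated
  return { generatedAt: new Date().toISOString() };
}
```

<p>With SSR this runs on every request; with SSG it runs once at build time — which is exactly the difference the timestamps make visible.</p>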
<p>For the best understanding, I recommend cloning the repository and running it locally.<br />If you prefer not to, I will include screenshots throughout the series so you can still follow the analysis.</p>
<h2>What We Are Actually Comparing</h2>
<p>We are not comparing APIs.</p>
<p>We are not comparing router styles.</p>
<p>We are comparing one fundamental question:</p>
<blockquote>
<p>When is the HTML generated?</p>
</blockquote>
<p>That is the only axis that truly matters.</p>
<table>
<thead>
<tr>
<th>Strategy</th>
<th>When HTML Is Generated</th>
</tr>
</thead>
<tbody><tr>
<td>SSR</td>
<td>At request time</td>
</tr>
<tr>
<td>SSG</td>
<td>At build time</td>
</tr>
<tr>
<td>ISR</td>
<td>At build time + background regeneration</td>
</tr>
</tbody></table>
<p>Everything else — performance, freshness, scalability — flows from that decision.</p>
<p>For deeper clarity, the examples in this project are implemented using both the Pages Router and the App Router.<br />This helps separate rendering strategy from router implementation details.</p>
<h2>How to Explore the Repository</h2>
<p>To properly understand what is happening, do not just read the code.</p>
<p>Open DevTools.</p>
<p>For each route:</p>
<ol>
<li><p>Go to <strong>Network</strong></p>
</li>
<li><p>Filter by <strong>Doc</strong></p>
</li>
<li><p>Click the document request</p>
</li>
<li><p>Open the <strong>Timing</strong> tab</p>
</li>
</ol>
<p>Observe:</p>
<ul>
<li><p>Waiting for server response (TTFB)</p>
</li>
<li><p>Content download time</p>
</li>
<li><p>Total request duration</p>
</li>
<li><p>Timestamp behaviour across refreshes</p>
</li>
</ul>
<p>Do not rely only on the UI output.</p>
<p>Observe the network waterfall.</p>
<p>That is where the execution model becomes clear.</p>
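<p>If you want a second opinion outside the browser, a tiny Node script (a sketch — the helper name is mine, and you supply the URL) can approximate TTFB by timing until the response headers arrive:</p>

```javascript
// Hypothetical TTFB probe — approximates DevTools' "Waiting for server
// response" by timing from request start until headers are received.
// Assumes Node 18+ for the global fetch().
async function measureTtfb(url) {
  const start = performance.now();
  const res = await fetch(url); // resolves once response headers arrive
  const ttfb = performance.now() - start;
  await res.arrayBuffer(); // drain the body so the connection is released
  return ttfb;
}
```

<p>Run it a few times against each route: the SSR route should consistently report roughly 800ms more than the static ones.</p>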
<h2>The Structure of This Series</h2>
<p>We will start from the definition level and progressively go deeper.</p>
<p>This series is divided into three focused parts.</p>
<h3>Part 1 — The Request (SSR)</h3>
<p>We will analyze:</p>
<ul>
<li><p>Why every refresh costs ~800ms</p>
</li>
<li><p>Why TTFB increases</p>
</li>
<li><p>What “render per request” truly means</p>
</li>
<li><p>What the Network tab reveals about server blocking</p>
</li>
</ul>
<h3>Part 2 — The Build (SSG &amp; ISR)</h3>
<p>We will examine:</p>
<ul>
<li><p>Why SSG appears instant</p>
</li>
<li><p>Why development mode can mislead you</p>
</li>
<li><p>How ISR regeneration actually works</p>
</li>
<li><p>What happens after the revalidation window expires</p>
</li>
</ul>
<h3>Part 3 — The Hybrid Reality</h3>
<p>Finally, we connect everything to real-world architecture:</p>
<ul>
<li><p>Why most production apps use hybrid rendering</p>
</li>
<li><p>Why confusion happens in real projects</p>
</li>
<li><p>How to choose the correct strategy intentionally</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[useSleep()]]></title><description><![CDATA[Chapter 1: The 2AM Hero
There was a time when sleeping at 2AM felt super cool.
School days.Early college days.
If you slept at 10PM, you were “uncle”.If you slept at 2AM, you were elite.
Late night mo]]></description><link>https://blog.akananthu.com/usesleep</link><guid isPermaLink="true">https://blog.akananthu.com/usesleep</guid><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Fri, 20 Feb 2026 03:54:36 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/65ec0b64d124c1c86ca0fb08/9de3d489-c3fe-4e7b-a112-1eafedf38366.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Chapter 1: The 2AM Hero</h2>
<p>There was a time when sleeping at 2AM felt super cool.</p>
<p>School days.<br />Early college days.</p>
<p>If you slept at 10PM, you were “uncle”.<br />If you slept at 2AM, you were elite.</p>
<p>Late night movies.<br />Random YouTube.<br />Scrolling for no reason.<br />Thinking about the startup I built perfectly in my mind… but nowhere else.</p>
<p>Important clarification:<br />I didn’t even code till I was 22.</p>
<p>So this wasn’t “hustle”.<br />It was just vibes.</p>
<p>But the brain doesn’t care about vibes.<br />It builds habits.</p>
<h2>Chapter 2: The Side Effects</h2>
<p>Years passed.</p>
<p>2AM became default.</p>
<p>Then something weird started happening.</p>
<p>Even if I slept 9–10 hours…<br />I woke up tired.</p>
<p>Heavy head.<br />Low energy.<br />Morning irritation.</p>
<p>It felt like my body booted up in safe mode.</p>
<h2>Chapter 3: The Science (Very Light)</h2>
<p>I honestly wasn’t planning to explain the science part.</p>
<p>How can someone who hasn’t properly opened a biology textbook after 10th standard suddenly start talking about hormones and brain systems?</p>
<p>But since this is a tech blog — and we like understanding how things work under the hood — I’ll try.</p>
<p>Inside our body, there’s something called the <strong>circadian rhythm</strong>.</p>
<p>Think of it like an internal scheduler.<br />A biological clock that runs on roughly a 24-hour cycle.</p>
<p>It regulates when you feel sleepy, when you feel alert, when your body temperature drops, when hormones are released — almost everything related to your daily energy.</p>
<p>At the center of this system is a tiny part of the brain that responds mainly to light and darkness.</p>
<p>When it gets dark, your brain starts releasing <strong>melatonin</strong> — the hormone responsible for making you feel sleepy. Your alertness decreases, body temperature drops slightly, and your system begins preparing for rest.</p>
<p>Now here’s the part most of us ignore.</p>
<p>Your brain doesn’t know the difference between sunlight and your bedroom LED light.</p>
<p>Light is light.</p>
<p>Especially <strong>blue light</strong> — the kind emitted from:</p>
<ul>
<li><p>Phone screens</p>
</li>
<li><p>Laptop screens</p>
</li>
<li><p>TVs</p>
</li>
<li><p>White LED bulbs</p>
</li>
</ul>
<p>When you sit under bright light at 11:45PM scrolling or watching something, your brain interprets that as daytime.</p>
<p>So melatonin release gets delayed.</p>
<p>Which means:</p>
<ul>
<li><p>You don’t feel sleepy at the right time.</p>
</li>
<li><p>You stay awake longer without realising it.</p>
</li>
<li><p>You go to bed but your brain is still active.</p>
</li>
</ul>
<p>Even normal room lighting can reduce melatonin levels slightly. Bright LED lighting can suppress it more significantly.</p>
<p>It’s like shifting your system clock every night and expecting all scheduled processes to run properly.</p>
<p>Morning works in the opposite way.</p>
<p>When sunlight hits your eyes, melatonin drops and <strong>cortisol</strong> rises — helping you feel awake and alert. If you wake up late and avoid natural light, that signal also gets delayed.</p>
<p>So when I was sleeping at 2AM one day, 12:30AM the next, 3AM on weekends — my internal clock was constantly adjusting.</p>
<p>Even if I slept 9–10 hours, the timing was misaligned.</p>
<p>Sleep isn’t just about how long you sleep.<br />It’s about when you sleep.</p>
<p>Your body prefers consistency over chaos.</p>
<p>Once I fixed my sleep time and wake-up time — and reduced late-night light exposure — things slowly started syncing again.</p>
<p>No hacks.<br />No extreme routines.</p>
<p>Just alignment.</p>
<h2>Chapter 4: Operation Reset</h2>
<p>No 5AM challenge.<br />No motivational reels.<br />No ice bath nonsense.</p>
<p>Just one rule:</p>
<p>Sleep at the same time.<br />Wake up at the same time.<br />Every single day.</p>
<p>Even Sunday.</p>
<p>Day 1?<br />Hard.</p>
<p>After that?<br />Smooth.</p>
<p>Surprisingly, I never struggled to wake up after the first day.</p>
<p>That shocked me.</p>
<p>The body adjusts faster than the ego.</p>
<h2>Chapter 5: The Upgrade</h2>
<p>Now I wake up fresh.</p>
<p>Not because I sleep 10 hours.<br />But because I sleep consistently.</p>
<p>Brain feels clear.<br />Energy is stable.<br />Mornings feel calm.</p>
<p>Kerala mornings are underrated by the way.<br />That quiet air.<br />That slow sunlight.<br />Peaceful.</p>
<p>Earlier I used to miss all that because I was sleeping through it.</p>
<h2>Chapter 6: Final Dialogue</h2>
<p>I’m not saying wake up at 5AM.<br />I’m not saying night life is evil.</p>
<p>I’m just saying this:</p>
<p>Don’t romanticize 2AM like I did.</p>
<p>Discipline feels boring.<br />But boring things build strong systems.</p>
<p>And as a 25-year-old tech guy trying to build a life,<br />stable energy is more powerful than late night drama.</p>
<p>If you’re reading this at 1:38AM…</p>
<p>Close this tab.</p>
<p>Sleep brother.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering Dynamic URL Handling in Next.js 15 with Rewrites]]></title><description><![CDATA[As a frontend developer, I've encountered various challenges with routing in modern web applications. With the release of Next.js 15, I wanted to share a useful pattern I've implemented to handle location-based URL paths efficiently—without creating ...]]></description><link>https://blog.akananthu.com/mastering-dynamic-url-handling-in-nextjs-15-with-rewrites</link><guid isPermaLink="true">https://blog.akananthu.com/mastering-dynamic-url-handling-in-nextjs-15-with-rewrites</guid><category><![CDATA[Next.js]]></category><category><![CDATA[React]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Ananthakrishnan A]]></dc:creator><pubDate>Wed, 26 Feb 2025 12:32:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1740573005558/9181db9d-b180-4144-af45-493a9d7f2641.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a frontend developer, I've encountered various challenges with routing in modern web applications. With the release of Next.js 15, I wanted to share a useful pattern I've implemented to handle location-based URL paths efficiently—without creating redundant pages.  </p>
<p><strong>The Challenge: Location-Based Content Without the Routing Mess</strong>  </p>
<p>My application needed to display similar content for multiple locations, using URLs like:</p>
<ul>
<li><p><code>/sell-your-car-in-kochi</code></p>
</li>
<li><p><code>/sell-your-car-in-tvm</code></p>
</li>
<li><p><code>/sell-your-car-in-kollam</code></p>
</li>
</ul>
<p>The conventional approach would require creating separate pages for each location, leading to duplicated code and maintenance headaches. This is where Next.js rewrites come to the rescue.  </p>
<h2 id="heading-understanding-rewrites-in-nextjs-15">Understanding Rewrites in Next.js 15</h2>
<p>Rewrites allow you to map an incoming request path to a different destination path. Unlike redirects, rewrites don't change the URL in the browser, making them perfect for serving the same content under different URLs while preserving the original URL for SEO and user experience.</p>
<h2 id="heading-implementing-a-location-based-content-solution">Implementing a Location-Based Content Solution</h2>
<h3 id="heading-step-1-create-your-base-page">Step 1: Create Your Base Page</h3>
<p>First, I created a single page component at <code>app/sell-your-car/page.tsx</code> that would serve as the template for all location variants:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// app/sell-your-car/page.tsx</span>
<span class="hljs-comment">// In Next.js 15 (App Router), `searchParams` is a Promise and must be awaited.</span>

interface PageProps {
  searchParams: <span class="hljs-built_in">Promise</span>&lt;{
    location?: string;
  }&gt;;
}

<span class="hljs-keyword">const</span> SellYourCar = <span class="hljs-keyword">async</span> ({ searchParams }: PageProps) =&gt; {
  <span class="hljs-comment">// Extract location from search params (passed through rewrites)</span>
  <span class="hljs-keyword">const</span> { location = <span class="hljs-string">'your area'</span> } = <span class="hljs-keyword">await</span> searchParams;

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"container mx-auto py-8"</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">h1</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-3xl font-bold mb-6"</span>&gt;</span>Sell Your Car in {location}<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">section</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mb-8"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">h2</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-2xl font-semibold mb-4"</span>&gt;</span>Why Sell Your Car With Us in {location}?<span class="hljs-tag">&lt;/<span class="hljs-name">h2</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>
          We offer the best prices for your used car in {location}. Our network of buyers 
          ensures you get competitive offers quickly and hassle-free.
        <span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">section</span>&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">section</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">h2</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-2xl font-semibold mb-4"</span>&gt;</span>How It Works in {location}<span class="hljs-tag">&lt;/<span class="hljs-name">h2</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">ol</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"list-decimal pl-5"</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mb-2"</span>&gt;</span>Fill out our simple online form with your car details<span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mb-2"</span>&gt;</span>Receive instant estimated value for your car<span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mb-2"</span>&gt;</span>Schedule an inspection at our {location} center<span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mb-2"</span>&gt;</span>Get paid on the spot after inspection approval<span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">ol</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">section</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
  );
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> SellYourCar;
</code></pre>
<h3 id="heading-step-2-configure-rewrites-in-nextconfigjs">Step 2: Configure Rewrites in next.config.js</h3>
<p>Next, I set up the rewrites in my <code>next.config.js</code> file:</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// next.config.js</span>
<span class="hljs-comment">/** @type {import('next').NextConfig} */</span>
<span class="hljs-keyword">const</span> nextConfig = {
  <span class="hljs-keyword">async</span> rewrites() {
    <span class="hljs-keyword">return</span> [
      {
        source: <span class="hljs-string">'/sell-your-car-in-:location'</span>,
        destination: <span class="hljs-string">'/sell-your-car?location=:location'</span>,
      },
    ];
  },
};

<span class="hljs-built_in">module</span>.<span class="hljs-built_in">exports</span> = nextConfig;
</code></pre>
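<p>One optional refinement (a sketch, not something the setup above requires): rewrite <code>source</code> patterns support path-to-regexp syntax, so you can constrain <code>:location</code> to a known list and let every other path fall through to a 404:</p>

```javascript
// next.config.js — hypothetical variant: restrict :location to known cities
// using path-to-regexp syntax in the source pattern.
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        // Matches /sell-your-car-in-kochi, -tvm, -kollam; anything else 404s.
        source: '/sell-your-car-in-:location(kochi|tvm|kollam)',
        destination: '/sell-your-car?location=:location',
      },
    ];
  },
};

module.exports = nextConfig;
```

<p>This trades the open-ended scalability of the plain pattern for stricter URLs, so it only makes sense when the set of locations is fixed.</p>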
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>This simple yet powerful implementation solves our routing challenge elegantly. With just two files, we've created a system that can handle an unlimited number of location-based URLs without creating separate pages for each one.</p>
<p>The key benefits of this approach include:</p>
<ol>
<li><p><strong>DRY Code</strong>: One component serves multiple URLs, reducing duplication</p>
</li>
<li><p><strong>Maintainability</strong>: Updates to the page structure only need to be made in one place</p>
</li>
<li><p><strong>SEO-Friendly</strong>: Each URL preserves its unique path for search engines</p>
</li>
<li><p><strong>Performance</strong>: Only one page component needs to be loaded and cached</p>
</li>
<li><p><strong>Scalability</strong>: Adding new locations doesn't require code changes</p>
</li>
</ol>
<p>Next.js 15's rewrites feature provides the perfect solution for handling dynamic, parameter-based URLs while maintaining clean code architecture. This pattern can be applied to many other scenarios beyond location-based content – product categories, service areas, or any situation where you need similar content with different parameters.</p>
<p>By leveraging TypeScript, we also gain the benefits of type safety and better developer experience, making our implementation both robust and maintainable.</p>
<p>The best part? It's incredibly simple to implement, requiring minimal code changes to achieve powerful URL handling capabilities.</p>
]]></content:encoded></item></channel></rss>