<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Mahrab's Mindfield]]></title><description><![CDATA[Mahrab's Mindfield]]></description><link>https://blog.mahrabhossain.me</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 11:29:33 GMT</lastBuildDate><atom:link href="https://blog.mahrabhossain.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[When Time Isn't Universal: The Brilliance of Lamport's Logical Clocks (1978)]]></title><description><![CDATA[Imagine a world where "before" and "after" aren't always clear. This isn't a philosophical musing, but a very real problem in distributed systems networks of independent computers that communicate by sending messages. When each computer has its own c...]]></description><link>https://blog.mahrabhossain.me/when-time-isnt-universal-the-brilliance-of-lamports-logical-clocks-1978</link><guid isPermaLink="true">https://blog.mahrabhossain.me/when-time-isnt-universal-the-brilliance-of-lamports-logical-clocks-1978</guid><category><![CDATA[Paper Review]]></category><category><![CDATA[research-writeup]]></category><category><![CDATA[distributed system]]></category><category><![CDATA[concurrency]]></category><category><![CDATA[logical clock]]></category><category><![CDATA[algorithms]]></category><dc:creator><![CDATA[Mirza Mahrab Hossain]]></dc:creator><pubDate>Sat, 28 Jun 2025 23:20:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/ZfwypJBrRyU/upload/3668ae084aa19f72a48b9115e557589a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a world where "before" and "after" aren't always clear. 
This isn't a philosophical musing, but a very real problem in <strong>distributed systems</strong>: networks of independent computers that communicate by sending messages. When each computer has its own clock, how do you determine the true order of events across the entire system? This seemingly simple question haunted early distributed computing, until Leslie Lamport, in his groundbreaking 1978 paper, <strong>"Time, Clocks, and the Ordering of Events in a Distributed System,"</strong> offered a surprisingly elegant solution: <strong>logical clocks.</strong></p>
<p>This post offers a basic walkthrough of Lamport's core idea, using simple Python to simulate how these conceptual clocks bring order to chaos.</p>
<h2 id="heading-1-the-distributed-dilemma-what-is-time">1. The Distributed Dilemma: What is "Time"?</h2>
<p>In a single computer, time is straightforward: events happen one after another. But in a distributed system, where messages travel at varying speeds and computers have slightly different internal clocks, things get messy:</p>
<ul>
<li><p>Computer A sends a message at 10:00:01 (according to its clock).</p>
</li>
<li><p>Computer B receives it at 10:00:00 (according to <em>its</em> clock).</p>
</li>
</ul>
<p>Which event happened first? What if two events happen on different machines "at the same time" (according to their local clocks) but are causally related? Lamport realized that what truly matters isn't <em>physical</em> time synchronization, but the <strong>causal ordering of events</strong>. That is, if event A causes event B, then A <em>must</em> have happened before B.</p>
<p>He introduced the "happened-before" relation, defining it based on:</p>
<ol>
<li><p>Events within a single process.</p>
</li>
<li><p>Sending and receiving messages between processes.</p>
</li>
</ol>
<p>From this, he devised <strong>logical clocks</strong>: simple counters that ensure if A "happened-before" B, then A's logical timestamp will be less than B's.</p>
<h2 id="heading-2-how-logical-clocks-bring-order">2. How Logical Clocks Bring Order</h2>
<p>Lamport's logical clocks are simply monotonic counters (they only go up). Each process (computer) maintains its own logical clock. The rules for updating these clocks are incredibly simple:</p>
<ol>
<li><p><strong>Local Events:</strong> When a process executes an internal event (not sending or receiving a message), it increments its logical clock by one.</p>
</li>
<li><p><strong>Sending Messages:</strong> When a process sends a message, it first increments its logical clock, and then includes this new logical timestamp in the message.</p>
</li>
<li><p><strong>Receiving Messages:</strong> When a process receives a message:</p>
<ul>
<li><p>It takes the maximum of its own current clock value and the timestamp carried in the message.</p>
</li>
<li><p>It then increments that maximum by one and uses the result as the receive event's timestamp, guaranteeing it is strictly greater than both the sender's timestamp and any earlier local event. It then performs the receive event.</p>
</li>
</ul>
</li>
</ol>
<p>These three rules ensure that if event A causally precedes event B (A $\to$ B), then the logical timestamp of A will be less than the logical timestamp of B.</p>
<h2 id="heading-3-a-simple-python-simulation">3. A Simple Python Simulation</h2>
<p>Let's simulate three processes (P1, P2, P3) interacting, each with its own logical clock.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> time

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Process</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, name</span>):</span>
        self.name = name
        self.logical_clock = <span class="hljs-number">0</span>
        print(<span class="hljs-string">f"[<span class="hljs-subst">{self.name}</span>] Initializing with clock: <span class="hljs-subst">{self.logical_clock}</span>"</span>)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">event</span>(<span class="hljs-params">self, description</span>):</span>
        self.logical_clock += <span class="hljs-number">1</span>
        print(<span class="hljs-string">f"[<span class="hljs-subst">{self.name}</span>] Event '<span class="hljs-subst">{description}</span>' at clock: <span class="hljs-subst">{self.logical_clock}</span>"</span>)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">send_message</span>(<span class="hljs-params">self, receiver, message</span>):</span>
        self.logical_clock += <span class="hljs-number">1</span> <span class="hljs-comment"># Increment before sending</span>
        timestamp_to_send = self.logical_clock
        print(<span class="hljs-string">f"[<span class="hljs-subst">{self.name}</span>] Sending '<span class="hljs-subst">{message}</span>' to <span class="hljs-subst">{receiver.name}</span> with clock: <span class="hljs-subst">{timestamp_to_send}</span>"</span>)
        <span class="hljs-comment"># Simulate network delay</span>
        time.sleep(<span class="hljs-number">0.1</span>) 
        receiver.receive_message(self, message, timestamp_to_send)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">receive_message</span>(<span class="hljs-params">self, sender, message, sender_timestamp</span>):</span>
        self.logical_clock = max(self.logical_clock + <span class="hljs-number">1</span>, sender_timestamp + <span class="hljs-number">1</span>) <span class="hljs-comment"># Rule 3</span>
        print(<span class="hljs-string">f"[<span class="hljs-subst">{self.name}</span>] Receiving '<span class="hljs-subst">{message}</span>' from <span class="hljs-subst">{sender.name}</span> at clock: <span class="hljs-subst">{self.logical_clock}</span> (Sender's clock was: <span class="hljs-subst">{sender_timestamp}</span>)"</span>)

<span class="hljs-comment"># --- Simulation ---</span>
p1 = Process(<span class="hljs-string">"P1"</span>)
p2 = Process(<span class="hljs-string">"P2"</span>)
p3 = Process(<span class="hljs-string">"P3"</span>)

<span class="hljs-comment"># P1 performs some local events</span>
p1.event(<span class="hljs-string">"Task A completed"</span>)
p1.event(<span class="hljs-string">"Generating report"</span>)

<span class="hljs-comment"># P1 sends a message to P2</span>
p1.send_message(p2, <span class="hljs-string">"Report Ready"</span>)

<span class="hljs-comment"># P2 performs a local event after receiving</span>
p2.event(<span class="hljs-string">"Processing report"</span>)

<span class="hljs-comment"># P2 sends a message to P3</span>
p2.send_message(p3, <span class="hljs-string">"Data for analysis"</span>)

<span class="hljs-comment"># P1 performs another local event</span>
p1.event(<span class="hljs-string">"Final check"</span>)

<span class="hljs-comment"># P3 performs an event, then sends to P1</span>
p3.event(<span class="hljs-string">"Analyzing data"</span>)
p3.send_message(p1, <span class="hljs-string">"Analysis complete"</span>)

<span class="hljs-comment"># Observe the clock values and their ordering</span>
print(<span class="hljs-string">"\n--- Final Clock States ---"</span>)
print(<span class="hljs-string">f"[<span class="hljs-subst">{p1.name}</span>] Final Clock: <span class="hljs-subst">{p1.logical_clock}</span>"</span>)
print(<span class="hljs-string">f"[<span class="hljs-subst">{p2.name}</span>] Final Clock: <span class="hljs-subst">{p2.logical_clock}</span>"</span>)
print(<span class="hljs-string">f"[<span class="hljs-subst">{p3.name}</span>] Final Clock: <span class="hljs-subst">{p3.logical_clock}</span>"</span>)
</code></pre>
<h3 id="heading-example-output-actual-values-may-vary-slightly-due-to-timesleep-ordering-but-the-principle-holds">Example Output (this single-threaded simulation is deterministic; <code>time.sleep</code> only mimics network delay):</h3>
<pre><code class="lang-plaintext">[P1] Initializing with clock: 0
[P2] Initializing with clock: 0
[P3] Initializing with clock: 0
[P1] Event 'Task A completed' at clock: 1
[P1] Event 'Generating report' at clock: 2
[P1] Sending 'Report Ready' to P2 with clock: 3
[P2] Receiving 'Report Ready' from P1 at clock: 4 (Sender's clock was: 3)
[P2] Event 'Processing report' at clock: 5
[P2] Sending 'Data for analysis' to P3 with clock: 6
[P3] Receiving 'Data for analysis' from P2 at clock: 7 (Sender's clock was: 6)
[P1] Event 'Final check' at clock: 4
[P3] Event 'Analyzing data' at clock: 8
[P3] Sending 'Analysis complete' to P1 with clock: 9
[P1] Receiving 'Analysis complete' from P3 at clock: 10 (Sender's clock was: 9)

--- Final Clock States ---
[P1] Final Clock: 10
[P2] Final Clock: 6
[P3] Final Clock: 9
</code></pre>
<p>Notice how the clocks increment and adjust based on messages. <code>P1</code>'s final clock is 10, <code>P2</code>'s is 6, and <code>P3</code>'s is 9. These aren't physical times, but they preserve the causal order. For instance, P2's "Processing report" (clock 5) <em>causally happened after</em> P1's "Report Ready" message (clock 3, which pushed P2's clock to 4).</p>
<h2 id="heading-4-why-lamports-paper-was-so-groundbreaking">4. Why Lamport's Paper Was So Groundbreaking</h2>
<p>Lamport's insight was that <strong>"the ordering of events in a distributed system is only a partial ordering."</strong> We can't always say definitively if event A happened before event B if they are concurrent and don't affect each other. But for causally related events, logical clocks provide a consistent, global ordering without requiring expensive clock synchronization or a single global timer.</p>
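<p>The paper also shows how to strengthen this partial order into an arbitrary total order: break timestamp ties deterministically, for example by process ID. A minimal sketch (the event values below are illustrative, borrowed from the simulation above):</p>
<pre><code class="lang-python"># Events as (lamport_clock, process_id) pairs; values are illustrative
events = [
    (3, "P1"),  # P1 sends "Report Ready"
    (4, "P2"),  # P2 receives it
    (4, "P1"),  # P1's "Final check", concurrent with the receive
]

# Sorting by clock, then by process ID, yields one consistent total order
total_order = sorted(events)
print(total_order)  # [(3, 'P1'), (4, 'P1'), (4, 'P2')]
</code></pre>
<p>The two clock-4 events are concurrent (neither caused the other); the process-ID tie-break simply imposes one consistent global sequence on them.</p>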
<p>His contributions were profound because they:</p>
<ul>
<li><p><strong>Decoupled Time from Physical Clocks:</strong> Proved that logical ordering is sufficient for many distributed problems.</p>
</li>
<li><p><strong>Provided a Foundation for Concurrency Control:</strong> Enabled algorithms for mutual exclusion, consistency, and snapshotting in distributed systems.</p>
</li>
<li><p><strong>Simplicity and Elegance:</strong> The rules are disarmingly simple, yet they solve a complex problem.</p>
</li>
<li><p><strong>Formed the Basis for Vector Clocks:</strong> A later extension that captures causality exactly, making it possible to detect when two events are concurrent rather than ordered.</p>
</li>
</ul>
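<p>To make the vector-clock contrast concrete, here is a hedged, illustrative sketch (not from Lamport's paper): each process carries one counter per process, so "happened-before" and "concurrent" become distinguishable by comparing vectors element-wise:</p>
<pre><code class="lang-python">def vc_receive(local, received, me):
    # On receive: element-wise maximum of the two vectors,
    # then increment the receiving process's own entry
    merged = {p: max(local.get(p, 0), received.get(p, 0))
              for p in set(local) | set(received)}
    merged[me] = merged.get(me, 0) + 1
    return merged

def happened_before(a, b):
    # a happened-before b iff a is element-wise at most b, and a != b
    keys = set(a) | set(b)
    return a != b and all(a.get(p, 0) &lt;= b.get(p, 0) for p in keys)

a = {"P1": 2, "P2": 0}
b = {"P1": 3, "P2": 1}
c = {"P1": 0, "P2": 2}
print(happened_before(a, b))                         # True: a precedes b
print(happened_before(a, c), happened_before(c, a))  # False False: concurrent
</code></pre>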
<h2 id="heading-5-real-world-applications">5. Real-World Applications</h2>
<p>Lamport's logical clocks, or concepts directly derived from them, are essential for:</p>
<ul>
<li><p><strong>Distributed Databases and Transaction Systems:</strong> Ensuring consistency and correct ordering of operations across multiple servers.</p>
</li>
<li><p><strong>Conflict Resolution in Collaborative Systems:</strong> Determining the correct sequence of edits in shared documents (e.g., Google Docs).</p>
</li>
<li><p><strong>Message Queues and Event Streaming:</strong> Guaranteeing message delivery order and processing sequence.</p>
</li>
<li><p><strong>Debugging Distributed Systems:</strong> Helping developers understand the flow of events and pinpoint causal relationships.</p>
</li>
<li><p><strong>Blockchain Technologies (indirectly):</strong> While blockchains use cryptographic timestamps and consensus mechanisms, the need to agree on an immutable, causal order of transactions across a decentralized network resonates with Lamport's foundational ideas about distributed time.</p>
</li>
</ul>
<p>This powerful concept, born from a seemingly abstract problem, continues to ensure that distributed systems function reliably, even when the notion of a single, universal "time" is impossible.</p>
<p>Want to go deeper? Read the original 1978 paper: <a target="_blank" href="https://lamport.azurewebsites.net/pubs/time-clocks.pdf"><strong>"Time, Clocks, and the Ordering of Events in a Distributed System"</strong></a> by Leslie Lamport.</p>
]]></content:encoded></item><item><title><![CDATA[The Genesis of AI: Understanding McCulloch & Pitts' 1943 Neuron Model]]></title><description><![CDATA[Before "AI" was a buzzword, before silicon chips powered our world, a groundbreaking paper laid the conceptual cornerstone for artificial neural networks. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts published "A Logical Calc...]]></description><link>https://blog.mahrabhossain.me/the-genesis-of-ai-understanding-mcculloch-and-pitts-1943-neuron-model</link><guid isPermaLink="true">https://blog.mahrabhossain.me/the-genesis-of-ai-understanding-mcculloch-and-pitts-1943-neuron-model</guid><category><![CDATA[Paper Review]]></category><category><![CDATA[research-writeup]]></category><category><![CDATA[AI]]></category><category><![CDATA[neural networks]]></category><category><![CDATA[history of ai]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Mirza Mahrab Hossain]]></dc:creator><pubDate>Sat, 28 Jun 2025 20:30:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/nGoCBxiaRO0/upload/0d56c8175d3f70e82935a34c7fea1885.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Before "AI" was a buzzword, before silicon chips powered our world, a groundbreaking paper laid the conceptual cornerstone for artificial neural networks. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts published <strong>"A Logical Calculus of the Ideas Immanent in Nervous Activity."</strong> This wasn't about building a robot, but about proving, mathematically, that simple "on-off" neurons could perform any logical operation.</p>
<p>This post offers a look at their revolutionary idea and its lasting impact, even touching on how we might simulate it simply in Python.</p>
<h2 id="heading-1-the-idea-neurons-as-logic-gates">1. The Idea: Neurons as Logic Gates</h2>
<p>McCulloch and Pitts sought to understand how the brain, with its complex network of interconnected neurons, could perform intricate computations. Their brilliant simplification was to model a neuron not as a biological entity, but as a <strong>binary threshold unit</strong>:</p>
<ul>
<li>It receives inputs (signals from other neurons).</li>
<li>Each input has a weight (excitatory or inhibitory).</li>
<li>If the sum of weighted inputs exceeds a certain threshold, the neuron "fires" (produces an output).</li>
<li>Otherwise, it remains "silent."</li>
</ul>
<p>Crucially, they showed that such simplified neurons, when connected appropriately, could implement <strong>any logical function</strong> (AND, OR, NOT, XOR, etc.). This established a profound link between neuroscience and formal logic, suggesting that computation was an inherent property of neural structures.</p>
<h2 id="heading-2-representing-a-mcculloch-pitts-neuron">2. Representing a "McCulloch-Pitts Neuron"</h2>
<p>Let's imagine a very simple McCulloch-Pitts neuron in code. For instance, an AND gate:</p>
<pre><code class="lang-python"><span class="hljs-comment"># A simple representation of an AND gate using McCulloch-Pitts logic</span>
<span class="hljs-comment"># Inputs are binary (0 or 1)</span>
<span class="hljs-comment"># Output is binary (0 or 1)</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">and_neuron</span>(<span class="hljs-params">input1, input2</span>):</span>
    weights = {<span class="hljs-string">'input1'</span>: <span class="hljs-number">0.5</span>, <span class="hljs-string">'input2'</span>: <span class="hljs-number">0.5</span>} <span class="hljs-comment"># Equal weights for simplicity</span>
    threshold = <span class="hljs-number">0.6</span> <span class="hljs-comment"># A threshold just above the sum of one input, below two</span>

    weighted_sum = (input1 * weights[<span class="hljs-string">'input1'</span>]) + (input2 * weights[<span class="hljs-string">'input2'</span>])

    <span class="hljs-keyword">if</span> weighted_sum &gt;= threshold:
        <span class="hljs-keyword">return</span> <span class="hljs-number">1</span> <span class="hljs-comment"># Neuron fires (output is 1)</span>
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span> <span class="hljs-comment"># Neuron does not fire (output is 0)</span>

<span class="hljs-comment"># Test the AND neuron</span>
print(<span class="hljs-string">f"AND(0, 0): <span class="hljs-subst">{and_neuron(<span class="hljs-number">0</span>, <span class="hljs-number">0</span>)}</span>"</span>)
print(<span class="hljs-string">f"AND(0, 1): <span class="hljs-subst">{and_neuron(<span class="hljs-number">0</span>, <span class="hljs-number">1</span>)}</span>"</span>)
print(<span class="hljs-string">f"AND(1, 0): <span class="hljs-subst">{and_neuron(<span class="hljs-number">1</span>, <span class="hljs-number">0</span>)}</span>"</span>)
print(<span class="hljs-string">f"AND(1, 1): <span class="hljs-subst">{and_neuron(<span class="hljs-number">1</span>, <span class="hljs-number">1</span>)}</span>"</span>)
</code></pre>
<h3 id="heading-example-output">Example Output:</h3>
<pre><code>AND(<span class="hljs-number">0</span>, <span class="hljs-number">0</span>): <span class="hljs-number">0</span>
AND(<span class="hljs-number">0</span>, <span class="hljs-number">1</span>): <span class="hljs-number">0</span>
AND(<span class="hljs-number">1</span>, <span class="hljs-number">0</span>): <span class="hljs-number">0</span>
AND(<span class="hljs-number">1</span>, <span class="hljs-number">1</span>): <span class="hljs-number">1</span>
</code></pre><p>This simple simulation demonstrates how their abstract model could perform basic logical operations. Complex operations would involve networks of these fundamental units.</p>
<h2 id="heading-3-from-abstract-logic-to-computational-paradigms">3. From Abstract Logic to Computational Paradigms</h2>
<p>What made McCulloch and Pitts' paper so impactful was its <strong>formalization of neural computation</strong>. They rigorously proved the computational universality of their simplified neuron model. Key assumptions included:</p>
<ul>
<li><strong>All-or-none activity:</strong> A neuron either fires or doesn't.</li>
<li><strong>Fixed synaptic delays:</strong> Signals take a predictable time to travel.</li>
<li><strong>Fixed network structure:</strong> The connections between neurons are immutable.</li>
</ul>
<p>While these simplifications don't fully capture biological complexity, they were precisely what allowed for mathematical analysis and laid the groundwork for future developments.</p>
<p>Later evolutions, building on this foundation, include:</p>
<ul>
<li><strong>The Perceptron (Rosenblatt, 1958):</strong> Introduced learning rules to adjust weights automatically, enabling the network to learn from data.</li>
<li><strong>Multilayer Perceptrons:</strong> Overcame the Perceptron's limitations in solving non-linear problems (like XOR) by adding hidden layers.</li>
<li><strong>Backpropagation (Rumelhart, Hinton, Williams, 1986):</strong> Revolutionized the training of deep neural networks, making complex architectures feasible.</li>
</ul>
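<p>To make the Perceptron's contribution concrete, here is a hedged toy sketch (the learning rate and epoch count are arbitrary choices, not values from Rosenblatt's paper) of weights being learned from data rather than fixed by hand, with AND as the target:</p>
<pre><code class="lang-python"># Toy Perceptron learning the AND function from labeled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0
lr = 0.1  # learning rate (an illustrative choice)

for _ in range(20):  # a few passes suffice for this tiny problem
    for (x1, x2), target in data:
        out = 1 if (w1 * x1 + w2 * x2 + bias) &gt; 0 else 0
        error = target - out
        # Rosenblatt's rule: nudge the weights in the direction
        # that reduces the error on this example
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

print([1 if (w1 * x1 + w2 * x2 + bias) &gt; 0 else 0 for (x1, x2), _ in data])
# [0, 0, 0, 1] -- the AND truth table, learned rather than hand-set
</code></pre>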
<h2 id="heading-4-real-world-applications-born-from-a-theoretical-seed">4. Real-World Applications (Born from a Theoretical Seed)</h2>
<p>The direct practical applications of the 1943 paper were limited, as it was a purely theoretical construct. However, its conceptual legacy is immense, acting as a foundational pillar for:</p>
<ul>
<li><strong>Artificial Neural Networks (ANNs):</strong> The entire field of ANNs, deep learning, and modern machine learning owes its theoretical lineage to this seminal work.</li>
<li><strong>Computational Neuroscience:</strong> It established a paradigm for thinking about brain function in computational terms, influencing how we model biological neural systems.</li>
<li><strong>Cybernetics:</strong> The paper became a key early text in this interdisciplinary field, which studied control and communication in animals and machines and highlighted the parallels between biological and artificial systems.</li>
<li><strong>Theoretical Computer Science:</strong> Reinforced the powerful idea that sophisticated computation could emerge from simple, interconnected components, influencing the design of early computing machines.</li>
</ul>
<h3 id="heading-try-to-conceptualize-more-complex-logic">Try to conceptualize more complex logic:</h3>
<p>Consider how you might combine our <code>and_neuron</code> with a <code>not_neuron</code> (where the weight would be negative and the threshold adjusted) to create a <code>NAND</code> gate. Since NAND is functionally complete, you can then build <em>any</em> other logic function from it!</p>
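<p>A hedged sketch of that composition (the weights and thresholds are illustrative choices, as in the earlier example, not values from the paper):</p>
<pre><code class="lang-python">def and_neuron(input1, input2):
    # Same unit as above: fires only when both weighted inputs arrive
    return 1 if (0.5 * input1 + 0.5 * input2) &gt;= 0.6 else 0

def not_neuron(x):
    # One inhibitory (negative-weight) input: fires only when x is silent
    return 1 if (-1.0 * x) &gt;= 0.0 else 0

def nand_neuron(input1, input2):
    # Chaining the two units yields NAND, which is functionally complete
    return not_neuron(and_neuron(input1, input2))

print([nand_neuron(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [1, 1, 1, 0]
</code></pre>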
<p>McCulloch and Pitts' 1943 paper stands as a monumental work, a theoretical blueprint that anticipated decades of computational progress. Its elegant demonstration that simple, interconnected units could perform complex logic paved the way for the sophisticated AI systems we interact with today, reminding us that even the most complex technologies often begin with profoundly simple, yet brilliant, ideas.</p>
<p>Want to go deeper? Read the original 1943 paper:
<a target="_blank" href="https://www.cs.cmu.edu/~epxing/Class/10715/reading/McCulloch.and.Pitts.pdf"><strong>"A Logical Calculus of the Ideas Immanent in Nervous Activity"</strong></a> by W. S. McCulloch and W. Pitts.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding PageRank: A Simple Walk Through Web Authority]]></title><description><![CDATA[PageRank is an algorithm designed to measure the importance of web pages. Rather than simply counting links, it assigns weight to links based on their source, making it more difficult to game than naive link counting. Originally developed as part of ...]]></description><link>https://blog.mahrabhossain.me/understanding-pagerank-a-simple-walk-through-web-authority</link><guid isPermaLink="true">https://blog.mahrabhossain.me/understanding-pagerank-a-simple-walk-through-web-authority</guid><category><![CDATA[research-writeup]]></category><category><![CDATA[Paper Review]]></category><category><![CDATA[algorithm]]></category><category><![CDATA[web search]]></category><category><![CDATA[Data Science]]></category><dc:creator><![CDATA[Mirza Mahrab Hossain]]></dc:creator><pubDate>Tue, 24 Jun 2025 20:17:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/xG8IQMqMITM/upload/b466f2bd067e43be374dc2d0488670c6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>PageRank is an algorithm designed to measure the importance of web pages. Rather than simply counting links, it assigns weight to links based on their source, making it more difficult to game than naive link counting. Originally developed as part of the early Google search engine, it remains one of the most elegant and intuitive approaches to ranking nodes in a graph.</p>
<p>This post offers a basic walkthrough of how PageRank works, using simple Python code to simulate the core idea.</p>
<h2 id="heading-1-the-idea-behind-pagerank">1. The Idea Behind PageRank</h2>
<p>In a web of documents, not all links are equal. A link from an authoritative page (e.g., a major university site) should carry more weight than one from a random blog. PageRank models this using a <strong>random surfer</strong>: someone who clicks on links at random. The rank of a page is the probability that the surfer lands there.</p>
<p>The rank of page <em>P</em> depends on the ranks of pages linking to it, divided by their number of outgoing links. Mathematically, it becomes a recursive problem solved via iteration.</p>
<h2 id="heading-2-representing-the-web">2. Representing the Web</h2>
<p>Let’s represent a tiny internet as a directed graph:</p>
<pre><code class="lang-python">graph = {
    <span class="hljs-string">'A'</span>: [<span class="hljs-string">'B'</span>, <span class="hljs-string">'C'</span>],
    <span class="hljs-string">'B'</span>: [<span class="hljs-string">'C'</span>],
    <span class="hljs-string">'C'</span>: [<span class="hljs-string">'A'</span>],
    <span class="hljs-string">'D'</span>: []  <span class="hljs-comment"># dangling node: no outbound links</span>
}
</code></pre>
<p>Here, page A links to B and C, B links to C, etc. D is a so-called <em>dangling node</em>, with no outbound links.</p>
<h2 id="heading-3-simple-pagerank-implementation">3. Simple PageRank Implementation</h2>
<p>The PageRank vector is initialized uniformly and updated over multiple iterations using the <strong>power iteration</strong> method. We’ll also include a <strong>damping factor</strong> (typically 0.85), representing the chance that a surfer follows a link instead of jumping to a random page.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">pagerank</span>(<span class="hljs-params">graph, damping=<span class="hljs-number">0.85</span>, max_iter=<span class="hljs-number">100</span>, tol=<span class="hljs-number">1e-6</span></span>):</span>
    N = len(graph)
    ranks = {node: <span class="hljs-number">1</span>/N <span class="hljs-keyword">for</span> node <span class="hljs-keyword">in</span> graph}
    <span class="hljs-keyword">for</span> _ <span class="hljs-keyword">in</span> range(max_iter):
        new_ranks = {}
        <span class="hljs-keyword">for</span> node <span class="hljs-keyword">in</span> graph:
            rank_sum = <span class="hljs-number">0</span>
            <span class="hljs-keyword">for</span> src <span class="hljs-keyword">in</span> graph:
                <span class="hljs-keyword">if</span> node <span class="hljs-keyword">in</span> graph[src]:
                    rank_sum += ranks[src] / len(graph[src])
                <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> graph[src]:
                    rank_sum += ranks[src] / N  <span class="hljs-comment"># handle dangling nodes</span>
            new_ranks[node] = (<span class="hljs-number">1</span> - damping) / N + damping * rank_sum

        <span class="hljs-comment"># Convergence check</span>
        delta = sum(abs(new_ranks[n] - ranks[n]) <span class="hljs-keyword">for</span> n <span class="hljs-keyword">in</span> graph)
        ranks = new_ranks
        <span class="hljs-keyword">if</span> delta &lt; tol:
            <span class="hljs-keyword">break</span>
    <span class="hljs-keyword">return</span> ranks
</code></pre>
<h3 id="heading-example-output">Example Output:</h3>
<pre><code class="lang-python">ranks = pagerank(graph)
<span class="hljs-keyword">for</span> page, score <span class="hljs-keyword">in</span> ranks.items():
    print(<span class="hljs-string">f"<span class="hljs-subst">{page}</span>: <span class="hljs-subst">{score:<span class="hljs-number">.4</span>f}</span>"</span>)
</code></pre>
<p>This gives us a ranking of pages based on their structural importance in the link graph.</p>
<h2 id="heading-4-from-research-to-revolution">4. From Research to Revolution</h2>
<p>What made PageRank groundbreaking wasn't just its cleverness: it aligned with a powerful intuition that <strong>authority flows through connections</strong>. Unlike keyword stuffing or metadata tricks, PageRank was hard to manipulate at scale.</p>
<p>Later evolutions include:</p>
<ul>
<li><strong>Personalized PageRank</strong>, for individual users or domains</li>
<li><strong>Topic-sensitive PageRank</strong>, for filtering by subject</li>
<li><strong>HITS (Hyperlink-Induced Topic Search)</strong>, a related algorithm that distinguishes <em>hubs</em> from <em>authorities</em></li>
</ul>
<h2 id="heading-5-real-world-applications">5. Real-World Applications</h2>
<p>Although Google's modern ranking system is vastly more complex, PageRank remains a conceptual backbone in graph analysis. It’s used in:</p>
<ul>
<li>Citation analysis for academic papers</li>
<li>Social network influence modeling</li>
<li>Recommendation engines</li>
<li>Biological network centrality (e.g., protein-protein interaction networks)</li>
</ul>
<h3 id="heading-try-it-with-networkx">Try it with NetworkX</h3>
<p>For practical experimentation, Python’s <code>networkx</code> has a built-in PageRank method:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> networkx <span class="hljs-keyword">as</span> nx

G = nx.DiGraph(graph)
ranks = nx.pagerank(G)
</code></pre>
<p>This opens the door to large-scale experiments, useful not only for search engines but for understanding any system where <em>connections matter</em> more than <em>raw counts</em>.</p>
<p>Want to go deeper? Read the original 1998 paper:
<a target="_blank" href="https://www.cis.upenn.edu/~mkearns/teaching/NetworkedLife/pagerank.pdf"><strong>“The PageRank Citation Ranking: Bringing Order to the Web”</strong></a> by Page et al.</p>
]]></content:encoded></item><item><title><![CDATA[Blockchain: A Quick Look]]></title><description><![CDATA[At its core, blockchain is a distributed, append-only ledger. It organizes data into blocks, chains them together using cryptographic hashes, and distributes them across multiple peers to ensure integrity and consensus. Originally proposed as the fou...]]></description><link>https://blog.mahrabhossain.me/blockchain-a-quick-look</link><guid isPermaLink="true">https://blog.mahrabhossain.me/blockchain-a-quick-look</guid><category><![CDATA[Blockchain]]></category><category><![CDATA[Python]]></category><category><![CDATA[technology]]></category><category><![CDATA[coding]]></category><category><![CDATA[Web3]]></category><category><![CDATA[distributed systems]]></category><dc:creator><![CDATA[Mirza Mahrab Hossain]]></dc:creator><pubDate>Tue, 24 Jun 2025 14:22:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/JNxTZzpHmsI/upload/84fef630e520c110f10fb15a9778b80b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At its core, blockchain is a distributed, append-only ledger. It organizes data into blocks, chains them together using cryptographic hashes, and distributes them across multiple peers to ensure integrity and consensus. Originally proposed as the foundation of Bitcoin, the concept has since evolved into a broader class of decentralized systems with applications in finance, supply chain, identity, and more.</p>
<p>This writeup offers a basic walkthrough of how a blockchain works, using simple Python code to simulate its core mechanics.</p>
<h2 id="heading-1-block-structure">1. Block Structure</h2>
<p>A block is a container of data. It holds a list of transactions or messages, a timestamp, a reference to the previous block, and its own unique hash computed from all this information.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> hashlib
<span class="hljs-keyword">import</span> time

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Block</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, index, previous_hash, data, timestamp=None</span>):</span>
        self.index = index
        self.timestamp = timestamp <span class="hljs-keyword">or</span> time.time()
        self.data = data
        self.previous_hash = previous_hash
        self.hash = self.compute_hash()

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">compute_hash</span>(<span class="hljs-params">self</span>):</span>
        block_string = <span class="hljs-string">f"<span class="hljs-subst">{self.index}</span><span class="hljs-subst">{self.timestamp}</span><span class="hljs-subst">{self.data}</span><span class="hljs-subst">{self.previous_hash}</span>"</span>
        <span class="hljs-keyword">return</span> hashlib.sha256(block_string.encode()).hexdigest()
</code></pre>
<p>Each block references the hash of the previous block, which creates a tamper-evident chain: altering any block changes its hash, breaking the <code>previous_hash</code> link stored in the next block, so every later block would have to be recomputed to hide the change.</p>
<h2 id="heading-2-creating-a-chain">2. Creating a Chain</h2>
<p>The blockchain itself is simply a list of these blocks. It starts with a <strong>genesis block</strong> and grows by adding new blocks sequentially.</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Blockchain</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self</span>):</span>
        self.chain = [self.create_genesis_block()]

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_genesis_block</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-keyword">return</span> Block(<span class="hljs-number">0</span>, <span class="hljs-string">"0"</span>, <span class="hljs-string">"Genesis Block"</span>)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">add_block</span>(<span class="hljs-params">self, data</span>):</span>
        previous_block = self.chain[<span class="hljs-number">-1</span>]
        new_block = Block(len(self.chain), previous_block.hash, data)
        self.chain.append(new_block)
</code></pre>
<p>Example usage:</p>
<pre><code class="lang-python">bc = Blockchain()
bc.add_block(<span class="hljs-string">"First transaction"</span>)
bc.add_block(<span class="hljs-string">"Second transaction"</span>)

<span class="hljs-keyword">for</span> block <span class="hljs-keyword">in</span> bc.chain:
    print(<span class="hljs-string">f"Block <span class="hljs-subst">{block.index}</span>: <span class="hljs-subst">{block.data}</span>"</span>)
    print(<span class="hljs-string">f"Hash: <span class="hljs-subst">{block.hash}</span>\n"</span>)
</code></pre>
<p>This demonstrates how blockchains enforce linear history and tamper-evidence using cryptographic hashes.</p>
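<p>That tamper-evidence can be checked directly. Below is a sketch of a hypothetical <code>is_chain_valid</code> helper (with condensed copies of the classes above, so the snippet runs on its own) that recomputes every hash and verifies every back-link:</p>

```python
import hashlib
import time

# Condensed stand-ins for the Block and Blockchain classes defined above
class Block:
    def __init__(self, index, previous_hash, data, timestamp=None):
        self.index = index
        self.timestamp = timestamp or time.time()
        self.data = data
        self.previous_hash = previous_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        block_string = f"{self.index}{self.timestamp}{self.data}{self.previous_hash}"
        return hashlib.sha256(block_string.encode()).hexdigest()

class Blockchain:
    def __init__(self):
        self.chain = [Block(0, "0", "Genesis Block")]

    def add_block(self, data):
        previous_block = self.chain[-1]
        self.chain.append(Block(len(self.chain), previous_block.hash, data))

def is_chain_valid(chain):
    # Recompute every hash and verify every back-link
    for i in range(1, len(chain)):
        current, previous = chain[i], chain[i - 1]
        if current.hash != current.compute_hash():
            return False  # block contents were altered after hashing
        if current.previous_hash != previous.hash:
            return False  # link to the previous block is broken
    return True

bc = Blockchain()
bc.add_block("First transaction")
bc.add_block("Second transaction")
print(is_chain_valid(bc.chain))   # True

bc.chain[1].data = "Tampered transaction"
print(is_chain_valid(bc.chain))   # False
```

Any edit to a historical block is caught because its stored hash no longer matches its recomputed contents.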
<h2 id="heading-3-simulated-proof-of-work">3. Simulated Proof-of-Work</h2>
<p>Real blockchains like Bitcoin use <strong>Proof-of-Work (PoW)</strong> to enforce computational effort before adding a block. Here’s a simplified version:</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">BlockWithProof</span>(<span class="hljs-params">Block</span>):</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, index, previous_hash, data, difficulty=<span class="hljs-number">2</span></span>):</span>
        self.difficulty = difficulty
        self.nonce = <span class="hljs-number">0</span>
        super().__init__(index, previous_hash, data)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">compute_hash</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-keyword">while</span> <span class="hljs-literal">True</span>:
            block_string = <span class="hljs-string">f"<span class="hljs-subst">{self.index}</span><span class="hljs-subst">{self.timestamp}</span><span class="hljs-subst">{self.data}</span><span class="hljs-subst">{self.previous_hash}</span><span class="hljs-subst">{self.nonce}</span>"</span>
            hash_result = hashlib.sha256(block_string.encode()).hexdigest()
            <span class="hljs-keyword">if</span> hash_result.startswith(<span class="hljs-string">"0"</span> * self.difficulty):
                <span class="hljs-keyword">return</span> hash_result
            self.nonce += <span class="hljs-number">1</span>
</code></pre>
<p>This mechanism enforces computational work by requiring that the resulting hash start with a given number of leading zeros. Each unit of <code>difficulty</code> demands one more leading hex zero, multiplying the expected number of hashing attempts by 16, so mining slows sharply as difficulty rises.</p>
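<p>A quick run makes the effect visible: the mined hash carries the required zero prefix, and the nonce records how many attempts were needed. The snippet below repeats the proof-of-work block in condensed form so it runs standalone:</p>

```python
import hashlib
import time

# Condensed version of the BlockWithProof class above
class BlockWithProof:
    def __init__(self, index, previous_hash, data, difficulty=2):
        self.index = index
        self.timestamp = time.time()
        self.data = data
        self.previous_hash = previous_hash
        self.difficulty = difficulty
        self.nonce = 0
        self.hash = self.compute_hash()

    def compute_hash(self):
        # Keep incrementing the nonce until the hash meets the difficulty target
        while True:
            block_string = f"{self.index}{self.timestamp}{self.data}{self.previous_hash}{self.nonce}"
            hash_result = hashlib.sha256(block_string.encode()).hexdigest()
            if hash_result.startswith("0" * self.difficulty):
                return hash_result
            self.nonce += 1

block = BlockWithProof(1, "0" * 64, "PoW demo", difficulty=3)
print(block.hash)   # starts with "000"
print(block.nonce)  # number of failed attempts before a valid hash was found
```

With three hex zeros required, roughly 16³ ≈ 4096 attempts are expected on average; real networks tune difficulty so blocks take minutes, not microseconds.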
<h2 id="heading-4-peer-synchronization-and-consensus-simplified">4. Peer Synchronization and Consensus (Simplified)</h2>
<p>In real systems, multiple nodes propose and verify blocks. For this simulation, a naive consensus approach assumes the longest valid chain is accepted.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">resolve_conflict</span>(<span class="hljs-params">local_chain, remote_chain</span>):</span>
    <span class="hljs-keyword">if</span> len(remote_chain) &gt; len(local_chain):
        <span class="hljs-keyword">return</span> remote_chain
    <span class="hljs-keyword">return</span> local_chain
</code></pre>
<p>Real consensus mechanisms like <strong>Proof-of-Stake</strong>, <strong>PBFT</strong>, or <strong>Raft</strong> add far more complexity, involving signatures, votes, and fault tolerance.</p>
<h2 id="heading-5-real-world-systems-ethereum-and-hyperledger">5. Real-World Systems: Ethereum and Hyperledger</h2>
<p>While simulations demonstrate the structure, actual blockchains come with full networking, virtual machines, transaction pools, and persistent storage. Below is a minimal guide to running <strong>Ethereum</strong> and <strong>Hyperledger Fabric</strong> locally.</p>
<h3 id="heading-ethereum-geth">Ethereum (geth)</h3>
<p><strong>Install and run a local Ethereum node:</strong></p>
<pre><code class="lang-bash"><span class="hljs-comment"># Install Go if not already installed</span>
sudo apt install golang

<span class="hljs-comment"># Clone and build Geth (Go Ethereum)</span>
git <span class="hljs-built_in">clone</span> https://github.com/ethereum/go-ethereum.git
<span class="hljs-built_in">cd</span> go-ethereum
make geth

<span class="hljs-comment"># Initialize a private chain</span>
./build/bin/geth init genesis.json

<span class="hljs-comment"># Start a node</span>
./build/bin/geth --networkid 1337 --http --http.api eth,web3,personal,net,miner,admin,txpool --mine --allow-insecure-unlock
</code></pre>
<p>To interact with the node:</p>
<pre><code class="lang-bash">./build/bin/geth attach
</code></pre>
<p>Example command inside the console:</p>
<pre><code class="lang-javascript">eth.accounts
eth.getBalance(eth.accounts[<span class="hljs-number">0</span>])
</code></pre>
<h3 id="heading-hyperledger-fabric">Hyperledger Fabric</h3>
<p>Fabric is modular and container-based. The easiest way to get started is using the official samples.</p>
<p><strong>Setup steps:</strong></p>
<pre><code class="lang-bash"><span class="hljs-comment"># Prerequisites: Docker and Docker Compose</span>
git <span class="hljs-built_in">clone</span> https://github.com/hyperledger/fabric-samples.git
<span class="hljs-built_in">cd</span> fabric-samples/test-network

<span class="hljs-comment"># Start test network</span>
./network.sh up

<span class="hljs-comment"># Create a channel and deploy chaincode</span>
./network.sh createChannel
./network.sh deployCC
</code></pre>
<p>Then run transactions using the included scripts:</p>
<pre><code class="lang-bash">./network.sh invoke
./network.sh query
</code></pre>
<p>Fabric uses its own certificate authority (CA), smart contract engine (chaincode), and endorsing-peer architecture, making it suitable for enterprise environments where identities and access control are critical.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding the Graphics Pipeline: A Deep Dive into Real-Time Rendering]]></title><description><![CDATA[Modern GPUs are built to execute a highly parallel and programmable pipeline designed for transforming vertex data into rendered pixels. Whether using OpenGL, Vulkan, or DirectX, the fundamental structure remains similar. The graphics pipeline consis...]]></description><link>https://blog.mahrabhossain.me/understanding-the-graphics-pipeline-a-deep-dive-into-real-time-rendering</link><guid isPermaLink="true">https://blog.mahrabhossain.me/understanding-the-graphics-pipeline-a-deep-dive-into-real-time-rendering</guid><category><![CDATA[C++]]></category><category><![CDATA[coding]]></category><category><![CDATA[shader]]></category><category><![CDATA[technology]]></category><category><![CDATA[openGL]]></category><category><![CDATA[GLSL]]></category><category><![CDATA[graphics pipeline]]></category><dc:creator><![CDATA[Mirza Mahrab Hossain]]></dc:creator><pubDate>Tue, 24 Jun 2025 14:14:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/KNZHyTpre18/upload/1aaee422304194cebaef123767b9b575.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Modern GPUs are built to execute a highly parallel and programmable pipeline designed for transforming vertex data into rendered pixels. Whether using OpenGL, Vulkan, or DirectX, the fundamental structure remains similar. The graphics pipeline consists of sequential programmable and fixed-function stages that work on geometry and pixel data.</p>
<p>This article walks through each stage from a technical standpoint, including C++ and GLSL examples.</p>
<h2 id="heading-input-assembly-structuring-geometry-for-the-gpu">Input Assembly: Structuring Geometry for the GPU</h2>
<p>The process begins on the CPU side by defining geometry in terms of <strong>vertices</strong>, which form the basis for all rendering.</p>
<pre><code class="lang-cpp"><span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Vertex</span> {</span>
    glm::vec3 position;
    glm::vec3 normal;
    glm::vec2 texCoord;
};
</code></pre>
<p>These vertices are uploaded to GPU memory using <strong>Vertex Buffer Objects (VBOs)</strong>. A <strong>Vertex Array Object (VAO)</strong> is then created to describe the layout of these attributes and bind them efficiently.</p>
<pre><code class="lang-cpp">GLuint vao, vbo;
glGenVertexArrays(<span class="hljs-number">1</span>, &amp;vao);
glGenBuffers(<span class="hljs-number">1</span>, &amp;vbo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, <span class="hljs-keyword">sizeof</span>(vertices), vertices, GL_STATIC_DRAW);

<span class="hljs-comment">// Position</span>
glVertexAttribPointer(<span class="hljs-number">0</span>, <span class="hljs-number">3</span>, GL_FLOAT, GL_FALSE, <span class="hljs-keyword">sizeof</span>(Vertex), (<span class="hljs-keyword">void</span>*)<span class="hljs-number">0</span>);
glEnableVertexAttribArray(<span class="hljs-number">0</span>);

<span class="hljs-comment">// Normal</span>
glVertexAttribPointer(<span class="hljs-number">1</span>, <span class="hljs-number">3</span>, GL_FLOAT, GL_FALSE, <span class="hljs-keyword">sizeof</span>(Vertex), (<span class="hljs-keyword">void</span>*)offsetof(Vertex, normal));
glEnableVertexAttribArray(<span class="hljs-number">1</span>);

<span class="hljs-comment">// TexCoord</span>
glVertexAttribPointer(<span class="hljs-number">2</span>, <span class="hljs-number">2</span>, GL_FLOAT, GL_FALSE, <span class="hljs-keyword">sizeof</span>(Vertex), (<span class="hljs-keyword">void</span>*)offsetof(Vertex, texCoord));
glEnableVertexAttribArray(<span class="hljs-number">2</span>);
</code></pre>
<h2 id="heading-vertex-shader-per-vertex-transformation">Vertex Shader: Per-Vertex Transformation</h2>
<p>The <strong>Vertex Shader</strong> is the first programmable stage in the GPU pipeline. It transforms each vertex from local object space into <strong>clip space</strong> using a model-view-projection (MVP) matrix.</p>
<pre><code class="lang-glsl">#version 450
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexCoord;

layout(set = 0, binding = 0) uniform MVP {
    mat4 model;
    mat4 view;
    mat4 projection;
} mvp;

layout(location = 0) out vec3 fragNormal;
layout(location = 1) out vec2 fragTexCoord;

void main() {
    gl_Position = mvp.projection * mvp.view * mvp.model * vec4(inPosition, 1.0);
    fragNormal = mat3(transpose(inverse(mvp.model))) * inNormal;
    fragTexCoord = inTexCoord;
}
</code></pre>
<p>Typical operations include space transformation, normal correction (using the inverse-transpose of the model matrix, as above), and forwarding per-vertex values that the rasterizer later interpolates for the fragment stage.</p>
<h2 id="heading-primitive-assembly-and-clipping">Primitive Assembly and Clipping</h2>
<p>After vertex processing, the GPU assembles vertices into primitives based on the drawing mode (e.g., <code>GL_TRIANGLES</code>). These primitives are then clipped against the <strong>view frustum</strong> to discard geometry outside the camera’s field of view.</p>
<p>Clipping is done in <strong>homogeneous clip space</strong>, and the GPU uses perspective division to transform coordinates into <strong>Normalized Device Coordinates (NDC)</strong>.</p>
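<p>The arithmetic of this step is compact enough to sketch directly. The following illustrative Python snippet (not part of any graphics API) takes a clip-space position through the perspective divide to NDC and then through the viewport transform to window coordinates, assuming a hypothetical 800×600 viewport:</p>

```python
def clip_to_window(x, y, z, w, width=800, height=600):
    # Perspective division: clip space -> normalized device coordinates,
    # each component lands in [-1, 1] for geometry inside the frustum
    ndc_x, ndc_y, ndc_z = x / w, y / w, z / w
    # Viewport transform: NDC x,y -> pixel coordinates, z -> [0, 1] depth
    return ((ndc_x * 0.5 + 0.5) * width,
            (ndc_y * 0.5 + 0.5) * height,
            ndc_z * 0.5 + 0.5)

# A clip-space point with w = 2 (some distance in front of the camera)
print(clip_to_window(1.0, 0.5, 1.0, 2.0))  # (600.0, 375.0, 0.75)
```

The division by <code>w</code> is what produces perspective foreshortening: distant points have larger <code>w</code>, so they collapse toward the center of the screen.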
<h2 id="heading-rasterization-from-primitives-to-fragments">Rasterization: From Primitives to Fragments</h2>
<p>Once primitives are assembled and clipped, the <strong>rasterizer</strong> maps them onto screen-space pixels. The result is a grid of <strong>fragments</strong>, each carrying interpolated data such as texture coordinates, normals, or custom attributes.</p>
<p>Rasterization does not produce actual colors yet. It simply determines which pixels are covered by which primitives and prepares data for the fragment shader.</p>
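<p>The coverage test at the heart of rasterization can be illustrated with barycentric coordinates: a pixel center lies inside a triangle exactly when all three barycentric weights are non-negative, and the same weights drive attribute interpolation. A small illustrative sketch follows; real GPUs do this in parallel with fixed-point edge functions, not per-pixel Python:</p>

```python
def barycentric(p, a, b, c):
    # Signed-area based barycentric weights of point p in triangle (a, b, c)
    def edge(p0, p1, q):
        return (p1[0] - p0[0]) * (q[1] - p0[1]) - (p1[1] - p0[1]) * (q[0] - p0[0])
    area = edge(a, b, c)
    return (edge(b, c, p) / area,
            edge(c, a, p) / area,
            edge(a, b, p) / area)

a, b, c = (0.0, 0.0), (4.0, 0.0), (0.0, 4.0)

w = barycentric((1.0, 1.0), a, b, c)
print(all(wi >= 0 for wi in w))  # pixel center inside the triangle: True

# Interpolate a per-vertex attribute (e.g. a depth value) at the covered pixel
depth_a, depth_b, depth_c = 0.1, 0.5, 0.9
print(round(w[0] * depth_a + w[1] * depth_b + w[2] * depth_c, 3))  # 0.4
```

Perspective-correct interpolation additionally divides each attribute by <code>w</code> before interpolating and rescales afterwards, which this flat 2D sketch omits.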
<h2 id="heading-fragment-shader-computing-final-pixel-colors">Fragment Shader: Computing Final Pixel Colors</h2>
<p>The <strong>Fragment Shader</strong> is executed once per fragment and outputs the final pixel color. It performs lighting, texturing, and any other per-pixel effects.</p>
<pre><code class="lang-glsl">#version 450
layout(location = 0) in vec3 fragNormal;
layout(location = 1) in vec2 fragTexCoord;
layout(location = 0) out vec4 outColor;

layout(set = 1, binding = 0) uniform sampler2D diffuseTexture;

void main() {
    vec3 normal = normalize(fragNormal);
    vec3 lightDir = normalize(vec3(0.5, 0.8, 0.6));
    float diff = max(dot(normal, lightDir), 0.0);
    vec3 color = texture(diffuseTexture, fragTexCoord).rgb;
    outColor = vec4(color * diff, 1.0);
}
</code></pre>
<p>The output of this shader is then subjected to several per-fragment operations before being written to the framebuffer.</p>
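<p>The diffuse term in the shader above is plain Lambertian shading: the cosine of the angle between the surface normal and the light direction, clamped to zero. An illustrative standalone sketch of the same arithmetic:</p>

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_diffuse(normal, light_dir):
    # The shader's diffuse term: max(dot(N, L), 0.0)
    n, l = normalize(normal), normalize(light_dir)
    return max(sum(a * b for a, b in zip(n, l)), 0.0)

# Surface facing +Z, lit by the shader's hard-coded light direction
print(round(lambert_diffuse((0.0, 0.0, 1.0), (0.5, 0.8, 0.6)), 3))  # 0.537

# A surface facing away from the light receives no diffuse contribution
print(lambert_diffuse((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # 0.0
```

The clamp is essential: without it, surfaces facing away from the light would receive negative light and darken the texture color unphysically.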
<h2 id="heading-per-fragment-operations-depth-stencil-and-blending">Per-Fragment Operations: Depth, Stencil, and Blending</h2>
<p>Before a fragment becomes a pixel, several operations determine its fate.</p>
<ul>
<li><strong>Depth Testing</strong> compares the fragment’s depth with existing depth buffer contents. If it fails, the fragment is discarded.</li>
<li><strong>Stencil Testing</strong> can mask out certain areas of the screen based on complex rules.</li>
<li><strong>Blending</strong> combines the incoming fragment color with the color already present in the framebuffer. This is crucial for transparency.</li>
</ul>
<p>Example OpenGL blending setup:</p>
<pre><code class="lang-cpp">glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
</code></pre>
<h2 id="heading-framebuffer-writing-the-final-output">Framebuffer: Writing the Final Output</h2>
<p>The accepted fragment is written to a <strong>Framebuffer Object (FBO)</strong>. This framebuffer may be the default one (for direct display) or a custom one used for post-processing.</p>
<p>Framebuffers can have multiple attachments for color, depth, and stencil. Multiple render targets (MRT) allow writing to several color attachments at once, useful for deferred shading.</p>
<h2 id="heading-optional-stages-advanced-pipeline-features">Optional Stages: Advanced Pipeline Features</h2>
<p>Modern rendering pipelines support additional programmable stages:</p>
<h3 id="heading-geometry-shader">Geometry Shader</h3>
<p>Executed after the vertex shader, it can generate new primitives from existing ones. It’s useful for dynamic LOD, wireframe generation, or billboard creation.</p>
<pre><code class="lang-glsl">layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main() {
    for (int i = 0; i &lt; 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
</code></pre>
<h3 id="heading-tessellation-shaders">Tessellation Shaders</h3>
<p>Used in conjunction with <strong>patches</strong> to dynamically subdivide geometry on the GPU. Controlled via a Tessellation Control Shader (TCS) and Tessellation Evaluation Shader (TES). These are used for smooth surface rendering in high-end applications.</p>
<h3 id="heading-compute-shaders">Compute Shaders</h3>
<p>Though not part of the rasterization pipeline, compute shaders can pre-process textures, simulate particle systems, or do physics-based calculations entirely on the GPU.</p>
<pre><code class="lang-glsl">layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba32f, binding = 0) uniform image2D outputImage;

void main() {
    ivec2 id = ivec2(gl_GlobalInvocationID.xy);
    vec4 color = vec4(float(id.x)/800.0, float(id.y)/600.0, 0.0, 1.0);
    imageStore(outputImage, id, color);
}
</code></pre>
<p>GPU rendering follows a structured sequence where vertex data is transformed, shaded, and tested before becoming final pixel output. Each frame reflects a pipeline of discrete, highly parallel stages operating with precision across the GPU.</p>
<p>As the internal flow becomes clearer, rendering techniques such as deferred shading, post-processing, and physically based lighting begin to align with the underlying mechanics. The process retains its complexity, but the architecture behind it becomes accessible and deliberate rather than opaque.</p>
]]></content:encoded></item></channel></rss>