<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Complexity Theory on VinhMDev</title><link>https://vinhmdev.com/topics/complexity-theory/</link><description>Recent content in Complexity Theory on VinhMDev</description><generator>Hugo</generator><language>en</language><lastBuildDate>Sat, 28 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://vinhmdev.com/topics/complexity-theory/index.xml" rel="self" type="application/rss+xml"/><item><title>Paper 03: Normal Accidents Theory &amp; The Fallacy of Root Cause Analysis</title><link>https://vinhmdev.com/posts/paper-03-normal-accidents-theory-the-fallacy-of-root-cause-analysis/</link><pubDate>Sat, 28 Feb 2026 00:00:00 +0000</pubDate><guid>https://vinhmdev.com/posts/paper-03-normal-accidents-theory-the-fallacy-of-root-cause-analysis/</guid><description>&lt;h2 id="i-introduction-the-newtonian-ghost-in-the-machine" class="relative group"&gt;I. Introduction: The Newtonian Ghost in the Machine &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#i-introduction-the-newtonian-ghost-in-the-machine" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;In traditional engineering, we are taught that systems are like Swiss watches: if the watch stops, it is because a specific gear broke or a human forgot to wind it. This reductionist approach—isolating parts to understand the whole—works for simple or complicated systems.&lt;/p&gt;
&lt;p&gt;However, in Complex Adaptive Systems like modern cloud-native architectures, the &amp;ldquo;Swiss Watch&amp;rdquo; model fails. As John Allspaw and the STELLA Report highlight, there is a fundamental gap between the &amp;ldquo;invisible&amp;rdquo; system below the line (code, hardware, networks) and the representations above the line (telemetry, dashboards) that operators interact with. When a global outage occurs, our instinct is to hunt for a Root Cause. This paper argues that in the presence of high complexity, the Root Cause is a phantom, and the accident itself is Normal.&lt;/p&gt;</description></item><item><title>Paper 02: The Law of Chaos: Decoding Entropy in Distributed Architecture</title><link>https://vinhmdev.com/posts/paper-02-the-law-of-chaos-decoding-entropy-in-distributed-architecture/</link><pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate><guid>https://vinhmdev.com/posts/paper-02-the-law-of-chaos-decoding-entropy-in-distributed-architecture/</guid><description>&lt;h2 id="i-prelude-the-paradigm-shift" class="relative group"&gt;I. Prelude: The Paradigm Shift &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#i-prelude-the-paradigm-shift" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;In our previous post, we discussed the fundamental shift from the era of pure computational logic to the era of probabilistic Weights. However, this boundary doesn&amp;rsquo;t exist only within AI models. It manifests in every node of the distributed systems we operate daily.&lt;/p&gt;
&lt;p&gt;The harsh reality every system architect must accept is this: when you decompose a Monolith into Microservices, you aren&amp;rsquo;t just splitting code. You are fundamentally changing the physical nature of the system: moving from a Deterministic state to a Probabilistic one.&lt;/p&gt;</description></item><item><title>Paper 01: The Illusion of Control: System Design in the Era of AI</title><link>https://vinhmdev.com/posts/paper-01-the-illusion-of-control-system-design-in-the-era-of-ai/</link><pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate><guid>https://vinhmdev.com/posts/paper-01-the-illusion-of-control-system-design-in-the-era-of-ai/</guid><description>&lt;h2 id="i-the-limits-of-traditional-programming" class="relative group"&gt;I. The Limits of Traditional Programming &lt;span class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100"&gt;&lt;a class="group-hover:text-primary-300 dark:group-hover:text-neutral-700" style="text-decoration-line: none !important;" href="#i-the-limits-of-traditional-programming" aria-label="Anchor"&gt;#&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Software engineering has long relied on absolute control. System architects design software with the core principle that explicitly written logic will always return a predictable result.&lt;/p&gt;
&lt;p&gt;However, this model falls short when integrating Large Language Models (LLMs) directly into core features. We are moving from managing strict if-else statements to orchestrating probability distributions. Applying the old control-based mindset to AI will inevitably cause cascading failures when the system encounters unfamiliar data.&lt;/p&gt;</description></item></channel></rss>