<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>https://morcules-blog.pages.dev/</id>
  <title>Morcules</title>
  <subtitle>Systems programming blog page focused on C, networking, multithreading, and low-level performance programming with real-world benchmarks and optimizations.</subtitle>
  <updated>2026-05-11T15:01:36+00:00</updated>
  <author>
    <name>Morcules</name>
    <uri>https://morcules-blog.pages.dev/</uri>
  </author>
  <link rel="self" type="application/atom+xml" href="https://morcules-blog.pages.dev/feed.xml"/>
  <link rel="alternate" type="text/html" hreflang="en"
    href="https://morcules-blog.pages.dev/"/>
  <generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator>
  <rights>© 2026 Morcules</rights>
  <icon>https://morcules-blog.pages.dev/assets/img/favicons/favicon.ico</icon>
  <logo>https://morcules-blog.pages.dev/assets/img/favicons/favicon-96x96.png</logo>
  <entry>
    <title>How Programming Languages Really Work</title>
    <link href="https://morcules-blog.pages.dev/posts/How-programming-languages-really-work/" rel="alternate" type="text/html" title="How Programming Languages Really Work" />
    <published>2026-05-04T00:00:00+00:00</published>
  
    <updated>2026-05-04T00:00:00+00:00</updated>
  
    <id>https://morcules-blog.pages.dev/posts/How-programming-languages-really-work/</id>
    <content type="text/html" src="https://morcules-blog.pages.dev/posts/How-programming-languages-really-work/" />
    <author>
      <name>Morcules</name>
    </author>
    <category term="Programming" />
    <category term="C" />
    <category term="ASM" />
  <summary>Have you ever wondered how programming languages really work? Your code goes through several phases. Some compilers have more phases, but I will only cover the basic ones, the stages included in my own compiler. To go into more detail, we will look at a compiler built fully from scratch, without using LLVM.  Tokenizer  The tokenizer phase is really simple. Basically, ...</summary>

  </entry>
  <entry>
    <title>How Hashmaps Really Work Under the Hood</title>
    <link href="https://morcules-blog.pages.dev/posts/How-hashmaps-really-work-under-the-hood/" rel="alternate" type="text/html" title="How Hashmaps Really Work Under the Hood" />
    <published>2026-05-04T00:00:00+00:00</published>
  
    <updated>2026-05-04T00:00:00+00:00</updated>
  
    <id>https://morcules-blog.pages.dev/posts/How-hashmaps-really-work-under-the-hood/</id>
    <content type="text/html" src="https://morcules-blog.pages.dev/posts/How-hashmaps-really-work-under-the-hood/" />
    <author>
      <name>Morcules</name>
    </author>
    <category term="Programming" />
    <category term="Data Structures" />

  <summary>Have you ever wondered how hashmaps really work under the hood? It might seem very complex, but once you understand it, it’s actually very simple.  Hashmaps Are Just Arrays With Smart Indexing  Hashmaps are basically arrays; they use indexes generated from the input.  How Hashing Creates the Index  So how do we generate those indexes? It’s right there in the name: by hashing. The exact algorith...</summary>

  </entry>
  <entry>
    <title>Reducing Atomic Overhead in a Multithreaded C Allocator by 50%</title>
    <link href="https://morcules-blog.pages.dev/posts/Reducing-Atomic-Overhead-in-a-Multithreaded-C-Allocator-by-50/" rel="alternate" type="text/html" title="Reducing Atomic Overhead in a Multithreaded C Allocator by 50%" />
    <published>2026-05-01T00:00:00+00:00</published>
  
    <updated>2026-05-01T00:00:00+00:00</updated>
  
    <id>https://morcules-blog.pages.dev/posts/Reducing-Atomic-Overhead-in-a-Multithreaded-C-Allocator-by-50/</id>
    <content type="text/html" src="https://morcules-blog.pages.dev/posts/Reducing-Atomic-Overhead-in-a-Multithreaded-C-Allocator-by-50/" />
    <author>
      <name>Morcules</name>
    </author>
    <category term="Networking" />
    <category term="C" />

  <summary>When working with multithreaded C code, atomic operations often look cheap until they become your biggest bottleneck. In my case, a small change in how threads acquired memory reduced CPU usage by 40-50%, cutting execution from roughly 2 billion cycles to 1 billion when sending 50MB of data through my networking library.  Performance Changes Before:    CPU Cycles : 2,273,393,860   User Cpu ...</summary>

  </entry>

</feed>