Welcome to GigaElixir Gazette, your 5-minute digest of Elixir ecosystem news that actually matters.

This week in Elixir, a critical DoS vulnerability lands for plug_cowboy - unauthenticated HTTP/2 requests can exhaust the atom table and crash your BEAM node. Patch or upgrade to 2.8.1 immediately.

Also: ElixirConf EU 2026 wrapped in Malaga with dozens of talks now landing online, OTP 28.5 ships secure coding guidelines, and a 14-day benchmark maps where Elixir 1.18 beats Node.js 22 - and where it does not.

. WEEKLY PICKS .

🔐 CVE-2026-32688: One HTTP/2 Header Crashes Your Entire BEAM Node

An unauthenticated attacker can crash your Erlang VM by sending HTTP/2 requests with unique :scheme header values. plug_cowboy calls String.to_atom/1 on the client-supplied value without validation - each unique value permanently allocates an atom. The atom table caps at 1,048,576 entries and is never garbage-collected, so a sustained attack exhausts it and takes down the node. Affects plug_cowboy 2.0.0 through 2.8.0; HTTP/1.1 listeners are unaffected. Upgrade to 2.8.1 immediately.
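The general defense is worth internalizing even if you're already patched. Here's a minimal sketch (a hypothetical helper, not plug_cowboy's actual fix): map untrusted strings against a fixed allowlist of atoms created at compile time, and reject everything else without ever calling String.to_atom/1 at runtime.

```elixir
defmodule SchemeParser do
  # Hypothetical allowlist - the atoms :http and :https are created
  # once, at compile time, when this map literal is compiled.
  @known_schemes %{"http" => :http, "https" => :https}

  # Map.fetch/2 returns {:ok, atom} for known schemes and :error for
  # anything else - no new atom is ever allocated at runtime, no matter
  # what the client sends.
  def parse_scheme(value) when is_binary(value) do
    Map.fetch(@known_schemes, value)
  end
end
```

parse_scheme("https") returns {:ok, :https}; any attacker-controlled garbage returns :error, so the atom table stays fixed regardless of traffic.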

🎯 ElixirConf EU 2026 Slides Land: DurableServer, Agentic Elixir, and Types

Two days, three tracks, dozens of talks in Malaga. Chris McCord opened with DurableServer - always running somewhere by design. José Valim closed Friday with the latest on Elixir types. Andrea Leopardi covered agentic Elixir. Sessions ranged from RAG pipelines in pure Elixir to desktop apps with Tauri, Kafka-backed streaming at scale, and Hologram's local-first browser approach. A community-contributed slides thread is live and accepting additions.

⚡ Thinking Elixir 301: LiveDebugger 1.0, Volt 0.8, and a Big Erlang Departure

Episode 301 covers several community milestones. Phoenix LiveDebugger hit v1.0 with an interactive tour included. Volt reached v0.8.0 as an Elixir-native frontend build tool that cuts Node.js from the pipeline entirely. German Velasco made TestingLiveView.com completely free for the community. The headline departure: Francesco Cesarini is leaving Erlang Solutions while staying active in the EEF. LiveStash v0.2.0 ships API improvements with Redis and Mnesia adapters previewed.

🛠️ OTP 28.5 Ships with New Secure Coding Guidelines for Erlang

The April 23 patch release adds one notable item beyond bug fixes: a new Secure Coding Guidelines document in the Design Principles section covering how to write secure Erlang code. Bug fixes include a map corruption issue in enif_make_map_from_arrays, where duplicate-key detection failed for arrays of 33 or more keys, and a Unicode handling fix for erl.exe on Windows. Applications updated: erts-16.4, ssl-11.6, mnesia-4.25.3.

📊 Elixir 1.18 vs Node.js 22: 4.2x Lower Latency, But Node Wins on JSON

A 14-day benchmark across 12 real-world workloads shows Elixir 1.18 handles 128k concurrent WebSocket connections per 2vCPU node versus Node.js 22's 32k, with 47ms p99 latency versus 212ms. Node.js still wins on single-threaded JSON parsing at 1.2M ops/sec versus 700k. Memory tells the real story: 128MB per 10k connections for Elixir versus 512MB for Node.js. For real-time collaboration workloads, 3 Elixir nodes replace 8 Node.js nodes.

💡 Pro Tip

Your Mental Model of BEAM Garbage Collection Is Slowing You Down

Most developers reach for JVM comparisons when someone asks how the BEAM handles concurrency. That is the wrong starting point.

The JVM built threads first, added garbage collection across the entire heap, and concurrency became a coordination problem.

The BEAM made the opposite choice: isolated processes with independent garbage collection, connected only through message passing. A BEAM process starts with roughly 2KB of stack space, spawns in microseconds, and the scheduler manages thousands across a small pool of OS threads - one per CPU core by default.

Spawning 100,000 processes on a single machine is idiomatic, not dangerous. The OS has no direct knowledge of most of them.
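You can see this on your own machine. A rough sketch: spawn 100,000 processes that each sit in a receive, then ask the VM for its process count. Each process starts with a tiny heap, so the whole batch is cheap.

```elixir
# Spawn 100_000 processes that each block in a receive until told to stop.
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end

# The VM tracks them all; the OS sees only a handful of scheduler threads.
count = :erlang.system_info(:process_count)
IO.puts("spawned #{length(pids)} processes; VM total: #{count}")

# Tell them all to exit.
Enum.each(pids, &send(&1, :stop))
```

On a laptop this runs in well under a second - try doing the same with 100,000 OS threads.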

The scheduler is preemptive but not time-based. It preempts based on reductions - a unit of work roughly mapping to function calls. This means no single process can starve others even if it runs an expensive loop, because the scheduler forces context switches at reduction boundaries.
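A quick sketch of that guarantee: a tail-recursive busy loop burns reductions forever, yet a second process still gets scheduled and its message still arrives promptly.

```elixir
defmodule Busy do
  # An infinite tail-recursive loop - pure CPU burn, no receive, no sleep.
  def loop, do: loop()
end

# Start the busy loop. On the JVM-thread mental model this would hog
# a core indefinitely; on the BEAM it gets preempted at reduction
# boundaries like everything else.
spawn(Busy, :loop, [])

parent = self()
spawn(fn -> send(parent, :still_responsive) end)

result =
  receive do
    :still_responsive -> :ok
  after
    1_000 -> :starved
  end

IO.puts("scheduler check: #{result}")
```

Even with a single scheduler thread (try `elixir --erl "+S 1"`), the message lands - the busy loop can't monopolize anything.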

Garbage collection never stops the world because each process has its own heap. One process collecting garbage has no effect on any other. World-stopping GC is a JVM problem. On the BEAM, GC pauses are bounded by what a single process accumulated - not the entire system state.

This is why 100k concurrent connections do not produce GC spikes. Every connection's process lives and dies independently.
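Per-process heaps are directly observable. In this sketch one process builds a 500,000-element list while a sibling stays tiny; Process.info/2 reports each heap size (in machine words) independently.

```elixir
parent = self()

# This process accumulates a large live data structure on its own heap.
big =
  spawn(fn ->
    data = Enum.to_list(1..500_000)
    send(parent, :ready)

    receive do
      :stop -> length(data)
    end
  end)

# This one holds almost nothing.
small =
  spawn(fn ->
    receive do
      :stop -> :ok
    end
  end)

# Wait until the big list is fully built before measuring.
receive do
  :ready -> :ok
end

{:total_heap_size, big_heap} = Process.info(big, :total_heap_size)
{:total_heap_size, small_heap} = Process.info(small, :total_heap_size)
IO.puts("big: #{big_heap} words, small: #{small_heap} words")
```

When the big process eventually collects (or dies), the small one never notices - there is no shared heap to pause.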

Key points for understanding the BEAM at depth:

  • BEAM processes are not OS threads - 2KB start stack, microsecond spawn, OS unaware of most

  • Reduction-based preemption means no process can monopolize the scheduler, even in tight loops

  • Per-process GC means garbage collection never pauses the whole system - only the individual process

  • Message passing is the only connection between processes - no shared state, no locks, no coordination overhead
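That last point deserves a concrete sketch: a counter whose state can only be touched via messages. There are no locks - the process serializes access by working through its mailbox one message at a time.

```elixir
defmodule Counter do
  # The counter's state lives only in this process's recursion argument.
  def start(initial \\ 0), do: spawn(fn -> loop(initial) end)

  defp loop(n) do
    receive do
      :increment ->
        loop(n + 1)

      {:get, from} ->
        send(from, {:count, n})
        loop(n)
    end
  end
end

pid = Counter.start()
for _ <- 1..3, do: send(pid, :increment)
send(pid, {:get, self()})

count =
  receive do
    {:count, n} -> n
  end

IO.puts("count: #{count}")
```

Because messages from one sender arrive in order, the :get lands after all three increments - no race, no mutex, no coordination overhead.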

. TIRED OF DEVOPS HEADACHES? .

Deploy your next Elixir app hassle-free with Gigalixir and focus more on coding, less on ops.

We're specifically designed to support all the features that make Elixir special, so you can keep building amazing things without becoming a DevOps expert.

See you next week,

Michael

P.S. Forward this to a friend who loves Elixir as much as you do 💜
