⚡ One Endpoint Froze Everything
One CPU-heavy request froze Node.js. The BEAM kept serving traffic.
Welcome to GigaElixir Gazette, your 5-minute digest of Elixir ecosystem news that actually matters 👋.
. WEEKLY PICKS .
🔐 Kafka Client Gets SASL Authentication After 8 Years: kafka_ex maintainer closes a 2017 GitHub issue by implementing PLAIN and SCRAM mechanisms for Elixir's native Kafka client. PLAIN sends credentials in a single message over TLS; SCRAM exchanges cryptographic proofs without transmitting passwords. Architecture uses a behaviour pattern making new mechanisms trivial: OAUTHBEARER in an open PR, MSK IAM planned. Fresha runs kafka_ex in production and needed auth for managed Kafka providers. Implementation uses a send_fun abstraction separating mechanism logic from socket/framing details. Docker Compose setup with real Kafka brokers validates end-to-end authentication flows.
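The behaviour-plus-send_fun split is easiest to see in miniature. A rough sketch of that shape, with hypothetical module and callback names rather than kafka_ex's actual internals:

```elixir
defmodule SaslMechanism do
  # Hypothetical behaviour: each mechanism implements one callback, and the
  # send_fun hides socket/framing details so the mechanism only deals with
  # the authentication exchange itself.
  @callback authenticate(
              send_fun :: (binary() -> {:ok, binary()} | {:error, term()}),
              opts :: keyword()
            ) :: :ok | {:error, term()}
end

defmodule SaslPlain do
  @behaviour SaslMechanism

  @impl true
  def authenticate(send_fun, opts) do
    user = Keyword.fetch!(opts, :username)
    pass = Keyword.fetch!(opts, :password)

    # PLAIN: empty authzid, then authcid and password, NUL-delimited, all in
    # a single message (sent over TLS in practice).
    case send_fun.(<<0, user::binary, 0, pass::binary>>) do
      {:ok, _response} -> :ok
      {:error, reason} -> {:error, reason}
    end
  end
end
```

Adding SCRAM or OAUTHBEARER then means writing another module against the same callback, which is presumably why new mechanisms are described as trivial.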
⚠️ Pattern Matching with Floats Bypassed Case Clause in Production: Team refactored if charge == 0 into a case statement during a module upgrade. Production records with charge = 0.0 started hitting the wrong branch because pattern matching is type-strict: 0 === 0.0 returns false even though 0 == 0.0 returns true. The original if condition used == for numeric comparison; the case statement uses pattern matching, which checks both value and type. The fix required a guard clause, value when value == 0, instead of a bare 0 pattern. Silent failure: no warnings, no errors, just skipped clauses shipping bad logic to production.
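A minimal reproduction of the trap, with a hypothetical charge value standing in for the production record:

```elixir
charge = 0.0

# The refactored version: the integer pattern 0 matches only integers,
# so a float 0.0 silently falls through to the catch-all clause.
case charge do
  0 -> :free            # never reached when charge is 0.0
  _ -> :bill_customer   # 0.0 lands here
end
#=> :bill_customer

# The fix: a guard using ==, which compares numerically across types.
case charge do
  value when value == 0 -> :free   # matches both 0 and 0.0
  _ -> :bill_customer
end
#=> :free
```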
🤖 Beam Bots Framework Brings Resilient Robotics to BEAM: New framework maps ROS2 patterns to native Elixir primitives: pub/sub becomes Phoenix.PubSub, services become GenServer.call, actions become supervised Tasks. Spark DSL defines robot topology declaratively, with physical structure mirroring code nesting. Supervision tree mirrors the robot's kinematic chain, providing fault isolation so a failing gripper no longer crashes the entire system. Includes forward kinematics via Nx with GPU support, hierarchical PubSub for sensor data, and URDF export for Gazebo integration. Author is building a Mars rover with rocker-bogie suspension for real-world validation, since the framework currently has zero physical robot deployments.
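A rough sketch of that ROS2-to-OTP mapping; the module, topic, and function names here are hypothetical, not Beam Bots' actual API:

```elixir
defmodule Rover.Gripper do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # ROS2 "service" -> synchronous GenServer.call
  def grip(force), do: GenServer.call(__MODULE__, {:grip, force})

  @impl true
  def init(_opts), do: {:ok, %{closed: false}}

  @impl true
  def handle_call({:grip, force}, _from, state) do
    # ROS2 "topic" -> Phoenix.PubSub broadcast (assumes a Phoenix.PubSub
    # supervisor running under the name Rover.PubSub)
    Phoenix.PubSub.broadcast(Rover.PubSub, "gripper/state", {:gripping, force})
    {:reply, :ok, %{state | closed: true}}
  end
end
```

Supervise Rover.Gripper under the subtree for the arm it belongs to, and a crash restarts the gripper without touching the rest of the kinematic chain.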
🎮 Game of Islands Built Entirely in LiveView With Zero JavaScript: Developer implements full web UI for Lance Halvorsen's Islands Engine game using only Phoenix LiveView. No custom JavaScript written—interactive game board, real-time updates, player actions all handled server-side. Based on "Functional Web Development with Elixir, OTP, and Phoenix" teaching fundamental concepts through game implementation. Live demo deployed on Fly.io demonstrates LiveView's capability for complex interactive UIs without frontend frameworks. Complete source available showing GenServer game state management, OTP supervision, and LiveView pubsub coordination.
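For flavor, a minimal sketch of the zero-JavaScript interaction model; IslandsWeb.GameLive and the event names are hypothetical, not the project's actual code:

```elixir
defmodule IslandsWeb.GameLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, guesses: [])}
  end

  # phx-click events arrive over the LiveView websocket; no custom JS needed.
  def handle_event("guess", %{"row" => row, "col" => col}, socket) do
    {:noreply, update(socket, :guesses, &[{row, col} | &1])}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="guess" phx-value-row="1" phx-value-col="1">A1</button>
    <p>Guesses so far: <%= length(@guesses) %></p>
    """
  end
end
```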
⚙️ Gust Task Orchestrator Open Sourced as Airflow Alternative: Team escaped Airflow complexity and Astronomer bills by building task orchestration system in Elixir. Leverages BEAM's supervision trees, process isolation, and fault tolerance for more efficient workflow management than Python-based alternatives. Open source release includes core orchestration engine demonstrating Elixir's suitability for data pipeline coordination. Aims to provide simpler operational model than Airflow's heavyweight deployment requirements while maintaining enterprise workflow capabilities.

Your API Went Down Because One User Hit The Heavy Endpoint
The API worked flawlessly under normal load. Then someone triggered the data export endpoint. Five seconds later: health checks started timing out, monitoring alerted, and every other request hung. One heavy computation took down the entire service. The culprit wasn't load—it was Node.js's cooperative multitasking model asking your code politely if it's done yet.
Node.js runs on a single-threaded event loop. Brilliant for I/O operations waiting on databases or external APIs. Fatal flaw: CPU-bound work blocks everything. When a function runs, it keeps running until it explicitly yields or returns. A user calculating a large hash, parsing massive JSON, or running a tight loop doesn't just slow down; they freeze the entire event loop. Every other request waits. Health checks time out. The API becomes unresponsive.
Most teams reach for worker threads as the solution. Smart teams question why the runtime doesn't handle this by default. The BEAM doesn't ask your code for permission to switch tasks; it forces a context switch roughly every 4,000 reductions (a reduction is approximately one function call). A process calculating the millionth digit of Pi gets interrupted mid-calculation so your user's health check can respond. The scheduler doesn't care if work is "important": it enforces fairness.
Here's the operational difference. Node.js death loop experiment: create an endpoint that blocks the CPU for 5 seconds with while(Date.now() - start < 5000) {}, then try hitting the health check endpoint. The health check spins and times out. The server is completely unresponsive. Elixir equivalent: run Stream.cycle([1]) |> Enum.take(10_000_000) in one endpoint, then immediately hit the health check from another request. The health check responds instantly. One scheduler core spikes to 100%, but the BEAM preempts the compute work every few thousand reductions to serve other requests.
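A runnable sketch of the BEAM half of that experiment, using raw processes in place of HTTP endpoints (the responder process stands in for the health check):

```elixir
# demo.exs: run with `elixir --erl "+S 1" demo.exs`.
# +S 1 forces a single scheduler, so the heavy work and the "health check"
# must share one core, which is the interesting case.

parent = self()

# The "heavy endpoint": the tight loop from the article.
spawn(fn ->
  Stream.cycle([1]) |> Enum.take(10_000_000)
  send(parent, :heavy_done)
end)

# The "health check": time a message round-trip to a fresh process while the
# heavy work runs. The BEAM preempts the heavy process on reduction counts,
# so this answers in microseconds, not after 5 seconds.
{micros, :pong} =
  :timer.tc(fn ->
    responder =
      spawn(fn ->
        receive do
          {:ping, from} -> send(from, :pong)
        end
      end)

    send(responder, {:ping, parent})

    receive do
      :pong -> :pong
    end
  end)

IO.puts("health check answered in #{micros} µs while the CPU was pinned")

receive do
  :heavy_done -> IO.puts("heavy work finished")
end
```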
Go learned this the hard way. For years, Go used cooperative scheduling: goroutines only yielded at function calls. Tight loops like for { i++ } with no function calls hijacked threads permanently. The garbage collector paused. Latency spiked. In February 2020, Go 1.14 introduced asynchronous preemption, using OS signals to forcibly interrupt greedy goroutines. Google engineers realized you cannot trust code to be polite at scale. Erlang figured this out in 1986.
In microservices or SaaS platforms, noisy neighbors destroy performance predictability. Node.js: one tenant's heavy request ruins P99 latency for everyone on that instance. Python asyncio: same problem. Elixir: the noisy neighbor is contained—their request might take longer, but it doesn't drag down everyone else. The BEAM trades a tiny bit of raw sequential speed for massive operational predictability. When you're serving millions of users, you want the runtime managing resources, not hoping your code stays polite under load.
Remember, for stable low-latency systems:
Cooperative multitasking trusts code to yield – Node.js and Python asyncio ask politely if work is done; one greedy function freezes everything
Preemptive scheduling enforces fairness – BEAM interrupts roughly every 4,000 reductions regardless of what code is doing, so heavy work can't monopolize the CPU
Worker threads are opt-in complexity – Node.js solution requires manual offloading; BEAM makes fairness the default with zero configuration
Go 1.14 validated BEAM's approach – Google added forced preemption after years of cooperative scheduling caused latency spikes from tight loops
. TIRED OF DEVOPS HEADACHES? .
Deploy your next Elixir app hassle-free with Gigalixir and focus more on coding, less on ops.
We're specifically designed to support all the features that make Elixir special, so you can keep building amazing things without becoming a DevOps expert.
See you next week,
Michael
P.S. Forward this to a friend who loves Elixir as much as you do 💜