No More Pager-Interrupted Nights
Go goroutines leaked until Elixir supervision made the pagers stop. Here's the data.
Welcome to GigaElixir Gazette, your 5-minute digest of Elixir ecosystem news that actually matters.
. WEEKLY PICKS .
ReqCassette Saves Time and Money Testing LLM APIs: Record-and-replay library for Req that makes HTTP testing deterministic and async-safe. Built on Req's native plug system with process isolation instead of global mocking, it's perfect for expensive LLM API calls you don't want hitting production repeatedly during test runs. The first call records; subsequent calls replay from cassette files instantly.
How Do You Control Claude Code From Elixir? ACPex implements Zed Industries' Agent Client Protocol with proper OTP supervision trees and fault tolerance. Communicate with AI code agents (Claude Code, Gemini, Goose) via JSON-RPC over stdio, with complete type safety through Ecto schemas. Interactive Livebook examples show controlling Claude Code directly from Elixir processes.
Squeezing BEAM Into 16MB on Energy-Harvesting Hardware: Peer Stritzinger's Code BEAM talk covers running the full Erlang runtime, OTP, a TCP/IP stack, and USB support in just 16MB of RAM on the RTEMS real-time OS, powered by temperature-gradient energy harvesting. GRiSP nano boots from a cold start while every millijoule counts. Pushing BEAM into places it was never designed for reveals brutal trade-offs in memory usage and boot optimization.
Pinterest, Discord, and Bleacher Report All Moved Critical Systems to Elixir: Pinterest rewrote notifications and shrank the codebase while improving response times. Discord adopted the BEAM for its real-time messaging stack. Bleacher Report reported dramatic server-count reductions for high-traffic infrastructure. This isn't language wars; it's operational economics, where JVM heap tuning, GC tail latency, and warm-up complexity create overhead that BEAM's lightweight processes eliminate.
Invoice Settlement System Tutorial Shows OTP Supervision Power: Multi-part series comparing fault-tolerant invoice processing across Go, Java, and Elixir implementations. The Elixir version, using GenServer, supervision trees, and Broadway, reduces infrastructure code by 90% while providing automatic retry, process isolation, and hot code updates that the other languages must implement manually.

Your Background Workers Made You a Part-Time Firefighter Until the Pager Finally Shut Up
The background notifications system made me feel like a part-time firefighter. 2:40 AM pages. Grafana looked healthy. Users were unhappy. Goroutines drifted: one unlucky path skipped a defer cancel, not enough to OOM, just enough to creep. Panics inside workers sometimes nuked entire batches, with no alarms and no backpressure. Graceful shutdown during deploys worked 95% of the time. That 5% of chaos meant manual ops at odd hours.
A textbook Go setup couldn't stop the bleeding. Containers, horizontal autoscaling, Prometheus, worker pools, context cancellation, errgroup, backoff: all the standard patterns. Each handler evolved its own "temporary vs. permanent" error rules. Policy entropy spread across the codebase. None of this is Go's fault; it's shared memory plus DIY orchestration when you're moving fast and need reliability yesterday.
Go pushed 120k messages per hour with 170ms p95 latency. Elixir managed 110k at 210ms, slower on paper. But Go paged the team 3-5 times per month during deploys. Elixir: 0-1. Go missed or duplicated 23 messages in 30 days. Elixir: zero. Memory jitter on Go swung ±220MB randomly. Elixir held steady at ±40MB. When Go crashed, manual ops. When Elixir crashed, 3-15ms automatic recovery without human intervention.
Each user gets a GenServer gate that rate-limits five messages per second through simple arithmetic: no locks, no shared counters. A bad payload crashes that user's process. The supervisor restarts just that gate in milliseconds. Other users never notice. One bad actor can't bring down the system. Broadway and GenStage manage backpressure automatically. A process crash becomes control flow, not a catastrophe requiring runbooks and incident calls.
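The per-user gate described above fits in a few dozen lines of plain OTP. This is a hedged sketch, not the author's code: the RateGate module name, the :global registration scheme, and the fixed 5-per-second limit are all illustrative assumptions.

```elixir
defmodule RateGate do
  @moduledoc """
  One GenServer per user. Allows at most 5 messages per second using
  simple arithmetic on a one-second window: no locks, no shared counters.
  """
  use GenServer

  @limit 5

  # One process per user, registered globally under the user's id.
  def start_link(user_id),
    do: GenServer.start_link(__MODULE__, user_id, name: via(user_id))

  def allow?(user_id), do: GenServer.call(via(user_id), :allow?)

  defp via(user_id), do: {:global, {:rate_gate, user_id}}

  @impl true
  def init(user_id),
    do: {:ok, %{user: user_id, window: now_sec(), count: 0}}

  @impl true
  def handle_call(:allow?, _from, state) do
    sec = now_sec()
    # New second: reset the counter. Same second: keep counting.
    state = if sec != state.window, do: %{state | window: sec, count: 0}, else: state

    if state.count < @limit do
      {:reply, true, %{state | count: state.count + 1}}
    else
      {:reply, false, state}
    end
  end

  defp now_sec, do: System.monotonic_time(:second)
end
```

Because each user owns its own process, a crash inside one gate takes down only that gate; started under a DynamicSupervisor with a one-for-one strategy, it would be restarted in milliseconds without touching any other user.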
The breakthrough insight: raw throughput and low p95 latency don't predict operational reality. Go kept the service quick but demanded constant firefighting. Elixir made it calm: boring graphs, zero drama, systems that respect sleep schedules. The team got happier not because Elixir is magic, but because fault isolation and automatic recovery build trust. When things break small and heal fast, you stop being a part-time firefighter.
Remember, for evaluating concurrency models:
Operational metrics matter more than benchmarks: pages per month and recovery time predict real costs better than p95 latency
Fault isolation prevents cascading failures: process-per-task models contain errors that shared-memory approaches spread system-wide
Automatic recovery reduces operational burden: supervision trees restart failures in milliseconds without manual intervention
Memory stability indicates leak prevention: ±40MB jitter vs. ±220MB reveals whether your concurrency model leaks resources slowly
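The fault-isolation and automatic-recovery points above can be demonstrated with nothing but OTP primitives. A hedged sketch (the :gate_a and :gate_b names are illustrative): under a one-for-one supervisor, killing one child restarts only that child, and its sibling keeps its state.

```elixir
# Two independent "gates" as Agent children under a one-for-one supervisor.
children = [
  %{id: :a, start: {Agent, :start_link, [fn -> 0 end, [name: :gate_a]]}},
  %{id: :b, start: {Agent, :start_link, [fn -> 0 end, [name: :gate_b]]}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

# :gate_b accumulates some state.
Agent.update(:gate_b, fn n -> n + 1 end)

# Simulate a bad payload crashing :gate_a.
Process.exit(Process.whereis(:gate_a), :kill)
Process.sleep(100)

# The supervisor restarted :gate_a automatically...
Process.alive?(Process.whereis(:gate_a))
# ...and :gate_b never noticed: its state is intact.
Agent.get(:gate_b, & &1)
```

The crash never escalates past the crashed process: that is the "break small, heal fast" property the article credits for the 3-15ms recovery times.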
. TIRED OF DEVOPS HEADACHES? .
Deploy your next Elixir app hassle-free with Gigalixir and focus more on coding, less on ops.
We're specifically designed to support all the features that make Elixir special, so you can keep building amazing things without becoming a DevOps expert.
See you next week,
Michael
P.S. Forward this to a friend who loves Elixir as much as you do.