⚡ Your Calls Drop. Discord’s Don’t.
Discord’s secret isn’t bandwidth. It’s the BEAM’s process isolation, something OS threads can’t match.
Welcome to GigaElixir Gazette, your 5-minute digest of Elixir ecosystem news that actually matters 👋.
WEEKLY PICKS
🎯 Phoenix Creator: Elixir Dominates The Agentic AI Era: Chris McCord argues JavaScript's dominance in AI coding misses the point—Elixir's unified tooling and BEAM architecture accidentally positioned it perfectly for LLM-powered development. McCord's AGENTS.md innovation teaches frontier models to write correct, idiomatic Elixir code while avoiding common hallucinations like list index-based access. With context windows doubling every seven months, agents stay on task longer, and Elixir's cohesive Mix tooling eliminates the fragmented ecosystem chaos plaguing JavaScript. Read the interview
🔐 How Do You Avoid Copy-Pasting Authorization Checks Across Controllers, LiveViews, and GraphQL Resolvers? Curiosum's Permit provides DSL-free authorization from a single source of truth. The "load and authorize" pattern means entering controller actions with records already loaded and authorized—if unauthorized, handlers deal with it automatically. Built as four modular libraries (base, Ecto, Phoenix, Absinthe) so you opt into only what you need without tight coupling.
Explore the library
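For a feel of what "load and authorize" means, here is a hand-rolled sketch of the pattern in a Phoenix controller. The module and function names below (MyApp.Content, MyApp.Policy, load_and_authorize) are illustrative placeholders, not Permit's actual API; the library does this wiring for you.

```elixir
defmodule MyAppWeb.ArticleController do
  use MyAppWeb, :controller

  # Illustrative function plug; a library like Permit generates this behavior for you.
  plug :load_and_authorize when action in [:show]

  def show(conn, _params) do
    # By the time we get here, the record is loaded and the user is allowed to see it.
    render(conn, :show, article: conn.assigns.article)
  end

  defp load_and_authorize(conn, _opts) do
    # MyApp.Content and MyApp.Policy stand in for your own context and rule modules.
    article = MyApp.Content.get_article!(conn.params["id"])

    if MyApp.Policy.can?(conn.assigns.current_user, :read, article) do
      assign(conn, :article, article)
    else
      conn |> send_resp(:forbidden, "Forbidden") |> halt()
    end
  end
end
```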
⚡ Ecto Batch Operations Tutorial Covers insert_all Limitations: AppSignal's guide walks through bulk inserts and Ecto.Multi for coordinated updates. The critical insight most tutorials skip: Repo.insert_all/3 bypasses changeset validations entirely, meaning data integrity checks must happen before insertion. Database-level constraints can fail entire batches on single violations—the solution involves chunking records into smaller batches wrapped in transactions that handle constraint errors gracefully.
Read the tutorial
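A rough sketch of that chunked approach, assuming a MyApp.Repo and a User schema of your own: validate rows up front (since insert_all skips changesets), split them into chunks, and wrap each chunk in its own transaction so one bad batch doesn't sink the rest.

```elixir
defmodule MyApp.BulkImport do
  alias MyApp.Repo
  alias MyApp.Accounts.User

  @chunk_size 500

  # Note: insert_all/3 does not autogenerate timestamps either; add them to each
  # row here if the schema uses timestamps().
  def import_users(rows) do
    rows
    |> Enum.filter(&valid_row?/1)
    |> Enum.chunk_every(@chunk_size)
    |> Enum.map(fn chunk ->
      # Each chunk gets its own transaction; a constraint failure only
      # affects this batch, and conflicting rows are skipped rather than
      # blowing up the whole import.
      Repo.transaction(fn ->
        Repo.insert_all(User, chunk, on_conflict: :nothing)
      end)
    end)
  end

  # The checks a changeset would normally run must happen before insert_all.
  defp valid_row?(%{email: email}) when is_binary(email), do: String.contains?(email, "@")
  defp valid_row?(_), do: false
end
```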
🚀 Not Every Team Wants LiveView: A Phoenix fork called Combo strips LiveView and code generation in favor of better Vite and Inertia integration for teams maintaining custom project templates. The author admits "probably nobody needs this" and warns it's not production-ready, but it exists for the 0.00000001% who want Phoenix without LiveView's opinions. Worth watching if your team already committed to Inertia across multiple frameworks.
Check the repo
💼 Eight Months Studying Elixir and OTP, Zero Job Offers: Reddit discussion exposes the catch-22 new Elixir developers face—positions demand extensive Elixir experience, but limited job availability prevents gaining that experience. Community members report companies in the Elixir ecosystem aren't as open to "strong engineering background, less Elixir experience" candidates as expected. The conversation confirms what hiring data already shows: Elixir remains undervalued by recruiters who don't understand BEAM's competitive advantages.
Join the discussion

Your Call Drops. Threads Are to Blame.
Your call drops mid-sentence when the phone switches towers. You redial. It happens again. Meanwhile, thousands of other calls stay connected—John's conversation never stuttered. Why does your infrastructure treat connections like they're all in this together?
You reach for OS threads like everyone else—one per connection, problem solved. Your 8GB server caps at 16,384 threads before memory exhaustion. Discord handles 2.5 million concurrent voice connections on BEAM. Same problem, 150x the capacity. The difference? BEAM processes cost 327 words (roughly 2.6KB on a 64-bit system) instead of 512KB per thread.
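You can check the per-process cost yourself from IEx; exact byte counts vary by OTP release and machine, but they stay in the low kilobytes.

```elixir
# Spawn an idle process and ask the VM how much memory it uses (in bytes).
pid =
  spawn(fn ->
    receive do
      :stop -> :ok
    end
  end)

Process.info(pid, :memory)
#=> {:memory, ...} a few kilobytes, versus roughly 512KB of stack for an OS thread

# Spawning a hundred thousand of them is routine; average their footprint:
pids = for _ <- 1..100_000, do: spawn(fn -> Process.sleep(:infinity) end)
div(:erlang.memory(:processes), length(pids))
#=> still in the low kilobytes per process
```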
OS threads share memory. Multiple threads hit the same data simultaneously—race conditions appear. Add locks to prevent corruption. Now threads wait for each other instead of working. Even with locks, one thread writes bad data and every other thread reads it. That 512KB per thread? Pure overhead before doing anything useful.
BEAM processes operate in isolated memory chunks, communicating via messages—no shared memory means no race conditions. Creating a process builds minimal internal structure: identifier, mailbox, stack, heap, metadata. No system calls, no kernel involvement. The scheduler gives each process a time slice—if a task takes too many operations, it pauses mid-execution, saves state, and switches to another process. Long-running tasks never block the CPU. Supervision trees monitor process health and restart failures without affecting the rest of the system.
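In plain Elixir, that isolation looks like this: the counter process owns its state, and the only way anything else can affect it is by sending a message.

```elixir
# The counter's state lives on its own heap; no locks, no shared memory.
counter =
  spawn(fn ->
    loop = fn loop, count ->
      receive do
        {:increment, caller} ->
          send(caller, {:count, count + 1})
          loop.(loop, count + 1)
      end
    end

    loop.(loop, 0)
  end)

# Other processes interact with it only through its mailbox.
send(counter, {:increment, self()})

receive do
  {:count, n} -> IO.puts("counter is now #{n}")
end
```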
The breakthrough insight: Erlang accidentally solved modern concurrency problems before they existed. Built for telecom systems in 1986, it optimized for reliability and isolation when everyone else optimized for single-core performance. Multicore CPUs, distributed systems, millions of concurrent connections—BEAM handled them naturally while other languages retrofitted concurrency through libraries and frameworks.
Remember these points about BEAM's advantages:
OS threads cost 512KB minimum – BEAM processes cost 327 words, enabling millions of concurrent operations
Shared memory creates race conditions – Process isolation eliminates entire categories of concurrency bugs
Supervision trees handle failures – Restart crashed processes without affecting healthy ones (see the sketch after this list)
Preemptive scheduling prevents blocking – Long-running tasks pause automatically, giving other processes CPU time
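Here is a minimal supervision sketch (module and worker names are illustrative): one worker crashes, the supervisor restarts it, and its sibling never notices.

```elixir
# Two workers under a :one_for_one supervisor; a crash in one leaves the other alone.
defmodule Demo.Worker do
  use GenServer

  def start_link(name), do: GenServer.start_link(__MODULE__, name, name: name)

  @impl true
  def init(name), do: {:ok, name}

  @impl true
  def handle_cast(:boom, name), do: raise("#{name} crashed")
end

children = [
  Supervisor.child_spec({Demo.Worker, :worker_a}, id: :worker_a),
  Supervisor.child_spec({Demo.Worker, :worker_b}, id: :worker_b)
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

GenServer.cast(:worker_a, :boom)

# Give the supervisor a moment to restart worker_a, then check both.
Process.sleep(100)
Process.whereis(:worker_a)                   #=> a new pid; the supervisor restarted it
Process.alive?(Process.whereis(:worker_b))   #=> true; the crash never touched it
```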
TIRED OF DEVOPS HEADACHES?
Deploy your next Elixir app hassle-free with Gigalixir and focus more on coding, less on ops.
We're specifically designed to support all the features that make Elixir special, so you can keep building amazing things without becoming a DevOps expert.
See you next week,
Michael
P.S. Forward this to a friend who loves Elixir as much as you do 💜