🔥 NIFs that don't crash your VM
Native code without the crashes, scientific computing gets real, Phoenix's sync magic
Welcome to GigaElixir Gazette, your 5-minute digest of Elixir ecosystem news that actually matters 👋.
. WEEKLY PICKS .
🔮 WebSockets Are Overkill - Phoenix Sync Proves The Simpler Pattern Works
Everyone reaches for WebSockets when they need real-time collaboration. Then they drown in connection management, authentication middleware, and message serialization hell. Christian Alexander's TanStack DB + Phoenix Sync tutorial demonstrates the contrarian approach: stream database changes directly to the browser and let the BEAM handle complexity you're manually implementing. The architecture is elegant because it's minimal - embedded ElectricSQL streams changes as "shapes," authenticated controllers scope data access, Phoenix.Sync.Writer handles transactions. He built a ranked-choice voting app where changes sync instantly across users, from localStorage prototype to production backend in surprisingly little code. The key insight most teams miss: stop fighting the database with custom WebSocket protocols. Stream it directly and use patterns the BEAM already optimized. Real-time collaboration without reinventing distributed systems.
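For flavor, the read path ends up being just a scoped Ecto query handed to a sync-aware controller helper. A minimal sketch, assuming Phoenix.Sync's `sync_render/3` controller helper as described in its docs; the module, schema, and field names here are invented:

```elixir
defmodule MyAppWeb.BallotController do
  use MyAppWeb, :controller

  import Ecto.Query, only: [from: 2]
  import Phoenix.Sync.Controller

  # The controller authenticates and scopes; the embedded ElectricSQL engine
  # streams the resulting "shape" to the browser and keeps it updated as rows change.
  def index(conn, params) do
    poll_id = conn.assigns.current_poll.id

    sync_render(conn, params,
      from(b in MyApp.Voting.Ballot, where: b.poll_id == ^poll_id)
    )
  end
end
```

Writes travel the other direction through Phoenix.Sync.Writer, which applies client mutations transactionally on the server, so the browser never talks to the database directly.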
⚡ Python's zarr-python Is The Bottleneck In Every ML Pipeline - Elixir Just Fixed It
ExZarr 1.0 ships with 26x faster parallel chunk reads than Python's zarr-python while maintaining full compatibility. The scientific computing world has been stuck with Python for chunked N-dimensional arrays because nothing else existed. Now Elixir has a native implementation that's production-ready: 1,713 tests, 80% coverage, zero warnings, comprehensive security docs. Multiple storage backends out of the box. Zarr replaced HDF5 as the cloud-native format for terabyte datasets - compressed, chunked arrays designed for S3/GCS workflows where distributed workers access data in parallel. The performance gap exists because Python's GIL kills parallel I/O, while the BEAM was built for exactly this workload. Next release adds Nx integration, which means BEAM-native ML pipelines just became viable for climate science and genomics teams working at scale. Python's monopoly on scientific computing just got challenged.
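The parallelism story is the BEAM's standard toolkit rather than anything exotic. A sketch of the idea only: `ExZarr.Array.open/1` and `read_chunk/2` below are hypothetical names standing in for the library's real API, which may differ:

```elixir
# Hypothetical API: ExZarr.Array.open/1 and ExZarr.Array.read_chunk/2 are
# placeholders, not the documented functions. The point is the concurrency
# pattern - chunk reads fan out across schedulers with no GIL serializing I/O.
{:ok, array} = ExZarr.Array.open("s3://climate-bucket/temperature.zarr")

chunk_coords = for x <- 0..15, y <- 0..15, do: {x, y}

chunks =
  chunk_coords
  |> Task.async_stream(
    fn coord -> ExZarr.Array.read_chunk(array, coord) end,
    max_concurrency: System.schedulers_online() * 4,
    timeout: :timer.seconds(60)
  )
  |> Enum.map(fn {:ok, data} -> data end)
```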
🛡️ That ImageMagick NIF Took Down Production At 3 AM - SafeNIF Prevents The Next One
NIFs crash your entire BEAM node when they segfault. Not the process, not the supervisor tree - the whole VM. That ImageMagick binding processing user uploads? One malformed JPEG and the whole node goes dark. SafeNIF wraps untrusted NIFs in isolated peer nodes on the same machine. When the NIF crashes, only the peer dies. Your main node keeps running. The implementation is pure Elixir with Erlang stdlib - no Docker containers, no Kubernetes complexity, no orchestration nightmares. Just supervision trees doing what they were designed for, extended to native code. The trade-off is inter-node communication overhead, but that's acceptable when the alternative is explaining to your CTO why a C library bug took down production. BEAM's fault tolerance guarantees finally work for native extensions the way they work for everything else.
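The pattern underneath can be sketched with nothing but OTP 25+'s `:peer` module; SafeNIF's actual API and supervision wiring will differ, and `ImageMagickNif` below is a stand-in module:

```elixir
# Start a throwaway peer VM on the same machine, controlled over stdio.
# Code paths and start args are omitted here for brevity.
{:ok, peer, _node} =
  :peer.start_link(%{name: :peer.random_name(), connection: :standard_io})

jpeg = File.read!("upload.jpg")

# If the native call segfaults, only the peer VM dies; the main node just
# sees the call exit and a supervisor can restart the peer.
result =
  try do
    :peer.call(peer, ImageMagickNif, :thumbnail, [jpeg], :timer.seconds(10))
  catch
    :exit, reason -> {:error, {:peer_crashed, reason}}
  end

:peer.stop(peer)
```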
🔐 Your Authorization Logic Is Copy-Pasted Into Every LiveView - Permit.Phoenix 0.4 Centralizes It
Permit.Phoenix 0.4 exists because Curiosum watched dozens of teams reinvent the authorization wheel with the same bugs. Mount callbacks checking current_user.role scattered across LiveViews. Policy logic duplicated in templates and controllers. Multi-tenant scoping that breaks when product adds "managers can edit their team's records but only view others." The 0.4 release ships with scopes for multi-tenant apps, proper error handling that doesn't leak implementation details, and sensible defaults that eliminate boilerplate most teams copy-paste wrong. The video tutorial walks through real-world patterns extracted from production codebases. If you're still writing `if current_user.role == :admin` in templates, this library is the intervention you need before your authorization logic becomes unmaintainable.
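The shape of what gets centralized, independent of Permit's own DSL (which expresses the same rules declaratively and plugs them into controllers and LiveViews for you), is one module that owns every answer to "can this user do this to that record?":

```elixir
defmodule MyApp.Policy do
  # Illustrative only - not Permit.Phoenix's API. The point is that role and
  # tenant rules live in one place instead of in templates and mount callbacks.
  alias MyApp.Accounts.User

  def can?(%User{role: :admin}, _action, _record), do: true

  # Managers can edit their own team's records...
  def can?(%User{role: :manager, team_id: team}, :edit, %{team_id: team}), do: true
  # ...but only view everyone else's.
  def can?(%User{role: :manager}, :view, _record), do: true

  # Everyone can edit what they own and view the rest.
  def can?(%User{id: id}, :edit, %{owner_id: id}), do: true
  def can?(%User{}, :view, _record), do: true
  def can?(_user, _action, _record), do: false
end
```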
📝 Six Months From Now You Won't Remember Why That Index Exists
Ecto schemas drift from their migrations. Production databases accumulate indexes nobody remembers adding. Foreign key cascade rules become mysteries when the developer who wrote them leaves. EctoAnnotate scans your migrations and writes annotations directly into schema files - primary keys, foreign keys with actions, indexes, associations. Detects binary_id vs integer relationships automatically. Run `mix ecto_annotate --annotate` once and your schema files document themselves. The annotations live as comments above your schema definition, updated whenever migrations change. Inspired by Ruby's annotate gem, which prevented countless "wait, is this column nullable?" conversations. Your future self will thank you when debugging production issues at 2 AM.
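The result is roughly this kind of header above each schema (illustrative; the exact comment layout is whatever the task emits for your migrations):

```elixir
# == Schema Information ==
# table: orders
# primary key: id :binary_id
# foreign keys: user_id -> users.id (on_delete: :delete_all)
# indexes: orders_user_id_index (user_id)
#          orders_status_inserted_at_index (status, inserted_at)

defmodule MyApp.Orders.Order do
  use Ecto.Schema

  @primary_key {:id, :binary_id, autogenerate: true}
  @foreign_key_type :binary_id
  schema "orders" do
    field :status, :string
    belongs_to :user, MyApp.Accounts.User
    timestamps()
  end
end
```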

Epmd Beats Complex Service Discovery In 80% Of Production Deployments
Everyone cargo-cults Kubernetes DNS or gossip protocols for BEAM clustering because that's what the tutorials show. Then they spend three weeks debugging why nodes won't connect in staging. Meanwhile, epmd with `.hosts.erlang` has been solving this problem since 1998 with zero configuration complexity.
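The whole "configuration" is a file of quoted hostnames, one per line, each ending with a period:

```erlang
%% .hosts.erlang - in the node's working directory or home directory
'app1.internal.example.com'.
'app2.internal.example.com'.
'app3.internal.example.com'.
```

With the same cookie on every node, `:net_adm.world()` at boot pings epmd on each listed host and connects to whatever nodes it finds. That's the entire mesh.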
Most teams miss this deployment fact: if your servers can handle databases, they can handle epmd too. Cloud vendors push service mesh because they profit from complexity. Kubernetes purists push DNS because it fits their mental model. But production systems that actually stay up?
They use epmd for the 80% of infrastructure that doesn't change, and only add dynamic discovery when scaling actually demands it. The libcluster documentation won't tell you this directly, but the strategy system exists because clustering requirements change with scale - not because you need complexity from day one.
Start with epmd. Add multicast UDP gossip for local development (solves the "manually connecting nodes in iex" problem instantly). Introduce Kubernetes DNS only when you're actually doing dynamic scaling that epmd can't handle. Most teams never reach that point.
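In libcluster terms, that progression is just swapping strategies in config. The strategy modules below are real libcluster strategies; the topology names and the wiring are illustrative:

```elixir
# config/runtime.exs
import Config

config :libcluster,
  topologies:
    if config_env() == :prod do
      # Stable hostnames: epmd plus the .hosts.erlang file shown earlier.
      [production: [strategy: Cluster.Strategy.ErlangHosts]]
    else
      # Local dev: multicast UDP gossip, nodes discover each other automatically.
      [dev: [strategy: Cluster.Strategy.Gossip]]
    end

# Swap in Cluster.Strategy.Kubernetes.DNS only once hostnames genuinely
# churn faster than a static host list can keep up.
```

The topologies are handed to `Cluster.Supervisor` in your application's supervision tree, so changing strategy later is a config change, not a rewrite.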
The controversial part: teams that jump straight to complex service discovery are solving problems they don't have yet. Epmd's "limitation" of requiring known hostnames is only a limitation if your infrastructure is so dynamic that hostnames change constantly. If that's true, you have bigger problems than clustering. For the vast majority of deployments - even large ones - stable infrastructure with epmd outperforms elaborate discovery mechanisms that add failure modes without adding value.
Remember:
Epmd with .hosts.erlang works for 80% of production deployments - add complexity only when infrastructure reality demands dynamic discovery
Multicast UDP gossip solves local development clustering instantly - stop manually connecting nodes in iex sessions
Service discovery complexity adds failure modes without adding value unless you're actually doing dynamic scaling where hostnames change constantly
Libcluster's strategy system exists for scaling transitions, not day-one architecture - design for strategy evolution when you hit real limits
. TIRED OF DEVOPS HEADACHES? .
Deploy your next Elixir app hassle-free with Gigalixir and focus more on coding, less on ops.
We're specifically designed to support all the features that make Elixir special, so you can keep building amazing things without becoming a DevOps expert.
See you next week,
Michael
P.S. Forward this to a friend who loves Elixir as much as you do 💜