🎰 The AI Code Generator Slot Machine
Jevons Paradox from 1865 explains why easier programming creates MORE code debt—here's the survival guide
Welcome to GigaElixir Gazette, your 5-minute digest of Elixir ecosystem news that actually matters 👋.
. WEEKLY PICKS .
🔍 Phoenix Ships Server Logs in Browser Console for Dev: Phoenix 1.7.15 streams backend logs to browser console during development. Chris McCord uses existing Phoenix channels and Erlang logger handlers—30 lines of code. Enable web_console_logger: true and server exceptions appear alongside client-side errors in DevTools. Backend crash? See the stack trace where you're inspecting DOM state. No more split-screen juggling or lost error context when Phoenix processes restart mid-development session.
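If you want to flip it on, it's roughly a one-line change in dev config. A minimal sketch, assuming a generated Phoenix 1.7 app where the option lives under the endpoint's live_reload settings (my_app and MyAppWeb are placeholders; check your own config/dev.exs and the phoenix_live_reload docs for the exact shape):

```elixir
# config/dev.exs
config :my_app, MyAppWeb.Endpoint,
  live_reload: [
    # stream server-side logs into the browser console during development
    web_console_logger: true,
    patterns: [
      ~r"priv/static/(?!uploads/).*(js|css|png|jpeg|jpg|gif|svg)$",
      ~r"lib/my_app_web/(controllers|live|components)/.*(ex|heex)$"
    ]
  ]
```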
🤔 LiveView State Management Pain Points Validated by Community: What kills LiveView apps at scale? A Reddit thread exposes the truth: WebSocket disconnects lose state more often than expected, prop drilling through nested components creates maintenance nightmares, and manual PubSub synchronization adds hidden complexity. Multiple teams report complete rewrites after hitting state walls. Survival patterns are emerging: URL params for navigation state, hidden form fields for session persistence, ETS for shared cross-process state that survives individual LiveView crashes. The community is exploring solutions without importing client-side Redux complexity.
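One of those survival patterns in practice: make the URL the source of truth so a dropped socket or a crashed LiveView rebuilds the same state on remount. A minimal sketch, not code from the thread (OrdersLive, the /orders route, and the status filter are all hypothetical):

```elixir
defmodule MyAppWeb.OrdersLive do
  use MyAppWeb, :live_view

  # handle_params/3 runs on every mount and patch, so state encoded in the
  # URL survives reconnects and process restarts for free.
  def handle_params(params, _uri, socket) do
    {:noreply, assign(socket, :status, params["status"] || "all")}
  end

  # Events patch the URL instead of mutating assigns directly;
  # push_patch/2 re-invokes handle_params/3 with the new query string.
  def handle_event("filter", %{"status" => status}, socket) do
    {:noreply, push_patch(socket, to: ~p"/orders?status=#{status}")}
  end

  def render(assigns) do
    ~H"""
    <p>Showing <%= @status %> orders</p>
    <button phx-click="filter" phx-value-status="open">Open only</button>
    """
  end
end
```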
📋 Phoenix LiveView Decision Framework Published: Developer documents when LiveView wins versus when it creates operational headaches. Built for internal tools, admin dashboards, simple CRUD apps where server round-trips don't matter. Struggles with complex UI elements needing heavy client-side interaction, network instability causing visible disconnects, offline-first requirements fighting WebSocket architecture. DevOps complexity surfaces fast: sticky sessions needed for horizontal scaling, WebSocket configuration non-obvious across load balancers, connection drops more noticeable than request-response. Team knowledge determines success—Elixir experience separates productivity from fighting the framework.
⏰ Minimal Periodic Task Runner Using GenServer Timeouts: Pattern for lightweight background jobs without Oban's database overhead. The GenServer :timeout message triggers the next execution automatically. Return {:ok, state, timeout_ms} from init/1 and {:noreply, state, timeout_ms} from handle_info/2. Because a GenServer handles one message at a time, runs never overlap: if a task takes longer than the interval, the next execution simply waits. Built for trivial cleanup tasks like purging expired records or rotating logs where job history tracking adds zero value. Alternative to :timer.send_interval/2 with proper supervision tree integration.
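The whole pattern fits in one short module. A minimal sketch (CleanupWorker, the 15-minute interval, and purge_expired_records/0 are stand-ins, not a published library):

```elixir
defmodule MyApp.CleanupWorker do
  use GenServer

  @interval :timer.minutes(15)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Returning a timeout from init/1 arms the first :timeout message.
  @impl true
  def init(_opts), do: {:ok, %{}, @interval}

  # The next timeout is only armed when this callback returns, so runs can
  # never overlap: a slow job just pushes the following run back.
  # Caveat: any other message arriving first cancels the pending timeout,
  # so keep this process dedicated to its periodic job.
  @impl true
  def handle_info(:timeout, state) do
    purge_expired_records()
    {:noreply, state, @interval}
  end

  # Stand-in for the real cleanup query.
  defp purge_expired_records, do: :ok
end
```

Drop it into your application's supervision tree as MyApp.CleanupWorker and you get restarts for free, with no job table to migrate.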
🚀 Ash Phoenix Starter Kit Adds Multitenancy and Impersonation: Community project ships schema-based multitenancy with team switching, user invitations, super user impersonation built on Ash Framework's declarative resource system. Authentication and group management included with authorization policies enforced at data layer, not controller level where they leak. Charts and maps components planned. Early stage but gaining traction—developers requesting Stripe payment integrations for SaaS boilerplate. Functions as step-by-step learning resource for Ash beginners showing real-world patterns beyond documentation examples.

Your LLM Generated Perfect Code Until Production Proved It Was Gambling
Josh Price opened Code BEAM Europe 2025 asking if we've built the software equivalent of a useless box: pour in prompts, get ten app versions, spend Sunday reviewing which ones won't crash in production. Then he delivered an economics lesson from 1865, the year William Stanley Jevons published The Coal Question. James Watt's more efficient steam engine had led economists to predict coal demand would plummet; instead it soared. Jevons observed that efficiency counterintuitively increases demand. Programming becomes easier, software becomes cheaper. Cheaper software means we get buried in it. LLMs make programming radically easier, so prepare for the avalanche.
Traditional functions work deterministically: f(x) produces identical output for identical input. LLMs replaced that with probability distributions. Same prompt, different code every time. You're not debugging logic anymore—you're narrowing variance ranges. Josh calls it a slot machine: buy tokens, pull lever, maybe jackpot one in ten. Maybe garbage. Code generation speeds up but time shifts elsewhere—reviewing output, debugging edge cases, rewriting what looked perfect until production traffic hit it.
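To make the contrast concrete, here's a toy sketch (pure illustration; Enum.random/1 stands in for an LLM's sampling step, nothing here is a real LLM API):

```elixir
# Deterministic: the same input maps to the same output, forever.
defmodule Pricing do
  def with_tax(amount), do: Float.round(amount * 1.1, 2)
end

Pricing.with_tax(100.0)
#=> 110.0, every single time

# Probabilistic: the same prompt can yield a different artifact on every pull.
defmodule SlotMachine do
  @outcomes ["clean module", "subtle off-by-one", "hallucinated function call"]

  # Enum.random/1 is a stand-in for sampling from a model's distribution.
  def generate(_prompt), do: Enum.random(@outcomes)
end

SlotMachine.generate("write the billing module")
#=> different answer per pull; you review output, you don't trust it
```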
Everyone vibe codes entire apps without looking at the generated code. Smart teams build deterministic cores with disposable edges. Josh's architecture: Ash Framework's declarative DSL captures domain logic as structured specs that LLMs can reliably generate from without hallucinating business rules. The core stays solid. The UI layer becomes disposable, regenerated on demand. This inverts the typical iceberg, where authentication, CRUD, and API layers live underwater and require manual implementation. Ash generates that infrastructure predictably from domain specs. LLMs build UI on that foundation without rolling dice on critical paths.
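A deterministic core in this style looks roughly like the declarative resource below. It's a hedged sketch assuming Ash 3.x syntax (the Ticket resource and MyApp.Support domain are hypothetical), not code from Josh's talk:

```elixir
defmodule MyApp.Support.Ticket do
  # Domain rules are spelled out declaratively, so there is nothing here
  # for an LLM to improvise: attributes, defaults, and state transitions
  # are pinned down before any code gets generated on top.
  use Ash.Resource,
    domain: MyApp.Support,
    data_layer: AshPostgres.DataLayer

  postgres do
    table "tickets"
    repo MyApp.Repo
  end

  attributes do
    uuid_primary_key :id
    attribute :subject, :string, allow_nil?: false
    attribute :status, :atom,
      constraints: [one_of: [:open, :closed]],
      default: :open
  end

  actions do
    defaults [:read]

    create :open do
      accept [:subject]
    end

    update :close do
      change set_attribute(:status, :closed)
    end
  end
end
```

The UI that sits on top of a resource like this is the part you let the LLM regenerate at will; the resource itself is the part you review once and keep.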
How do you review an entire application? You don't—you constrain what LLMs can generate before they start. Structured specifications narrow LLM probability ranges. Specs become source artifacts, LLMs compile them to implementation. OpenAI's model spec lives on GitHub as markdown files describing behavior with examples—specs, not code, as primary source. But vague prompts produce vague results. Specificity constrains randomness.
Security isn't optional when LLMs can follow instructions embedded in any content. Simon Willison termed it "the lethal trifecta": untrusted input plus private data access plus external communication equals data exfiltration. Sandbox everything. The new skill isn't writing code—it's recognizing quality output, minimizing variance, knowing what to ask for. Jevons Paradox means software gets plentiful and cheap, which favors engineers mastering probability management over those treating LLMs as magic wands.
Remember, for AI-augmented development:
Deterministic core prevents cascading failures – Structured specs for domain logic, raw LLM generation for disposable UI only
Probability management replaces code crafting – Narrowing output variance and recognizing quality from noise is the new skill
Structured specs constrain LLM randomness – Declarative domain modeling limits probability ranges better than vague prompts
Security sandbox prevents data exfiltration – Untrusted input + private data + external communication = lethal trifecta
. TIRED OF DEVOPS HEADACHES? .
Deploy your next Elixir app hassle-free with Gigalixir and focus more on coding, less on ops.
We're specifically designed to support all the features that make Elixir special, so you can keep building amazing things without becoming a DevOps expert.
See you next week,
Michael
P.S. Forward this to a friend who loves Elixir as much as you do 💜