Advanced
Want to master Elixir for distributed, high-performance systems? This tutorial covers BEAM VM internals, distributed Elixir, metaprogramming with macros, performance optimization, and production deployment.
Coverage
This tutorial covers the 85-95% range of Elixir knowledge - master-level topics for distributed systems and advanced optimization.
Prerequisites
- Intermediate Tutorial complete
- Strong understanding of OTP (GenServer, Supervisor, Application)
- Experience building Phoenix applications
- Proficiency with Ecto and testing
- Production deployment experience helpful
Learning Outcomes
By the end of this tutorial, you will:
- Understand BEAM VM architecture (scheduler, process model, garbage collection)
- Build distributed Elixir systems with nodes and clustering
- Master metaprogramming with macros and compile-time code generation
- Optimize performance with profiling and benchmarking tools
- Handle umbrella projects for monorepo development
- Deploy Elixir releases to production
- Implement advanced OTP patterns (Registry, DynamicSupervisor, PartitionSupervisor)
- Debug production issues with observability tools
Learning Path
%% Color Palette: Blue #0173B2, Orange #DE8F05, Teal #029E73, Purple #CC78BC, Brown #CA9161
graph TD
A[Type System 1.19+ ⭐] --> B[BEAM VM Internals ⭐]
B --> C[Distributed Elixir ⭐]
C --> D[Metaprogramming & Macros ⭐]
D --> E[Performance Optimization]
E --> F[Profiling & Benchmarking]
F --> G[Umbrella Projects]
G --> H[Production Deployment]
H --> I[Observability]
style A fill:#DE8F05,stroke:#000000,stroke-width:3px,color:#000000
style B fill:#DE8F05,stroke:#000000,stroke-width:3px,color:#000000
style C fill:#DE8F05,stroke:#000000,stroke-width:3px,color:#000000
style D fill:#DE8F05,stroke:#000000,stroke-width:3px,color:#000000
Color Palette: Orange (#DE8F05 - critical sections for advanced Elixir)
⭐ Most important sections: Type System (1.19+), BEAM VM, Distributed Elixir, and Metaprogramming - unique strengths of Elixir!
Section 1: Type System (Elixir 1.19+)
Elixir 1.19 expands the set-theoretic type system (rolled out gradually since Elixir 1.17) for enhanced compile-time checking.
Understanding Set-Theoretic Types
Set-theoretic types treat types as sets of values:
# Untyped: works for anything that supports +
def add(a, b) do
  a + b
end
# Typed: the contract is expressed with a @spec (Elixir has no inline type annotations in function heads)
@spec add(integer(), integer()) :: integer()
def add(a, b) do
  a + b
end
Type System Benefits:
- Compile-time warnings: Catch type errors before runtime
- Better documentation: Types serve as inline documentation
- IDE support: Improved autocomplete and refactoring
- Gradual typing: Add types incrementally (not required)
Basic Type Annotations
defmodule Calculator do
# Function with type specs
@spec add(integer(), integer()) :: integer()
def add(a, b), do: a + b
@spec divide(integer(), integer()) :: {:ok, float()} | {:error, String.t()}
def divide(_a, 0), do: {:error, "Division by zero"}
def divide(a, b), do: {:ok, a / b}
# Multiple return types (union)
@spec process(map()) :: {:ok, String.t()} | {:error, atom()}
def process(%{status: :success} = data) do
{:ok, data.message}
end
def process(_), do: {:error, :invalid_data}
end
Type Checking with Dialyzer
defp deps do
[
{:dialyxir, "~> 1.4", only: [:dev, :test], runtime: false}
]
end
mix dialyzer
Example type error:
defmodule Broken do
@spec add(integer(), integer()) :: integer()
def add(a, b) do
# Dialyzer error: return type is float, not integer
a / b
end
end
Enhanced Type Checking (Elixir 1.19+)
Improved inference for pattern matching:
defmodule Enhanced do
# Compiler infers tighter types from pattern matching
def process({:ok, value}) when is_integer(value) do
# Compiler knows value is integer here
value * 2
end
def process({:error, reason}) when is_binary(reason) do
# Compiler knows reason is string here
String.upcase(reason)
end
def process(_) do
:unknown
end
end
Union types with guards:
defmodule TypeGuards do
@type result :: {:ok, integer()} | {:error, String.t()}
@spec double(result()) :: result()
def double({:ok, n}) when is_integer(n), do: {:ok, n * 2}
def double({:error, msg}) when is_binary(msg), do: {:error, msg}
end
Set-Theoretic Type Operations
In the set-theoretic model, the compiler reasons about types as sets and combines them with union (or), intersection (and), and negation (not). These operators appear in compiler diagnostics and in the type system documentation, but @type and @spec can only express unions with |; intersections and negations cannot be written directly in typespecs today. You can approximate the intent with map types and guards:
# Intersection-like: a user is a map that has at least :name and :age
@type user :: %{required(:name) => String.t(), required(:age) => integer()}
@spec greet(user()) :: String.t()
def greet(%{name: name}), do: "Hello, #{name}!"
# "A binary and not nil" is enforced with a guard rather than a negation type
@spec upcase(String.t()) :: String.t()
def upcase(str) when is_binary(str), do: String.upcase(str)
Compiler Diagnostics (Elixir 1.19+)
Elixir 1.19 provides better error messages:
defmodule DiagnosticsExample do
def broken do
# Better error: suggests you meant String.upcase/1
String.uppercase("hello")
end
def wrong_type do
x = 5
# Better error: shows x is integer, can't be used as string
String.length(x)
end
end
Enhanced warnings:
def unused(a, b) do
a + a # Warning: variable b is unused
end
def unreachable do
if true do
:always_true
else
:never_reached # Warning: unreachable code
end
end
Advanced Type Specs
Generic types:
defmodule Container do
@type t(a) :: %__MODULE__{value: a}
defstruct [:value]
@spec new(a) :: t(a) when a: any()
def new(value), do: %__MODULE__{value: value}
@spec map(t(a), (a -> b)) :: t(b) when a: any(), b: any()
def map(%__MODULE__{value: v}, fun) do
%__MODULE__{value: fun.(v)}
end
end
Container.new(5) |> Container.map(&(&1 * 2)) # %Container{value: 10}
Opaque types:
defmodule SecureToken do
@opaque t :: String.t()
@spec generate() :: t()
def generate do
:crypto.strong_rand_bytes(32) |> Base.encode64()
end
@spec verify(t(), t()) :: boolean()
def verify(token1, token2) do
token1 == token2
end
end
token = SecureToken.generate()
Behaviours with typespecs:
defmodule Storage do
@callback put(key :: String.t(), value :: any()) :: :ok | {:error, term()}
@callback get(key :: String.t()) :: {:ok, any()} | {:error, :not_found}
@callback delete(key :: String.t()) :: :ok
end
defmodule MemoryStorage do
@behaviour Storage
@impl Storage
@spec put(String.t(), any()) :: :ok
def put(_key, _value) do
# Implementation
:ok
end
@impl Storage
@spec get(String.t()) :: {:ok, any()} | {:error, :not_found}
def get(_key) do
# Implementation
{:error, :not_found}
end
@impl Storage
@spec delete(String.t()) :: :ok
def delete(_key), do: :ok
end
Performance Impact of Type Checking
Compilation performance (Elixir 1.19 improvement):
time_before = :os.system_time(:millisecond)
Code.compile_file("lib/my_large_module.ex")
time_after = :os.system_time(:millisecond)
IO.puts("Compiled in #{time_after - time_before}ms")Elixir 1.19 compilation improvements:
- Up to 4x faster compilation of large projects
- Incremental compilation optimizations
- Parallel module compilation
- Reduced memory usage during compilation
Type checking does NOT affect runtime:
@spec slow_function(integer()) :: integer()
def slow_function(n) do
# No runtime overhead from type specs
:timer.sleep(1000)
n * 2
end
Gradual Typing Strategy
Start with critical functions:
defmodule GradualTyping do
# Public API: Add types
@spec create_user(map()) :: {:ok, User.t()} | {:error, Ecto.Changeset.t()}
def create_user(attrs) do
# Internal helper: No types yet
validate_attrs(attrs)
end
# Private: Types optional
defp validate_attrs(attrs) do
# Implementation
end
end
Incremental adoption:
- Add types to public API first
- Add types to modules with complex logic
- Add types to frequently changed modules
- Let Dialyzer guide you to remaining issues (see the sketch below)
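While migrating, it can help to suppress a known, accepted warning for one function instead of ignoring Dialyzer output wholesale; a small sketch using the @dialyzer module attribute (Legacy and legacy_helper/1 are hypothetical names):
defmodule Legacy do
  # Suppress Dialyzer warnings for this one function until it is cleaned up
  @dialyzer {:nowarn_function, legacy_helper: 1}

  def legacy_helper(arg) do
    # ...existing code that Dialyzer currently flags...
    arg
  end
end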
Pragmatic type specs:
# Too specific: hard to maintain as the signature evolves
@spec process(String.t(), integer(), boolean(), atom()) :: :ok
def process(str, num, flag, type) do
  # ...
end
# Too loose: loses type safety
@spec process(any(), any(), any(), any()) :: any()
def process(str, num, flag, type) do
  # ...
end
# Pragmatic: clear intent, practical constraints
@spec process(String.t(), pos_integer(), opts :: keyword()) :: :ok | {:error, term()}
def process(str, num, opts) do
  # ...
end
Best Practices
Do:
- ✅ Type public APIs and exported functions
- ✅ Use specific types (:ok | {:error, String.t()} vs any())
- ✅ Run Dialyzer in CI/CD pipeline
- ✅ Document complex types with @typedoc (see the sketch after these lists)
- ✅ Use opaque types for internal data structures
Don’t:
- ❌ Over-specify every private function (diminishing returns)
- ❌ Use any() everywhere (it defeats the purpose)
- ❌ Ignore Dialyzer warnings (fix them or suppress them explicitly)
- ❌ Fight the type system (Elixir is dynamically typed at core)
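As a small illustration of the @typedoc and opaque-type guidelines above (module and type names are hypothetical):
defmodule Inventory do
  @typedoc "A stock-keeping unit identifier, for example \"SKU-1234\"."
  @opaque sku :: String.t()

  @typedoc "Quantity on hand; never negative."
  @type quantity :: non_neg_integer()

  @spec adjust(sku(), quantity()) :: :ok
  def adjust(_sku, _quantity), do: :ok
end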
Section 2: BEAM VM Internals
Understanding the BEAM VM unlocks Elixir’s concurrency and fault tolerance.
Process Model
Every Elixir process runs on the BEAM VM:
pid = spawn(fn ->
receive do
{:hello, sender} -> send(sender, :world)
end
end)
send(pid, {:hello, self()})
receive do
:world -> IO.puts("Received world!")
end
Process Characteristics:
- Lightweight: 2-3 KB per process (can spawn millions)
- Isolated: No shared memory, communicate via messages
- Garbage collected independently
- Preemptively scheduled by BEAM scheduler
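To get a feel for how lightweight processes are, you can spawn a large number of idle processes and estimate the per-process memory cost; a rough IEx sketch (exact numbers vary by machine and OTP release):
before = :erlang.memory(:processes)
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end
after_spawn = :erlang.memory(:processes)
IO.puts("Spawned #{length(pids)} processes")
IO.puts("~#{div(after_spawn - before, length(pids))} bytes per process")
# Let them all exit
Enum.each(pids, &send(&1, :stop))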
Scheduler Model
BEAM uses M:N scheduling - multiple processes on multiple schedulers:
IO.inspect(System.schedulers_online()) # 8 on 8-core machine
pid = spawn(fn -> :timer.sleep(5000) end)
Process.info(pid, :current_stacktrace)
Process.info(pid, :reductions) # Work done by process
How Scheduling Works:
- Each scheduler has a run queue of processes
- Scheduler runs a process for a bounded number of reductions (roughly one per function call; a few thousand per time slice, depending on the OTP release)
- Process yields (I/O, receive, explicit yield) or preempted
- Scheduler picks next process from queue
- Work stealing balances load across schedulers
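You can also ask the VM for scheduler utilization directly; a rough IEx sketch using scheduler wall-time statistics (assumes an otherwise idle node):
# Wall-time accounting is off by default
:erlang.system_flag(:scheduler_wall_time, true)
before = Enum.sort(:erlang.statistics(:scheduler_wall_time))
Enum.each(1..4, fn _ -> spawn(fn -> Enum.reduce(1..10_000_000, 0, &+/2) end) end)
Process.sleep(1_000)
after_work = Enum.sort(:erlang.statistics(:scheduler_wall_time))
# Each tuple is {scheduler_id, active_time, total_time}
for {{id, a0, t0}, {id, a1, t1}} <- Enum.zip(before, after_work) do
  {id, Float.round((a1 - a0) / (t1 - t0), 3)}
end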
Visualizing Schedulers:
CPU Cores: 1 2 3 4
Schedulers: Sched1 Sched2 Sched3 Sched4
Run Queues: [P1] [P4] [P7] [P10]
[P2] [P5] [P8] [P11]
[P3] [P6] [P9] [P12]
Process Heap and Garbage Collection
Each process has its own heap:
pid = spawn(fn ->
# Allocate large list
list = Enum.to_list(1..1_000_000)
:timer.sleep(10_000)
end)
Process.info(pid, :memory) # Bytes used
Process.info(pid, :heap_size) # Heap size in words
Process.info(pid, :total_heap_size) # Total heap (including old heap)
Generational GC:
- Young heap: New allocations
- Old heap: Data that survived GC
- GC runs independently per process (no stop-the-world)
- Process blocks only itself during GC
# Force a garbage collection for a specific process (rarely needed in practice)
:erlang.garbage_collect(pid)
Message Passing Performance
Messages are copied between processes:
defmodule MessageBench do
def send_small_message(receiver, n) do
Enum.each(1..n, fn i ->
send(receiver, i)
end)
end
def send_large_message(receiver, n) do
large_data = Enum.to_list(1..10_000)
Enum.each(1..n, fn _i ->
send(receiver, large_data)
end)
end
end
Optimization: Use ETS for shared data instead of sending large messages (lookups still copy the data into the caller's heap, but there is no single-process bottleneck):
table = :ets.new(:shared_data, [:set, :public, :named_table])
:ets.insert(table, {:data, Enum.to_list(1..10_000)})
[{:data, data}] = :ets.lookup(table, :data)
Process Links and Monitors
Links: Bidirectional, crashes propagate:
# Trap exits before spawning, so the crash arrives as a message instead of killing us
Process.flag(:trap_exit, true)
child = spawn_link(fn ->
  :timer.sleep(1000)
  raise "Child crashed!"
end)
receive do
  {:EXIT, ^child, reason} ->
    IO.puts("Child exited: #{inspect(reason)}")
end
Monitors: Unidirectional, notification only:
child = spawn(fn ->
:timer.sleep(1000)
exit(:normal)
end)
ref = Process.monitor(child)
receive do
{:DOWN, ^ref, :process, ^child, reason} ->
IO.puts("Child went down: #{inspect(reason)}")
end
Section 3: Distributed Elixir
Connect multiple BEAM nodes for distributed systems.
Starting Nodes
Start named nodes:
# Terminal 1
iex --name node1@127.0.0.1 --cookie secret
# Terminal 2
iex --name node2@127.0.0.1 --cookie secret
Cookie: Shared secret for authentication (must match to connect).
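From a running node you can also inspect or change the cookie at runtime; a small sketch (the values shown are examples):
Node.self()        # :"node1@127.0.0.1"
Node.get_cookie()  # :secret
# Change the local node's cookie at runtime (must still match on both sides)
Node.set_cookie(:another_secret)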
Connecting Nodes
Node.connect(:"node2@127.0.0.1")
Node.list() # [:"node2@127.0.0.1"]
Node.list([:this, :visible]) # [:"node1@127.0.0.1", :"node2@127.0.0.1"]
Remote Process Spawning
Spawn processes on remote nodes:
pid = Node.spawn(:"node2@127.0.0.1", fn ->
IO.puts("Running on #{inspect(Node.self())}")
:timer.sleep(5000)
end)
Distributed Message Passing
Send messages across nodes:
defmodule RemoteServer do
def start do
pid = spawn(fn -> loop() end)
Process.register(pid, :remote_server)
pid
end
defp loop do
receive do
{:request, sender, data} ->
send(sender, {:response, data * 2})
loop()
end
end
end
# On node2
RemoteServer.start()
# On node1: address a registered process on another node with {name, node}
send({:remote_server, :"node2@127.0.0.1"}, {:request, self(), 42})
receive do
  {:response, result} -> IO.puts("Got: #{result}") # 84
end
Distributed GenServer
defmodule DistributedCounter do
use GenServer
# Start on specific node
def start_link(node, initial_value) do
Node.spawn_link(node, fn ->
GenServer.start_link(__MODULE__, initial_value, name: __MODULE__)
end)
end
# Call across nodes
def increment(node) do
GenServer.call({__MODULE__, node}, :increment)
end
def get_value(node) do
GenServer.call({__MODULE__, node}, :get_value)
end
@impl true
def init(initial_value) do
{:ok, initial_value}
end
@impl true
def handle_call(:increment, _from, state) do
new_state = state + 1
{:reply, new_state, new_state}
end
@impl true
def handle_call(:get_value, _from, state) do
{:reply, state, state}
end
end
DistributedCounter.start_link(:"node2@127.0.0.1", 0)
DistributedCounter.increment(:"node2@127.0.0.1") # 1
DistributedCounter.get_value(:"node2@127.0.0.1") # 1
Global Registry
Register processes globally across cluster:
defmodule GlobalService do
def start_link do
pid = spawn(fn -> loop() end)
:global.register_name(:my_service, pid)
pid
end
defp loop do
receive do
{:ping, sender} ->
send(sender, :pong)
loop()
end
end
end
GlobalService.start_link()
pid = :global.whereis_name(:my_service)
send(pid, {:ping, self()})
receive do
:pong -> IO.puts("Global service responded!")
end
Distributed PubSub (Phoenix.PubSub)
# Add {:phoenix_pubsub, "~> 2.1"} to your deps and start the PubSub in your supervision tree
children = [
  {Phoenix.PubSub, name: MyApp.PubSub}
]
# Subscribers on any connected node receive broadcasts
Phoenix.PubSub.subscribe(MyApp.PubSub, "events")
Phoenix.PubSub.broadcast(MyApp.PubSub, "events", {:event, "data"})
receive do
  {:event, data} -> IO.puts("Received: #{data}")
end
Network Partitions and Split Brain
Handle network partitions:
defmodule ClusterMonitor do
use GenServer
def start_link(opts) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
@impl true
def init(_opts) do
:net_kernel.monitor_nodes(true)
{:ok, %{nodes: Node.list()}}
end
@impl true
def handle_info({:nodeup, node}, state) do
IO.puts("Node connected: #{inspect(node)}")
{:noreply, %{state | nodes: [node | state.nodes]}}
end
@impl true
def handle_info({:nodedown, node}, state) do
IO.puts("Node disconnected: #{inspect(node)}")
new_nodes = List.delete(state.nodes, node)
{:noreply, %{state | nodes: new_nodes}}
end
end
Best Practices:
- Use :global for small clusters (< 50 nodes)
- Use pg (process groups) for larger clusters (see the sketch after this list)
- Implement consensus algorithms for critical distributed state
- Handle network partitions explicitly
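A minimal sketch of pg for cluster-wide process groups (OTP 23+); the group name and message are made up for illustration:
# Usually started once under your supervision tree; the default scope is :pg
:pg.start_link()
# Any process on any connected node can join a group
:pg.join(:notifications, self())
# Fan a message out to every member across the cluster
for pid <- :pg.get_members(:notifications) do
  send(pid, {:notify, "cache_invalidated"})
end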
Section 4: Metaprogramming and Macros
Macros enable compile-time code generation.
Understanding the AST
Elixir code is represented as Abstract Syntax Tree:
quote do
  1 + 2
end
# => {:+, [context: Elixir, import: Kernel], [1, 2]}
quote do
  if true, do: "yes", else: "no"
end
# => {:if, [context: Elixir, import: Kernel], [true, [do: "yes", else: "no"]]}
Basic Macros
defmodule MyMacros do
defmacro say_hello(name) do
quote do
IO.puts("Hello, #{unquote(name)}!")
end
end
end
defmodule Test do
require MyMacros
def greet do
MyMacros.say_hello("World") # Macro expanded at compile time
end
end
Test.greet() # "Hello, World!"
Quote and Unquote
- quote: Convert code to AST
- unquote: Inject values into AST
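A quick IEx experiment shows the difference: unquote evaluates its argument and splices the result into the AST, which you can then evaluate with Code.eval_quoted/1 (a small sketch):
ast =
  quote do
    unquote(1 + 2) * 10
  end
# => {:*, [context: Elixir, import: Kernel], [3, 10]}
Code.eval_quoted(ast)
# => {30, []}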
defmodule MathMacros do
defmacro multiply(a, b) do
quote do
unquote(a) * unquote(b)
end
end
# With bind_quoted (prevents multiple evaluation)
defmacro safe_multiply(a, b) do
quote bind_quoted: [a: a, b: b] do
a * b
end
end
end
require MathMacros
MathMacros.multiply(2, 3) # 6
defmodule Unsafe do
defmacro double(x) do
quote do
unquote(x) + unquote(x)
end
end
end
require Unsafe
Unsafe.double(IO.inspect(3)) # Prints "3" twice (argument evaluated twice), returns 6 ❌
defmodule Safe do
defmacro double(x) do
quote bind_quoted: [x: x] do
x + x
end
end
end
require Safe
Safe.double(IO.inspect(3)) # Prints "3" once, returns 6 ✅
Pattern Matching in Macros
defmodule ControlFlow do
defmacro unless(condition, do: block) do
quote do
if !unquote(condition), do: unquote(block)
end
end
defmacro when_ok(expr, do: block) do
quote do
case unquote(expr) do
{:ok, _result} -> unquote(block)
error -> error
end
end
end
end
require ControlFlow
ControlFlow.unless 1 == 2 do
IO.puts("Math works!")
end
ControlFlow.when_ok {:ok, 42} do
IO.puts("Success!")
end
Building DSLs with Macros
defmodule Router do
defmacro __using__(_opts) do
quote do
import Router
Module.register_attribute(__MODULE__, :routes, accumulate: true)
@before_compile Router
end
end
defmacro __before_compile__(_env) do
quote do
def routes do
@routes |> Enum.reverse()
end
end
end
defmacro get(path, handler) do
quote do
@routes {:get, unquote(path), unquote(handler)}
end
end
defmacro post(path, handler) do
quote do
@routes {:post, unquote(path), unquote(handler)}
end
end
end
defmodule MyRouter do
use Router
get "/", :index
get "/about", :about
post "/users", :create_user
end
MyRouter.routes()
# => [{:get, "/", :index}, {:get, "/about", :about}, {:post, "/users", :create_user}]
Compile-Time Configuration
defmodule Config do
@env Mix.env()
@api_url if @env == :prod, do: "https://api.example.com", else: "http://localhost:4000"
def api_url, do: @api_url
end
Config.api_url() # Determined at compile time
Macro Hygiene
Macros don’t leak variables:
defmodule Hygienic do
defmacro set_x do
quote do
x = 10
end
end
end
require Hygienic
Hygienic.set_x()
x # ❌ Undefined variable (hygiene prevents leakage)
defmodule NonHygienic do
defmacro set_x do
quote do
var!(x) = 10
end
end
end
require NonHygienic
NonHygienic.set_x()
x # 10 ✅ (var! breaks hygiene)
Best Practices:
- Use macros sparingly (functions are easier to understand)
- Prefer functions over macros unless compile-time generation needed
- Use bind_quoted to prevent multiple evaluation
- Document macro behavior clearly
- Test macro expansion with Macro.expand/2 (see the sketch below)
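For example, a hedged way to check what the say_hello macro from earlier expands to (run in IEx):
require MyMacros
ast = quote do: MyMacros.say_hello("World")
ast
|> Macro.expand_once(__ENV__)
|> Macro.to_string()
|> IO.puts()
# Prints something close to: IO.puts("Hello, #{"World"}!")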
Section 5: Performance Optimization
Optimize Elixir applications for production workloads.
Profiling with :fprof
Profile function calls:
defmodule Fibonacci do
def fib(0), do: 0
def fib(1), do: 1
def fib(n), do: fib(n - 1) + fib(n - 2)
end
:fprof.trace([:start])
Fibonacci.fib(20)
:fprof.trace([:stop])
:fprof.profile()
:fprof.analyse(callers: true, sort: :acc, totals: true)
Output shows time spent in each function.
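For one-off measurements, :fprof.apply/2 wraps the trace step around a single call, which is often more convenient than starting and stopping tracing by hand:
:fprof.apply(&Fibonacci.fib/1, [20])
:fprof.profile()
:fprof.analyse()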
Profiling with :eprof
Time-based profiling:
:eprof.start()
:eprof.start_profiling([self()])
result = expensive_function()
:eprof.stop_profiling()
:eprof.analyze(:total)
Benchmarking with Benchee
defp deps do
[
{:benchee, "~> 1.0", only: :dev}
]
end
Benchee.run(%{
"Enum.map" => fn input -> Enum.map(input, &(&1 * 2)) end,
"for comprehension" => fn input -> for x <- input, do: x * 2 end,
"Stream.map" => fn input -> input |> Stream.map(&(&1 * 2)) |> Enum.to_list() end
}, inputs: %{
"Small" => Enum.to_list(1..100),
"Medium" => Enum.to_list(1..10_000),
"Large" => Enum.to_list(1..1_000_000)
})
Memory Profiling
Track memory usage:
Process.info(self(), :memory)
# :recon_alloc ships with the :recon library (add {:recon, "~> 2.5"} to your deps)
:recon_alloc.memory(:allocated)
:recon_alloc.memory(:used)
Optimization Techniques
1. Tail Call Optimization:
# Body-recursive: stack usage grows with the length of the list
def sum_slow([]), do: 0
def sum_slow([h | t]), do: h + sum_slow(t)
# Tail-recursive: the recursive call is the last operation, so the stack stays flat
def sum_fast(list), do: sum_fast(list, 0)
defp sum_fast([], acc), do: acc
defp sum_fast([h | t], acc), do: sum_fast(t, acc + h)
2. Lazy Evaluation with Streams:
# Eager: each Enum call materializes a full intermediate list
result =
  1..1_000_000
  |> Enum.map(&(&1 * 2))
  |> Enum.filter(&(rem(&1, 3) == 0))
  |> Enum.take(10)
# Lazy: Stream composes the work and only does what Enum.take/2 demands
result =
  1..1_000_000
  |> Stream.map(&(&1 * 2))
  |> Stream.filter(&(rem(&1, 3) == 0))
  |> Enum.take(10)
3. ETS for Shared State:
# Every read is serialized through a single GenServer process
defmodule CacheSlow do
use GenServer
def get(key) do
GenServer.call(__MODULE__, {:get, key})
end
end
# Reads go straight to the ETS table and run concurrently
defmodule CacheFast do
def get(key) do
case :ets.lookup(:cache, key) do
[{^key, value}] -> value
[] -> nil
end
end
end
4. Avoid String Concatenation in Loops:
# Repeated <> concatenation copies binaries as the accumulator grows
def build_slow(n) do
  Enum.reduce(1..n, "", fn i, acc ->
    acc <> Integer.to_string(i)
  end)
end
# Build iodata and convert to a binary once at the end
def build_fast(n) do
  1..n
  |> Enum.map(&Integer.to_string/1)
  |> IO.iodata_to_binary()
end
5. Pattern Match in Function Head:
# Matching inside the body
def process(data) do
  case data do
    {:ok, value} -> value * 2
    {:error, _} -> 0
  end
end
# Clearer: match directly in the function heads
def process({:ok, value}), do: value * 2
def process({:error, _}), do: 0
Compiler Optimizations (Elixir 1.19+)
Elixir 1.19 made compilation of large projects up to 4x faster:
{micros, _result} = :timer.tc(fn ->
  Code.compile_file("lib/my_app.ex")
end)
IO.puts("Compiled in #{micros / 1000}ms")
Optimizations in 1.19:
- Parallel compilation of modules
- Incremental compilation improvements
- Faster type checking with set-theoretic types
- Reduced memory usage during compilation
Section 6: Umbrella Projects
Manage monorepos with umbrella projects.
Creating Umbrella Project
mix new my_app --umbrella
cd my_app
cd apps
mix new core
mix new web --sup
mix new workers --sup
Structure:
my_app/
├── apps/
│ ├── core/ # Business logic
│ ├── web/ # Phoenix app
│ └── workers/ # Background jobs
├── config/
└── mix.exs
Umbrella Configuration
# mix.exs (umbrella root)
defmodule MyApp.MixProject do
use Mix.Project
def project do
[
apps_path: "apps",
version: "0.1.0",
start_permanent: Mix.env() == :prod,
deps: deps()
]
end
defp deps do
[]
end
end
# apps/web/mix.exs
defmodule Web.MixProject do
use Mix.Project
def project do
[
app: :web,
version: "0.1.0",
build_path: "../../_build",
config_path: "../../config/config.exs",
deps_path: "../../deps",
lockfile: "../../mix.lock",
elixir: "~> 1.14",
start_permanent: Mix.env() == :prod,
deps: deps()
]
end
defp deps do
[
{:core, in_umbrella: true}, # Depend on sibling app
{:phoenix, "~> 1.7"}
]
end
end
Working with Umbrella Apps
# Compile every app in the umbrella
mix compile
# Run the test suites of every app
mix test
# Run a task inside a single app
mix cmd --app core mix test
# Start IEx with the whole umbrella loaded
iex -S mix
Shared Dependencies
All apps in an umbrella share one mix.lock and _build directory, which keeps dependency versions consistent, but each child app must declare the dependencies it actually uses in its own mix.exs; dependencies listed in the umbrella root are not inherited by the child apps.
# apps/web/mix.exs (and any other app that needs it)
defp deps do
  [
    {:jason, "~> 1.4"} # Declared per app; the shared lockfile keeps one version
  ]
end
# apps/core/mix.exs
defp deps do
  [
    {:ecto, "~> 3.11"} # Core-specific
  ]
end
Section 7: Production Deployment
Deploy Elixir applications to production.
Mix Releases
Build production release:
def project do
[
# ...
releases: [
my_app: [
include_executables_for: [:unix],
applications: [runtime_tools: :permanent]
]
]
]
end
# Build the release
MIX_ENV=prod mix release
# Run it
_build/prod/rel/my_app/bin/my_app start
_build/prod/rel/my_app/bin/my_app daemon # Background
_build/prod/rel/my_app/bin/my_app stop
Release Configuration
# config/runtime.exs
import Config
if config_env() == :prod do
database_url = System.fetch_env!("DATABASE_URL")
secret_key_base = System.fetch_env!("SECRET_KEY_BASE")
config :my_app, MyApp.Repo,
url: database_url,
pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")
config :my_app, MyAppWeb.Endpoint,
http: [port: String.to_integer(System.get_env("PORT") || "4000")],
secret_key_base: secret_key_base,
server: true
end
Docker Deployment
FROM hexpm/elixir:1.19.4-erlang-28.2.5-alpine-3.21.3 AS build
RUN apk add --no-cache build-base git
WORKDIR /app
RUN mix local.hex --force && \
mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY . .
RUN MIX_ENV=prod mix compile
RUN MIX_ENV=prod mix release
FROM alpine:3.21.3
RUN apk add --no-cache libstdc++ openssl ncurses-libs
WORKDIR /app
COPY --from=build /app/_build/prod/rel/my_app ./
CMD ["bin/my_app", "start"]Environment Variables
export DATABASE_URL="ecto://user:pass@localhost/db"
export SECRET_KEY_BASE="long-secret-key"
export PORT="4000"
export POOL_SIZE="10"
source .env.prod
_build/prod/rel/my_app/bin/my_app start
Health Checks and Graceful Shutdown
defmodule MyAppWeb.HealthController do
use MyAppWeb, :controller
def check(conn, _params) do
json(conn, %{status: "ok", timestamp: DateTime.utc_now()})
end
end
config :my_app, MyAppWeb.Endpoint,
shutdown_timeout: 30_000 # 30 seconds for graceful shutdown
Section 8: Observability
Monitor production Elixir applications.
Logger
require Logger
Logger.debug("Detailed debug info")
Logger.info("User logged in: #{user_id}")
Logger.warning("Rate limit approaching")
Logger.error("Database connection failed")
Logger.info("User action",
user_id: user_id,
action: "login",
ip: ip_address
)
Telemetry
:telemetry.execute(
[:my_app, :payment, :processed],
%{amount: 100},
%{user_id: 123}
)
:telemetry.attach(
"payment-logger",
[:my_app, :payment, :processed],
fn _event, measurements, metadata, _config ->
Logger.info("Payment processed",
amount: measurements.amount,
user_id: metadata.user_id
)
end,
nil
)
Observer
# Requires the :observer and :wx applications (GUI)
:observer.start()
Observer Features:
- System overview (memory, CPU, processes)
- Process list and details
- Application tree
- ETS table viewer
- Memory allocation
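When no GUI is available (for example inside a release container), a few built-in calls give a quick overview from a remote IEx session; a small sketch:
:erlang.memory()                      # Memory usage by category, in bytes
:erlang.system_info(:process_count)   # Number of live processes
:erlang.statistics(:run_queue)        # Processes waiting for a scheduler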
Recon for Production
# Top 10 processes by memory
:recon.proc_count(:memory, 10)
# Top 10 processes by message queue length
:recon.proc_count(:message_queue_len, 10)
# Detailed information about one process
:recon.info(pid)
# Trace at most 10 calls to a function, safely rate-limited
:recon_trace.calls({Module, :function, :_}, 10)
Related Content
Previous Tutorials:
- Intermediate Tutorial - OTP and Phoenix
- Beginner Tutorial - Fundamentals
How-To Guides:
- Elixir Cookbook - Expert recipes
- How to Build Distributed Systems - Distribution patterns
- How to Write Macros - Metaprogramming guide
- How to Optimize Performance - Performance tuning
- How to Deploy Elixir Apps - Deployment strategies
- How to Monitor Production - Production monitoring
Explanations:
- Best Practices - Expert standards
- Anti-Patterns - Advanced pitfalls
Reference:
- Elixir Cheat Sheet - Complete reference
- Elixir Glossary - Advanced terms
Next Steps
Master These Concepts:
- BEAM VM: Understand process model and scheduler
- Distributed Elixir: Build multi-node systems
- Metaprogramming: Write macros for DSLs
- Performance: Profile and optimize critical paths
Continue Learning:
- How-To Guides - Practical distributed patterns
- Cookbook - Advanced recipes
- Best Practices - Production patterns
Advanced Projects:
- Distributed Cache: Multi-node cache with consistency
- Job Queue: Distributed task processing with GenStage
- Monitoring System: Custom telemetry and metrics
- DSL: Build domain-specific language with macros
Resources:
You now have master-level Elixir knowledge for distributed, high-performance systems!