LiteFS Config Reference
Most of the configuration options for LiteFS are set via a litefs.yml config file. You can find a fully-commented version of litefs.yml in the LiteFS repository.
Config file search path
LiteFS will search the following paths to find the config file:
- The current directory: ./litefs.yml
- The home directory: ~/litefs.yml
- The system configuration directory: /etc/litefs.yml
You can also explicitly set the config path using a command line flag:
litefs mount -config /path/to/litefs.yml
litefs run -config /path/to/litefs.yml
Environment Variables
LiteFS can evaluate environment variables within the configuration file.
Variables can be referenced using either the $MY_VAR syntax or the ${MY_VAR} syntax.
Basic string equality & inequality expressions can also be evaluated using either single or double quotes:
${MY_VAR == "xyz"}
${MY_VAR != "xyz"}
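For example, expansion works in any string field, and an equality expression can produce a boolean value. A hypothetical snippet, assuming PORT and NODE_ROLE are set in the node's environment:
# Hypothetical example: PORT and NODE_ROLE are assumed to be
# set in this node's environment.
exec: "myapp -addr :${PORT}"

lease:
  # Evaluates to true only when NODE_ROLE equals "primary".
  candidate: ${NODE_ROLE == "primary"}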
Environment variable evaluation can be disabled by passing the -no-expand-env flag on the command line.
FUSE directory
LiteFS uses a FUSE directory mount to intercept Linux file system calls so it can track when SQLite transactions start and end and replicate them. The recommended path is /litefs, and it is the directory your application should use when interacting with the database.
# The FUSE section handles settings on the FUSE file system. FUSE
# provides a layer for intercepting SQLite transactions on the
# primary node so they can be shipped to replica nodes
# transparently.
fuse:
  # Required. This is the mount directory that applications will
  # use to access their SQLite databases.
  dir: "/litefs"

  # Enable mounting of the file system by non-root users.
  # You must enable the 'user_allow_other' option in /etc/fuse.conf as well.
  allow-other: false

  # The debug flag enables debug logging of all FUSE API calls.
  # This will produce a lot of logging. Not for general use.
  debug: false
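Your application should then open its databases through the mount directory rather than the internal data directory. For example, with the sqlite3 command-line shell (my.db is a hypothetical database name):
sqlite3 /litefs/my.db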
Internal data directory
LiteFS acts as a passthrough file system, so all of your database files still need to be stored internally on a regular file system. This internal data directory should live on a persistent volume so you don't lose your data between restarts.
LiteFS also stores transaction files (called LTX files) in this directory. These files are periodically removed through an internal process called retention enforcement.
# The data section specifies where internal LiteFS data is stored
# and how long to retain the transaction files.
#
# Transaction files are used to ship changes to replica nodes so
# they should persist long enough for replicas to retrieve them,
# even in the face of a short network interruption or a redeploy.
# Under high load, these files can grow large so it's not advised
# to extend retention too long.
data:
  # Path to internal data storage.
  dir: "/var/lib/litefs"

  # If true, LTX files are compressed using LZ4. Enabled by default.
  compress: true

  # Duration to keep LTX files. The latest LTX file is always kept.
  retention: "10m"

  # Frequency with which to check for LTX files to delete.
  retention-monitor-interval: "1m"
Supervisor / Exec
LiteFS can run as a supervisor to another process, typically the command that runs your application. This is useful because it lets LiteFS mount itself and connect to the cluster before starting your application. LiteFS forwards signals to your application and automatically shuts down when your application exits.
# The exec field specifies a command to run as a subprocess of
# LiteFS. This command will be executed after LiteFS either
# becomes primary or is connected to the primary node. LiteFS
# will forward signals to the subprocess and LiteFS will
# automatically shut itself down when the subprocess stops.
#
# This can also be specified after a double-dash (--) on the
# command line invocation of the 'litefs mount' command.
exec: "myapp -addr :8080"
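As the comment above notes, the same command can also be given after a double dash when invoking litefs mount directly:
litefs mount -- myapp -addr :8080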
HTTP API Server
LiteFS nodes communicate with each other over an HTTP API. This can be configured in the http section of the config.
The default port for LiteFS is 20202. This port MUST NOT be publicly accessible.
# This section defines settings for the LiteFS HTTP API server.
# This API server is how nodes communicate with each other.
http:
  # Specifies the bind address of the HTTP API server.
  addr: ":20202"
HTTP Proxy Server
LiteFS includes a built-in HTTP proxy server that handles data consistency and request forwarding for most web applications. It is disabled by default but can be enabled by specifying a bind address & target.
# This section defines settings for the optional HTTP proxy.
# This proxy can handle primary forwarding & replica consistency
# for applications that use a single SQLite database.
proxy:
  # Specifies the bind address of the proxy server.
  addr: ":8080"

  # The hostport of the target application. If blank, the proxy is disabled.
  target: "localhost:8081"

  # The name of the database used for TXID tracking.
  db: "my.db"

  # If true, enables verbose logging of requests by the proxy.
  debug: false

  # List of paths that are ignored by the proxy. The asterisk is
  # the only available wildcard. These requests are passed
  # through to the target as-is.
  passthrough: ["/debug/*", "*.png"]

  # The duration to hold write requests when no primary is available.
  # This is helpful when a primary switches over and there is a small
  # window where the new primary is unknown to the replica.
  primary-redirect-timeout: "5s"
Lease Management
Leadership is managed using an external lease system, either by using Consul or by designating a single, static primary node.
# The lease section defines how LiteFS creates a cluster and
# implements leader election. For dynamic clusters, use the
# "consul" type. This allows the primary to change automatically
# when the current primary goes down. For a simpler setup, use
# "static", which assigns a single node to be the primary and
# does not fail over.
lease:
  # Required. Must be either "consul" or "static".
  type: "consul"

  # Required. The URL for this node's LiteFS API.
  # Should match the HTTP port.
  advertise-url: "http://$HOSTNAME:20202"

  # Sets the hostname that other nodes will use to reference this
  # node. Automatically assigned based on hostname(1) if not set.
  hostname: ""

  # Specifies whether the node can become the primary. If using
  # "static" leasing, this should be set to true on the primary
  # and false on the replicas.
  candidate: true

  # Filtered list of databases to replicate from the primary to this node.
  # If no databases are specified, all databases are replicated. You may
  # only filter databases on non-candidate nodes.
  databases: ["my.db", "myother.db"]
Consul Leasing
Consul is the recommended lease backend as it allows leadership to change when the primary node stops. The url and advertise-url fields are required so that the node can connect to Consul and broadcast its LiteFS API URL.
lease:
  # A Consul server provides leader election and ensures that the
  # responsibility of the primary node can be moved in the event
  # of a deployment or a failure.
  consul:
    # Required. The base URL of the Consul server.
    url: "http://myhost:8500"

    # Required. The key used for obtaining a lease by the primary.
    # This must be unique for each cluster of LiteFS servers.
    key: ""

    # Length of time before a lease expires. The primary will
    # automatically renew the lease while it is alive, however,
    # if it fails to renew in time then a new primary may be
    # elected after the TTL. This only occurs for unexpected loss
    # of the leader as normal operation will allow the leader to
    # hand off the lease to another replica without downtime.
    #
    # Consul does not allow a TTL of less than 10 seconds.
    ttl: "10s"

    # Length of time after the lease expires before a candidate
    # can become leader. This buffer is intended to prevent
    # overlap in leadership due to clock skew or in-flight calls.
    lock-delay: "1s"
Static Leasing
If you do not wish to run a separate Consul instance and you can tolerate downtime of your primary node, you can run LiteFS with a static lease. A static lease means that only a single, fixed node will ever be the primary.
There is no additional configuration section for static leasing. Simply set the lease.candidate config field to true on the primary and false on the replicas. You will need to set lease.hostname and lease.advertise-url on the primary as well.
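As a minimal sketch, the lease section on a static primary might look like the following; the node1 hostname and URL are placeholder values:
lease:
  type: "static"
  hostname: "node1"
  advertise-url: "http://node1:20202"
  # On each replica, set this to false instead.
  candidate: true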
Other options
Exit on error
# If true, LiteFS will exit when there is a validation issue on
# startup. Leaving this false can be useful when debugging, as it
# avoids constantly restarting the node on ephemeral hosting.
exit-on-error: false
Skip sync
# If true, then LiteFS will not wait until the node becomes the
# primary or connects to the primary before starting the subprocess.
skip-sync: false
Trace Log
The trace log provides a very low-level log of events inside LiteFS. It is typically only used for debugging issues and is turned off by default. This log can produce a lot of disk IO when enabled.
# The tracing section enables a rolling, on-disk tracing log.
# This records every operation to the database so it can be
# verbose and it can degrade performance. This is for debugging
# only and should not typically be enabled in production.
tracing:
  # Toggles tracing if 'path' is also set. Defaults to true.
  enabled: true

  # Output path on disk.
  path: "/var/log/litefs/trace.log"

  # Maximum size of a single trace log before rolling.
  # Specified in megabytes.
  max-size: 64

  # Maximum number of trace logs to retain.
  max-count: 10

  # If true, historical logs will be compressed using gzip.
  compress: true