Configuration
This guide is the reference for configuring Lucid. You will learn how to:
- Define the default connection and register named connections
- Configure each supported database driver
- Use connection strings and SSL for hosted databases
- Query multiple databases and use read/write replicas
- Tune connection pooling for production workloads
- Configure migrations, seeders, and schema generation
- Protect tables from db:wipe and enable debug output
Overview
Lucid stores its database configuration inside config/database.ts. The file exports the result of defineConfig, which describes the default connection and every named connection your application can use.
Connections are registered when the application boots, but Lucid opens them lazily when you execute the first query. This keeps boot time low and lets applications define connections that are only used by background jobs, reports, or specific models. AdonisJS also closes every registered connection automatically during application shutdown, so graceful cleanup is handled for you.
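In a long-lived HTTP server you never need to manage this lifecycle yourself, but standalone scripts can close connections explicitly. A minimal sketch, assuming the db service and its connection manager (db.manager.closeAll) as exposed by Lucid:

```ts
import db from '@adonisjs/lucid/services/db'

// The underlying connection is opened lazily on the first query.
const users = await db.from('users').select('*')

// In a standalone script, close every registered connection yourself.
// (A normal AdonisJS application does this automatically at shutdown.)
await db.manager.closeAll()
```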
The config file
A Lucid config has two required top-level properties. connection is the name of the default connection, and connections is the map of named connection configs.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'postgres',
  connections: {
    postgres: {
      client: 'pg',
      connection: {
        host: env.get('DB_HOST'),
        port: env.get('DB_PORT'),
        user: env.get('DB_USER'),
        password: env.get('DB_PASSWORD'),
        database: env.get('DB_DATABASE'),
      },
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
The top-level config accepts the following options.
- connection: The name of the default connection. Lucid uses this connection whenever you do not select one explicitly. The value must match one of the keys under connections.
- connections: A map of named connection configs. Each entry describes a client, a connection object or connection string, and optional shared settings such as migrations, seeders, pool, and schemaGeneration.
- prettyPrintDebugQueries: When set to true, Lucid attaches a listener to the db:query event that pretty-prints every executed SQL statement to the console. See Debug output for the details.
Drivers
Each connection must specify a client value matching a supported database driver. Lucid installs the required driver package when you run the configure command with --db, and you can also install it manually.
| Database | Client | Package |
|---|---|---|
| SQLite | better-sqlite3 | better-sqlite3 |
| SQLite | sqlite3 | sqlite3 |
| LibSQL | libsql | @libsql/sqlite3 |
| MySQL | mysql2 | mysql2 |
| PostgreSQL | pg | pg |
| MSSQL | mssql | tedious |
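As a sketch of the setup flow described above (the exact configure invocation may differ across AdonisJS versions, so treat the first command as an assumption):

```sh
# Scaffold the Lucid config and install the driver in one step
node ace configure @adonisjs/lucid --db=postgres

# Or install a driver package manually at any time
npm install pg
```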
For connections that use a network database (MySQL, PostgreSQL, MSSQL), the connection object accepts a common set of fields: host, port, user, password, and database. Each driver adds its own options on top of these, documented in the sections below.
SQLite
Use SQLite for local development, tests, or lightweight production workloads that do not require a separate database server. SQLite stores data in a single file on disk.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'sqlite',
  connections: {
    sqlite: {
      client: 'better-sqlite3',
      connection: {
        filename: env.get('DB_DATABASE', 'tmp/db.sqlite3'),
      },
      useNullAsDefault: true,
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
The connection object accepts the following fields.
- filename: Path to the SQLite database file. Lucid creates the file automatically the first time a query runs, as long as the parent directory exists. The AdonisJS starter kits ship with a tmp/ directory, which new projects use by default.
- flags: Driver-specific array of flags. Refer to the sqlite3 or better-sqlite3 driver documentation for supported values.
- mode: Optional driver mode, such as read-only. Refer to the driver documentation for supported values.
SQLite does not support read/write replicas. The replicas property is not available on SQLite connections.
LibSQL
LibSQL is a drop-in replacement for SQLite, often used with Turso for edge deployments. The config options are the same as SQLite.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'libsql',
  connections: {
    libsql: {
      client: 'libsql',
      connection: {
        filename: env.get('DB_DATABASE', 'tmp/db.sqlite3'),
      },
      useNullAsDefault: true,
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
As with SQLite, LibSQL connections do not support read/write replicas through Lucid's replicas property.
MySQL
Use the mysql2 client for MySQL, MariaDB, or MySQL-compatible services.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'mysql',
  connections: {
    mysql: {
      client: 'mysql2',
      connection: {
        host: env.get('DB_HOST'),
        port: env.get('DB_PORT'),
        user: env.get('DB_USER'),
        password: env.get('DB_PASSWORD'),
        database: env.get('DB_DATABASE'),
      },
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
The connection object accepts the common host, port, user, password, and database fields, plus the following driver-specific options.
- socketPath: Path to a Unix socket for local MySQL connections. When provided, the driver ignores host and port.
- charset: Character set used for the connection. Defaults to utf8mb4_unicode_ci in modern MySQL setups.
- timezone: Timezone the driver uses when reading and writing DATETIME and TIMESTAMP columns. Set to 'Z' to force UTC, which is the safest default for most applications.
- ssl: SSL configuration for connections that require encrypted transport. See SSL and production notes for the common shapes.
- dateStrings: When set to true, the driver returns date and datetime columns as strings instead of JavaScript Date objects. Useful when Lucid models perform their own date parsing through Luxon.
- decimalNumbers: A mysql2-specific option. When set to true, the driver casts DECIMAL and NEWDECIMAL columns to JavaScript numbers instead of strings. Enable this only when the loss of precision is acceptable.
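Putting the driver-specific options together, a connection object for a UTC-normalized MySQL setup might look like the following sketch (the values are illustrative, and the fields are those documented above):

```ts
connection: {
  host: env.get('DB_HOST'),
  port: env.get('DB_PORT'),
  user: env.get('DB_USER'),
  password: env.get('DB_PASSWORD'),
  database: env.get('DB_DATABASE'),
  charset: 'utf8mb4_unicode_ci',
  timezone: 'Z',       // read and write timestamps as UTC
  dateStrings: true,   // hand raw strings to Luxon-based model parsing
},
```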
PostgreSQL
Use the pg client for PostgreSQL and compatible services.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'postgres',
  connections: {
    postgres: {
      client: 'pg',
      connection: {
        host: env.get('DB_HOST'),
        port: env.get('DB_PORT'),
        user: env.get('DB_USER'),
        password: env.get('DB_PASSWORD'),
        database: env.get('DB_DATABASE'),
      },
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
The connection accepts the standard fields plus the following PostgreSQL-specific options.
- ssl: SSL configuration for connections that require encrypted transport. Set to true for a simple encrypted connection, or pass an object with the Node.js tls.ConnectionOptions shape to provide certificates and verification rules. See SSL and production notes.
- connectionString: A full PostgreSQL connection URL. When set inside the connection object, the driver parses the URL and merges it with other fields. This is an alternative to passing the URL as the top-level connection value.
The PostgreSQL config also exposes two options at the connection level, outside the connection object.
- searchPath: Array of PostgreSQL schemas to set on the connection's search_path. Use this when your application stores tables across multiple schemas, for example ['public', 'tenant_1'].
- returning: Default value for the RETURNING clause added by Knex to insert and update queries. The default is id. Override this only if you need a different default column for returning rows.
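To illustrate the placement, both options sit next to client and connection rather than inside the connection object. A sketch for a multi-schema setup (the schema names are illustrative):

```ts
postgres: {
  client: 'pg',
  connection: env.get('DATABASE_URL'),
  searchPath: ['public', 'tenant_1'],
  returning: 'id',
},
```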
MSSQL
Use the mssql client for Microsoft SQL Server and Azure SQL.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'mssql',
  connections: {
    mssql: {
      client: 'mssql',
      connection: {
        server: env.get('DB_HOST'),
        port: env.get('DB_PORT'),
        user: env.get('DB_USER'),
        password: env.get('DB_PASSWORD'),
        database: env.get('DB_DATABASE'),
        options: {
          encrypt: true,
        },
      },
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
Note that MSSQL uses server instead of host for the hostname. The connection accepts the following driver-specific options.
- server: Hostname of the SQL Server instance. Required.
- domain: Domain name for Windows authentication. Leave unset for standard username and password authentication.
- connectionTimeout: Milliseconds to wait while establishing a new connection before giving up. Defaults to 15000.
- requestTimeout: Milliseconds to wait for a single query to complete before aborting. Defaults to 15000.
- options.encrypt: Whether to use TLS for the connection. Azure SQL and many managed SQL Server providers require this to be true.
- options.trustServerCertificate: When set to true, the driver skips TLS certificate verification. Use this only for development against self-signed certificates, never in production.
- options.isolationLevel: Default isolation level for transactions. Supported values: READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE, SNAPSHOT.
- options.instanceName: Named instance of SQL Server to connect to when running multiple instances on the same host.
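As an illustration, a connection for a managed SQL Server instance might combine the timeout and isolation options as follows. This is a sketch built from the fields documented above, not provider-specific guidance:

```ts
connection: {
  server: env.get('DB_HOST'),
  port: env.get('DB_PORT'),
  user: env.get('DB_USER'),
  password: env.get('DB_PASSWORD'),
  database: env.get('DB_DATABASE'),
  requestTimeout: 30_000,
  options: {
    encrypt: true,
    isolationLevel: 'READ_COMMITTED',
  },
},
```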
Connection strings
Connection strings are convenient when your hosting provider exposes credentials as a single URL. Pass the URL directly as the connection value.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'postgres',
  connections: {
    postgres: {
      client: 'pg',
      connection: env.get('DATABASE_URL'),
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
DATABASE_URL=postgres://username:password@localhost:5432/database_name
Keep one source of truth for credentials. If production exposes DATABASE_URL, use the connection string in both environments rather than splitting the same value into separate DB_HOST, DB_PORT, and DB_USER variables.
For PostgreSQL, you can also pass the URL inside the connection object using the connectionString field, which is useful when you need to merge it with additional options like ssl.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'postgres',
  connections: {
    postgres: {
      client: 'pg',
      connection: {
        connectionString: env.get('DATABASE_URL'),
        ssl: { rejectUnauthorized: false },
      },
    },
  },
})

export default dbConfig
SSL and production notes
Most hosted databases require TLS for production connections. Configuration differs slightly across drivers.
For PostgreSQL, pass an ssl object with the Node.js tls.ConnectionOptions shape. Providers that use Let's Encrypt-style certificates usually work with the default settings. Self-signed certificates require rejectUnauthorized: false or an explicit ca chain.
connection: {
  host: env.get('DB_HOST'),
  port: env.get('DB_PORT'),
  user: env.get('DB_USER'),
  password: env.get('DB_PASSWORD'),
  database: env.get('DB_DATABASE'),
  ssl: {
    rejectUnauthorized: true,
    ca: env.get('DB_CA_CERT'),
  },
}
For MySQL, pass an ssl object with the same shape, or { rejectUnauthorized: false } to skip verification for providers with untrusted certificates.
connection: {
  host: env.get('DB_HOST'),
  user: env.get('DB_USER'),
  password: env.get('DB_PASSWORD'),
  database: env.get('DB_DATABASE'),
  ssl: {
    rejectUnauthorized: true,
  },
}
For MSSQL, TLS is configured inside options.encrypt. Set it to true for Azure SQL and most managed providers. When connecting to local development SQL Server instances, you may also need options.trustServerCertificate: true.
connection: {
  server: env.get('DB_HOST'),
  user: env.get('DB_USER'),
  password: env.get('DB_PASSWORD'),
  database: env.get('DB_DATABASE'),
  options: {
    encrypt: true,
    trustServerCertificate: false,
  },
}
Two more production-relevant options worth calling out:
- For PostgreSQL, set searchPath when your application uses non-default schemas. Without it, Lucid (and Knex) resolve unqualified table names against public only.
- For MySQL, set timezone: 'Z' so the driver reads and writes timestamps as UTC regardless of the database server's local timezone.
Multiple connections
Define multiple named connections when one application needs to query different databases. The top-level connection value remains the default.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'primary',
  connections: {
    primary: {
      client: 'pg',
      connection: env.get('PRIMARY_DATABASE_URL'),
      migrations: {
        paths: ['database/migrations'],
      },
    },
    analytics: {
      client: 'pg',
      connection: env.get('ANALYTICS_DATABASE_URL'),
      migrations: {
        paths: ['database/analytics_migrations'],
      },
    },
  },
})

export default dbConfig
Use db.connection('name') to query a named connection from application code.
import db from '@adonisjs/lucid/services/db'

export async function getSignupTotals() {
  return db
    .connection('analytics')
    .from('daily_signups')
    .select('date', 'total')
    .orderBy('date', 'desc')
}
For models that always live on a non-default connection, set static connection on the model so every query and relationship uses the correct database.
import { DailySignupsSchema } from '#database/schema'

export default class DailySignup extends DailySignupsSchema {
  static connection = 'analytics'
}
Each connection has its own migration directory as shown in the migrations.paths example above. Lucid's migration commands accept a --connection flag so you can target a specific connection. See the migrations guide for more.
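The flag is used like this (a sketch of the CLI usage described above; the exact flag syntax follows the usual Ace conventions):

```sh
# Run pending migrations on the default connection
node ace migration:run

# Target the analytics connection explicitly
node ace migration:run --connection=analytics
node ace migration:rollback --connection=analytics
```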
Read/write replicas
Read/write replicas let you send read queries to one or more read nodes and write queries to a write node. Lucid picks read nodes in round-robin order, but it does not replicate data for you. Database replication must be configured outside Lucid.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'postgres',
  connections: {
    postgres: {
      client: 'pg',
      connection: {
        user: env.get('DB_USER'),
        password: env.get('DB_PASSWORD'),
        database: env.get('DB_DATABASE'),
      },
      replicas: {
        read: {
          connection: [
            { host: env.get('DB_READ_HOST_1'), port: env.get('DB_PORT') },
            { host: env.get('DB_READ_HOST_2'), port: env.get('DB_PORT') },
          ],
        },
        write: {
          connection: {
            host: env.get('DB_WRITE_HOST'),
            port: env.get('DB_PORT'),
          },
        },
      },
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
    },
  },
})

export default dbConfig
The replicas property is available on MySQL, PostgreSQL, and MSSQL connections. SQLite and LibSQL do not support replicas.
Each replica block accepts the same connection shape as the main connection, plus its own optional pool block so that read and write pools can be tuned separately.
Select a read or write client explicitly when a workflow needs a specific mode.
import db from '@adonisjs/lucid/services/db'

export async function listPublishedPosts() {
  return db
    .connection('postgres', { mode: 'read' })
    .from('posts')
    .where('is_published', true)
}

export async function publishPost(id: number) {
  return db
    .connection('postgres', { mode: 'write' })
    .from('posts')
    .where('id', id)
    .update({ is_published: true })
}
Replica lag can make a newly written row unavailable on a read replica for a short time.
For workflows that read a row immediately after writing it, select the write connection so the query runs against the up-to-date primary.
Connection pooling
Connection pooling keeps a limited set of open database connections and reuses them across queries. A larger pool is not automatically better: too many concurrent database connections can reduce database performance and starve other applications sharing the same database.
import env from '#start/env'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'postgres',
  connections: {
    postgres: {
      client: 'pg',
      connection: env.get('DATABASE_URL'),
      pool: {
        min: 2,
        max: 10,
        acquireTimeoutMillis: 60_000,
        afterCreate: (conn, done) => {
          conn.query(`SET timezone='UTC';`, (err: Error) => done(err, conn))
        },
      },
    },
  },
})

export default dbConfig
Set max based on the database server capacity and the number of application processes. If you run eight Node.js processes and each process has max: 10, the application can open up to eighty database connections.
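Since each process maintains its own pool, the worst-case connection count is a simple product, and it is worth checking against the database server's connection limit before raising max. A small helper makes the arithmetic explicit (plain TypeScript, not a Lucid API):

```typescript
// Worst case: every process fills its pool at the same time.
function maxTotalConnections(processes: number, poolMaxPerProcess: number): number {
  return processes * poolMaxPerProcess
}

// Eight Node.js processes, each with pool.max = 10:
console.log(maxTotalConnections(8, 10)) // 80
```

Compare the result against limits such as PostgreSQL's max_connections before scaling out application processes.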
The pool block accepts the following options.
- min: Minimum number of connections kept open in the pool. Defaults to 2.
- max: Maximum number of connections the pool will open before queuing new requests. Defaults to 10.
- acquireTimeoutMillis: Milliseconds to wait for a connection from the pool before throwing a timeout error. Defaults to 60000.
- createTimeoutMillis: Milliseconds to wait while creating a new connection before treating the attempt as failed. Defaults to 30000.
- idleTimeoutMillis: Milliseconds an idle connection can sit in the pool before being closed. Defaults to 30000.
- createRetryIntervalMillis: Milliseconds to wait between failed attempts to create a new connection. Defaults to 200.
- reapIntervalMillis: How often the pool scans for idle connections that are past their idleTimeoutMillis. Defaults to 1000.
- propagateCreateError: When true (the Knex default), the first failed connection creation is thrown to the caller that triggered it. When set to false, creation errors are swallowed and the pool keeps retrying until the acquire timeout is reached.
- afterCreate: Callback invoked after each new connection is created. Useful for running per-connection setup such as SET timezone, SET search_path, or PRAGMA statements. The callback receives the raw driver connection and a done callback that must be called to signal completion.
- log: Custom logger callback. Called with informational messages from the underlying pool library.
- validate: Custom validator callback invoked when a connection is checked out of the pool. Returning false causes the connection to be destroyed and replaced.
Shared connection options
The options in this section apply to every driver and sit alongside client, connection, and pool on a connection.
- useNullAsDefault: When true, Knex uses NULL for missing values during inserts rather than the database's default. Required for SQLite to avoid warnings when inserting rows without every column set, and harmless to leave on elsewhere.
- asyncStackTraces: When true, Knex captures the originating call site for every query and includes it in errors. This makes tracing a failing query back to its source much easier during development. The feature has a small runtime cost, so enable it only in development environments. For example, in config/database.ts:

  {
    client: 'pg',
    connection: env.get('DATABASE_URL'),
    asyncStackTraces: app.inDev,
  }

- debug: When true, Knex's internal query logging is routed through the AdonisJS logger at debug level, so executed queries appear in your log output alongside the rest of the application. For richer information such as bindings, duration, and connection name, subscribe to the db:query event instead. See Debug output.
Migrations config
Every connection has its own migrations block. Lucid's migration commands read this block to discover migration files, apply them, and track state.
{
  client: 'pg',
  connection: env.get('DATABASE_URL'),
  migrations: {
    naturalSort: true,
    paths: ['database/migrations'],
    tableName: 'adonis_schema',
    disableTransactions: false,
    disableRollbacksInProduction: true,
  },
}
- paths: Array of directories to scan for migration files. Every .ts or .js file found in these directories is loaded and executed in sorted order. Defaults to ['database/migrations'].
- naturalSort: When true, migration files are sorted using natural sort order (so 10_... comes after 2_...). Use this for timestamp-prefixed filenames. Defaults to false.
- tableName: Name of the table Lucid uses to track which migrations have run. Defaults to adonis_schema. Change this only when integrating with an existing database that already uses a different tracking table.
- disableTransactions: When true, Lucid skips wrapping each migration file in a transaction. By default, every migration runs inside its own transaction so partial failures roll back cleanly. Disable this only when a migration uses statements that cannot run inside a transaction, such as certain PostgreSQL DDL operations.
- disableRollbacksInProduction: When true, the migration:rollback, migration:reset, and migration:refresh commands refuse to run in production. Because rollback actions are usually destructive (dropping tables, removing columns), disabling them in production prevents accidental data loss.
Seeders config
Every connection also accepts a seeders block for the db:seed command.
{
  client: 'pg',
  connection: env.get('DATABASE_URL'),
  seeders: {
    paths: ['database/seeders'],
    naturalSort: true,
  },
}
- paths: Array of directories to scan for seeder files. Defaults to ['database/seeders'].
- naturalSort: When true, seeder files are sorted using natural sort order. Defaults to false.
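Seeders are executed with the db:seed command. A sketch of typical usage (the --connection flag mirrors the migration commands and is assumed here):

```sh
# Run all seeders found in the configured paths
node ace db:seed

# Run seeders against a named connection
node ace db:seed --connection=analytics
```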
Schema generation config
Lucid regenerates database/schema.ts automatically after every migration run. The schemaGeneration block controls this behavior.
{
  client: 'pg',
  connection: env.get('DATABASE_URL'),
  schemaGeneration: {
    enabled: true,
    outputPath: 'database/schema.ts',
    excludeTables: ['knex_migrations', 'adonis_schema_versions'],
    rulesPaths: ['database/schema_rules.ts'],
  },
}
- enabled: When false, Lucid skips both the schema:generate command and the automatic regeneration that runs after migrations. Use this when you want to commit database/schema.ts manually instead of regenerating it on every migration. Defaults to true.
- outputPath: Path where the generated schema file is written. Defaults to database/schema.ts.
- excludeTables: Array of table names that Lucid should skip when generating schema classes. Useful for third-party tables that your application does not query through Lucid models.
- rulesPaths: Array of paths to schema rules files. These files can override type mappings, column names, and other generator behavior on a per-table or per-column basis. See the schema classes guide for the rules reference.
Protecting tables from db:wipe
The db:wipe command drops every table in the database. For workflows that mix Lucid-managed tables with tables maintained by other tools, such as PostGIS or a message queue, set wipe.ignoreTables to keep those tables intact.
{
  client: 'pg',
  connection: env.get('DATABASE_URL'),
  wipe: {
    ignoreTables: ['spatial_ref_sys', 'pgboss_jobs'],
  },
}
PostgreSQL connections already exclude spatial_ref_sys from wipe operations by default. Add additional tables here as your setup requires.
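With ignoreTables in place, wiping and rebuilding a development database leaves the listed tables untouched. A sketch of the typical workflow:

```sh
# Drop all tables except those listed in wipe.ignoreTables
node ace db:wipe

# Recreate the schema from migrations
node ace migration:run
```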
Debug output
Lucid emits a db:query event for every executed query. Subscribing to this event is the recommended way to log, profile, or inspect SQL in any environment.
import emitter from '@adonisjs/core/services/emitter'
import logger from '@adonisjs/core/services/logger'

emitter.on('db:query', (query) => {
  logger.debug({ sql: query.sql, bindings: query.bindings, duration: query.duration })
})
For local development, Lucid ships with a built-in pretty printer. Enable it by setting prettyPrintDebugQueries at the top level of your config.
import env from '#start/env'
import app from '@adonisjs/core/services/app'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'postgres',
  prettyPrintDebugQueries: app.inDev,
  connections: {
    postgres: {
      client: 'pg',
      connection: env.get('DATABASE_URL'),
    },
  },
})

export default dbConfig
When enabled, every query is printed to the terminal with color, timing, and bindings already interpolated for readability.
A third option is the connection-level debug: true flag, which routes Knex's built-in query logging through the AdonisJS logger at debug level. Lucid also emits a one-time notice when this path is used, recommending the db:query event for richer logging. The flag is still useful when you want query output to flow through your existing logger pipeline without writing a custom event listener.
Next steps
- Database service guide for the db service entry points, runtime connection selection, and manager APIs.
- Migrations guide for migration file structure, commands, and multi-connection workflows.
- Transactions guide for isolation levels, managed and manual transaction APIs, and cross-model transactions.