Database

This guide covers the full capabilities of Remix V3's data-table package. It goes deeper than the tutorial, covering every column type, all query operators, lifecycle hooks, transactions, raw SQL, and performance optimization.

Table Definitions

Tables are defined with the table function. Each table has a name and a set of columns:

ts
import { table, column as c } from 'remix/data-table'

export const posts = table({
  name: 'posts',
  columns: {
    id: c.integer(),
    title: c.text(),
    body: c.text(),
    published: c.boolean(),
    view_count: c.integer(),
    created_at: c.timestamp(),
  },
})

All Column Types

Method | SQL Type | TypeScript Type | Description
------ | -------- | --------------- | -----------
c.integer() | INTEGER | number | Whole numbers
c.text() | TEXT / VARCHAR | string | Variable-length strings
c.boolean() | BOOLEAN / INTEGER | boolean | True or false
c.decimal(precision, scale) | DECIMAL(p,s) / REAL | number | Fixed-point numbers
c.enum([...values]) | TEXT | union of string literals | Restricted string values
c.json() | JSON / TEXT | unknown | JSON data
c.timestamp() | TIMESTAMP / INTEGER | Date | Date and time values

Column Options

Each column type accepts optional configuration:

ts
export const users = table({
  name: 'users',
  columns: {
    id: c.integer(),                          // Auto-incrementing primary key (first integer column)
    email: c.text({ unique: true }),           // Unique constraint
    name: c.text({ nullable: false }),         // NOT NULL (default for most columns)
    bio: c.text({ nullable: true }),           // Allow NULL values
    role: c.enum(['user', 'admin'], { default: 'user' }),  // Default value
    login_count: c.integer({ default: 0 }),    // Default value for integers
    metadata: c.json({ nullable: true }),      // Nullable JSON column
    created_at: c.timestamp({ default: 'now' }), // Default to current time
  },
})

Composite Primary Keys

By default, the first c.integer() column is treated as the primary key. For join tables or composite keys, use the primaryKey option:

ts
export const postTags = table({
  name: 'post_tags',
  primaryKey: ['post_id', 'tag_id'],
  columns: {
    post_id: c.integer(),
    tag_id: c.integer(),
  },
})

Type Inference

Use TableRow to get the TypeScript type for a row:

ts
import type { TableRow, InferInput, InferOutput } from 'remix/data-table'

// The full row type (all columns)
type Post = TableRow<typeof posts>
// { id: number; title: string; body: string; published: boolean; view_count: number; created_at: Date }

// The input type (for create/update -- no id, optionals for defaults)
type PostInput = InferInput<typeof posts>

// The output type (what queries return)
type PostOutput = InferOutput<typeof posts>

Lifecycle Hooks

Lifecycle hooks let you run code at specific points in the data lifecycle. They are defined on the table:

ts
export const posts = table({
  name: 'posts',
  columns: {
    id: c.integer(),
    title: c.text(),
    slug: c.text(),
    body: c.text(),
    word_count: c.integer({ default: 0 }),
    created_at: c.timestamp({ default: 'now' }),
    updated_at: c.timestamp({ nullable: true }),
  },

  // Runs before a row is created or updated
  beforeWrite(row) {
    // Auto-generate slug from title
    if (row.title) {
      row.slug = row.title.toLowerCase().replace(/\s+/g, '-').replace(/[^a-z0-9-]/g, '')
    }
    // Calculate word count
    if (row.body) {
      row.word_count = row.body.split(/\s+/).length
    }
    // Set updated timestamp
    row.updated_at = new Date()
    return row
  },

  // Runs after a row is created or updated (the returned row is in the DB)
  afterWrite(row) {
    console.log(`Post saved: ${row.id} - ${row.title}`)
  },

  // Runs after a row is read from the database
  afterRead(row) {
    // Transform data on the way out
    return row
  },

  // Runs before a row is deleted
  beforeDelete(row) {
    console.log(`Deleting post: ${row.id}`)
  },

  // Custom validation before write
  validate(row) {
    let issues: string[] = []
    if (!row.title || row.title.length === 0) {
      issues.push('Title is required')
    }
    if (row.title && row.title.length > 200) {
      issues.push('Title must be 200 characters or fewer')
    }
    return issues
  },
})

Hook execution order

For a create or update operation, hooks run in this order:

  1. validate -- If it returns issues, the operation is aborted
  2. beforeWrite -- Transform the row before saving
  3. The database write happens
  4. afterWrite -- React to the saved row
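The pipeline above can be sketched in a few lines. The Hooks shape and runWrite helper below are an illustrative model of the documented order, not the library's internals:

```ts
type Row = Record<string, unknown>

interface Hooks {
  validate?: (row: Row) => string[]
  beforeWrite?: (row: Row) => Row
  afterWrite?: (row: Row) => void
}

function runWrite(hooks: Hooks, row: Row, write: (row: Row) => Row): Row {
  // 1. validate -- abort before anything touches the database
  let issues = hooks.validate?.(row) ?? []
  if (issues.length > 0) throw new Error(issues.join('; '))
  // 2. beforeWrite -- transform the row before saving
  let prepared = hooks.beforeWrite?.(row) ?? row
  // 3. the database write happens
  let saved = write(prepared)
  // 4. afterWrite -- react to the saved row
  hooks.afterWrite?.(saved)
  return saved
}
```

The key property to notice: a non-empty array from validate stops the pipeline before beforeWrite or the write itself ever runs.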

Query Building

Basic Queries

ts
import { eq } from 'remix/data-table/operators'

// Find one record by ID
let post = await db.findOne(posts, {
  where: eq(posts.columns.id, 42),
})

// Find many with conditions
let recentPosts = await db.findMany(posts, {
  where: eq(posts.columns.published, true),
  orderBy: [{ column: posts.columns.created_at, direction: 'desc' }],
  limit: 10,
  offset: 0,
})

// Count records
let totalPosts = await db.count(posts, {
  where: eq(posts.columns.published, true),
})

// Check if a record exists
let exists = await db.exists(posts, {
  where: eq(posts.columns.slug, 'hello-world'),
})

Select Specific Columns

ts
let titles = await db.findMany(posts, {
  select: ['id', 'title', 'slug'],
  where: eq(posts.columns.published, true),
})
// Returns: { id: number; title: string; slug: string }[]

Distinct

ts
let genres = await db.findMany(books, {
  select: ['genre'],
  distinct: true,
})

All Operators

Import operators from remix/data-table/operators:

ts
import {
  eq,       // Equal: column = value
  ne,       // Not equal: column != value
  gt,       // Greater than: column > value
  lt,       // Less than: column < value
  gte,      // Greater than or equal: column >= value
  lte,      // Less than or equal: column <= value
  like,     // Pattern match (case-sensitive): column LIKE pattern
  ilike,    // Pattern match (case-insensitive): column ILIKE pattern
  inList,   // In a set: column IN (value1, value2, ...)
  between,  // Range: column BETWEEN low AND high
  isNull,   // Null check: column IS NULL
  and,      // Combine conditions with AND
  or,       // Combine conditions with OR
} from 'remix/data-table/operators'

Usage Examples

ts
// Equality
let post = await db.findOne(posts, {
  where: eq(posts.columns.slug, 'hello-world'),
})

// Comparison
let expensiveBooks = await db.findMany(books, {
  where: gt(books.columns.price, 20),
})

// Pattern matching (% matches any sequence of characters, _ matches exactly one)
let searchResults = await db.findMany(posts, {
  where: ilike(posts.columns.title, '%remix%'),
})

// In a list of values
let selectedBooks = await db.findMany(books, {
  where: inList(books.columns.genre, ['fiction', 'sci-fi', 'fantasy']),
})

// Range
let midPriceBooks = await db.findMany(books, {
  where: between(books.columns.price, 10, 30),
})

// Null check
let drafts = await db.findMany(posts, {
  where: isNull(posts.columns.published_at),
})
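To make the % and _ semantics concrete, here is a small converter from a LIKE pattern to a RegExp. It is purely illustrative and not part of the data-table API:

```ts
// % -> any run of characters (including none), _ -> exactly one character.
// Everything else is matched literally, so regex metacharacters are escaped first.
function likeToRegExp(pattern: string, caseInsensitive = false): RegExp {
  let escaped = pattern.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
  let body = escaped.replace(/%/g, '.*').replace(/_/g, '.')
  return new RegExp(`^${body}$`, caseInsensitive ? 'i' : '')
}
```

For example, likeToRegExp('%remix%', true) behaves like ILIKE '%remix%', and 'h_llo' matches 'hello' but not 'heello'.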

Combining Conditions

ts
// AND: all conditions must be true
let inStockFiction = await db.findMany(books, {
  where: and(
    eq(books.columns.genre, 'fiction'),
    gt(books.columns.price, 10),
    eq(books.columns.in_stock, true),
  ),
})

// OR: at least one condition must be true
let fictionOrSciFi = await db.findMany(books, {
  where: or(
    eq(books.columns.genre, 'fiction'),
    eq(books.columns.genre, 'sci-fi'),
  ),
})

// Nesting AND and OR
let matches = await db.findMany(books, {
  where: and(
    eq(books.columns.in_stock, true),
    or(
      eq(books.columns.genre, 'fiction'),
      eq(books.columns.genre, 'sci-fi'),
    ),
    gte(books.columns.price, 5),
  ),
})

Ordering

ts
let results = await db.findMany(posts, {
  orderBy: [
    { column: posts.columns.published, direction: 'desc' },
    { column: posts.columns.created_at, direction: 'desc' },
  ],
})

Pagination

ts
let pageSize = 20
let page = 3

let results = await db.findMany(posts, {
  where: eq(posts.columns.published, true),
  orderBy: [{ column: posts.columns.created_at, direction: 'desc' }],
  limit: pageSize,
  offset: (page - 1) * pageSize,
})
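The offset arithmetic is easy to get off by one page. These hypothetical helpers (not part of the API) make the 1-based convention explicit:

```ts
// Pages are 1-based: page 1 starts at offset 0, page 3 with pageSize 20 skips 40 rows.
function pageWindow(page: number, pageSize: number) {
  return { limit: pageSize, offset: (page - 1) * pageSize }
}

// Total number of pages for a result set of `total` rows.
function totalPages(total: number, pageSize: number) {
  return Math.ceil(total / pageSize)
}
```

Pair totalPages with db.count to render pagination controls alongside the page of results.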

Relations

Relations describe how tables connect to each other. They are used for eager loading related data.

belongsTo

A row in one table references a row in another table:

ts
import { belongsTo } from 'remix/data-table'

// Each comment belongs to a post
// Assumes comments has a `post_id` column referencing posts.id
export const postForComment = belongsTo(comments, posts)

// Each comment belongs to a user
export const authorForComment = belongsTo(comments, users)

hasMany

A row in one table has many related rows in another:

ts
import { hasMany } from 'remix/data-table'

// A post has many comments
export const commentsByPost = hasMany(posts, comments)

// A user has many posts
export const postsByUser = hasMany(users, posts)

hasManyThrough

A many-to-many relationship through a join table:

ts
import { hasManyThrough } from 'remix/data-table'

// Posts have many tags through post_tags
export const tagsForPost = hasManyThrough(posts, tags, postTags)

// Tags have many posts through post_tags
export const postsForTag = hasManyThrough(tags, posts, postTags)

The join table (postTags) must have columns referencing both sides:

ts
export const postTags = table({
  name: 'post_tags',
  primaryKey: ['post_id', 'tag_id'],
  columns: {
    post_id: c.integer(),
    tag_id: c.integer(),
  },
})

Eager Loading with .with()

Load related data in a single query using .with():

ts
// Load a post with its comments
let post = await db.findOne(posts, {
  where: eq(posts.columns.id, 42),
  with: {
    comments: commentsByPost,
  },
})
// post.comments is Comment[]

// Load a post with comments and each comment's author
let post = await db.findOne(posts, {
  where: eq(posts.columns.id, 42),
  with: {
    comments: {
      relation: commentsByPost,
      with: {
        author: authorForComment,
      },
    },
  },
})
// post.comments[0].author is User

// Load a post with its tags (many-to-many)
let post = await db.findOne(posts, {
  where: eq(posts.columns.id, 42),
  with: {
    tags: tagsForPost,
  },
})
// post.tags is Tag[]
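Under the hood, a hasMany eager load can be satisfied with one query for the parents, one batched query for the children (using inList on the foreign key), and an in-memory grouping step. A sketch with illustrative row shapes:

```ts
interface PostRow { id: number; title: string }
interface CommentRow { id: number; post_id: number; body: string }

// Group child rows by foreign key, then attach each bucket to its parent.
function attachComments(postRows: PostRow[], commentRows: CommentRow[]) {
  let byPost = new Map<number, CommentRow[]>()
  for (let comment of commentRows) {
    let bucket = byPost.get(comment.post_id) ?? []
    bucket.push(comment)
    byPost.set(comment.post_id, bucket)
  }
  return postRows.map((post) => ({ ...post, comments: byPost.get(post.id) ?? [] }))
}
```

Note that parents with no children still get an empty array, which is why post.comments is always safe to iterate.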

CRUD Operations

create

Insert a single row:

ts
let post = await db.create(posts, {
  title: 'Hello World',
  body: 'This is my first post.',
  published: false,
  view_count: 0,
})
// Returns the created row with its auto-generated id

createMany

Insert multiple rows at once:

ts
let newPosts = await db.createMany(posts, [
  { title: 'Post 1', body: 'Body 1', published: true, view_count: 0 },
  { title: 'Post 2', body: 'Body 2', published: false, view_count: 0 },
  { title: 'Post 3', body: 'Body 3', published: true, view_count: 0 },
])

update

Update rows that match a condition:

ts
await db.update(posts, {
  where: eq(posts.columns.id, 42),
  set: {
    title: 'Updated Title',
    published: true,
  },
})

delete

Delete rows that match a condition:

ts
await db.delete(posts, {
  where: eq(posts.columns.id, 42),
})

// Delete all drafts older than 30 days
let thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)
await db.delete(posts, {
  where: and(
    eq(posts.columns.published, false),
    lt(posts.columns.created_at, thirtyDaysAgo),
  ),
})

upsert

Insert a row or update it if a conflict occurs:

ts
await db.upsert(users, {
  values: {
    email: 'jane@example.com',
    name: 'Jane Doe',
    role: 'user',
    created_at: new Date(),
  },
  conflictOn: ['email'],
  update: {
    name: 'Jane Doe',
  },
})

findOne

Retrieve a single row or null:

ts
let user = await db.findOne(users, {
  where: eq(users.columns.email, 'jane@example.com'),
})

if (!user) {
  return new Response('User not found', { status: 404 })
}

findMany

Retrieve multiple rows:

ts
let allPosts = await db.findMany(posts, {
  where: eq(posts.columns.published, true),
  orderBy: [{ column: posts.columns.created_at, direction: 'desc' }],
  limit: 20,
})

count

Count rows matching a condition:

ts
let totalPublished = await db.count(posts, {
  where: eq(posts.columns.published, true),
})

exists

Check if any row matches:

ts
let emailTaken = await db.exists(users, {
  where: eq(users.columns.email, 'jane@example.com'),
})

Transactions

Transactions ensure that a group of operations either all succeed or all fail. If any operation in a transaction throws an error, all changes are rolled back.

ts
await db.transaction(async (tx) => {
  // Create an order
  let order = await tx.create(orders, {
    user_id: userId,
    total: cartTotal,
    status: 'pending',
    created_at: new Date(),
  })

  // Create order items
  for (let item of cartItems) {
    await tx.create(orderItems, {
      order_id: order.id,
      book_id: item.bookId,
      title: item.title,
      unit_price: item.price,
      quantity: item.quantity,
    })
  }

  // Update book stock
  for (let item of cartItems) {
    await tx.update(books, {
      where: eq(books.columns.id, item.bookId),
      set: { in_stock: false },
    })
  }
})

Transactions and async

All operations inside a transaction callback must use the tx object, not the main db object. Using db inside a transaction would execute queries outside the transaction boundary, defeating the purpose.
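The all-or-nothing guarantee can be modeled as snapshot-and-restore. Real adapters issue BEGIN/COMMIT/ROLLBACK, but this sketch over an in-memory store shows the contract:

```ts
// Run fn against the store; if it throws, restore the pre-transaction
// snapshot and re-throw so the caller sees the failure.
async function withTransaction<T>(
  store: { rows: unknown[] },
  fn: (tx: { rows: unknown[] }) => Promise<T>,
): Promise<T> {
  let snapshot = store.rows.slice()
  try {
    return await fn(store)
  } catch (error) {
    store.rows = snapshot // roll back every change made inside fn
    throw error
  }
}
```

This also shows why using db instead of tx inside the callback is a bug: writes through a different handle never enter the snapshot-and-restore (or BEGIN/ROLLBACK) boundary, so they survive a rollback.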

Raw SQL

For queries that cannot be expressed with the query builder, use raw SQL.

Tagged Template SQL

ts
import { sql } from 'remix/data-table'

// Parameterized queries (safe from SQL injection)
let results = await db.execute(
  sql`SELECT * FROM posts WHERE title ILIKE ${'%' + searchTerm + '%'} ORDER BY created_at DESC LIMIT ${20}`
)

// Complex aggregations
let stats = await db.execute(
  sql`SELECT genre, COUNT(*) as count, AVG(price) as avg_price FROM books GROUP BY genre ORDER BY count DESC`
)

Values interpolated into sql`...` are always parameterized -- they are never concatenated into the SQL string. This prevents SQL injection.
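A tagged template makes this separation natural: the literal text and the interpolated values arrive as separate arguments, so values never touch the query string. This sketch models the idea with PostgreSQL-style $1 placeholders; it is not the library's implementation:

```ts
// strings holds the literal SQL fragments; values holds the interpolations.
// Each value becomes a numbered placeholder and is collected as a parameter.
function sqlSketch(strings: TemplateStringsArray, ...values: unknown[]) {
  let text = strings[0]
  for (let i = 0; i < values.length; i++) {
    text += `$${i + 1}` + strings[i + 1]
  }
  return { text, params: values }
}
```

Even a value like "'; DROP TABLE posts; --" just becomes a parameter; it can never change the shape of the query.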

Raw SQL Strings

For cases where you need to insert raw SQL (like column names or table names), use rawSql:

ts
import { sql, rawSql } from 'remix/data-table'

let tableName = 'posts'
let results = await db.execute(
  sql`SELECT * FROM ${rawSql(tableName)} WHERE id = ${42}`
)

Be careful with rawSql

rawSql inserts a string directly into the SQL query without parameterization. Never use it with user input. Only use it for trusted values like table names or column names that you control.

Migrations

Migrations are SQL scripts that modify your database schema. They run in order and are tracked so each migration only runs once.
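The "run once, in order" guarantee amounts to a recorded set of applied names plus a sorted pending list. A sketch of that bookkeeping (the Migration shape is illustrative):

```ts
interface Migration { name: string; up: () => void }

// Apply every migration not yet in `applied`, in name order, and record it.
function runPending(migrations: Migration[], applied: Set<string>): string[] {
  let ran: string[] = []
  let pending = migrations
    .filter((m) => !applied.has(m.name))
    .sort((a, b) => a.name.localeCompare(b.name)) // names sort lexicographically
  for (let m of pending) {
    m.up()
    applied.add(m.name) // recorded, so it never runs again
    ran.push(m.name)
  }
  return ran
}
```

The lexicographic sort is why the numeric prefixes below matter: 001 runs before 002 regardless of file creation order.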

Creating Migrations

Create .sql files in your migrations directory. Name them with a numeric prefix so they sort correctly:

db/migrations/
  001-create-users.sql
  002-create-posts.sql
  003-add-published-at-to-posts.sql
sql
-- db/migrations/001-create-users.sql
CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  password_hash TEXT NOT NULL,
  name TEXT NOT NULL,
  role TEXT NOT NULL DEFAULT 'user',
  created_at INTEGER NOT NULL
);
sql
-- db/migrations/003-add-published-at-to-posts.sql
ALTER TABLE posts ADD COLUMN published_at INTEGER;
CREATE INDEX idx_posts_published_at ON posts(published_at);

Running Migrations

ts
import { createMigrationRunner } from 'remix/data-table/migrations'
import { loadMigrations } from 'remix/data-table/migrations/node'

let migrations = await loadMigrations('./db/migrations')
let runner = createMigrationRunner(adapter, migrations)

// Run all pending migrations
await runner.up()

Rolling Back

ts
// Roll back the last migration
await runner.down()

// Roll back the last N migrations
await runner.down(3)

Write reversible migrations

When possible, write migrations that can be undone. For CREATE TABLE, the reverse is DROP TABLE. For ADD COLUMN, the reverse is more complex (SQLite does not support DROP COLUMN in older versions). Plan your migrations carefully.

Migration Script

Create a standalone script for running migrations from the command line:

ts
// migrate.ts
import BetterSqlite3 from 'better-sqlite3'
import { createSqliteDatabaseAdapter } from 'remix/data-table-sqlite'
import { createMigrationRunner } from 'remix/data-table/migrations'
import { loadMigrations } from 'remix/data-table/migrations/node'

let sqlite = new BetterSqlite3('./db/app.sqlite')
let adapter = createSqliteDatabaseAdapter(sqlite)

let migrations = await loadMigrations('./db/migrations')
let runner = createMigrationRunner(adapter, migrations)

let command = process.argv[2] ?? 'up'

if (command === 'up') {
  await runner.up()
  console.log('Migrations applied.')
} else if (command === 'down') {
  await runner.down()
  console.log('Last migration rolled back.')
} else {
  console.log('Usage: tsx migrate.ts [up|down]')
}

sqlite.close()

Database Adapters

Remix supports multiple databases through adapters. Your table definitions and queries stay the same -- only the adapter changes.

SQLite

bash
npm install better-sqlite3
npm install -D @types/better-sqlite3
ts
import BetterSqlite3 from 'better-sqlite3'
import { createSqliteDatabaseAdapter } from 'remix/data-table-sqlite'
import { createDatabase } from 'remix/data-table'

let sqlite = new BetterSqlite3('./db/app.sqlite')
sqlite.pragma('foreign_keys = ON')
sqlite.pragma('journal_mode = WAL')

let adapter = createSqliteDatabaseAdapter(sqlite)
let db = createDatabase(adapter)

Enable WAL mode for SQLite

journal_mode = WAL (Write-Ahead Logging) allows concurrent reads while a write is in progress. This dramatically improves performance for web applications where reads far outnumber writes.

PostgreSQL

bash
npm install pg
npm install -D @types/pg
ts
import { Pool } from 'pg'
import { createPostgresDatabaseAdapter } from 'remix/data-table-postgres'
import { createDatabase } from 'remix/data-table'

let pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20, // Maximum connections in the pool
})

let adapter = createPostgresDatabaseAdapter(pool)
let db = createDatabase(adapter)

MySQL

bash
npm install mysql2
ts
import mysql from 'mysql2/promise'
import { createMysqlDatabaseAdapter } from 'remix/data-table-mysql'
import { createDatabase } from 'remix/data-table'

let pool = mysql.createPool({
  uri: process.env.DATABASE_URL,
  waitForConnections: true,
  connectionLimit: 20,
})

let adapter = createMysqlDatabaseAdapter(pool)
let db = createDatabase(adapter)

Performance Tips

Indexing

Create indexes on columns you frequently query by:

sql
-- In a migration file
CREATE INDEX idx_posts_slug ON posts(slug);
CREATE INDEX idx_posts_published ON posts(published);
CREATE INDEX idx_posts_created_at ON posts(created_at);

-- Composite index for queries that filter on both columns
CREATE INDEX idx_posts_published_created ON posts(published, created_at);

When to add indexes

Add indexes for columns used in WHERE clauses, ORDER BY clauses, and JOIN conditions. Do not index every column -- each index slows down writes and uses disk space. Start with the columns you query most and add more as needed.

Query Optimization

Select only the columns you need:

ts
// Instead of selecting all columns
let fullPosts = await db.findMany(posts)

// Select only what you need
let postSummaries = await db.findMany(posts, {
  select: ['id', 'title', 'slug'],
})

Use findOne instead of findMany when you expect one result:

ts
// findOne adds LIMIT 1 automatically
let user = await db.findOne(users, {
  where: eq(users.columns.email, 'jane@example.com'),
})

Batch operations with createMany:

ts
// Instead of creating one at a time in a loop
for (let item of items) {
  await db.create(orderItems, item) // N queries
}

// Use createMany for a single query
await db.createMany(orderItems, items) // 1 query

Use transactions for multiple related writes:

ts
// Wrapping related writes in a transaction is both safer and faster
// (fewer round trips to the database)
await db.transaction(async (tx) => {
  let order = await tx.create(orders, { ... })
  await tx.createMany(orderItems, items.map(i => ({ ...i, order_id: order.id })))
})

Connection Pooling

For PostgreSQL and MySQL, always use connection pooling in production:

ts
let pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,               // Maximum connections
  idleTimeoutMillis: 30000, // Close idle connections after 30s
  connectionTimeoutMillis: 5000, // Fail after 5s if no connection available
})

SQLite and concurrency

SQLite uses a file-level lock, so only one write can happen at a time. This is fine for most applications (SQLite can handle thousands of writes per second), but if you need high write concurrency, use PostgreSQL or MySQL.

Released under the MIT License.