Ian Macartney
7 days ago

Stateful Migrations using Mutations


Migrations are inevitable. Initial schemas are rarely perfect, and as your understanding of the problem evolves, you will change your mind about the ideal way to store information. So how do you migrate data at scale, when you may not be able to change everything in a single transaction?

In this post, we’ll look at strategies for migrating data: in particular, scalable online migrations that don’t require downtime or block deploys. We’ll be working specifically with Convex, but the concepts are universal.

To learn about migrations at a high level and some best practices, see this intro to migrations.

Schema Migrations

One thing to call out explicitly: with Convex, you don’t write migration code like “add column” or “add index.” All you need to do is update your schema.ts file and Convex handles the rest. Convex isn’t rigidly structured the way most SQL databases are. If you change a field from v.string() to v.union(v.string(), v.number()), Convex doesn’t have to reformat the data or table. However, it will enforce the schema you define, and will not let you deploy a schema that doesn’t match the data at rest. Alternatively, you can turn off schema validation and throw unstructured data into Convex, and it will also work.¹

With schema validation enabled, Convex will help your code and data stay in sync by only letting you push schemas that match the current data. To add a string field to an object, for instance, you can push a schema where that field is v.optional(v.string()). Once there is a string on every object, Convex will let you push a schema that is just v.string() and future writes will enforce that the field will always be set and be a string.

In this way, Convex gives you the ease of just defining your types declaratively, while also guaranteeing that they match the reality of the data at rest when you deploy your code and schema. It’s also worth mentioning that transitions from one schema definition and code version to the next are atomic, thanks to Convex coordinating both the functions and the database.

The rest of this post is about how you go about changing the underlying data.

Data Migrations using Mutations

To migrate data in Convex, you can use a mutation to transform your data. In particular, you'd likely use an internalMutation so it isn't exposed on your public API.

I’ve made some helpers you can use in your project so you only have to write the code relevant to updating documents. They let you run migrations over your data in batches. We'll use them in the following examples; see below for steps to install and configure them.

Common use cases

To illustrate what writing migrations looks like in Convex, let's use the migration helper. Below we'll show where this comes from, but the gist is that it runs a specified function over each document in your table, in batches.

Here's how to achieve common migration patterns:

Adding a new field with a default value

export const setDefaultPlan = migration({
  table: "teams",
  migrateOne: async (ctx, team) => {
    if (!team.plan) {
      await ctx.db.patch(team._id, { plan: "basic" });
    }
  },
});

If you’re using a schema and validation, you’d likely update the team’s schema first to define “plan” as:

plan: v.optional(v.union(v.literal("basic"), v.literal("pro")))

Then, after all the fields have a value, you’d change it to:

plan: v.union(v.literal("basic"), v.literal("pro"))

Convex won’t let you deploy a schema that doesn’t conform to the data unless you turn off schema validation. As a result, you can safely trust that the TypeScript types inferred from your schema match the actual data.

Note: this doesn’t have to be a static value. You could write the value based on other fields in the document, or whatever custom logic you like.

As a reminder for those who skipped the primer, to do this correctly, you’d also want to update your code to start writing the default field value on new documents before running this mutation to avoid missing any documents.
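To make the idempotency concrete, here is a minimal, Convex-free sketch (the names are illustrative, not the helper's API) of the per-document logic as a pure function. Returning null when the field is already set means re-running the migration over already-backfilled documents is a harmless no-op:

```typescript
// Illustrative only: the migrateOne decision as a pure, testable function.
type Team = { plan?: "basic" | "pro" };

// Returns the patch to apply, or null when no change is needed, so the
// migration can safely revisit documents that already have a plan.
function planPatch(team: Team): { plan: "basic" } | null {
  return team.plan ? null : { plan: "basic" };
}
```

New-document writes would set the same default directly, so only pre-existing documents ever need the patch.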

Deleting a field

If you’re sure you want to get rid of data, you would modify the schema in reverse: making the field optional before you can delete the data.

isPro: v.boolean() -> isPro: v.optional(v.boolean())

Then you can run the following:

export const removeBoolean = migration({
  table: "teams",
  migrateOne: async (ctx, team) => {
    if (team.isPro !== undefined) {
      await ctx.db.patch(team._id, { isPro: undefined });
    }
  },
});

As mentioned in the migration primer, I advise deprecating fields over deleting them when real user data is involved.

Changing the type of a field

You can both add and delete fields in the same migration. For example, we could have combined setting the default plan and deleting the deprecated isPro field:

export const updatePlanToEnum = migration({
  table: "teams",
  migrateOne: async (ctx, team) => {
    if (!team.plan) {
      await ctx.db.patch(team._id, {
        plan: team.isPro ? "pro" : "basic",
        isPro: undefined,
      });
    }
  },
});

I'd recommend using a new field when the type changes, but if you want to reuse the same field, you can do it with a union: zipCode: v.number() -> zipCode: v.union(v.string(), v.number())

export const zipCodeShouldBeAString = migration({
  table: "addresses",
  migrateOne: async (ctx, address) => {
    if (typeof address.zipCode === "number") {
      // Note: as a convenience, it will apply a patch you return.
      return { zipCode: address.zipCode.toString() };
    }
  },
});

Inserting documents based on some state

Let's say you're changing user preferences from an object in the users schema to its own document. You might consider this as preferences grows to include many options, or to avoid accidentally returning preference data to clients from queries that return users. You can walk the users table and insert into another table:

export const changePreferencesToDocument = migration({
  table: "users",
  migrateOne: async (ctx, user) => {
    const prefs = await ctx.db
      .query("preferences")
      .withIndex("userId", (q) => q.eq("userId", user._id))
      .unique();
    if (!prefs) {
      // Include userId so the index above can find the new document.
      await ctx.db.insert("preferences", {
        userId: user._id,
        ...user.preferences,
      });
      await ctx.db.patch(user._id, { preferences: undefined });
    }
  },
});

You'd also want code that creates a preferences document by default for new users, so the migration is only responsible for older users. And you'd update your code to first check the user for preferences, and if it's unset, fetch it from the table. Later, once you're confident every user has a preferences document, remove the preferences object from the users schema, and the code can read preferences from the table alone.
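The interim dual-read can be sketched as a plain function (hypothetical names, no Convex APIs): prefer the embedded object while it still exists, and fall back to looking it up in the new table:

```typescript
// Illustrative dual-read used while the migration is in flight.
type Prefs = { theme: string };
type User = { preferences?: Prefs };

// lookup stands in for querying the new preferences table by userId.
function getPreferences(user: User, lookup: () => Prefs | null): Prefs | null {
  return user.preferences ?? lookup();
}
```

Once the migration is done and the embedded field is gone, this collapses to just the table lookup.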

Deleting documents based on some state

If you had a bug where you didn't delete related documents correctly, you might want to clean up documents based on the existence of another document. For example, one gotcha with vector databases is forgetting to delete embedding documents linked to chunks of documents that have been deleted. When you do a vector search, you'd get results that no longer exist. To delete the related documents you could do:

export const deleteOrphanedEmbeddings = migration({
  table: "embeddings",
  migrateOne: async (ctx, doc) => {
    const chunk = await ctx.db
      .query("chunks") // the table that references the embedding
      .withIndex("embeddingId", (q) => q.eq("embeddingId", doc._id))
      .unique();
    if (!chunk) {
      await ctx.db.delete(doc._id);
    }
  },
});

Setting up convex-helpers/server/migrations

To use the above migration helper, first install convex-helpers:

npm i convex-helpers@latest

It can optionally keep track of migration state, allowing you to resume or skip already-completed migrations. If you don't do this, you can still run migrations but you'll have to look at logs to know when it's done or what cursor to resume from in the case of failure. If you want persistence, add the migrations table in convex/schema.ts:

// In convex/schema.ts
import { defineSchema } from "convex/server";
import { migrationsTable } from "convex-helpers/server/migrations";

export default defineSchema({
  migrations: migrationsTable,
  // other tables...
});

You can pick any table name for this, but it should match migrationTable used below.

To define the migration helper, use makeMigration. In convex/migrations.ts (or wherever you want to define it):

import { makeMigration } from "convex-helpers/server/migrations";
import { internalMutation } from "./_generated/server";

const migration = makeMigration(internalMutation, {
  migrationTable: "migrations",
});

We'll assume the migrations are stateful for the rest of the post.

Defining migrations

As shown in previous sections, you use the migration wrapper to define internal mutations that run your migration function over all documents. In addition to the syntax above, you can also just return a patch from a migration:

export const myMigration = migration({
  table: "users",
  migrateOne: async (ctx, doc) => ({ someField: "some value" }),
  batchSize: 10,
});

If you don't provide a batchSize, it will default to 100.

Running a migration from code

You can start a migration from a Convex mutation or action with the startMigration function.

  • If it is already running it will refuse to start another duplicate worker.
  • If it had previously failed on some batch, it will continue from that batch unless you manually specify startCursor.
  • If you provide an explicit startCursor (null means to start at the beginning), it will start from there.
  • If you set dryRun: true then it will run and then throw, so no changes are committed, and you can see what it would have done. This is good for validating it does what you expect before running it on your data. Note: I often just run dry runs from the command line.

import { startMigration } from "convex-helpers/server/migrations";

// ... within a mutation or action
await startMigration(ctx, internal.migrations.myMigration, {
  startCursor: null, // optional override
  batchSize: 10, // optional override
});

Running a series of default migrations from code

It's sometimes handy to just add the migration you want to run to a list, and have them all run after a deploy, or via some script. startMigrationsSerially will run each migration that hasn't finished, one at a time.

  • If a migration had already completed, it will skip it.
  • If a migration had partial progress, it will resume from where it left off.
  • If a migration is already in progress when attempted, it will no-op.
  • If a migration fails, it will not continue to the next migration, in case you had some dependencies between the migrations. Call the series again to retry.

import { startMigrationsSerially } from "convex-helpers/server/migrations";
import { internalMutation } from "./_generated/server";

export default internalMutation(async (ctx) => {
  await startMigrationsSerially(ctx, [
    // list your migrations here, in order, e.g.:
    // internal.migrations.myMigration,
  ]);
});

Note: if you start multiple serial migrations, the behavior is:

  • If they don't overlap on functions, they will happily run in parallel.
  • If they have a function in common and one completes before the other attempts it, the second will just skip it.
  • If they have a function in common and one is in progress, the second will no-op and not run any further migrations in its series.

Running migrations from the CLI or dashboard

You can run migrations manually from the CLI or dashboard.

To run a single migration that will start or resume where it previously left off, run:

npx convex run migrations:myMigration '{"fn": "migrations:myMigration"}'

To run a series of migrations, like the example above where there's a default export in convex/migrations.ts running startMigrationsSerially, run:

npx convex run migrations

In production you could run this after a deploy:

npx convex deploy --cmd 'npm run build' && npx convex run migrations --prod

Note you pass --prod to run these commands in production.

Test a migration before running it to completion from the CLI

npx convex run migrations:myMigration '{"dryRun": true, "fn": "migrations:myMigration"}' # --prod

Restart a migration from the beginning from the CLI

npx convex run migrations:myMigration '{"cursor": null, "fn": "migrations:myMigration"}' # --prod

Or you can pass in any cursor to start from, e.g. where a previous migration left off, if you haven't configured it to be stateful with a table.

Stop a migration

You can stop a migration with the cancelMigration function. The currently running batch will complete, but it will not schedule further batches. This requires stateful migrations; here we pass in the table name "migrations".

import { cancelMigration } from "convex-helpers/server/migrations";

await cancelMigration(ctx, "migrations", internal.migrations.myMigration);

You can also write an internal mutation that calls cancel for some job, so you can cancel a migration without pushing new code:

import { v } from "convex/values";
import { internalMutation } from "./_generated/server";
import { cancelMigration } from "convex-helpers/server/migrations";

export const cancel = internalMutation({
  args: { fn: v.string() },
  handler: async (ctx, { fn }) => {
    return await cancelMigration(ctx, "migrations", fn);
  },
});

And call it (assuming here that it's in convex/migrations.ts):

npx convex run migrations:cancel '{"fn": "migrations:myMigration" }' # --prod

Get the status of migrations

To see how a migration has progressed, you can use the getStatus function, either for specific migrations:

import { getStatus, MigrationStatus } from "convex-helpers/server/migrations";

// We annotate the type here to avoid circular references if we use this
// value in the return of a function (part of the internal.* type).
const status: MigrationStatus<"migrations"> = await getStatus(ctx, {
  migrationTable: "migrations",
  migrations: [internal.migrationsExample.increment],
});

Or you can get the status of the most recent migrations (defaults to 10):

export const status = internalQuery(async (ctx) => {
  return await getStatus(ctx, { migrationTable: "migrations", limit: 10 });
});

If you define an internalQuery like this, you can watch the status of your migration live from the CLI:

npx convex run --watch migrations:status # --prod

Defining your own migrations

How would you do this without the migration helper? The rest of this post is here if you want to know how to build some of this yourself. If you're happy with the helpers, you can stop reading here.

If your table is small enough (let’s say a few thousand rows, as a guideline), you could just do it all in one mutation. For example:

export const doMigration = internalMutation(async ({ db }) => {
  const teams = await db.query("teams").collect();
  for (const team of teams) {
    // modify the team and write it back to the db here
  }
});

This would define the doMigration mutation, which you could run from the dashboard or via npx convex run.

Big tables

For larger tables, reading the whole table in a single mutation becomes impossible. Even with smaller tables, if there are a lot of active writes to the table, you might want to break the work into smaller chunks to avoid conflicts. Convex will automatically retry failed mutations up to a limit, and mutations don’t block queries, but it’s still best to avoid scenarios that make conflicts likely.

There are a few ways to break up the work. The helper uses pagination: each mutation operates on one batch of documents and records how far it got, so the next worker can efficiently pick up the next batch. A nice benefit is that you can track your progress, and if the migration fails on some batch, you can note the cursor that batch started with and restart the migration there. Thanks to Convex’s transactional guarantees, either all of a batch’s writes commit or none do. A mutation that works with a page of data might look like this:

export const myMigrationBatch = internalMutation(
  async ({ db }, { cursor, numItems }) => {
    const data = await db.query("mytable").paginate({ cursor, numItems });
    const { page, isDone, continueCursor } = data;
    for (const doc of page) {
      // modify doc
    }
    return { cursor: continueCursor, isDone };
  }
);

Running a batch

To try out your migration, you might try running it on one chunk of data via the CLI or by going to the functions panel on the dashboard and clicking “Run function.” To run from the beginning of the table, you’d pass as an argument:

{ cursor: null, numItems: 1 }

On the CLI it would be:

npx convex run mutations:myMigrationBatch '{ "cursor": null, "numItems": 1 }'

It would then run and return the next cursor (and print it to the console so you can look back if you lose track of it). To run the next batch, just update the parameter to the cursor string instead of null.

You could keep running it from here, but it might start to feel tedious. Once you have confidence in the code and batch size, you can start running the rest. You can even pass in the cursor you got from testing on the dashboard to skip the documents you’ve already processed ☝️.

Looping batches from an action

To iterate through chunks, you can call it from an action in a loop:

export const runMigration = internalAction(
  async ({ runMutation }, { name, cursor, batchSize }) => {
    let isDone = false;
    while (!isDone) {
      const args = { cursor, numItems: batchSize };
      ({ isDone, cursor } = await runMutation(name, args));
    }
  }
);

You can then go to the dashboard page for the runMigration function and test run the mutation with the arguments { name: "myMigrationBatch", cursor: null, batchSize: 1 }

Here "myMigrationBatch" is whatever your mutation’s path is, e.g. if it’s in the file convex/migrations/someMigration.js, it would be "migrations/someMigration:myMigrationBatch".

To use the CLI, you could run:

npx convex run migrations:runMigration '{ "name": "myMigrationBatch", "cursor": null, "batchSize": 1 }'

It is also possible to loop from a client, such as the ConvexHttpClient, if you make it a public mutation. You could also recursively schedule the mutation to run, which is what we'll look at next.

Batching via recursive scheduling

In the helpers, we use recursive scheduling for batches: a mutation keeps scheduling itself until the pagination is done. This is simpler because you don't need a separate runMigration function; the mutation schedules itself. This is why it takes a fn parameter: so it knows how to call itself. Read the code to see it in action.
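Stripped of Convex specifics, the pattern looks like this (a synchronous sketch where plain recursion stands in for the scheduler kicking off the next batch):

```typescript
// Each call handles one page, then "schedules" itself with the next cursor
// until pagination reports done. In Convex, the recursive call would instead
// be a scheduled mutation carrying the continue cursor as its argument.
function runBatches<T>(
  docs: T[],
  migrateOne: (doc: T) => void,
  batchSize: number,
  cursor = 0 // stands in for the opaque pagination cursor
): number {
  const page = docs.slice(cursor, cursor + batchSize);
  page.forEach(migrateOne);
  const next = cursor + page.length;
  if (next >= docs.length) return next; // isDone: stop rescheduling
  return runBatches(docs, migrateOne, batchSize, next); // "schedule" next batch
}
```

Because each "scheduled" call receives only the cursor, a failed batch can be retried from exactly where it started, matching the resume behavior described above.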

An aside on serial vs. parallelizing

You might be wondering whether we should be doing all of this in parallel. I’d urge you to start doing it serially, and only add parallelization gradually if it’s actually too slow. As a general principle with backend systems, avoid sending big bursts of traffic when possible. Even without causing explicit failures, it could affect latencies for user requests if you flood the database with too much traffic at once. This is a different mindset from an analytics database where you’d optimize for throughput. I think you’ll be surprised how fast a serial approach works in most cases. The helpers run serially. Reach out if you want to explore more parallelism.


In this post, we looked at strategies for migrating data in Convex using mutation functions. As with other posts, the magic is in composing helper functions and leveraging the fact that you get to write JavaScript or TypeScript rather than divining the right SQL incantation. The code for the helpers is available in the convex-helpers package and visible on GitHub, and if you have any questions, don’t hesitate to reach out in Discord.



  1. Technically, there are some restrictions on Convex values, such as array lengths and object key names that you can read about here.
