+++
title = "Migrating from 1.0 to 2.0"
weight = 70
+++

**This guide explains how to migrate to v2.x if you have an existing v1.x.x cluster.
We don't recommend trying to migrate to v2.x directly from v0.9.x or older.**

This migration procedure has been tested on several clusters without issues.
However, it is still a *critical procedure* that might cause issues.
**Make sure to back up all your data before attempting it!**

You might also want to read our [general documentation on upgrading Garage](@/documentation/operations/upgrading.md).

## Changes introduced in v2.0

The following are **breaking changes** in Garage v2.0 that require your attention when migrating:
- The administration API has been completely reworked.
  Some calls to the `/v1/` endpoints will still work, but most will not.
  New endpoints are prefixed by `/v2/`. **You will need to update all your code that makes use of the admin API.**
- `replication_mode` is no longer a supported configuration parameter;
  please use `replication_factor` and `consistency_mode` instead.
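
For example, a v1.x configuration using `replication_mode` might be rewritten as follows (a sketch only: the factor of 3 and the `"consistent"` mode are illustrative values, so substitute your cluster's actual settings):

```toml
# Before (v1.x) -- no longer accepted by Garage v2.0:
# replication_mode = "3"

# After (v2.0) -- split into two explicit parameters:
replication_factor = 3
consistency_mode = "consistent"
```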

## Migration procedure

The migration to Garage v2.0 can be done with almost no downtime
by restarting all nodes at once on the new version.

The migration steps are as follows:

1. Do a `garage repair --all-nodes --yes tables`, check the logs, and verify
   that all data seems to be synced correctly between nodes. If you have time,
   run additional `garage repair` procedures (`blocks`, `versions`,
   `block_refs`, etc.).
2. Ensure you have a snapshot of your Garage installation that you can restore
   to in case the upgrade goes wrong, using one of the following options:
   - You may use the `garage meta snapshot --all` command
     to make a backup snapshot of the metadata directories of your nodes.
     Once this command has completed, copy the following
     files and directories from the `metadata_dir` of all your nodes
     to somewhere safe: `snapshots`, `cluster_layout`, `data_layout`,
     `node_key`, `node_key.pub`. (If you have set `metadata_snapshots_dir`
     to a different value in your config file, back up that directory instead.)
   - If you are running a filesystem such as ZFS or BTRFS that supports
     snapshotting, you can create a filesystem-level snapshot of the `metadata_dir`
     of all your nodes to be used as a restoration point if needed.
   - You may also make a backup manually: turn off each node
     individually, back up its metadata folder (for instance, if your metadata
     directory is `/var/lib/garage/meta`, use `cd /var/lib/garage ; tar -acf
     meta-v1.0.tar.zst meta/`), and turn it back on again. This allows you to
     back up all nodes without impacting global cluster availability. You can
     do all nodes of a single zone at once, as this does not impact the
     availability of Garage.
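
The manual backup from the last option above can be rehearsed safely against a throwaway directory before touching a real node. This is only a sketch: the directory created here stands in for a real node's `/var/lib/garage`, and `.tar.gz` is used in place of the guide's `.tar.zst` in case `zstd` is not installed.

```shell
# Create a throwaway directory structure standing in for a node's
# /var/lib/garage (on a real node, stop the daemon first and archive
# the actual metadata_dir instead).
GARAGE_DIR=$(mktemp -d)                     # stand-in for /var/lib/garage
mkdir -p "$GARAGE_DIR/meta"                 # stand-in metadata_dir
echo "placeholder" > "$GARAGE_DIR/meta/db"  # fake metadata file

# Archive the metadata folder; -a auto-selects the compressor from the
# file suffix (the guide uses .tar.zst, gzip is used here for portability).
cd "$GARAGE_DIR"
tar -acf meta-v1.0.tar.gz meta/
ls -l meta-v1.0.tar.gz
```

On a real node, copy the resulting archive somewhere safe before turning the node back on.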
3. Prepare your updated binaries and configuration files for Garage v2.0.
   **Remember to update your configuration file to remove `replication_mode`
   and replace it with `replication_factor` and `consistency_mode`.**
4. Shut down all v1.0 nodes simultaneously, and restart them all simultaneously
   on v2.0. Use your favorite deployment tool (Ansible, Kubernetes, Nomad) to
   achieve this as fast as possible. Garage v2.0 should be in a working state
   as soon as enough nodes have started.
5. Monitor your cluster in the following hours to see if it works well under
   your production load.