Compare commits

...

55 commits

Author SHA1 Message Date
Alex Auvolat
b6b18427a5 use optimization level 3 and thin LTO for release builds (#1405)
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1405
Co-authored-by: Alex Auvolat <lx@deuxfleurs.fr>
Co-committed-by: Alex Auvolat <lx@deuxfleurs.fr>
2026-04-16 08:47:02 +00:00
Gauthier Zirnhelt
9987166b2b Fix the LifecycleWorker being uncooperative (#1396)
## Summary

This PR ensures that the `LifecycleWorker` yields at least once to the Tokio scheduler in between each batch of 100 objects.

## Problem being solved

I'm administering a Garage cluster which has been experiencing timeouts on all endpoints while the lifecycle worker is running at midnight UTC: `Ping timeout` error messages, and even requests eventually failing with `Could not reach quorum ...`.

I have found that this happens while the lifecycle worker is working on a big bucket (containing millions of objects) with a lifecycle rule that applies to very few objects.
The `process_object()` function does not hit any `await`:
- `last_bucket` is always the same, so the `bucket_table` is not read asynchronously
- no transaction is made on the `object_table` because my lifecycle rule (almost) never applies to any object

The first commit in this PR adds an executable which reproduces the problem I've been experiencing in a self-contained way: the lifecycle worker starves the Tokio scheduler so much that no other task is able to run (or only very rarely).
To run it: `cargo run -p garage_model --bin lifecycle-starvation-test`.
This commit can be dropped post-review, as it is only useful to demonstrate the starvation.

The error messages stopped completely after deploying the extra yield to the nodes of my cluster.
The duration of the lifecycle worker task does not appear to have changed at all, judging by the timestamps produced either by the self-contained binary or by each of my nodes in the `Lifecycle worker finished` message.
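The fix boils down to inserting a yield between batches. Tokio's `yield_now()` is essentially a future that returns `Pending` once while immediately re-waking itself, handing control back to the scheduler so other tasks (RPC pings, S3 requests) get a chance to run. A std-only illustration of that mechanism (`YieldNow` and `noop_waker` are written for this sketch; this is not Garage's or Tokio's actual code):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Minimal stand-in for `tokio::task::yield_now()`: returns Pending once
/// (re-waking itself so the scheduler re-enqueues the task), then Ready
/// on the next poll. The Pending return is the cooperation point where
/// other tasks get to run.
struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref(); // ask to be polled again immediately
            Poll::Pending
        }
    }
}

/// A waker that does nothing, just enough to drive the future by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = YieldNow { yielded: false };
    let mut pinned = Pin::new(&mut fut);
    // First poll: the future suspends, returning control to the scheduler;
    // this is where other starved tasks would finally run.
    assert!(matches!(pinned.as_mut().poll(&mut cx), Poll::Pending));
    // Second poll: the yield is over and the worker resumes its next batch.
    assert!(matches!(pinned.as_mut().poll(&mut cx), Poll::Ready(())));
}
```

Without such a suspension point, a loop that never hits `.await` monopolizes its Tokio worker thread, which is exactly the starvation the PR describes.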

## Note

Another potential fix would have been to force the `WorkerProcessor` to yield before re-enqueuing a busy task, but this would have affected all Garage workers, even though only the `LifecycleWorker` is being uncooperative.

Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1396
Reviewed-by: Alex <lx@deuxfleurs.fr>
Co-authored-by: Gauthier Zirnhelt <gauthier.zirnhelt@insimo.fr>
Co-committed-by: Gauthier Zirnhelt <gauthier.zirnhelt@insimo.fr>
2026-04-15 09:56:24 +00:00
trinity-1686a
b72b090a09 fix silent write errors (#1358)
fix #1355

Some write errors are not reported when calling `write_all`. That is notably the case for ENOSPC on small buffers (1 MiB).
On ext4, the error is caught when calling `flush()`. This is hopefully the case on most local filesystems, though as far as I know this assumption doesn't hold for NFS.
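The failure mode can be simulated with a writer that, like a page-cache-backed file hitting ENOSPC, accepts buffered writes without error and only surfaces the failure on `flush()`. A hedged std-only sketch (the `FullDiskWriter` type is invented for illustration) showing why the flush result must be checked:

```rust
use std::io::{self, Write};

/// Simulates a file on a full disk: write() reports success (as a
/// page-cache write can), and the ENOSPC-style error only appears
/// when the buffered data is flushed.
struct FullDiskWriter {
    pending_error: bool,
}

impl Write for FullDiskWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        // The data is "accepted" even though it will never hit the disk.
        self.pending_error = true;
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        if self.pending_error {
            Err(io::Error::new(
                io::ErrorKind::Other,
                "No space left on device (ENOSPC)",
            ))
        } else {
            Ok(())
        }
    }
}

fn main() {
    let mut w = FullDiskWriter { pending_error: false };
    // write_all() of a small (1 MiB) buffer reports success...
    assert!(w.write_all(&vec![0u8; 1 << 20]).is_ok());
    // ...and only flush() surfaces the error, so its result must be checked.
    assert!(w.flush().is_err());
}
```

This is why checking only `write_all` is insufficient: the error can legitimately be deferred to `flush()` (or `fsync`) by the filesystem.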

Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1358
Co-authored-by: trinity-1686a <trinity@deuxfleurs.fr>
Co-committed-by: trinity-1686a <trinity@deuxfleurs.fr>
2026-02-21 07:21:24 +00:00
Armael
8551aefed4 Fix: correctly parse CORS website configuration with no rules (#1320)
When sending a website config with an empty list of CORS rules, Garage currently incorrectly refuses it with the error message "Invalid XML: missing field `CORSRule`".
This fixes the issue by following the quick-xml documentation on serde field parameters for this specific scenario: https://docs.rs/quick-xml/latest/quick_xml/de/#sequences-xsall-and-xssequence-xml-schema-types
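The pattern the quick-xml documentation recommends for "zero or more" XML children is to mark the sequence field with serde's `default`, so an absent element deserializes to an empty `Vec` instead of a "missing field" error. A hedged sketch assuming the `quick-xml` and `serde` crates (type and field names here are illustrative, not Garage's actual ones):

```rust
use serde::Deserialize;

// Without `default`, deserializing input that contains no <CORSRule>
// children fails with "missing field `CORSRule`"; with it, `rules`
// simply becomes an empty Vec.
#[derive(Deserialize)]
struct CorsConfiguration {
    #[serde(rename = "CORSRule", default)]
    rules: Vec<CorsRule>,
}

#[derive(Deserialize)]
struct CorsRule {
    #[serde(rename = "AllowedOrigin", default)]
    allowed_origins: Vec<String>,
}
```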

(I've based this PR on main-v1 because we want it for Deuxfleurs' deployment.)

Co-authored-by: Armaël Guéneau <armael.gueneau@ens-lyon.org>
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1320
Co-authored-by: Armael <armael@noreply.localhost>
Co-committed-by: Armael <armael@noreply.localhost>
2026-02-07 13:11:20 +00:00
Alex Auvolat
47bf5d9fb0 bump version to v1.3.1 2026-01-24 13:01:27 +01:00
Alex Auvolat
5df37dae5e update cargo dependencies in main-v1 (#1299)
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1299
Co-authored-by: Alex Auvolat <lx@deuxfleurs.fr>
Co-committed-by: Alex Auvolat <lx@deuxfleurs.fr>
2026-01-24 11:59:01 +00:00
Alex
44af0bdab3 Merge pull request 'Backport #1283 and #1290 to main-v1' (#1297) from backports-v1 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1297
2026-01-24 11:34:28 +00:00
rmoff
a7d6620e18 Fix typo in error message 2026-01-24 12:21:45 +01:00
Joe Anderson
8eb12755e4 Allow bucket to be missing from presigned post params 2026-01-24 12:21:25 +01:00
maximilien
c685a2cbaf Merge pull request 'Update doc/book/cookbook/binary-packages.md' (#1269) from nmstoker/garage:main-v1 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1269
2025-12-21 21:12:15 +00:00
maximilien
969f42a970 Merge pull request 'feat: add service annotations' (#1264) from deimosfr/garage:feat/add_helm_svc_annotations into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1264
Reviewed-by: maximilien <git@mricher.fr>
2025-12-21 21:00:00 +00:00
nmstoker
424d4f8d4d Update doc/book/cookbook/binary-packages.md
Correct the Arch Linux link, as garage is now available in the official repositories under extra and no longer in the AUR.
2025-12-20 13:16:38 +00:00
Pierre Mavro
bf5290036f feat: add service annotations 2025-12-18 18:12:22 +01:00
Alex
4efc8bac07 Merge pull request 'Add the parameter, which replaces . This is to accommodate different storage media such as HDD and NVMe.' (#1251) from perrynzhou/garage:dev into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1251
Reviewed-by: Alex <lx@deuxfleurs.fr>
2025-12-17 10:05:49 +00:00
perrynzhou
f3dcc39903 Merge branch 'main-v1' into dev 2025-12-17 10:05:19 +00:00
maximilien
43e02920c2 Merge pull request 'docs: fix typo in doc/book/cookbook/kubernetes.md' (#1259) from simonpasquier/garage:fix-typo into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1259
Reviewed-by: maximilien <me@mricher.fr>
2025-12-17 07:09:59 +00:00
Simon Pasquier
dcc2fe4ac5 docs: fix typo in doc/book/cookbook/kubernetes.md 2025-12-16 10:16:44 +01:00
perrynzhou@gmail.com
e3a5ec6ef6 rename put_blocks_max_parallel to block_max_concurrent_writes_per_request and update configuration.md 2025-12-12 07:09:38 +08:00
perrynzhou@gmail.com
4d124e1c76 Add the parameter, which replaces . This is to accommodate different storage media such as HDD and NVMe. 2025-12-10 06:43:51 +08:00
Alex
d769a7be5d Merge pull request 'Update rust toolchain to 1.91.0' (#1233) from toolchain-update into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1233
2025-11-25 09:58:17 +00:00
Alex Auvolat
511cf0c6ec disable awscli checksumming in ci scripts
required because garage.deuxfleurs.fr is still running v1.x
2025-11-24 18:37:34 +01:00
Alex Auvolat
95693d45b2 run cargo fmt as a nix derivation 2025-11-24 18:09:53 +01:00
Alex Auvolat
ca296477f3 disable checksums in aws cli (todo: revert in main-v2) 2025-11-24 17:58:57 +01:00
Alex Auvolat
ca3b4a050d update nixos image used in woodpecker ci 2025-11-24 17:35:51 +01:00
Alex Auvolat
a057ab23ea Update rust toolchain 2025-11-24 11:09:46 +01:00
Alex
58bc65b9a8 Merge pull request 'migrate to thiserror, garage-v1' (#1218) from thiserror into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1218
Reviewed-by: Alex <lx@deuxfleurs.fr>
2025-11-12 08:05:32 +00:00
trinity-1686a
ac851d6dee fmt 2025-11-01 18:04:54 +01:00
trinity-1686a
eac2aa6fe4 Merge pull request 'fix: default config path changed for alpine binary' (#1204) from berndsen-io/garage:fix-alpine-docs into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1204
Reviewed-by: trinity-1686a <trinity@deuxfleurs.fr>
2025-11-01 16:43:32 +00:00
trinity-1686a
1e0201ada2 Merge pull request 'Update link to signature v2.' (#1211) from teo-tsirpanis/garage:sigv2-docs into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1211
Reviewed-by: trinity-1686a <trinity@deuxfleurs.fr>
2025-11-01 16:43:05 +00:00
trinity-1686a
82297371bf migrate to thiserror
it doesn't generate a bazillion warnings at compile time
2025-11-01 17:20:39 +01:00
teo-tsirpanis
174f4f01a8 Update link to signature v2. 2025-10-26 15:54:08 +00:00
fgberry
1aac7b4875 chore: spacing 2025-10-24 11:25:33 +02:00
fgberry
b43c58cbe5 fix: default config path changed for alpine binary 2025-10-24 11:22:32 +02:00
Alex
9481ac428e Merge pull request 'sigv4: don't enforce x-amz-content-sha256 to be in signed headers list (fix #770)' (#1195) from fix-770 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1195
2025-10-14 09:34:35 +00:00
Alex Auvolat
1c29d04cc5 sigv4: don't enforce x-amz-content-sha256 to be in signed headers list (fix #770)
From the following page:
https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html

> In both cases, because the x-amz-content-sha256 header value is already
> part of your HashedPayload, you are not required to include the
> x-amz-content-sha256 header as a canonical header.
2025-10-14 11:18:25 +02:00
Alex
b48a8eaa1f Merge pull request 'properly handle precondition time equal to object time' (#1193) from precondition-ms into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1193
2025-10-10 19:41:06 +00:00
trinity-1686a
42fd8583bd properly handle precondition time equal to object time 2025-10-08 17:54:22 +02:00
Alex
236af3a958 Merge pull request 'Garage v1.3.0' (#1166) from rel-v1.3.0 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1166
2025-09-14 21:26:21 +00:00
Alex Auvolat
4b1fdbef55 bump version to v1.3.0 2025-09-14 21:36:33 +02:00
Alex Auvolat
0f1b488be0 fix rust warnings 2025-09-14 21:25:37 +02:00
Alex
0bbf63ee0e Merge pull request 'update rusqlite and snapshot using VACUUM INTO' (#1164) from update-rusqlite into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1164
2025-09-14 18:28:01 +00:00
Alex
879d941d7b Merge pull request 'add garage repair clear-resync-queue (fix #1151)' (#1165) from clear-resync-queue into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1165
2025-09-14 17:50:41 +00:00
Alex Auvolat
d726cf0299 add garage repair clear-resync-queue (fix #1151) 2025-09-14 19:34:44 +02:00
Alex
0c7aeab6f8 Merge pull request 'garage_db: fix error handling logic (fix #1138)' (#1163) from fix-1138 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1163
2025-09-14 17:26:08 +00:00
Alex Auvolat
5687fc0375 update rusqlite and snapshot using VACUUM INTO 2025-09-14 19:22:36 +02:00
Alex
97f1e9ab52 Merge pull request 'Add Plakar documentation (backup tools)' (#1119) from Lapineige/garage:Plakar_support into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1119
2025-09-14 16:08:36 +00:00
Lapineige
60b1d78b56 Add Plakar documentation 2025-09-14 18:07:49 +02:00
Alex Auvolat
4c895a7186 garage_db: fix error handling logic (fix #1138) 2025-09-14 18:03:31 +02:00
Alex
c3b5cbf212 Merge pull request 'fix panic when cluster_layout cannot be saved (fix #1150)' (#1158) from fix-1150 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1158
2025-09-13 15:58:52 +00:00
Alex
57a467b5c0 Merge pull request 'Block manager: limit simultaneous block reads from disk' (#1157) from block-max-simultaneous-reads into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1157
2025-09-13 15:53:24 +00:00
Alex Auvolat
6cf6db5c61 fix panic when cluster_layout cannot be saved (fix #1150) 2025-09-13 17:49:25 +02:00
Alex Auvolat
d5a57e3e13 block: read_block: don't add not found blocks to resync queue 2025-09-13 17:38:23 +02:00
Alex Auvolat
5cf354acb4 block: maximum number of simultaneous reads 2025-09-13 17:38:06 +02:00
Alex
2b007ddea3 Merge pull request 'woodpecker: require the nix=enabled label' (#1152) from woodpecker-nix-flag into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1152
2025-09-04 09:10:10 +00:00
Alex Auvolat
c8599a8636 woodpecker: require the nix=enabled label 2025-09-04 11:06:46 +02:00
65 changed files with 1497 additions and 1145 deletions


@ -1,3 +1,6 @@
labels:
nix: "enabled"
when:
event:
- push
@ -9,32 +12,32 @@ when:
steps:
- name: check formatting
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-shell --attr devShell --run "cargo fmt -- --check"
- nix-build -j4 --attr flakePackages.fmt
- name: build
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-build -j4 --attr flakePackages.dev
- name: unit + func tests (lmdb)
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-build -j4 --attr flakePackages.tests-lmdb
- name: unit + func tests (sqlite)
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-build -j4 --attr flakePackages.tests-sqlite
- name: unit + func tests (fjall)
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-build -j4 --attr flakePackages.tests-fjall
- name: integration tests
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-build -j4 --attr flakePackages.dev
- nix-shell --attr ci --run ./script/test-smoke.sh || (cat /tmp/garage.log; false)


@ -1,3 +1,6 @@
labels:
nix: "enabled"
when:
event:
- deployment
@ -8,7 +11,7 @@ depends_on:
steps:
- name: refresh-index
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
environment:
AWS_ACCESS_KEY_ID:
from_secret: garagehq_aws_access_key_id
@ -19,7 +22,7 @@ steps:
- nix-shell --attr ci --run "refresh_index"
- name: multiarch-docker
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
environment:
DOCKER_AUTH:
from_secret: docker_auth


@ -1,3 +1,6 @@
labels:
nix: "enabled"
when:
event:
- deployment
@ -16,17 +19,17 @@ matrix:
steps:
- name: build
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-build --attr releasePackages.${ARCH} --argstr git_version ${CI_COMMIT_TAG:-$CI_COMMIT_SHA}
- name: check is static binary
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-shell --attr ci --run "./script/not-dynamic.sh result/bin/garage"
- name: integration tests
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-shell --attr ci --run ./script/test-smoke.sh || (cat /tmp/garage.log; false)
when:
@ -36,7 +39,7 @@ steps:
ARCH: i386
- name: upgrade tests
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
commands:
- nix-shell --attr ci --run "./script/test-upgrade.sh v0.8.4 x86_64-unknown-linux-musl" || (cat /tmp/garage.log; false)
when:
@ -44,7 +47,7 @@ steps:
ARCH: amd64
- name: push static binary
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
environment:
TARGET: "${TARGET}"
AWS_ACCESS_KEY_ID:
@ -55,7 +58,7 @@ steps:
- nix-shell --attr ci --run "to_s3"
- name: docker build and publish
image: nixpkgs/nix:nixos-22.05
image: nixpkgs/nix:nixos-24.05
environment:
DOCKER_PLATFORM: "linux/${ARCH}"
CONTAINER_NAME: "dxflrs/${ARCH}_garage"

Cargo.lock generated (1744 lines changed)

File diff suppressed because it is too large


@ -24,18 +24,18 @@ default-members = ["src/garage"]
# Internal Garage crates
format_table = { version = "0.1.1", path = "src/format-table" }
garage_api_common = { version = "1.2.0", path = "src/api/common" }
garage_api_admin = { version = "1.2.0", path = "src/api/admin" }
garage_api_s3 = { version = "1.2.0", path = "src/api/s3" }
garage_api_k2v = { version = "1.2.0", path = "src/api/k2v" }
garage_block = { version = "1.2.0", path = "src/block" }
garage_db = { version = "1.2.0", path = "src/db", default-features = false }
garage_model = { version = "1.2.0", path = "src/model", default-features = false }
garage_net = { version = "1.2.0", path = "src/net" }
garage_rpc = { version = "1.2.0", path = "src/rpc" }
garage_table = { version = "1.2.0", path = "src/table" }
garage_util = { version = "1.2.0", path = "src/util" }
garage_web = { version = "1.2.0", path = "src/web" }
garage_api_common = { version = "1.3.1", path = "src/api/common" }
garage_api_admin = { version = "1.3.1", path = "src/api/admin" }
garage_api_s3 = { version = "1.3.1", path = "src/api/s3" }
garage_api_k2v = { version = "1.3.1", path = "src/api/k2v" }
garage_block = { version = "1.3.1", path = "src/block" }
garage_db = { version = "1.3.1", path = "src/db", default-features = false }
garage_model = { version = "1.3.1", path = "src/model", default-features = false }
garage_net = { version = "1.3.1", path = "src/net" }
garage_rpc = { version = "1.3.1", path = "src/rpc" }
garage_table = { version = "1.3.1", path = "src/table" }
garage_util = { version = "1.3.1", path = "src/util" }
garage_web = { version = "1.3.1", path = "src/web" }
k2v-client = { version = "0.0.4", path = "src/k2v-client" }
# External crates from crates.io
@ -52,7 +52,6 @@ chrono = "0.4"
crc32fast = "1.4"
crc32c = "0.6"
crypto-common = "0.1"
err-derive = "0.3"
gethostname = "0.4"
git-version = "0.3.4"
hex = "0.4"
@ -88,9 +87,9 @@ tracing-journald = "0.3.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
heed = { version = "0.11", default-features = false, features = ["lmdb"] }
rusqlite = "0.31.0"
rusqlite = "0.37"
r2d2 = "0.8"
r2d2_sqlite = "0.24"
r2d2_sqlite = "0.31"
fjall = "2.4"
async-compression = { version = "0.4", features = ["tokio", "zstd"] }
@ -137,7 +136,7 @@ prometheus = "0.13"
aws-sigv4 = { version = "1.1", default-features = false }
hyper-rustls = { version = "0.26", default-features = false, features = ["http1", "http2", "ring", "rustls-native-certs"] }
log = "0.4"
thiserror = "1.0"
thiserror = "2.0"
# ---- used only as build / dev dependencies ----
assert-json-diff = "2.0"
@ -147,12 +146,8 @@ aws-smithy-runtime = { version = "1.8", default-features = false, features = ["t
aws-sdk-config = { version = "1.62", default-features = false }
aws-sdk-s3 = { version = "1.79", default-features = false, features = ["rt-tokio"] }
[profile.dev]
#lto = "thin" # disabled for now, adds 2-4 min to each CI build
lto = "off"
[profile.release]
lto = true
codegen-units = 1
opt-level = "s"
strip = true
lto = "thin"
codegen-units = 16
opt-level = 3
strip = "debuginfo"


@ -161,3 +161,49 @@ kopia repository validate-provider
You can then run all the standard kopia commands: `kopia snapshot create`, `kopia mount`...
Everything should work out-of-the-box.
## Plakar
Create your key and bucket on Garage server:
```bash
garage key create my-plakar-key
garage bucket create plakar-backups
garage bucket allow plakar-backups --read --write --key my-plakar-key
```
On Plakar server, add your Garage as a storage location:
```bash
# Use region=garage, or whatever region you've specified in garage.toml
plakar store add garageS3 s3://my-garage.tld/plakar-backups \
    region=garage \
    access_key=<Key ID from "garage key info my-plakar-key"> \
    secret_access_key=<Secret key from "garage key info my-plakar-key">
```
Then create the repository.
```bash
plakar at @garageS3 create -plaintext  # unencrypted
# or
plakar at @garageS3 create             # encrypted (the default)
```
If you encrypt your backups (the Plakar default), you will need to define a strong passphrase. Do not forget to store it safely: it will be needed to decrypt your backups.
After the repository has been created, check that everything works as expected (this may return an empty result since no files have been added yet, but there should be no error message):
```bash
plakar at @garageS3 check
```
Now that everything is configured, you can use Garage as your backup storage. For instance, sync a local backup store to it:
```bash
plakar at ~/backups sync to @garageS3
```
Or list the S3 storage content:
```bash
plakar at @garageS3 ls
```
More information is available in the Plakar documentation: https://www.plakar.io/docs/main/quickstart/


@ -15,9 +15,10 @@ Alpine Linux repositories (available since v3.17):
apk add garage
```
The default configuration file is installed to `/etc/garage.toml`. You can run
Garage using: `rc-service garage start`. If you don't specify `rpc_secret`, it
will be automatically replaced with a random string on the first start.
The default configuration file is installed to `/etc/garage/garage.toml`. You can run
Garage using: `rc-service garage start`.
If you don't specify `rpc_secret`, it will be automatically replaced with a random string on the first start.
Please note that this package is built without Consul discovery, Kubernetes
discovery, OpenTelemetry exporter, and K2V features (K2V will be enabled once
@ -26,7 +27,7 @@ it's stable).
## Arch Linux
Garage is available in the [AUR](https://aur.archlinux.org/packages/garage).
Garage is available in the official repositories under [extra](https://archlinux.org/packages/extra/x86_64/garage).
## FreeBSD


@ -11,7 +11,7 @@ Firstly clone the repository:
```bash
git clone https://git.deuxfleurs.fr/Deuxfleurs/garage
cd garage/scripts/helm
cd garage/script/helm
```
Deploy with default options:


@ -96,14 +96,14 @@ to store 2 TB of data in total.
## Get a Docker image
Our docker image is currently named `dxflrs/garage` and is stored on the [Docker Hub](https://hub.docker.com/r/dxflrs/garage/tags?page=1&ordering=last_updated).
We encourage you to use a fixed tag (eg. `v1.2.0`) and not the `latest` tag.
For this example, we will use the latest published version at the time of the writing which is `v1.2.0` but it's up to you
We encourage you to use a fixed tag (e.g. `v1.3.0`) and not the `latest` tag.
For this example, we will use the latest published version at the time of writing, which is `v1.3.0`, but it is up to you
to check [the most recent versions on the Docker Hub](https://hub.docker.com/r/dxflrs/garage/tags?page=1&ordering=last_updated).
For example:
```
sudo docker pull dxflrs/garage:v1.2.0
sudo docker pull dxflrs/garage:v1.3.0
```
## Deploying and configuring Garage
@ -171,7 +171,7 @@ docker run \
-v /etc/garage.toml:/etc/garage.toml \
-v /var/lib/garage/meta:/var/lib/garage/meta \
-v /var/lib/garage/data:/var/lib/garage/data \
dxflrs/garage:v1.2.0
dxflrs/garage:v1.3.0
```
With this command line, Garage should be started automatically at each boot.
@ -185,7 +185,7 @@ If you want to use `docker-compose`, you may use the following `docker-compose.y
version: "3"
services:
garage:
image: dxflrs/garage:v1.2.0
image: dxflrs/garage:v1.3.0
network_mode: "host"
restart: unless-stopped
volumes:


@ -132,7 +132,7 @@ docker run \
-v /path/to/garage.toml:/etc/garage.toml \
-v /path/to/garage/meta:/var/lib/garage/meta \
-v /path/to/garage/data:/var/lib/garage/data \
dxflrs/garage:v1.2.0
dxflrs/garage:v1.3.0
```
Under Linux, you can substitute `--network host` for `-p 3900:3900 -p 3901:3901 -p 3902:3902 -p 3903:3903`


@ -24,7 +24,8 @@ db_engine = "lmdb"
block_size = "1M"
block_ram_buffer_max = "256MiB"
block_max_concurrent_reads = 16
block_max_concurrent_writes_per_request = 10
lmdb_map_size = "1T"
compression_level = 1
@ -96,7 +97,9 @@ The following gives details about each available configuration option.
Top-level configuration options, in alphabetical order:
[`allow_punycode`](#allow_punycode),
[`allow_world_readable_secrets`](#allow_world_readable_secrets),
[`block_max_concurrent_reads`](#block_max_concurrent_reads),
[`block_max_concurrent_writes_per_request`](#block_max_concurrent_writes_per_request),
[`block_ram_buffer_max`](#block_ram_buffer_max),
[`block_size`](#block_size),
[`bootstrap_peers`](#bootstrap_peers),
[`compression_level`](#compression_level),
@ -522,6 +525,37 @@ node.
The default value is 256MiB.
#### `block_max_concurrent_reads` (since `v1.3.0` / `v2.1.0`) {#block_max_concurrent_reads}
The maximum number of blocks (individual files in the data directory) open
simultaneously for reading.
Reducing this number does not limit the number of data blocks that can be
transferred through the network simultaneously. This mechanism was added
purely as backpressure on HDD read speed: it helps avoid a situation
where too many requests come in and Garage reads too many block
files simultaneously, failing to make timely progress on any of the reads.
When a request to read a data block comes in through the network, the request
waits for one of the `block_max_concurrent_reads` slots to become available
(internally implemented using a semaphore). Once it has acquired a read
slot, it reads the entire block file into RAM and releases the slot as soon
as the block file has been fully read. Only after the slot is released does the
block's data start being transferred over the network. If the request fails to
acquire a reading slot within 15 seconds, it fails with a timeout error.
Timeout events can be monitored through the `block_read_semaphore_timeouts`
metric in Prometheus: a non-zero number of such events indicates an I/O
bottleneck on HDD read speed.
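The slot-acquisition behaviour described above can be sketched with a counting semaphore. Garage uses an async semaphore internally; this synchronous std-only analogue (the `Semaphore` type here is written for the sketch and is not Garage's actual code, with a short timeout standing in for the real 15-second one) shows the backpressure and timeout behaviour:

```rust
use std::sync::{Condvar, Mutex};
use std::time::Duration;

/// Minimal counting semaphore: each permit is one concurrent block read.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cv: Condvar::new() }
    }

    /// Wait up to `timeout` for a free read slot; Err models the
    /// timeout error counted by `block_read_semaphore_timeouts`.
    fn acquire(&self, timeout: Duration) -> Result<(), &'static str> {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            let (guard, result) = self.cv.wait_timeout(p, timeout).unwrap();
            p = guard;
            if result.timed_out() && *p == 0 {
                return Err("timed out waiting for a read slot");
            }
        }
        *p -= 1;
        Ok(())
    }

    /// Called when a block file has been fully read into RAM.
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    // block_max_concurrent_reads = 2 in this toy example
    let sem = Semaphore::new(2);
    assert!(sem.acquire(Duration::from_millis(10)).is_ok()); // slot 1
    assert!(sem.acquire(Duration::from_millis(10)).is_ok()); // slot 2
    // All slots busy: a third reader times out, as described above.
    assert!(sem.acquire(Duration::from_millis(10)).is_err());
    sem.release(); // one read finishes, freeing a slot
    assert!(sem.acquire(Duration::from_millis(10)).is_ok());
}
```

The key design point is that the slot bounds disk I/O, not network transfer: the permit is held only while the file is being read from disk, so slow clients downloading the data do not pin read slots.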
#### `block_max_concurrent_writes_per_request` (since `v2.1.0`) {#block_max_concurrent_writes_per_request}
Maximum number of parallel block writes per PUT request.
This parameter is designed to adapt to the concurrent write performance of
different storage media, such as HDD and NVMe.
Higher values improve throughput but increase memory usage.
Default: 3. Recommended: 10-30 for NVMe, 3-10 for HDD.
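As an illustration, tuning this for an NVMe-backed node might look like the following `garage.toml` fragment (the value is a hypothetical pick from the recommended range, not a tested setting):

```toml
# Default: 3. Recommended: 10-30 for NVMe, 3-10 for HDD.
block_max_concurrent_writes_per_request = 20
```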
#### `lmdb_map_size` {#lmdb_map_size}
This parameter can be used to set the map size used by LMDB,


@ -27,7 +27,7 @@ Feel free to open a PR to suggest fixes to this table. Minio is missing because the
| Feature | Garage | [Openstack Swift](https://docs.openstack.org/swift/latest/s3_compat.html) | [Ceph Object Gateway](https://docs.ceph.com/en/latest/radosgw/s3/) | [Riak CS](https://docs.riak.com/riak/cs/2.1.1/references/apis/storage/s3/index.html) | [OpenIO](https://docs.openio.io/latest/source/arch-design/s3_compliancy.html) |
|------------------------------|----------------------------------|-----------------|---------------|---------|-----|
| [signature v2](https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html) (deprecated) | ❌ Missing | ✅ | ✅ | ✅ | ✅ |
| [signature v2](https://docs.aws.amazon.com/AmazonS3/latest/API/Appendix-Sigv2.html) (deprecated) | ❌ Missing | ✅ | ✅ | ✅ | ✅ |
| [signature v4](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) | ✅ Implemented | ✅ | ✅ | ❌ | ✅ |
| [URL path-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#path-style-access) (eg. `host.tld/bucket/key`) | ✅ Implemented | ✅ | ✅ | ❓| ✅ |
| [URL vhost-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#virtual-hosted-style-access) URL (eg. `bucket.host.tld/key`) | ✅ Implemented | ❌| ✅| ✅ | ✅ |


@ -70,7 +70,7 @@ Example response body:
```json
{
"node": "b10c110e4e854e5aa3f4637681befac755154b20059ec163254ddbfae86b09df",
"garageVersion": "v1.2.0",
"garageVersion": "v1.3.0",
"garageFeatures": [
"k2v",
"lmdb",

flake.lock generated (16 lines changed)

@ -50,17 +50,17 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1736692550,
"narHash": "sha256-7tk8xH+g0sJkKLTJFOxphJxxOjMDFMWv24nXslaU2ro=",
"lastModified": 1763977559,
"narHash": "sha256-g4MKqsIRy5yJwEsI+fYODqLUnAqIY4kZai0nldAP6EM=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "7c4869c47090dd7f9f1bdfb49a22aea026996815",
"rev": "cfe2c7d5b5d3032862254e68c37a6576b633d632",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "7c4869c47090dd7f9f1bdfb49a22aea026996815",
"rev": "cfe2c7d5b5d3032862254e68c37a6576b633d632",
"type": "github"
}
},
@ -80,17 +80,17 @@
]
},
"locked": {
"lastModified": 1738549608,
"narHash": "sha256-GdyT9QEUSx5k/n8kILuNy83vxxdyUfJ8jL5mMpQZWfw=",
"lastModified": 1763952169,
"narHash": "sha256-+PeDBD8P+NKauH+w7eO/QWCIp8Cx4mCfWnh9sJmy9CM=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "35c6f8c4352f995ecd53896200769f80a3e8f22d",
"rev": "ab726555a9a72e6dc80649809147823a813fa95b",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "35c6f8c4352f995ecd53896200769f80a3e8f22d",
"rev": "ab726555a9a72e6dc80649809147823a813fa95b",
"type": "github"
}
},


@ -2,13 +2,13 @@
description =
"Garage, an S3-compatible distributed object store for self-hosted deployments";
# Nixpkgs 24.11 as of 2025-01-12
# Nixpkgs 25.05 as of 2025-11-24
inputs.nixpkgs.url =
"github:NixOS/nixpkgs/7c4869c47090dd7f9f1bdfb49a22aea026996815";
"github:NixOS/nixpkgs/cfe2c7d5b5d3032862254e68c37a6576b633d632";
# Rust overlay as of 2025-02-03
# Rust overlay as of 2025-11-24
inputs.rust-overlay.url =
"github:oxalica/rust-overlay/35c6f8c4352f995ecd53896200769f80a3e8f22d";
"github:oxalica/rust-overlay/ab726555a9a72e6dc80649809147823a813fa95b";
inputs.rust-overlay.inputs.nixpkgs.follows = "nixpkgs";
inputs.crane.url = "github:ipetkov/crane";
@ -30,6 +30,10 @@
inherit system nixpkgs crane rust-overlay extraTestEnv;
release = false;
}).garage-test;
lints = (compile {
inherit system nixpkgs crane rust-overlay;
release = false;
});
in
{
packages = {
@ -56,6 +60,10 @@
tests-fjall = testWith {
GARAGE_TEST_INTEGRATION_DB_ENGINE = "fjall";
};
# lints (fmt, clippy)
fmt = lints.garage-cargo-fmt;
clippy = lints.garage-cargo-clippy;
};
# ---- development shell, for making native builds only ----


@ -48,7 +48,7 @@ let
inherit (pkgs) lib stdenv;
toolchainFn = (p: p.rust-bin.stable."1.82.0".default.override {
toolchainFn = (p: p.rust-bin.stable."1.91.0".default.override {
targets = lib.optionals (target != null) [ rustTarget ];
extensions = [
"rust-src"
@ -190,4 +190,15 @@ in rec {
pkgs.cacert
];
} // extraTestEnv);
# ---- source code linting ----
garage-cargo-fmt = craneLib.cargoFmt (commonArgs // {
cargoExtraArgs = "";
});
garage-cargo-clippy = craneLib.cargoClippy (commonArgs // {
cargoArtifacts = garage-deps;
cargoClippyExtraArgs = "--all-targets -- -D warnings";
});
}


@ -1,6 +1,7 @@
export AWS_ACCESS_KEY_ID=`cat /tmp/garage.s3 |cut -d' ' -f1`
export AWS_SECRET_ACCESS_KEY=`cat /tmp/garage.s3 |cut -d' ' -f2`
export AWS_DEFAULT_REGION='garage'
export AWS_REQUEST_CHECKSUM_CALCULATION='when_required'
# FUTUREWORK: set AWS_ENDPOINT_URL instead, once nixpkgs bumps awscli to >=2.13.0.
function aws { command aws --endpoint-url http://127.0.0.1:3911 $@ ; }


@ -2,8 +2,8 @@ apiVersion: v2
name: garage
description: S3-compatible object store for small self-hosted geo-distributed deployments
type: application
version: 0.7.1
appVersion: "v1.2.0"
version: 0.7.3
appVersion: "v1.3.1"
home: https://garagehq.deuxfleurs.fr/
icon: https://garagehq.deuxfleurs.fr/images/garage-logo.svg


@ -1,6 +1,6 @@
# garage
![Version: 0.7.1](https://img.shields.io/badge/Version-0.7.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.2.0](https://img.shields.io/badge/AppVersion-v1.2.0-informational?style=flat-square)
![Version: 0.7.3](https://img.shields.io/badge/Version-0.7.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.3.1](https://img.shields.io/badge/AppVersion-v1.3.1-informational?style=flat-square)
S3-compatible object store for small self-hosted geo-distributed deployments


@ -4,6 +4,10 @@ metadata:
name: {{ include "garage.fullname" . }}
labels:
{{- include "garage.labels" . | nindent 4 }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:


@ -124,6 +124,8 @@ service:
# - NodePort (+ Ingress)
# - LoadBalancer
type: ClusterIP
# -- Annotations to add to the service
annotations: {}
s3:
api:
port: 3900


@ -34,6 +34,8 @@ in
jq
];
shellHook = ''
export AWS_REQUEST_CHECKSUM_CALCULATION='when_required'
function to_s3 {
aws \
--endpoint-url https://garage.deuxfleurs.fr \


@ -1,6 +1,6 @@
[package]
name = "garage_api_admin"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@ -22,7 +22,7 @@ garage_api_common.workspace = true
argon2.workspace = true
async-trait.workspace = true
err-derive.workspace = true
thiserror.workspace = true
hex.workspace = true
tracing.workspace = true

View file

@ -1,8 +1,8 @@
use std::convert::TryFrom;
use err_derive::Error;
use hyper::header::HeaderValue;
use hyper::{HeaderMap, StatusCode};
use thiserror::Error;
pub use garage_model::helper::error::Error as HelperError;
@ -16,20 +16,17 @@ use garage_api_common::helpers::*;
/// Errors of this crate
#[derive(Debug, Error)]
pub enum Error {
#[error(display = "{}", _0)]
#[error("{0}")]
/// Error from common error
Common(#[error(source)] CommonError),
Common(#[from] CommonError),
// Category: cannot process
/// The API access key does not exist
#[error(display = "Access key not found: {}", _0)]
#[error("Access key not found: {0}")]
NoSuchAccessKey(String),
/// In Import key, the key already exists
#[error(
display = "Key {} already exists in data store. Even if it is deleted, we can't let you create a new key with the same ID. Sorry.",
_0
)]
#[error("Key {0} already exists in data store. Even if it is deleted, we can't let you create a new key with the same ID. Sorry.")]
KeyAlreadyExists(String),
}
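The err-derive → thiserror migration in the diff above is mechanical: `display = "…{}", _0` becomes `"…{0}"`, and `#[error(source)]` becomes `#[from]`. Roughly, the attributes expand to the following std-only code (hypothetical `MyError` type, not from the diff; the real crates derive this via `thiserror::Error`):

```rust
use std::fmt;

// What `#[error("IO error: {0}")] Io(#[from] std::io::Error)` generates, roughly:
#[derive(Debug)]
pub enum MyError {
    Io(std::io::Error),
    NoSuchKey(String),
}

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // thiserror: #[error("IO error: {0}")]
            MyError::Io(e) => write!(f, "IO error: {}", e),
            // thiserror: #[error("Key not found: {0}")]
            MyError::NoSuchKey(k) => write!(f, "Key not found: {}", k),
        }
    }
}

// thiserror: the #[from] attribute on the Io variant
impl From<std::io::Error> for MyError {
    fn from(e: std::io::Error) -> Self {
        MyError::Io(e)
    }
}

fn main() {
    let e: MyError = std::io::Error::new(std::io::ErrorKind::Other, "boom").into();
    assert_eq!(e.to_string(), "IO error: boom");
    assert_eq!(MyError::NoSuchKey("abc".into()).to_string(), "Key not found: abc");
    println!("ok");
}
```

`{0}` interpolates the first tuple field, and `#[from]` additionally derives the `From` impl, which is why variants such as `Common(#[from] CommonError)` keep working with `?`.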

View file

@ -1,6 +1,6 @@
[package]
name = "garage_api_common"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@ -24,7 +24,7 @@ chrono.workspace = true
crc32fast.workspace = true
crc32c.workspace = true
crypto-common.workspace = true
err-derive.workspace = true
thiserror.workspace = true
hex.workspace = true
hmac.workspace = true
md-5.workspace = true

View file

@ -1,7 +1,7 @@
use std::convert::TryFrom;
use err_derive::Error;
use hyper::StatusCode;
use thiserror::Error;
use garage_util::error::Error as GarageError;
@ -12,48 +12,48 @@ use garage_model::helper::error::Error as HelperError;
pub enum CommonError {
// ---- INTERNAL ERRORS ----
/// Error related to deeper parts of Garage
#[error(display = "Internal error: {}", _0)]
InternalError(#[error(source)] GarageError),
#[error("Internal error: {0}")]
InternalError(#[from] GarageError),
/// Error related to Hyper
#[error(display = "Internal error (Hyper error): {}", _0)]
Hyper(#[error(source)] hyper::Error),
#[error("Internal error (Hyper error): {0}")]
Hyper(#[from] hyper::Error),
/// Error related to HTTP
#[error(display = "Internal error (HTTP error): {}", _0)]
Http(#[error(source)] http::Error),
#[error("Internal error (HTTP error): {0}")]
Http(#[from] http::Error),
// ---- GENERIC CLIENT ERRORS ----
/// Proper authentication was not provided
#[error(display = "Forbidden: {}", _0)]
#[error("Forbidden: {0}")]
Forbidden(String),
/// Generic bad request response with custom message
#[error(display = "Bad request: {}", _0)]
#[error("Bad request: {0}")]
BadRequest(String),
/// The client sent a header with invalid value
#[error(display = "Invalid header value: {}", _0)]
InvalidHeader(#[error(source)] hyper::header::ToStrError),
#[error("Invalid header value: {0}")]
InvalidHeader(#[from] hyper::header::ToStrError),
// ---- SPECIFIC ERROR CONDITIONS ----
// These have to be error codes referenced in the S3 spec here:
// https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList
/// The requested bucket does not exist
#[error(display = "Bucket not found: {}", _0)]
#[error("Bucket not found: {0}")]
NoSuchBucket(String),
/// Tried to create a bucket that already exists
#[error(display = "Bucket already exists")]
#[error("Bucket already exists")]
BucketAlreadyExists,
/// Tried to delete a non-empty bucket
#[error(display = "Tried to delete a non-empty bucket")]
#[error("Tried to delete a non-empty bucket")]
BucketNotEmpty,
// Category: bad request
/// Bucket name is not valid according to AWS S3 specs
#[error(display = "Invalid bucket name: {}", _0)]
#[error("Invalid bucket name: {0}")]
InvalidBucketName(String),
}

View file

@ -33,7 +33,6 @@ use garage_util::metrics::{gen_trace_id, RecordDuration};
use garage_util::socket_address::UnixOrTCPSocketAddress;
use crate::helpers::{BoxBody, ErrorBody};
use crate::signature::payload::Authorization;
pub trait ApiEndpoint: Send + Sync + 'static {
fn name(&self) -> &'static str;
@ -62,7 +61,7 @@ pub trait ApiHandler: Send + Sync + 'static {
/// Returns the key id used to authenticate this request. The ID returned must be safe to
/// log.
fn key_id_from_request(&self, req: &Request<IncomingBody>) -> Option<String> {
fn key_id_from_request(&self, _req: &Request<IncomingBody>) -> Option<String> {
None
}
}

View file

@ -1,4 +1,4 @@
use err_derive::Error;
use thiserror::Error;
use crate::common_error::CommonError;
pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInternalError};
@ -6,21 +6,21 @@ pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInterna
/// Errors of this crate
#[derive(Debug, Error)]
pub enum Error {
#[error(display = "{}", _0)]
#[error("{0}")]
/// Error from common error
Common(CommonError),
/// Authorization Header Malformed
#[error(display = "Authorization header malformed, unexpected scope: {}", _0)]
#[error("Authorization header malformed, unexpected scope: {0}")]
AuthorizationHeaderMalformed(String),
// Category: bad request
/// The request contained an invalid UTF-8 sequence in its path or in other parameters
#[error(display = "Invalid UTF-8: {}", _0)]
InvalidUtf8Str(#[error(source)] std::str::Utf8Error),
#[error("Invalid UTF-8: {0}")]
InvalidUtf8Str(#[from] std::str::Utf8Error),
/// The provided digest (checksum) value was invalid
#[error(display = "Invalid digest: {}", _0)]
#[error("Invalid digest: {0}")]
InvalidDigest(String),
}

View file

@ -104,7 +104,7 @@ async fn check_standard_signature(
// Verify that all necessary request headers are included in signed_headers
// The following must be included for all signatures:
// - the Host header (mandatory)
// - all x-amz-* headers used in the request
// - all x-amz-* headers used in the request (except x-amz-content-sha256)
// AWS also indicates that the Content-Type header should be signed if
// it is used, but Minio client doesn't sign it so we don't check it for compatibility.
let signed_headers = split_signed_headers(&authorization)?;
@ -151,7 +151,7 @@ async fn check_presigned_signature(
// Verify that all necessary request headers are included in signed_headers
// For AWSv4 pre-signed URLs, the following must be included:
// - the Host header (mandatory)
// - all x-amz-* headers used in the request
// - all x-amz-* headers used in the request (except x-amz-content-sha256)
let signed_headers = split_signed_headers(&authorization)?;
verify_signed_headers(request.headers(), &signed_headers)?;
@ -268,7 +268,9 @@ fn verify_signed_headers(headers: &HeaderMap, signed_headers: &[HeaderName]) ->
return Err(Error::bad_request("Header `Host` should be signed"));
}
for (name, _) in headers.iter() {
if name.as_str().starts_with("x-amz-") {
// Enforce signature of all x-amz-* headers, except x-amz-content-sha256
// because it is included in the canonical request in all cases
if name.as_str().starts_with("x-amz-") && name != X_AMZ_CONTENT_SHA256 {
if !signed_headers.contains(name) {
return Err(Error::bad_request(format!(
"Header `{}` should be signed",
@ -468,8 +470,7 @@ impl Authorization {
let date = headers
.get(X_AMZ_DATE)
.ok_or_bad_request("Missing X-Amz-Date field")
.map_err(Error::from)?
.ok_or_bad_request("Missing X-Amz-Date field")?
.to_str()?;
let date = parse_date(date)?;
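The header-signing rule above can be sketched with plain strings (a simplified model; the real code operates on `http::HeaderMap` and `HeaderName` values):

```rust
// Every x-amz-* request header must appear in the signed-headers list,
// except x-amz-content-sha256, which is always part of the canonical
// request and therefore covered by the signature anyway.
fn verify_signed_headers(request_headers: &[&str], signed: &[&str]) -> Result<(), String> {
    for name in request_headers {
        if name.starts_with("x-amz-") && *name != "x-amz-content-sha256" {
            if !signed.contains(name) {
                return Err(format!("Header `{}` should be signed", name));
            }
        }
    }
    Ok(())
}

fn main() {
    let headers = ["host", "x-amz-date", "x-amz-content-sha256"];
    // x-amz-content-sha256 need not be listed explicitly:
    assert!(verify_signed_headers(&headers, &["host", "x-amz-date"]).is_ok());
    // but an unsigned x-amz-date is rejected:
    assert!(verify_signed_headers(&headers, &["host"]).is_err());
    println!("ok");
}
```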

View file

@ -1,6 +1,6 @@
[package]
name = "garage_api_k2v"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@ -20,7 +20,7 @@ garage_util = { workspace = true, features = [ "k2v" ] }
garage_api_common.workspace = true
base64.workspace = true
err-derive.workspace = true
thiserror.workspace = true
tracing.workspace = true
futures.workspace = true

View file

@ -1,6 +1,6 @@
use err_derive::Error;
use hyper::header::HeaderValue;
use hyper::{HeaderMap, StatusCode};
use thiserror::Error;
use garage_api_common::common_error::{commonErrorDerivative, CommonError};
pub(crate) use garage_api_common::common_error::{helper_error_as_internal, pass_helper_error};
@ -14,38 +14,38 @@ use garage_api_common::signature::error::Error as SignatureError;
/// Errors of this crate
#[derive(Debug, Error)]
pub enum Error {
#[error(display = "{}", _0)]
#[error("{0}")]
/// Error from common error
Common(#[error(source)] CommonError),
Common(#[from] CommonError),
// Category: cannot process
/// Authorization Header Malformed
#[error(display = "Authorization header malformed, unexpected scope: {}", _0)]
#[error("Authorization header malformed, unexpected scope: {0}")]
AuthorizationHeaderMalformed(String),
/// The provided digest (checksum) value was invalid
#[error(display = "Invalid digest: {}", _0)]
#[error("Invalid digest: {0}")]
InvalidDigest(String),
/// The requested object does not exist
#[error(display = "Key not found")]
#[error("Key not found")]
NoSuchKey,
/// Some base64 encoded data was badly encoded
#[error(display = "Invalid base64: {}", _0)]
InvalidBase64(#[error(source)] base64::DecodeError),
#[error("Invalid base64: {0}")]
InvalidBase64(#[from] base64::DecodeError),
/// Invalid causality token
#[error(display = "Invalid causality token")]
#[error("Invalid causality token")]
InvalidCausalityToken,
/// The client asked for an invalid return format (invalid Accept header)
#[error(display = "Not acceptable: {}", _0)]
#[error("Not acceptable: {0}")]
NotAcceptable(String),
/// The request contained an invalid UTF-8 sequence in its path or in other parameters
#[error(display = "Invalid UTF-8: {}", _0)]
InvalidUtf8Str(#[error(source)] std::str::Utf8Error),
#[error("Invalid UTF-8: {0}")]
InvalidUtf8Str(#[from] std::str::Utf8Error),
}
commonErrorDerivative!(Error);

View file

@ -1,6 +1,6 @@
[package]
name = "garage_api_s3"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@ -29,7 +29,7 @@ bytes.workspace = true
chrono.workspace = true
crc32fast.workspace = true
crc32c.workspace = true
err-derive.workspace = true
thiserror.workspace = true
hex.workspace = true
tracing.workspace = true
md-5.workspace = true

View file

@ -88,7 +88,9 @@ pub async fn handle_put_cors(
pub struct CorsConfiguration {
#[serde(serialize_with = "xmlns_tag", skip_deserializing)]
pub xmlns: (),
#[serde(rename = "CORSRule")]
// "default" is required to be able to parse an empty list of rules,
// cf https://docs.rs/quick-xml/latest/quick_xml/de/#sequences-xsall-and-xssequence-xml-schema-types
#[serde(rename = "CORSRule", default)]
pub cors_rules: Vec<CorsRule>,
}
@ -270,4 +272,26 @@ mod tests {
Ok(())
}
#[test]
fn test_deserialize_norules() -> Result<(), Error> {
let message = r#"<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/" />"#;
let conf: CorsConfiguration = from_str(message).unwrap();
let ref_value = CorsConfiguration {
xmlns: (),
cors_rules: vec![],
};
assert_eq! {
ref_value,
conf
};
let message2 = to_xml_with_header(&ref_value)?;
let cleanup = |c: &str| c.replace(char::is_whitespace, "");
assert_eq!(cleanup(message), cleanup(&message2));
Ok(())
}
}

View file

@ -1,8 +1,8 @@
use std::convert::TryInto;
use err_derive::Error;
use hyper::header::HeaderValue;
use hyper::{HeaderMap, StatusCode};
use thiserror::Error;
use garage_model::helper::error::Error as HelperError;
@ -25,67 +25,67 @@ use crate::xml as s3_xml;
/// Errors of this crate
#[derive(Debug, Error)]
pub enum Error {
#[error(display = "{}", _0)]
#[error("{0}")]
/// Error from common error
Common(#[error(source)] CommonError),
Common(#[from] CommonError),
// Category: cannot process
/// Authorization Header Malformed
#[error(display = "Authorization header malformed, unexpected scope: {}", _0)]
#[error("Authorization header malformed, unexpected scope: {0}")]
AuthorizationHeaderMalformed(String),
/// The requested object does not exist
#[error(display = "Key not found")]
#[error("Key not found")]
NoSuchKey,
/// The requested multipart upload does not exist
#[error(display = "Upload not found")]
#[error("Upload not found")]
NoSuchUpload,
/// Precondition failed (e.g. x-amz-copy-source-if-match)
#[error(display = "At least one of the preconditions you specified did not hold")]
#[error("At least one of the preconditions you specified did not hold")]
PreconditionFailed,
/// Parts specified in CMU request do not match parts actually uploaded
#[error(display = "Parts given to CompleteMultipartUpload do not match uploaded parts")]
#[error("Parts given to CompleteMultipartUpload do not match uploaded parts")]
InvalidPart,
/// Parts given to CompleteMultipartUpload were not in ascending order
#[error(display = "Parts given to CompleteMultipartUpload were not in ascending order")]
#[error("Parts given to CompleteMultipartUpload were not in ascending order")]
InvalidPartOrder,
/// In CompleteMultipartUpload: not enough data
/// (here we are more lenient than AWS S3)
#[error(display = "Proposed upload is smaller than the minimum allowed object size")]
#[error("Proposed upload is smaller than the minimum allowed object size")]
EntityTooSmall,
// Category: bad request
/// The request contained an invalid UTF-8 sequence in its path or in other parameters
#[error(display = "Invalid UTF-8: {}", _0)]
InvalidUtf8Str(#[error(source)] std::str::Utf8Error),
#[error("Invalid UTF-8: {0}")]
InvalidUtf8Str(#[from] std::str::Utf8Error),
/// The request used an invalid path
#[error(display = "Invalid UTF-8: {}", _0)]
InvalidUtf8String(#[error(source)] std::string::FromUtf8Error),
#[error("Invalid UTF-8: {0}")]
InvalidUtf8String(#[from] std::string::FromUtf8Error),
/// The client sent invalid XML data
#[error(display = "Invalid XML: {}", _0)]
#[error("Invalid XML: {0}")]
InvalidXml(String),
/// The client sent a range header with invalid value
#[error(display = "Invalid HTTP range: {:?}", _0)]
InvalidRange(#[error(from)] (http_range::HttpRangeParseError, u64)),
#[error("Invalid HTTP range: {0:?}")]
InvalidRange((http_range::HttpRangeParseError, u64)),
/// The client specified an invalid encryption algorithm
#[error(display = "Invalid encryption algorithm: {:?}, should be AES256", _0)]
#[error("Invalid encryption algorithm: {0:?}, should be AES256")]
InvalidEncryptionAlgorithm(String),
/// The provided digest (checksum) value was invalid
#[error(display = "Invalid digest: {}", _0)]
#[error("Invalid digest: {0}")]
InvalidDigest(String),
/// The client sent a request for an action not supported by garage
#[error(display = "Unimplemented action: {}", _0)]
#[error("Unimplemented action: {0}")]
NotImplemented(String),
}
@ -99,6 +99,12 @@ impl From<HelperError> for Error {
}
}
impl From<(http_range::HttpRangeParseError, u64)> for Error {
fn from(err: (http_range::HttpRangeParseError, u64)) -> Error {
Error::InvalidRange(err)
}
}
impl From<roxmltree::Error> for Error {
fn from(err: roxmltree::Error) -> Self {
Self::InvalidXml(format!("{}", err))

View file

@ -845,7 +845,9 @@ impl PreconditionHeaders {
}
fn check(&self, v: &ObjectVersion, etag: &str) -> Result<Option<StatusCode>, Error> {
let v_date = UNIX_EPOCH + Duration::from_millis(v.timestamp);
// we store date with ms precision, but headers are precise to the second: truncate
// the timestamp to handle the same-second edge case
let v_date = UNIX_EPOCH + Duration::from_secs(v.timestamp / 1000);
// Implemented from https://datatracker.ietf.org/doc/html/rfc7232#section-6
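The same-second edge case fixed above can be shown with a std-only sketch: an object stored at 12.345 s compared against a header date of 12 s (values are illustrative, not from the diff):

```rust
use std::time::{Duration, UNIX_EPOCH};

fn main() {
    // Object stored at 12.345s after epoch (Garage stores millisecond precision)...
    let timestamp_ms: u64 = 12_345;
    // ...but HTTP date headers only carry whole seconds.
    let v_date_ms = UNIX_EPOCH + Duration::from_millis(timestamp_ms);      // old code: 12.345s
    let v_date_s = UNIX_EPOCH + Duration::from_secs(timestamp_ms / 1000);  // new code: 12s
    let header_date = UNIX_EPOCH + Duration::from_secs(12); // parsed from If-Unmodified-Since

    // Old comparison: object looks "modified since" its own stored second.
    assert!(v_date_ms > header_date);
    // New comparison: truncating to seconds makes the same-second case pass.
    assert!(v_date_s <= header_date);
    println!("ok");
}
```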

View file

@ -141,10 +141,26 @@ pub async fn handle_post_object(
let mut conditions = decoded_policy.into_conditions()?;
// If there are conditions on the bucket name, check these against the actual bucket_name rather
// than the one in params, which is allowed to be absent.
if let Some(conds) = conditions.params.remove("bucket") {
for cond in conds {
let ok = match cond {
Operation::Equal(s) => s.as_str() == bucket_name,
Operation::StartsWith(s) => bucket_name.starts_with(&s),
};
if !ok {
return Err(Error::bad_request(
"Key 'bucket' has value not allowed in policy",
));
}
}
}
for (param_key, value) in params.iter() {
let param_key = param_key.as_str();
match param_key {
"policy" | "x-amz-signature" => (), // this is always accepted, as it's required to validate other fields
"policy" | "x-amz-signature" | "bucket" => (), // this is always accepted, as it's required to validate other fields
"content-type" => {
let conds = conditions.params.remove("content-type").ok_or_else(|| {
Error::bad_request(format!("Key '{}' is not allowed in policy", param_key))
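The bucket-condition check added above can be extracted into a tiny standalone function (hypothetical `Operation` enum mirroring the diff's policy type):

```rust
// A POST policy condition on the bucket name: either an exact match
// or a prefix ("starts-with") match, as in the AWS S3 POST policy spec.
enum Operation {
    Equal(String),
    StartsWith(String),
}

// Every condition on "bucket" must hold against the actual bucket name.
fn bucket_allowed(conds: &[Operation], bucket_name: &str) -> bool {
    conds.iter().all(|cond| match cond {
        Operation::Equal(s) => s.as_str() == bucket_name,
        Operation::StartsWith(s) => bucket_name.starts_with(s.as_str()),
    })
}

fn main() {
    let conds = vec![Operation::StartsWith("user-".to_string())];
    assert!(bucket_allowed(&conds, "user-photos"));
    assert!(!bucket_allowed(&conds, "admin-files"));
    println!("ok");
}
```

Checking against the actual bucket name rather than the `bucket` form field matters because that field is allowed to be absent from the POST request.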

View file

@ -39,8 +39,6 @@ use crate::encryption::EncryptionParams;
use crate::error::*;
use crate::website::X_AMZ_WEBSITE_REDIRECT_LOCATION;
const PUT_BLOCKS_MAX_PARALLEL: usize = 3;
pub(crate) struct SaveStreamResult {
pub(crate) version_uuid: Uuid,
pub(crate) version_timestamp: u64,
@ -493,7 +491,7 @@ pub(crate) async fn read_and_put_blocks<S: Stream<Item = Result<Bytes, Error>> +
};
let recv_next = async {
// If more than a maximum number of writes are in progress, don't add more for now
if currently_running >= PUT_BLOCKS_MAX_PARALLEL {
if currently_running >= ctx.garage.config.block_max_concurrent_writes_per_request {
futures::future::pending().await
} else {
block_rx3.recv().await

View file

@ -1,6 +1,6 @@
[package]
name = "garage_block"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"

View file

@ -50,6 +50,8 @@ pub const INLINE_THRESHOLD: usize = 3072;
// to delete the block locally.
pub(crate) const BLOCK_GC_DELAY: Duration = Duration::from_secs(600);
const BLOCK_READ_SEMAPHORE_TIMEOUT: Duration = Duration::from_secs(15);
/// RPC messages used to share blocks of data between nodes
#[derive(Debug, Serialize, Deserialize)]
pub enum BlockRpc {
@ -87,6 +89,7 @@ pub struct BlockManager {
disable_scrub: bool,
mutation_lock: Vec<Mutex<BlockManagerLocked>>,
read_semaphore: Semaphore,
pub rc: BlockRc,
pub resync: BlockResyncManager,
@ -176,6 +179,8 @@ impl BlockManager {
.iter()
.map(|_| Mutex::new(BlockManagerLocked()))
.collect::<Vec<_>>(),
read_semaphore: Semaphore::new(config.block_max_concurrent_reads),
rc,
resync,
system,
@ -557,9 +562,6 @@ impl BlockManager {
match self.find_block(hash).await {
Some(p) => self.read_block_from(hash, &p).await,
None => {
// Not found but maybe we should have had it ??
self.resync
.put_to_resync(hash, 2 * self.system.rpc_helper().rpc_timeout())?;
return Err(Error::Message(format!(
"block {:?} not found on node",
hash
@ -581,6 +583,15 @@ impl BlockManager {
) -> Result<DataBlock, Error> {
let (header, path) = block_path.as_parts_ref();
let permit = tokio::select! {
sem = self.read_semaphore.acquire() => sem.ok_or_message("acquire read semaphore")?,
_ = tokio::time::sleep(BLOCK_READ_SEMAPHORE_TIMEOUT) => {
self.metrics.block_read_semaphore_timeouts.add(1);
debug!("read block {:?}: read_semaphore acquire timeout", hash);
return Err(Error::Message("read block: read_semaphore acquire timeout".into()));
}
};
let mut f = fs::File::open(&path).await?;
let mut data = vec![];
f.read_to_end(&mut data).await?;
@ -605,6 +616,8 @@ impl BlockManager {
return Err(Error::CorruptData(*hash));
}
drop(permit);
Ok(data)
}
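The `tokio::select!` between `read_semaphore.acquire()` and a sleep above implements "bounded concurrency with an acquire timeout". A std-only sketch of the same idea, using `Condvar::wait_timeout_while` in place of an async semaphore (the real code uses `tokio::sync::Semaphore`):

```rust
use std::sync::{Condvar, Mutex};
use std::time::Duration;

/// A counted semaphore whose acquire gives up after a timeout,
/// like the BLOCK_READ_SEMAPHORE_TIMEOUT branch in the diff.
struct TimedSemaphore {
    permits: Mutex<usize>,
    cond: Condvar,
}

impl TimedSemaphore {
    fn new(n: usize) -> Self {
        TimedSemaphore { permits: Mutex::new(n), cond: Condvar::new() }
    }

    /// Returns true if a permit was acquired before the timeout elapsed.
    fn try_acquire(&self, timeout: Duration) -> bool {
        let guard = self.permits.lock().unwrap();
        // Wait while no permit is available, up to `timeout`.
        let (mut guard, res) = self
            .cond
            .wait_timeout_while(guard, timeout, |p| *p == 0)
            .unwrap();
        if res.timed_out() {
            false // give up: the caller turns this into a read error
        } else {
            *guard -= 1;
            true
        }
    }

    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cond.notify_one();
    }
}

fn main() {
    let sem = TimedSemaphore::new(1);
    assert!(sem.try_acquire(Duration::from_millis(10)));  // permit available
    assert!(!sem.try_acquire(Duration::from_millis(10))); // exhausted: times out
    sem.release();
    assert!(sem.try_acquire(Duration::from_millis(10)));
    println!("ok");
}
```

Failing the read after a bounded wait (and counting it in `block_read_semaphore_timeouts`) keeps an overloaded disk from piling up unbounded in-flight reads.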
@ -770,6 +783,7 @@ impl BlockManagerLocked {
let mut f = fs::File::create(&path_tmp).await?;
f.write_all(data).await?;
f.flush().await?;
mgr.metrics.bytes_written.add(data.len() as u64);
if mgr.data_fsync {

View file

@ -22,6 +22,7 @@ pub struct BlockManagerMetrics {
pub(crate) bytes_read: BoundCounter<u64>,
pub(crate) block_read_duration: BoundValueRecorder<f64>,
pub(crate) block_read_semaphore_timeouts: BoundCounter<u64>,
pub(crate) bytes_written: BoundCounter<u64>,
pub(crate) block_write_duration: BoundValueRecorder<f64>,
pub(crate) delete_counter: BoundCounter<u64>,
@ -119,6 +120,11 @@ impl BlockManagerMetrics {
.with_description("Duration of block read operations")
.init()
.bind(&[]),
block_read_semaphore_timeouts: meter
.u64_counter("block.read_semaphore_timeouts")
.with_description("Number of block reads that failed due to semaphore acquire timeout")
.init()
.bind(&[]),
bytes_written: meter
.u64_counter("block.bytes_written")
.with_description("Number of bytes written to disk")

View file

@ -133,6 +133,14 @@ impl BlockResyncManager {
)))
}
/// Clear the entire resync queue and list of errored blocks
/// Corresponds to `garage repair clear-resync-queue`
pub fn clear_resync_queue(&self) -> Result<(), Error> {
self.queue.clear()?;
self.errors.clear()?;
Ok(())
}
pub fn register_bg_vars(&self, vars: &mut vars::BgVars) {
let notify = self.notify.clone();
vars.register_rw(

View file

@ -1,6 +1,6 @@
[package]
name = "garage_db"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@ -12,7 +12,7 @@ readme = "../../README.md"
path = "lib.rs"
[dependencies]
err-derive.workspace = true
thiserror.workspace = true
tracing.workspace = true
heed = { workspace = true, optional = true }

View file

@ -20,7 +20,7 @@ use std::cell::Cell;
use std::path::PathBuf;
use std::sync::Arc;
use err_derive::Error;
use thiserror::Error;
pub use open::*;
@ -44,7 +44,7 @@ pub type TxValueIter<'a> = Box<dyn std::iter::Iterator<Item = TxOpResult<(Value,
// ----
#[derive(Debug, Error)]
#[error(display = "{}", _0)]
#[error("{0}")]
pub struct Error(pub Cow<'static, str>);
impl From<std::io::Error> for Error {
@ -56,7 +56,7 @@ impl From<std::io::Error> for Error {
pub type Result<T> = std::result::Result<T, Error>;
#[derive(Debug, Error)]
#[error(display = "{}", _0)]
#[error("{0}")]
pub struct TxOpError(pub(crate) Error);
pub type TxOpResult<T> = std::result::Result<T, TxOpError>;
@ -106,32 +106,44 @@ impl Db {
result: Cell::new(None),
};
let tx_res = self.0.transaction(&f);
let ret = f
.result
.into_inner()
.expect("Transaction did not store result");
let fn_res = f.result.into_inner();
match tx_res {
Ok(on_commit) => match ret {
Ok(value) => {
on_commit.into_iter().for_each(|f| f());
Ok(value)
}
_ => unreachable!(),
},
Err(TxError::Abort(())) => match ret {
Err(TxError::Abort(e)) => Err(TxError::Abort(e)),
_ => unreachable!(),
},
Err(TxError::Db(e2)) => match ret {
// Ok was stored -> the error occurred when finalizing
// transaction
Ok(_) => Err(TxError::Db(e2)),
// An error was already stored: that's the one we want to
// return
Err(TxError::Db(e)) => Err(TxError::Db(e)),
_ => unreachable!(),
},
match (tx_res, fn_res) {
(Ok(on_commit), Some(Ok(value))) => {
// Transaction succeeded
// TxFn stored the value to return to the user in fn_res
// tx_res contains the on_commit list of callbacks, run them now
on_commit.into_iter().for_each(|f| f());
Ok(value)
}
(Err(TxError::Abort(())), Some(Err(TxError::Abort(e)))) => {
// Transaction was aborted by user code
// The abort error value is stored in fn_res
Err(TxError::Abort(e))
}
(Err(TxError::Db(_tx_e)), Some(Err(TxError::Db(fn_e)))) => {
// Transaction encountered a DB error in user code
// The error value encountered is the one in fn_res,
// tx_res contains only a dummy error message
Err(TxError::Db(fn_e))
}
(Err(TxError::Db(tx_e)), None) => {
// Transaction encountered a DB error when initializing the transaction,
// before user code was called
Err(TxError::Db(tx_e))
}
(Err(TxError::Db(tx_e)), Some(Ok(_))) => {
// Transaction encountered a DB error when committing the transaction,
// after user code was called
Err(TxError::Db(tx_e))
}
(tx_res, fn_res) => {
panic!(
"unexpected error case: tx_res={:?}, fn_res={:?}",
tx_res.map(|_| "..."),
fn_res.map(|x| x.map(|_| "...").map_err(|_| "..."))
);
}
}
}
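The rewritten `Db::transaction` above disambiguates error provenance by matching on the pair (engine result, stored closure result). The decision table can be condensed into a small standalone function (hypothetical simplified types, `&'static str` standing in for the real error types):

```rust
// tx_res: what the DB engine reported; fn_res: what the user closure stored.
fn resolve(
    tx_res: Result<(), &'static str>,
    fn_res: Option<Result<i32, &'static str>>,
) -> Result<i32, &'static str> {
    match (tx_res, fn_res) {
        (Ok(()), Some(Ok(v))) => Ok(v),    // commit succeeded: return stored value
        (Err(_), Some(Err(e))) => Err(e),  // user code failed: its error wins over the dummy
        (Err(e), None) => Err(e),          // failed before user code ran
        (Err(e), Some(Ok(_))) => Err(e),   // user code succeeded, commit failed
        _ => panic!("inconsistent transaction state"),
    }
}

fn main() {
    assert_eq!(resolve(Ok(()), Some(Ok(42))), Ok(42));
    assert_eq!(resolve(Err("dummy"), Some(Err("real cause"))), Err("real cause"));
    assert_eq!(resolve(Err("init failed"), None), Err("init failed"));
    println!("ok");
}
```

The `(Err(_), None)` arm is the case the old code could not represent: a DB error before the closure ever ran, which previously hit the `.expect("Transaction did not store result")` panic.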

View file

@ -151,30 +151,16 @@ impl IDb for SqliteDb {
}
fn snapshot(&self, base_path: &PathBuf) -> Result<()> {
fn progress(p: rusqlite::backup::Progress) {
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};
static LAST_LOG_TIME: AtomicU64 = AtomicU64::new(0);
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("Fix your clock :o")
.as_millis() as u64;
if now >= LAST_LOG_TIME.load(Ordering::Relaxed) + 10 * 1000 {
let percent = (p.pagecount - p.remaining) * 100 / p.pagecount;
info!("Sqlite snapshot progress: {}%", percent);
LAST_LOG_TIME.fetch_max(now, Ordering::Relaxed);
}
}
std::fs::create_dir_all(base_path)?;
let path = Engine::Sqlite.db_path(&base_path);
let path = Engine::Sqlite
.db_path(&base_path)
.into_os_string()
.into_string()
.map_err(|_| Error("invalid sqlite path string".into()))?;
self.db
.get()?
.backup(rusqlite::DatabaseName::Main, path, Some(progress))?;
info!("Start sqlite VACUUM INTO `{}`", path);
self.db.get()?.execute("VACUUM INTO ?1", params![path])?;
info!("Finished sqlite VACUUM INTO `{}`", path);
Ok(())
}

View file

@ -1,6 +1,6 @@
[package]
name = "garage"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"

View file

@ -466,6 +466,10 @@ pub enum RepairWhat {
/// Repair (resync/rebalance) the set of stored blocks in the cluster
#[structopt(name = "blocks", version = garage_version())]
Blocks,
/// Clear the block resync queue. The list of blocks in errored state
/// is cleared as well. You MUST run `garage repair blocks` after invoking this.
#[structopt(name = "clear-resync-queue", version = garage_version())]
ClearResyncQueue,
/// Repropagate object deletions to the version table
#[structopt(name = "versions", version = garage_version())]
Versions,

View file

@ -92,6 +92,11 @@ pub async fn launch_online_repair(
info!("Repairing bucket aliases (foreground)");
garage.locked_helper().await.repair_aliases().await?;
}
RepairWhat::ClearResyncQueue => {
let garage = garage.clone();
tokio::task::spawn_blocking(move || garage.block_manager.resync.clear_resync_queue())
.await??
}
}
Ok(())
}

View file

@ -198,6 +198,7 @@ async fn test_precondition() {
);
}
let older_date = DateTime::from_secs_f64(last_modified.as_secs_f64() - 10.0);
let same_date = DateTime::from_secs_f64(last_modified.as_secs_f64());
let newer_date = DateTime::from_secs_f64(last_modified.as_secs_f64() + 10.0);
{
let err = ctx
@ -212,6 +213,18 @@ async fn test_precondition() {
matches!(err, Err(SdkError::ServiceError(se)) if se.raw().status().as_u16() == 304)
);
let err = ctx
.client
.get_object()
.bucket(&bucket)
.key(STD_KEY)
.if_modified_since(same_date)
.send()
.await;
assert!(
matches!(err, Err(SdkError::ServiceError(se)) if se.raw().status().as_u16() == 304)
);
let o = ctx
.client
.get_object()
@ -236,6 +249,17 @@ async fn test_precondition() {
matches!(err, Err(SdkError::ServiceError(se)) if se.raw().status().as_u16() == 412)
);
let o = ctx
.client
.get_object()
.bucket(&bucket)
.key(STD_KEY)
.if_unmodified_since(same_date)
.send()
.await
.unwrap();
assert_eq!(o.e_tag.as_ref().unwrap().as_str(), etag);
let o = ctx
.client
.get_object()

View file

@ -1,6 +1,6 @@
[package]
name = "garage_model"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@ -24,7 +24,7 @@ garage_net.workspace = true
async-trait.workspace = true
blake2.workspace = true
chrono.workspace = true
err-derive.workspace = true
thiserror.workspace = true
hex.workspace = true
http.workspace = true
base64.workspace = true

View file

@ -315,15 +315,15 @@ impl Garage {
Ok(())
}
pub fn bucket_helper(&self) -> helper::bucket::BucketHelper {
pub fn bucket_helper(&self) -> helper::bucket::BucketHelper<'_> {
helper::bucket::BucketHelper(self)
}
pub fn key_helper(&self) -> helper::key::KeyHelper {
pub fn key_helper(&self) -> helper::key::KeyHelper<'_> {
helper::key::KeyHelper(self)
}
pub async fn locked_helper(&self) -> helper::locked::LockedHelper {
pub async fn locked_helper(&self) -> helper::locked::LockedHelper<'_> {
let lock = self.bucket_lock.lock().await;
helper::locked::LockedHelper(self, Some(lock))
}

View file

@ -1,24 +1,24 @@
use err_derive::Error;
use serde::{Deserialize, Serialize};
use thiserror::Error;
use garage_util::error::Error as GarageError;
#[derive(Debug, Error, Serialize, Deserialize)]
pub enum Error {
#[error(display = "Internal error: {}", _0)]
Internal(#[error(source)] GarageError),
#[error("Internal error: {0}")]
Internal(#[from] GarageError),
#[error(display = "Bad request: {}", _0)]
#[error("Bad request: {0}")]
BadRequest(String),
/// Bucket name is not valid according to AWS S3 specs
#[error(display = "Invalid bucket name: {}", _0)]
#[error("Invalid bucket name: {0}")]
InvalidBucketName(String),
#[error(display = "Access key not found: {}", _0)]
#[error("Access key not found: {0}")]
NoSuchAccessKey(String),
#[error(display = "Bucket not found: {}", _0)]
#[error("Bucket not found: {0}")]
NoSuchBucket(String),
}

View file

@ -1,6 +1,6 @@
[package]
name = "garage_net"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@ -30,7 +30,7 @@ rand.workspace = true
log.workspace = true
arc-swap.workspace = true
err-derive.workspace = true
thiserror.workspace = true
bytes.workspace = true
cfg-if.workspace = true

View file

@ -159,7 +159,7 @@ where
pub(crate) type DynEndpoint = Box<dyn GenericEndpoint + Send + Sync>;
pub(crate) trait GenericEndpoint {
fn handle(&self, req_enc: ReqEnc, from: NodeID) -> BoxFuture<Result<RespEnc, Error>>;
fn handle(&self, req_enc: ReqEnc, from: NodeID) -> BoxFuture<'_, Result<RespEnc, Error>>;
fn drop_handler(&self);
fn clone_endpoint(&self) -> DynEndpoint;
}
@ -175,7 +175,7 @@ where
M: Message,
H: StreamingEndpointHandler<M> + 'static,
{
fn handle(&self, req_enc: ReqEnc, from: NodeID) -> BoxFuture<Result<RespEnc, Error>> {
fn handle(&self, req_enc: ReqEnc, from: NodeID) -> BoxFuture<'_, Result<RespEnc, Error>> {
async move {
match self.0.handler.load_full() {
None => Err(Error::NoHandler),

View file

@ -1,49 +1,49 @@
use std::io;
use err_derive::Error;
use log::error;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum Error {
#[error(display = "IO error: {}", _0)]
Io(#[error(source)] io::Error),
#[error("IO error: {0}")]
Io(#[from] io::Error),
#[error(display = "Messagepack encode error: {}", _0)]
RMPEncode(#[error(source)] rmp_serde::encode::Error),
#[error(display = "Messagepack decode error: {}", _0)]
RMPDecode(#[error(source)] rmp_serde::decode::Error),
#[error("Messagepack encode error: {0}")]
RMPEncode(#[from] rmp_serde::encode::Error),
#[error("Messagepack decode error: {0}")]
RMPDecode(#[from] rmp_serde::decode::Error),
#[error(display = "Tokio join error: {}", _0)]
TokioJoin(#[error(source)] tokio::task::JoinError),
#[error("Tokio join error: {0}")]
TokioJoin(#[from] tokio::task::JoinError),
#[error(display = "oneshot receive error: {}", _0)]
OneshotRecv(#[error(source)] tokio::sync::oneshot::error::RecvError),
#[error("oneshot receive error: {0}")]
OneshotRecv(#[from] tokio::sync::oneshot::error::RecvError),
#[error(display = "Handshake error: {}", _0)]
Handshake(#[error(source)] kuska_handshake::async_std::Error),
#[error("Handshake error: {0}")]
Handshake(#[from] kuska_handshake::async_std::Error),
#[error(display = "UTF8 error: {}", _0)]
UTF8(#[error(source)] std::string::FromUtf8Error),
#[error("UTF8 error: {0}")]
UTF8(#[from] std::string::FromUtf8Error),
#[error(display = "Framing protocol error")]
#[error("Framing protocol error")]
Framing,
#[error(display = "Remote error ({:?}): {}", _0, _1)]
#[error("Remote error ({0:?}): {1}")]
Remote(io::ErrorKind, String),
#[error(display = "Request ID collision")]
#[error("Request ID collision")]
IdCollision,
#[error(display = "{}", _0)]
#[error("{0}")]
Message(String),
#[error(display = "No handler / shutting down")]
#[error("No handler / shutting down")]
NoHandler,
#[error(display = "Connection closed")]
#[error("Connection closed")]
ConnectionClosed,
#[error(display = "Version mismatch: {}", _0)]
#[error("Version mismatch: {0}")]
VersionMismatch(String),
}

View file

@ -1,6 +1,6 @@
[package]
name = "garage_rpc"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@@ -33,7 +33,7 @@ async-trait.workspace = true
serde.workspace = true
serde_bytes.workspace = true
serde_json.workspace = true
err-derive = { workspace = true, optional = true }
thiserror = { workspace = true, optional = true }
# newer version requires rust edition 2021
kube = { workspace = true, optional = true }
@@ -49,5 +49,5 @@ opentelemetry.workspace = true
[features]
kubernetes-discovery = [ "kube", "k8s-openapi", "schemars" ]
consul-discovery = [ "reqwest", "err-derive" ]
consul-discovery = [ "reqwest", "thiserror" ]
system-libs = [ "sodiumoxide/use-pkg-config" ]

View file

@@ -3,8 +3,8 @@ use std::fs::File;
use std::io::Read;
use std::net::{IpAddr, SocketAddr};
use err_derive::Error;
use serde::{Deserialize, Serialize};
use thiserror::Error;
use garage_net::NodeID;
@@ -219,12 +219,12 @@ impl ConsulDiscovery {
/// Regroup all Consul discovery errors
#[derive(Debug, Error)]
pub enum ConsulError {
#[error(display = "IO error: {}", _0)]
Io(#[error(source)] std::io::Error),
#[error(display = "HTTP error: {}", _0)]
Reqwest(#[error(source)] reqwest::Error),
#[error(display = "Invalid Consul TLS configuration")]
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
#[error("HTTP error: {0}")]
Reqwest(#[from] reqwest::Error),
#[error("Invalid Consul TLS configuration")]
InvalidTLSConfig,
#[error(display = "Token error: {}", _0)]
Token(#[error(source)] reqwest::header::InvalidHeaderValue),
#[error("Token error: {0}")]
Token(#[from] reqwest::header::InvalidHeaderValue),
}

View file

@@ -229,13 +229,11 @@ impl LayoutManager {
}
/// Save cluster layout data to disk
async fn save_cluster_layout(&self) -> Result<(), Error> {
async fn save_cluster_layout(&self) {
let layout = self.layout.read().unwrap().inner().clone();
self.persist_cluster_layout
.save_async(&layout)
.await
.expect("Cannot save current cluster layout");
Ok(())
if let Err(e) = self.persist_cluster_layout.save_async(&layout).await {
error!("Failed to save cluster_layout: {}", e);
}
}
fn broadcast_update(self: &Arc<Self>, rpc: SystemRpc) {
@@ -313,7 +311,7 @@ impl LayoutManager {
self.change_notify.notify_waiters();
self.broadcast_update(SystemRpc::AdvertiseClusterLayout(new_layout));
self.save_cluster_layout().await?;
self.save_cluster_layout().await;
}
Ok(SystemRpc::Ok)
@@ -328,7 +326,7 @@ impl LayoutManager {
if let Some(new_trackers) = self.merge_layout_trackers(trackers) {
self.change_notify.notify_waiters();
self.broadcast_update(SystemRpc::AdvertiseClusterLayoutTrackers(new_trackers));
self.save_cluster_layout().await?;
self.save_cluster_layout().await;
}
Ok(SystemRpc::Ok)

View file
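The hunk above makes `save_cluster_layout` infallible from the caller's perspective: a failed save is logged instead of propagated, so a transient disk error no longer aborts the RPC handler whose in-memory layout update has already taken effect. The pattern can be sketched with a hypothetical synchronous `persist` stand-in for `save_async`:

```rust
use std::fs;

// Hypothetical stand-in for PersistentData::save_async.
fn persist(path: &str, data: &str) -> std::io::Result<()> {
    fs::write(path, data)
}

// Log-and-continue: callers need no `?`, and the in-memory state
// (already updated) stays authoritative even if the save fails.
fn save_cluster_layout(data: &str) {
    if let Err(e) = persist("/nonexistent-dir/layout", data) {
        eprintln!("Failed to save cluster_layout: {}", e);
    }
}

fn main() {
    // Does not panic despite the IO error (the directory does not exist).
    save_cluster_layout("layout-v2");
}
```

This also explains the two one-line hunks that follow: `save_cluster_layout().await?` loses its `?` because the function no longer returns a `Result`.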

@@ -507,7 +507,7 @@ impl LayoutVersion {
g.compute_maximal_flow()?;
if g.get_flow_value()? < (NB_PARTITIONS * self.replication_factor) as i64 {
return Err(Error::Message(
"The storage capacity of he cluster is to small. It is \
"The storage capacity of the cluster is too small. It is \
impossible to store partitions of size 1."
.into(),
));

View file

@@ -1,6 +1,6 @@
[package]
name = "garage_table"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"

View file

@@ -1,6 +1,6 @@
[package]
name = "garage_util"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
@@ -21,7 +21,7 @@ arc-swap.workspace = true
async-trait.workspace = true
blake2.workspace = true
bytesize.workspace = true
err-derive.workspace = true
thiserror.workspace = true
hexdump.workspace = true
xxhash-rust.workspace = true
hex.workspace = true

View file

@@ -115,32 +115,39 @@ impl WorkerProcessor {
trace!("{} (TID {}): {:?}", worker.worker.name(), worker.task_id, worker.state);
// Save worker info
let mut wi = self.worker_info.lock().unwrap();
match wi.get_mut(&worker.task_id) {
Some(i) => {
i.state = worker.state;
i.status = worker.worker.status();
i.errors = worker.errors;
i.consecutive_errors = worker.consecutive_errors;
if worker.last_error.is_some() {
i.last_error = worker.last_error.take();
{
let mut wi = self.worker_info.lock().unwrap();
match wi.get_mut(&worker.task_id) {
Some(i) => {
i.state = worker.state;
i.status = worker.worker.status();
i.errors = worker.errors;
i.consecutive_errors = worker.consecutive_errors;
if worker.last_error.is_some() {
i.last_error = worker.last_error.take();
}
}
None => {
wi.insert(worker.task_id, WorkerInfo {
name: worker.worker.name(),
state: worker.state,
status: worker.worker.status(),
errors: worker.errors,
consecutive_errors: worker.consecutive_errors,
last_error: worker.last_error.take(),
});
}
}
None => {
wi.insert(worker.task_id, WorkerInfo {
name: worker.worker.name(),
state: worker.state,
status: worker.worker.status(),
errors: worker.errors,
consecutive_errors: worker.consecutive_errors,
last_error: worker.last_error.take(),
});
}
}
if worker.state == WorkerState::Done {
info!("Worker {} (TID {}) exited", worker.worker.name(), worker.task_id);
} else {
// Yield to the Tokio scheduler between consecutive Busy steps so
// that a worker which never suspends on its own cannot starve other tasks.
if worker.state == WorkerState::Busy {
tokio::task::yield_now().await;
}
workers.push(async move {
worker.step().await;
worker

View file

@@ -45,6 +45,11 @@ pub struct Config {
)]
pub block_size: usize,
/// Maximum number of parallel block writes per PUT request
/// Higher values improve throughput but increase memory usage
/// Default: 3, Recommended: 10-30 for NVMe, 3-10 for HDD
#[serde(default = "default_block_max_concurrent_writes_per_request")]
pub block_max_concurrent_writes_per_request: usize,
/// Number of replicas. Can be any positive integer, but uneven numbers are more favorable.
/// - 1 for single-node clusters, or to disable replication
/// - 3 is the recommended and supported setting.
@@ -75,6 +80,10 @@ pub struct Config {
)]
pub block_ram_buffer_max: usize,
/// Maximum number of concurrent reads of block files on disk
#[serde(default = "default_block_max_concurrent_reads")]
pub block_max_concurrent_reads: usize,
/// Skip the permission check of secret files. Useful when
/// POSIX ACLs (or more complex chmods) are used.
#[serde(default)]
@@ -263,6 +272,9 @@ pub struct KubernetesDiscoveryConfig {
pub skip_crd: bool,
}
pub fn default_block_max_concurrent_writes_per_request() -> usize {
3
}
/// Read and parse configuration
pub fn read_config(config_file: PathBuf) -> Result<Config, Error> {
let config = std::fs::read_to_string(config_file)?;
@@ -280,6 +292,9 @@ fn default_block_size() -> usize {
fn default_block_ram_buffer_max() -> usize {
256 * 1024 * 1024
}
fn default_block_max_concurrent_reads() -> usize {
16
}
fn default_consistency_mode() -> String {
"consistent".into()

View file
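Given the serde defaults added above, the two new tunables would be set in a Garage configuration file like this (a hypothetical fragment; values are illustrative, following only the doc comment's guidance):

```toml
# default 3; the doc comment suggests 10-30 for NVMe, 3-10 for HDD
block_max_concurrent_writes_per_request = 10
# default 16
block_max_concurrent_reads = 16
```

Omitting either key keeps the default from `default_block_max_concurrent_writes_per_request()` or `default_block_max_concurrent_reads()`.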

@@ -2,7 +2,7 @@
use std::fmt;
use std::io;
use err_derive::Error;
use thiserror::Error;
use serde::{de::Visitor, Deserialize, Deserializer, Serialize, Serializer};
@@ -12,68 +12,61 @@ use crate::encode::debug_serialize;
/// Regroup all Garage errors
#[derive(Debug, Error)]
pub enum Error {
#[error(display = "IO error: {}", _0)]
Io(#[error(source)] io::Error),
#[error("IO error: {0}")]
Io(#[from] io::Error),
#[error(display = "Hyper error: {}", _0)]
Hyper(#[error(source)] hyper::Error),
#[error("Hyper error: {0}")]
Hyper(#[from] hyper::Error),
#[error(display = "HTTP error: {}", _0)]
Http(#[error(source)] http::Error),
#[error("HTTP error: {0}")]
Http(#[from] http::Error),
#[error(display = "Invalid HTTP header value: {}", _0)]
HttpHeader(#[error(source)] http::header::ToStrError),
#[error("Invalid HTTP header value: {0}")]
HttpHeader(#[from] http::header::ToStrError),
#[error(display = "Network error: {}", _0)]
Net(#[error(source)] garage_net::error::Error),
#[error("Network error: {0}")]
Net(#[from] garage_net::error::Error),
#[error(display = "DB error: {}", _0)]
Db(#[error(source)] garage_db::Error),
#[error("DB error: {0}")]
Db(#[from] garage_db::Error),
#[error(display = "Messagepack encode error: {}", _0)]
RmpEncode(#[error(source)] rmp_serde::encode::Error),
#[error(display = "Messagepack decode error: {}", _0)]
RmpDecode(#[error(source)] rmp_serde::decode::Error),
#[error(display = "JSON error: {}", _0)]
Json(#[error(source)] serde_json::error::Error),
#[error(display = "TOML decode error: {}", _0)]
TomlDecode(#[error(source)] toml::de::Error),
#[error("Messagepack encode error: {0}")]
RmpEncode(#[from] rmp_serde::encode::Error),
#[error("Messagepack decode error: {0}")]
RmpDecode(#[from] rmp_serde::decode::Error),
#[error("JSON error: {0}")]
Json(#[from] serde_json::error::Error),
#[error("TOML decode error: {0}")]
TomlDecode(#[from] toml::de::Error),
#[error(display = "Tokio join error: {}", _0)]
TokioJoin(#[error(source)] tokio::task::JoinError),
#[error("Tokio join error: {0}")]
TokioJoin(#[from] tokio::task::JoinError),
#[error(display = "Tokio semaphore acquire error: {}", _0)]
TokioSemAcquire(#[error(source)] tokio::sync::AcquireError),
#[error("Tokio semaphore acquire error: {0}")]
TokioSemAcquire(#[from] tokio::sync::AcquireError),
#[error(display = "Tokio broadcast receive error: {}", _0)]
TokioBcastRecv(#[error(source)] tokio::sync::broadcast::error::RecvError),
#[error("Tokio broadcast receive error: {0}")]
TokioBcastRecv(#[from] tokio::sync::broadcast::error::RecvError),
#[error(display = "Remote error: {}", _0)]
#[error("Remote error: {0}")]
RemoteError(String),
#[error(display = "Timeout")]
#[error("Timeout")]
Timeout,
#[error(
display = "Could not reach quorum of {} (sets={:?}). {} of {} request succeeded, others returned errors: {:?}",
_0,
_1,
_2,
_3,
_4
)]
#[error("Could not reach quorum of {0} (sets={1:?}). {2} of {3} request succeeded, others returned errors: {4:?}")]
Quorum(usize, Option<usize>, usize, usize, Vec<String>),
#[error(display = "Unexpected RPC message: {}", _0)]
#[error("Unexpected RPC message: {0}")]
UnexpectedRpcMessage(String),
#[error(display = "Corrupt data: does not match hash {:?}", _0)]
#[error("Corrupt data: does not match hash {0:?}")]
CorruptData(Hash),
#[error(display = "Missing block {:?}: no node returned a valid block", _0)]
#[error("Missing block {0:?}: no node returned a valid block")]
MissingBlock(Hash),
#[error(display = "{}", _0)]
#[error("{0}")]
Message(String),
}

View file

@@ -1,6 +1,6 @@
[package]
name = "garage_web"
version = "1.2.0"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>", "Quentin Dufour <quentin@dufour.io>"]
edition = "2018"
license = "AGPL-3.0"
@@ -20,7 +20,7 @@ garage_model.workspace = true
garage_util.workspace = true
garage_table.workspace = true
err-derive.workspace = true
thiserror.workspace = true
tracing.workspace = true
percent-encoding.workspace = true

View file

@@ -1,6 +1,6 @@
use err_derive::Error;
use hyper::header::HeaderValue;
use hyper::{HeaderMap, StatusCode};
use thiserror::Error;
use garage_api_common::generic_server::ApiError;
@@ -8,15 +8,15 @@ use garage_api_common::generic_server::ApiError;
#[derive(Debug, Error)]
pub enum Error {
/// An error received from the API crate
#[error(display = "API error: {}", _0)]
#[error("API error: {0}")]
ApiError(garage_api_s3::error::Error),
/// The file does not exist
#[error(display = "Not found")]
#[error("Not found")]
NotFound,
/// The client sent a request without host, or with unsupported method
#[error(display = "Bad request: {}", _0)]
#[error("Bad request: {0}")]
BadRequest(String),
}