Compare commits

..

304 commits

Author SHA1 Message Date
Alex Auvolat
b6b18427a5 use optimization level 3 and thin LTO for release builds (#1405)
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1405
Co-authored-by: Alex Auvolat <lx@deuxfleurs.fr>
Co-committed-by: Alex Auvolat <lx@deuxfleurs.fr>
2026-04-16 08:47:02 +00:00
Gauthier Zirnhelt
9987166b2b Fix the LifecycleWorker being uncooperative (#1396)
## Summary

This PR ensures that the `LifecycleWorker` yields to the Tokio scheduler at least once between each batch of 100 objects.

## Problem being solved

I'm administering a Garage cluster that has been experiencing timeouts on all endpoints while the lifecycle worker runs at midnight UTC: `Ping timeout` error messages, and even requests eventually failing with `Could not reach quorum ...`.

I have found that this happens while the lifecycle worker is working on a big bucket (containing millions of objects) with a lifecycle rule that applies to very few objects.
The `process_object()` function does not hit any `await`:
- `last_bucket` is always the same, so the `bucket_table` is not read asynchronously
- no transaction is made on the `object_table` because my lifecycle rule (almost) never applies to any object

The first commit in this PR adds an executable that reproduces the problem I've been experiencing in a self-contained way: the lifecycle worker starves the Tokio scheduler so badly that no other task is able to run (or only very rarely).
To run it: `cargo run -p garage_model --bin lifecycle-starvation-test`.
This commit can be dropped post-review, as it's only useful to demonstrate the starvation.

The error messages stopped completely after deploying the extra yield to the nodes of my cluster.
The duration of the lifecycle worker task does not appear to have changed at all, judging by the timestamps produced either by the self-contained binary or by each of my nodes in the `Lifecycle worker finished` message.
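For illustration, here is a minimal sketch of this kind of fix (a hypothetical loop, not the actual `LifecycleWorker` code): a CPU-bound scan that never awaits can be made cooperative by awaiting `tokio::task::yield_now()` every N items.

```
use tokio::task::yield_now;

fn process_object(_obj: &u64) {
    // stand-in for the real per-object logic (rule matching, etc.);
    // purely synchronous: there is no .await on the hot path
}

async fn scan_objects(objects: &[u64]) {
    for (i, obj) in objects.iter().enumerate() {
        process_object(obj);
        // The fix: hand control back to the Tokio scheduler every 100
        // objects so other tasks (pings, S3 requests) can make progress.
        if i % 100 == 0 {
            yield_now().await;
        }
    }
}

#[tokio::main]
async fn main() {
    let objects: Vec<u64> = (0..1_000_000).collect();
    scan_objects(&objects).await;
}
```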

## Note

Another potential fix would have been to force the `WorkerProcessor` to yield before re-enqueuing a busy task, but that would have affected all Garage workers even though only the `LifecycleWorker` is uncooperative.

Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1396
Reviewed-by: Alex <lx@deuxfleurs.fr>
Co-authored-by: Gauthier Zirnhelt <gauthier.zirnhelt@insimo.fr>
Co-committed-by: Gauthier Zirnhelt <gauthier.zirnhelt@insimo.fr>
2026-04-15 09:56:24 +00:00
trinity-1686a
b72b090a09 fix silent write errors (#1358)
fix #1355

Some write errors are not reported when calling write_all; that's notably the case for ENOSPC on small buffers (1MiB).
On ext4, the error is caught when calling flush(). This is hopefully the case on most local filesystems, though as far as I know this assumption doesn't hold for NFS.
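A sketch of the failure mode with std::io (illustration only, not Garage's actual block-writing code): buffered writes can report success while the bytes have only reached a buffer, so the results of flush() and fsync must be checked too.

```
use std::fs::File;
use std::io::{BufWriter, Write};

fn store_block(path: &str, data: &[u8]) -> std::io::Result<()> {
    let file = File::create(path)?;
    let mut writer = BufWriter::new(file);
    writer.write_all(data)?;      // can succeed silently on a full disk
    writer.flush()?;              // on ext4, ENOSPC is typically reported here
    writer.get_ref().sync_all()?; // fsync; on NFS the error may only show up here
    Ok(())
}

fn main() -> std::io::Result<()> {
    store_block("/tmp/blockfile", &[0u8; 1 << 20]) // 1MiB buffer, as in the report
}
```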

Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1358
Co-authored-by: trinity-1686a <trinity@deuxfleurs.fr>
Co-committed-by: trinity-1686a <trinity@deuxfleurs.fr>
2026-02-21 07:21:24 +00:00
Armael
8551aefed4 Fix: correctly parse CORS website configuration with no rules (#1320)
When sending a website config with an empty list of CORS rules, Garage incorrectly refuses it with the error message "Invalid XML: missing field `CORSRule`".
This fixes the issue by following the quick-xml documentation on serde field attributes for this exact scenario: https://docs.rs/quick-xml/latest/quick_xml/de/#sequences-xsall-and-xssequence-xml-schema-types
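A minimal sketch of the quick-xml/serde pattern in question (simplified types, not Garage's actual model): marking the sequence field `#[serde(default)]` lets an empty `<CORSConfiguration/>` deserialize to an empty rule list instead of failing.

```
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct CorsConfiguration {
    // `default` makes a missing sequence deserialize to an empty Vec
    // instead of erroring with: missing field `CORSRule`
    #[serde(rename = "CORSRule", default)]
    cors_rules: Vec<CorsRule>,
}

#[derive(Debug, Deserialize)]
struct CorsRule {
    #[serde(rename = "AllowedOrigin", default)]
    allowed_origins: Vec<String>,
}

fn main() {
    let conf: CorsConfiguration =
        quick_xml::de::from_str("<CORSConfiguration></CORSConfiguration>").unwrap();
    assert!(conf.cors_rules.is_empty());
}
```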

(I've based this PR on main-v1 because we want it for deuxfleurs' deployment.)

Co-authored-by: Armaël Guéneau <armael.gueneau@ens-lyon.org>
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1320
Co-authored-by: Armael <armael@noreply.localhost>
Co-committed-by: Armael <armael@noreply.localhost>
2026-02-07 13:11:20 +00:00
Alex Auvolat
47bf5d9fb0 bump version to v1.3.1 2026-01-24 13:01:27 +01:00
Alex Auvolat
5df37dae5e update cargo dependencies in main-v1 (#1299)
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1299
Co-authored-by: Alex Auvolat <lx@deuxfleurs.fr>
Co-committed-by: Alex Auvolat <lx@deuxfleurs.fr>
2026-01-24 11:59:01 +00:00
Alex
44af0bdab3 Merge pull request 'Backport #1283 and #1290 to main-v1' (#1297) from backports-v1 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1297
2026-01-24 11:34:28 +00:00
rmoff
a7d6620e18 Fix typo in error message 2026-01-24 12:21:45 +01:00
Joe Anderson
8eb12755e4 Allow bucket to be missing from presigned post params 2026-01-24 12:21:25 +01:00
maximilien
c685a2cbaf Merge pull request 'Update doc/book/cookbook/binary-packages.md' (#1269) from nmstoker/garage:main-v1 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1269
2025-12-21 21:12:15 +00:00
maximilien
969f42a970 Merge pull request 'feat: add service annotations' (#1264) from deimosfr/garage:feat/add_helm_svc_annotations into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1264
Reviewed-by: maximilien <git@mricher.fr>
2025-12-21 21:00:00 +00:00
nmstoker
424d4f8d4d Update doc/book/cookbook/binary-packages.md
Correct the Arch Linux link: garage is now available in the official repos under extra, and no longer in the AUR.
2025-12-20 13:16:38 +00:00
Pierre Mavro
bf5290036f
feat: add service annotations 2025-12-18 18:12:22 +01:00
Alex
4efc8bac07 Merge pull request 'Add the parameter, which replaces . This is to accommodate different storage media such as HDD and NVMe.' (#1251) from perrynzhou/garage:dev into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1251
Reviewed-by: Alex <lx@deuxfleurs.fr>
2025-12-17 10:05:49 +00:00
perrynzhou
f3dcc39903 Merge branch 'main-v1' into dev 2025-12-17 10:05:19 +00:00
maximilien
43e02920c2 Merge pull request 'docs: fix typo in doc/book/cookbook/kubernetes.md' (#1259) from simonpasquier/garage:fix-typo into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1259
Reviewed-by: maximilien <me@mricher.fr>
2025-12-17 07:09:59 +00:00
Simon Pasquier
dcc2fe4ac5
docs: fix typo in doc/book/cookbook/kubernetes.md 2025-12-16 10:16:44 +01:00
perrynzhou@gmail.com
e3a5ec6ef6 rename put_blocks_max_parallel to block_max_concurrent_writes_per_request and update configuration.md 2025-12-12 07:09:38 +08:00
perrynzhou@gmail.com
4d124e1c76 Add the parameter, which replaces . This is to accommodate different storage media such as HDD and NVMe. 2025-12-10 06:43:51 +08:00
Alex
d769a7be5d Merge pull request 'Update rust toolchain to 1.91.0' (#1233) from toolchain-update into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1233
2025-11-25 09:58:17 +00:00
Alex Auvolat
511cf0c6ec disable awscli checksumming in ci scripts
required because garage.deuxfleurs.fr is still running v1.x
2025-11-24 18:37:34 +01:00
Alex Auvolat
95693d45b2 run cargo fmt as a nix derivation 2025-11-24 18:09:53 +01:00
Alex Auvolat
ca296477f3 disable checksums in aws cli (todo: revert in main-v2) 2025-11-24 17:58:57 +01:00
Alex Auvolat
ca3b4a050d update nixos image used in woodpecker ci 2025-11-24 17:35:51 +01:00
Alex Auvolat
a057ab23ea Update rust toolchain 2025-11-24 11:09:46 +01:00
Alex
58bc65b9a8 Merge pull request 'migrate to thiserror, garage-v1' (#1218) from thiserror into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1218
Reviewed-by: Alex <lx@deuxfleurs.fr>
2025-11-12 08:05:32 +00:00
trinity-1686a
ac851d6dee fmt 2025-11-01 18:04:54 +01:00
trinity-1686a
eac2aa6fe4 Merge pull request 'fix: default config path changed for alpine binary' (#1204) from berndsen-io/garage:fix-alpine-docs into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1204
Reviewed-by: trinity-1686a <trinity@deuxfleurs.fr>
2025-11-01 16:43:32 +00:00
trinity-1686a
1e0201ada2 Merge pull request 'Update link to signature v2.' (#1211) from teo-tsirpanis/garage:sigv2-docs into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1211
Reviewed-by: trinity-1686a <trinity@deuxfleurs.fr>
2025-11-01 16:43:05 +00:00
trinity-1686a
82297371bf migrate to thiserror
it doesn't generate a bazillion warnings at compile time
2025-11-01 17:20:39 +01:00
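For context, this is the usual thiserror pattern (a generic sketch, not Garage's actual error type): the derive macro generates the Display and std::error::Error impls.

```
use thiserror::Error;

#[derive(Debug, Error)]
enum Error {
    // #[from] also derives the From impl used by the ? operator
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),
    #[error("invalid bucket name: {0}")]
    InvalidBucketName(String),
}

fn main() {
    println!("{}", Error::InvalidBucketName("Foo_Bar".into()));
}
```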
teo-tsirpanis
174f4f01a8 Update link to signature v2. 2025-10-26 15:54:08 +00:00
fgberry
1aac7b4875 chore: spacing 2025-10-24 11:25:33 +02:00
fgberry
b43c58cbe5 fix: default config path changed for alpine binary 2025-10-24 11:22:32 +02:00
Alex
9481ac428e Merge pull request 'sigv4: don't enforce x-amz-content-sha256 to be in signed headers list (fix #770)' (#1195) from fix-770 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1195
2025-10-14 09:34:35 +00:00
Alex Auvolat
1c29d04cc5 sigv4: don't enforce x-amz-content-sha256 to be in signed headers list (fix #770)
From the following page:
https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html

> In both cases, because the x-amz-content-sha256 header value is already
> part of your HashedPayload, you are not required to include the
> x-amz-content-sha256 header as a canonical header.
2025-10-14 11:18:25 +02:00
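A sketch of what the relaxed check amounts to (hypothetical helper, not Garage's actual signature code): the header must still be present and match the HashedPayload used in the string-to-sign; it just no longer has to appear in the SignedHeaders list.

```
fn check_content_sha256(
    header_value: Option<&str>,
    hashed_payload: &str,
    signed_headers: &[&str],
) -> Result<(), String> {
    let value = header_value.ok_or("missing x-amz-content-sha256 header")?;
    if value != hashed_payload {
        return Err("x-amz-content-sha256 does not match payload hash".into());
    }
    // Before this fix, requests were also rejected here unless
    // signed_headers.contains(&"x-amz-content-sha256") held.
    let _ = signed_headers;
    Ok(())
}

fn main() {
    assert!(check_content_sha256(Some("UNSIGNED-PAYLOAD"), "UNSIGNED-PAYLOAD", &["host"]).is_ok());
}
```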
Alex
b48a8eaa1f Merge pull request 'properly handle precondition time equal to object time' (#1193) from precondition-ms into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1193
2025-10-10 19:41:06 +00:00
trinity-1686a
42fd8583bd properly handle precondition time equal to object time 2025-10-08 17:54:22 +02:00
Alex
236af3a958 Merge pull request 'Garage v1.3.0' (#1166) from rel-v1.3.0 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1166
2025-09-14 21:26:21 +00:00
Alex Auvolat
4b1fdbef55 bump version to v1.3.0 2025-09-14 21:36:33 +02:00
Alex Auvolat
0f1b488be0 fix rust warnings 2025-09-14 21:25:37 +02:00
Alex
0bbf63ee0e Merge pull request 'update rusqlite and snapshot using VACUUM INTO' (#1164) from update-rusqlite into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1164
2025-09-14 18:28:01 +00:00
Alex
879d941d7b Merge pull request 'add garage repair clear-resync-queue (fix #1151)' (#1165) from clear-resync-queue into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1165
2025-09-14 17:50:41 +00:00
Alex Auvolat
d726cf0299 add garage repair clear-resync-queue (fix #1151) 2025-09-14 19:34:44 +02:00
Alex
0c7aeab6f8 Merge pull request 'garage_db: fix error handling logic (fix #1138)' (#1163) from fix-1138 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1163
2025-09-14 17:26:08 +00:00
Alex Auvolat
5687fc0375 update rusqlite and snapshot using VACUUM INTO 2025-09-14 19:22:36 +02:00
Alex
97f1e9ab52 Merge pull request 'Add Plakar documentation (backup tools)' (#1119) from Lapineige/garage:Plakar_support into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1119
2025-09-14 16:08:36 +00:00
Lapineige
60b1d78b56 Add Plakar documentation 2025-09-14 18:07:49 +02:00
Alex Auvolat
4c895a7186 garage_db: fix error handling logic (fix #1138) 2025-09-14 18:03:31 +02:00
Alex
c3b5cbf212 Merge pull request 'fix panic when cluster_layout cannot be saved (fix #1150)' (#1158) from fix-1150 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1158
2025-09-13 15:58:52 +00:00
Alex
57a467b5c0 Merge pull request 'Block manager: limit simultaneous block reads from disk' (#1157) from block-max-simultaneous-reads into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1157
2025-09-13 15:53:24 +00:00
Alex Auvolat
6cf6db5c61 fix panic when cluster_layout cannot be saved (fix #1150) 2025-09-13 17:49:25 +02:00
Alex Auvolat
d5a57e3e13 block: read_block: don't add not found blocks to resync queue 2025-09-13 17:38:23 +02:00
Alex Auvolat
5cf354acb4 block: maximum number of simultaneous reads 2025-09-13 17:38:06 +02:00
Alex
2b007ddea3 Merge pull request 'woodpecker: require the nix=enabled label' (#1152) from woodpecker-nix-flag into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1152
2025-09-04 09:10:10 +00:00
Alex Auvolat
c8599a8636 woodpecker: require the nix=enabled label 2025-09-04 11:06:46 +02:00
Alex
0b901bf291 Merge pull request 'garage_db: reduce frequency of sqlite snapshot progress log (fix #1129)' (#1146) from fix-1129 into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1146
2025-08-27 22:26:32 +00:00
Alex Auvolat
c8c20d6f47 garage_db: reduce frequency of sqlite snapshot progress log (fix #1129) 2025-08-28 00:07:35 +02:00
Alex
e5db610e4c Merge pull request 'K2V client: allow custom HTTP client' (#731) from k2v/shared_http_client into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/731
Reviewed-by: maximilien <me@mricher.fr>
2025-08-27 21:21:09 +00:00
Alex
65c6f8adea Merge pull request 'garage_db: refactor open function' (#1142) from factor-db-open into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1142
2025-08-27 21:10:59 +00:00
Alex Auvolat
54b9bf02a3 garage_db: refactor open function 2025-08-27 23:03:09 +02:00
Alex
469153233f Merge pull request 'garage_db: rename len to approximate_len as it is used for stats only' (#1141) from db-approximate-len into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1141
2025-08-27 20:44:50 +00:00
Alex Auvolat
90bba5889a garage_db: rename len to approximate_len as it is used for stats only 2025-08-27 21:23:45 +02:00
Alex
a64b567d43 Merge pull request 'Add experimental support for Fjall DB engine' (#906) from withings/garage:feat/fjall-db-engine into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/906
2025-08-27 19:09:40 +00:00
Alex Auvolat
6ea86db8cd document fjall db engine, remove flakey metadata_fsync implementation 2025-08-27 20:22:41 +02:00
Alex Auvolat
aa69c06f2b fix potential race condition and naming bug in fjall adapter 2025-08-27 20:22:38 +02:00
Alex Auvolat
a6c6c44310 nix: build and test fjall feature 2025-08-27 18:54:42 +02:00
Julien Kritter
96d7713915 Add support for an LSM-tree-based backend with Fjall 2025-08-27 18:54:34 +02:00
Alex
d64498c3d3 Merge pull request 'log access keys' (#1122) from 1686a/log-access-key into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1122
2025-08-27 16:18:16 +00:00
trinity-1686a
b340599e68 log access keys 2025-08-03 15:30:56 +02:00
Alex
5448012b27 Merge pull request 'Pixelfed_support' (#1118) from Lapineige/garage:Pixelfed_support into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1118
2025-08-02 15:03:57 +00:00
Alex
ce34d11a65 Merge pull request 'don't die on SIGHUP' (#1121) from 1686a/handle-sighup into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1121
2025-08-02 14:53:58 +00:00
Alex
8cb7623ebd Merge pull request 'handle ECONNABORTED' (#1120) from 1686a/handle-econnaborted into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1120
2025-08-02 14:53:45 +00:00
trinity-1686a
5469c95877 handle ECONNABORTED 2025-08-02 13:14:01 +02:00
trinity-1686a
f930c6f643 don't die on SIGHUP 2025-08-02 13:09:33 +02:00
Alex
afcb22bf16 Merge pull request 'Fix typo in peertube buckets names' (#1117) from Lapineige/garage:main into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1117
2025-08-02 08:27:01 +00:00
Lapineige
cc29a40d51 Actualiser doc/book/connect/apps/index.md 2025-08-01 21:35:15 +00:00
Lapineige
0f3f180c3e Merge branch 'main-v1' into main 2025-08-01 21:33:58 +00:00
Lapineige
70cf6004ae Fix typo in peertube buckets names 2025-08-01 21:32:59 +00:00
Alex
c7571ff89b Merge pull request 'Fix some unsoundness in lmdb adapter unsafe' (#1099) from krtab/garage:fix_some_ub into main-v1
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1099
2025-07-31 19:38:23 +00:00
Arthur Carcano
1b42919bf7 Fix some unsoundness in lmdb adapter unsafe 2025-07-25 23:33:51 +02:00
Alex
3f4ab3a4a3 Merge pull request 'Garage v1.2.0' (#1068) from rel-1.2.0 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1068
2025-06-13 16:12:29 +00:00
Alex Auvolat
3a4afc04a9 cargo: update crossbeam-channel to avoid yanked version 2025-06-13 17:22:47 +02:00
Alex Auvolat
fbf03e9378 bump version to v1.2.0 2025-06-13 14:21:28 +02:00
Alex
9eb07d4c7b Merge pull request 'cli: mark block refs as deleted in garage block purge (fix #1055)' (#1067) from fix-1055 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1067
2025-06-13 11:53:41 +00:00
Alex Auvolat
85ee4f5d8c cli: mark block refs as deleted in garage block purge 2025-06-13 13:52:02 +02:00
Alex
328072d122 Merge pull request 'put web error in a basic webpage' (#1064) from trinity-1686a/garage:1686a/non-xml-web-error into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1064
2025-06-12 06:06:38 +00:00
trinity-1686a
26bc807905 put web error in a basic webpage
before, it was a plain string, with an xml content type

this caused browsers to show very ugly and meaningless pages
2025-06-10 22:23:06 +02:00
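A rough sketch of the idea (hypothetical helper built on the http crate, not the actual patch): wrap the error text in a minimal HTML document and label it text/html so browsers render something readable.

```
fn web_error_page(status: u16, msg: &str) -> http::Response<String> {
    // real code would also HTML-escape msg
    let body = format!(
        "<!DOCTYPE html><html><head><title>Error</title></head>\
         <body><h1>{status} Error</h1><p>{msg}</p></body></html>"
    );
    http::Response::builder()
        .status(status)
        .header("Content-Type", "text/html; charset=utf-8")
        .body(body)
        .unwrap()
}

fn main() {
    let resp = web_error_page(404, "Key not found");
    assert_eq!(resp.headers()["Content-Type"], "text/html; charset=utf-8");
}
```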
Alex
a9f5f242b2 Merge pull request 'feat: add log to journald feature' (#1056) from ragazenta/garage:feat/tracing-journald into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1056
2025-06-10 18:38:23 +00:00
maximilien
ae98abca5c Merge pull request 'Add eddster2309/ansible-role-garage as deployment option' (#1057) from eddster2309/garage:main into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1057
Reviewed-by: maximilien <me@mricher.fr>
2025-06-08 11:56:31 +00:00
eddster2309
adfa44ad70 Add architecture support 2025-06-03 09:22:43 +00:00
eddster2309
47143b88ad Add eddster2309/ansible-role-garage as deployment option 2025-06-03 09:15:57 +00:00
Renjaya Raga Zenta
8843aa92fa
feat: add log to journald feature
systemd-journald is used by most major Linux distros that run systemd.
This enables logging via the native systemd-journald protocol, instead
of just writing to stderr.
2025-06-02 11:55:27 +07:00
Alex
b601b3e46d Merge pull request 'documentation: Minor doc change to clarify why the capacity does not matter and how the zone name is used' (#1051) from ddxv/garage:docs-quick-start into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1051
2025-05-30 16:26:19 +00:00
Alex
a19d2f16e2 Merge pull request 'api: s3: implement get bucket acl' (#1045) from ragazenta/garage:feat/dummy-acl into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1045
2025-05-30 16:25:04 +00:00
trinity-1686a
fc8fc60f6d emit internal error when we detect race condition (#1053) (fix #1050)
I went with a `500`/`InternalError`/`Please try again.` because that is something I've seen AWS S3 report while developing other software, and I'm not convinced all clients would handle a 409 Conflict properly (GETs don't usually conflict).

Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1053
Co-authored-by: trinity-1686a <trinity@deuxfleurs.fr>
Co-committed-by: trinity-1686a <trinity@deuxfleurs.fr>
2025-05-30 16:24:12 +00:00
Alex
77079a1498 Merge pull request '[1.1.x] speed up UploadPartCopy' (#1047) from yuka/garage:uploadpartcopy-v1 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1047
2025-05-30 16:22:35 +00:00
James O'Claire
2a4f729b57 Minor doc change to clarify why the capacity does not matter and how the zone name is used 2025-05-28 09:49:50 +08:00
Renjaya Raga Zenta
1b042e379e
api: s3: implement get bucket acl 2025-05-26 09:43:15 +07:00
Yureka
ffbce0f689 speed up UploadPartCopy
(cherry picked from commit db54bf96c7)
2025-05-23 20:36:32 +02:00
Alex
37e5621dde Merge pull request 'documentation updates' (#1046) from doc-updates into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1046
2025-05-23 15:05:19 +00:00
Alex Auvolat
6529ff379a documentation updates 2025-05-23 17:02:23 +02:00
Alex
a8d73682a4 Merge pull request 'more resilience to inconsistent alias states' (#989) from fix-bucket-aliases into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/989
2025-05-22 17:41:42 +00:00
Alex Auvolat
8654eb19bf implement repair procedure to fix inconsistent bucket aliases 2025-05-22 19:34:38 +02:00
maximilien
54ea412188 Merge pull request 'Add kubernetes CRD' (#994) from babykart/garage:k8s-crd into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/994
2025-05-22 17:15:56 +00:00
Alex Auvolat
2ade8c86f6 more resilience to inconsistent alias states 2025-05-22 19:12:05 +02:00
babykart
b15e2cbb6c Update Kubernetes cookbook
Signed-off-by: babykart <babykart@gmail.com>
2025-05-22 17:11:14 +00:00
babykart
0fd1b7342b Add Kubernetes CRD and the related kustomization
Signed-off-by: babykart <babykart@gmail.com>
2025-05-22 17:11:14 +00:00
Alex
be16bc7a05 Merge pull request 'Fix behavior of CopyObject wrt x-amz-website-redirect-location' (#1037) from Armael/garage:copy-website-redirect into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1037
2025-05-22 13:57:28 +00:00
Alex
bfaa1ca6b7 Merge pull request 'api: lifecycle: 404 if missing lifecycle config' (#1043) from ragazenta/garage:no-lifecycle-response into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1043
2025-05-22 13:56:52 +00:00
Alex
de8eeab4ad Merge pull request 'optionally support puny code (fix #273)' (#1042) from trinity-1686a/garage:1686a/punnycode into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1042
2025-05-22 12:49:10 +00:00
Renjaya Raga Zenta
ae3f7ee76c
api: lifecycle: 404 if missing lifecycle config 2025-05-22 19:33:54 +07:00
trinity-1686a
2dc3a6dbbe document allow_punycode configuration option 2025-05-22 14:08:06 +02:00
Armaël Guéneau
c6bc3f229b Fix behavior of CopyObject wrt x-amz-website-redirect-location 2025-05-22 14:03:11 +02:00
trinity-1686a
bba9202f31 add test for punycode 2025-05-19 20:36:03 +02:00
trinity-1686a
a605a80806 support punnycode in api/web endpoint 2025-05-19 18:11:55 +02:00
trinity-1686a
539af12d21 allow punnycode in bucket name 2025-05-19 18:07:04 +02:00
Alex
a2a9e3cec4 Merge pull request 'doc: Add systemd example to increase file descriptors limit' (#1023) from baptiste/garage:systemd_openfiles into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1023
2025-05-09 10:34:07 +00:00
Baptiste Jonglez
14274bc13c doc: Add systemd example to increase file descriptors limit 2025-05-08 10:27:53 +02:00
Alex
bf4691d98a Merge pull request 'Fix #1007: hint that region can be changed depending on cluster config' (#1015) from garage-1007-update-region-in-doc into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1015
2025-04-24 07:41:32 +00:00
Maximilien R.
ad151cb1dc Fix #1007: hint that region can be changed depending on cluster config 2025-04-23 23:30:16 +02:00
babykart
3c20984a08 helm-chart: Cosmetic changes
Signed-off-by: babykart <babykart@gmail.com>
2025-04-21 10:04:53 +00:00
babykart
e6e4e051a1 helm-chart: Add metadata_auto_snapshot_interval
Signed-off-by: babykart <babykart@gmail.com>
2025-04-21 10:04:53 +00:00
babykart
9b38cba6f3 helm-chart: Add livenessProbe & readinessProbe
Signed-off-by: babykart <babykart@gmail.com>
2025-04-21 10:04:53 +00:00
Alex
4ef954d176 Merge pull request 'Fix Docker run volume mappings' (#1012) from Zoob/garage:main into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1012
2025-04-19 20:05:16 +00:00
Zoob
02498a93d0 doc: fix Docker run volume mappings 2025-04-19 18:46:36 +00:00
Alex
4caad5425d Merge pull request 'metadata: Create compact LMDB snapshots' (#1008) from baptiste/garage:lmdb_compact_snapshot into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/1008
2025-04-17 19:26:32 +00:00
Baptiste Jonglez
9ec3f8cc3c metadata: Create compact LMDB snapshots
See #1006

LMDB files never shrink, so we can end up with a large database that
contains a smaller amount of actual data.

Compacting the snapshots is an easy win: it will write faster to disk,
take less space, and if needed you can reimport an already-compacted
snapshot as the main database.
2025-04-12 23:18:50 +02:00
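Under the hood this maps to mdb_env_copy2 with the MDB_CP_COMPACT flag, which rewrites the B-tree without its free pages. With the heed wrapper, a compacting snapshot looks roughly like this (a sketch: paths and sizes are made up, and method names may differ between heed versions):

```
use heed::{CompactionOption, EnvOpenOptions};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let env = EnvOpenOptions::new()
        .map_size(10 * 1024 * 1024 * 1024) // 10 GiB map, as an example
        .open("meta/db.lmdb")?;

    // Compaction: the copy is only as big as the live data, not the
    // full never-shrinking LMDB file.
    env.copy_to_path("snapshots/latest/data.mdb", CompactionOption::Enabled)?;
    Ok(())
}
```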
Alex
14d2f2b18d Merge pull request 'update cargo dependencies' (#992) from update-deps into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/992
2025-03-21 09:06:06 +00:00
Alex Auvolat
a7d845a999 change aws-sdk features to avoid using aws-lc which doesn't compile on i686/arm 2025-03-20 17:05:43 +01:00
Alex Auvolat
dd20e5d22a update cargo dependencies 2025-03-20 13:36:01 +01:00
maximilien
6906a4ff12 Merge pull request 'doc: add instructions on how to increase PVC size' (#987) from Joker9944/garage:main into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/987
Reviewed-by: maximilien <me@mricher.fr>
2025-03-17 20:32:31 +00:00
Joker9944
9053782d71
doc: add instructions on how to increase PVC size 2025-03-15 00:32:18 +01:00
Alex
c96be1a9a8 Merge pull request 'doc/upgrading: slightly more precise wording' (#981) from Armael/garage:doc-upgrading into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/981
2025-03-07 15:16:24 +00:00
maximilien
98e56490a1 Merge pull request 'helm-chart: Fix headless service' (#976) from babykart/garage:headless-svc into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/976
2025-03-07 12:17:20 +00:00
Armaël Guéneau
e791ccec8f doc/upgrading: slightly more precise wording 2025-03-07 12:27:21 +01:00
maximilien
d605c4fed1 Explicitly set ClusterIP on headless service type
Signed-off-by: maximilien <maximilien@deuxfleurs.fr>
2025-03-07 09:17:05 +00:00
babykart
0ce5f7eb00
helm-chart: Fix headless service
Signed-off-by: babykart <babykart@gmail.com>
2025-03-05 20:26:12 +01:00
Alex
516255321f Merge pull request 'doc: fix version number in quick start' (#974) from fix-quickstart into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/974
2025-03-05 11:07:27 +00:00
Alex Auvolat
f3b05ff771 doc: fix version number in quick start 2025-03-05 12:06:05 +01:00
Alex
e254cc20e5 Merge pull request 'Garage v1.1.0' (#968) from rel-1.1 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/968
2025-03-05 10:56:34 +00:00
Alex Auvolat
12f15c4c2b fix readme paths in cargo.toml for new crates 2025-03-05 11:00:19 +01:00
Alex Auvolat
42c5d02cdf doc: fix "since vX.X.X" in multiple places 2025-03-05 10:19:51 +01:00
Alex Auvolat
4689b10448 bump version to v1.1.0 2025-03-05 10:19:51 +01:00
Alex
156b10ee65 Merge pull request 'admin api definition: fix globalAlias query parameter name (related: #971)' (#973) from admin-sdk-fix into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/973
2025-03-05 09:19:30 +00:00
Alex Auvolat
8647ebf003 admin api definition: fix globalAlias query parameter name (related: #971) 2025-03-05 10:16:36 +01:00
maximilien
67d7c0769b Merge pull request 'Add headless service for statefulSet serviceName' (#970) from babykart/garage:helm-headless-svc into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/970
Reviewed-by: maximilien <me@mricher.fr>
2025-03-05 08:59:36 +00:00
babykart
09ed5ab8cc
Fix documentation link
Signed-off-by: babykart <babykart@gmail.com>
2025-02-23 15:55:01 +01:00
babykart
a0ea28b0da
Add headless service for statefulSet serviceName
Signed-off-by: babykart <babykart@gmail.com>
2025-02-23 15:45:55 +01:00
Alex
c5237c31e7 Merge pull request 'Implement all HTTP preconditions in GetObject/HeadObject' (#967) from fix-804 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/967
2025-02-19 17:31:26 +00:00
Alex Auvolat
f87943a39d tests: add test for http preconditions 2025-02-19 18:26:03 +01:00
Alex Auvolat
c0846c56fe api: unify http precondition handling 2025-02-19 18:14:27 +01:00
Alex
1cb0ae10a8 Merge pull request 'fix crash in layout computation when changing all nodes of a zone to gateway mode' (#937) from baptiste/garage:fix_layout_crash into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/937
Reviewed-by: Alex <lx@deuxfleurs.fr>
2025-02-19 17:09:10 +00:00
Alex Auvolat
1a8f74fc94 api: GetObject: implement if-match and if-unmodified-since 2025-02-19 17:26:29 +01:00
Alex
2191620af5 Merge pull request 'web: implement x-amz-website-redirect-location' (#966) from redirect-location-header into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/966
2025-02-19 16:10:04 +00:00
Alex Auvolat
bf27a3ec98 web: implement x-amz-website-redirect-location 2025-02-19 17:04:10 +01:00
Alex
f64ec6e542 Merge pull request 'implement STREAMING-*-PAYLOAD-TRAILER' (#960) from fix-824 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/960
2025-02-19 09:59:32 +00:00
Alex Auvolat
6d38907dac test: verify saved checksums in streaming putobject tests 2025-02-18 22:02:04 +01:00
Alex Auvolat
cfe8e8d45c api: PutObject: save trailer checksum in metadata 2025-02-18 21:56:32 +01:00
Alex Auvolat
f6e805e7db api: various fixes 2025-02-18 21:47:53 +01:00
Alex Auvolat
45e10e55f9 update aws-sdk-s3 in tests and fix wrong checksumming behavior in GetObject 2025-02-18 15:33:42 +01:00
Alex Auvolat
730bfee753 api: validate trailing checksum + add test for unsigned-paylad-trailer 2025-02-18 15:33:42 +01:00
Alex Auvolat
ccab0e4ae5 api: fix optional \n after trailer checksum header 2025-02-18 15:33:42 +01:00
Alex Auvolat
abb60dcf7e api: remove content-encoding: aws-chunked for streaming payload 2025-02-18 15:33:42 +01:00
Alex Auvolat
f8b0817ddc api: streaming signature: fix trailer parsing 2025-02-18 12:00:41 +01:00
Alex Auvolat
21c0dda16a api: refactor: move checksumming code around again 2025-02-17 20:11:06 +01:00
Alex Auvolat
658541d812 api: use checksumming in api_common::signature for put/putpart 2025-02-17 19:54:25 +01:00
Alex Auvolat
c5df820e2c api: start refactor of signature to calculate checksums earlier 2025-02-17 18:47:06 +01:00
Alex Auvolat
a04d6cd5b8 api: streaming: parse unsigned streaming bodies and payload trailers 2025-02-17 16:23:24 +01:00
Alex Auvolat
44a896f9b5 api: add logic to parse x-amz-content-sha256 2025-02-16 18:25:35 +01:00
Alex Auvolat
cee7560fc1 api: refactor: move checksum algorithms to common 2025-02-16 17:25:55 +01:00
Alex Auvolat
2f0c5ca220 signature: refactor: move constant defs to mod.rs 2025-02-16 16:34:18 +01:00
Alex
859b38b0d2 Merge pull request 'fix compilation warnings' (#959) from fixes into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/959
2025-02-14 17:32:30 +00:00
Alex Auvolat
2729a71d9d fix warning in garage test 2025-02-14 18:27:00 +01:00
Alex Auvolat
c9d00f5f7b garage_api_s3: remove unused field in ListPartsQuery 2025-02-14 18:25:23 +01:00
Alex
89c944ebd6 Merge pull request 's3api: return Location in CompleteMultipartUpload (fix #852)' (#958) from fix-852 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/958
2025-02-14 17:16:58 +00:00
Alex Auvolat
24470377c9 garage_model: fix warning about dead code 2025-02-14 18:12:14 +01:00
Alex Auvolat
5b26545abf fix deprecated uses of chrono in lifecycle worker 2025-02-14 18:08:23 +01:00
Alex Auvolat
9c7e3c7bde remove cargo build options in makefile to avoid mistakes 2025-02-14 18:06:07 +01:00
Alex Auvolat
165f9316e2 s3api: return Location in CompleteMultipartUpload (fix #852)
NB. The location returned is not guaranteed to work in all cases.
This already fixes the parse issue in #852.
2025-02-14 18:05:07 +01:00
Alex
a94adf804f Merge pull request 'block manager: avoid deadlock in fix_block_location (fix #845)' (#957) from fix-845 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/957
2025-02-14 16:53:01 +00:00
Alex Auvolat
e4c9a8cd53 block manager: avoid deadlock in fix_block_location (fix #845) 2025-02-14 17:41:50 +01:00
Alex
9312c6bbcb Merge pull request 'Store data blocks only on nodes in the latest cluster layout version (fix #815)' (#956) from fix-815 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/956
2025-02-14 15:53:16 +00:00
Alex Auvolat
fdf4dad728 block resync: avoid saving blocks to draining nodes 2025-02-14 16:45:55 +01:00
Alex Auvolat
6820b69f30 block manager: improve read strategy to find blocks faster 2025-02-14 16:45:55 +01:00
Alex Auvolat
d0104b9f9b block manager: write blocks only to currently active layout version (fix #815)
avoid wastefully writing blocks to nodes that will discard them as soon
as the layout migration is finished
2025-02-14 16:45:55 +01:00
Alex
3fe8db9e52 Merge pull request 'web_server.rs: Added bucket domain to observability' (#608) from jpds/garage:domain-web-requests into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/608
2025-02-14 14:26:08 +00:00
Alex
627a37fe9f Merge pull request 's3 api: parse x-id query parameter and warn of any inconsistency (fix #822)' (#954) from fix-822 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/954
2025-02-14 14:07:01 +00:00
Alex Auvolat
2f55889835 add configuration option to enable/disable monitoring bucket in web metrics 2025-02-14 14:59:00 +01:00
Jonathan Davies
8b9cc5ca3f web_server.rs: Added bucket domain to observability. 2025-02-14 14:36:20 +01:00
Alex
a1533d2919 Merge pull request 'cli: return info of all nodes when doing garage stats -a (fix #814)' (#953) from fix-814 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/953
2025-02-14 13:31:42 +00:00
Alex Auvolat
c1b39d9ba1 s3 api: parse x-id query parameter and warn of any inconsistency (fix #822) 2025-02-14 14:30:58 +01:00
Alex Auvolat
d84308c413 cli: return info of all nodes when doing garage stats -a (fix #814) 2025-02-14 14:11:41 +01:00
Alex
63f20bdeab Merge pull request 'db-snapshot: Add error handling to metadata snapshot creation' (#930) from handle_snapshot_errors into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/930
Reviewed-by: Armael <armael@noreply.localhost>
2025-02-14 11:52:58 +00:00
Baptiste Jonglez
a2e134f036 db-snapshot: propagate any node snapshot error through RPC call
In particular, it means that "garage meta snapshot --all" will get an exit
code of 1 if any node fails to snapshot.

This makes sure that any external tool trying to snapshot nodes (e.g. from
cron) will be aware of the failure.

Fix #920
2025-02-07 00:29:43 +01:00
Baptiste Jonglez
06aa4b604f db-snapshot: Fix error reporting when using "garage meta snapshot --all"
Snapshot errors on remote nodes were not reported at all.

We now get proper error output such as:

    0fa0f35be69528ab  error: Internal error: DB error: LMDB: No space left on device (os error 28)
    88d92e2971d14bae  ok

Fix #920
2025-02-07 00:18:01 +01:00
Alex
d3226bfa91 Merge pull request 'remove uses of #[async_trait]' (#952) from remove-async-trait into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/952
2025-02-05 19:52:00 +00:00
Alex Auvolat
af67626ab2 remove async_trait for TableRepair 2025-02-05 20:45:07 +01:00
Alex Auvolat
5475da8ea8 remove async_trait used in generic_server.rs 2025-02-05 20:31:34 +01:00
Alex Auvolat
620dc58560 remove async_trait for traits declared in garage_net 2025-02-05 20:22:16 +01:00
Alex
47e87c8739 Merge pull request 'upgrade Rust compiler and Cargo dependencies' (#951) from nix-crane into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/951
2025-02-03 17:49:00 +00:00
Alex Auvolat
34599bff51 update all Cargo dependencies except AWS crates and their dependencies 2025-02-03 17:46:54 +01:00
Alex Auvolat
ec1a475923 build with rust 1.82.0 2025-02-03 17:46:48 +01:00
Alex
b9df2d1ad1 Merge pull request 'compile with crane' (#950) from nix-crane into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/950
2025-02-03 15:54:54 +00:00
Alex Auvolat
390a5d97fe nix, ci: build with Crane
This removes our dependency on cargo2nix, which was causing us some
issues. Whereas cargo2nix creates one Nix derivation per crate, Crane
uses only two derivations:

1. Build dependencies only
2. Build the final binary

This means that during the second step, no caching can be done. For
instance, if we do a change in garage_model, we need to recompile all of
the Garage crates including those that do not depend on garage_model.
On the upside, this allows all of the Garage crates to be built at once
using cargo build logic, which is optimized for high parallelism and
better pipelining between all of the steps of the build. All in all,
this makes most builds faster than cargo2nix.

A few other changes have been made to the build scripts and CI:

- Unit tests are now run within a Nix derivation. In fact, we have
  different derivations to run the tests using LMDB and Sqlite as
  metadata db engines.

- For debug builds, most CI steps now run in parallel (with the notable
  exception of the smoke test that runs after the build, which is
  inevitable).

- We no longer pass the GIT_VERSION argument when building debug builds
  and running the tests. This means that dev binaries and test
  binaries don't know the exact version of Garage they are from. That
  shouldn't be an issue in most cases.

- The not-dynamic.sh script has been fixed to fail if the file does not
  exist.
2025-02-03 16:39:50 +01:00
Alex
4dc2bc337f Merge pull request 'woodpecker: use parallel nix-build in debug builds' (#949) from nix-parallel into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/949
2025-02-01 18:58:15 +00:00
Alex Auvolat
5dd2791981 woodpecker: use parallel nix-build in debug builds 2025-02-01 19:48:01 +01:00
Alex
d601f31186 Merge pull request 'split garage_api in garage_api_{common,s3,k2v,admin}' (#947) from split-garage-api into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/947
2025-02-01 17:48:25 +00:00
Alex Auvolat
e4de7bdfd5 fix ci for more test crates 2025-01-31 19:21:36 +01:00
Alex Auvolat
d18c5ad0ff fix tests 2025-01-31 19:12:51 +01:00
Alex Auvolat
3d5e9a027e cargo defs: simplify and fix descriptions 2025-01-31 18:54:29 +01:00
Alex Auvolat
f4ca7758b4 update cargo.nix 2025-01-31 18:48:07 +01:00
Alex Auvolat
4563313f87 use cargo-shear to remove many unused dependencies between crates 2025-01-31 18:47:30 +01:00
Alex Auvolat
afa28706e5 split s3/cors.rs into also common/cors.rs 2025-01-31 18:42:14 +01:00
Alex Auvolat
84f1db91c4 fix things up 2025-01-31 18:34:57 +01:00
Alex Auvolat
9fa20d45be wip: split garage_api into garage_api_{common,s3,k2v,admin} 2025-01-31 18:18:29 +01:00
Alex
9330fd79d3 Merge pull request 'table::insert_many: avoid failure with zero items (fix #915)' (#946) from fix-915 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/946
2025-01-31 13:10:54 +00:00
Alex Auvolat
83f6928ff7 table::insert_many: avoid failure with zero items (fix #915) 2025-01-30 18:06:47 +01:00
Alex
ab71544499 Merge pull request 'api: better handling of helper errors to distinguish error codes' (#942) from fix-getkeyinfo-404 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/942
2025-01-29 18:25:44 +00:00
Alex
991edbe02c Merge pull request 'Update doc/book/connect/repositories.md' (#941) from yatesco/garage:main into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/941
2025-01-29 18:18:59 +00:00
Alex Auvolat
9f3c7c3720 api: better handling of helper errors to distinguish error codes 2025-01-29 19:14:34 +01:00
yatesco
bfde9152b8 Update doc/book/operations/multi-hdd.md
trivial spelling mistake
2025-01-29 13:40:41 +00:00
yatesco
7bb042f0b7 Update doc/book/connect/repositories.md
trivial spelling mistake
2025-01-29 13:34:35 +00:00
Alex
a1d081ee84 Merge pull request 's3 api: make x-amz-meta-* headers lowercase (fix #844)' (#938) from fix-844 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/938
2025-01-27 19:32:19 +00:00
Alex Auvolat
e8fa89e834 s3 api: make x-amz-meta-* headers lowercase (fix #844) 2025-01-27 19:58:06 +01:00
Alex
beedc9fd11 Merge pull request 'snapshot: sqlite: use a subdirectory for consistency with LMDB' (#932) from baptiste/garage:snapshot_consistency_sqlite into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/932
2025-01-27 18:50:11 +00:00
Baptiste Jonglez
6d798c640f WIP: fix crash in layout computation when changing all nodes of a zone to gateway mode
This change is probably not a proper fix; somebody with more expertise on
this code should look at it.

Here is how to reproduce the crash:

- start with a layout with two zones
- move all nodes of a zone to gateway mode: `garage layout assign fea54bcc081f318 -g`
- `garage layout show` will panic with a backtrace

Fortunately, the crash is only on the RPC client side, not on the Garage
server itself, and `garage layout revert` still works to go back to the
previous state.

As far as I can tell, this bug is present since Garage 0.9.0 which
includes the new layout assignation algorithm:

  https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/296
2025-01-27 19:33:57 +01:00
Alex
d4e3e60920 Merge pull request 'update nix crate to 0.29 and libc to 0.2.169' (#931) from neuschaefer/garage:nix into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/931
2025-01-27 18:09:51 +00:00
Baptiste Jonglez
43402c9619 snapshot: sqlite: use a subdirectory for consistency with LMDB
Currently, taking a snapshot of the metadata database with sqlite creates
a sqlite file without extension with the following format:

    snapshots/2025-01-26T15:29:17Z

This makes it hard to understand what kind of data this is, and is not
consistent with LMDB:

    snapshots/2025-01-26T15:29:17Z/data.mdb

With this change, we now get a directory with a single db.sqlite file:

    snapshots/2025-01-26T15:29:17Z/db.sqlite
2025-01-27 19:06:52 +01:00
Alex
efa6f3d85e Merge pull request 'db-snapshot: allow to set directory where snapshots are stored' (#933) from baptiste/garage:configure_metadata_snapshots_dir into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/933
2025-01-27 18:04:05 +00:00
Alex Auvolat
74a1b49b13 Update Cargo.nix 2025-01-27 18:37:05 +01:00
J. Neuschäfer
23d57b89dc update nix crate to 0.29 and libc to 0.2.169 2025-01-27 18:37:05 +01:00
Alex
5e3e1f4453 Merge pull request 'fix problems with CI doing work multiple times' (#936) from woodpecker-simplify into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/936
2025-01-27 17:36:27 +00:00
Baptiste Jonglez
59c153d280 db-snapshot: allow to set directory where snapshots are stored
Fix #926
2025-01-27 18:33:55 +01:00
Alex Auvolat
bb3e0f7d22 nix CI: reduce redundant work 2025-01-27 18:09:51 +01:00
Alex
0156e40c9d Merge pull request 'ci: fix woodpecker definitions to comply with woodpecker 3' (#935) from woodpecker3 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/935
Reviewed-by: maximilien <me@mricher.fr>
2025-01-27 12:03:46 +00:00
Alex Auvolat
f6f88065ad ci: fix woodpecker definitions to comply with woodpecker 3 2025-01-27 12:06:31 +01:00
Alex
591bd808ec Merge pull request 'doc: Fix Nix devenv setup' (#927) from fix_devenv into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/927
2025-01-23 10:20:04 +00:00
maximilien
294cb99409 Merge pull request 'Fix all typos' (#928) from majst01/garage:fix-typos into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/928
Reviewed-by: maximilien <me@mricher.fr>
2025-01-17 06:06:14 +00:00
Stefan Majer
2eb9fcae20 Fix all typos 2025-01-16 13:22:00 +01:00
Baptiste Jonglez
58b9eb46fc doc: Fix Nix devenv setup
This is a hotfix to fix the doc for the current setup, see #868 for
possible future directions.
2025-01-16 10:00:12 +01:00
maximilien
255b01b626 Merge pull request 'Helm chart: Add garage.existingConfigmap and replace garage.garage.toml with garage.garageTomlString' (#923) from jessebot/garage:allow-existing-configmap into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/923
Reviewed-by: maximilien <me@mricher.fr>
2025-01-15 23:53:25 +00:00
Maximilien R.
58a765c51f Minor rewording, add some more hints 2025-01-15 23:51:07 +00:00
jessebot
1c431b8457 Add garage.existingConfigmap and replace garage.garage.toml with garage.garageTomlString
also moves all Go templating back to the configmap

and adds autogenerated docs via helm-docs

Signed-off-by: jessebot <jessebot@linux.com>
2025-01-15 23:51:07 +00:00
Alex
39ac034de5 Merge pull request 'update toolchain' (#924) from nix-update into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/924
2025-01-13 10:19:53 +00:00
Alex Auvolat
8ddb0dd485 nix build: switch to upstream cargo2nix (branch release-0.11.0) 2025-01-12 18:16:23 +01:00
Alex Auvolat
83887a8519 nix build: remove clippy build env that doesn't work 2025-01-12 17:51:33 +01:00
Alex Auvolat
0a15db6960 nix build: update rustc to v1.78 2025-01-12 17:37:36 +01:00
Alex Auvolat
295237476e fix formatting to comply with latest rustfmt 2025-01-12 17:36:25 +01:00
Alex Auvolat
9d83605736 flake: update versions of nixpkgs and rust-overlay 2025-01-12 17:34:04 +01:00
maximilien
4b1a7fb5e3 Merge pull request 'The version flag is required when applying a layout' (#921) from update-quickstart-docs-layout-apply into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/921
2025-01-09 00:41:35 +00:00
fabientot
b6aaebaf4c The version flag is required when applying a layout
I followed the documentation and got an error when the layout's version was not specified:

```
garage layout apply

Error: Internal error:
Please pass the new layout version number to ensure that you are writing the correct version of the cluster layout.
To know the correct value of the new layout version, invoke `garage layout show` and review the proposed changes.
```

This fixes that
2025-01-08 20:30:09 +00:00
Alex
7bbc8fec50 Merge pull request 'Fix #907' (#917) from vk/garage:fix_907 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/917
2025-01-04 16:07:40 +00:00
Vedad KAJTAZ
6689800986 Formatting with 2025-01-04 16:52:23 +01:00
Alex
d2246baab7 Merge pull request 'update flake.lock' (#918) from update-flake into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/918
2025-01-04 15:43:41 +00:00
Alex Auvolat
afac1d4d4a update flake.lock 2025-01-04 16:29:42 +01:00
Vedad KAJTAZ
6ca99fd02c formatting 2025-01-04 14:46:42 +01:00
Vedad KAJTAZ
b568bb863d Fix #907 2025-01-04 12:50:10 +01:00
Alex
b8f301a61d Merge pull request 'woodpecker: use modern syntax for secrets (removes warning)' (#912) from woodpecker-fix-warnings into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/912
2024-12-23 17:41:15 +00:00
Alex Auvolat
428ad2075d
woodpecker: use modern syntax for secrets (removes warning) 2024-12-23 18:00:22 +01:00
maximilien
3661a597fa Merge pull request 'feat: add use_local_tz configuration' (#908) from ragazenta/garage:feat/local-timezone into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/908
Reviewed-by: maximilien <me@mricher.fr>
2024-12-01 13:23:24 +00:00
Renjaya Raga Zenta
0fd3c0e794
doc: add use_local_tz configuration 2024-11-25 10:35:00 +07:00
Renjaya Raga Zenta
4c1bf42192
feat: add use_local_tz configuration
Used in lifecycle_worker to determine midnight time
2024-11-23 05:51:12 +07:00
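The effect, sketched with chrono (an illustration of the semantics, not the actual worker code): the day boundary that triggers lifecycle processing moves from UTC midnight to the node's local midnight.

```
use chrono::{Duration, Local, NaiveTime, TimeZone, Utc};

fn main() {
    // Next midnight in UTC (the default behavior).
    let tomorrow_utc = (Utc::now() + Duration::days(1)).date_naive();
    let midnight_utc = Utc.from_utc_datetime(&tomorrow_utc.and_time(NaiveTime::MIN));

    // Next midnight in the node's local timezone (use_local_tz = true).
    let tomorrow_local = (Local::now() + Duration::days(1)).date_naive();
    let midnight_local = Local
        .from_local_datetime(&tomorrow_local.and_time(NaiveTime::MIN))
        .single()
        .expect("ambiguous local midnight (DST transition)");

    println!("UTC midnight:   {midnight_utc}");
    println!("local midnight: {midnight_local}");
}
```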
maximilien
906c8708fd Merge pull request 'add extraVolumes and extraVolumeMounts to helm chart' (#896) from eugene-davis/garage:main into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/896
Reviewed-by: maximilien <me@mricher.fr>
2024-11-19 22:23:13 +00:00
Alex
747889a096 Merge pull request 'Update Python SDK documentation' (#887) from cryptolukas/garage:fix-python-sdk-docs into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/887
2024-11-19 09:15:03 +00:00
Alex
feb09a4bc6 Merge pull request 'doc: update mastodon media header pruning section' (#888) from teutat3s/garage:doc-update-mastodon-media into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/888
2024-11-19 09:14:34 +00:00
maximilien
aa8bc6aa88 Merge pull request 'doc: add Triplebit's use-case' (#901) from jonah/garage:triplebit into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/901
Reviewed-by: maximilien <me@mricher.fr>
2024-11-17 13:43:49 +00:00
Jonah Aragon
aba7902995
doc: add Triplebit's use-case 2024-11-15 16:27:46 -06:00
Alex
78de7b5bde Merge pull request 'fix bit/byte inversion in rpc secret error message' (#898) from trinity-1686a/garage:rpc-comment into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/898
2024-11-07 11:11:12 +00:00
trinity-1686a
9bd9e392ba fix bit/byte inversion in rpc secret error message 2024-11-07 00:29:26 +01:00
Eugene Davis
116ad479a8
add extraVolumes and extraVolumeMounts to helm chart 2024-10-26 21:14:08 +02:00
teutat3s
b6a58c5c16
doc: update mastodon media header pruning section
This is now possible since the upstream issue has been resolved.
https://github.com/mastodon/mastodon/issues/9567
2024-10-17 20:59:21 +02:00
Matthias Doering
2b0bfa9b18 the old value does not work out of the box 2024-10-14 17:20:26 +02:00
Alex
a18b3f0d1f Merge pull request 'Garage v1.0.1' (#881) from rel-v1.0.1 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/881
2024-09-22 13:02:02 +00:00
Alex Auvolat
7a143f46fc
Bump to version 1.0.1 2024-09-22 14:25:32 +02:00
Alex
c731f0291a Merge pull request 'fix logic in garage layout skip-dead-nodes + fix typo (fix #879)' (#880) from fix-skip-dead-nodes into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/880
2024-09-22 12:01:49 +00:00
Alex Auvolat
34453bc9c2
fix logic in garage layout skip-dead-nodes + fix typo (fix #879) 2024-09-22 13:47:27 +02:00
Alex
6da1353541 Merge pull request 'Don't fetch old values in cross-partition transactional inserts' (#877) from withings/garage:perf/kv/insert-no-return-cross-partition into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/877
2024-09-14 15:57:27 +00:00
Julien Kritter
bd71728874
Tests: don't expect old value after transactional insert 2024-09-12 10:50:53 +02:00
Julien Kritter
51ced60366
Don't fetch old values in cross-partition transactional inserts 2024-09-12 10:26:28 +02:00
Alex
586957b4b7 Merge pull request 'KV: don't retrieve values for write ops' (#873) from marvinj97/garage:perf/kv/insert-no-return into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/873
Reviewed-by: Alex <alex@adnab.me>
2024-09-10 09:06:29 +00:00
Alex
8d2bb4afeb Merge pull request 'Typo' (#875) from faust/garage:doc2 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/875
2024-09-10 09:03:31 +00:00
Faustin Lammler
c26f32b769
Typo
And remove trailing white space.
2024-09-10 09:34:59 +02:00
marvin-j97
8062ec7b4b test: fix db tests 2024-09-04 19:24:36 +02:00
marvin-j97
eb416a02fb dont assert deletion count in sqlite KV adapter 2024-09-04 18:51:51 +02:00
marvin-j97
74363c9060 perf(kv): dont retrieve values for write ops
see https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/851
2024-09-04 18:45:17 +02:00
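The gist, as a sketch over a toy KV trait (hypothetical signatures, not the exact garage_db API): write operations used to read and return the previous value, costing a lookup per write; for write-only call sites that value can simply not be fetched.

```
use std::collections::HashMap;

trait Kv {
    // Before: every write also fetched and returned the previous value.
    fn insert_return_old(&mut self, k: Vec<u8>, v: Vec<u8>) -> Option<Vec<u8>>;
    // After: write-only, so the engine can skip the read-before-write.
    fn insert(&mut self, k: Vec<u8>, v: Vec<u8>);
}

impl Kv for HashMap<Vec<u8>, Vec<u8>> {
    fn insert_return_old(&mut self, k: Vec<u8>, v: Vec<u8>) -> Option<Vec<u8>> {
        self.insert(k, v) // std's inherent insert returns the old value
    }
    fn insert(&mut self, k: Vec<u8>, v: Vec<u8>) {
        let _ = HashMap::insert(self, k, v); // old value deliberately dropped
    }
}

fn main() {
    let mut db: HashMap<Vec<u8>, Vec<u8>> = HashMap::new();
    Kv::insert(&mut db, b"k".to_vec(), b"v".to_vec());
}
```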
Alex
615698df7d Merge pull request 'update compiler to Rust 1.77' (#866) from rust-1.77 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/866
2024-08-26 19:08:00 +00:00
Alex Auvolat
7061fa5a56
use rust 1.77 in nix/compile.nix 2024-08-26 19:19:16 +02:00
Alex Auvolat
8881930cdd
update nixpkgs and rust-overlay sources in flake.nix 2024-08-26 19:19:16 +02:00
Alex
52f6c0760b Merge pull request 'update crate time (fix #849)' (#865) from update-time into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/865
2024-08-26 16:20:04 +00:00
Alex Auvolat
5b0602c7e9
update crate time (fix #849) 2024-08-26 18:11:21 +02:00
Alex
182b2af7e5 Merge pull request 'api servers: kill opened connections after SIGINT after 10s deadline (fix #806)' (#864) from exit-deadline into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/864
2024-08-25 18:34:55 +00:00
Alex Auvolat
baf32c9575
api servers: kill opened connections after SIGINT after 10s deadline (fix #806) 2024-08-25 20:04:56 +02:00
Alex
3dda1ee4f6 Merge pull request 'fix build when lmdb feature is disabled (fix #800)' (#863) from fix-800 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/863
2024-08-25 10:00:40 +00:00
Alex Auvolat
aa7ce9e97c
fix build when lmdb feature is disabled (fix #800) 2024-08-25 11:42:37 +02:00
Alex
8d62616ec0 Merge pull request 'layout: discard old info when it is completely out-of-date (fix #841)' (#861) from fix-841 into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/861
2024-08-24 11:12:39 +00:00
Alex
bd6fe72c06 Merge pull request 'Quick start: mention Docker (replace #803)' (#862) from dougreeder into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/862
2024-08-24 11:07:46 +00:00
Alex Auvolat
4c9e8ef625
doc: clarify quick start on using docker 2024-08-24 13:07:02 +02:00
Alex
3e711bc110 Merge pull request 'don't modify postobject request before validating policy' (#850) from trinity-1686a/garage:fix-acl-postobject into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/850
2024-08-24 10:49:14 +00:00
Alex Auvolat
7fb66b4944
layout: discard old info when it is completely out-of-date (fix #841) 2024-08-24 12:38:56 +02:00
Quentin
679ae8bcbb Merge pull request 'Set "no read ahead" on LMDB to improve performance when "LMDB size" > "RAM size"' (#855) from fix-lmdb-no-read-ahead into main
Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/855
Reviewed-by: Alex <alex@adnab.me>
2024-08-18 12:25:35 +00:00
Quentin Dufour
2a93ad0c84
force flag "no read ahead" on LMDB 2024-08-17 21:17:15 +02:00
trinity-1686a
f190032589 don't modify postobject request before validating policy 2024-08-10 20:10:47 +02:00
P. Douglas Reeder
0c3b198b22 Improves Quick Start for users not using Linux 2024-04-10 16:42:10 -04:00
Quentin Dufour
8b35a946d9
Allow external HTTP client 2024-02-23 17:09:47 +01:00
197 changed files with 7691 additions and 11445 deletions

@@ -1,3 +0,0 @@
-[target.x86_64-unknown-linux-gnu]
-linker = "clang"
-rustflags = ["-C", "link-arg=-fuse-ld=mold"]

@@ -1,3 +1,6 @@
+labels:
+  nix: "enabled"
+
 when:
   event:
     - push
@@ -9,39 +12,33 @@ when:
 steps:
   - name: check formatting
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     commands:
-      - nix-shell --attr devShell --run "cargo fmt -- --check"
+      - nix-build -j4 --attr flakePackages.fmt
   - name: build
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     commands:
-      - nix-build --no-build-output --attr clippy.amd64 --argstr git_version ${CI_COMMIT_TAG:-$CI_COMMIT_SHA}
+      - nix-build -j4 --attr flakePackages.dev
-  - name: unit + func tests
-    image: nixpkgs/nix:nixos-22.05
-    environment:
-      GARAGE_TEST_INTEGRATION_EXE: result-bin/bin/garage
-      GARAGE_TEST_INTEGRATION_PATH: tmp-garage-integration
+  - name: unit + func tests (lmdb)
+    image: nixpkgs/nix:nixos-24.05
     commands:
-      - nix-build --no-build-output --attr clippy.amd64 --argstr git_version ${CI_COMMIT_TAG:-$CI_COMMIT_SHA}
-      - nix-build --no-build-output --attr test.amd64
-      - ./result/bin/garage_db-*
-      - ./result/bin/garage_api-*
-      - ./result/bin/garage_model-*
-      - ./result/bin/garage_rpc-*
-      - ./result/bin/garage_table-*
-      - ./result/bin/garage_util-*
-      - ./result/bin/garage_web-*
-      - ./result/bin/garage-*
-      - GARAGE_TEST_INTEGRATION_DB_ENGINE=lmdb ./result/bin/integration-* || (cat tmp-garage-integration/stderr.log; false)
-      - nix-shell --attr ci --run "killall -9 garage" || true
-      - GARAGE_TEST_INTEGRATION_DB_ENGINE=sqlite ./result/bin/integration-* || (cat tmp-garage-integration/stderr.log; false)
-      - rm result
-      - rm -rv tmp-garage-integration
+      - nix-build -j4 --attr flakePackages.tests-lmdb
+  - name: unit + func tests (sqlite)
+    image: nixpkgs/nix:nixos-24.05
+    commands:
+      - nix-build -j4 --attr flakePackages.tests-sqlite
+  - name: unit + func tests (fjall)
+    image: nixpkgs/nix:nixos-24.05
+    commands:
+      - nix-build -j4 --attr flakePackages.tests-fjall
   - name: integration tests
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     commands:
-      - nix-build --no-build-output --attr clippy.amd64 --argstr git_version ${CI_COMMIT_TAG:-$CI_COMMIT_SHA}
+      - nix-build -j4 --attr flakePackages.dev
       - nix-shell --attr ci --run ./script/test-smoke.sh || (cat /tmp/garage.log; false)
+    depends_on: [ build ]

@@ -1,3 +1,6 @@
+labels:
+  nix: "enabled"
+
 when:
   event:
     - deployment
@@ -8,20 +11,21 @@ depends_on:
 steps:
   - name: refresh-index
-    image: nixpkgs/nix:nixos-22.05
-    secrets:
-      - source: garagehq_aws_access_key_id
-        target: AWS_ACCESS_KEY_ID
-      - source: garagehq_aws_secret_access_key
-        target: AWS_SECRET_ACCESS_KEY
+    image: nixpkgs/nix:nixos-24.05
+    environment:
+      AWS_ACCESS_KEY_ID:
+        from_secret: garagehq_aws_access_key_id
+      AWS_SECRET_ACCESS_KEY:
+        from_secret: garagehq_aws_secret_access_key
     commands:
       - mkdir -p /etc/nix && cp nix/nix.conf /etc/nix/nix.conf
       - nix-shell --attr ci --run "refresh_index"
   - name: multiarch-docker
-    image: nixpkgs/nix:nixos-22.05
-    secrets:
-      - docker_auth
+    image: nixpkgs/nix:nixos-24.05
+    environment:
+      DOCKER_AUTH:
+        from_secret: docker_auth
     commands:
       - mkdir -p /root/.docker
       - echo $DOCKER_AUTH > /root/.docker/config.json


@@ -1,3 +1,6 @@
+labels:
+  nix: "enabled"
+
 when:
   event:
     - deployment
@@ -16,18 +19,17 @@ matrix:
 steps:
   - name: build
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     commands:
-      - nix-build --no-build-output --attr pkgs.${ARCH}.release --argstr git_version ${CI_COMMIT_TAG:-$CI_COMMIT_SHA}
+      - nix-build --attr releasePackages.${ARCH} --argstr git_version ${CI_COMMIT_TAG:-$CI_COMMIT_SHA}

   - name: check is static binary
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     commands:
-      - nix-build --no-build-output --attr pkgs.${ARCH}.release --argstr git_version ${CI_COMMIT_TAG:-$CI_COMMIT_SHA}
-      - nix-shell --attr ci --run "./script/not-dynamic.sh result-bin/bin/garage"
+      - nix-shell --attr ci --run "./script/not-dynamic.sh result/bin/garage"

   - name: integration tests
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     commands:
       - nix-shell --attr ci --run ./script/test-smoke.sh || (cat /tmp/garage.log; false)
     when:
@@ -37,7 +39,7 @@ steps:
       ARCH: i386

   - name: upgrade tests
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     commands:
       - nix-shell --attr ci --run "./script/test-upgrade.sh v0.8.4 x86_64-unknown-linux-musl" || (cat /tmp/garage.log; false)
     when:
@@ -45,24 +47,23 @@ steps:
       ARCH: amd64

   - name: push static binary
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     environment:
       TARGET: "${TARGET}"
-    secrets:
-      - source: garagehq_aws_access_key_id
-        target: AWS_ACCESS_KEY_ID
-      - source: garagehq_aws_secret_access_key
-        target: AWS_SECRET_ACCESS_KEY
+      AWS_ACCESS_KEY_ID:
+        from_secret: garagehq_aws_access_key_id
+      AWS_SECRET_ACCESS_KEY:
+        from_secret: garagehq_aws_secret_access_key
     commands:
       - nix-shell --attr ci --run "to_s3"

   - name: docker build and publish
-    image: nixpkgs/nix:nixos-22.05
+    image: nixpkgs/nix:nixos-24.05
     environment:
       DOCKER_PLATFORM: "linux/${ARCH}"
       CONTAINER_NAME: "dxflrs/${ARCH}_garage"
-    secrets:
-      - docker_auth
+      DOCKER_AUTH:
+        from_secret: docker_auth
     commands:
       - mkdir -p /root/.docker
       - echo $DOCKER_AUTH > /root/.docker/config.json

Cargo.lock (generated, 2933 changed lines)

File diff suppressed because it is too large.

Cargo.nix (7102 changed lines)

File diff suppressed because it is too large.


@@ -8,7 +8,10 @@ members = [
     "src/table",
     "src/block",
     "src/model",
-    "src/api",
+    "src/api/common",
+    "src/api/s3",
+    "src/api/k2v",
+    "src/api/admin",
     "src/web",
     "src/garage",
     "src/k2v-client",
@@ -21,15 +24,18 @@ default-members = ["src/garage"]
 # Internal Garage crates
 format_table = { version = "0.1.1", path = "src/format-table" }
-garage_api = { version = "1.0.0", path = "src/api" }
-garage_block = { version = "1.0.0", path = "src/block" }
-garage_db = { version = "1.0.0", path = "src/db", default-features = false }
-garage_model = { version = "1.0.0", path = "src/model", default-features = false }
-garage_net = { version = "1.0.0", path = "src/net" }
-garage_rpc = { version = "1.0.0", path = "src/rpc" }
-garage_table = { version = "1.0.0", path = "src/table" }
-garage_util = { version = "1.0.0", path = "src/util" }
-garage_web = { version = "1.0.0", path = "src/web" }
+garage_api_common = { version = "1.3.1", path = "src/api/common" }
+garage_api_admin = { version = "1.3.1", path = "src/api/admin" }
+garage_api_s3 = { version = "1.3.1", path = "src/api/s3" }
+garage_api_k2v = { version = "1.3.1", path = "src/api/k2v" }
+garage_block = { version = "1.3.1", path = "src/block" }
+garage_db = { version = "1.3.1", path = "src/db", default-features = false }
+garage_model = { version = "1.3.1", path = "src/model", default-features = false }
+garage_net = { version = "1.3.1", path = "src/net" }
+garage_rpc = { version = "1.3.1", path = "src/rpc" }
+garage_table = { version = "1.3.1", path = "src/table" }
+garage_util = { version = "1.3.1", path = "src/util" }
+garage_web = { version = "1.3.1", path = "src/web" }
 k2v-client = { version = "0.0.4", path = "src/k2v-client" }

@@ -46,21 +52,19 @@ chrono = "0.4"
 crc32fast = "1.4"
 crc32c = "0.6"
 crypto-common = "0.1"
-digest = "0.10"
-err-derive = "0.3"
 gethostname = "0.4"
 git-version = "0.3.4"
 hex = "0.4"
 hexdump = "0.1"
 hmac = "0.12"
+idna = "0.5"
 itertools = "0.12"
 ipnet = "2.9.0"
 lazy_static = "1.4"
 md-5 = "0.10"
 mktemp = "0.5"
-nix = { version = "0.27", default-features = false, features = ["fs"] }
+nix = { version = "0.29", default-features = false, features = ["fs"] }
 nom = "7.1"
-parking_lot = "0.12"
 parse_duration = "2.1"
 pin-project = "1.0.12"
 pnet_datalink = "0.34"
@@ -79,12 +83,14 @@ pretty_env_logger = "0.5"
 structopt = { version = "0.3", default-features = false }
 syslog-tracing = "0.3"
 tracing = "0.1"
+tracing-journald = "0.3.1"
 tracing-subscriber = { version = "0.3", features = ["env-filter"] }

 heed = { version = "0.11", default-features = false, features = ["lmdb"] }
-rusqlite = "0.31.0"
+rusqlite = "0.37"
 r2d2 = "0.8"
-r2d2_sqlite = "0.24"
+r2d2_sqlite = "0.31"
+fjall = "2.4"

 async-compression = { version = "0.4", features = ["tokio", "zstd"] }
 zstd = { version = "0.13", default-features = false }
@@ -127,26 +133,21 @@ opentelemetry-contrib = "0.9"
 prometheus = "0.13"

 # used by the k2v-client crate only
-aws-sigv4 = { version = "1.1" }
-hyper-rustls = { version = "0.26", features = ["http2"] }
+aws-sigv4 = { version = "1.1", default-features = false }
+hyper-rustls = { version = "0.26", default-features = false, features = ["http1", "http2", "ring", "rustls-native-certs"] }
 log = "0.4"
-thiserror = "1.0"
+thiserror = "2.0"

 # ---- used only as build / dev dependencies ----
 assert-json-diff = "2.0"
 rustc_version = "0.4.0"
 static_init = "1.0"
-aws-config = "1.1.4"
-aws-sdk-config = "1.13"
-aws-sdk-s3 = "1.14"
+aws-smithy-runtime = { version = "1.8", default-features = false, features = ["tls-rustls"] }
+aws-sdk-config = { version = "1.62", default-features = false }
+aws-sdk-s3 = { version = "1.79", default-features = false, features = ["rt-tokio"] }

-[profile.dev]
-#lto = "thin" # disabled for now, adds 2-4 min to each CI build
-lto = "off"
-
 [profile.release]
-lto = true
-codegen-units = 1
-opt-level = "s"
-strip = true
+lto = "thin"
+codegen-units = 16
+opt-level = 3
+strip = "debuginfo"


@@ -3,5 +3,5 @@ FROM scratch
 ENV RUST_BACKTRACE=1
 ENV RUST_LOG=garage=info
-COPY result-bin/bin/garage /
+COPY result/bin/garage /
 CMD [ "/garage", "server"]


@@ -1,13 +1,8 @@
-.PHONY: doc all release shell run1 run2 run3
+.PHONY: doc all run1 run2 run3

 all:
-    clear; cargo build
+    clear
+    cargo build

-release:
-    nix-build --attr pkgs.amd64.release --no-build-output
-
-shell:
-    nix-shell

 # ----


@@ -3,53 +3,22 @@
 with import ./nix/common.nix;

 let
-  pkgs = import pkgsSrc { };
+  pkgs = import nixpkgs { };
   compile = import ./nix/compile.nix;
-  build_debug_and_release = (target: {
-    debug = (compile {
-      inherit system target git_version pkgsSrc cargo2nixOverlay;
-      release = false;
-    }).workspace.garage { compileMode = "build"; };
-    release = (compile {
-      inherit system target git_version pkgsSrc cargo2nixOverlay;
-      release = true;
-    }).workspace.garage { compileMode = "build"; };
-  });
-  test = (rustPkgs:
-    pkgs.symlinkJoin {
-      name = "garage-tests";
-      paths =
-        builtins.map (key: rustPkgs.workspace.${key} { compileMode = "test"; })
-        (builtins.attrNames rustPkgs.workspace);
-    });
+  build_release = target: (compile {
+    inherit target system git_version nixpkgs;
+    crane = flake.inputs.crane;
+    rust-overlay = flake.inputs.rust-overlay;
+    release = true;
+  }).garage;
 in {
-  pkgs = {
-    amd64 = build_debug_and_release "x86_64-unknown-linux-musl";
-    i386 = build_debug_and_release "i686-unknown-linux-musl";
-    arm64 = build_debug_and_release "aarch64-unknown-linux-musl";
-    arm = build_debug_and_release "armv6l-unknown-linux-musleabihf";
-  };
-  test = {
-    amd64 = test (compile {
-      inherit system git_version pkgsSrc cargo2nixOverlay;
-      target = "x86_64-unknown-linux-musl";
-      features = [
-        "garage/bundled-libs"
-        "garage/k2v"
-        "garage/lmdb"
-        "garage/sqlite"
-      ];
-    });
-  };
-  clippy = {
-    amd64 = (compile {
-      inherit system git_version pkgsSrc cargo2nixOverlay;
-      target = "x86_64-unknown-linux-musl";
-      compiler = "clippy";
-    }).workspace.garage { compileMode = "build"; };
-  };
+  releasePackages = {
+    amd64 = build_release "x86_64-unknown-linux-musl";
+    i386 = build_release "i686-unknown-linux-musl";
+    arm64 = build_release "aarch64-unknown-linux-musl";
+    arm = build_release "armv6l-unknown-linux-musleabihf";
+  };
+  flakePackages = flake.packages.${system};
 }


@@ -687,7 +687,7 @@ paths:
       operationId: "GetBucketInfo"
       summary: "Get a bucket"
       description: |
-        Given a bucket identifier (`id`) or a global alias (`alias`), get its information.
+        Given a bucket identifier (`id`) or a global alias (`globalAlias`), get its information.
         It includes its aliases, its web configuration, keys that have some permissions
         on it, some statistics (number of objects, size), number of dangling multipart uploads,
         and its quotas (if any).
@@ -701,7 +701,7 @@ paths:
           example: "b4018dc61b27ccb5c64ec1b24f53454bbbd180697c758c4d47a22a8921864a87"
           schema:
             type: string
-        - name: alias
+        - name: globalAlias
           in: query
           description: |
             The exact global alias of one of the existing buckets.


@@ -23,7 +23,7 @@ client = minio.Minio(
   "GKyourapikey",
   "abcd[...]1234",
   # Force the region, this is specific to garage
-  region="region",
+  region="garage",
 )
 ```


@@ -12,7 +12,7 @@ In this section, we cover the following web applications:
 | [Mastodon](#mastodon) | ✅ | Natively supported |
 | [Matrix](#matrix) | ✅ | Tested with `synapse-s3-storage-provider` |
 | [ejabberd](#ejabberd) | ✅ | `mod_s3_upload` |
-| [Pixelfed](#pixelfed) | ❓ | Not yet tested |
+| [Pixelfed](#pixelfed) | ✅ | Natively supported |
 | [Pleroma](#pleroma) | ❓ | Not yet tested |
 | [Lemmy](#lemmy) | ✅ | Supported with pict-rs |
 | [Funkwhale](#funkwhale) | ❓ | Not yet tested |
@@ -69,7 +69,7 @@ $CONFIG = array(
   'hostname' => '127.0.0.1', // Can also be a domain name, eg. garage.example.com
   'port' => 3900, // Put your reverse proxy port or your S3 API port
   'use_ssl' => false, // Set it to true if you have a TLS enabled reverse proxy
-  'region' => 'garage', // Garage has only one region named "garage"
+  'region' => 'garage', // Garage default region is named "garage", edit according to your cluster config
   'use_path_style' => true // Garage supports only path style, must be set to true
 ],
 ],
@@ -135,7 +135,7 @@ bucket but doesn't also know the secret encryption key.
 *Click on the picture to zoom*

 Add a new external storage. Put what you want in "folder name" (eg. "shared"). Select "Amazon S3". Keep "Access Key" for the Authentication field.
-In Configuration, put your bucket name (eg. nextcloud), the host (eg. 127.0.0.1), the port (eg. 3900 or 443), the region (garage). Tick the SSL box if you have put an HTTPS proxy in front of garage. You must tick the "Path access" box and you must leave the "Legacy authentication (v2)" box empty. Put your Key ID (eg. GK...) and your Secret Key in the last two input boxes. Finally click on the tick symbol on the right of your screen.
+In Configuration, put your bucket name (eg. nextcloud), the host (eg. 127.0.0.1), the port (eg. 3900 or 443), the region ("garage" if you use the default, or the one you configured in your `garage.toml`). Tick the SSL box if you have put an HTTPS proxy in front of garage. You must tick the "Path access" box and you must leave the "Legacy authentication (v2)" box empty. Put your Key ID (eg. GK...) and your Secret Key in the last two input boxes. Finally click on the tick symbol on the right of your screen.

 Now go to your "Files" app and a new "linked folder" has appeared with the name you chose earlier (eg. "shared").
@@ -191,10 +191,10 @@ garage key create peertube-key
 Keep the Key ID and the Secret key in a pad, they will be needed later.

-We need two buckets, one for normal videos (named peertube-video) and one for webtorrent videos (named peertube-playlist).
+We need two buckets, one for normal videos (named peertube-videos) and one for webtorrent videos (named peertube-playlists).
 ```bash
 garage bucket create peertube-videos
-garage bucket create peertube-playlist
+garage bucket create peertube-playlists
 ```

 Now we allow our key to read and write on these buckets:
@@ -238,7 +238,7 @@ object_storage:
   # Put localhost only if you have a garage instance running on that node
   endpoint: 'http://localhost:3900' # or "garage.example.com" if you have TLS on port 443

-  # Garage supports only one region for now, named garage
+  # Garage default region is named "garage", edit according to your config
   region: 'garage'

 credentials:
@@ -253,7 +253,7 @@ object_storage:
   proxify_private_files: false

 streaming_playlists:
-  bucket_name: 'peertube-playlist'
+  bucket_name: 'peertube-playlists'
   # Keep it empty for our example
   prefix: ''
@@ -335,6 +335,7 @@ From the [official Mastodon documentation](https://docs.joinmastodon.org/admin/t
 ```bash
 $ RAILS_ENV=production bin/tootctl media remove --days 3
+$ RAILS_ENV=production bin/tootctl media remove --days 15 --prune-profiles
 $ RAILS_ENV=production bin/tootctl media remove-orphans
 $ RAILS_ENV=production bin/tootctl preview_cards remove --days 15
 ```
@@ -353,8 +354,6 @@ Imports: 1.7 KB
 Settings: 0 Bytes
 ```

-Unfortunately, [old avatars and headers cannot currently be cleaned up](https://github.com/mastodon/mastodon/issues/9567).
-
 ### Migrating your data

 Data migration should be done with an efficient S3 client.
@@ -442,7 +441,7 @@ media_storage_providers:
     store_synchronous: True # do we want to wait that the file has been written before returning?
     config:
       bucket: matrix # the name of our bucket, we chose matrix earlier
-      region_name: garage # only "garage" is supported for the region field
+      region_name: garage # "garage" by default, edit according to your cluster config
       endpoint_url: http://localhost:3900 # the path to the S3 endpoint
       access_key_id: "GKxxx" # your Key ID
       secret_access_key: "xxxx" # your Secret Key


@@ -161,3 +161,49 @@ kopia repository validate-provider
 You can then run all the standard kopia commands: `kopia snapshot create`, `kopia mount`...
 Everything should work out-of-the-box.
+
+## Plakar
+
+Create your key and bucket on the Garage server:
+
+```bash
+garage key create my-plakar-key
+garage bucket create plakar-backups
+garage bucket allow plakar-backups --read --write --key my-plakar-key
+```
+
+On the Plakar server, add your Garage as a storage location:
+
+```bash
+# Use the region specified in your garage.toml (the default is "garage")
+plakar store add garageS3 s3://my-garage.tld/plakar-backups \
+    region=garage \
+    access_key=<Key ID from "garage key info my-plakar-key"> \
+    secret_access_key=<Secret key from "garage key info my-plakar-key">
+```
+
+Then create the repository:
+
+```bash
+plakar at @garageS3 create -plaintext # unencrypted
+# or
+plakar at @garageS3 create # encrypted
+```
+
+If you encrypt your backups (the Plakar default), you will need to define a strong passphrase. Do not forget to save it safely: it will be needed to decrypt your backups.
+
+After the repository has been created, check that everything works as expected (this might give an empty result as no file has been added yet, but there should be no error message):
+
+```bash
+plakar at @garageS3 check
+```
+
+Now that everything is configured, you can use Garage as your backup storage. For instance, sync it with a local backup storage:
+
+```bash
+plakar at ~/backups sync to @garageS3
+```
+
+Or list the S3 storage content:
+
+```bash
+plakar at @garageS3 ls
+```
+
+More information is available in the Plakar documentation: https://www.plakar.io/docs/main/quickstart/


@@ -17,7 +17,7 @@ Garage can also help you serve this content.
 ## Gitea

-You can use Garage with Gitea to store your [git LFS](https://git-lfs.github.com/) data, your users' avatar, and their attachements.
+You can use Garage with Gitea to store your [git LFS](https://git-lfs.github.com/) data, your users' avatars, and their attachments.

 You can configure a different target for each data type (check `[lfs]` and `[attachment]` sections of the Gitea documentation) and you can provide a default one through the `[storage]` section.

 Let's start by creating a key and a bucket (your key id and secret will be needed later, keep them somewhere):


@@ -8,18 +8,18 @@ have published Ansible roles. We list them and compare them below.
 ## Comparison of Ansible roles

-| Feature | [ansible-role-garage](#zorun-ansible-role-garage) | [garage-docker-ansible-deploy](#moan0s-garage-docker-ansible-deploy) |
-|------------------------------------|---------------------------------------------|---------------------------------------------------------------|
-| **Runtime** | Systemd | Docker |
-| **Target OS** | Any Linux | Any Linux |
-| **Architecture** | amd64, arm64, i686 | amd64, arm64 |
-| **Additional software** | None | Traefik |
-| **Automatic node connection** | ❌ | ✅ |
-| **Layout management** | ❌ | ✅ |
-| **Manage buckets & keys** | ❌ | ✅ (basic) |
-| **Allow custom Garage config** | ✅ | ❌ |
-| **Facilitate Garage upgrades** | ✅ | ❌ |
-| **Multiple instances on one host** | ✅ | ✅ |
+| Feature | [ansible-role-garage](#zorun-ansible-role-garage) | [garage-docker-ansible-deploy](#moan0s-garage-docker-ansible-deploy) | [eddster ansible-role-garage](#eddster-ansible-role-garage) |
+|------------------------------------|---------------------------------------------|---------------------------------------------------------------|---------------------------------|
+| **Runtime** | Systemd | Docker | Systemd |
+| **Target OS** | Any Linux | Any Linux | Any Linux |
+| **Architecture** | amd64, arm64, i686 | amd64, arm64 | arm64, arm, 386, amd64 |
+| **Additional software** | None | Traefik | Nginx and Keepalived (optional) |
+| **Automatic node connection** | ❌ | ✅ | ✅ |
+| **Layout management** | ❌ | ✅ | ✅ |
+| **Manage buckets & keys** | ❌ | ✅ (basic) | ✅ |
+| **Allow custom Garage config** | ✅ | ❌ | ❌ |
+| **Facilitate Garage upgrades** | ✅ | ❌ | ✅ |
+| **Multiple instances on one host** | ✅ | ✅ | ❌ |

 ## zorun/ansible-role-garage
@@ -49,3 +49,15 @@ structured DNS names, etc).
 As a result, this role makes it easier to start with Garage on Ansible,
 but is less flexible.
+
+## eddster2309/ansible-role-garage
+
+[Source code](https://github.com/eddster2309/ansible-role-garage), [Ansible galaxy](https://galaxy.ansible.com/ui/standalone/roles/eddster2309/garage/)
+
+This role is an opinionated but customisable role using the official Garage
+static binaries and only requires Systemd. As such, it should work on any
+Linux-based host. It includes all the necessary configuration to
+automatically set up a clustered Garage deployment. Most Garage
+configuration options are exposed through Ansible variables, so while you
+can't provide a custom config, you can get very close. It can optionally
+install an HA Nginx deployment with Keepalived.


@@ -15,9 +15,10 @@ Alpine Linux repositories (available since v3.17):
 apk add garage
 ```

-The default configuration file is installed to `/etc/garage.toml`. You can run
-Garage using: `rc-service garage start`. If you don't specify `rpc_secret`, it
-will be automatically replaced with a random string on the first start.
+The default configuration file is installed to `/etc/garage/garage.toml`. You can run
+Garage using: `rc-service garage start`.
+
+If you don't specify `rpc_secret`, it will be automatically replaced with a random string on the first start.

 Please note that this package is built without Consul discovery, Kubernetes
 discovery, OpenTelemetry exporter, and K2V features (K2V will be enabled once
@@ -26,7 +27,7 @@ it's stable).

 ## Arch Linux

-Garage is available in the [AUR](https://aur.archlinux.org/packages/garage).
+Garage is available in the official repositories under [extra](https://archlinux.org/packages/extra/x86_64/garage).

 ## FreeBSD


@@ -11,7 +11,7 @@ Firstly clone the repository:
 ```bash
 git clone https://git.deuxfleurs.fr/Deuxfleurs/garage
-cd garage/scripts/helm
+cd garage/script/helm
 ```

 Deploy with default options:
@@ -26,6 +26,13 @@ Or deploy with custom values:
 helm install --create-namespace --namespace garage garage ./garage -f values.override.yaml
 ```

+If you want to manage the CustomResourceDefinition used by Garage for its `kubernetes_discovery` outside of the Helm chart, add `garage.kubernetesSkipCrd: true` to your custom values and apply the kustomization before deploying the Helm chart:
+
+```bash
+kubectl apply -k ../k8s/crd
+helm install --create-namespace --namespace garage garage ./garage -f values.override.yaml
+```
+
 After deploying, cluster layout must be configured manually as described in [Creating a cluster layout](@/documentation/quick-start/_index.md#creating-a-cluster-layout). Use the following command to access garage CLI:

 ```bash
@@ -86,3 +93,62 @@ helm delete --namespace garage garage
 ```

 Note that this will leave behind custom CRD `garagenodes.deuxfleurs.fr`, which must be removed manually if desired.
+
+## Increase PVC size on running Garage instances
+
+Since the Garage Helm chart creates the data and meta PVCs based on `StatefulSet` templates, increasing the PVC size can be a bit tricky.
+
+### Confirm the `StorageClass` used for Garage supports volume expansion
+
+Confirm the storage class used for Garage:
+
+```bash
+kubectl -n garage get pvc
+NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
+data-garage-0   Bound    pvc-080360c9-8ce3-4acf-8579-1701e57b7f3f   30Gi       RWO            longhorn-local   <unset>                 77d
+data-garage-1   Bound    pvc-ab8ba697-6030-4fc7-ab3c-0d6df9e3dbc0   30Gi       RWO            longhorn-local   <unset>                 5d8h
+data-garage-2   Bound    pvc-3ab37551-0231-4604-986d-136d0fd950ec   30Gi       RWO            longhorn-local   <unset>                 5d5h
+meta-garage-0   Bound    pvc-3b457302-3023-4169-846e-c928c5f2ea65   3Gi        RWO            longhorn-local   <unset>                 77d
+meta-garage-1   Bound    pvc-49ace2b9-5c85-42df-9247-51c4cf64b460   3Gi        RWO            longhorn-local   <unset>                 5d8h
+meta-garage-2   Bound    pvc-99e2e50f-42b4-4128-ae2f-b52629259723   3Gi        RWO            longhorn-local   <unset>                 5d5h
+```
+
+In this case, the storage class is `longhorn-local`. Now, check whether `ALLOWVOLUMEEXPANSION` is true for that `StorageClass`:
+
+```bash
+kubectl get storageclasses.storage.k8s.io longhorn-local
+NAME             PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
+longhorn-local   driver.longhorn.io   Delete          Immediate           true                   103d
+```
+
+If your `StorageClass` does not support volume expansion, double-check whether you can enable it. Otherwise, your only real option is to spin up a new Garage cluster with increased size and migrate all data over.
+
+If your `StorageClass` supports expansion, you are free to continue.
+
+### Increase the size of the PVCs
+
+Increase the size of all PVCs to your desired size:
+
+```bash
+kubectl -n garage edit pvc data-garage-0
+kubectl -n garage edit pvc data-garage-1
+kubectl -n garage edit pvc data-garage-2
+kubectl -n garage edit pvc meta-garage-0
+kubectl -n garage edit pvc meta-garage-1
+kubectl -n garage edit pvc meta-garage-2
+```
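
Alternatively, the same change can be applied non-interactively. This is a sketch, not part of the original page: the target size `60Gi` and the loop over three replicas are assumptions to adjust to your cluster.

```bash
# Patch the storage request of each data PVC to the new size
for i in 0 1 2; do
  kubectl -n garage patch pvc "data-garage-$i" \
    -p '{"spec":{"resources":{"requests":{"storage":"60Gi"}}}}'
done
```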
+### Increase the size of the `StatefulSet` PVC template
+
+This is an optional step but, if it is not done, future instances of Garage will be created with the original size from the template.
+
+```bash
+kubectl -n garage delete sts --cascade=orphan garage
+statefulset.apps "garage" deleted
+```
+
+This will remove the Garage `StatefulSet` but leave the pods running. It may seem destructive, but it needs to be done this way since edits to the size of PVC templates are prohibited.
+
+### Redeploy the `StatefulSet`
+
+Now the size of future PVCs can be increased, and the Garage Helm chart can be upgraded. The new `StatefulSet` should take ownership of the orphaned pods again.
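
For instance, reusing the chart invocation shown earlier on this page (a sketch; the values file is the one from the example above):

```bash
# Re-create the StatefulSet from the updated chart; it re-adopts the orphaned pods
helm upgrade --namespace garage garage ./garage -f values.override.yaml
```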


@@ -96,14 +96,14 @@ to store 2 TB of data in total.
 ## Get a Docker image

 Our docker image is currently named `dxflrs/garage` and is stored on the [Docker Hub](https://hub.docker.com/r/dxflrs/garage/tags?page=1&ordering=last_updated).
-We encourage you to use a fixed tag (eg. `v1.0.0`) and not the `latest` tag.
-For this example, we will use the latest published version at the time of the writing which is `v1.0.0` but it's up to you
+We encourage you to use a fixed tag (eg. `v1.3.0`) and not the `latest` tag.
+For this example, we will use the latest published version at the time of the writing which is `v1.3.0` but it's up to you
 to check [the most recent versions on the Docker Hub](https://hub.docker.com/r/dxflrs/garage/tags?page=1&ordering=last_updated).

 For example:

 ```
-sudo docker pull dxflrs/garage:v1.0.0
+sudo docker pull dxflrs/garage:v1.3.0
 ```

 ## Deploying and configuring Garage
@@ -171,7 +171,7 @@ docker run \
   -v /etc/garage.toml:/etc/garage.toml \
   -v /var/lib/garage/meta:/var/lib/garage/meta \
   -v /var/lib/garage/data:/var/lib/garage/data \
-  dxflrs/garage:v1.0.0
+  dxflrs/garage:v1.3.0
 ```

 With this command line, Garage should be started automatically at each boot.
@@ -185,7 +185,7 @@ If you want to use `docker-compose`, you may use the following `docker-compose.yml`:
 version: "3"
 services:
   garage:
-    image: dxflrs/garage:v1.0.0
+    image: dxflrs/garage:v1.3.0
     network_mode: "host"
     restart: unless-stopped
     volumes:


@@ -28,6 +28,7 @@ StateDirectory=garage
 DynamicUser=true
 ProtectHome=true
 NoNewPrivileges=true
+LimitNOFILE=42000

 [Install]
 WantedBy=multi-user.target


@@ -50,3 +50,20 @@ locations. They use Garage themselves for the following tasks:

 The Deuxfleurs Garage cluster is a multi-site cluster currently composed of
 9 nodes in 3 physical locations.
+
+### Triplebit
+
+[Triplebit](https://www.triplebit.org) is a non-profit hosting provider and
+ISP focused on improving access to privacy-related services. They use
+Garage themselves for the following tasks:
+
+- Hosting of their homepage, [privacyguides.org](https://www.privacyguides.org/), and various other static sites
+- As a Mastodon object storage backend for [mstdn.party](https://mstdn.party/) and [mstdn.plus](https://mstdn.plus/)
+- As a PeerTube storage backend for [neat.tube](https://neat.tube/)
+- As a [Matrix media backend](https://github.com/matrix-org/synapse-s3-storage-provider)
+
+Triplebit's Garage cluster is a multi-site cluster currently composed of
+10 nodes in 3 physical locations.


@@ -36,7 +36,7 @@ sudo killall nix-daemon
 Now you can enter our nix-shell, all the required packages will be downloaded but they will not pollute your environment outside of the shell:

 ```bash
-nix-shell
+nix-shell -A devShell
 ```

 You can use the traditional Rust development workflow:
@@ -65,8 +65,8 @@ nix-build -j $(nproc) --max-jobs auto
 ```

 Our build has multiple parameters you might want to set:
-- `release` build with release optimisations instead of debug
-- `target allows` for cross compilation
+- `release` to build with release optimisations instead of debug
+- `target` allows for cross compilation
 - `compileMode` can be set to test or bench to build a unit test runner
 - `git_version` to inject the hash to display when running `garage stats`


@@ -21,14 +21,14 @@ data_dir = [
 ```

 Garage will automatically balance all blocks stored by the node
-among the different specified directories, proportionnally to the
+among the different specified directories, proportionally to the
 specified capacities.

 ## Updating the list of storage locations

 If you add new storage locations to your `data_dir`,
 Garage will not rebalance existing data between storage locations.
-Newly written blocks will be balanced proportionnally to the specified capacities,
+Newly written blocks will be balanced proportionally to the specified capacities,
 and existing data may be moved between drives to improve balancing,
 but only opportunistically when a data block is re-written (e.g. an object
 is re-uploaded, or an object with a duplicate block is uploaded).
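
For reference, a multi-directory `data_dir` with explicit capacities looks like this (a sketch; the mount points and capacities are assumptions):

```toml
data_dir = [
    { path = "/mnt/hdd1/garage/data", capacity = "2T" },
    { path = "/mnt/hdd2/garage/data", capacity = "3T" },
]
```

With these capacities, roughly 40% of newly written blocks would land on the first drive and 60% on the second.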


@@ -5,7 +5,7 @@ weight = 40
 Garage is meant to work on old, second-hand hardware.
 In particular, this makes it likely that some of your drives will fail, and some manual intervention will be needed.

-Fear not! For Garage is fully equipped to handle drive failures, in most common cases.
+Fear not! Garage is fully equipped to handle drive failures, in most common cases.

 ## A note on availability of Garage
@@ -61,7 +61,7 @@ garage repair -a --yes blocks
 This will re-synchronize blocks of data that are missing to the new HDD, reading them from copies located on other nodes.

 You can check on the advancement of this process by doing the following command:

 ```bash
 garage stats -a


@@ -71,7 +71,7 @@ The entire procedure would look something like this:

 2. Take each node offline individually to back up its metadata folder, bring them back online once the backup is done.
    You can do all of the nodes in a single zone at once as that won't impact global cluster availability.

-   Do not try to make a backup of the metadata folder of a running node.
+   Do not try to manually copy the metadata folder of a running node.

 **Since Garage v0.9.4,** you can use the `garage meta snapshot --all` command
 to take a simultaneous snapshot of the metadata database files of all your


 If a binary of the last version is not available for your architecture,
 or if you want a build customized for your system,
 you can [build Garage from source](@/documentation/cookbook/from-source.md).
+If none of these options work for you, you can also run Garage in a Docker
+container. When using Docker, the commands used in this guide will not work
+anymore. We recommend reading the tutorial on [configuring a
+multi-node cluster](@/documentation/cookbook/real-world.md) to learn about
+using Garage as a Docker container. For simplicity, a minimal command to launch
+Garage using Docker is provided in this quick start guide as well.

 ## Configuring and starting Garage
@@ -85,6 +92,9 @@ metrics_token = "$(openssl rand -base64 32)"
 EOF
 ```

+See the [Configuration file format](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/)
+for complete options and values.
+
 Now that your configuration file has been created, you may save it to the directory of your choice.
 By default, Garage looks for **`/etc/garage.toml`.**
 You can also store it somewhere else, but you will have to specify `-c path/to/garage.toml`
@@ -111,6 +121,26 @@ garage -c path/to/garage.toml server

 If you have placed the `garage.toml` file in `/etc` (its default location), you can simply run `garage server`.

+Alternatively, if you cannot or do not wish to run the Garage binary directly,
+you may use Docker to run Garage in a container using the following command:
+
+```bash
+docker run \
+  -d \
+  --name garaged \
+  -p 3900:3900 -p 3901:3901 -p 3902:3902 -p 3903:3903 \
+  -v /path/to/garage.toml:/etc/garage.toml \
+  -v /path/to/garage/meta:/var/lib/garage/meta \
+  -v /path/to/garage/data:/var/lib/garage/data \
+  dxflrs/garage:v1.3.0
+```
+
+Under Linux, you can substitute `--network host` for `-p 3900:3900 -p 3901:3901 -p 3902:3902 -p 3903:3903`.
+
+#### Troubleshooting
+
+Ensure your configuration file, `metadata_dir` and `data_dir` are readable by the user running the `garage` server or Docker.
+
 You can tune Garage's verbosity by setting the `RUST_LOG=` environment variable. \
 Available log levels are (from less verbose to more verbose): `error`, `warn`, `info` *(default)*, `debug` and `trace`.
@@ -131,6 +161,9 @@ It uses values from the TOML configuration file to find the Garage daemon running on the
 local node, therefore if your configuration file is not at `/etc/garage.toml` you will
 again have to specify `-c path/to/garage.toml` at each invocation.

+If you are running Garage in a Docker container, you can set `alias garage="docker exec -ti <container name> /garage"`
+to use the Garage binary inside your container.
+
 If the `garage` CLI is able to correctly detect the parameters of your local Garage node,
 the following command should be enough to show the status of your cluster:
@@ -149,11 +182,12 @@ ID  Hostname  Address  Tag  Zone  Capacit

 ## Creating a cluster layout

 Creating a cluster layout for a Garage deployment means informing Garage
-of the disk space available on each node of the cluster
-as well as the zone (e.g. datacenter) each machine is located in.
+of the disk space available on each node of the cluster (`-c`),
+as well as the name of the zone (e.g. datacenter) each machine is located in (`-z`).

-For our test deployment, we are using only one node. The way in which we configure
-it does not matter, you can simply write:
+For our test deployment, we have only one node, with a zone named `dc1` and a
+capacity of `1G`, though the capacity is ignored for a single-node deployment
+and can be changed later when adding new nodes.

 ```bash
 garage layout assign -z dc1 -c 1G <node_id>
@@ -166,7 +200,7 @@ For instance here you could write just `garage layout assign -z dc1 -c 1G 563e`.
 The layout then has to be applied to the cluster, using:

 ```bash
-garage layout apply
+garage layout apply --version 1
 ```
@@ -316,7 +350,7 @@ Check [our s3 compatibility list](@/documentation/reference-manual/s3-compatibil

 ### Other tools for interacting with Garage

-The following tools can also be used to send and recieve files from/to Garage:
+The following tools can also be used to send and receive files from/to Garage:

 - [minio-client](@/documentation/connect/cli.md#minio-client)
 - [s3cmd](@/documentation/connect/cli.md#s3cmd)


@@ -13,16 +13,19 @@ consistency_mode = "consistent"
 metadata_dir = "/var/lib/garage/meta"
 data_dir = "/var/lib/garage/data"
+metadata_snapshots_dir = "/var/lib/garage/snapshots"
 metadata_fsync = true
 data_fsync = false
 disable_scrub = false
+use_local_tz = false
 metadata_auto_snapshot_interval = "6h"

 db_engine = "lmdb"

 block_size = "1M"
 block_ram_buffer_max = "256MiB"
+block_max_concurrent_reads = 16
+block_max_concurrent_writes_per_request = 10

 lmdb_map_size = "1T"

 compression_level = 1
@@ -44,6 +47,7 @@ bootstrap_peers = [
   "212fd62eeaca72c122b45a7f4fa0f55e012aa5e24ac384a72a3016413fa724ff@[fc00:F::1]:3901",
 ]
+allow_punycode = false

 [consul_discovery]
 api = "catalog"
@@ -73,6 +77,7 @@ root_domain = ".s3.garage"
 [s3_web]
 bind_addr = "[::]:3902"
 root_domain = ".web.garage"
+add_host_to_metrics = true

 [admin]
 api_bind_addr = "0.0.0.0:3903"
@@ -89,12 +94,16 @@ The following gives details about each available configuration option.
 [Environment variables](#env_variables).

-Top-level configuration options:
+Top-level configuration options, in alphabetical order:
+[`allow_punycode`](#allow_punycode),
 [`allow_world_readable_secrets`](#allow_world_readable_secrets),
+[`block_max_concurrent_reads`](#block_max_concurrent_reads),
+[`block_max_concurrent_writes_per_request`](#block_max_concurrent_writes_per_request),
 [`block_ram_buffer_max`](#block_ram_buffer_max),
 [`block_size`](#block_size),
 [`bootstrap_peers`](#bootstrap_peers),
 [`compression_level`](#compression_level),
+[`consistency_mode`](#consistency_mode),
 [`data_dir`](#data_dir),
 [`data_fsync`](#data_fsync),
 [`db_engine`](#db_engine),
@@ -103,13 +112,14 @@ Top-level configuration options:
 [`metadata_auto_snapshot_interval`](#metadata_auto_snapshot_interval),
 [`metadata_dir`](#metadata_dir),
 [`metadata_fsync`](#metadata_fsync),
+[`metadata_snapshots_dir`](#metadata_snapshots_dir),
 [`replication_factor`](#replication_factor),
-[`consistency_mode`](#consistency_mode),
 [`rpc_bind_addr`](#rpc_bind_addr),
 [`rpc_bind_outgoing`](#rpc_bind_outgoing),
 [`rpc_public_addr`](#rpc_public_addr),
 [`rpc_public_addr_subnet`](#rpc_public_addr_subnet),
-[`rpc_secret`/`rpc_secret_file`](#rpc_secret).
+[`rpc_secret`/`rpc_secret_file`](#rpc_secret),
+[`use_local_tz`](#use_local_tz).

 The `[consul_discovery]` section:
 [`api`](#consul_api),
@@ -134,6 +144,7 @@ The `[s3_api]` section:
 [`s3_region`](#s3_region).

 The `[s3_web]` section:
+[`add_host_to_metrics`](#web_add_host_to_metrics),
 [`bind_addr`](#web_bind_addr),
 [`root_domain`](#web_root_domain).
@@ -145,13 +156,17 @@ The `[admin]` section:

 ### Environment variables {#env_variables}

-The following configuration parameter must be specified as an environment
-variable, it does not exist in the configuration file:
+The following configuration parameters must be specified as environment variables,
+they do not exist in the configuration file:

-- `GARAGE_LOG_TO_SYSLOG` (since v0.9.4): set this to `1` or `true` to make the
+- `GARAGE_LOG_TO_SYSLOG` (since `v0.9.4`): set this to `1` or `true` to make the
   Garage daemon send its logs to `syslog` (using the libc `syslog` function)
   instead of printing to stderr.
+- `GARAGE_LOG_TO_JOURNALD` (since `v1.2.0`): set this to `1` or `true` to make the
+  Garage daemon send its logs to `journald` (using the native protocol of `systemd-journald`)
+  instead of printing to stderr.
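
For example, to send logs to journald rather than stderr (a sketch; the variable is the one documented above, the direct `garage server` invocation is an assumption):

```bash
GARAGE_LOG_TO_JOURNALD=1 garage server
```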
 The following environment variables can be used to override the corresponding
 values in the configuration file:
@@ -163,7 +178,7 @@ values in the configuration file:

 ### Top-level configuration options

-#### `replication_factor` {#replication_factor}
+#### `replication_factor` (since `v1.0.0`) {#replication_factor}

 The replication factor can be any positive integer smaller than or equal to the node count in your cluster.
 The chosen replication factor has a big impact on the cluster's failure tolerance and performance characteristics.
@@ -211,7 +226,7 @@ is in progress. In theory, no data should be lost as rebalancing is a
 routine operation for Garage, although we cannot guarantee you that everything
 will go right in such an extreme scenario.

-#### `consistency_mode` {#consistency_mode}
+#### `consistency_mode` (since `v1.0.0`) {#consistency_mode}

 The consistency mode setting determines the read and write behaviour of your cluster.
@@ -273,6 +288,7 @@ as the index of all objects, object version and object blocks.

 Store this folder on a fast SSD drive if possible to maximize Garage's performance.

 #### `data_dir` {#data_dir}

 The directory in which Garage will store the data blocks of objects.
@@ -293,6 +309,25 @@ data_dir = [

 See [the dedicated documentation page](@/documentation/operations/multi-hdd.md)
 on how to operate Garage in such a setup.

+#### `metadata_snapshots_dir` (since `v1.1.0`) {#metadata_snapshots_dir}
+
+The directory in which Garage will store metadata snapshots when it
+performs a snapshot of the metadata database, either when instructed to do
+so from an RPC call or regularly through
+[`metadata_auto_snapshot_interval`](#metadata_auto_snapshot_interval).
+
+By default, Garage will store snapshots into a `snapshots/` subdirectory
+of [`metadata_dir`](#metadata_dir). This might quickly fill up your
+metadata storage space if you use snapshots, because Garage will need up
+to 4x the space of the existing metadata database: each snapshot requires
+roughly as much space as the original database, and Garage temporarily
+needs to store up to three different snapshots before it cleans up the oldest
+snapshot to go back to two stored snapshots.
+
+To prevent filling your disk, you might want to change this setting to a
+directory with ample available space, e.g. on the same storage space as
+[`data_dir`](#data_dir).
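
A minimal sketch of the corresponding configuration (the path is an assumption; pick any directory with ample free space):

```toml
metadata_snapshots_dir = "/mnt/hdd1/garage/meta-snapshots"
```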
 #### `db_engine` (since `v0.8.0`) {#db_engine}

 Since `v0.8.0`, Garage can use alternative storage backends as follows:
@@ -301,6 +336,7 @@ Since `v0.8.0`, Garage can use alternative storage backends as follows:
 | --------- | ----------------- | ------------- |
 | [LMDB](https://www.symas.com/lmdb) (since `v0.8.0`, default since `v0.9.0`) | `"lmdb"` | `<metadata_dir>/db.lmdb/` |
 | [Sqlite](https://sqlite.org) (since `v0.8.0`) | `"sqlite"` | `<metadata_dir>/db.sqlite` |
+| [Fjall](https://github.com/fjall-rs/fjall) (**experimental support** since `v1.3.0`) | `"fjall"` | `<metadata_dir>/db.fjall/` |
 | [Sled](https://sled.rs) (old default, removed since `v1.0`) | `"sled"` | `<metadata_dir>/db/` |

 Sled was supported until Garage v0.9.x, and was removed in Garage v1.0.
@@ -337,6 +373,14 @@ LMDB works very well, but is known to have the following limitations:
   so it is not the best choice for high-performance storage clusters,
   but it should work fine in many cases.

+- Fjall: a storage engine based on LSM trees, which theoretically allows for
+  higher write throughput than other storage engines that are based on B-trees.
+  Using Fjall could potentially improve Garage's performance significantly in
+  write-heavy workloads. **Support for Fjall is experimental at this point**,
+  we have added it to Garage for evaluation purposes only. **Do not use it for
+  production-critical workloads.**
+
 It is possible to convert Garage's metadata directory from one format to another
 using the `garage convert-db` command, which should be used as follows:
@@ -374,6 +418,7 @@ Here is how this option impacts the different database engines:
 |----------|------------------------------------|-------------------------------|
 | Sqlite   | `PRAGMA synchronous = OFF`         | `PRAGMA synchronous = NORMAL` |
 | LMDB     | `MDB_NOMETASYNC` + `MDB_NOSYNC`    | `MDB_NOMETASYNC`              |
+| Fjall    | default options                    | not supported                 |

 Note that the Sqlite database is always run in `WAL` mode (`PRAGMA journal_mode = WAL`).
@@ -390,7 +435,7 @@ at the cost of a moderate drop in write performance.
 Similarly to `metadata_fsync`, this is likely not necessary
 if geographical replication is used.

-#### `metadata_auto_snapshot_interval` (since Garage v0.9.4) {#metadata_auto_snapshot_interval}
+#### `metadata_auto_snapshot_interval` (since `v0.9.4`) {#metadata_auto_snapshot_interval}

 If this value is set, Garage will automatically take a snapshot of the metadata
 DB file at a regular interval and save it in the metadata directory.
@@ -427,6 +472,13 @@ you should delete it from the data directory and then call `garage repair
 blocks` on the node to ensure that it re-obtains a copy from another node on
 the network.

+#### `use_local_tz` (since `v1.1.0`) {#use_local_tz}
+
+By default, Garage runs the lifecycle worker every day at midnight in UTC. Set the
+`use_local_tz` configuration value to `true` if you want Garage to run the
+lifecycle worker at midnight in your local timezone. If you have multiple nodes,
+you should also ensure that each node has the same timezone configuration.
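
The corresponding configuration line (set to `true` here to illustrate the non-default behaviour; the sample file at the top of this page shows the default `false`):

```toml
use_local_tz = true   # run the lifecycle worker at local midnight instead of UTC
```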
#### `block_size` {#block_size} #### `block_size` {#block_size}
Garage splits stored objects in consecutive chunks of size `block_size` Garage splits stored objects in consecutive chunks of size `block_size`
@ -442,7 +494,7 @@ files will remain available. This however means that chunks from existing files
will not be deduplicated with chunks from newly uploaded files, meaning you will not be deduplicated with chunks from newly uploaded files, meaning you
might use more storage space that is optimally possible. might use more storage space that is optimally possible.
#### `block_ram_buffer_max` (since v0.9.4) {#block_ram_buffer_max} #### `block_ram_buffer_max` (since `v0.9.4`) {#block_ram_buffer_max}
A limit on the total size of data blocks kept in RAM by S3 API nodes awaiting A limit on the total size of data blocks kept in RAM by S3 API nodes awaiting
to be sent to storage nodes asynchronously. to be sent to storage nodes asynchronously.
@ -473,6 +525,37 @@ node.
The default value is 256MiB. The default value is 256MiB.
#### `block_max_concurrent_reads` (since `v1.3.0` / `v2.1.0`) {#block_max_concurrent_reads}
The maximum number of blocks (individual files in the data directory) open
simultaneously for reading.
Reducing this number does not limit the number of data blocks that can be
transferred through the network simultaneously. It is purely a backpressure
mechanism for HDD read speed: it helps avoid a situation where too many
requests come in and Garage reads too many block files simultaneously, thus
not making timely progress on any of the reads.
When a request to read a data block comes in through the network, the request
waits for one of the `block_max_concurrent_reads` slots to become available
(internally implemented using a Semaphore object). Once it has acquired a read
slot, it reads the entire block file to RAM and frees the slot as soon as the
block file has been read in full. Only after the slot is released will the
block's data start being transferred over the network. If the request fails to
acquire a reading slot within 15 seconds, it fails with a timeout error.
Timeout events can be monitored through the `block_read_semaphore_timeouts`
metric in Prometheus: a non-zero number of such events indicates an I/O
bottleneck on HDD read speed.
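
The mechanism can be pictured with a short Rust sketch using `tokio`'s `Semaphore`; this is illustrative only, the names and structure are not Garage's actual code:

```rust
use std::time::Duration;
use tokio::sync::Semaphore;
use tokio::time::timeout;

// Illustrative sketch only: acquire one of the read slots (waiting at most
// 15 seconds), read the whole block file to RAM, then release the slot
// before the data is sent over the network.
async fn read_block(slots: &Semaphore, path: &std::path::Path) -> std::io::Result<Vec<u8>> {
    let permit = timeout(Duration::from_secs(15), slots.acquire())
        .await
        .map_err(|_| std::io::Error::new(std::io::ErrorKind::TimedOut, "read slot timeout"))?
        .expect("semaphore closed");
    let data = tokio::fs::read(path).await?; // entire block file is read to RAM
    drop(permit); // slot freed here; only now would the network transfer start
    Ok(data)
}
```
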
#### `block_max_concurrent_writes_per_request` (since `v2.1.0`) {#block_max_concurrent_writes_per_request}

The maximum number of parallel block writes per PUT request. This parameter is
designed to adapt to the concurrent write performance of different storage
media: higher values improve throughput but increase memory usage.

The default is 3; values of 10-30 are recommended for NVMe and 3-10 for HDD.
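
A hypothetical excerpt for an NVMe-backed node:

```toml
block_max_concurrent_writes_per_request = 20
```
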
#### `lmdb_map_size` {#lmdb_map_size}

This parameter can be used to set the map size used by LMDB,
@@ -529,7 +612,7 @@ the node, even in the case of a NAT: the NAT should be configured to forward the
port number to the same internal port number. This means that if you have several nodes running
behind a NAT, they should each use a different RPC port number.
#### `rpc_bind_outgoing` (since `v0.9.2`) {#rpc_bind_outgoing}

If enabled, pre-bind all sockets for outgoing connections to the same IP address
used for listening (the IP address specified in `rpc_bind_addr`) before
@@ -571,7 +654,7 @@ be obtained by running `garage node id` and then included directly in the
key will be returned by `garage node id` and you will have to add the IP
yourself.
#### `allow_world_readable_secrets` or `GARAGE_ALLOW_WORLD_READABLE_SECRETS` (env) {#allow_world_readable_secrets}

Garage checks the permissions of your secret files to make sure they're not
world-readable. In some cases, the check might fail and consider your files as
@@ -583,6 +666,13 @@ permission verification.

Alternatively, you can set the `GARAGE_ALLOW_WORLD_READABLE_SECRETS`
environment variable to `true` to bypass the permissions check.
#### `allow_punycode` {#allow_punycode}

Allow creating buckets with names containing punycode. When used for buckets served
as websites, this allows using almost any unicode character in the domain name.

Defaults to `false`.
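
For example:

```toml
allow_punycode = true
```
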
### The `[consul_discovery]` section

Garage supports discovering other nodes of the cluster using Consul. For this
@@ -713,6 +803,13 @@ For instance, if `root_domain` is `web.garage.eu`, a bucket called `deuxfleurs.fr`
will be accessible either with hostname `deuxfleurs.fr.web.garage.eu`
or with hostname `deuxfleurs.fr`.
#### `add_host_to_metrics` {#web_add_host_to_metrics}
Whether to include the requested domain name (HTTP `Host` header) in the
Prometheus metrics of the web endpoint. This is disabled by default as the
number of possible values is not bounded and can be a source of cardinality
explosion in the exported metrics.
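
For example (assuming, per the `{#web_add_host_to_metrics}` anchor, that this option lives in the `[s3_web]` section):

```toml
[s3_web]
add_host_to_metrics = true
```
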
### The `[admin]` section
View file
@@ -61,7 +61,7 @@ directed to a Garage cluster can be handled independently of one another instead
of going through a central bottleneck (the leader node).
As a consequence, requests can be handled much faster, even in cases where latency
between cluster nodes is important (see our [benchmarks](@/documentation/design/benchmarks/index.md) for data on this).
This is particularly useful when nodes are far from one another and talk to one another through standard Internet connections.
### Web server for static websites
View file
@@ -392,7 +392,7 @@ table_merkle_updater_todo_queue_length{table_name="block_ref"} 0

#### `table_sync_items_received`, `table_sync_items_sent` (counters)

Number of data items sent to/received from other nodes during resync procedures

```
table_sync_items_received{from="<remote node>",table_name="bucket_v2"} 3
```
View file
@@ -23,17 +23,17 @@ Feel free to open a PR to suggest fixes to this table. Minio is missing because the

- 2022-05-25 - Many Ceph S3 endpoints are not documented but implemented. Following a notification from the Ceph community, we added them.

## High-level features

| Feature | Garage | [Openstack Swift](https://docs.openstack.org/swift/latest/s3_compat.html) | [Ceph Object Gateway](https://docs.ceph.com/en/latest/radosgw/s3/) | [Riak CS](https://docs.riak.com/riak/cs/2.1.1/references/apis/storage/s3/index.html) | [OpenIO](https://docs.openio.io/latest/source/arch-design/s3_compliancy.html) |
|------------------------------|----------------------------------|-----------------|---------------|---------|-----|
| [signature v2](https://docs.aws.amazon.com/AmazonS3/latest/API/Appendix-Sigv2.html) (deprecated) | ❌ Missing | ✅ | ✅ | ✅ | ✅ |
| [signature v4](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) | ✅ Implemented | ✅ | ✅ | ❌ | ✅ |
| [URL path-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#path-style-access) (eg. `host.tld/bucket/key`) | ✅ Implemented | ✅ | ✅ | ❓ | ✅ |
| [URL vhost-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#virtual-hosted-style-access) URL (eg. `bucket.host.tld/key`) | ✅ Implemented | ❌ | ✅ | ✅ | ✅ |
| [Presigned URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html) | ✅ Implemented | ❌ | ✅ | ✅ | ✅(❓) |
| [SSE-C encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html) | ✅ Implemented | ❓ | ✅ | ❌ | ✅ |
| [Bucket versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) | ❌ Missing | ✅ | ✅ | ❌ | ✅ |
*Note:* OpenIO does not say if it supports presigned URLs. Because it is part
of signature v4 and they claim they support it without additional precision,
View file
@@ -42,7 +42,7 @@ The general principles are similar, but details have not been updated.**

A version is defined by the existence of at least one entry in the blocks table for a certain version UUID.
We must keep the following invariant: if a version exists in the blocks table, it has to be referenced in the objects table.
We explicitly manage concurrent versions of an object: the version timestamp and version UUID columns are index columns, thus we may have several concurrent versions of an object.
Important: before deleting an older version from the objects table, we must make sure that we did a successful delete of the blocks of that version from the blocks table.
Thus, the workflow for reading an object is as follows:

@@ -95,7 +95,7 @@ Known issue: if someone is reading from a version that we want to delete and the
Useful metadata:
- list of versions that reference this block in the Cassandra table, so that we can do GC by checking in Cassandra that the lines still exist
- list of other nodes that we know have acknowledged a write of this block, useful in the rebalancing algorithm

Write strategy: have a single thread that does all write IO so that it is serialized (or have several threads that manage independent parts of the hash space). When writing a blob, write it to a temporary file, close, then rename so that a concurrent read gets a consistent result (either not found or found with whole content).
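
A minimal Rust sketch of that write-then-rename strategy (illustrative only, not the actual implementation):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write a blob to a temporary file, then rename it into place. On POSIX
// filesystems the rename is atomic, so a concurrent reader sees either no
// file at all or the complete blob, never a partial write.
fn write_blob(dir: &Path, name: &str, data: &[u8]) -> std::io::Result<()> {
    let tmp = dir.join(format!("{name}.tmp"));
    let mut f = fs::File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // flush to disk before the blob becomes visible
    drop(f);
    fs::rename(tmp, dir.join(name))
}
```
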
View file
@@ -68,7 +68,7 @@ The migration steps are as follows:

5. Turn off Garage 0.3
6. Backup metadata folders if you can (i.e. if you have space to do it
   somewhere). Backing up data folders could also be useful but that's much
   harder to do. If your filesystem supports snapshots, this could be a good
   time to use them.
View file
@@ -37,7 +37,7 @@ There are two reasons for this:

Reminder: rules of simplicity, concerning changes to Garage's source code.
Always question what we are doing.
Never do anything just because it looks nice or because we "think" it might be useful at some later point but without knowing precisely why/when.
Only do things that make perfect sense in the context of what we currently know.
## References
View file
@@ -70,7 +70,7 @@ Example response body:

```json
{
  "node": "b10c110e4e854e5aa3f4637681befac755154b20059ec163254ddbfae86b09df",
  "garageVersion": "v1.3.0",
  "garageFeatures": [
    "k2v",
    "lmdb",
```
View file
@@ -562,7 +562,7 @@ token>", v: ["<value1>", ...] }`, with the following fields:

- in case of concurrent update and deletion, a `null` is added to the list of concurrent values

- if the `tombstones` query parameter is set to `true`, tombstones are returned
  for items that have been deleted (this can be useful for inserting after an
  item that has been deleted, so that the insert is not considered
  concurrent with the delete). Tombstones are returned as tuples in the
  same format with only `null` values
flake.lock generated
View file
@@ -1,38 +1,27 @@
{
  "nodes": {
    "crane": {
      "locked": {
        "lastModified": 1737689766,
        "narHash": "sha256-ivVXYaYlShxYoKfSo5+y5930qMKKJ8CLcAoIBPQfJ6s=",
        "owner": "ipetkov",
        "repo": "crane",
        "rev": "6fe74265bbb6d016d663b1091f015e2976c4a527",
        "type": "github"
      },
      "original": {
        "owner": "ipetkov",
        "repo": "crane",
        "type": "github"
      }
    },
    "flake-compat": {
      "locked": {
        "lastModified": 1717312683,
        "narHash": "sha256-FrlieJH50AuvagamEvWMIE6D2OAnERuDboFDYAED/dE=",
        "owner": "nix-community",
        "repo": "flake-compat",
        "rev": "38fd3954cf65ce6faf3d0d45cd26059e059f07ea",
        "type": "github"
      },
      "original": {
@@ -46,29 +35,11 @@
        "systems": "systems"
      },
      "locked": {
        "lastModified": 1731533236,
        "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
        "owner": "numtide",
        "repo": "flake-utils",
        "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
        "type": "github"
      },
      "original": {
@@ -79,63 +50,47 @@
    },
    "nixpkgs": {
      "locked": {
        "lastModified": 1763977559,
        "narHash": "sha256-g4MKqsIRy5yJwEsI+fYODqLUnAqIY4kZai0nldAP6EM=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "cfe2c7d5b5d3032862254e68c37a6576b633d632",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "cfe2c7d5b5d3032862254e68c37a6576b633d632",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "crane": "crane",
        "flake-compat": "flake-compat",
        "flake-utils": "flake-utils",
        "nixpkgs": "nixpkgs",
        "rust-overlay": "rust-overlay"
      }
    },
    "rust-overlay": {
      "inputs": {
        "nixpkgs": [
          "nixpkgs"
        ]
      },
      "locked": {
        "lastModified": 1763952169,
        "narHash": "sha256-+PeDBD8P+NKauH+w7eO/QWCIp8Cx4mCfWnh9sJmy9CM=",
        "owner": "oxalica",
        "repo": "rust-overlay",
        "rev": "ab726555a9a72e6dc80649809147823a813fa95b",
        "type": "github"
      },
      "original": {
        "owner": "oxalica",
        "repo": "rust-overlay",
        "rev": "ab726555a9a72e6dc80649809147823a813fa95b",
        "type": "github"
      }
    },
@@ -153,21 +108,6 @@
        "repo": "default",
        "type": "github"
      }
    }
  },
  "root": "root",
flake.nix
View file
@@ -2,89 +2,95 @@
  description =
    "Garage, an S3-compatible distributed object store for self-hosted deployments";

  # Nixpkgs 25.05 as of 2025-11-24
  inputs.nixpkgs.url =
    "github:NixOS/nixpkgs/cfe2c7d5b5d3032862254e68c37a6576b633d632";

  # Rust overlay as of 2025-11-24
  inputs.rust-overlay.url =
    "github:oxalica/rust-overlay/ab726555a9a72e6dc80649809147823a813fa95b";
  inputs.rust-overlay.inputs.nixpkgs.follows = "nixpkgs";

  inputs.crane.url = "github:ipetkov/crane";

  inputs.flake-compat.url = "github:nix-community/flake-compat";
  inputs.flake-utils.url = "github:numtide/flake-utils";

  outputs = { self, nixpkgs, flake-utils, crane, rust-overlay, ... }:
    let
      compile = import ./nix/compile.nix;
    in
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
        packageFor = target: release: (compile {
          inherit system target nixpkgs crane rust-overlay release;
        }).garage;
        testWith = extraTestEnv: (compile {
          inherit system nixpkgs crane rust-overlay extraTestEnv;
          release = false;
        }).garage-test;
        lints = (compile {
          inherit system nixpkgs crane rust-overlay;
          release = false;
        });
      in
      {
        packages = {
          # default = native release build
          default = packageFor null true;

          # <arch> = cross-compiled, statically-linked release builds
          amd64 = packageFor "x86_64-unknown-linux-musl" true;
          i386 = packageFor "i686-unknown-linux-musl" true;
          arm64 = packageFor "aarch64-unknown-linux-musl" true;
          arm = packageFor "armv6l-unknown-linux-musl" true;

          # dev = native dev build
          dev = packageFor null false;

          # test = cargo test
          tests = testWith {};
          tests-lmdb = testWith {
            GARAGE_TEST_INTEGRATION_DB_ENGINE = "lmdb";
          };
          tests-sqlite = testWith {
            GARAGE_TEST_INTEGRATION_DB_ENGINE = "sqlite";
          };
          tests-fjall = testWith {
            GARAGE_TEST_INTEGRATION_DB_ENGINE = "fjall";
          };

          # lints (fmt, clippy)
          fmt = lints.garage-cargo-fmt;
          clippy = lints.garage-cargo-clippy;
        };

        # ---- development shell, for making native builds only ----
        devShells =
          let
            targets = compile {
              inherit system nixpkgs crane rust-overlay;
            };
          in
          {
            default = targets.devShell;

            # import the full shell using `nix develop .#full`
            full = pkgs.mkShell {
              buildInputs = with pkgs; [
                targets.toolchain
                protobuf
                clang
                mold
                # ---- extra packages for dev tasks ----
                rust-analyzer
                cargo-audit
                cargo-outdated
                cargo-machete
                nixpkgs-fmt
              ];
            };
          };
      });
}
View file
@@ -2,7 +2,7 @@
with import ./common.nix;

let
  pkgs = import nixpkgs { };
  lib = pkgs.lib;

  /* Converts a key list and a value list to a set
View file
@@ -10,9 +10,9 @@ let
  flake = (import flake-compat { system = builtins.currentSystem; src = ../.; });
in
{
  flake = flake.defaultNix;
  nixpkgs = flake.defaultNix.inputs.nixpkgs;
  devShells = flake.defaultNix.devShells.${builtins.currentSystem};
}
View file
@ -1,164 +1,64 @@
{ system, target ? null, pkgsSrc, cargo2nixOverlay, compiler ? "rustc" {
, release ? false, git_version ? null, features ? null, }: /* build inputs */
nixpkgs,
crane,
rust-overlay,
/* parameters */
system,
git_version ? null,
target ? null,
release ? false,
features ? null,
extraTestEnv ? {}
}:
let let
log = v: builtins.trace v v; log = v: builtins.trace v v;
# NixOS and Rust/Cargo triples do not match for ARM, fix it here.
rustTarget = if target == "armv6l-unknown-linux-musleabihf" then
"arm-unknown-linux-musleabihf"
else
target;
rustTargetEnvMap = {
"x86_64-unknown-linux-musl" = "X86_64_UNKNOWN_LINUX_MUSL";
"aarch64-unknown-linux-musl" = "AARCH64_UNKNOWN_LINUX_MUSL";
"i686-unknown-linux-musl" = "I686_UNKNOWN_LINUX_MUSL";
"arm-unknown-linux-musleabihf" = "ARM_UNKNOWN_LINUX_MUSLEABIHF";
};
pkgsNative = import nixpkgs {
inherit system;
overlays = [ (import rust-overlay) ];
};
pkgs = if target != null then pkgs = if target != null then
import pkgsSrc { import nixpkgs {
inherit system; inherit system;
crossSystem = { crossSystem = {
config = target; config = target;
isStatic = true; isStatic = true;
}; };
overlays = [ cargo2nixOverlay ]; overlays = [ (import rust-overlay) ];
} }
else else
import pkgsSrc { pkgsNative;
inherit system;
overlays = [ cargo2nixOverlay ];
};
toolchainOptions = { inherit (pkgs) lib stdenv;
rustVersion = "1.73.0";
extraRustComponents = [ "clippy" ];
};
buildEnv = (drv: toolchainFn = (p: p.rust-bin.stable."1.91.0".default.override {
{ targets = lib.optionals (target != null) [ rustTarget ];
rustc = drv.setBuildEnv; extensions = [
clippy = '' "rust-src"
${drv.setBuildEnv or ""} "rustfmt"
echo
echo --- BUILDING WITH CLIPPY ---
echo
export NIX_RUST_BUILD_FLAGS="''${NIX_RUST_BUILD_FLAGS} --deny warnings"
export RUSTC="''${CLIPPY_DRIVER}"
'';
}.${compiler});
/* Cargo2nix provides many overrides by default, you can take inspiration from them:
https://github.com/cargo2nix/cargo2nix/blob/master/overlay/overrides.nix
You can have a complete list of the available options by looking at the overriden object, mkcrate:
https://github.com/cargo2nix/cargo2nix/blob/master/overlay/mkcrate.nix
*/
packageOverrides = pkgs:
pkgs.rustBuilder.overrides.all ++ [
/* [1] We add some logic to compile our crates with clippy, it provides us many additional lints
[2] We need to alter Nix hardening to make static binaries: PIE,
Position Independent Executables seems to be supported only on amd64. Having
this flag set either 1. make our executables crash or 2. compile as dynamic on some platforms.
Here, we deactivate it. Later (find `codegenOpts`), we reactivate it for supported targets
(only amd64 curently) through the `-static-pie` flag.
PIE is a feature used by ASLR, which helps mitigate security issues.
Learn more about Nix Hardening at: https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-support/cc-wrapper/add-hardening.sh
[3] We want to inject the git version while keeping the build deterministic.
As we do not want to consider the .git folder as part of the input source,
we ask the user (the CI often) to pass the value to Nix.
[4] We don't want libsodium-sys and zstd-sys to try to use pkgconfig to build against a system library.
However the features to do so get activated for some reason (due to a bug in cargo2nix?),
so disable them manually here.
*/
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage";
overrideAttrs = drv:
(if git_version != null then {
# [3]
preConfigure = ''
${drv.preConfigure or ""}
export GIT_VERSION="${git_version}"
'';
} else
{ }) // {
# [1]
setBuildEnv = (buildEnv drv);
# [2]
hardeningDisable = [ "pie" ];
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_rpc";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_db";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_util";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_table";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_block";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_model";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_api";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "garage_web";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "k2v-client";
overrideAttrs = drv: { # [1]
setBuildEnv = (buildEnv drv);
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "libsodium-sys";
overrideArgs = old: {
features = [ ]; # [4]
};
})
(pkgs.rustBuilder.rustLib.makeOverride {
name = "zstd-sys";
overrideArgs = old: {
features = [ ]; # [4]
};
})
]; ];
});
craneLib = (crane.mkLib pkgs).overrideToolchain toolchainFn;
src = craneLib.cleanCargoSource ../.;
/* We ship some parts of the code disabled by default by putting them behind a flag. /* We ship some parts of the code disabled by default by putting them behind a flag.
It speeds up the compilation (when the feature is not required) and released crates have less dependency by default (less attack surface, disk space, etc.). It speeds up the compilation (when the feature is not required) and released crates have less dependency by default (less attack surface, disk space, etc.).
@ -168,16 +68,16 @@ let
rootFeatures = if features != null then rootFeatures = if features != null then
features features
else else
([ "garage/bundled-libs" "garage/lmdb" "garage/sqlite" "garage/k2v" ] ++ (if release then [ ([ "bundled-libs" "lmdb" "sqlite" "fjall" "k2v" ] ++ (lib.optionals release [
"garage/consul-discovery" "consul-discovery"
"garage/kubernetes-discovery" "kubernetes-discovery"
"garage/metrics" "metrics"
"garage/telemetry-otlp" "telemetry-otlp"
"garage/syslog" "syslog"
] else "journald"
[ ])); ]));
packageFun = import ../Cargo.nix; featuresStr = lib.concatStringsSep "," rootFeatures;
/* We compile fully static binaries with musl to simplify deployment on most systems. /* We compile fully static binaries with musl to simplify deployment on most systems.
When possible, we reactivate PIE hardening (see above). When possible, we reactivate PIE hardening (see above).
@ -188,12 +88,9 @@ let
For more information on static builds, please refer to Rust's RFC 1721. For more information on static builds, please refer to Rust's RFC 1721.
https://rust-lang.github.io/rfcs/1721-crt-static.html#specifying-dynamicstatic-c-runtime-linkage https://rust-lang.github.io/rfcs/1721-crt-static.html#specifying-dynamicstatic-c-runtime-linkage
*/ */
codegenOptsMap = {
codegenOpts = { "x86_64-unknown-linux-musl" =
"armv6l-unknown-linux-musleabihf" = [ [ "target-feature=+crt-static" "link-arg=-static-pie" ];
"target-feature=+crt-static"
"link-arg=-static"
]; # compile as dynamic with static-pie
"aarch64-unknown-linux-musl" = [ "aarch64-unknown-linux-musl" = [
"target-feature=+crt-static" "target-feature=+crt-static"
"link-arg=-static" "link-arg=-static"
@ -202,17 +99,106 @@ let
"target-feature=+crt-static" "target-feature=+crt-static"
"link-arg=-static" "link-arg=-static"
]; # segfault with static-pie ]; # segfault with static-pie
"x86_64-unknown-linux-musl" = "armv6l-unknown-linux-musleabihf" = [
[ "target-feature=+crt-static" "link-arg=-static-pie" ]; "target-feature=+crt-static"
"link-arg=-static"
]; # compile as dynamic with static-pie
}; };
# NixOS and Rust/Cargo triples do not match for ARM, fix it here. codegenOpts = if target != null then codegenOptsMap.${target} else [
rustTarget = if target == "armv6l-unknown-linux-musleabihf" then "link-arg=-fuse-ld=mold"
"arm-unknown-linux-musleabihf" ];
else
target;
in pkgs.rustBuilder.makePackageSet ({ commonArgs =
inherit release packageFun packageOverrides codegenOpts rootFeatures; {
target = rustTarget; inherit src;
} // toolchainOptions) pname = "garage";
version = "dev";
strictDeps = true;
cargoExtraArgs = "--locked --features ${featuresStr}";
cargoTestExtraArgs = "--workspace";
nativeBuildInputs = [
pkgsNative.protobuf
pkgs.stdenv.cc
] ++ lib.optionals (target == null) [
pkgs.clang
pkgs.mold
];
CARGO_PROFILE = if release then "release" else "dev";
CARGO_BUILD_RUSTFLAGS =
lib.concatStringsSep
" "
(builtins.map (flag: "-C ${flag}") codegenOpts);
}
//
(if rustTarget != null then {
CARGO_BUILD_TARGET = rustTarget;
"CARGO_TARGET_${rustTargetEnvMap.${rustTarget}}_LINKER" = "${stdenv.cc.targetPrefix}cc";
HOST_CC = "${stdenv.cc.nativePrefix}cc";
TARGET_CC = "${stdenv.cc.targetPrefix}cc";
} else {
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER = "clang";
});
in rec {
toolchain = toolchainFn pkgs;
devShell = pkgs.mkShell {
buildInputs = [
toolchain
] ++ (with pkgs; [
protobuf
clang
mold
]);
};
# ---- building garage ----
garage-deps = craneLib.buildDepsOnly commonArgs;
garage = craneLib.buildPackage (commonArgs // {
cargoArtifacts = garage-deps;
doCheck = false;
} //
(if git_version != null then {
version = git_version;
GIT_VERSION = git_version;
} else {}));
# ---- testing garage ----
garage-test-bin = craneLib.cargoBuild (commonArgs // {
cargoArtifacts = garage-deps;
pname = "garage-tests";
CARGO_PROFILE = "test";
cargoExtraArgs = "${commonArgs.cargoExtraArgs} --tests --workspace";
doCheck = false;
});
garage-test = craneLib.cargoTest (commonArgs // {
cargoArtifacts = garage-test-bin;
nativeBuildInputs = commonArgs.nativeBuildInputs ++ [
pkgs.cacert
];
} // extraTestEnv);
# ---- source code linting ----
garage-cargo-fmt = craneLib.cargoFmt (commonArgs // {
cargoExtraArgs = "";
});
garage-cargo-clippy = craneLib.cargoClippy (commonArgs // {
cargoArtifacts = garage-deps;
cargoClippyExtraArgs = "--all-targets -- -D warnings";
});
}
View file
@@ -11,7 +11,7 @@ PATH="${GARAGE_DEBUG}:${GARAGE_RELEASE}:${NIX_RELEASE}:$PATH"
FANCYCOLORS=("41m" "42m" "44m" "45m" "100m" "104m")

export RUST_BACKTRACE=1
export RUST_LOG=garage=info,garage_api_common=debug,garage_api_s3=debug

MAIN_LABEL="\e[${FANCYCOLORS[0]}[main]\e[49m"

if [ -z "$GARAGE_BIN" ]; then
View file
@@ -1,6 +1,7 @@
export AWS_ACCESS_KEY_ID=`cat /tmp/garage.s3 |cut -d' ' -f1`
export AWS_SECRET_ACCESS_KEY=`cat /tmp/garage.s3 |cut -d' ' -f2`
export AWS_DEFAULT_REGION='garage'
export AWS_REQUEST_CHECKSUM_CALCULATION='when_required'

# FUTUREWORK: set AWS_ENDPOINT_URL instead, once nixpkgs bumps awscli to >=2.13.0.
function aws { command aws --endpoint-url http://127.0.0.1:3911 $@ ; }
View file
@@ -1,3 +1,3 @@
# Garage helm3 chart

Documentation is located [here](https://garagehq.deuxfleurs.fr/documentation/cookbook/kubernetes/).
View file
@@ -1,24 +1,18 @@
apiVersion: v2
name: garage
description: S3-compatible object store for small self-hosted geo-distributed deployments
type: application
version: 0.7.3
appVersion: "v1.3.1"
home: https://garagehq.deuxfleurs.fr/
icon: https://garagehq.deuxfleurs.fr/images/garage-logo.svg
keywords:
  - geo-distributed
  - read-after-write-consistency
  - s3-compatible
sources:
  - https://git.deuxfleurs.fr/Deuxfleurs/garage.git
maintainers: []
View file
@@ -0,0 +1,95 @@
# garage
![Version: 0.7.3](https://img.shields.io/badge/Version-0.7.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.3.1](https://img.shields.io/badge/AppVersion-v1.3.1-informational?style=flat-square)
S3-compatible object store for small self-hosted geo-distributed deployments
**Homepage:** <https://garagehq.deuxfleurs.fr/>
## Source Code
* <https://git.deuxfleurs.fr/Deuxfleurs/garage.git>
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | |
| deployment.kind | string | `"StatefulSet"` | Switchable to DaemonSet |
| deployment.podManagementPolicy | string | `"OrderedReady"` | If using statefulset, allow Parallel or OrderedReady (default) |
| deployment.replicaCount | int | `3` | Number of StatefulSet replicas/garage nodes to start |
| environment | object | `{}` | |
| extraVolumeMounts | object | `{}` | |
| extraVolumes | object | `{}` | |
| fullnameOverride | string | `""` | |
| garage.blockSize | string | `"1048576"` | Default is 1MB. An increase can result in better performance in certain scenarios https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#block-size |
| garage.bootstrapPeers | list | `[]` | This is not required if you use the integrated kubernetes discovery |
| garage.compressionLevel | string | `"1"` | zstd compression level of stored blocks https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#compression-level |
| garage.dbEngine | string | `"lmdb"` | Can be changed for better performance on certain systems https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#db-engine-since-v0-8-0 |
| garage.existingConfigMap | string | `""` | if not empty string, allow using an existing ConfigMap for the garage.toml, if set, ignores garage.toml |
| garage.garageTomlString | string | `""` | String Template for the garage configuration if set, ignores above values. Values can be templated, see https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/ |
| garage.kubernetesSkipCrd | bool | `false` | Set to true if you want to use k8s discovery but install the CRDs manually outside of the helm chart, for example if you operate at namespace level without cluster resources |
| garage.metadataAutoSnapshotInterval | string | `""` | If this value is set, Garage will automatically take a snapshot of the metadata DB file at a regular interval and save it in the metadata directory. https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#metadata_auto_snapshot_interval |
| garage.replicationMode | string | `"3"` | Default to 3 replicas, see the replication_mode section at https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#replication-mode |
| garage.rpcBindAddr | string | `"[::]:3901"` | |
| garage.rpcSecret | string | `""` | If not given, a random secret will be generated and stored in a Secret object |
| garage.s3.api.region | string | `"garage"` | |
| garage.s3.api.rootDomain | string | `".s3.garage.tld"` | |
| garage.s3.web.index | string | `"index.html"` | |
| garage.s3.web.rootDomain | string | `".web.garage.tld"` | |
| image.pullPolicy | string | `"IfNotPresent"` | |
| image.repository | string | `"dxflrs/amd64_garage"` | default to amd64 docker image |
| image.tag | string | `""` | set the image tag, please prefer using the chart version and not this to avoid compatibility issues |
| imagePullSecrets | list | `[]` | set if you need credentials to pull your custom image |
| ingress.s3.api.annotations | object | `{}` | Rely _either_ on the className or the annotation below but not both! If you want to use the className, set className: "nginx" and replace "nginx" by an Ingress controller name, examples [here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers). |
| ingress.s3.api.enabled | bool | `false` | |
| ingress.s3.api.hosts[0] | object | `{"host":"s3.garage.tld","paths":[{"path":"/","pathType":"Prefix"}]}` | garage S3 API endpoint, to be used with awscli for example |
| ingress.s3.api.hosts[1] | object | `{"host":"*.s3.garage.tld","paths":[{"path":"/","pathType":"Prefix"}]}` | garage S3 API endpoint, DNS style bucket access |
| ingress.s3.api.labels | object | `{}` | |
| ingress.s3.api.tls | list | `[]` | |
| ingress.s3.web.annotations | object | `{}` | Rely _either_ on the className or the annotation below but not both! If you want to use the className, set className: "nginx" and replace "nginx" by an Ingress controller name, examples [here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers). |
| ingress.s3.web.enabled | bool | `false` | |
| ingress.s3.web.hosts[0] | object | `{"host":"*.web.garage.tld","paths":[{"path":"/","pathType":"Prefix"}]}` | wildcard website access with bucket name prefix |
| ingress.s3.web.hosts[1] | object | `{"host":"mywebpage.example.com","paths":[{"path":"/","pathType":"Prefix"}]}` | specific bucket access with FQDN bucket |
| ingress.s3.web.labels | object | `{}` | |
| ingress.s3.web.tls | list | `[]` | |
| initImage.pullPolicy | string | `"IfNotPresent"` | |
| initImage.repository | string | `"busybox"` | |
| initImage.tag | string | `"stable"` | |
| livenessProbe | object | `{}` | Specifies a livenessProbe |
| monitoring.metrics.enabled | bool | `false` | If true, a service for monitoring is created with a prometheus.io/scrape annotation |
| monitoring.metrics.serviceMonitor.enabled | bool | `false` | If true, a ServiceMonitor CRD is created for a prometheus operator https://github.com/coreos/prometheus-operator |
| monitoring.metrics.serviceMonitor.interval | string | `"15s"` | |
| monitoring.metrics.serviceMonitor.labels | object | `{}` | |
| monitoring.metrics.serviceMonitor.path | string | `"/metrics"` | |
| monitoring.metrics.serviceMonitor.relabelings | list | `[]` | |
| monitoring.metrics.serviceMonitor.scheme | string | `"http"` | |
| monitoring.metrics.serviceMonitor.scrapeTimeout | string | `"10s"` | |
| monitoring.metrics.serviceMonitor.tlsConfig | object | `{}` | |
| monitoring.tracing.sink | string | `""` | specify a sink endpoint for OpenTelemetry Traces, eg. `http://localhost:4317` |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| persistence.data.hostPath | string | `"/var/lib/garage/data"` | |
| persistence.data.size | string | `"100Mi"` | |
| persistence.enabled | bool | `true` | |
| persistence.meta.hostPath | string | `"/var/lib/garage/meta"` | |
| persistence.meta.size | string | `"100Mi"` | |
| podAnnotations | object | `{}` | additional pod annotations |
| podSecurityContext.fsGroup | int | `1000` | |
| podSecurityContext.runAsGroup | int | `1000` | |
| podSecurityContext.runAsNonRoot | bool | `true` | |
| podSecurityContext.runAsUser | int | `1000` | |
| readinessProbe | object | `{}` | Specifies a readinessProbe |
| resources | object | `{}` | |
| securityContext.capabilities | object | `{"drop":["ALL"]}` | The default security context is heavily restricted, feel free to tune it to your requirements |
| securityContext.readOnlyRootFilesystem | bool | `true` | |
| service.s3.api.port | int | `3900` | |
| service.s3.web.port | int | `3902` | |
| service.type | string | `"ClusterIP"` | You can rely on any service to expose your cluster - ClusterIP (+ Ingress) - NodePort (+ Ingress) - LoadBalancer |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |
| tolerations | list | `[]` | |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)
View file
@@ -1,7 +1,53 @@
{{- if not .Values.garage.existingConfigMap }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "garage.fullname" . }}-config
data:
  garage.toml: |-
    {{- if .Values.garage.garageTomlString }}
    {{- tpl (index (index .Values.garage) "garageTomlString") $ | nindent 4 }}
    {{- else }}
    metadata_dir = "/mnt/meta"
    data_dir = "/mnt/data"
    db_engine = "{{ .Values.garage.dbEngine }}"

    block_size = {{ .Values.garage.blockSize }}

    replication_mode = "{{ .Values.garage.replicationMode }}"

    compression_level = {{ .Values.garage.compressionLevel }}

    {{- if .Values.garage.metadataAutoSnapshotInterval }}
    metadata_auto_snapshot_interval = {{ .Values.garage.metadataAutoSnapshotInterval | quote }}
    {{- end }}

    rpc_bind_addr = "{{ .Values.garage.rpcBindAddr }}"
    # rpc_secret will be populated by the init container from a k8s secret object
    rpc_secret = "__RPC_SECRET_REPLACE__"

    bootstrap_peers = {{ .Values.garage.bootstrapPeers }}

    [kubernetes_discovery]
    namespace = "{{ .Release.Namespace }}"
    service_name = "{{ include "garage.fullname" . }}"
    skip_crd = {{ .Values.garage.kubernetesSkipCrd }}

    [s3_api]
    s3_region = "{{ .Values.garage.s3.api.region }}"
    api_bind_addr = "[::]:3900"
    root_domain = "{{ .Values.garage.s3.api.rootDomain }}"

    [s3_web]
    bind_addr = "[::]:3902"
    root_domain = "{{ .Values.garage.s3.web.rootDomain }}"
    index = "{{ .Values.garage.s3.web.index }}"

    [admin]
    api_bind_addr = "[::]:3903"
    {{- if .Values.monitoring.tracing.sink }}
    trace_sink = "{{ .Values.monitoring.tracing.sink }}"
    {{- end }}
    {{- end }}
{{- end }}
View file
@@ -0,0 +1,22 @@
{{- if eq .Values.deployment.kind "StatefulSet" -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "garage.fullname" . }}-headless
  labels:
    {{- include "garage.labels" . | nindent 4 }}
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: {{ .Values.service.s3.api.port }}
      targetPort: 3900
      protocol: TCP
      name: s3-api
    - port: {{ .Values.service.s3.web.port }}
      targetPort: 3902
      protocol: TCP
      name: s3-web
  selector:
    {{- include "garage.selectorLabels" . | nindent 4 }}
{{- end }}
View file
@@ -4,6 +4,10 @@ metadata:
  name: {{ include "garage.fullname" . }}
  labels:
    {{- include "garage.labels" . | nindent 4 }}
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  ports:
@@ -37,4 +41,4 @@ spec:
      name: metrics
  selector:
    {{- include "garage.selectorLabels" . | nindent 4 }}
{{- end }}
View file
@@ -10,12 +10,11 @@ spec:
    {{- include "garage.selectorLabels" . | nindent 6 }}
  {{- if eq .Values.deployment.kind "StatefulSet" }}
  replicas: {{ .Values.deployment.replicaCount }}
  serviceName: {{ include "garage.fullname" . }}-headless
  podManagementPolicy: {{ .Values.deployment.podManagementPolicy }}
  {{- end }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      {{- with .Values.podAnnotations }}
@@ -76,15 +75,17 @@ spec:
            - name: etc
              mountPath: /etc/garage.toml
              subPath: garage.toml
          {{- with .Values.extraVolumeMounts }}
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.livenessProbe }}
          livenessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.readinessProbe }}
          readinessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
@@ -110,6 +111,9 @@ spec:
        - name: data
          emptyDir: {}
      {{- end }}
      {{- with .Values.extraVolumes }}
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
View file
@ -4,28 +4,34 @@
# Garage configuration. These values go to garage.toml # Garage configuration. These values go to garage.toml
garage: garage:
# Can be changed for better performance on certain systems # -- Can be changed for better performance on certain systems
# https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#db-engine-since-v0-8-0 # https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#db-engine-since-v0-8-0
dbEngine: "lmdb" dbEngine: "lmdb"
# Defaults is 1MB # -- Defaults is 1MB
# An increase can result in better performance in certain scenarios # An increase can result in better performance in certain scenarios
# https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#block-size # https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#block-size
blockSize: "1048576" blockSize: "1048576"
# Default to 3 replicas, see the replication_mode section at # -- Default to 3 replicas, see the replication_mode section at
# https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#replication-mode # https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#replication-mode
replicationMode: "3" replicationMode: "3"
# zstd compression level of stored blocks # -- zstd compression level of stored blocks
# https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#compression-level # https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#compression-level
compressionLevel: "1" compressionLevel: "1"
# -- If this value is set, Garage will automatically take a snapshot of the metadata DB file at a regular interval and save it in the metadata directory.
# https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#metadata_auto_snapshot_interval
metadataAutoSnapshotInterval: ""
rpcBindAddr: "[::]:3901" rpcBindAddr: "[::]:3901"
# If not given, a random secret will be generated and stored in a Secret object # -- If not given, a random secret will be generated and stored in a Secret object
rpcSecret: "" rpcSecret: ""
# This is not required if you use the integrated kubernetes discovery # -- This is not required if you use the integrated kubernetes discovery
bootstrapPeers: [] bootstrapPeers: []
# -- Set to true if you want to use k8s discovery but install the CRDs manually outside
# of the helm chart, for example if you operate at namespace level without cluster ressources
kubernetesSkipCrd: false kubernetesSkipCrd: false
s3: s3:
api: api:
@ -34,47 +40,16 @@ garage:
web: web:
rootDomain: ".web.garage.tld" rootDomain: ".web.garage.tld"
index: "index.html" index: "index.html"
# Template for the garage configuration
# Values can be templated
# ref: https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/
garage.toml: |-
metadata_dir = "/mnt/meta"
data_dir = "/mnt/data"
db_engine = "{{ .Values.garage.dbEngine }}" # -- if not empty string, allow using an existing ConfigMap for the garage.toml,
# if set, ignores garage.toml
existingConfigMap: ""
block_size = {{ .Values.garage.blockSize }} # -- String Template for the garage configuration
# if set, ignores above values.
replication_mode = "{{ .Values.garage.replicationMode }}" # Values can be templated,
# see https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/
compression_level = {{ .Values.garage.compressionLevel }} garageTomlString: ""
rpc_bind_addr = "{{ .Values.garage.rpcBindAddr }}"
# rpc_secret will be populated by the init container from a k8s secret object
rpc_secret = "__RPC_SECRET_REPLACE__"
bootstrap_peers = {{ .Values.garage.bootstrapPeers }}
[kubernetes_discovery]
namespace = "{{ .Release.Namespace }}"
service_name = "{{ include "garage.fullname" . }}"
skip_crd = {{ .Values.garage.kubernetesSkipCrd }}
[s3_api]
s3_region = "{{ .Values.garage.s3.api.region }}"
api_bind_addr = "[::]:3900"
root_domain = "{{ .Values.garage.s3.api.rootDomain }}"
[s3_web]
bind_addr = "[::]:3902"
root_domain = "{{ .Values.garage.s3.web.rootDomain }}"
index = "{{ .Values.garage.s3.web.index }}"
[admin]
api_bind_addr = "[::]:3903"
{{- if .Values.monitoring.tracing.sink }}
trace_sink = "{{ .Values.monitoring.tracing.sink }}"
{{- end }}
# Data persistence # Data persistence
persistence: persistence:
@ -92,16 +67,18 @@ persistence:
# Deployment configuration # Deployment configuration
deployment: deployment:
# Switchable to DaemonSet # -- Switchable to DaemonSet
kind: StatefulSet kind: StatefulSet
# Number of StatefulSet replicas/garage nodes to start # -- Number of StatefulSet replicas/garage nodes to start
replicaCount: 3 replicaCount: 3
# If using statefulset, allow Parallel or OrderedReady (default) # -- If using statefulset, allow Parallel or OrderedReady (default)
podManagementPolicy: OrderedReady podManagementPolicy: OrderedReady
image: image:
# -- default to amd64 docker image
repository: dxflrs/amd64_garage repository: dxflrs/amd64_garage
# please prefer using the chart version and not this tag # -- set the image tag, please prefer using the chart version and not this
# to avoid compatibility issues
tag: "" tag: ""
pullPolicy: IfNotPresent pullPolicy: IfNotPresent
@ -110,19 +87,21 @@ initImage:
tag: stable tag: stable
pullPolicy: IfNotPresent pullPolicy: IfNotPresent
# -- set if you need credentials to pull your custom image
imagePullSecrets: [] imagePullSecrets: []
nameOverride: "" nameOverride: ""
fullnameOverride: "" fullnameOverride: ""
serviceAccount: serviceAccount:
# Specifies whether a service account should be created # -- Specifies whether a service account should be created
create: true create: true
# Annotations to add to the service account # -- Annotations to add to the service account
annotations: {} annotations: {}
# The name of the service account to use. # -- The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template # If not set and create is true, a name is generated using the fullname template
name: "" name: ""
# -- additional pod annotations
podAnnotations: {} podAnnotations: {}
podSecurityContext: podSecurityContext:
@@ -132,7 +111,7 @@ podSecurityContext:
runAsNonRoot: true runAsNonRoot: true
securityContext: securityContext:
# The default security context is heavily restricted # -- The default security context is heavily restricted,
# feel free to tune it to your requirements # feel free to tune it to your requirements
capabilities: capabilities:
drop: drop:
@@ -140,11 +119,13 @@ securityContext:
readOnlyRootFilesystem: true readOnlyRootFilesystem: true
service: service:
# You can rely on any service to expose your cluster # -- You can rely on any service to expose your cluster
# - ClusterIP (+ Ingress) # - ClusterIP (+ Ingress)
# - NodePort (+ Ingress) # - NodePort (+ Ingress)
# - LoadBalancer # - LoadBalancer
type: ClusterIP type: ClusterIP
# -- Annotations to add to the service
annotations: {}
s3: s3:
api: api:
port: 3900 port: 3900
@@ -156,20 +137,23 @@ ingress:
s3: s3:
api: api:
enabled: false enabled: false
# Rely either on the className or the annotation below but not both # -- Rely _either_ on the className or the annotation below but not both!
# replace "nginx" by an Ingress controller # If you want to use the className, set
# you can find examples here https://kubernetes.io/docs/concepts/services-networking/ingress-controllers
# className: "nginx" # className: "nginx"
# and replace "nginx" by an Ingress controller name,
# examples [here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers).
annotations: {} annotations: {}
# kubernetes.io/ingress.class: "nginx" # kubernetes.io/ingress.class: "nginx"
# kubernetes.io/tls-acme: "true" # kubernetes.io/tls-acme: "true"
labels: {} labels: {}
hosts: hosts:
- host: "s3.garage.tld" # garage S3 API endpoint # -- garage S3 API endpoint, to be used with awscli for example
- host: "s3.garage.tld"
paths: paths:
- path: / - path: /
pathType: Prefix pathType: Prefix
- host: "*.s3.garage.tld" # garage S3 API endpoint, DNS style bucket access # -- garage S3 API endpoint, DNS style bucket access
- host: "*.s3.garage.tld"
paths: paths:
- path: / - path: /
pathType: Prefix pathType: Prefix
@@ -179,20 +163,23 @@ ingress:
# - kubernetes.docker.internal # - kubernetes.docker.internal
web: web:
enabled: false enabled: false
# Rely either on the className or the annotation below but not both # -- Rely _either_ on the className or the annotation below but not both!
# replace "nginx" by an Ingress controller # If you want to use the className, set
# you can find examples here https://kubernetes.io/docs/concepts/services-networking/ingress-controllers
# className: "nginx" # className: "nginx"
# and replace "nginx" by an Ingress controller name,
# examples [here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers).
annotations: {} annotations: {}
# kubernetes.io/ingress.class: nginx # kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true" # kubernetes.io/tls-acme: "true"
labels: {} labels: {}
hosts: hosts:
- host: "*.web.garage.tld" # wildcard website access with bucket name prefix # -- wildcard website access with bucket name prefix
- host: "*.web.garage.tld"
paths: paths:
- path: / - path: /
pathType: Prefix pathType: Prefix
- host: "mywebpage.example.com" # specific bucket access with FQDN bucket # -- specific bucket access with FQDN bucket
- host: "mywebpage.example.com"
paths: paths:
- path: / - path: /
pathType: Prefix pathType: Prefix
@@ -210,6 +197,21 @@ resources: {}
# cpu: 100m # cpu: 100m
# memory: 512Mi # memory: 512Mi
# -- Specifies a livenessProbe
livenessProbe: {}
#httpGet:
# path: /health
# port: 3903
#initialDelaySeconds: 5
#periodSeconds: 30
# -- Specifies a readinessProbe
readinessProbe: {}
#httpGet:
# path: /health
# port: 3903
#initialDelaySeconds: 5
#periodSeconds: 30
nodeSelector: {} nodeSelector: {}
tolerations: [] tolerations: []
@@ -218,12 +220,16 @@ affinity: {}
environment: {} environment: {}
extraVolumes: {}
extraVolumeMounts: {}
monitoring: monitoring:
metrics: metrics:
# If true, a service for monitoring is created with a prometheus.io/scrape annotation # -- If true, a service for monitoring is created with a prometheus.io/scrape annotation
enabled: false enabled: false
serviceMonitor: serviceMonitor:
# If true, a ServiceMonitor CRD is created for a prometheus operator # -- If true, a ServiceMonitor CRD is created for a prometheus operator
# https://github.com/coreos/prometheus-operator # https://github.com/coreos/prometheus-operator
enabled: false enabled: false
path: /metrics path: /metrics
@@ -235,4 +241,5 @@ monitoring:
scrapeTimeout: 10s scrapeTimeout: 10s
relabelings: [] relabelings: []
tracing: tracing:
# -- specify a sink endpoint for OpenTelemetry Traces, e.g. `http://localhost:4317`
sink: "" sink: ""


@@ -0,0 +1,43 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: garagenodes.deuxfleurs.fr
spec:
conversion:
strategy: None
group: deuxfleurs.fr
names:
kind: GarageNode
listKind: GarageNodeList
plural: garagenodes
singular: garagenode
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: Auto-generated derived type for Node via `CustomResource`
properties:
spec:
properties:
address:
format: ip
type: string
hostname:
type: string
port:
format: uint16
minimum: 0
type: integer
required:
- address
- hostname
- port
type: object
required:
- spec
title: GarageNode
type: object
served: true
storage: true
subresources: {}


@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- garagenodes.deuxfleurs.fr.yaml


@@ -7,7 +7,12 @@ if [ "$#" -ne 1 ]; then
exit 2 exit 2
fi fi
if file $1 | grep 'dynamically linked' 2>&1; then if [ ! -x "$1" ]; then
echo "[fail] $1 does not exist or is not an executable"
exit 1
fi
if file "$1" | grep 'dynamically linked' 2>&1; then
echo "[fail] $1 is dynamic" echo "[fail] $1 is dynamic"
exit 1 exit 1
fi fi


@@ -3,7 +3,7 @@
with import ./nix/common.nix; with import ./nix/common.nix;
let let
pkgs = import pkgsSrc { pkgs = import nixpkgs {
inherit system; inherit system;
}; };
winscp = (import ./nix/winscp.nix) pkgs; winscp = (import ./nix/winscp.nix) pkgs;
@@ -34,12 +34,14 @@ in
jq jq
]; ];
shellHook = '' shellHook = ''
export AWS_REQUEST_CHECKSUM_CALCULATION='when_required'
function to_s3 { function to_s3 {
aws \ aws \
--endpoint-url https://garage.deuxfleurs.fr \ --endpoint-url https://garage.deuxfleurs.fr \
--region garage \ --region garage \
s3 cp \ s3 cp \
./result-bin/bin/garage \ ./result/bin/garage \
s3://garagehq.deuxfleurs.fr/_releases/''${CI_COMMIT_TAG:-$CI_COMMIT_SHA}/''${TARGET}/garage s3://garagehq.deuxfleurs.fr/_releases/''${CI_COMMIT_TAG:-$CI_COMMIT_SHA}/''${TARGET}/garage
} }
@@ -115,7 +117,7 @@ in
shellHook = '' shellHook = ''
function refresh_cache { function refresh_cache {
pass show deuxfleurs/nix_priv_key > /tmp/nix-signing-key.sec pass show deuxfleurs/nix_priv_key > /tmp/nix-signing-key.sec
for attr in clippy.amd64 test.amd64 pkgs.{amd64,i386,arm,arm64}.release; do for attr in pkgs.amd64.debug test.amd64 pkgs.{amd64,i386,arm,arm64}.release; do
echo "Updating cache for ''${attr}" echo "Updating cache for ''${attr}"
nix copy -j8 \ nix copy -j8 \
--to 's3://nix?endpoint=garage.deuxfleurs.fr&region=garage&secret-key=/tmp/nix-signing-key.sec' \ --to 's3://nix?endpoint=garage.deuxfleurs.fr&region=garage&secret-key=/tmp/nix-signing-key.sec' \

src/api/admin/Cargo.toml (new file)

@@ -0,0 +1,43 @@
[package]
name = "garage_api_admin"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
description = "Admin API server crate for the Garage object store"
repository = "https://git.deuxfleurs.fr/Deuxfleurs/garage"
readme = "../../../README.md"
[lib]
path = "lib.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
garage_model.workspace = true
garage_table.workspace = true
garage_util.workspace = true
garage_rpc.workspace = true
garage_api_common.workspace = true
argon2.workspace = true
async-trait.workspace = true
thiserror.workspace = true
hex.workspace = true
tracing.workspace = true
futures.workspace = true
tokio.workspace = true
http.workspace = true
hyper = { workspace = true, default-features = false, features = ["server", "http1"] }
url.workspace = true
serde.workspace = true
serde_json.workspace = true
opentelemetry.workspace = true
opentelemetry-prometheus = { workspace = true, optional = true }
prometheus = { workspace = true, optional = true }
[features]
metrics = [ "opentelemetry-prometheus", "prometheus" ]


@@ -2,7 +2,6 @@ use std::collections::HashMap;
use std::sync::Arc; use std::sync::Arc;
use argon2::password_hash::PasswordHash; use argon2::password_hash::PasswordHash;
use async_trait::async_trait;
use http::header::{ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN, ALLOW}; use http::header::{ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN, ALLOW};
use hyper::{body::Incoming as IncomingBody, Request, Response, StatusCode}; use hyper::{body::Incoming as IncomingBody, Request, Response, StatusCode};
@@ -20,15 +19,15 @@ use garage_rpc::system::ClusterHealthStatus;
use garage_util::error::Error as GarageError; use garage_util::error::Error as GarageError;
use garage_util::socket_address::UnixOrTCPSocketAddress; use garage_util::socket_address::UnixOrTCPSocketAddress;
use crate::generic_server::*; use garage_api_common::generic_server::*;
use garage_api_common::helpers::*;
use crate::admin::bucket::*; use crate::bucket::*;
use crate::admin::cluster::*; use crate::cluster::*;
use crate::admin::error::*; use crate::error::*;
use crate::admin::key::*; use crate::key::*;
use crate::admin::router_v0; use crate::router_v0;
use crate::admin::router_v1::{Authorization, Endpoint}; use crate::router_v1::{Authorization, Endpoint};
use crate::helpers::*;
pub type ResBody = BoxBody<Error>; pub type ResBody = BoxBody<Error>;
@@ -221,7 +220,6 @@ impl AdminApiServer {
} }
} }
#[async_trait]
impl ApiHandler for AdminApiServer { impl ApiHandler for AdminApiServer {
const API_NAME: &'static str = "admin"; const API_NAME: &'static str = "admin";
const API_NAME_DISPLAY: &'static str = "Admin"; const API_NAME_DISPLAY: &'static str = "Admin";


@@ -17,11 +17,12 @@ use garage_model::permission::*;
use garage_model::s3::mpu_table; use garage_model::s3::mpu_table;
use garage_model::s3::object_table::*; use garage_model::s3::object_table::*;
use crate::admin::api_server::ResBody; use garage_api_common::common_error::CommonError;
use crate::admin::error::*; use garage_api_common::helpers::*;
use crate::admin::key::ApiBucketKeyPerm;
use crate::common_error::CommonError; use crate::api_server::ResBody;
use crate::helpers::*; use crate::error::*;
use crate::key::ApiBucketKeyPerm;
pub async fn handle_list_buckets(garage: &Arc<Garage>) -> Result<Response<ResBody>, Error> { pub async fn handle_list_buckets(garage: &Arc<Garage>) -> Result<Response<ResBody>, Error> {
let buckets = garage let buckets = garage
@@ -276,7 +277,7 @@ pub async fn handle_create_bucket(
let helper = garage.locked_helper().await; let helper = garage.locked_helper().await;
if let Some(ga) = &req.global_alias { if let Some(ga) = &req.global_alias {
if !is_valid_bucket_name(ga) { if !is_valid_bucket_name(ga, garage.config.allow_punycode) {
return Err(Error::bad_request(format!( return Err(Error::bad_request(format!(
"{}: {}", "{}: {}",
ga, INVALID_BUCKET_NAME_MESSAGE ga, INVALID_BUCKET_NAME_MESSAGE
@@ -291,7 +292,7 @@ pub async fn handle_create_bucket(
} }
if let Some(la) = &req.local_alias { if let Some(la) = &req.local_alias {
if !is_valid_bucket_name(&la.alias) { if !is_valid_bucket_name(&la.alias, garage.config.allow_punycode) {
return Err(Error::bad_request(format!( return Err(Error::bad_request(format!(
"{}: {}", "{}: {}",
la.alias, INVALID_BUCKET_NAME_MESSAGE la.alias, INVALID_BUCKET_NAME_MESSAGE
@@ -381,7 +382,7 @@ pub async fn handle_delete_bucket(
for ((key_id, alias), _, active) in state.local_aliases.items().iter() { for ((key_id, alias), _, active) in state.local_aliases.items().iter() {
if *active { if *active {
helper helper
.unset_local_bucket_alias(bucket.id, key_id, alias) .purge_local_bucket_alias(bucket.id, key_id, alias)
.await?; .await?;
} }
} }


@@ -12,9 +12,10 @@ use garage_rpc::layout;
use garage_model::garage::Garage; use garage_model::garage::Garage;
use crate::admin::api_server::ResBody; use garage_api_common::helpers::{json_ok_response, parse_json_body};
use crate::admin::error::*;
use crate::helpers::{json_ok_response, parse_json_body}; use crate::api_server::ResBody;
use crate::error::*;
pub async fn handle_get_cluster_status(garage: &Arc<Garage>) -> Result<Response<ResBody>, Error> { pub async fn handle_get_cluster_status(garage: &Arc<Garage>) -> Result<Response<ResBody>, Error> {
let layout = garage.system.cluster_layout(); let layout = garage.system.cluster_layout();


@@ -1,45 +1,50 @@
use err_derive::Error; use std::convert::TryFrom;
use hyper::header::HeaderValue; use hyper::header::HeaderValue;
use hyper::{HeaderMap, StatusCode}; use hyper::{HeaderMap, StatusCode};
use thiserror::Error;
pub use garage_model::helper::error::Error as HelperError; pub use garage_model::helper::error::Error as HelperError;
use crate::common_error::CommonError; use garage_api_common::common_error::{commonErrorDerivative, CommonError};
pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInternalError}; pub use garage_api_common::common_error::{
use crate::generic_server::ApiError; CommonErrorDerivative, OkOrBadRequest, OkOrInternalError,
use crate::helpers::*; };
use garage_api_common::generic_server::ApiError;
use garage_api_common::helpers::*;
/// Errors of this crate /// Errors of this crate
#[derive(Debug, Error)] #[derive(Debug, Error)]
pub enum Error { pub enum Error {
#[error(display = "{}", _0)] #[error("{0}")]
/// Error from common error /// Error from common error
Common(CommonError), Common(#[from] CommonError),
// Category: cannot process // Category: cannot process
/// The API access key does not exist /// The API access key does not exist
#[error(display = "Access key not found: {}", _0)] #[error("Access key not found: {0}")]
NoSuchAccessKey(String), NoSuchAccessKey(String),
/// In Import key, the key already exists /// In Import key, the key already exists
#[error( #[error("Key {0} already exists in data store. Even if it is deleted, we can't let you create a new key with the same ID. Sorry.")]
display = "Key {} already exists in data store. Even if it is deleted, we can't let you create a new key with the same ID. Sorry.",
_0
)]
KeyAlreadyExists(String), KeyAlreadyExists(String),
} }
impl<T> From<T> for Error commonErrorDerivative!(Error);
where
CommonError: From<T>, /// FIXME: helper errors are transformed into their corresponding variants
{ /// in the Error struct, but in many cases a helper error should be considered
fn from(err: T) -> Self { /// an internal error.
Error::Common(CommonError::from(err)) impl From<HelperError> for Error {
fn from(err: HelperError) -> Error {
match CommonError::try_from(err) {
Ok(ce) => Self::Common(ce),
Err(HelperError::NoSuchAccessKey(k)) => Self::NoSuchAccessKey(k),
Err(_) => unreachable!(),
}
} }
} }
impl CommonErrorDerivative for Error {}
impl Error { impl Error {
fn code(&self) -> &'static str { fn code(&self) -> &'static str {
match self { match self {


@@ -9,9 +9,10 @@ use garage_table::*;
use garage_model::garage::Garage; use garage_model::garage::Garage;
use garage_model::key_table::*; use garage_model::key_table::*;
use crate::admin::api_server::ResBody; use garage_api_common::helpers::*;
use crate::admin::error::*;
use crate::helpers::*; use crate::api_server::ResBody;
use crate::error::*;
pub async fn handle_list_keys(garage: &Arc<Garage>) -> Result<Response<ResBody>, Error> { pub async fn handle_list_keys(garage: &Arc<Garage>) -> Result<Response<ResBody>, Error> {
let res = garage let res = garage


@@ -1,3 +1,6 @@
#[macro_use]
extern crate tracing;
pub mod api_server; pub mod api_server;
mod error; mod error;
mod router_v0; mod router_v0;


@@ -2,8 +2,9 @@ use std::borrow::Cow;
use hyper::{Method, Request}; use hyper::{Method, Request};
use crate::admin::error::*; use garage_api_common::router_macros::*;
use crate::router_macros::*;
use crate::error::*;
router_match! {@func router_match! {@func


@@ -2,9 +2,10 @@ use std::borrow::Cow;
use hyper::{Method, Request}; use hyper::{Method, Request};
use crate::admin::error::*; use garage_api_common::router_macros::*;
use crate::admin::router_v0;
use crate::router_macros::*; use crate::error::*;
use crate::router_v0;
pub enum Authorization { pub enum Authorization {
None, None,

src/api/common/Cargo.toml (new file)

@@ -0,0 +1,48 @@
[package]
name = "garage_api_common"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
description = "Common functions for the API server crates for the Garage object store"
repository = "https://git.deuxfleurs.fr/Deuxfleurs/garage"
readme = "../../../README.md"
[lib]
path = "lib.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
garage_model.workspace = true
garage_table.workspace = true
garage_util.workspace = true
base64.workspace = true
bytes.workspace = true
chrono.workspace = true
crc32fast.workspace = true
crc32c.workspace = true
crypto-common.workspace = true
thiserror.workspace = true
hex.workspace = true
hmac.workspace = true
md-5.workspace = true
tracing.workspace = true
nom.workspace = true
pin-project.workspace = true
sha1.workspace = true
sha2.workspace = true
futures.workspace = true
tokio.workspace = true
http.workspace = true
http-body-util.workspace = true
hyper = { workspace = true, default-features = false, features = ["server", "http1"] }
hyper-util.workspace = true
url.workspace = true
serde.workspace = true
serde_json.workspace = true
opentelemetry.workspace = true


@@ -1,5 +1,7 @@
use err_derive::Error; use std::convert::TryFrom;
use hyper::StatusCode; use hyper::StatusCode;
use thiserror::Error;
use garage_util::error::Error as GarageError; use garage_util::error::Error as GarageError;
@@ -10,51 +12,80 @@ use garage_model::helper::error::Error as HelperError;
pub enum CommonError { pub enum CommonError {
// ---- INTERNAL ERRORS ---- // ---- INTERNAL ERRORS ----
/// Error related to deeper parts of Garage /// Error related to deeper parts of Garage
#[error(display = "Internal error: {}", _0)] #[error("Internal error: {0}")]
InternalError(#[error(source)] GarageError), InternalError(#[from] GarageError),
/// Error related to Hyper /// Error related to Hyper
#[error(display = "Internal error (Hyper error): {}", _0)] #[error("Internal error (Hyper error): {0}")]
Hyper(#[error(source)] hyper::Error), Hyper(#[from] hyper::Error),
/// Error related to HTTP /// Error related to HTTP
#[error(display = "Internal error (HTTP error): {}", _0)] #[error("Internal error (HTTP error): {0}")]
Http(#[error(source)] http::Error), Http(#[from] http::Error),
// ---- GENERIC CLIENT ERRORS ---- // ---- GENERIC CLIENT ERRORS ----
/// Proper authentication was not provided /// Proper authentication was not provided
#[error(display = "Forbidden: {}", _0)] #[error("Forbidden: {0}")]
Forbidden(String), Forbidden(String),
/// Generic bad request response with custom message /// Generic bad request response with custom message
#[error(display = "Bad request: {}", _0)] #[error("Bad request: {0}")]
BadRequest(String), BadRequest(String),
/// The client sent a header with invalid value /// The client sent a header with invalid value
#[error(display = "Invalid header value: {}", _0)] #[error("Invalid header value: {0}")]
InvalidHeader(#[error(source)] hyper::header::ToStrError), InvalidHeader(#[from] hyper::header::ToStrError),
// ---- SPECIFIC ERROR CONDITIONS ---- // ---- SPECIFIC ERROR CONDITIONS ----
// These have to be error codes referenced in the S3 spec here: // These have to be error codes referenced in the S3 spec here:
// https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList // https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList
/// The bucket requested don't exists /// The bucket requested don't exists
#[error(display = "Bucket not found: {}", _0)] #[error("Bucket not found: {0}")]
NoSuchBucket(String), NoSuchBucket(String),
/// Tried to create a bucket that already exist /// Tried to create a bucket that already exist
#[error(display = "Bucket already exists")] #[error("Bucket already exists")]
BucketAlreadyExists, BucketAlreadyExists,
/// Tried to delete a non-empty bucket /// Tried to delete a non-empty bucket
#[error(display = "Tried to delete a non-empty bucket")] #[error("Tried to delete a non-empty bucket")]
BucketNotEmpty, BucketNotEmpty,
// Category: bad request // Category: bad request
/// Bucket name is not valid according to AWS S3 specs /// Bucket name is not valid according to AWS S3 specs
#[error(display = "Invalid bucket name: {}", _0)] #[error("Invalid bucket name: {0}")]
InvalidBucketName(String), InvalidBucketName(String),
} }
#[macro_export]
macro_rules! commonErrorDerivative {
( $error_struct: ident ) => {
impl From<garage_util::error::Error> for $error_struct {
fn from(err: garage_util::error::Error) -> Self {
Self::Common(CommonError::InternalError(err))
}
}
impl From<http::Error> for $error_struct {
fn from(err: http::Error) -> Self {
Self::Common(CommonError::Http(err))
}
}
impl From<hyper::Error> for $error_struct {
fn from(err: hyper::Error) -> Self {
Self::Common(CommonError::Hyper(err))
}
}
impl From<hyper::header::ToStrError> for $error_struct {
fn from(err: hyper::header::ToStrError) -> Self {
Self::Common(CommonError::InvalidHeader(err))
}
}
impl CommonErrorDerivative for $error_struct {}
};
}
pub use commonErrorDerivative;
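
For context, a minimal sketch of what a crate invoking this macro must provide (the enum below is hypothetical; the real callers are the per-API error types in this diff, such as the admin crate's `commonErrorDerivative!(Error)` above). Note that the expansion references `garage_util`, `http` and `hyper` by path, so the invoking crate needs those dependencies:

    use garage_api_common::common_error::{commonErrorDerivative, CommonError, CommonErrorDerivative};
    use thiserror::Error;

    // Hypothetical minimal caller: the macro needs a `Common(CommonError)`
    // variant and `CommonError` in scope at the invocation site.
    #[derive(Debug, Error)]
    enum MyApiError {
        #[error("{0}")]
        Common(#[from] CommonError),
    }

    // Generates From<garage_util::error::Error>, From<http::Error>,
    // From<hyper::Error> and From<hyper::header::ToStrError> impls,
    // plus the empty `impl CommonErrorDerivative for MyApiError {}`.
    commonErrorDerivative!(MyApiError);
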
impl CommonError { impl CommonError {
pub fn http_status_code(&self) -> StatusCode { pub fn http_status_code(&self) -> StatusCode {
match self { match self {
@@ -97,18 +128,39 @@ impl CommonError {
} }
} }
impl From<HelperError> for CommonError { impl TryFrom<HelperError> for CommonError {
fn from(err: HelperError) -> Self { type Error = HelperError;
fn try_from(err: HelperError) -> Result<Self, HelperError> {
match err { match err {
HelperError::Internal(i) => Self::InternalError(i), HelperError::Internal(i) => Ok(Self::InternalError(i)),
HelperError::BadRequest(b) => Self::BadRequest(b), HelperError::BadRequest(b) => Ok(Self::BadRequest(b)),
HelperError::InvalidBucketName(n) => Self::InvalidBucketName(n), HelperError::InvalidBucketName(n) => Ok(Self::InvalidBucketName(n)),
HelperError::NoSuchBucket(n) => Self::NoSuchBucket(n), HelperError::NoSuchBucket(n) => Ok(Self::NoSuchBucket(n)),
e => Self::bad_request(format!("{}", e)), e => Err(e),
} }
} }
} }
/// This function converts HelperErrors into CommonErrors,
/// for variants that exist in CommonError.
/// This is used for helper functions that might return InvalidBucketName
/// or NoSuchBucket for instance, and we want to pass that error
/// up to our caller.
pub fn pass_helper_error(err: HelperError) -> CommonError {
match CommonError::try_from(err) {
Ok(e) => e,
Err(e) => panic!("Helper error `{}` should not have happened here", e),
}
}
pub fn helper_error_as_internal(err: HelperError) -> CommonError {
match err {
HelperError::Internal(e) => CommonError::InternalError(e),
e => CommonError::InternalError(GarageError::Message(e.to_string())),
}
}
pub trait CommonErrorDerivative: From<CommonError> { pub trait CommonErrorDerivative: From<CommonError> {
fn internal_error<M: ToString>(msg: M) -> Self { fn internal_error<M: ToString>(msg: M) -> Self {
Self::from(CommonError::InternalError(GarageError::Message( Self::from(CommonError::InternalError(GarageError::Message(

src/api/common/cors.rs (new file)

@@ -0,0 +1,170 @@
use std::sync::Arc;
use http::header::{
ACCESS_CONTROL_ALLOW_HEADERS, ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN,
ACCESS_CONTROL_EXPOSE_HEADERS, ACCESS_CONTROL_REQUEST_HEADERS, ACCESS_CONTROL_REQUEST_METHOD,
};
use hyper::{body::Body, body::Incoming as IncomingBody, Request, Response, StatusCode};
use garage_model::bucket_table::{BucketParams, CorsRule as GarageCorsRule};
use garage_model::garage::Garage;
use crate::common_error::{
helper_error_as_internal, CommonError, OkOrBadRequest, OkOrInternalError,
};
use crate::helpers::*;
pub fn find_matching_cors_rule<'a, B>(
bucket_params: &'a BucketParams,
req: &Request<B>,
) -> Result<Option<&'a GarageCorsRule>, CommonError> {
if let Some(cors_config) = bucket_params.cors_config.get() {
if let Some(origin) = req.headers().get("Origin") {
let origin = origin.to_str()?;
let request_headers = match req.headers().get(ACCESS_CONTROL_REQUEST_HEADERS) {
Some(h) => h.to_str()?.split(',').map(|h| h.trim()).collect::<Vec<_>>(),
None => vec![],
};
return Ok(cors_config.iter().find(|rule| {
cors_rule_matches(rule, origin, req.method().as_ref(), request_headers.iter())
}));
}
}
Ok(None)
}
pub fn cors_rule_matches<'a, HI, S>(
rule: &GarageCorsRule,
origin: &'a str,
method: &'a str,
mut request_headers: HI,
) -> bool
where
HI: Iterator<Item = S>,
S: AsRef<str>,
{
rule.allow_origins.iter().any(|x| x == "*" || x == origin)
&& rule.allow_methods.iter().any(|x| x == "*" || x == method)
&& request_headers.all(|h| {
rule.allow_headers
.iter()
.any(|x| x == "*" || x == h.as_ref())
})
}
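
The predicate above is three independent wildcard-or-exact-match checks. As a self-contained sketch of the same logic (the `Rule` struct here is a hypothetical stand-in reduced to the three fields `cors_rule_matches` actually consults, not garage_model's `CorsRule`):

    struct Rule {
        allow_origins: Vec<String>,
        allow_methods: Vec<String>,
        allow_headers: Vec<String>,
    }

    fn matches(rule: &Rule, origin: &str, method: &str, request_headers: &[&str]) -> bool {
        rule.allow_origins.iter().any(|x| x == "*" || x == origin)
            && rule.allow_methods.iter().any(|x| x == "*" || x == method)
            && request_headers
                .iter()
                .all(|h| rule.allow_headers.iter().any(|x| x == "*" || x == h))
    }

    fn main() {
        let rule = Rule {
            allow_origins: vec!["https://app.example.com".into()],
            allow_methods: vec!["GET".into(), "PUT".into()],
            allow_headers: vec!["*".into()],
        };
        // A preflight for PUT from the allowed origin matches...
        assert!(matches(&rule, "https://app.example.com", "PUT", &["content-type"]));
        // ...but any other origin is rejected, since no "*" origin is listed.
        assert!(!matches(&rule, "https://evil.example", "PUT", &[]));
    }
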
pub fn add_cors_headers(
resp: &mut Response<impl Body>,
rule: &GarageCorsRule,
) -> Result<(), http::header::InvalidHeaderValue> {
let h = resp.headers_mut();
h.insert(
ACCESS_CONTROL_ALLOW_ORIGIN,
rule.allow_origins.join(", ").parse()?,
);
h.insert(
ACCESS_CONTROL_ALLOW_METHODS,
rule.allow_methods.join(", ").parse()?,
);
h.insert(
ACCESS_CONTROL_ALLOW_HEADERS,
rule.allow_headers.join(", ").parse()?,
);
h.insert(
ACCESS_CONTROL_EXPOSE_HEADERS,
rule.expose_headers.join(", ").parse()?,
);
Ok(())
}
pub async fn handle_options_api(
garage: Arc<Garage>,
req: &Request<IncomingBody>,
bucket_name: Option<String>,
) -> Result<Response<EmptyBody>, CommonError> {
// FIXME: CORS rules of buckets with local aliases are
// not taken into account.
// If the bucket name is a global bucket name,
// we try to apply the CORS rules of that bucket.
// If a user has a local bucket name that has
// the same name, its CORS rules won't be applied
// and will be shadowed by the rules of the globally
// existing bucket (but this is inevitable because
// OPTIONS calls are not authenticated).
if let Some(bn) = bucket_name {
let helper = garage.bucket_helper();
let bucket_id = helper
.resolve_global_bucket_name(&bn)
.await
.map_err(helper_error_as_internal)?;
if let Some(id) = bucket_id {
let bucket = garage
.bucket_helper()
.get_existing_bucket(id)
.await
.map_err(helper_error_as_internal)?;
let bucket_params = bucket.state.into_option().unwrap();
handle_options_for_bucket(req, &bucket_params)
} else {
// If there is a bucket name in the request, but that name
// does not correspond to a global alias for a bucket,
// then it's either a non-existing bucket or a local bucket.
// We have no way of knowing, because the request is not
// authenticated and thus we can't resolve local aliases.
// We take the permissive approach of allowing everything,
// because we don't want to prevent web apps that use
// local bucket names from making API calls.
Ok(Response::builder()
.header(ACCESS_CONTROL_ALLOW_ORIGIN, "*")
.header(ACCESS_CONTROL_ALLOW_METHODS, "*")
.status(StatusCode::OK)
.body(EmptyBody::new())?)
}
} else {
// If there is no bucket name in the request,
// we are doing a ListBuckets call, which we want to allow
// for all origins.
Ok(Response::builder()
.header(ACCESS_CONTROL_ALLOW_ORIGIN, "*")
.header(ACCESS_CONTROL_ALLOW_METHODS, "GET")
.status(StatusCode::OK)
.body(EmptyBody::new())?)
}
}
pub fn handle_options_for_bucket<B>(
req: &Request<B>,
bucket_params: &BucketParams,
) -> Result<Response<EmptyBody>, CommonError> {
let origin = req
.headers()
.get("Origin")
.ok_or_bad_request("Missing Origin header")?
.to_str()?;
let request_method = req
.headers()
.get(ACCESS_CONTROL_REQUEST_METHOD)
.ok_or_bad_request("Missing Access-Control-Request-Method header")?
.to_str()?;
let request_headers = match req.headers().get(ACCESS_CONTROL_REQUEST_HEADERS) {
Some(h) => h.to_str()?.split(',').map(|h| h.trim()).collect::<Vec<_>>(),
None => vec![],
};
if let Some(cors_config) = bucket_params.cors_config.get() {
let matching_rule = cors_config
.iter()
.find(|rule| cors_rule_matches(rule, origin, request_method, request_headers.iter()));
if let Some(rule) = matching_rule {
let mut resp = Response::builder()
.status(StatusCode::OK)
.body(EmptyBody::new())?;
add_cors_headers(&mut resp, rule).ok_or_internal_error("Invalid CORS configuration")?;
return Ok(resp);
}
}
Err(CommonError::Forbidden(
"This CORS request is not allowed.".into(),
))
}


@@ -2,8 +2,7 @@ use std::convert::Infallible;
use std::fs::{self, Permissions}; use std::fs::{self, Permissions};
use std::os::unix::fs::PermissionsExt; use std::os::unix::fs::PermissionsExt;
use std::sync::Arc; use std::sync::Arc;
use std::time::Duration;
use async_trait::async_trait;
use futures::future::Future; use futures::future::Future;
use futures::stream::{futures_unordered::FuturesUnordered, StreamExt}; use futures::stream::{futures_unordered::FuturesUnordered, StreamExt};
@@ -19,6 +18,7 @@ use hyper_util::rt::TokioIo;
use tokio::io::{AsyncRead, AsyncWrite}; use tokio::io::{AsyncRead, AsyncWrite};
use tokio::net::{TcpListener, TcpStream, UnixListener, UnixStream}; use tokio::net::{TcpListener, TcpStream, UnixListener, UnixStream};
use tokio::sync::watch; use tokio::sync::watch;
use tokio::time::{sleep_until, Instant};
use opentelemetry::{ use opentelemetry::{
global, global,
@@ -34,7 +34,7 @@ use garage_util::socket_address::UnixOrTCPSocketAddress;
use crate::helpers::{BoxBody, ErrorBody}; use crate::helpers::{BoxBody, ErrorBody};
pub(crate) trait ApiEndpoint: Send + Sync + 'static { pub trait ApiEndpoint: Send + Sync + 'static {
fn name(&self) -> &'static str; fn name(&self) -> &'static str;
fn add_span_attributes(&self, span: SpanRef<'_>); fn add_span_attributes(&self, span: SpanRef<'_>);
} }
@@ -45,8 +45,7 @@ pub trait ApiError: std::error::Error + Send + Sync + 'static {
fn http_body(&self, garage_region: &str, path: &str) -> ErrorBody; fn http_body(&self, garage_region: &str, path: &str) -> ErrorBody;
} }
#[async_trait] pub trait ApiHandler: Send + Sync + 'static {
pub(crate) trait ApiHandler: Send + Sync + 'static {
const API_NAME: &'static str; const API_NAME: &'static str;
const API_NAME_DISPLAY: &'static str; const API_NAME_DISPLAY: &'static str;
@@ -54,14 +53,20 @@ pub(crate) trait ApiHandler: Send + Sync + 'static {
type Error: ApiError; type Error: ApiError;
fn parse_endpoint(&self, r: &Request<IncomingBody>) -> Result<Self::Endpoint, Self::Error>; fn parse_endpoint(&self, r: &Request<IncomingBody>) -> Result<Self::Endpoint, Self::Error>;
async fn handle( fn handle(
&self, &self,
req: Request<IncomingBody>, req: Request<IncomingBody>,
endpoint: Self::Endpoint, endpoint: Self::Endpoint,
) -> Result<Response<BoxBody<Self::Error>>, Self::Error>; ) -> impl Future<Output = Result<Response<BoxBody<Self::Error>>, Self::Error>> + Send;
/// Returns the key id used to authenticate this request. The ID returned must be safe to
/// log.
fn key_id_from_request(&self, _req: &Request<IncomingBody>) -> Option<String> {
None
}
} }
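
The change above drops `#[async_trait]` in favor of return-position `impl Trait` in traits, stable since Rust 1.75: the trait spells out `impl Future + Send`, while implementors can keep writing plain `async fn`. A minimal sketch of the pattern, with hypothetical `Handler`/`Echo` names:

    use std::future::Future;

    trait Handler: Send + Sync + 'static {
        // The `+ Send` bound is what #[async_trait] used to add implicitly.
        fn handle(&self, req: String) -> impl Future<Output = Result<String, ()>> + Send;
    }

    struct Echo;

    impl Handler for Echo {
        // An async fn satisfies the `impl Future + Send` signature, as long
        // as the future it produces is itself Send.
        async fn handle(&self, req: String) -> Result<String, ()> {
            Ok(req)
        }
    }
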
pub(crate) struct ApiServer<A: ApiHandler> { pub struct ApiServer<A: ApiHandler> {
region: String, region: String,
api_handler: A, api_handler: A,
@@ -143,19 +148,20 @@ impl<A: ApiHandler> ApiServer<A> {
) -> Result<Response<BoxBody<A::Error>>, http::Error> { ) -> Result<Response<BoxBody<A::Error>>, http::Error> {
let uri = req.uri().clone(); let uri = req.uri().clone();
if let Ok(forwarded_for_ip_addr) = let source = if let Ok(forwarded_for_ip_addr) =
forwarded_headers::handle_forwarded_for_headers(req.headers()) forwarded_headers::handle_forwarded_for_headers(req.headers())
{ {
info!( format!("{forwarded_for_ip_addr} (via {addr})")
"{} (via {}) {} {}",
forwarded_for_ip_addr,
addr,
req.method(),
uri
);
} else { } else {
info!("{} {} {}", addr, req.method(), uri); format!("{addr}")
} };
// we only do this to log the access key, so we can discard any error
let key = self
.api_handler
.key_id_from_request(&req)
.map(|k| format!("(key {k}) "))
.unwrap_or_default();
info!("{source} {key}{} {uri}", req.method());
debug!("{:?}", req); debug!("{:?}", req);
let tracer = opentelemetry::global::tracer("garage"); let tracer = opentelemetry::global::tracer("garage");
@@ -246,13 +252,11 @@ impl<A: ApiHandler> ApiServer<A> {
// ==== helper functions ==== // ==== helper functions ====
#[async_trait]
pub trait Accept: Send + Sync + 'static { pub trait Accept: Send + Sync + 'static {
type Stream: AsyncRead + AsyncWrite + Send + Sync + 'static; type Stream: AsyncRead + AsyncWrite + Send + Sync + 'static;
async fn accept(&self) -> std::io::Result<(Self::Stream, String)>; fn accept(&self) -> impl Future<Output = std::io::Result<(Self::Stream, String)>> + Send;
} }
#[async_trait]
impl Accept for TcpListener { impl Accept for TcpListener {
type Stream = TcpStream; type Stream = TcpStream;
async fn accept(&self) -> std::io::Result<(Self::Stream, String)> { async fn accept(&self) -> std::io::Result<(Self::Stream, String)> {
@@ -264,7 +268,6 @@ impl Accept for TcpListener {
pub struct UnixListenerOn(pub UnixListener, pub String); pub struct UnixListenerOn(pub UnixListener, pub String);
#[async_trait]
impl Accept for UnixListenerOn { impl Accept for UnixListenerOn {
type Stream = UnixStream; type Stream = UnixStream;
async fn accept(&self) -> std::io::Result<(Self::Stream, String)> { async fn accept(&self) -> std::io::Result<(Self::Stream, String)> {
@@ -291,7 +294,7 @@ where
let connection_collector = tokio::spawn({ let connection_collector = tokio::spawn({
let server_name = server_name.clone(); let server_name = server_name.clone();
async move { async move {
let mut connections = FuturesUnordered::new(); let mut connections = FuturesUnordered::<tokio::task::JoinHandle<()>>::new();
loop { loop {
let collect_next = async { let collect_next = async {
if connections.is_empty() { if connections.is_empty() {
@@ -312,23 +315,34 @@ where
} }
} }
} }
if !connections.is_empty() { let deadline = Instant::now() + Duration::from_secs(10);
while !connections.is_empty() {
info!( info!(
"{} server: {} connections still open", "{} server: {} connections still open, deadline in {:.2}s",
server_name, server_name,
connections.len() connections.len(),
(deadline - Instant::now()).as_secs_f32(),
); );
while let Some(conn_res) = connections.next().await { tokio::select! {
trace!( conn_res = connections.next() => {
"{} server: HTTP connection finished: {:?}", trace!(
server_name, "{} server: HTTP connection finished: {:?}",
conn_res server_name,
); conn_res.unwrap(),
info!( );
"{} server: {} connections still open", }
server_name, _ = sleep_until(deadline) => {
connections.len() warn!("{} server: exit deadline reached with {} connections still open, killing them now",
); server_name,
connections.len());
for conn in connections.iter() {
conn.abort();
}
for conn in connections {
assert!(conn.await.unwrap_err().is_cancelled());
}
break;
}
} }
} }
} }
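
The reworked shutdown path above drains open connections but enforces a hard deadline, aborting whatever is still running when it expires. A self-contained sketch of the same pattern (assuming a tokio runtime; the `drain` function name is hypothetical):

    use std::time::Duration;
    use futures::stream::{FuturesUnordered, StreamExt};
    use tokio::time::{sleep_until, Instant};

    async fn drain(mut connections: FuturesUnordered<tokio::task::JoinHandle<()>>) {
        let deadline = Instant::now() + Duration::from_secs(10);
        while !connections.is_empty() {
            tokio::select! {
                // A connection task finished on its own.
                _ = connections.next() => {}
                // Deadline reached: request cancellation at each task's next
                // await point, then collect the cancelled results.
                _ = sleep_until(deadline) => {
                    for conn in connections.iter() {
                        conn.abort();
                    }
                    while let Some(res) = connections.next().await {
                        assert!(res.is_ok() || res.unwrap_err().is_cancelled());
                    }
                    break;
                }
            }
        }
    }
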
@@ -336,7 +350,11 @@ where
while !*must_exit.borrow() { while !*must_exit.borrow() {
let (stream, client_addr) = tokio::select! { let (stream, client_addr) = tokio::select! {
acc = listener.accept() => acc?, acc = listener.accept() => match acc {
Ok(r) => r,
Err(e) if e.kind() == std::io::ErrorKind::ConnectionAborted => continue,
Err(e) => return Err(e.into()),
},
_ = must_exit.changed() => continue, _ = must_exit.changed() => continue,
}; };


@@ -8,7 +8,6 @@ use hyper::{
body::{Body, Bytes}, body::{Body, Bytes},
Request, Response, Request, Response,
}; };
use idna::domain_to_unicode;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use garage_model::bucket_table::BucketParams; use garage_model::bucket_table::BucketParams;
@@ -97,7 +96,7 @@ pub fn authority_to_host(authority: &str) -> Result<String, Error> {
authority authority
))), ))),
}; };
authority.map(|h| domain_to_unicode(h).0) authority.map(|h| h.to_ascii_lowercase())
} }
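
The replacement above means host names are now only ASCII-lowercased instead of being decoded through IDNA's `domain_to_unicode`, so punycode labels stay in their `xn--` form (compare the `allow_punycode` flag threaded into `is_valid_bucket_name` earlier in this diff). A tiny sketch of the changed behavior (hypothetical `normalize_host` wrapper):

    fn normalize_host(h: &str) -> String {
        h.to_ascii_lowercase()
    }

    fn main() {
        assert_eq!(normalize_host("MyBucket.S3.Garage.TLD"), "mybucket.s3.garage.tld");
        // Punycode is preserved as-is rather than decoded to Unicode:
        assert_eq!(normalize_host("xn--bcher-kva.example"), "xn--bcher-kva.example");
    }
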
/// Extract the bucket name and the key name from an HTTP path and possibly a bucket provided in /// Extract the bucket name and the key name from an HTTP path and possibly a bucket provided in
@@ -363,9 +362,9 @@ mod tests {
} }
#[derive(Serialize)] #[derive(Serialize)]
pub(crate) struct CustomApiErrorBody { pub struct CustomApiErrorBody {
pub(crate) code: String, pub code: String,
pub(crate) message: String, pub message: String,
pub(crate) region: String, pub region: String,
pub(crate) path: String, pub path: String,
} }

src/api/common/lib.rs (new file)

@@ -0,0 +1,12 @@
//! Crate for serving an S3 compatible API
#[macro_use]
extern crate tracing;
pub mod common_error;
pub mod cors;
pub mod encoding;
pub mod generic_server;
pub mod helpers;
pub mod router_macros;
pub mod signature;


@@ -1,5 +1,6 @@
/// This macro is used to generate very repetitive match {} blocks in this module /// This macro is used to generate very repetitive match {} blocks in this module
/// It is _not_ made to be used anywhere else /// It is _not_ made to be used anywhere else
#[macro_export]
macro_rules! router_match { macro_rules! router_match {
(@match $enum:expr , [ $($endpoint:ident,)* ]) => {{ (@match $enum:expr , [ $($endpoint:ident,)* ]) => {{
// usage: router_match {@match my_enum, [ VariantWithField1, VariantWithField2 ..] } // usage: router_match {@match my_enum, [ VariantWithField1, VariantWithField2 ..] }
@@ -133,6 +134,7 @@ macro_rules! router_match {
/// This macro is used to generate part of the code in this module. It must be called only once, and
/// is useless outside of this module. /// is useless outside of this module.
#[macro_export]
macro_rules! generateQueryParameters { macro_rules! generateQueryParameters {
( (
keywords: [ $($kw_param:expr => $kw_name: ident),* ], keywords: [ $($kw_param:expr => $kw_name: ident),* ],
@@ -204,7 +206,7 @@ macro_rules! generateQueryParameters {
} }
/// Get an error message in case not all parameters were used when extracting them to
/// build an Enpoint variant /// build an Endpoint variant
fn nonempty_message(&self) -> Option<&str> { fn nonempty_message(&self) -> Option<&str> {
if self.keyword.is_some() { if self.keyword.is_some() {
Some("Keyword not used") Some("Keyword not used")
@@ -220,5 +222,5 @@ macro_rules! generateQueryParameters {
} }
} }
pub(crate) use generateQueryParameters; pub use generateQueryParameters;
pub(crate) use router_match; pub use router_match;


@@ -0,0 +1,135 @@
use std::sync::Mutex;
use futures::prelude::*;
use futures::stream::BoxStream;
use http_body_util::{BodyExt, StreamBody};
use hyper::body::{Bytes, Frame};
use serde::Deserialize;
use tokio::sync::mpsc;
use tokio::task;
use super::*;
use crate::signature::checksum::*;
pub struct ReqBody {
// why need mutex to be sync??
pub(crate) stream: Mutex<BoxStream<'static, Result<Frame<Bytes>, Error>>>,
pub(crate) checksummer: Checksummer,
pub(crate) expected_checksums: ExpectedChecksums,
pub(crate) trailer_algorithm: Option<ChecksumAlgorithm>,
}
pub type StreamingChecksumReceiver = task::JoinHandle<Result<Checksums, Error>>;
impl ReqBody {
pub fn add_expected_checksums(&mut self, more: ExpectedChecksums) {
if more.md5.is_some() {
self.expected_checksums.md5 = more.md5;
}
if more.sha256.is_some() {
self.expected_checksums.sha256 = more.sha256;
}
if more.extra.is_some() {
self.expected_checksums.extra = more.extra;
}
self.checksummer.add_expected(&self.expected_checksums);
}
pub fn add_md5(&mut self) {
self.checksummer.add_md5();
}
// ============ non-streaming =============
pub async fn json<T: for<'a> Deserialize<'a>>(self) -> Result<T, Error> {
let body = self.collect().await?;
let resp: T = serde_json::from_slice(&body).ok_or_bad_request("Invalid JSON")?;
Ok(resp)
}
pub async fn collect(self) -> Result<Bytes, Error> {
self.collect_with_checksums().await.map(|(b, _)| b)
}
pub async fn collect_with_checksums(mut self) -> Result<(Bytes, Checksums), Error> {
let stream: BoxStream<_> = self.stream.into_inner().unwrap();
let bytes = BodyExt::collect(StreamBody::new(stream)).await?.to_bytes();
self.checksummer.update(&bytes);
let checksums = self.checksummer.finalize();
checksums.verify(&self.expected_checksums)?;
Ok((bytes, checksums))
}
// ============ streaming =============
pub fn streaming_with_checksums(
self,
) -> (
BoxStream<'static, Result<Bytes, Error>>,
StreamingChecksumReceiver,
) {
let Self {
stream,
mut checksummer,
mut expected_checksums,
trailer_algorithm,
} = self;
let (frame_tx, mut frame_rx) = mpsc::channel::<Frame<Bytes>>(5);
let join_checksums = tokio::spawn(async move {
while let Some(frame) = frame_rx.recv().await {
match frame.into_data() {
Ok(data) => {
checksummer = tokio::task::spawn_blocking(move || {
checksummer.update(&data);
checksummer
})
.await
.unwrap()
}
Err(frame) => {
let trailers = frame.into_trailers().unwrap();
let algo = trailer_algorithm.unwrap();
expected_checksums.extra = Some(extract_checksum_value(&trailers, algo)?);
break;
}
}
}
if trailer_algorithm.is_some() && expected_checksums.extra.is_none() {
return Err(Error::bad_request("trailing checksum was not sent"));
}
let checksums = checksummer.finalize();
checksums.verify(&expected_checksums)?;
Ok(checksums)
});
let stream: BoxStream<_> = stream.into_inner().unwrap();
let stream = stream.filter_map(move |x| {
let frame_tx = frame_tx.clone();
async move {
match x {
Err(e) => Some(Err(e)),
Ok(frame) => {
if frame.is_data() {
let data = frame.data_ref().unwrap().clone();
let _ = frame_tx.send(frame).await;
Some(Ok(data))
} else {
let _ = frame_tx.send(frame).await;
None
}
}
}
}
});
(stream.boxed(), join_checksums)
}
}
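
A hedged sketch of how a caller inside this module might consume the pair returned by `streaming_with_checksums` (the `store_payload` function is hypothetical, and the error mapping is simplified): the payload is streamed out chunk by chunk while the spawned task verifies expected and trailing checksums in parallel.

    use futures::StreamExt;

    async fn store_payload(body: ReqBody) -> Result<Checksums, Error> {
        let (mut stream, checksums) = body.streaming_with_checksums();
        while let Some(chunk) = stream.next().await {
            let bytes = chunk?;
            // ...hand `bytes` to the storage layer here...
            drop(bytes);
        }
        // The JoinHandle yields Result<Checksums, Error>; a JoinError
        // (panic or cancellation) is mapped to an internal error.
        checksums
            .await
            .map_err(|_| Error::internal_error("checksum task failed"))?
    }
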


@@ -11,11 +11,12 @@ use sha2::Sha256;
use http::{HeaderMap, HeaderName, HeaderValue}; use http::{HeaderMap, HeaderName, HeaderValue};
use garage_util::data::*; use garage_util::data::*;
use garage_util::error::OkOrMessage;
use garage_model::s3::object_table::*; use super::*;
use crate::s3::error::*; pub use garage_model::s3::object_table::{ChecksumAlgorithm, ChecksumValue};
pub const CONTENT_MD5: HeaderName = HeaderName::from_static("content-md5");
pub const X_AMZ_CHECKSUM_ALGORITHM: HeaderName = pub const X_AMZ_CHECKSUM_ALGORITHM: HeaderName =
HeaderName::from_static("x-amz-checksum-algorithm"); HeaderName::from_static("x-amz-checksum-algorithm");
@@ -31,8 +32,8 @@ pub type Md5Checksum = [u8; 16];
pub type Sha1Checksum = [u8; 20]; pub type Sha1Checksum = [u8; 20];
pub type Sha256Checksum = [u8; 32]; pub type Sha256Checksum = [u8; 32];
#[derive(Debug, Default)] #[derive(Debug, Default, Clone)]
pub(crate) struct ExpectedChecksums { pub struct ExpectedChecksums {
// base64-encoded md5 (content-md5 header) // base64-encoded md5 (content-md5 header)
pub md5: Option<String>, pub md5: Option<String>,
// content_sha256 (as a Hash / FixedBytes32) // content_sha256 (as a Hash / FixedBytes32)
@@ -41,7 +42,7 @@ pub(crate) struct ExpectedChecksums {
pub extra: Option<ChecksumValue>, pub extra: Option<ChecksumValue>,
} }
pub(crate) struct Checksummer { pub struct Checksummer {
pub crc32: Option<Crc32>, pub crc32: Option<Crc32>,
pub crc32c: Option<Crc32c>, pub crc32c: Option<Crc32c>,
pub md5: Option<Md5>, pub md5: Option<Md5>,
@@ -50,7 +51,7 @@ pub(crate) struct Checksummer {
} }
#[derive(Default)] #[derive(Default)]
pub(crate) struct Checksums { pub struct Checksums {
pub crc32: Option<Crc32Checksum>, pub crc32: Option<Crc32Checksum>,
pub crc32c: Option<Crc32cChecksum>, pub crc32c: Option<Crc32cChecksum>,
pub md5: Option<Md5Checksum>, pub md5: Option<Md5Checksum>,
@@ -59,34 +60,48 @@ pub(crate) struct Checksums {
} }
impl Checksummer { impl Checksummer {
pub(crate) fn init(expected: &ExpectedChecksums, require_md5: bool) -> Self { pub fn new() -> Self {
let mut ret = Self { Self {
crc32: None, crc32: None,
crc32c: None, crc32c: None,
md5: None, md5: None,
sha1: None, sha1: None,
sha256: None, sha256: None,
}; }
}
if expected.md5.is_some() || require_md5 { pub fn init(expected: &ExpectedChecksums, add_md5: bool) -> Self {
ret.md5 = Some(Md5::new()); let mut ret = Self::new();
} ret.add_expected(expected);
if expected.sha256.is_some() || matches!(&expected.extra, Some(ChecksumValue::Sha256(_))) { if add_md5 {
ret.sha256 = Some(Sha256::new()); ret.add_md5();
}
if matches!(&expected.extra, Some(ChecksumValue::Crc32(_))) {
ret.crc32 = Some(Crc32::new());
}
if matches!(&expected.extra, Some(ChecksumValue::Crc32c(_))) {
ret.crc32c = Some(Crc32c::default());
}
if matches!(&expected.extra, Some(ChecksumValue::Sha1(_))) {
ret.sha1 = Some(Sha1::new());
} }
ret ret
} }
pub(crate) fn add(mut self, algo: Option<ChecksumAlgorithm>) -> Self { pub fn add_md5(&mut self) {
self.md5 = Some(Md5::new());
}
pub fn add_expected(&mut self, expected: &ExpectedChecksums) {
if expected.md5.is_some() {
self.md5 = Some(Md5::new());
}
if expected.sha256.is_some() || matches!(&expected.extra, Some(ChecksumValue::Sha256(_))) {
self.sha256 = Some(Sha256::new());
}
if matches!(&expected.extra, Some(ChecksumValue::Crc32(_))) {
self.crc32 = Some(Crc32::new());
}
if matches!(&expected.extra, Some(ChecksumValue::Crc32c(_))) {
self.crc32c = Some(Crc32c::default());
}
if matches!(&expected.extra, Some(ChecksumValue::Sha1(_))) {
self.sha1 = Some(Sha1::new());
}
}
pub fn add(mut self, algo: Option<ChecksumAlgorithm>) -> Self {
match algo { match algo {
Some(ChecksumAlgorithm::Crc32) => { Some(ChecksumAlgorithm::Crc32) => {
self.crc32 = Some(Crc32::new()); self.crc32 = Some(Crc32::new());
@@ -105,7 +120,7 @@ impl Checksummer {
self self
} }
pub(crate) fn update(&mut self, bytes: &[u8]) { pub fn update(&mut self, bytes: &[u8]) {
if let Some(crc32) = &mut self.crc32 { if let Some(crc32) = &mut self.crc32 {
crc32.update(bytes); crc32.update(bytes);
} }
@@ -123,7 +138,7 @@ impl Checksummer {
} }
} }
pub(crate) fn finalize(self) -> Checksums { pub fn finalize(self) -> Checksums {
Checksums { Checksums {
crc32: self.crc32.map(|x| u32::to_be_bytes(x.finalize())), crc32: self.crc32.map(|x| u32::to_be_bytes(x.finalize())),
crc32c: self crc32c: self
@@ -183,153 +198,56 @@ impl Checksums {
// ---- // ----
#[derive(Default)] pub fn parse_checksum_algorithm(algo: &str) -> Result<ChecksumAlgorithm, Error> {
pub(crate) struct MultipartChecksummer { match algo {
pub md5: Md5, "CRC32" => Ok(ChecksumAlgorithm::Crc32),
pub extra: Option<MultipartExtraChecksummer>, "CRC32C" => Ok(ChecksumAlgorithm::Crc32c),
} "SHA1" => Ok(ChecksumAlgorithm::Sha1),
"SHA256" => Ok(ChecksumAlgorithm::Sha256),
pub(crate) enum MultipartExtraChecksummer { _ => Err(Error::bad_request("invalid checksum algorithm")),
Crc32(Crc32),
Crc32c(Crc32c),
Sha1(Sha1),
Sha256(Sha256),
}
impl MultipartChecksummer {
pub(crate) fn init(algo: Option<ChecksumAlgorithm>) -> Self {
Self {
md5: Md5::new(),
extra: match algo {
None => None,
Some(ChecksumAlgorithm::Crc32) => {
Some(MultipartExtraChecksummer::Crc32(Crc32::new()))
}
Some(ChecksumAlgorithm::Crc32c) => {
Some(MultipartExtraChecksummer::Crc32c(Crc32c::default()))
}
Some(ChecksumAlgorithm::Sha1) => Some(MultipartExtraChecksummer::Sha1(Sha1::new())),
Some(ChecksumAlgorithm::Sha256) => {
Some(MultipartExtraChecksummer::Sha256(Sha256::new()))
}
},
}
}
pub(crate) fn update(
&mut self,
etag: &str,
checksum: Option<ChecksumValue>,
) -> Result<(), Error> {
self.md5
.update(&hex::decode(&etag).ok_or_message("invalid etag hex")?);
match (&mut self.extra, checksum) {
(None, _) => (),
(
Some(MultipartExtraChecksummer::Crc32(ref mut crc32)),
Some(ChecksumValue::Crc32(x)),
) => {
crc32.update(&x);
}
(
Some(MultipartExtraChecksummer::Crc32c(ref mut crc32c)),
Some(ChecksumValue::Crc32c(x)),
) => {
crc32c.write(&x);
}
(Some(MultipartExtraChecksummer::Sha1(ref mut sha1)), Some(ChecksumValue::Sha1(x))) => {
sha1.update(&x);
}
(
Some(MultipartExtraChecksummer::Sha256(ref mut sha256)),
Some(ChecksumValue::Sha256(x)),
) => {
sha256.update(&x);
}
(Some(_), b) => {
return Err(Error::internal_error(format!(
"part checksum was not computed correctly, got: {:?}",
b
)))
}
}
Ok(())
}
pub(crate) fn finalize(self) -> (Md5Checksum, Option<ChecksumValue>) {
let md5 = self.md5.finalize()[..].try_into().unwrap();
let extra = match self.extra {
None => None,
Some(MultipartExtraChecksummer::Crc32(crc32)) => {
Some(ChecksumValue::Crc32(u32::to_be_bytes(crc32.finalize())))
}
Some(MultipartExtraChecksummer::Crc32c(crc32c)) => Some(ChecksumValue::Crc32c(
u32::to_be_bytes(u32::try_from(crc32c.finish()).unwrap()),
)),
Some(MultipartExtraChecksummer::Sha1(sha1)) => {
Some(ChecksumValue::Sha1(sha1.finalize()[..].try_into().unwrap()))
}
Some(MultipartExtraChecksummer::Sha256(sha256)) => Some(ChecksumValue::Sha256(
sha256.finalize()[..].try_into().unwrap(),
)),
};
(md5, extra)
} }
} }
// ----
/// Extract the value of the x-amz-checksum-algorithm header /// Extract the value of the x-amz-checksum-algorithm header
pub(crate) fn request_checksum_algorithm( pub fn request_checksum_algorithm(
headers: &HeaderMap<HeaderValue>, headers: &HeaderMap<HeaderValue>,
) -> Result<Option<ChecksumAlgorithm>, Error> { ) -> Result<Option<ChecksumAlgorithm>, Error> {
match headers.get(X_AMZ_CHECKSUM_ALGORITHM) { match headers.get(X_AMZ_CHECKSUM_ALGORITHM) {
None => Ok(None), None => Ok(None),
Some(x) if x == "CRC32" => Ok(Some(ChecksumAlgorithm::Crc32)), Some(x) => parse_checksum_algorithm(x.to_str()?).map(Some),
Some(x) if x == "CRC32C" => Ok(Some(ChecksumAlgorithm::Crc32c)), }
Some(x) if x == "SHA1" => Ok(Some(ChecksumAlgorithm::Sha1)), }
Some(x) if x == "SHA256" => Ok(Some(ChecksumAlgorithm::Sha256)),
pub fn request_trailer_checksum_algorithm(
headers: &HeaderMap<HeaderValue>,
) -> Result<Option<ChecksumAlgorithm>, Error> {
match headers.get(X_AMZ_TRAILER).map(|x| x.to_str()).transpose()? {
None => Ok(None),
Some(x) if x == X_AMZ_CHECKSUM_CRC32 => Ok(Some(ChecksumAlgorithm::Crc32)),
Some(x) if x == X_AMZ_CHECKSUM_CRC32C => Ok(Some(ChecksumAlgorithm::Crc32c)),
Some(x) if x == X_AMZ_CHECKSUM_SHA1 => Ok(Some(ChecksumAlgorithm::Sha1)),
Some(x) if x == X_AMZ_CHECKSUM_SHA256 => Ok(Some(ChecksumAlgorithm::Sha256)),
_ => Err(Error::bad_request("invalid checksum algorithm")), _ => Err(Error::bad_request("invalid checksum algorithm")),
} }
} }
/// Extract the value of any of the x-amz-checksum-* headers /// Extract the value of any of the x-amz-checksum-* headers
pub(crate) fn request_checksum_value( pub fn request_checksum_value(
headers: &HeaderMap<HeaderValue>, headers: &HeaderMap<HeaderValue>,
) -> Result<Option<ChecksumValue>, Error> { ) -> Result<Option<ChecksumValue>, Error> {
let mut ret = vec![]; let mut ret = vec![];
if let Some(crc32_str) = headers.get(X_AMZ_CHECKSUM_CRC32) { if headers.contains_key(X_AMZ_CHECKSUM_CRC32) {
let crc32 = BASE64_STANDARD ret.push(extract_checksum_value(headers, ChecksumAlgorithm::Crc32)?);
.decode(&crc32_str)
.ok()
.and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-crc32 header")?;
ret.push(ChecksumValue::Crc32(crc32))
} }
if let Some(crc32c_str) = headers.get(X_AMZ_CHECKSUM_CRC32C) { if headers.contains_key(X_AMZ_CHECKSUM_CRC32C) {
let crc32c = BASE64_STANDARD ret.push(extract_checksum_value(headers, ChecksumAlgorithm::Crc32c)?);
.decode(&crc32c_str)
.ok()
.and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-crc32c header")?;
ret.push(ChecksumValue::Crc32c(crc32c))
} }
if let Some(sha1_str) = headers.get(X_AMZ_CHECKSUM_SHA1) { if headers.contains_key(X_AMZ_CHECKSUM_SHA1) {
let sha1 = BASE64_STANDARD ret.push(extract_checksum_value(headers, ChecksumAlgorithm::Sha1)?);
.decode(&sha1_str)
.ok()
.and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-sha1 header")?;
ret.push(ChecksumValue::Sha1(sha1))
} }
if let Some(sha256_str) = headers.get(X_AMZ_CHECKSUM_SHA256) { if headers.contains_key(X_AMZ_CHECKSUM_SHA256) {
let sha256 = BASE64_STANDARD ret.push(extract_checksum_value(headers, ChecksumAlgorithm::Sha256)?);
.decode(&sha256_str)
.ok()
.and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-sha256 header")?;
ret.push(ChecksumValue::Sha256(sha256))
} }
if ret.len() > 1 { if ret.len() > 1 {
@@ -340,50 +258,49 @@ pub(crate) fn request_checksum_value(
Ok(ret.pop()) Ok(ret.pop())
} }
/// Checks for the presense of x-amz-checksum-algorithm /// Checks for the presence of x-amz-checksum-algorithm
/// if so extract the corrseponding x-amz-checksum-* value /// if so extract the corresponding x-amz-checksum-* value
pub(crate) fn request_checksum_algorithm_value( pub fn extract_checksum_value(
headers: &HeaderMap<HeaderValue>, headers: &HeaderMap<HeaderValue>,
) -> Result<Option<ChecksumValue>, Error> { algo: ChecksumAlgorithm,
match headers.get(X_AMZ_CHECKSUM_ALGORITHM) { ) -> Result<ChecksumValue, Error> {
Some(x) if x == "CRC32" => { match algo {
ChecksumAlgorithm::Crc32 => {
let crc32 = headers let crc32 = headers
.get(X_AMZ_CHECKSUM_CRC32) .get(X_AMZ_CHECKSUM_CRC32)
.and_then(|x| BASE64_STANDARD.decode(&x).ok()) .and_then(|x| BASE64_STANDARD.decode(&x).ok())
.and_then(|x| x.try_into().ok()) .and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-crc32 header")?; .ok_or_bad_request("invalid x-amz-checksum-crc32 header")?;
Ok(Some(ChecksumValue::Crc32(crc32))) Ok(ChecksumValue::Crc32(crc32))
} }
Some(x) if x == "CRC32C" => { ChecksumAlgorithm::Crc32c => {
let crc32c = headers let crc32c = headers
.get(X_AMZ_CHECKSUM_CRC32C) .get(X_AMZ_CHECKSUM_CRC32C)
.and_then(|x| BASE64_STANDARD.decode(&x).ok()) .and_then(|x| BASE64_STANDARD.decode(&x).ok())
.and_then(|x| x.try_into().ok()) .and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-crc32c header")?; .ok_or_bad_request("invalid x-amz-checksum-crc32c header")?;
Ok(Some(ChecksumValue::Crc32c(crc32c))) Ok(ChecksumValue::Crc32c(crc32c))
} }
Some(x) if x == "SHA1" => { ChecksumAlgorithm::Sha1 => {
let sha1 = headers let sha1 = headers
.get(X_AMZ_CHECKSUM_SHA1) .get(X_AMZ_CHECKSUM_SHA1)
.and_then(|x| BASE64_STANDARD.decode(&x).ok()) .and_then(|x| BASE64_STANDARD.decode(&x).ok())
.and_then(|x| x.try_into().ok()) .and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-sha1 header")?; .ok_or_bad_request("invalid x-amz-checksum-sha1 header")?;
Ok(Some(ChecksumValue::Sha1(sha1))) Ok(ChecksumValue::Sha1(sha1))
} }
Some(x) if x == "SHA256" => { ChecksumAlgorithm::Sha256 => {
let sha256 = headers let sha256 = headers
.get(X_AMZ_CHECKSUM_SHA256) .get(X_AMZ_CHECKSUM_SHA256)
.and_then(|x| BASE64_STANDARD.decode(&x).ok()) .and_then(|x| BASE64_STANDARD.decode(&x).ok())
.and_then(|x| x.try_into().ok()) .and_then(|x| x.try_into().ok())
.ok_or_bad_request("invalid x-amz-checksum-sha256 header")?; .ok_or_bad_request("invalid x-amz-checksum-sha256 header")?;
Ok(Some(ChecksumValue::Sha256(sha256))) Ok(ChecksumValue::Sha256(sha256))
} }
Some(_) => Err(Error::bad_request("invalid x-amz-checksum-algorithm")),
None => Ok(None),
} }
} }
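
Each arm above follows the same shape: look up the algorithm's `x-amz-checksum-*` header and base64-decode it into a fixed-size array. A reduced, self-contained sketch of that decode step for CRC32 (the `decode_crc32_header` helper is hypothetical; `BASE64_STANDARD` is the same standard engine this module already uses):

    use base64::engine::general_purpose::STANDARD as BASE64_STANDARD;
    use base64::Engine;

    /// A CRC32 checksum header is the base64 encoding of exactly 4 big-endian bytes.
    fn decode_crc32_header(value: &str) -> Option<[u8; 4]> {
        BASE64_STANDARD
            .decode(value)
            .ok()
            .and_then(|bytes| bytes.try_into().ok())
    }

    fn main() {
        // base64 of the 4 bytes 0x00 0x00 0x00 0x2A:
        assert_eq!(decode_crc32_header("AAAAKg=="), Some([0, 0, 0, 42]));
        // Wrong length and invalid base64 both yield None:
        assert_eq!(decode_crc32_header("AAA="), None);
        assert_eq!(decode_crc32_header("not base64!"), None);
    }
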
pub(crate) fn add_checksum_response_headers( pub fn add_checksum_response_headers(
checksum: &Option<ChecksumValue>, checksum: &Option<ChecksumValue>,
mut resp: http::response::Builder, mut resp: http::response::Builder,
) -> http::response::Builder { ) -> http::response::Builder {


@@ -1,4 +1,4 @@
use err_derive::Error; use thiserror::Error;
use crate::common_error::CommonError; use crate::common_error::CommonError;
pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInternalError}; pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInternalError};
@@ -6,18 +6,22 @@ pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInterna
/// Errors of this crate /// Errors of this crate
#[derive(Debug, Error)] #[derive(Debug, Error)]
pub enum Error { pub enum Error {
#[error(display = "{}", _0)] #[error("{0}")]
/// Error from common error /// Error from common error
Common(CommonError), Common(CommonError),
/// Authorization Header Malformed /// Authorization Header Malformed
#[error(display = "Authorization header malformed, unexpected scope: {}", _0)] #[error("Authorization header malformed, unexpected scope: {0}")]
AuthorizationHeaderMalformed(String), AuthorizationHeaderMalformed(String),
// Category: bad request // Category: bad request
/// The request contained an invalid UTF-8 sequence in its path or in other parameters /// The request contained an invalid UTF-8 sequence in its path or in other parameters
#[error(display = "Invalid UTF-8: {}", _0)] #[error("Invalid UTF-8: {0}")]
InvalidUtf8Str(#[error(source)] std::str::Utf8Error), InvalidUtf8Str(#[from] std::str::Utf8Error),
/// The provided digest (checksum) value was invalid
#[error("Invalid digest: {0}")]
InvalidDigest(String),
} }
impl<T> From<T> for Error impl<T> From<T> for Error


@ -0,0 +1,118 @@
use chrono::{DateTime, Utc};
use hmac::{Hmac, Mac};
use sha2::Sha256;
use hyper::header::HeaderName;
use hyper::{body::Incoming as IncomingBody, Request};
use garage_model::garage::Garage;
use garage_model::key_table::Key;
use garage_util::data::{sha256sum, Hash};
use error::*;
pub mod body;
pub mod checksum;
pub mod error;
pub mod payload;
pub mod streaming;
pub const SHORT_DATE: &str = "%Y%m%d";
pub const LONG_DATETIME: &str = "%Y%m%dT%H%M%SZ";
// ---- Constants used in AWSv4 signatures ----
pub const X_AMZ_ALGORITHM: HeaderName = HeaderName::from_static("x-amz-algorithm");
pub const X_AMZ_CREDENTIAL: HeaderName = HeaderName::from_static("x-amz-credential");
pub const X_AMZ_DATE: HeaderName = HeaderName::from_static("x-amz-date");
pub const X_AMZ_EXPIRES: HeaderName = HeaderName::from_static("x-amz-expires");
pub const X_AMZ_SIGNEDHEADERS: HeaderName = HeaderName::from_static("x-amz-signedheaders");
pub const X_AMZ_SIGNATURE: HeaderName = HeaderName::from_static("x-amz-signature");
pub const X_AMZ_CONTENT_SHA256: HeaderName = HeaderName::from_static("x-amz-content-sha256");
pub const X_AMZ_TRAILER: HeaderName = HeaderName::from_static("x-amz-trailer");
/// Result of `sha256("")`
pub(crate) const EMPTY_STRING_HEX_DIGEST: &str =
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
// Signature calculation algorithm
pub const AWS4_HMAC_SHA256: &str = "AWS4-HMAC-SHA256";
type HmacSha256 = Hmac<Sha256>;
// Possible values for x-amz-content-sha256, in addition to the actual sha256
pub const UNSIGNED_PAYLOAD: &str = "UNSIGNED-PAYLOAD";
pub const STREAMING_UNSIGNED_PAYLOAD_TRAILER: &str = "STREAMING-UNSIGNED-PAYLOAD-TRAILER";
pub const STREAMING_AWS4_HMAC_SHA256_PAYLOAD: &str = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD";
// Used in the computation of StringToSign
pub const AWS4_HMAC_SHA256_PAYLOAD: &str = "AWS4-HMAC-SHA256-PAYLOAD";
// ---- enums to describe stuff going on in signature calculation ----
#[derive(Debug)]
pub enum ContentSha256Header {
UnsignedPayload,
Sha256Checksum(Hash),
StreamingPayload { trailer: bool, signed: bool },
}
// ---- top-level functions ----
pub struct VerifiedRequest {
pub request: Request<streaming::ReqBody>,
pub access_key: Key,
pub content_sha256_header: ContentSha256Header,
}
pub async fn verify_request(
garage: &Garage,
mut req: Request<IncomingBody>,
service: &'static str,
) -> Result<VerifiedRequest, Error> {
let checked_signature = payload::check_payload_signature(&garage, &mut req, service).await?;
let request = streaming::parse_streaming_body(
req,
&checked_signature,
&garage.config.s3_api.s3_region,
service,
)?;
let access_key = checked_signature
.key
.ok_or_else(|| Error::forbidden("Garage does not support anonymous access yet"))?;
Ok(VerifiedRequest {
request,
access_key,
content_sha256_header: checked_signature.content_sha256_header,
})
}
pub fn signing_hmac(
datetime: &DateTime<Utc>,
secret_key: &str,
region: &str,
service: &str,
) -> Result<HmacSha256, crypto_common::InvalidLength> {
let secret = String::from("AWS4") + secret_key;
let mut date_hmac = HmacSha256::new_from_slice(secret.as_bytes())?;
date_hmac.update(datetime.format(SHORT_DATE).to_string().as_bytes());
let mut region_hmac = HmacSha256::new_from_slice(&date_hmac.finalize().into_bytes())?;
region_hmac.update(region.as_bytes());
let mut service_hmac = HmacSha256::new_from_slice(&region_hmac.finalize().into_bytes())?;
service_hmac.update(service.as_bytes());
let mut signing_hmac = HmacSha256::new_from_slice(&service_hmac.finalize().into_bytes())?;
signing_hmac.update(b"aws4_request");
let hmac = HmacSha256::new_from_slice(&signing_hmac.finalize().into_bytes())?;
Ok(hmac)
}
pub fn compute_scope(datetime: &DateTime<Utc>, region: &str, service: &str) -> String {
format!(
"{}/{}/{}/aws4_request",
datetime.format(SHORT_DATE),
region,
service
)
}
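For orientation, a sketch of what these two helpers produce, assuming chrono's `TimeZone::with_ymd_and_hms` API and a hypothetical region name. The nested HMACs in `signing_hmac` mirror the standard SigV4 key derivation:

```rust
use chrono::{TimeZone, Utc};

// Hypothetical date and region, for illustration only.
let datetime = Utc.with_ymd_and_hms(2026, 4, 15, 0, 0, 0).unwrap();
let scope = compute_scope(&datetime, "garage", "s3");
assert_eq!(scope, "20260415/garage/s3/aws4_request");

// The key built by signing_hmac is equivalent to the chain:
//   kDate    = HMAC("AWS4" + secret_key, "20260415")
//   kRegion  = HMAC(kDate, "garage")
//   kService = HMAC(kRegion, "s3")
//   kSigning = HMAC(kService, "aws4_request")
let _hmac = signing_hmac(&datetime, "secret", "garage", "s3").unwrap();
```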


@ -13,23 +13,9 @@ use garage_util::data::Hash;
use garage_model::garage::Garage; use garage_model::garage::Garage;
use garage_model::key_table::*; use garage_model::key_table::*;
use super::LONG_DATETIME; use super::*;
use super::{compute_scope, signing_hmac};
use crate::encoding::uri_encode; use crate::encoding::uri_encode;
use crate::signature::error::*;
pub const X_AMZ_ALGORITHM: HeaderName = HeaderName::from_static("x-amz-algorithm");
pub const X_AMZ_CREDENTIAL: HeaderName = HeaderName::from_static("x-amz-credential");
pub const X_AMZ_DATE: HeaderName = HeaderName::from_static("x-amz-date");
pub const X_AMZ_EXPIRES: HeaderName = HeaderName::from_static("x-amz-expires");
pub const X_AMZ_SIGNEDHEADERS: HeaderName = HeaderName::from_static("x-amz-signedheaders");
pub const X_AMZ_SIGNATURE: HeaderName = HeaderName::from_static("x-amz-signature");
pub const X_AMZ_CONTENT_SH256: HeaderName = HeaderName::from_static("x-amz-content-sha256");
pub const AWS4_HMAC_SHA256: &str = "AWS4-HMAC-SHA256";
pub const UNSIGNED_PAYLOAD: &str = "UNSIGNED-PAYLOAD";
pub const STREAMING_AWS4_HMAC_SHA256_PAYLOAD: &str = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD";
pub type QueryMap = HeaderMap<QueryValue>; pub type QueryMap = HeaderMap<QueryValue>;
pub struct QueryValue { pub struct QueryValue {
@ -39,16 +25,23 @@ pub struct QueryValue {
value: String, value: String,
} }
#[derive(Debug)]
pub struct CheckedSignature {
pub key: Option<Key>,
pub content_sha256_header: ContentSha256Header,
pub signature_header: Option<String>,
}
pub async fn check_payload_signature( pub async fn check_payload_signature(
garage: &Garage, garage: &Garage,
request: &mut Request<IncomingBody>, request: &mut Request<IncomingBody>,
service: &'static str, service: &'static str,
) -> Result<(Option<Key>, Option<Hash>), Error> { ) -> Result<CheckedSignature, Error> {
let query = parse_query_map(request.uri())?; let query = parse_query_map(request.uri())?;
if query.contains_key(&X_AMZ_ALGORITHM) { if query.contains_key(&X_AMZ_ALGORITHM) {
// We check for presigned-URL-style authentification first, because // We check for presigned-URL-style authentication first, because
// the browser or someting else could inject an Authorization header // the browser or something else could inject an Authorization header
// that is totally unrelated to AWS signatures. // that is totally unrelated to AWS signatures.
check_presigned_signature(garage, service, request, query).await check_presigned_signature(garage, service, request, query).await
} else if request.headers().contains_key(AUTHORIZATION) { } else if request.headers().contains_key(AUTHORIZATION) {
@ -57,17 +50,46 @@ pub async fn check_payload_signature(
// Unsigned (anonymous) request // Unsigned (anonymous) request
let content_sha256 = request let content_sha256 = request
.headers() .headers()
.get("x-amz-content-sha256") .get(X_AMZ_CONTENT_SHA256)
.filter(|c| c.as_bytes() != UNSIGNED_PAYLOAD.as_bytes()); .map(|x| x.to_str())
if let Some(content_sha256) = content_sha256 { .transpose()?;
let sha256 = hex::decode(content_sha256) Ok(CheckedSignature {
.ok() key: None,
.and_then(|bytes| Hash::try_from(&bytes)) content_sha256_header: parse_x_amz_content_sha256(content_sha256)?,
.ok_or_bad_request("Invalid content sha256 hash")?; signature_header: None,
Ok((None, Some(sha256))) })
}
}
fn parse_x_amz_content_sha256(header: Option<&str>) -> Result<ContentSha256Header, Error> {
let header = match header {
Some(x) => x,
None => return Ok(ContentSha256Header::UnsignedPayload),
};
if header == UNSIGNED_PAYLOAD {
Ok(ContentSha256Header::UnsignedPayload)
} else if let Some(rest) = header.strip_prefix("STREAMING-") {
let (trailer, algo) = if let Some(rest2) = rest.strip_suffix("-TRAILER") {
(true, rest2)
} else { } else {
Ok((None, None)) (false, rest)
} };
let signed = match algo {
AWS4_HMAC_SHA256_PAYLOAD => true,
UNSIGNED_PAYLOAD => false,
_ => {
return Err(Error::bad_request(
"invalid or unsupported x-amz-content-sha256",
))
}
};
Ok(ContentSha256Header::StreamingPayload { trailer, signed })
} else {
let sha256 = hex::decode(header)
.ok()
.and_then(|bytes| Hash::try_from(&bytes))
.ok_or_bad_request("Invalid content sha256 hash")?;
Ok(ContentSha256Header::Sha256Checksum(sha256))
} }
} }
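The accepted values of `x-amz-content-sha256` and the variants they map to, per the parser above (illustrative summary):

```rust
// Per parse_x_amz_content_sha256:
//   (header absent)                              -> UnsignedPayload
//   "UNSIGNED-PAYLOAD"                           -> UnsignedPayload
//   "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"         -> StreamingPayload { signed: true,  trailer: false }
//   "STREAMING-UNSIGNED-PAYLOAD-TRAILER"         -> StreamingPayload { signed: false, trailer: true }
//   "STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER" -> StreamingPayload { signed: true,  trailer: true }
//   64 hex characters                            -> Sha256Checksum(<decoded hash>)
//   anything else                                -> 400 Bad Request
```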
@ -76,13 +98,13 @@ async fn check_standard_signature(
service: &'static str, service: &'static str,
request: &Request<IncomingBody>, request: &Request<IncomingBody>,
query: QueryMap, query: QueryMap,
) -> Result<(Option<Key>, Option<Hash>), Error> { ) -> Result<CheckedSignature, Error> {
let authorization = Authorization::parse_header(request.headers())?; let authorization = Authorization::parse_header(request.headers())?;
// Verify that all necessary request headers are included in signed_headers // Verify that all necessary request headers are included in signed_headers
// The following must be included for all signatures: // The following must be included for all signatures:
// - the Host header (mandatory) // - the Host header (mandatory)
// - all x-amz-* headers used in the request // - all x-amz-* headers used in the request (except x-amz-content-sha256)
// AWS also indicates that the Content-Type header should be signed if // AWS also indicates that the Content-Type header should be signed if
// it is used, but Minio client doesn't sign it so we don't check it for compatibility. // it is used, but Minio client doesn't sign it so we don't check it for compatibility.
let signed_headers = split_signed_headers(&authorization)?; let signed_headers = split_signed_headers(&authorization)?;
@ -108,18 +130,13 @@ async fn check_standard_signature(
let key = verify_v4(garage, service, &authorization, string_to_sign.as_bytes()).await?; let key = verify_v4(garage, service, &authorization, string_to_sign.as_bytes()).await?;
let content_sha256 = if authorization.content_sha256 == UNSIGNED_PAYLOAD { let content_sha256_header = parse_x_amz_content_sha256(Some(&authorization.content_sha256))?;
None
} else if authorization.content_sha256 == STREAMING_AWS4_HMAC_SHA256_PAYLOAD {
let bytes = hex::decode(authorization.signature).ok_or_bad_request("Invalid signature")?;
Some(Hash::try_from(&bytes).ok_or_bad_request("Invalid signature")?)
} else {
let bytes = hex::decode(authorization.content_sha256)
.ok_or_bad_request("Invalid content sha256 hash")?;
Some(Hash::try_from(&bytes).ok_or_bad_request("Invalid content sha256 hash")?)
};
Ok((Some(key), content_sha256)) Ok(CheckedSignature {
key: Some(key),
content_sha256_header,
signature_header: Some(authorization.signature),
})
} }
async fn check_presigned_signature( async fn check_presigned_signature(
@ -127,14 +144,14 @@ async fn check_presigned_signature(
service: &'static str, service: &'static str,
request: &mut Request<IncomingBody>, request: &mut Request<IncomingBody>,
mut query: QueryMap, mut query: QueryMap,
) -> Result<(Option<Key>, Option<Hash>), Error> { ) -> Result<CheckedSignature, Error> {
let algorithm = query.get(&X_AMZ_ALGORITHM).unwrap(); let algorithm = query.get(&X_AMZ_ALGORITHM).unwrap();
let authorization = Authorization::parse_presigned(&algorithm.value, &query)?; let authorization = Authorization::parse_presigned(&algorithm.value, &query)?;
// Verify that all necessary request headers are included in signed_headers // Verify that all necessary request headers are included in signed_headers
// For AWSv4 pre-signed URLs, the following must be incldued: // For AWSv4 pre-signed URLs, the following must be included:
// - the Host header (mandatory) // - the Host header (mandatory)
// - all x-amz-* headers used in the request // - all x-amz-* headers used in the request (except x-amz-content-sha256)
let signed_headers = split_signed_headers(&authorization)?; let signed_headers = split_signed_headers(&authorization)?;
verify_signed_headers(request.headers(), &signed_headers)?; verify_signed_headers(request.headers(), &signed_headers)?;
@ -193,7 +210,11 @@ async fn check_presigned_signature(
// Presigned URLs always use UNSIGNED-PAYLOAD, // Presigned URLs always use UNSIGNED-PAYLOAD,
// so there is no sha256 hash to return. // so there is no sha256 hash to return.
Ok((Some(key), None)) Ok(CheckedSignature {
key: Some(key),
content_sha256_header: ContentSha256Header::UnsignedPayload,
signature_header: Some(authorization.signature),
})
} }
pub fn parse_query_map(uri: &http::uri::Uri) -> Result<QueryMap, Error> { pub fn parse_query_map(uri: &http::uri::Uri) -> Result<QueryMap, Error> {
@ -247,7 +268,9 @@ fn verify_signed_headers(headers: &HeaderMap, signed_headers: &[HeaderName]) ->
return Err(Error::bad_request("Header `Host` should be signed")); return Err(Error::bad_request("Header `Host` should be signed"));
} }
for (name, _) in headers.iter() { for (name, _) in headers.iter() {
if name.as_str().starts_with("x-amz-") { // Enforce signature of all x-amz-* headers, except x-amz-content-sha256
// because it is included in the canonical request in all cases
if name.as_str().starts_with("x-amz-") && name != X_AMZ_CONTENT_SHA256 {
if !signed_headers.contains(name) { if !signed_headers.contains(name) {
return Err(Error::bad_request(format!( return Err(Error::bad_request(format!(
"Header `{}` should be signed", "Header `{}` should be signed",
@ -306,7 +329,7 @@ pub fn canonical_request(
// Note that there is also the issue of path normalization, which I hope is unrelated to the // Note that there is also the issue of path normalization, which I hope is unrelated to the
// one of URI-encoding. At least in aws-sigv4 both parameters can be set independently, // one of URI-encoding. At least in aws-sigv4 both parameters can be set independently,
// and rusoto_signature does not seem to do any effective path normalization, even though // and rusoto_signature does not seem to do any effective path normalization, even though
// it mentions it in the comments (same link to the souce code as above). // it mentions it in the comments (same link to the source code as above).
// We make the explicit choice of NOT normalizing paths in the K2V API because doing so // We make the explicit choice of NOT normalizing paths in the K2V API because doing so
// would make non-normalized paths invalid K2V partition keys, and we don't want that. // would make non-normalized paths invalid K2V partition keys, and we don't want that.
let canonical_uri: std::borrow::Cow<str> = if service != "s3" { let canonical_uri: std::borrow::Cow<str> = if service != "s3" {
@ -396,7 +419,7 @@ pub async fn verify_v4(
// ============ Authorization header, or X-Amz-* query params ========= // ============ Authorization header, or X-Amz-* query params =========
pub struct Authorization { pub struct Authorization {
key_id: String, pub key_id: String,
scope: String, scope: String,
signed_headers: String, signed_headers: String,
signature: String, signature: String,
@ -405,7 +428,7 @@ pub struct Authorization {
} }
impl Authorization { impl Authorization {
fn parse_header(headers: &HeaderMap) -> Result<Self, Error> { pub fn parse_header(headers: &HeaderMap) -> Result<Self, Error> {
let authorization = headers let authorization = headers
.get(AUTHORIZATION) .get(AUTHORIZATION)
.ok_or_bad_request("Missing authorization header")? .ok_or_bad_request("Missing authorization header")?
@ -442,13 +465,12 @@ impl Authorization {
.to_string(); .to_string();
let content_sha256 = headers let content_sha256 = headers
.get(X_AMZ_CONTENT_SH256) .get(X_AMZ_CONTENT_SHA256)
.ok_or_bad_request("Missing X-Amz-Content-Sha256 field")?; .ok_or_bad_request("Missing X-Amz-Content-Sha256 field")?;
let date = headers let date = headers
.get(X_AMZ_DATE) .get(X_AMZ_DATE)
.ok_or_bad_request("Missing X-Amz-Date field") .ok_or_bad_request("Missing X-Amz-Date field")?
.map_err(Error::from)?
.to_str()?; .to_str()?;
let date = parse_date(date)?; let date = parse_date(date)?;
@ -518,7 +540,7 @@ impl Authorization {
}) })
} }
pub(crate) fn parse_form(params: &HeaderMap) -> Result<Self, Error> { pub fn parse_form(params: &HeaderMap) -> Result<Self, Error> {
let algorithm = params let algorithm = params
.get(X_AMZ_ALGORITHM) .get(X_AMZ_ALGORITHM)
.ok_or_bad_request("Missing X-Amz-Algorithm header")? .ok_or_bad_request("Missing X-Amz-Algorithm header")?


@ -0,0 +1,618 @@
use std::pin::Pin;
use std::sync::Mutex;
use chrono::{DateTime, NaiveDateTime, TimeZone, Utc};
use futures::prelude::*;
use futures::task;
use hmac::Mac;
use http::header::{HeaderMap, HeaderValue, CONTENT_ENCODING};
use hyper::body::{Bytes, Frame, Incoming as IncomingBody};
use hyper::Request;
use garage_util::data::Hash;
use super::*;
use crate::helpers::body_stream;
use crate::signature::checksum::*;
use crate::signature::payload::CheckedSignature;
pub use crate::signature::body::ReqBody;
pub fn parse_streaming_body(
mut req: Request<IncomingBody>,
checked_signature: &CheckedSignature,
region: &str,
service: &str,
) -> Result<Request<ReqBody>, Error> {
debug!(
"Content signature mode: {:?}",
checked_signature.content_sha256_header
);
match checked_signature.content_sha256_header {
ContentSha256Header::StreamingPayload { signed, trailer } => {
// Sanity checks
if !signed && !trailer {
return Err(Error::bad_request(
"STREAMING-UNSIGNED-PAYLOAD without trailer is not a valid combination",
));
}
// Remove the aws-chunked component in the content-encoding: header
// Note: this header is not properly sent by minio client, so don't fail
// if it is absent from the request.
if let Some(content_encoding) = req.headers_mut().remove(CONTENT_ENCODING) {
if let Some(rest) = content_encoding.as_bytes().strip_prefix(b"aws-chunked,") {
req.headers_mut()
.insert(CONTENT_ENCODING, HeaderValue::from_bytes(rest).unwrap());
} else if content_encoding != "aws-chunked" {
return Err(Error::bad_request(
"content-encoding does not contain aws-chunked for STREAMING-*-PAYLOAD",
));
}
}
// If trailer header is announced, add the calculation of the requested checksum
let mut checksummer = Checksummer::init(&Default::default(), false);
let trailer_algorithm = if trailer {
let algo = Some(
request_trailer_checksum_algorithm(req.headers())?
.ok_or_bad_request("Missing x-amz-trailer header")?,
);
checksummer = checksummer.add(algo);
algo
} else {
None
};
// For signed variants, determine signing parameters
let sign_params = if signed {
let signature = checked_signature
.signature_header
.clone()
.ok_or_bad_request("No signature provided")?;
let signature = hex::decode(signature)
.ok()
.and_then(|bytes| Hash::try_from(&bytes))
.ok_or_bad_request("Invalid signature")?;
let secret_key = checked_signature
.key
.as_ref()
.ok_or_bad_request("Cannot sign streaming payload without signing key")?
.state
.as_option()
.ok_or_internal_error("Deleted key state")?
.secret_key
.to_string();
let date = req
.headers()
.get(X_AMZ_DATE)
.ok_or_bad_request("Missing X-Amz-Date field")?
.to_str()?;
let date: NaiveDateTime = NaiveDateTime::parse_from_str(date, LONG_DATETIME)
.ok_or_bad_request("Invalid date")?;
let date: DateTime<Utc> = Utc.from_utc_datetime(&date);
let scope = compute_scope(&date, region, service);
let signing_hmac =
crate::signature::signing_hmac(&date, &secret_key, region, service)
.ok_or_internal_error("Unable to build signing HMAC")?;
Some(SignParams {
datetime: date,
scope,
signing_hmac,
previous_signature: signature,
})
} else {
None
};
Ok(req.map(move |body| {
let stream = body_stream::<_, Error>(body);
let signed_payload_stream =
StreamingPayloadStream::new(stream, sign_params, trailer).map_err(Error::from);
ReqBody {
stream: Mutex::new(signed_payload_stream.boxed()),
checksummer,
expected_checksums: Default::default(),
trailer_algorithm,
}
}))
}
_ => Ok(req.map(|body| {
let expected_checksums = ExpectedChecksums {
sha256: match &checked_signature.content_sha256_header {
ContentSha256Header::Sha256Checksum(sha256) => Some(*sha256),
_ => None,
},
..Default::default()
};
let checksummer = Checksummer::init(&expected_checksums, false);
let stream = http_body_util::BodyStream::new(body).map_err(Error::from);
ReqBody {
stream: Mutex::new(stream.boxed()),
checksummer,
expected_checksums,
trailer_algorithm: None,
}
})),
}
}
fn compute_streaming_payload_signature(
signing_hmac: &HmacSha256,
date: DateTime<Utc>,
scope: &str,
previous_signature: Hash,
content_sha256: Hash,
) -> Result<Hash, StreamingPayloadError> {
let string_to_sign = [
AWS4_HMAC_SHA256_PAYLOAD,
&date.format(LONG_DATETIME).to_string(),
scope,
&hex::encode(previous_signature),
EMPTY_STRING_HEX_DIGEST,
&hex::encode(content_sha256),
]
.join("\n");
let mut hmac = signing_hmac.clone();
hmac.update(string_to_sign.as_bytes());
Hash::try_from(&hmac.finalize().into_bytes())
.ok_or_else(|| StreamingPayloadError::Message("Could not build signature".into()))
}
fn compute_streaming_trailer_signature(
signing_hmac: &HmacSha256,
date: DateTime<Utc>,
scope: &str,
previous_signature: Hash,
trailer_sha256: Hash,
) -> Result<Hash, StreamingPayloadError> {
let string_to_sign = [
AWS4_HMAC_SHA256_PAYLOAD,
&date.format(LONG_DATETIME).to_string(),
scope,
&hex::encode(previous_signature),
&hex::encode(trailer_sha256),
]
.join("\n");
let mut hmac = signing_hmac.clone();
hmac.update(string_to_sign.as_bytes());
Hash::try_from(&hmac.finalize().into_bytes())
.ok_or_else(|| StreamingPayloadError::Message("Could not build signature".into()))
}
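How these two helpers are chained (a sketch, grounded in `parse_streaming_body` above): the seed `previous_signature` is the request signature from the `Authorization` header, and each chunk must extend the chain:

```rust
// For chunk n with body b_n (see compute_streaming_payload_signature):
//   sig_n = HMAC(signing_key,
//       "AWS4-HMAC-SHA256-PAYLOAD\n" + <long datetime> + "\n" + <scope> + "\n"
//       + hex(sig_{n-1}) + "\n" + sha256("") + "\n" + hex(sha256(b_n)))
// Reordering, dropping, or altering any chunk therefore invalidates
// every signature that follows it.
```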
mod payload {
use http::{HeaderName, HeaderValue};
use garage_util::data::Hash;
use nom::bytes::streaming::{tag, take_while};
use nom::character::streaming::hex_digit1;
use nom::combinator::{map_res, opt};
use nom::number::streaming::hex_u32;
macro_rules! try_parse {
($expr:expr) => {
$expr.map_err(|e| e.map(Error::Parser))?
};
}
pub enum Error<I> {
Parser(nom::error::Error<I>),
BadSignature,
}
impl<I> Error<I> {
pub fn description(&self) -> &str {
match *self {
Error::Parser(ref e) => e.code.description(),
Error::BadSignature => "Bad signature",
}
}
}
#[derive(Debug, Clone)]
pub struct ChunkHeader {
pub size: usize,
pub signature: Option<Hash>,
}
impl ChunkHeader {
pub fn parse_signed(input: &[u8]) -> nom::IResult<&[u8], Self, Error<&[u8]>> {
let (input, size) = try_parse!(hex_u32(input));
let (input, _) = try_parse!(tag(";")(input));
let (input, _) = try_parse!(tag("chunk-signature=")(input));
let (input, data) = try_parse!(map_res(hex_digit1, hex::decode)(input));
let signature = Hash::try_from(&data).ok_or(nom::Err::Failure(Error::BadSignature))?;
let (input, _) = try_parse!(tag("\r\n")(input));
let header = ChunkHeader {
size: size as usize,
signature: Some(signature),
};
Ok((input, header))
}
pub fn parse_unsigned(input: &[u8]) -> nom::IResult<&[u8], Self, Error<&[u8]>> {
let (input, size) = try_parse!(hex_u32(input));
let (input, _) = try_parse!(tag("\r\n")(input));
let header = ChunkHeader {
size: size as usize,
signature: None,
};
Ok((input, header))
}
}
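The wire framing these parsers accept, shown informally (chunk sizes are hexadecimal, signatures are 64 hex characters):

```rust
// Signed chunk (ChunkHeader::parse_signed):
//   400;chunk-signature=<64 hex chars>\r\n
//   <0x400 bytes of data>\r\n
// Unsigned chunk (ChunkHeader::parse_unsigned):
//   400\r\n
//   <0x400 bytes of data>\r\n
// A zero-sized chunk marks the end of the payload.
```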
#[derive(Debug, Clone)]
pub struct TrailerChunk {
pub header_name: HeaderName,
pub header_value: HeaderValue,
pub signature: Option<Hash>,
}
impl TrailerChunk {
fn parse_content(input: &[u8]) -> nom::IResult<&[u8], Self, Error<&[u8]>> {
let (input, header_name) = try_parse!(map_res(
take_while(|c: u8| c.is_ascii_alphanumeric() || c == b'-'),
HeaderName::from_bytes
)(input));
let (input, _) = try_parse!(tag(b":")(input));
let (input, header_value) = try_parse!(map_res(
take_while(|c: u8| c.is_ascii_alphanumeric() || b"+/=".contains(&c)),
HeaderValue::from_bytes
)(input));
// Possible '\n' after the header value, depends on clients
// https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
let (input, _) = try_parse!(opt(tag(b"\n"))(input));
let (input, _) = try_parse!(tag(b"\r\n")(input));
Ok((
input,
TrailerChunk {
header_name,
header_value,
signature: None,
},
))
}
pub fn parse_signed(input: &[u8]) -> nom::IResult<&[u8], Self, Error<&[u8]>> {
let (input, trailer) = Self::parse_content(input)?;
let (input, _) = try_parse!(tag(b"x-amz-trailer-signature:")(input));
let (input, data) = try_parse!(map_res(hex_digit1, hex::decode)(input));
let signature = Hash::try_from(&data).ok_or(nom::Err::Failure(Error::BadSignature))?;
let (input, _) = try_parse!(tag(b"\r\n")(input));
Ok((
input,
TrailerChunk {
signature: Some(signature),
..trailer
},
))
}
pub fn parse_unsigned(input: &[u8]) -> nom::IResult<&[u8], Self, Error<&[u8]>> {
let (input, trailer) = Self::parse_content(input)?;
let (input, _) = try_parse!(tag(b"\r\n")(input));
Ok((input, trailer))
}
}
}
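Likewise the trailer framing, here for a CRC32 trailer announced in `x-amz-trailer` (illustrative value):

```rust
// Unsigned trailer (TrailerChunk::parse_unsigned):
//   x-amz-checksum-crc32:3q2+7w==\r\n
//   \r\n
// Signed trailer (TrailerChunk::parse_signed) instead ends with:
//   x-amz-trailer-signature:<64 hex chars>\r\n
```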
#[derive(Debug)]
pub enum StreamingPayloadError {
Stream(Error),
InvalidSignature,
Message(String),
}
impl StreamingPayloadError {
fn message(msg: &str) -> Self {
StreamingPayloadError::Message(msg.into())
}
}
impl From<StreamingPayloadError> for Error {
fn from(err: StreamingPayloadError) -> Self {
match err {
StreamingPayloadError::Stream(e) => e,
StreamingPayloadError::InvalidSignature => {
Error::bad_request("Invalid payload signature")
}
StreamingPayloadError::Message(e) => {
Error::bad_request(format!("Chunk format error: {}", e))
}
}
}
}
impl<I> From<payload::Error<I>> for StreamingPayloadError {
fn from(err: payload::Error<I>) -> Self {
Self::message(err.description())
}
}
impl<I> From<nom::error::Error<I>> for StreamingPayloadError {
fn from(err: nom::error::Error<I>) -> Self {
Self::message(err.code.description())
}
}
enum StreamingPayloadChunk {
Chunk {
header: payload::ChunkHeader,
data: Bytes,
},
Trailer(payload::TrailerChunk),
}
struct SignParams {
datetime: DateTime<Utc>,
scope: String,
signing_hmac: HmacSha256,
previous_signature: Hash,
}
#[pin_project::pin_project]
pub struct StreamingPayloadStream<S>
where
S: Stream<Item = Result<Bytes, Error>>,
{
#[pin]
stream: S,
buf: bytes::BytesMut,
signing: Option<SignParams>,
has_trailer: bool,
done: bool,
}
impl<S> StreamingPayloadStream<S>
where
S: Stream<Item = Result<Bytes, Error>>,
{
fn new(stream: S, signing: Option<SignParams>, has_trailer: bool) -> Self {
Self {
stream,
buf: bytes::BytesMut::new(),
signing,
has_trailer,
done: false,
}
}
fn parse_next(
input: &[u8],
is_signed: bool,
has_trailer: bool,
) -> nom::IResult<&[u8], StreamingPayloadChunk, StreamingPayloadError> {
use nom::bytes::streaming::{tag, take};
macro_rules! try_parse {
($expr:expr) => {
$expr.map_err(nom::Err::convert)?
};
}
let (input, header) = if is_signed {
try_parse!(payload::ChunkHeader::parse_signed(input))
} else {
try_parse!(payload::ChunkHeader::parse_unsigned(input))
};
// 0-sized chunk is the last
if header.size == 0 {
if has_trailer {
let (input, trailer) = if is_signed {
try_parse!(payload::TrailerChunk::parse_signed(input))
} else {
try_parse!(payload::TrailerChunk::parse_unsigned(input))
};
return Ok((input, StreamingPayloadChunk::Trailer(trailer)));
} else {
return Ok((
input,
StreamingPayloadChunk::Chunk {
header,
data: Bytes::new(),
},
));
}
}
let (input, data) = try_parse!(take::<_, _, nom::error::Error<_>>(header.size)(input));
let (input, _) = try_parse!(tag::<_, _, nom::error::Error<_>>("\r\n")(input));
let data = Bytes::from(data.to_vec());
Ok((input, StreamingPayloadChunk::Chunk { header, data }))
}
}
impl<S> Stream for StreamingPayloadStream<S>
where
S: Stream<Item = Result<Bytes, Error>> + Unpin,
{
type Item = Result<Frame<Bytes>, StreamingPayloadError>;
fn poll_next(
self: Pin<&mut Self>,
cx: &mut task::Context<'_>,
) -> task::Poll<Option<Self::Item>> {
use std::task::Poll;
let mut this = self.project();
if *this.done {
return Poll::Ready(None);
}
loop {
let (input, payload) =
match Self::parse_next(this.buf, this.signing.is_some(), *this.has_trailer) {
Ok(res) => res,
Err(nom::Err::Incomplete(_)) => {
match futures::ready!(this.stream.as_mut().poll_next(cx)) {
Some(Ok(bytes)) => {
this.buf.extend(bytes);
continue;
}
Some(Err(e)) => {
return Poll::Ready(Some(Err(StreamingPayloadError::Stream(e))))
}
None => {
return Poll::Ready(Some(Err(StreamingPayloadError::message(
"Unexpected EOF",
))));
}
}
}
Err(nom::Err::Error(e)) | Err(nom::Err::Failure(e)) => {
return Poll::Ready(Some(Err(e)))
}
};
match payload {
StreamingPayloadChunk::Chunk { data, header } => {
if let Some(signing) = this.signing.as_mut() {
let data_sha256sum = sha256sum(&data);
let expected_signature = compute_streaming_payload_signature(
&signing.signing_hmac,
signing.datetime,
&signing.scope,
signing.previous_signature,
data_sha256sum,
)?;
if header.signature.unwrap() != expected_signature {
return Poll::Ready(Some(Err(StreamingPayloadError::InvalidSignature)));
}
signing.previous_signature = header.signature.unwrap();
}
*this.buf = input.into();
// 0-sized chunk is the last
if data.is_empty() {
// if there was a trailer, it would have been returned by the parser
assert!(!*this.has_trailer);
*this.done = true;
return Poll::Ready(None);
}
return Poll::Ready(Some(Ok(Frame::data(data))));
}
StreamingPayloadChunk::Trailer(trailer) => {
trace!(
"In StreamingPayloadStream::poll_next: got trailer {:?}",
trailer
);
if let Some(signing) = this.signing.as_mut() {
let data = [
trailer.header_name.as_ref(),
&b":"[..],
trailer.header_value.as_ref(),
&b"\n"[..],
]
.concat();
let trailer_sha256sum = sha256sum(&data);
let expected_signature = compute_streaming_trailer_signature(
&signing.signing_hmac,
signing.datetime,
&signing.scope,
signing.previous_signature,
trailer_sha256sum,
)?;
if trailer.signature.unwrap() != expected_signature {
return Poll::Ready(Some(Err(StreamingPayloadError::InvalidSignature)));
}
}
*this.buf = input.into();
*this.done = true;
let mut trailers_map = HeaderMap::new();
trailers_map.insert(trailer.header_name, trailer.header_value);
return Poll::Ready(Some(Ok(Frame::trailers(trailers_map))));
}
}
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
self.stream.size_hint()
}
}
#[cfg(test)]
mod tests {
use futures::prelude::*;
use super::{SignParams, StreamingPayloadError, StreamingPayloadStream};
#[tokio::test]
async fn test_interrupted_signed_payload_stream() {
use chrono::{DateTime, Utc};
use garage_util::data::Hash;
let datetime = DateTime::parse_from_rfc3339("2021-12-13T13:12:42+01:00") // TODO UNIX 0
.unwrap()
.with_timezone(&Utc);
let secret_key = "test";
let region = "test";
let scope = crate::signature::compute_scope(&datetime, region, "s3");
let signing_hmac =
crate::signature::signing_hmac(&datetime, secret_key, region, "s3").unwrap();
let data: &[&[u8]] = &[b"1"];
let body = futures::stream::iter(data.iter().map(|block| Ok(block.to_vec().into())));
let seed_signature = Hash::default();
let mut stream = StreamingPayloadStream::new(
body,
Some(SignParams {
signing_hmac,
datetime,
scope,
previous_signature: seed_signature,
}),
false,
);
assert!(stream.try_next().await.is_err());
match stream.try_next().await {
Err(StreamingPayloadError::Message(msg)) if msg == "Unexpected EOF" => {}
item => panic!(
"Unexpected result, expected early EOF error, got {:?}",
item
),
}
}
}

src/api/k2v/Cargo.toml (new file, 37 lines)

@ -0,0 +1,37 @@
[package]
name = "garage_api_k2v"
version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018"
license = "AGPL-3.0"
description = "K2V API server crate for the Garage object store"
repository = "https://git.deuxfleurs.fr/Deuxfleurs/garage"
readme = "../../../README.md"
[lib]
path = "lib.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
garage_model = { workspace = true, features = [ "k2v" ] }
garage_table.workspace = true
garage_util = { workspace = true, features = [ "k2v" ] }
garage_api_common.workspace = true
base64.workspace = true
thiserror.workspace = true
tracing.workspace = true
futures.workspace = true
tokio.workspace = true
http.workspace = true
http-body-util.workspace = true
hyper = { workspace = true, default-features = false, features = ["server", "http1"] }
percent-encoding.workspace = true
url.workspace = true
serde.workspace = true
serde_json.workspace = true
opentelemetry.workspace = true


@ -1,7 +1,5 @@
use std::sync::Arc; use std::sync::Arc;
use async_trait::async_trait;
use hyper::{body::Incoming as IncomingBody, Method, Request, Response}; use hyper::{body::Incoming as IncomingBody, Method, Request, Response};
use tokio::sync::watch; use tokio::sync::watch;
@ -12,26 +10,25 @@ use garage_util::socket_address::UnixOrTCPSocketAddress;
use garage_model::garage::Garage; use garage_model::garage::Garage;
use crate::generic_server::*; use garage_api_common::cors::*;
use crate::k2v::error::*; use garage_api_common::generic_server::*;
use garage_api_common::helpers::*;
use garage_api_common::signature::verify_request;
use crate::signature::verify_request; use crate::batch::*;
use crate::error::*;
use crate::index::*;
use crate::item::*;
use crate::router::Endpoint;
use crate::helpers::*; pub use garage_api_common::signature::streaming::ReqBody;
use crate::k2v::batch::*;
use crate::k2v::index::*;
use crate::k2v::item::*;
use crate::k2v::router::Endpoint;
use crate::s3::cors::*;
pub use crate::signature::streaming::ReqBody;
pub type ResBody = BoxBody<Error>; pub type ResBody = BoxBody<Error>;
pub struct K2VApiServer { pub struct K2VApiServer {
garage: Arc<Garage>, garage: Arc<Garage>,
} }
pub(crate) struct K2VApiEndpoint { pub struct K2VApiEndpoint {
bucket_name: String, bucket_name: String,
endpoint: Endpoint, endpoint: Endpoint,
} }
@ -49,7 +46,6 @@ impl K2VApiServer {
} }
} }
#[async_trait]
impl ApiHandler for K2VApiServer { impl ApiHandler for K2VApiServer {
const API_NAME: &'static str = "k2v"; const API_NAME: &'static str = "k2v";
const API_NAME_DISPLAY: &'static str = "K2V"; const API_NAME_DISPLAY: &'static str = "K2V";
@ -77,7 +73,7 @@ impl ApiHandler for K2VApiServer {
} = endpoint; } = endpoint;
let garage = self.garage.clone(); let garage = self.garage.clone();
// The OPTIONS method is procesed early, before we even check for an API key // The OPTIONS method is processed early, before we even check for an API key
if let Endpoint::Options = endpoint { if let Endpoint::Options = endpoint {
let options_res = handle_options_api(garage, &req, Some(bucket_name)) let options_res = handle_options_api(garage, &req, Some(bucket_name))
.await .await
@ -85,16 +81,20 @@ impl ApiHandler for K2VApiServer {
return Ok(options_res.map(|_empty_body: EmptyBody| empty_body())); return Ok(options_res.map(|_empty_body: EmptyBody| empty_body()));
} }
let (req, api_key, _content_sha256) = verify_request(&garage, req, "k2v").await?; let verified_request = verify_request(&garage, req, "k2v").await?;
let req = verified_request.request;
let api_key = verified_request.access_key;
let bucket_id = garage let bucket_id = garage
.bucket_helper() .bucket_helper()
.resolve_bucket(&bucket_name, &api_key) .resolve_bucket(&bucket_name, &api_key)
.await?; .await
.map_err(pass_helper_error)?;
let bucket = garage let bucket = garage
.bucket_helper() .bucket_helper()
.get_existing_bucket(bucket_id) .get_existing_bucket(bucket_id)
.await?; .await
.map_err(helper_error_as_internal)?;
let bucket_params = bucket.state.into_option().unwrap(); let bucket_params = bucket.state.into_option().unwrap();
let allowed = match endpoint.authorization_type() { let allowed = match endpoint.authorization_type() {
@ -176,6 +176,12 @@ impl ApiHandler for K2VApiServer {
Ok(resp_ok) Ok(resp_ok)
} }
fn key_id_from_request(&self, req: &Request<IncomingBody>) -> Option<String> {
garage_api_common::signature::payload::Authorization::parse_header(req.headers())
.map(|auth| auth.key_id)
.ok()
}
} }
impl ApiEndpoint for K2VApiEndpoint { impl ApiEndpoint for K2VApiEndpoint {


@ -4,13 +4,14 @@ use serde::{Deserialize, Serialize};
use garage_table::{EnumerationOrder, TableSchema}; use garage_table::{EnumerationOrder, TableSchema};
use garage_model::k2v::causality::*;
use garage_model::k2v::item_table::*; use garage_model::k2v::item_table::*;
use crate::helpers::*; use garage_api_common::helpers::*;
use crate::k2v::api_server::{ReqBody, ResBody};
use crate::k2v::error::*; use crate::api_server::{ReqBody, ResBody};
use crate::k2v::range::read_range; use crate::error::*;
use crate::item::parse_causality_token;
use crate::range::read_range;
pub async fn handle_insert_batch( pub async fn handle_insert_batch(
ctx: ReqCtx, ctx: ReqCtx,
@ -19,11 +20,11 @@ pub async fn handle_insert_batch(
let ReqCtx { let ReqCtx {
garage, bucket_id, .. garage, bucket_id, ..
} = &ctx; } = &ctx;
let items = parse_json_body::<Vec<InsertBatchItem>, _, Error>(req).await?; let items = req.into_body().json::<Vec<InsertBatchItem>>().await?;
let mut items2 = vec![]; let mut items2 = vec![];
for it in items { for it in items {
let ct = it.ct.map(|s| CausalContext::parse_helper(&s)).transpose()?; let ct = it.ct.map(|s| parse_causality_token(&s)).transpose()?;
let v = match it.v { let v = match it.v {
Some(vs) => DvvsValue::Value( Some(vs) => DvvsValue::Value(
BASE64_STANDARD BASE64_STANDARD
@ -46,7 +47,7 @@ pub async fn handle_read_batch(
ctx: ReqCtx, ctx: ReqCtx,
req: Request<ReqBody>, req: Request<ReqBody>,
) -> Result<Response<ResBody>, Error> { ) -> Result<Response<ResBody>, Error> {
let queries = parse_json_body::<Vec<ReadBatchQuery>, _, Error>(req).await?; let queries = req.into_body().json::<Vec<ReadBatchQuery>>().await?;
let resp_results = futures::future::join_all( let resp_results = futures::future::join_all(
queries queries
@ -140,7 +141,7 @@ pub async fn handle_delete_batch(
ctx: ReqCtx, ctx: ReqCtx,
req: Request<ReqBody>, req: Request<ReqBody>,
) -> Result<Response<ResBody>, Error> { ) -> Result<Response<ResBody>, Error> {
let queries = parse_json_body::<Vec<DeleteBatchQuery>, _, Error>(req).await?; let queries = req.into_body().json::<Vec<DeleteBatchQuery>>().await?;
let resp_results = futures::future::join_all( let resp_results = futures::future::join_all(
queries queries
@ -261,7 +262,7 @@ pub(crate) async fn handle_poll_range(
} = ctx; } = ctx;
use garage_model::k2v::sub::PollRange; use garage_model::k2v::sub::PollRange;
let query = parse_json_body::<PollRangeQuery, _, Error>(req).await?; let query = req.into_body().json::<PollRangeQuery>().await?;
let timeout_msec = query.timeout.unwrap_or(300).clamp(1, 600) * 1000; let timeout_msec = query.timeout.unwrap_or(300).clamp(1, 600) * 1000;
@ -281,7 +282,8 @@ pub(crate) async fn handle_poll_range(
query.seen_marker, query.seen_marker,
timeout_msec, timeout_msec,
) )
.await?; .await
.map_err(pass_helper_error)?;
if let Some((items, seen_marker)) = resp { if let Some((items, seen_marker)) = resp {
let resp = PollRangeResponse { let resp = PollRangeResponse {


@ -1,52 +1,54 @@
use err_derive::Error;
use hyper::header::HeaderValue; use hyper::header::HeaderValue;
use hyper::{HeaderMap, StatusCode}; use hyper::{HeaderMap, StatusCode};
use thiserror::Error;
use crate::common_error::CommonError; use garage_api_common::common_error::{commonErrorDerivative, CommonError};
pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInternalError}; pub(crate) use garage_api_common::common_error::{helper_error_as_internal, pass_helper_error};
use crate::generic_server::ApiError; pub use garage_api_common::common_error::{
use crate::helpers::*; CommonErrorDerivative, OkOrBadRequest, OkOrInternalError,
use crate::signature::error::Error as SignatureError; };
use garage_api_common::generic_server::ApiError;
use garage_api_common::helpers::*;
use garage_api_common::signature::error::Error as SignatureError;
/// Errors of this crate /// Errors of this crate
#[derive(Debug, Error)] #[derive(Debug, Error)]
pub enum Error { pub enum Error {
#[error(display = "{}", _0)] #[error("{0}")]
/// Error from common error /// Error from common error
Common(CommonError), Common(#[from] CommonError),
// Category: cannot process // Category: cannot process
/// Authorization Header Malformed /// Authorization Header Malformed
#[error(display = "Authorization header malformed, unexpected scope: {}", _0)] #[error("Authorization header malformed, unexpected scope: {0}")]
AuthorizationHeaderMalformed(String), AuthorizationHeaderMalformed(String),
/// The provided digest (checksum) value was invalid
#[error("Invalid digest: {0}")]
InvalidDigest(String),
/// The object requested doesn't exist /// The object requested doesn't exist
#[error(display = "Key not found")] #[error("Key not found")]
NoSuchKey, NoSuchKey,
/// Some base64 encoded data was badly encoded /// Some base64 encoded data was badly encoded
#[error(display = "Invalid base64: {}", _0)] #[error("Invalid base64: {0}")]
InvalidBase64(#[error(source)] base64::DecodeError), InvalidBase64(#[from] base64::DecodeError),
/// Invalid causality token
#[error("Invalid causality token")]
InvalidCausalityToken,
/// The client asked for an invalid return format (invalid Accept header) /// The client asked for an invalid return format (invalid Accept header)
#[error(display = "Not acceptable: {}", _0)] #[error("Not acceptable: {0}")]
NotAcceptable(String), NotAcceptable(String),
/// The request contained an invalid UTF-8 sequence in its path or in other parameters /// The request contained an invalid UTF-8 sequence in its path or in other parameters
#[error(display = "Invalid UTF-8: {}", _0)] #[error("Invalid UTF-8: {0}")]
InvalidUtf8Str(#[error(source)] std::str::Utf8Error), InvalidUtf8Str(#[from] std::str::Utf8Error),
} }
impl<T> From<T> for Error commonErrorDerivative!(Error);
where
CommonError: From<T>,
{
fn from(err: T) -> Self {
Error::Common(CommonError::from(err))
}
}
impl CommonErrorDerivative for Error {}
impl From<SignatureError> for Error { impl From<SignatureError> for Error {
fn from(err: SignatureError) -> Self { fn from(err: SignatureError) -> Self {
@ -56,6 +58,7 @@ impl From<SignatureError> for Error {
Self::AuthorizationHeaderMalformed(c) Self::AuthorizationHeaderMalformed(c)
} }
SignatureError::InvalidUtf8Str(i) => Self::InvalidUtf8Str(i), SignatureError::InvalidUtf8Str(i) => Self::InvalidUtf8Str(i),
SignatureError::InvalidDigest(d) => Self::InvalidDigest(d),
} }
} }
} }
@ -72,6 +75,8 @@ impl Error {
Error::AuthorizationHeaderMalformed(_) => "AuthorizationHeaderMalformed", Error::AuthorizationHeaderMalformed(_) => "AuthorizationHeaderMalformed",
Error::InvalidBase64(_) => "InvalidBase64", Error::InvalidBase64(_) => "InvalidBase64",
Error::InvalidUtf8Str(_) => "InvalidUtf8String", Error::InvalidUtf8Str(_) => "InvalidUtf8String",
Error::InvalidCausalityToken => "CausalityToken",
Error::InvalidDigest(_) => "InvalidDigest",
} }
} }
} }
@ -85,7 +90,9 @@ impl ApiError for Error {
Error::NotAcceptable(_) => StatusCode::NOT_ACCEPTABLE, Error::NotAcceptable(_) => StatusCode::NOT_ACCEPTABLE,
Error::AuthorizationHeaderMalformed(_) Error::AuthorizationHeaderMalformed(_)
| Error::InvalidBase64(_) | Error::InvalidBase64(_)
| Error::InvalidUtf8Str(_) => StatusCode::BAD_REQUEST, | Error::InvalidUtf8Str(_)
| Error::InvalidDigest(_)
| Error::InvalidCausalityToken => StatusCode::BAD_REQUEST,
} }
} }


@ -5,10 +5,11 @@ use garage_table::util::*;
use garage_model::k2v::item_table::{BYTES, CONFLICTS, ENTRIES, VALUES}; use garage_model::k2v::item_table::{BYTES, CONFLICTS, ENTRIES, VALUES};
use crate::helpers::*; use garage_api_common::helpers::*;
use crate::k2v::api_server::ResBody;
use crate::k2v::error::*; use crate::api_server::ResBody;
use crate::k2v::range::read_range; use crate::error::*;
use crate::range::read_range;
pub async fn handle_read_index( pub async fn handle_read_index(
ctx: ReqCtx, ctx: ReqCtx,


@ -6,9 +6,10 @@ use hyper::{Request, Response, StatusCode};
use garage_model::k2v::causality::*; use garage_model::k2v::causality::*;
use garage_model::k2v::item_table::*; use garage_model::k2v::item_table::*;
use crate::helpers::*; use garage_api_common::helpers::*;
use crate::k2v::api_server::{ReqBody, ResBody};
use crate::k2v::error::*; use crate::api_server::{ReqBody, ResBody};
use crate::error::*;
pub const X_GARAGE_CAUSALITY_TOKEN: &str = "X-Garage-Causality-Token"; pub const X_GARAGE_CAUSALITY_TOKEN: &str = "X-Garage-Causality-Token";
@ -18,6 +19,10 @@ pub enum ReturnFormat {
Either, Either,
} }
pub(crate) fn parse_causality_token(s: &str) -> Result<CausalContext, Error> {
CausalContext::parse(s).ok_or(Error::InvalidCausalityToken)
}
impl ReturnFormat { impl ReturnFormat {
pub fn from(req: &Request<ReqBody>) -> Result<Self, Error> { pub fn from(req: &Request<ReqBody>) -> Result<Self, Error> {
let accept = match req.headers().get(header::ACCEPT) { let accept = match req.headers().get(header::ACCEPT) {
@ -136,12 +141,10 @@ pub async fn handle_insert_item(
.get(X_GARAGE_CAUSALITY_TOKEN) .get(X_GARAGE_CAUSALITY_TOKEN)
.map(|s| s.to_str()) .map(|s| s.to_str())
.transpose()? .transpose()?
.map(CausalContext::parse_helper) .map(parse_causality_token)
.transpose()?; .transpose()?;
let body = http_body_util::BodyExt::collect(req.into_body()) let body = req.into_body().collect().await?;
.await?
.to_bytes();
let value = DvvsValue::Value(body.to_vec()); let value = DvvsValue::Value(body.to_vec());
@ -176,7 +179,7 @@ pub async fn handle_delete_item(
.get(X_GARAGE_CAUSALITY_TOKEN) .get(X_GARAGE_CAUSALITY_TOKEN)
.map(|s| s.to_str()) .map(|s| s.to_str())
.transpose()? .transpose()?
.map(CausalContext::parse_helper) .map(parse_causality_token)
.transpose()?; .transpose()?;
let value = DvvsValue::Deleted; let value = DvvsValue::Deleted;


@ -1,3 +1,6 @@
#[macro_use]
extern crate tracing;
pub mod api_server; pub mod api_server;
mod error; mod error;
mod router; mod router;


@ -7,8 +7,9 @@ use std::sync::Arc;
use garage_table::replication::TableShardedReplication; use garage_table::replication::TableShardedReplication;
use garage_table::*; use garage_table::*;
use crate::helpers::key_after_prefix; use garage_api_common::helpers::key_after_prefix;
use crate::k2v::error::*;
use crate::error::*;
/// Read range in a Garage table. /// Read range in a Garage table.
/// Returns (entries, more?, nextStart) /// Returns (entries, more?, nextStart)


@ -1,11 +1,11 @@
use crate::k2v::error::*; use crate::error::*;
use std::borrow::Cow; use std::borrow::Cow;
use hyper::{Method, Request}; use hyper::{Method, Request};
use crate::helpers::Authorization; use garage_api_common::helpers::Authorization;
use crate::router_macros::{generateQueryParameters, router_match}; use garage_api_common::router_macros::{generateQueryParameters, router_match};
router_match! {@func router_match! {@func


@ -1,17 +0,0 @@
//! Crate for serving a S3 compatible API
#[macro_use]
extern crate tracing;
pub mod common_error;
mod encoding;
pub mod generic_server;
pub mod helpers;
mod router_macros;
/// This mode is public only to help testing. Don't expect stability here
pub mod signature;
pub mod admin;
#[cfg(feature = "k2v")]
pub mod k2v;
pub mod s3;


@ -1,12 +1,12 @@
[package] [package]
name = "garage_api" name = "garage_api_s3"
version = "1.0.0" version = "1.3.1"
authors = ["Alex Auvolat <alex@adnab.me>"] authors = ["Alex Auvolat <alex@adnab.me>"]
edition = "2018" edition = "2018"
license = "AGPL-3.0" license = "AGPL-3.0"
description = "S3 API server crate for the Garage object store" description = "S3 API server crate for the Garage object store"
repository = "https://git.deuxfleurs.fr/Deuxfleurs/garage" repository = "https://git.deuxfleurs.fr/Deuxfleurs/garage"
readme = "../../README.md" readme = "../../../README.md"
[lib] [lib]
path = "lib.rs" path = "lib.rs"
@ -20,30 +20,24 @@ garage_block.workspace = true
garage_net.workspace = true garage_net.workspace = true
garage_util.workspace = true garage_util.workspace = true
garage_rpc.workspace = true garage_rpc.workspace = true
garage_api_common.workspace = true
aes-gcm.workspace = true aes-gcm.workspace = true
argon2.workspace = true
async-compression.workspace = true async-compression.workspace = true
async-trait.workspace = true
base64.workspace = true base64.workspace = true
bytes.workspace = true bytes.workspace = true
chrono.workspace = true chrono.workspace = true
crc32fast.workspace = true crc32fast.workspace = true
crc32c.workspace = true crc32c.workspace = true
crypto-common.workspace = true thiserror.workspace = true
err-derive.workspace = true
hex.workspace = true hex.workspace = true
hmac.workspace = true
idna.workspace = true
tracing.workspace = true tracing.workspace = true
md-5.workspace = true md-5.workspace = true
nom.workspace = true
pin-project.workspace = true pin-project.workspace = true
sha1.workspace = true sha1.workspace = true
sha2.workspace = true sha2.workspace = true
futures.workspace = true futures.workspace = true
futures-util.workspace = true
tokio.workspace = true tokio.workspace = true
tokio-stream.workspace = true tokio-stream.workspace = true
tokio-util.workspace = true tokio-util.workspace = true
@ -54,21 +48,13 @@ httpdate.workspace = true
http-range.workspace = true http-range.workspace = true
http-body-util.workspace = true http-body-util.workspace = true
hyper = { workspace = true, default-features = false, features = ["server", "http1"] } hyper = { workspace = true, default-features = false, features = ["server", "http1"] }
hyper-util.workspace = true
multer.workspace = true multer.workspace = true
percent-encoding.workspace = true percent-encoding.workspace = true
roxmltree.workspace = true roxmltree.workspace = true
url.workspace = true url.workspace = true
serde.workspace = true serde.workspace = true
serde_bytes.workspace = true
serde_json.workspace = true serde_json.workspace = true
quick-xml.workspace = true quick-xml.workspace = true
opentelemetry.workspace = true opentelemetry.workspace = true
opentelemetry-prometheus = { workspace = true, optional = true }
prometheus = { workspace = true, optional = true }
[features]
k2v = [ "garage_util/k2v", "garage_model/k2v" ]
metrics = [ "opentelemetry-prometheus", "prometheus" ]


@ -1,7 +1,5 @@
use std::sync::Arc; use std::sync::Arc;
use async_trait::async_trait;
use hyper::header; use hyper::header;
use hyper::{body::Incoming as IncomingBody, Request, Response}; use hyper::{body::Incoming as IncomingBody, Request, Response};
use tokio::sync::watch; use tokio::sync::watch;
@ -14,33 +12,33 @@ use garage_util::socket_address::UnixOrTCPSocketAddress;
use garage_model::garage::Garage; use garage_model::garage::Garage;
use garage_model::key_table::Key; use garage_model::key_table::Key;
use crate::generic_server::*; use garage_api_common::cors::*;
use crate::s3::error::*; use garage_api_common::generic_server::*;
use garage_api_common::helpers::*;
use garage_api_common::signature::verify_request;
use crate::signature::verify_request; use crate::bucket::*;
use crate::copy::*;
use crate::cors::*;
use crate::delete::*;
use crate::error::*;
use crate::get::*;
use crate::lifecycle::*;
use crate::list::*;
use crate::multipart::*;
use crate::post_object::handle_post_object;
use crate::put::*;
use crate::router::Endpoint;
use crate::website::*;
use crate::helpers::*; pub use garage_api_common::signature::streaming::ReqBody;
use crate::s3::bucket::*;
use crate::s3::copy::*;
use crate::s3::cors::*;
use crate::s3::delete::*;
use crate::s3::get::*;
use crate::s3::lifecycle::*;
use crate::s3::list::*;
use crate::s3::multipart::*;
use crate::s3::post_object::handle_post_object;
use crate::s3::put::*;
use crate::s3::router::Endpoint;
use crate::s3::website::*;
pub use crate::signature::streaming::ReqBody;
pub type ResBody = BoxBody<Error>; pub type ResBody = BoxBody<Error>;
pub struct S3ApiServer { pub struct S3ApiServer {
garage: Arc<Garage>, garage: Arc<Garage>,
} }
pub(crate) struct S3ApiEndpoint { pub struct S3ApiEndpoint {
bucket_name: Option<String>, bucket_name: Option<String>,
endpoint: Endpoint, endpoint: Endpoint,
} }
@ -70,7 +68,6 @@ impl S3ApiServer {
} }
} }
#[async_trait]
impl ApiHandler for S3ApiServer { impl ApiHandler for S3ApiServer {
const API_NAME: &'static str = "s3"; const API_NAME: &'static str = "s3";
const API_NAME_DISPLAY: &'static str = "S3"; const API_NAME_DISPLAY: &'static str = "S3";
@ -124,7 +121,9 @@ impl ApiHandler for S3ApiServer {
return Ok(options_res.map(|_empty_body: EmptyBody| empty_body())); return Ok(options_res.map(|_empty_body: EmptyBody| empty_body()));
} }
let (req, api_key, content_sha256) = verify_request(&garage, req, "s3").await?; let verified_request = verify_request(&garage, req, "s3").await?;
let req = verified_request.request;
let api_key = verified_request.access_key;
let bucket_name = match bucket_name { let bucket_name = match bucket_name {
None => { None => {
@ -137,20 +136,14 @@ impl ApiHandler for S3ApiServer {
// Special code path for CreateBucket API endpoint // Special code path for CreateBucket API endpoint
if let Endpoint::CreateBucket {} = endpoint { if let Endpoint::CreateBucket {} = endpoint {
return handle_create_bucket( return handle_create_bucket(&garage, req, &api_key.key_id, bucket_name).await;
&garage,
req,
content_sha256,
&api_key.key_id,
bucket_name,
)
.await;
} }
let bucket_id = garage let bucket_id = garage
 		.bucket_helper()
 		.resolve_bucket(&bucket_name, &api_key)
-		.await?;
+		.await
+		.map_err(pass_helper_error)?;
 	let bucket = garage
 		.bucket_helper()
 		.get_existing_bucket(bucket_id)
@@ -181,7 +174,7 @@ impl ApiHandler for S3ApiServer {
 		let resp = match endpoint {
 			Endpoint::HeadObject {
 				key, part_number, ..
-			} => handle_head(ctx, &req, &key, part_number).await,
+			} => handle_head(ctx, &req.map(|_| ()), &key, part_number).await,
 			Endpoint::GetObject {
 				key,
 				part_number,
@@ -201,20 +194,20 @@ impl ApiHandler for S3ApiServer {
 					response_content_type,
 					response_expires,
 				};
-				handle_get(ctx, &req, &key, part_number, overrides).await
+				handle_get(ctx, &req.map(|_| ()), &key, part_number, overrides).await
 			}
 			Endpoint::UploadPart {
 				key,
 				part_number,
 				upload_id,
-			} => handle_put_part(ctx, req, &key, part_number, &upload_id, content_sha256).await,
+			} => handle_put_part(ctx, req, &key, part_number, &upload_id).await,
 			Endpoint::CopyObject { key } => handle_copy(ctx, &req, &key).await,
 			Endpoint::UploadPartCopy {
 				key,
 				part_number,
 				upload_id,
 			} => handle_upload_part_copy(ctx, &req, &key, part_number, &upload_id).await,
-			Endpoint::PutObject { key } => handle_put(ctx, req, &key, content_sha256).await,
+			Endpoint::PutObject { key } => handle_put(ctx, req, &key).await,
 			Endpoint::AbortMultipartUpload { key, upload_id } => {
 				handle_abort_multipart_upload(ctx, &key, &upload_id).await
 			}
@@ -223,7 +216,7 @@ impl ApiHandler for S3ApiServer {
 				handle_create_multipart_upload(ctx, &req, &key).await
 			}
 			Endpoint::CompleteMultipartUpload { key, upload_id } => {
-				handle_complete_multipart_upload(ctx, req, &key, &upload_id, content_sha256).await
+				handle_complete_multipart_upload(ctx, req, &key, &upload_id).await
 			}
 			Endpoint::CreateBucket {} => unreachable!(),
 			Endpoint::HeadBucket {} => {
@@ -233,6 +226,7 @@ impl ApiHandler for S3ApiServer {
 			Endpoint::DeleteBucket {} => handle_delete_bucket(ctx).await,
 			Endpoint::GetBucketLocation {} => handle_get_bucket_location(ctx),
 			Endpoint::GetBucketVersioning {} => handle_get_bucket_versioning(),
+			Endpoint::GetBucketAcl {} => handle_get_bucket_acl(ctx),
 			Endpoint::ListObjects {
 				delimiter,
 				encoding_type,
@@ -319,7 +313,6 @@ impl ApiHandler for S3ApiServer {
 			} => {
 				let query = ListPartsQuery {
 					bucket_name: ctx.bucket_name.clone(),
-					bucket_id,
 					key,
 					upload_id,
 					part_number_marker: part_number_marker.map(|p| p.min(10000)),
@@ -327,17 +320,15 @@ impl ApiHandler for S3ApiServer {
 				};
 				handle_list_parts(ctx, req, &query).await
 			}
-			Endpoint::DeleteObjects {} => handle_delete_objects(ctx, req, content_sha256).await,
+			Endpoint::DeleteObjects {} => handle_delete_objects(ctx, req).await,
 			Endpoint::GetBucketWebsite {} => handle_get_website(ctx).await,
-			Endpoint::PutBucketWebsite {} => handle_put_website(ctx, req, content_sha256).await,
+			Endpoint::PutBucketWebsite {} => handle_put_website(ctx, req).await,
 			Endpoint::DeleteBucketWebsite {} => handle_delete_website(ctx).await,
 			Endpoint::GetBucketCors {} => handle_get_cors(ctx).await,
-			Endpoint::PutBucketCors {} => handle_put_cors(ctx, req, content_sha256).await,
+			Endpoint::PutBucketCors {} => handle_put_cors(ctx, req).await,
 			Endpoint::DeleteBucketCors {} => handle_delete_cors(ctx).await,
 			Endpoint::GetBucketLifecycleConfiguration {} => handle_get_lifecycle(ctx).await,
-			Endpoint::PutBucketLifecycleConfiguration {} => {
-				handle_put_lifecycle(ctx, req, content_sha256).await
-			}
+			Endpoint::PutBucketLifecycleConfiguration {} => handle_put_lifecycle(ctx, req).await,
 			Endpoint::DeleteBucketLifecycle {} => handle_delete_lifecycle(ctx).await,
 			endpoint => Err(Error::NotImplemented(endpoint.name().to_owned())),
 		};
@@ -352,6 +343,12 @@ impl ApiHandler for S3ApiServer {
 		Ok(resp_ok)
 	}
+
+	fn key_id_from_request(&self, req: &Request<IncomingBody>) -> Option<String> {
+		garage_api_common::signature::payload::Authorization::parse_header(req.headers())
+			.map(|auth| auth.key_id)
+			.ok()
+	}
 }
 
 impl ApiEndpoint for S3ApiEndpoint {

View file

@@ -1,24 +1,22 @@
 use std::collections::HashMap;
 
-use http_body_util::BodyExt;
 use hyper::{Request, Response, StatusCode};
 
 use garage_model::bucket_alias_table::*;
 use garage_model::bucket_table::Bucket;
 use garage_model::garage::Garage;
-use garage_model::key_table::Key;
+use garage_model::key_table::{Key, KeyParams};
 use garage_model::permission::BucketKeyPerm;
 use garage_table::util::*;
 use garage_util::crdt::*;
-use garage_util::data::*;
 use garage_util::time::*;
 
-use crate::common_error::CommonError;
-use crate::helpers::*;
-use crate::s3::api_server::{ReqBody, ResBody};
-use crate::s3::error::*;
-use crate::s3::xml as s3_xml;
-use crate::signature::verify_signed_content;
+use garage_api_common::common_error::CommonError;
+use garage_api_common::helpers::*;
+
+use crate::api_server::{ReqBody, ResBody};
+use crate::error::*;
+use crate::xml as s3_xml;
 
 pub fn handle_get_bucket_location(ctx: ReqCtx) -> Result<Response<ResBody>, Error> {
 	let ReqCtx { garage, .. } = ctx;
@@ -46,6 +44,55 @@ pub fn handle_get_bucket_versioning() -> Result<Response<ResBody>, Error> {
 		.body(string_body(xml))?)
 }
 
+pub fn handle_get_bucket_acl(ctx: ReqCtx) -> Result<Response<ResBody>, Error> {
+	let ReqCtx {
+		bucket_id, api_key, ..
+	} = ctx;
+
+	let key_p = api_key.params().ok_or_internal_error(
+		"Key should not be in deleted state at this point (in handle_get_bucket_acl)",
+	)?;
+
+	let mut grants: Vec<s3_xml::Grant> = vec![];
+	let kp = api_key.bucket_permissions(&bucket_id);
+	if kp.allow_owner {
+		grants.push(s3_xml::Grant {
+			grantee: create_grantee(&key_p, &api_key),
+			permission: s3_xml::Value("FULL_CONTROL".to_string()),
+		});
+	} else {
+		if kp.allow_read {
+			grants.push(s3_xml::Grant {
+				grantee: create_grantee(&key_p, &api_key),
+				permission: s3_xml::Value("READ".to_string()),
+			});
+			grants.push(s3_xml::Grant {
+				grantee: create_grantee(&key_p, &api_key),
+				permission: s3_xml::Value("READ_ACP".to_string()),
+			});
+		}
+		if kp.allow_write {
+			grants.push(s3_xml::Grant {
+				grantee: create_grantee(&key_p, &api_key),
+				permission: s3_xml::Value("WRITE".to_string()),
+			});
+		}
+	}
+
+	let access_control_policy = s3_xml::AccessControlPolicy {
+		xmlns: (),
+		owner: None,
+		acl: s3_xml::AccessControlList { entries: grants },
+	};
+
+	let xml = s3_xml::to_xml_with_header(&access_control_policy)?;
+	trace!("xml: {}", xml);
+
+	Ok(Response::builder()
+		.header("Content-Type", "application/xml")
+		.body(string_body(xml))?)
+}
 pub async fn handle_list_buckets(
 	garage: &Garage,
 	api_key: &Key,
@@ -121,15 +168,10 @@ pub async fn handle_list_buckets(
 pub async fn handle_create_bucket(
 	garage: &Garage,
 	req: Request<ReqBody>,
-	content_sha256: Option<Hash>,
 	api_key_id: &String,
 	bucket_name: String,
 ) -> Result<Response<ResBody>, Error> {
-	let body = BodyExt::collect(req.into_body()).await?.to_bytes();
-
-	if let Some(content_sha256) = content_sha256 {
-		verify_signed_content(content_sha256, &body[..])?;
-	}
+	let body = req.into_body().collect().await?;
 
 	let cmd =
 		parse_create_bucket_xml(&body[..]).ok_or_bad_request("Invalid create bucket XML query")?;
@@ -179,7 +221,7 @@ pub async fn handle_create_bucket(
 	}
 
 	// Create the bucket!
-	if !is_valid_bucket_name(&bucket_name) {
+	if !is_valid_bucket_name(&bucket_name, garage.config.allow_punycode) {
 		return Err(Error::bad_request(format!(
 			"{}: {}",
 			bucket_name, INVALID_BUCKET_NAME_MESSAGE
@@ -248,11 +290,11 @@ pub async fn handle_delete_bucket(ctx: ReqCtx) -> Result<Response<ResBody>, Erro
 	// 1. delete bucket alias
 	if is_local_alias {
 		helper
-			.unset_local_bucket_alias(*bucket_id, &api_key.key_id, bucket_name)
+			.purge_local_bucket_alias(*bucket_id, &api_key.key_id, bucket_name)
 			.await?;
 	} else {
 		helper
-			.unset_global_bucket_alias(*bucket_id, bucket_name)
+			.purge_global_bucket_alias(*bucket_id, bucket_name)
 			.await?;
 	}
@@ -318,6 +360,15 @@ fn parse_create_bucket_xml(xml_bytes: &[u8]) -> Option<Option<String>> {
 	Some(ret)
 }
 
+fn create_grantee(key_params: &KeyParams, api_key: &Key) -> s3_xml::Grantee {
+	s3_xml::Grantee {
+		xmlns_xsi: (),
+		typ: "CanonicalUser".to_string(),
+		display_name: Some(s3_xml::Value(key_params.name.get().to_string())),
+		id: Some(s3_xml::Value(api_key.key_id.to_string())),
+	}
+}
+
 #[cfg(test)]
 mod tests {
 	use super::*;

View file

@@ -1,9 +1,9 @@
 use std::pin::Pin;
-use std::time::{Duration, SystemTime, UNIX_EPOCH};
 
 use futures::{stream, stream::Stream, StreamExt, TryStreamExt};
 
 use bytes::Bytes;
+use http::header::HeaderName;
 use hyper::{Request, Response};
 use serde::Serialize;
 
@@ -20,15 +20,26 @@ use garage_model::s3::mpu_table::*;
 use garage_model::s3::object_table::*;
 use garage_model::s3::version_table::*;
 
-use crate::helpers::*;
-use crate::s3::api_server::{ReqBody, ResBody};
-use crate::s3::checksum::*;
-use crate::s3::encryption::EncryptionParams;
-use crate::s3::error::*;
-use crate::s3::get::full_object_byte_stream;
-use crate::s3::multipart;
-use crate::s3::put::{get_headers, save_stream, ChecksumMode, SaveStreamResult};
-use crate::s3::xml::{self as s3_xml, xmlns_tag};
+use garage_api_common::helpers::*;
+use garage_api_common::signature::checksum::*;
+
+use crate::api_server::{ReqBody, ResBody};
+use crate::encryption::EncryptionParams;
+use crate::error::*;
+use crate::get::{check_version_not_deleted, full_object_byte_stream, PreconditionHeaders};
+use crate::multipart;
+use crate::put::{extract_metadata_headers, save_stream, ChecksumMode, SaveStreamResult};
+use crate::website::X_AMZ_WEBSITE_REDIRECT_LOCATION;
+use crate::xml::{self as s3_xml, xmlns_tag};
+
+pub const X_AMZ_COPY_SOURCE_IF_MATCH: HeaderName =
+	HeaderName::from_static("x-amz-copy-source-if-match");
+pub const X_AMZ_COPY_SOURCE_IF_NONE_MATCH: HeaderName =
+	HeaderName::from_static("x-amz-copy-source-if-none-match");
+pub const X_AMZ_COPY_SOURCE_IF_MODIFIED_SINCE: HeaderName =
+	HeaderName::from_static("x-amz-copy-source-if-modified-since");
+pub const X_AMZ_COPY_SOURCE_IF_UNMODIFIED_SINCE: HeaderName =
+	HeaderName::from_static("x-amz-copy-source-if-unmodified-since");
 
 // -------- CopyObject ---------
@@ -37,7 +48,7 @@ pub async fn handle_copy(
 	req: &Request<ReqBody>,
 	dest_key: &str,
 ) -> Result<Response<ResBody>, Error> {
-	let copy_precondition = CopyPreconditionHeaders::parse(req)?;
+	let copy_precondition = PreconditionHeaders::parse_copy_source(req)?;
 
 	let checksum_algorithm = request_checksum_algorithm(req.headers())?;
@@ -47,7 +58,7 @@ pub async fn handle_copy(
 		extract_source_info(&source_object)?;
 
 	// Check precondition, e.g. x-amz-copy-source-if-match
-	copy_precondition.check(source_version, &source_version_meta.etag)?;
+	copy_precondition.check_copy_source(source_version, &source_version_meta.etag)?;
 
 	// Determine encryption parameters
 	let (source_encryption, source_object_meta_inner) =
@@ -63,7 +74,7 @@ pub async fn handle_copy(
 	let source_checksum_algorithm = source_checksum.map(|x| x.algorithm());
 
 	// If source object has a checksum, the destination object must as well.
-	// The x-amz-checksum-algorihtm header allows to change that algorithm,
+	// The x-amz-checksum-algorithm header allows to change that algorithm,
 	// but if it is absent, we must use the same as before
 	let checksum_algorithm = checksum_algorithm.or(source_checksum_algorithm);
@@ -72,9 +83,20 @@ pub async fn handle_copy(
 	let dest_object_meta = ObjectVersionMetaInner {
 		headers: match req.headers().get("x-amz-metadata-directive") {
 			Some(v) if v == hyper::header::HeaderValue::from_static("REPLACE") => {
-				get_headers(req.headers())?
-			}
-			_ => source_object_meta_inner.into_owned().headers,
+				extract_metadata_headers(req.headers())?
+			}
+			_ => {
+				// The x-amz-website-redirect-location header is not copied, instead
+				// it is replaced by the value from the request (or removed if no
+				// value was specified)
+				let is_redirect =
+					|(key, _): &(String, String)| key == X_AMZ_WEBSITE_REDIRECT_LOCATION.as_str();
+				let mut headers: Vec<_> = source_object_meta_inner.headers.clone();
+				headers.retain(|h| !is_redirect(h));
+				let new_headers = extract_metadata_headers(req.headers())?;
+				headers.extend(new_headers.into_iter().filter(is_redirect));
+				headers
+			}
 		},
 		checksum: source_checksum,
 	};
@@ -215,6 +237,7 @@ async fn handle_copy_metaonly(
 		.get(&source_version.uuid, &EmptyKey)
 		.await?;
 	let source_version = source_version.ok_or(Error::NoSuchKey)?;
+	check_version_not_deleted(&source_version)?;
 
 	// Write an "uploading" marker in Object table
 	// This holds a reference to the object in the Version table
@@ -334,7 +357,7 @@ pub async fn handle_upload_part_copy(
 	part_number: u64,
 	upload_id: &str,
 ) -> Result<Response<ResBody>, Error> {
-	let copy_precondition = CopyPreconditionHeaders::parse(req)?;
+	let copy_precondition = PreconditionHeaders::parse_copy_source(req)?;
 
 	let dest_upload_id = multipart::decode_upload_id(upload_id)?;
@@ -350,7 +373,7 @@ pub async fn handle_upload_part_copy(
 		extract_source_info(&source_object)?;
 
 	// Check precondition on source, e.g. x-amz-copy-source-if-match
-	copy_precondition.check(source_object_version, &source_version_meta.etag)?;
+	copy_precondition.check_copy_source(source_object_version, &source_version_meta.etag)?;
 
 	// Determine encryption parameters
 	let (source_encryption, _) = EncryptionParams::check_decrypt_for_copy_source(
@@ -406,6 +429,7 @@ pub async fn handle_upload_part_copy(
 		.get(&source_object_version.uuid, &EmptyKey)
 		.await?
 		.ok_or(Error::NoSuchKey)?;
+	check_version_not_deleted(&source_version)?;
 
 	// We want to reuse blocks from the source version as much as possible.
 	// However, we still need to get the data from these blocks
@@ -537,6 +561,7 @@ pub async fn handle_upload_part_copy(
 	let mut current_offset = 0;
 	let mut next_block = defragmenter.next().await?;
+	let mut blocks_to_dup = dest_version.clone();
 
 	// TODO this could be optimized similarly to read_and_put_blocks
 	// low priority because uploadpartcopy is rarely used
@@ -566,8 +591,7 @@ pub async fn handle_upload_part_copy(
 			.unwrap()?;
 		checksummer = checksummer_updated;
 
-		dest_version.blocks.clear();
-		dest_version.blocks.put(
+		let (version_block_key, version_block) = (
 			VersionBlockKey {
 				part_number,
 				offset: current_offset,
@@ -579,37 +603,56 @@ pub async fn handle_upload_part_copy(
 		);
 		current_offset += data_len;
 
-		let block_ref = BlockRef {
-			block: final_hash,
-			version: dest_version_id,
-			deleted: false.into(),
-		};
-
-		let (_, _, _, next) = futures::try_join!(
-			// Thing 1: if the block is not exactly a block that existed before,
-			// we need to insert that data as a new block.
-			async {
-				if let Some(final_data) = data_to_upload {
-					garage
-						.block_manager
-						.rpc_put_block(final_hash, final_data, dest_encryption.is_encrypted(), None)
-						.await
-				} else {
-					Ok(())
-				}
-			},
-			// Thing 2: we need to insert the block in the version
-			garage.version_table.insert(&dest_version),
-			// Thing 3: we need to add a block reference
-			garage.block_ref_table.insert(&block_ref),
-			// Thing 4: we need to read the next block
-			defragmenter.next(),
-		)?;
+		let next = if let Some(final_data) = data_to_upload {
+			dest_version.blocks.clear();
+			dest_version.blocks.put(version_block_key, version_block);
+			let block_ref = BlockRef {
+				block: final_hash,
+				version: dest_version_id,
+				deleted: false.into(),
+			};
+			let (_, _, _, next) = futures::try_join!(
+				// Thing 1: if the block is not exactly a block that existed before,
+				// we need to insert that data as a new block.
+				garage.block_manager.rpc_put_block(
+					final_hash,
+					final_data,
+					dest_encryption.is_encrypted(),
+					None
+				),
+				// Thing 2: we need to insert the block in the version
+				garage.version_table.insert(&dest_version),
+				// Thing 3: we need to add a block reference
+				garage.block_ref_table.insert(&block_ref),
+				// Thing 4: we need to read the next block
+				defragmenter.next(),
+			)?;
+			next
+		} else {
+			blocks_to_dup.blocks.put(version_block_key, version_block);
+			defragmenter.next().await?
+		};
+
 		next_block = next;
 	}
 
 	assert_eq!(current_offset, source_range.length);
 
+	// Put the duplicated blocks into the version & block_refs tables
+	let block_refs_to_put = blocks_to_dup
+		.blocks
+		.items()
+		.iter()
+		.map(|b| BlockRef {
+			block: b.1.hash,
+			version: dest_version_id,
+			deleted: false.into(),
+		})
+		.collect::<Vec<_>>();
+	futures::try_join!(
+		garage.version_table.insert(&blocks_to_dup),
+		garage.block_ref_table.insert_many(&block_refs_to_put[..]),
+	)?;
+
 	let checksums = checksummer.finalize();
 	let etag = dest_encryption.etag_from_md5(&checksums.md5);
 	let checksum = checksums.extract(dest_object_checksum_algorithm);
@@ -655,7 +698,8 @@ async fn get_copy_source(ctx: &ReqCtx, req: &Request<ReqBody>) -> Result<Object,
 	let source_bucket_id = garage
 		.bucket_helper()
 		.resolve_bucket(&source_bucket.to_string(), api_key)
-		.await?;
+		.await
+		.map_err(pass_helper_error)?;
 
 	if !api_key.allow_read(&source_bucket_id) {
 		return Err(Error::forbidden(format!(
@@ -701,97 +745,6 @@ fn extract_source_info(
 	Ok((source_version, source_version_data, source_version_meta))
 }
 
-struct CopyPreconditionHeaders {
-	copy_source_if_match: Option<Vec<String>>,
-	copy_source_if_modified_since: Option<SystemTime>,
-	copy_source_if_none_match: Option<Vec<String>>,
-	copy_source_if_unmodified_since: Option<SystemTime>,
-}
-
-impl CopyPreconditionHeaders {
-	fn parse(req: &Request<ReqBody>) -> Result<Self, Error> {
-		Ok(Self {
-			copy_source_if_match: req
-				.headers()
-				.get("x-amz-copy-source-if-match")
-				.map(|x| x.to_str())
-				.transpose()?
-				.map(|x| {
-					x.split(',')
-						.map(|m| m.trim().trim_matches('"').to_string())
-						.collect::<Vec<_>>()
-				}),
-			copy_source_if_modified_since: req
-				.headers()
-				.get("x-amz-copy-source-if-modified-since")
-				.map(|x| x.to_str())
-				.transpose()?
-				.map(httpdate::parse_http_date)
-				.transpose()
-				.ok_or_bad_request("Invalid date in x-amz-copy-source-if-modified-since")?,
-			copy_source_if_none_match: req
-				.headers()
-				.get("x-amz-copy-source-if-none-match")
-				.map(|x| x.to_str())
-				.transpose()?
-				.map(|x| {
-					x.split(',')
-						.map(|m| m.trim().trim_matches('"').to_string())
-						.collect::<Vec<_>>()
-				}),
-			copy_source_if_unmodified_since: req
-				.headers()
-				.get("x-amz-copy-source-if-unmodified-since")
-				.map(|x| x.to_str())
-				.transpose()?
-				.map(httpdate::parse_http_date)
-				.transpose()
-				.ok_or_bad_request("Invalid date in x-amz-copy-source-if-unmodified-since")?,
-		})
-	}
-
-	fn check(&self, v: &ObjectVersion, etag: &str) -> Result<(), Error> {
-		let v_date = UNIX_EPOCH + Duration::from_millis(v.timestamp);
-
-		let ok = match (
-			&self.copy_source_if_match,
-			&self.copy_source_if_unmodified_since,
-			&self.copy_source_if_none_match,
-			&self.copy_source_if_modified_since,
-		) {
-			// TODO I'm not sure all of the conditions are evaluated correctly here
-			// If we have both if-match and if-unmodified-since,
-			// basically we don't care about if-unmodified-since,
-			// because in the spec it says that if if-match evaluates to
-			// true but if-unmodified-since evaluates to false,
-			// the copy is still done.
-			(Some(im), _, None, None) => im.iter().any(|x| x == etag || x == "*"),
-			(None, Some(ius), None, None) => v_date <= *ius,
-			// If we have both if-none-match and if-modified-since,
-			// then both of the two conditions must evaluate to true
-			(None, None, Some(inm), Some(ims)) => {
-				!inm.iter().any(|x| x == etag || x == "*") && v_date > *ims
-			}
-			(None, None, Some(inm), None) => !inm.iter().any(|x| x == etag || x == "*"),
-			(None, None, None, Some(ims)) => v_date > *ims,
-			(None, None, None, None) => true,
-			_ => {
-				return Err(Error::bad_request(
-					"Invalid combination of x-amz-copy-source-if-xxxxx headers",
-				))
-			}
-		};
-
-		if ok {
-			Ok(())
-		} else {
-			Err(Error::PreconditionFailed)
-		}
-	}
-}
-
 type BlockStreamItemOk = (Bytes, Option<Hash>);
 type BlockStreamItem = Result<BlockStreamItemOk, garage_util::error::Error>;
@@ -861,7 +814,7 @@ pub struct CopyPartResult {
 #[cfg(test)]
 mod tests {
 	use super::*;
-	use crate::s3::xml::to_xml_with_header;
+	use crate::xml::to_xml_with_header;
 
 	#[test]
 	fn copy_object_result() -> Result<(), Error> {
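
For reference, the rewritten loop above relies on `futures::try_join!` to run the block upload, the two table inserts, and the read of the next block concurrently, short-circuiting on the first error. A minimal self-contained sketch of that behaviour (not Garage code; `step` is a made-up stand-in for the four awaited operations):

```rust
use futures::executor::block_on;

// Hypothetical stand-in for "put block", "insert version", etc.
async fn step(label: &str, fail: bool) -> Result<String, String> {
    if fail {
        Err(format!("{label} failed"))
    } else {
        Ok(format!("{label} done"))
    }
}

fn main() {
    // All four futures are polled concurrently; the result is Ok only if all succeed.
    let res = block_on(async {
        futures::try_join!(
            step("put block", false),
            step("insert version", false),
            step("insert block ref", false),
            step("read next block", false),
        )
    });
    assert!(res.is_ok());

    // The first Err short-circuits the whole join.
    let res = block_on(async { futures::try_join!(step("put block", true), step("noop", false)) });
    assert_eq!(res, Err("put block failed".to_string()));
}
```

Deferring the inserts for duplicated blocks to a single `insert_many` batch after the loop, instead of re-inserting `dest_version` on every iteration, is what removes the per-block table writes in the common case where no data needs re-uploading.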

View file

@@ -1,29 +1,16 @@
 use quick_xml::de::from_reader;
-use std::sync::Arc;
 
-use http::header::{
-	ACCESS_CONTROL_ALLOW_HEADERS, ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN,
-	ACCESS_CONTROL_EXPOSE_HEADERS, ACCESS_CONTROL_REQUEST_HEADERS, ACCESS_CONTROL_REQUEST_METHOD,
-};
-use hyper::{
-	body::Body, body::Incoming as IncomingBody, header::HeaderName, Method, Request, Response,
-	StatusCode,
-};
-use http_body_util::BodyExt;
+use hyper::{header::HeaderName, Method, Request, Response, StatusCode};
 use serde::{Deserialize, Serialize};
 
-use crate::common_error::CommonError;
-use crate::helpers::*;
-use crate::s3::api_server::{ReqBody, ResBody};
-use crate::s3::error::*;
-use crate::s3::xml::{to_xml_with_header, xmlns_tag, IntValue, Value};
-use crate::signature::verify_signed_content;
-use garage_model::bucket_table::{Bucket, BucketParams, CorsRule as GarageCorsRule};
-use garage_model::garage::Garage;
-use garage_util::data::*;
+use garage_model::bucket_table::{Bucket, CorsRule as GarageCorsRule};
+
+use garage_api_common::helpers::*;
+
+use crate::api_server::{ReqBody, ResBody};
+use crate::error::*;
+use crate::xml::{to_xml_with_header, xmlns_tag, IntValue, Value};
 
 pub async fn handle_get_cors(ctx: ReqCtx) -> Result<Response<ResBody>, Error> {
 	let ReqCtx { bucket_params, .. } = ctx;
@@ -68,7 +55,6 @@ pub async fn handle_delete_cors(ctx: ReqCtx) -> Result<Response<ResBody>, Error>
 pub async fn handle_put_cors(
 	ctx: ReqCtx,
 	req: Request<ReqBody>,
-	content_sha256: Option<Hash>,
 ) -> Result<Response<ResBody>, Error> {
 	let ReqCtx {
 		garage,
@@ -77,11 +63,7 @@ pub async fn handle_put_cors(
 		..
 	} = ctx;
 
-	let body = BodyExt::collect(req.into_body()).await?.to_bytes();
-
-	if let Some(content_sha256) = content_sha256 {
-		verify_signed_content(content_sha256, &body[..])?;
-	}
+	let body = req.into_body().collect().await?;
 
 	let conf: CorsConfiguration = from_reader(&body as &[u8])?;
 	conf.validate()?;
@@ -99,154 +81,6 @@ pub async fn handle_put_cors(
 		.body(empty_body())?)
 }
 
-pub async fn handle_options_api(
-	garage: Arc<Garage>,
-	req: &Request<IncomingBody>,
-	bucket_name: Option<String>,
-) -> Result<Response<EmptyBody>, CommonError> {
-	// FIXME: CORS rules of buckets with local aliases are
-	// not taken into account.
-
-	// If the bucket name is a global bucket name,
-	// we try to apply the CORS rules of that bucket.
-	// If a user has a local bucket name that has
-	// the same name, its CORS rules won't be applied
-	// and will be shadowed by the rules of the globally
-	// existing bucket (but this is inevitable because
-	// OPTIONS calls are not auhtenticated).
-	if let Some(bn) = bucket_name {
-		let helper = garage.bucket_helper();
-		let bucket_id = helper.resolve_global_bucket_name(&bn).await?;
-		if let Some(id) = bucket_id {
-			let bucket = garage.bucket_helper().get_existing_bucket(id).await?;
-			let bucket_params = bucket.state.into_option().unwrap();
-			handle_options_for_bucket(req, &bucket_params)
-		} else {
-			// If there is a bucket name in the request, but that name
-			// does not correspond to a global alias for a bucket,
-			// then it's either a non-existing bucket or a local bucket.
-			// We have no way of knowing, because the request is not
-			// authenticated and thus we can't resolve local aliases.
-			// We take the permissive approach of allowing everything,
-			// because we don't want to prevent web apps that use
-			// local bucket names from making API calls.
-			Ok(Response::builder()
-				.header(ACCESS_CONTROL_ALLOW_ORIGIN, "*")
-				.header(ACCESS_CONTROL_ALLOW_METHODS, "*")
-				.status(StatusCode::OK)
-				.body(EmptyBody::new())?)
-		}
-	} else {
-		// If there is no bucket name in the request,
-		// we are doing a ListBuckets call, which we want to allow
-		// for all origins.
-		Ok(Response::builder()
-			.header(ACCESS_CONTROL_ALLOW_ORIGIN, "*")
-			.header(ACCESS_CONTROL_ALLOW_METHODS, "GET")
-			.status(StatusCode::OK)
-			.body(EmptyBody::new())?)
-	}
-}
-
-pub fn handle_options_for_bucket(
-	req: &Request<IncomingBody>,
-	bucket_params: &BucketParams,
-) -> Result<Response<EmptyBody>, CommonError> {
-	let origin = req
-		.headers()
-		.get("Origin")
-		.ok_or_bad_request("Missing Origin header")?
-		.to_str()?;
-	let request_method = req
-		.headers()
-		.get(ACCESS_CONTROL_REQUEST_METHOD)
-		.ok_or_bad_request("Missing Access-Control-Request-Method header")?
-		.to_str()?;
-	let request_headers = match req.headers().get(ACCESS_CONTROL_REQUEST_HEADERS) {
-		Some(h) => h.to_str()?.split(',').map(|h| h.trim()).collect::<Vec<_>>(),
-		None => vec![],
-	};
-
-	if let Some(cors_config) = bucket_params.cors_config.get() {
-		let matching_rule = cors_config
-			.iter()
-			.find(|rule| cors_rule_matches(rule, origin, request_method, request_headers.iter()));
-		if let Some(rule) = matching_rule {
-			let mut resp = Response::builder()
-				.status(StatusCode::OK)
-				.body(EmptyBody::new())?;
-			add_cors_headers(&mut resp, rule).ok_or_internal_error("Invalid CORS configuration")?;
-			return Ok(resp);
-		}
-	}
-
-	Err(CommonError::Forbidden(
-		"This CORS request is not allowed.".into(),
-	))
-}
-
-pub fn find_matching_cors_rule<'a>(
-	bucket_params: &'a BucketParams,
-	req: &Request<impl Body>,
-) -> Result<Option<&'a GarageCorsRule>, Error> {
-	if let Some(cors_config) = bucket_params.cors_config.get() {
-		if let Some(origin) = req.headers().get("Origin") {
-			let origin = origin.to_str()?;
-			let request_headers = match req.headers().get(ACCESS_CONTROL_REQUEST_HEADERS) {
-				Some(h) => h.to_str()?.split(',').map(|h| h.trim()).collect::<Vec<_>>(),
-				None => vec![],
-			};
-			return Ok(cors_config.iter().find(|rule| {
-				cors_rule_matches(rule, origin, req.method().as_ref(), request_headers.iter())
-			}));
-		}
-	}
-	Ok(None)
-}
-
-fn cors_rule_matches<'a, HI, S>(
-	rule: &GarageCorsRule,
-	origin: &'a str,
-	method: &'a str,
-	mut request_headers: HI,
-) -> bool
-where
-	HI: Iterator<Item = S>,
-	S: AsRef<str>,
-{
-	rule.allow_origins.iter().any(|x| x == "*" || x == origin)
-		&& rule.allow_methods.iter().any(|x| x == "*" || x == method)
-		&& request_headers.all(|h| {
-			rule.allow_headers
-				.iter()
-				.any(|x| x == "*" || x == h.as_ref())
-		})
-}
-
-pub fn add_cors_headers(
-	resp: &mut Response<impl Body>,
-	rule: &GarageCorsRule,
-) -> Result<(), http::header::InvalidHeaderValue> {
-	let h = resp.headers_mut();
-	h.insert(
-		ACCESS_CONTROL_ALLOW_ORIGIN,
-		rule.allow_origins.join(", ").parse()?,
-	);
-	h.insert(
-		ACCESS_CONTROL_ALLOW_METHODS,
-		rule.allow_methods.join(", ").parse()?,
-	);
-	h.insert(
-		ACCESS_CONTROL_ALLOW_HEADERS,
-		rule.allow_headers.join(", ").parse()?,
-	);
-	h.insert(
-		ACCESS_CONTROL_EXPOSE_HEADERS,
-		rule.expose_headers.join(", ").parse()?,
-	);
-	Ok(())
-}
-
 // ---- SERIALIZATION AND DESERIALIZATION TO/FROM S3 XML ----
 
 #[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
@@ -254,7 +88,9 @@ pub fn add_cors_headers(
 pub struct CorsConfiguration {
 	#[serde(serialize_with = "xmlns_tag", skip_deserializing)]
 	pub xmlns: (),
-	#[serde(rename = "CORSRule")]
+	// "default" is required to be able to parse an empty list of rules,
+	// cf https://docs.rs/quick-xml/latest/quick_xml/de/#sequences-xsall-and-xssequence-xml-schema-types
+	#[serde(rename = "CORSRule", default)]
 	pub cors_rules: Vec<CorsRule>,
 }
@@ -436,4 +272,26 @@
 		Ok(())
 	}
 
+	#[test]
+	fn test_deserialize_norules() -> Result<(), Error> {
+		let message = r#"<?xml version="1.0" encoding="UTF-8"?>
+<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/" />"#;
+		let conf: CorsConfiguration = from_str(message).unwrap();
+
+		let ref_value = CorsConfiguration {
+			xmlns: (),
+			cors_rules: vec![],
+		};
+		assert_eq! {
+			ref_value,
+			conf
+		};
+
+		let message2 = to_xml_with_header(&ref_value)?;
+
+		let cleanup = |c: &str| c.replace(char::is_whitespace, "");
+		assert_eq!(cleanup(message), cleanup(&message2));
+
+		Ok(())
+	}
 }
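
The `default` annotation is the whole fix here: quick-xml only populates a sequence field when at least one element is present, so an empty `<CORSConfiguration/>` otherwise fails with a missing-field error. A stripped-down sketch of the same behaviour, assuming quick-xml with its serde feature enabled (`Config` and `Rule` are made-up names):

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    // Without `default`, deserializing `<Config/>` fails with
    // "missing field `Rule`"; with it, we get an empty Vec.
    #[serde(rename = "Rule", default)]
    rules: Vec<String>,
}

fn main() {
    let empty: Config = quick_xml::de::from_str("<Config/>").unwrap();
    assert!(empty.rules.is_empty());

    let one: Config = quick_xml::de::from_str("<Config><Rule>a</Rule></Config>").unwrap();
    assert_eq!(one.rules, vec!["a".to_string()]);
}
```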

View file

@@ -1,16 +1,15 @@
-use http_body_util::BodyExt;
 use hyper::{Request, Response, StatusCode};
 
 use garage_util::data::*;
 
 use garage_model::s3::object_table::*;
 
-use crate::helpers::*;
-use crate::s3::api_server::{ReqBody, ResBody};
-use crate::s3::error::*;
-use crate::s3::put::next_timestamp;
-use crate::s3::xml as s3_xml;
-use crate::signature::verify_signed_content;
+use garage_api_common::helpers::*;
+
+use crate::api_server::{ReqBody, ResBody};
+use crate::error::*;
+use crate::put::next_timestamp;
+use crate::xml as s3_xml;
 
 async fn handle_delete_internal(ctx: &ReqCtx, key: &str) -> Result<(Uuid, Uuid), Error> {
 	let ReqCtx {
@@ -67,13 +66,8 @@ pub async fn handle_delete(ctx: ReqCtx, key: &str) -> Result<Response<ResBody>,
 pub async fn handle_delete_objects(
 	ctx: ReqCtx,
 	req: Request<ReqBody>,
-	content_sha256: Option<Hash>,
 ) -> Result<Response<ResBody>, Error> {
-	let body = BodyExt::collect(req.into_body()).await?.to_bytes();
-
-	if let Some(content_sha256) = content_sha256 {
-		verify_signed_content(content_sha256, &body[..])?;
-	}
+	let body = req.into_body().collect().await?;
 
 	let cmd_xml = roxmltree::Document::parse(std::str::from_utf8(&body)?)?;
 	let cmd = parse_delete_objects_xml(&cmd_xml).ok_or_bad_request("Invalid delete XML query")?;

View file

@@ -28,9 +28,10 @@ use garage_util::migrate::Migrate;
 use garage_model::garage::Garage;
 use garage_model::s3::object_table::{ObjectVersionEncryption, ObjectVersionMetaInner};
 
-use crate::common_error::*;
-use crate::s3::checksum::Md5Checksum;
-use crate::s3::error::Error;
+use garage_api_common::common_error::*;
+use garage_api_common::signature::checksum::Md5Checksum;
+
+use crate::error::Error;
 
 const X_AMZ_SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM: HeaderName =
 	HeaderName::from_static("x-amz-server-side-encryption-customer-algorithm");

View file

@@ -1,93 +1,109 @@
 use std::convert::TryInto;
 
-use err_derive::Error;
 use hyper::header::HeaderValue;
 use hyper::{HeaderMap, StatusCode};
+use thiserror::Error;
 
-use crate::common_error::CommonError;
-pub use crate::common_error::{CommonErrorDerivative, OkOrBadRequest, OkOrInternalError};
-use crate::generic_server::ApiError;
-use crate::helpers::*;
-use crate::s3::xml as s3_xml;
-use crate::signature::error::Error as SignatureError;
+use garage_model::helper::error::Error as HelperError;
+
+pub(crate) use garage_api_common::common_error::pass_helper_error;
+
+use garage_api_common::common_error::{
+	commonErrorDerivative, helper_error_as_internal, CommonError,
+};
+
+pub use garage_api_common::common_error::{
+	CommonErrorDerivative, OkOrBadRequest, OkOrInternalError,
+};
+
+use garage_api_common::generic_server::ApiError;
+use garage_api_common::helpers::*;
+use garage_api_common::signature::error::Error as SignatureError;
+
+use crate::xml as s3_xml;
 
 /// Errors of this crate
 #[derive(Debug, Error)]
 pub enum Error {
-	#[error(display = "{}", _0)]
+	#[error("{0}")]
 	/// Error from common error
-	Common(CommonError),
+	Common(#[from] CommonError),
 
 	// Category: cannot process
 	/// Authorization Header Malformed
-	#[error(display = "Authorization header malformed, unexpected scope: {}", _0)]
+	#[error("Authorization header malformed, unexpected scope: {0}")]
 	AuthorizationHeaderMalformed(String),
 
 	/// The object requested don't exists
-	#[error(display = "Key not found")]
+	#[error("Key not found")]
 	NoSuchKey,
 
 	/// The multipart upload requested don't exists
-	#[error(display = "Upload not found")]
+	#[error("Upload not found")]
 	NoSuchUpload,
 
 	/// Precondition failed (e.g. x-amz-copy-source-if-match)
-	#[error(display = "At least one of the preconditions you specified did not hold")]
+	#[error("At least one of the preconditions you specified did not hold")]
 	PreconditionFailed,
 
 	/// Parts specified in CMU request do not match parts actually uploaded
-	#[error(display = "Parts given to CompleteMultipartUpload do not match uploaded parts")]
+	#[error("Parts given to CompleteMultipartUpload do not match uploaded parts")]
 	InvalidPart,
 
 	/// Parts given to CompleteMultipartUpload were not in ascending order
-	#[error(display = "Parts given to CompleteMultipartUpload were not in ascending order")]
+	#[error("Parts given to CompleteMultipartUpload were not in ascending order")]
 	InvalidPartOrder,
 
 	/// In CompleteMultipartUpload: not enough data
 	/// (here we are more lenient than AWS S3)
-	#[error(display = "Proposed upload is smaller than the minimum allowed object size")]
+	#[error("Proposed upload is smaller than the minimum allowed object size")]
 	EntityTooSmall,
 
 	// Category: bad request
 	/// The request contained an invalid UTF-8 sequence in its path or in other parameters
-	#[error(display = "Invalid UTF-8: {}", _0)]
-	InvalidUtf8Str(#[error(source)] std::str::Utf8Error),
+	#[error("Invalid UTF-8: {0}")]
+	InvalidUtf8Str(#[from] std::str::Utf8Error),
 
 	/// The request used an invalid path
-	#[error(display = "Invalid UTF-8: {}", _0)]
-	InvalidUtf8String(#[error(source)] std::string::FromUtf8Error),
+	#[error("Invalid UTF-8: {0}")]
+	InvalidUtf8String(#[from] std::string::FromUtf8Error),
 
 	/// The client sent invalid XML data
-	#[error(display = "Invalid XML: {}", _0)]
+	#[error("Invalid XML: {0}")]
 	InvalidXml(String),
 
 	/// The client sent a range header with invalid value
-	#[error(display = "Invalid HTTP range: {:?}", _0)]
-	InvalidRange(#[error(from)] (http_range::HttpRangeParseError, u64)),
+	#[error("Invalid HTTP range: {0:?}")]
+	InvalidRange((http_range::HttpRangeParseError, u64)),
 
 	/// The client sent a range header with invalid value
-	#[error(display = "Invalid encryption algorithm: {:?}, should be AES256", _0)]
+	#[error("Invalid encryption algorithm: {0:?}, should be AES256")]
 	InvalidEncryptionAlgorithm(String),
 
-	/// The client sent invalid XML data
-	#[error(display = "Invalid digest: {}", _0)]
+	/// The provided digest (checksum) value was invalid
+	#[error("Invalid digest: {0}")]
 	InvalidDigest(String),
 
 	/// The client sent a request for an action not supported by garage
-	#[error(display = "Unimplemented action: {}", _0)]
+	#[error("Unimplemented action: {0}")]
 	NotImplemented(String),
 }
 
-impl<T> From<T> for Error
-where
-	CommonError: From<T>,
-{
-	fn from(err: T) -> Self {
-		Error::Common(CommonError::from(err))
+commonErrorDerivative!(Error);
+
+// Helper errors are always passed as internal errors by default.
+// To pass the specific error code back to the client, use `pass_helper_error`.
+impl From<HelperError> for Error {
+	fn from(err: HelperError) -> Error {
+		Error::Common(helper_error_as_internal(err))
 	}
 }
 
-impl CommonErrorDerivative for Error {}
+impl From<(http_range::HttpRangeParseError, u64)> for Error {
+	fn from(err: (http_range::HttpRangeParseError, u64)) -> Error {
+		Error::InvalidRange(err)
+	}
+}
 
 impl From<roxmltree::Error> for Error {
 	fn from(err: roxmltree::Error) -> Self {
@@ -109,6 +125,7 @@ impl From<SignatureError> for Error {
 				Self::AuthorizationHeaderMalformed(c)
 			}
 			SignatureError::InvalidUtf8Str(i) => Self::InvalidUtf8Str(i),
+			SignatureError::InvalidDigest(d) => Self::InvalidDigest(d),
 		}
 	}
 }
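
The error-handling changes above follow the usual err_derive-to-thiserror migration: display strings become format strings inside the `#[error(...)]` attribute, and `#[error(source)]` plus the old blanket `From` impl are replaced by targeted `#[from]` conversions. A minimal sketch of the pattern using standard-library types only (`AppError` is a made-up example, not the Garage enum):

```rust
use thiserror::Error;

#[derive(Debug, Error)]
enum AppError {
    // `#[from]` also derives the From impl, so `?` converts automatically.
    #[error("Invalid UTF-8: {0}")]
    InvalidUtf8(#[from] std::str::Utf8Error),

    #[error("Unimplemented action: {0}")]
    NotImplemented(String),
}

fn parse(bytes: &[u8]) -> Result<&str, AppError> {
    // The Utf8Error from from_utf8 is converted into AppError by `?`.
    Ok(std::str::from_utf8(bytes)?)
}

fn main() {
    assert!(parse(b"hello").is_ok());
    let err = parse(&[0xff]).unwrap_err();
    println!("{err}");
    println!("{}", AppError::NotImplemented("PutBucketTagging".into()));
}
```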

View file

@@ -2,36 +2,39 @@
 use std::collections::BTreeMap;
 use std::convert::TryInto;
 use std::sync::Arc;
-use std::time::{Duration, UNIX_EPOCH};
+use std::time::{Duration, SystemTime, UNIX_EPOCH};
 
 use bytes::Bytes;
 use futures::future;
 use futures::stream::{self, Stream, StreamExt};
 use http::header::{
-	ACCEPT_RANGES, CACHE_CONTROL, CONTENT_DISPOSITION, CONTENT_ENCODING, CONTENT_LANGUAGE,
-	CONTENT_LENGTH, CONTENT_RANGE, CONTENT_TYPE, ETAG, EXPIRES, IF_MODIFIED_SINCE, IF_NONE_MATCH,
-	LAST_MODIFIED, RANGE,
+	HeaderMap, HeaderName, ACCEPT_RANGES, CACHE_CONTROL, CONTENT_DISPOSITION, CONTENT_ENCODING,
+	CONTENT_LANGUAGE, CONTENT_LENGTH, CONTENT_RANGE, CONTENT_TYPE, ETAG, EXPIRES, IF_MATCH,
+	IF_MODIFIED_SINCE, IF_NONE_MATCH, IF_UNMODIFIED_SINCE, LAST_MODIFIED, RANGE,
 };
-use hyper::{body::Body, Request, Response, StatusCode};
+use hyper::{Request, Response, StatusCode};
 use tokio::sync::mpsc;
 
 use garage_net::stream::ByteStream;
 use garage_rpc::rpc_helper::OrderTag;
 use garage_table::EmptyKey;
 use garage_util::data::*;
-use garage_util::error::OkOrMessage;
+use garage_util::error::{Error as UtilError, OkOrMessage};
 
 use garage_model::garage::Garage;
 use garage_model::s3::object_table::*;
 use garage_model::s3::version_table::*;
 
-use crate::helpers::*;
-use crate::s3::api_server::ResBody;
-use crate::s3::checksum::{add_checksum_response_headers, X_AMZ_CHECKSUM_MODE};
-use crate::s3::encryption::EncryptionParams;
-use crate::s3::error::*;
+use garage_api_common::common_error::CommonError;
+use garage_api_common::helpers::*;
+use garage_api_common::signature::checksum::{add_checksum_response_headers, X_AMZ_CHECKSUM_MODE};
 
-const X_AMZ_MP_PARTS_COUNT: &str = "x-amz-mp-parts-count";
+use crate::api_server::ResBody;
+use crate::copy::*;
+use crate::encryption::EncryptionParams;
+use crate::error::*;
+
+const X_AMZ_MP_PARTS_COUNT: HeaderName = HeaderName::from_static("x-amz-mp-parts-count");
 
 #[derive(Default)]
 pub struct GetObjectOverrides {
@@ -68,14 +71,11 @@ fn object_headers(
 	// See: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html
 	let mut headers_by_name = BTreeMap::new();
 	for (name, value) in meta_inner.headers.iter() {
-		match headers_by_name.get_mut(name) {
-			None => {
-				headers_by_name.insert(name, vec![value.as_str()]);
-			}
-			Some(headers) => {
-				headers.push(value.as_str());
-			}
-		}
+		let name_lower = name.to_ascii_lowercase();
+		headers_by_name
+			.entry(name_lower)
+			.or_insert(vec![])
+			.push(value.as_str());
 	}
 
 	for (name, values) in headers_by_name {
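
The rewrite above also lowercases header names before grouping, so that stored metadata headers differing only in case collapse into a single response header. A self-contained sketch of the entry-API pattern (made-up header data):

```rust
use std::collections::BTreeMap;

fn main() {
    let headers = [("X-Amz-Meta-A", "1"), ("x-amz-meta-a", "2"), ("Expires", "0")];

    let mut by_name: BTreeMap<String, Vec<&str>> = BTreeMap::new();
    for (name, value) in headers {
        by_name
            .entry(name.to_ascii_lowercase()) // case-insensitive grouping
            .or_insert_with(Vec::new)
            .push(value);
    }

    assert_eq!(by_name["x-amz-meta-a"], vec!["1", "2"]);
    assert_eq!(by_name["expires"], vec!["0"]);
}
```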
@@ -117,49 +117,29 @@ fn getobject_override_headers(
 	Ok(())
 }
 
-fn try_answer_cached(
+fn handle_http_precondition(
 	version: &ObjectVersion,
 	version_meta: &ObjectVersionMeta,
-	req: &Request<impl Body>,
-) -> Option<Response<ResBody>> {
-	// <trinity> It is possible, and is even usually the case, [that both If-None-Match and
-	// If-Modified-Since] are present in a request. In this situation If-None-Match takes
-	// precedence and If-Modified-Since is ignored (as per 6.Precedence from rfc7232). The rational
-	// being that etag based matching is more accurate, it has no issue with sub-second precision
-	// for instance (in case of very fast updates)
-	let cached = if let Some(none_match) = req.headers().get(IF_NONE_MATCH) {
-		let none_match = none_match.to_str().ok()?;
-		let expected = format!("\"{}\"", version_meta.etag);
-		let found = none_match
-			.split(',')
-			.map(str::trim)
-			.any(|etag| etag == expected || etag == "\"*\"");
-		found
-	} else if let Some(modified_since) = req.headers().get(IF_MODIFIED_SINCE) {
-		let modified_since = modified_since.to_str().ok()?;
-		let client_date = httpdate::parse_http_date(modified_since).ok()?;
-		let server_date = UNIX_EPOCH + Duration::from_millis(version.timestamp);
-		client_date >= server_date
-	} else {
-		false
-	};
+	req: &Request<()>,
+) -> Result<Option<Response<ResBody>>, Error> {
+	let precondition_headers = PreconditionHeaders::parse(req)?;
 
-	if cached {
-		Some(
+	if let Some(status_code) = precondition_headers.check(&version, &version_meta.etag)? {
+		Ok(Some(
 			Response::builder()
-				.status(StatusCode::NOT_MODIFIED)
+				.status(status_code)
 				.body(empty_body())
 				.unwrap(),
-		)
+		))
 	} else {
-		None
+		Ok(None)
 	}
 }
 
 /// Handle HEAD request
 pub async fn handle_head(
 	ctx: ReqCtx,
-	req: &Request<impl Body>,
+	req: &Request<()>,
 	key: &str,
 	part_number: Option<u64>,
 ) -> Result<Response<ResBody>, Error> {
@@ -169,7 +149,7 @@ pub async fn handle_head(
 /// Handle HEAD request for website
 pub async fn handle_head_without_ctx(
 	garage: Arc<Garage>,
-	req: &Request<impl Body>,
+	req: &Request<()>,
 	bucket_id: Uuid,
 	key: &str,
 	part_number: Option<u64>,
@@ -198,8 +178,8 @@ pub async fn handle_head_without_ctx(
 		_ => unreachable!(),
 	};
 
-	if let Some(cached) = try_answer_cached(object_version, version_meta, req) {
-		return Ok(cached);
+	if let Some(res) = handle_http_precondition(object_version, version_meta, req)? {
+		return Ok(res);
 	}
 
 	let (encryption, headers) =
@@ -236,6 +216,7 @@ pub async fn handle_head_without_ctx(
 			.get(&object_version.uuid, &EmptyKey)
 			.await?
 			.ok_or(Error::NoSuchKey)?;
+		check_version_not_deleted(&version)?;
 
 		let (part_offset, part_end) =
 			calculate_part_bounds(&version, pn).ok_or(Error::InvalidPart)?;
@@ -280,7 +261,7 @@ pub async fn handle_head_without_ctx(
 /// Handle GET request
 pub async fn handle_get(
 	ctx: ReqCtx,
-	req: &Request<impl Body>,
+	req: &Request<()>,
 	key: &str,
 	part_number: Option<u64>,
 	overrides: GetObjectOverrides,
@@ -291,7 +272,7 @@ pub async fn handle_get(
 /// Handle GET request
 pub async fn handle_get_without_ctx(
 	garage: Arc<Garage>,
-	req: &Request<impl Body>,
+	req: &Request<()>,
 	bucket_id: Uuid,
 	key: &str,
 	part_number: Option<u64>,
@@ -320,8 +301,8 @@ pub async fn handle_get_without_ctx(
 		ObjectVersionData::FirstBlock(meta, _) => meta,
 	};
 
-	if let Some(cached) = try_answer_cached(last_v, last_v_meta, req) {
-		return Ok(cached);
+	if let Some(res) = handle_http_precondition(last_v, last_v_meta, req)? {
+		return Ok(res);
 	}
 
 	let (enc, headers) =
@@ -342,7 +323,12 @@ pub async fn handle_get_without_ctx(
 				enc,
 				&headers,
 				pn,
-				checksum_mode,
+				ChecksumMode {
+					// TODO: for multipart uploads, checksums of each part should be stored
+					// so that we can return the corresponding checksum here
+					// https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
+					enabled: false,
+				},
 			)
 			.await
 		}
@@ -356,7 +342,12 @@ pub async fn handle_get_without_ctx(
 				&headers,
 				range.start,
 				range.start + range.length,
-				checksum_mode,
+				ChecksumMode {
+					// TODO: for range queries that align with part boundaries,
+					// we should return the saved checksum of the part
+					// https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html
+					enabled: false,
+				},
 			)
 			.await
 		}
@@ -376,6 +367,21 @@ pub async fn handle_get_without_ctx(
 	}
 }
 
+pub(crate) fn check_version_not_deleted(version: &Version) -> Result<(), Error> {
+	if version.deleted.get() {
+		// the version was deleted between when the object_table was consulted
+		// and now, this could mean the object was deleted, or overriden.
+		// Rather than say the key doesn't exist, return a transient error
+		// to signal the client to try again.
+		return Err(CommonError::InternalError(UtilError::Message(
+			"conflict/inconsistency between object and version state, version is deleted"
+				.to_string(),
+		))
+		.into());
+	}
+	Ok(())
+}
+
 async fn handle_get_full(
 	garage: Arc<Garage>,
 	version: &ObjectVersion,
@@ -442,6 +448,7 @@ pub fn full_object_byte_stream(
 			.ok_or_message("channel closed")?;
 
 		let version = version_fut.await.unwrap()?.ok_or(Error::NoSuchKey)?;
+		check_version_not_deleted(&version)?;
 		for (i, (_, vb)) in version.blocks.items().iter().enumerate().skip(1) {
 			let stream_block_i = encryption
 				.get_block(&garage, &vb.hash, Some(order_stream.order(i as u64)))
@@ -457,6 +464,14 @@ pub fn full_object_byte_stream(
 		{
 			Ok(()) => (),
 			Err(e) => {
+				// TODO i think this is a bad idea, we should log
+				// an error and stop there. If the error happens to
+				// be exactly the size of what hasn't been streamed
+				// yet, the client will see the request as a
+				// success
+				// instead truncating the output notify the client
+				// something happened with their download, so that
+				// they can retry it
 				let _ = tx.send(error_stream_item(e)).await;
 			}
 		}
@@ -508,7 +523,7 @@ async fn handle_get_range(
 			.get(&version.uuid, &EmptyKey)
 			.await?
 			.ok_or(Error::NoSuchKey)?;
-
+		check_version_not_deleted(&version)?;
 		let body =
 			body_from_blocks_range(garage, encryption, version.blocks.items(), begin, end);
 		Ok(resp_builder.body(body)?)
@@ -559,6 +574,8 @@ async fn handle_get_part(
 				.await?
 				.ok_or(Error::NoSuchKey)?;
 
+			check_version_not_deleted(&version)?;
+
 			let (begin, end) =
 				calculate_part_bounds(&version, part_number).ok_or(Error::InvalidPart)?;
 
@@ -579,7 +596,7 @@ async fn handle_get_part(
 }
 
 fn parse_range_header(
-	req: &Request<impl Body>,
+	req: &Request<()>,
 	total_size: u64,
 ) -> Result<Option<http_range::HttpRange>, Error> {
 	let range = match req.headers().get(RANGE) {
@@ -620,7 +637,7 @@ struct ChecksumMode {
 	enabled: bool,
 }
 
-fn checksum_mode(req: &Request<impl Body>) -> ChecksumMode {
+fn checksum_mode(req: &Request<()>) -> ChecksumMode {
 	ChecksumMode {
 		enabled: req
 			.headers()
@@ -753,3 +770,118 @@ fn std_error_from_read_error<E: std::fmt::Display>(e: E) -> std::io::Error {
 		format!("Error while reading object data: {}", e),
 	)
 }
+
+// ----
+
+pub struct PreconditionHeaders {
+	if_match: Option<Vec<String>>,
+	if_modified_since: Option<SystemTime>,
+	if_none_match: Option<Vec<String>>,
+	if_unmodified_since: Option<SystemTime>,
+}
+
+impl PreconditionHeaders {
+	fn parse<B>(req: &Request<B>) -> Result<Self, Error> {
+		Self::parse_with(
+			req.headers(),
+			&IF_MATCH,
+			&IF_NONE_MATCH,
+			&IF_MODIFIED_SINCE,
+			&IF_UNMODIFIED_SINCE,
+		)
+	}
+
+	pub(crate) fn parse_copy_source<B>(req: &Request<B>) -> Result<Self, Error> {
+		Self::parse_with(
+			req.headers(),
+			&X_AMZ_COPY_SOURCE_IF_MATCH,
+			&X_AMZ_COPY_SOURCE_IF_NONE_MATCH,
+			&X_AMZ_COPY_SOURCE_IF_MODIFIED_SINCE,
+			&X_AMZ_COPY_SOURCE_IF_UNMODIFIED_SINCE,
+		)
+	}
+
+	fn parse_with(
+		headers: &HeaderMap,
+		hdr_if_match: &HeaderName,
+		hdr_if_none_match: &HeaderName,
+		hdr_if_modified_since: &HeaderName,
+		hdr_if_unmodified_since: &HeaderName,
+	) -> Result<Self, Error> {
+		Ok(Self {
+			if_match: headers
+				.get(hdr_if_match)
+				.map(|x| x.to_str())
+				.transpose()?
+				.map(|x| {
+					x.split(',')
+						.map(|m| m.trim().trim_matches('"').to_string())
+						.collect::<Vec<_>>()
+				}),
+			if_none_match: headers
+				.get(hdr_if_none_match)
+				.map(|x| x.to_str())
+				.transpose()?
+				.map(|x| {
+					x.split(',')
+						.map(|m| m.trim().trim_matches('"').to_string())
+						.collect::<Vec<_>>()
+				}),
+			if_modified_since: headers
+				.get(hdr_if_modified_since)
+				.map(|x| x.to_str())
+				.transpose()?
+				.map(httpdate::parse_http_date)
+				.transpose()
+				.ok_or_bad_request("Invalid date in if-modified-since")?,
+			if_unmodified_since: headers
+				.get(hdr_if_unmodified_since)
+				.map(|x| x.to_str())
+				.transpose()?
+				.map(httpdate::parse_http_date)
+				.transpose()
+				.ok_or_bad_request("Invalid date in if-unmodified-since")?,
+		})
+	}
+
+	fn check(&self, v: &ObjectVersion, etag: &str) -> Result<Option<StatusCode>, Error> {
+		// we store date with ms precision, but headers are precise to the second: truncate
+		// the timestamp to handle the same-second edge case
+		let v_date = UNIX_EPOCH + Duration::from_secs(v.timestamp / 1000);
+
+		// Implemented from https://datatracker.ietf.org/doc/html/rfc7232#section-6
+		if let Some(im) = &self.if_match {
+			// Step 1: if-match is present
+			if !im.iter().any(|x| x == etag || x == "*") {
+				return Ok(Some(StatusCode::PRECONDITION_FAILED));
+			}
+		} else if let Some(ius) = &self.if_unmodified_since {
+			// Step 2: if-unmodified-since is present, and if-match is absent
+			if v_date > *ius {
+				return Ok(Some(StatusCode::PRECONDITION_FAILED));
+			}
+		}
+
+		if let Some(inm) = &self.if_none_match {
+			// Step 3: if-none-match is present
+			if inm.iter().any(|x| x == etag || x == "*") {
+				return Ok(Some(StatusCode::NOT_MODIFIED));
+			}
+		} else if let Some(ims) = &self.if_modified_since {
+			// Step 4: if-modified-since is present, and if-none-match is absent
+			if v_date <= *ims {
+				return Ok(Some(StatusCode::NOT_MODIFIED));
+			}
+		}
+
+		Ok(None)
+	}
+
+	pub(crate) fn check_copy_source(&self, v: &ObjectVersion, etag: &str) -> Result<(), Error> {
+		match self.check(v, etag)? {
+			Some(_) => Err(Error::PreconditionFailed),
+			None => Ok(()),
+		}
+	}
+}
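
The evaluation order in `check` follows RFC 7232 §6: `If-Match` is tested first and masks `If-Unmodified-Since`, then `If-None-Match` is tested and masks `If-Modified-Since` (etag comparison is more precise than second-granularity dates, which is also why the stored millisecond timestamp is truncated before comparing). A compacted sketch of that precedence with plain types (the date comparisons are precomputed booleans here, not real header parsing):

```rust
fn check(
    if_match: Option<&[&str]>,
    unmodified_since_holds: Option<bool>,
    if_none_match: Option<&[&str]>,
    modified_since_holds: Option<bool>,
    etag: &str,
) -> Option<u16> {
    // Step 1/2: If-Match takes precedence over If-Unmodified-Since.
    if let Some(im) = if_match {
        if !im.iter().any(|x| *x == etag || *x == "*") {
            return Some(412); // Precondition Failed
        }
    } else if unmodified_since_holds == Some(false) {
        return Some(412);
    }
    // Step 3/4: If-None-Match takes precedence over If-Modified-Since.
    if let Some(inm) = if_none_match {
        if inm.iter().any(|x| *x == etag || *x == "*") {
            return Some(304); // Not Modified (for GET/HEAD)
        }
    } else if modified_since_holds == Some(false) {
        return Some(304);
    }
    None
}

fn main() {
    // If-None-Match matches: 304, even when If-Modified-Since would pass.
    assert_eq!(check(None, None, Some(&["abc"]), Some(true), "abc"), Some(304));
    // If-Match mismatch: 412, regardless of the date condition.
    assert_eq!(check(Some(&["xyz"]), Some(true), None, None, "abc"), Some(412));
    // No preconditions: proceed with the request.
    assert_eq!(check(None, None, None, None, "abc"), None);
}
```

For copy sources, the same checks are reused but any failure maps to the S3 `PreconditionFailed` error rather than a 304/412 response, as `check_copy_source` shows.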

View file

@@ -1,3 +1,6 @@
+#[macro_use]
+extern crate tracing;
+
 pub mod api_server;
 pub mod error;
@@ -11,9 +14,8 @@ mod list;
 mod multipart;
 mod post_object;
 mod put;
-mod website;
+pub mod website;
 
-mod checksum;
 mod encryption;
 mod router;
 pub mod xml;

View file

@@ -1,21 +1,19 @@
 use quick_xml::de::from_reader;
 
-use http_body_util::BodyExt;
 use hyper::{Request, Response, StatusCode};
 
 use serde::{Deserialize, Serialize};
 
-use crate::helpers::*;
-use crate::s3::api_server::{ReqBody, ResBody};
-use crate::s3::error::*;
-use crate::s3::xml::{to_xml_with_header, xmlns_tag, IntValue, Value};
-use crate::signature::verify_signed_content;
+use garage_api_common::helpers::*;
+
+use crate::api_server::{ReqBody, ResBody};
+use crate::error::*;
+use crate::xml::{to_xml_with_header, xmlns_tag, IntValue, Value};
 
 use garage_model::bucket_table::{
 	parse_lifecycle_date, Bucket, LifecycleExpiration as GarageLifecycleExpiration,
 	LifecycleFilter as GarageLifecycleFilter, LifecycleRule as GarageLifecycleRule,
 };
-use garage_util::data::*;
 
 pub async fn handle_get_lifecycle(ctx: ReqCtx) -> Result<Response<ResBody>, Error> {
 	let ReqCtx { bucket_params, .. } = ctx;
@@ -29,7 +27,7 @@ pub async fn handle_get_lifecycle(ctx: ReqCtx) -> Result<Response<ResBody>, Erro
 			.body(string_body(xml))?)
 	} else {
 		Ok(Response::builder()
-			.status(StatusCode::NO_CONTENT)
+			.status(StatusCode::NOT_FOUND)
 			.body(empty_body())?)
 	}
 }
@@ -55,7 +53,6 @@ pub async fn handle_delete_lifecycle(ctx: ReqCtx) -> Result<Response<ResBody>, E
 pub async fn handle_put_lifecycle(
 	ctx: ReqCtx,
 	req: Request<ReqBody>,
-	content_sha256: Option<Hash>,
 ) -> Result<Response<ResBody>, Error> {
 	let ReqCtx {
 		garage,
@@ -64,11 +61,7 @@ pub async fn handle_put_lifecycle(
 		..
 	} = ctx;
 
-	let body = BodyExt::collect(req.into_body()).await?.to_bytes();
-
-	if let Some(content_sha256) = content_sha256 {
-		verify_signed_content(content_sha256, &body[..])?;
-	}
+	let body = req.into_body().collect().await?;
 
 	let conf: LifecycleConfiguration = from_reader(&body as &[u8])?;
 	let config = conf

View file

@@ -13,13 +13,14 @@ use garage_model::s3::object_table::*;
 use garage_table::EnumerationOrder;

-use crate::encoding::*;
-use crate::helpers::*;
-use crate::s3::api_server::{ReqBody, ResBody};
-use crate::s3::encryption::EncryptionParams;
-use crate::s3::error::*;
-use crate::s3::multipart as s3_multipart;
-use crate::s3::xml as s3_xml;
+use garage_api_common::encoding::*;
+use garage_api_common::helpers::*;
+
+use crate::api_server::{ReqBody, ResBody};
+use crate::encryption::EncryptionParams;
+use crate::error::*;
+use crate::multipart as s3_multipart;
+use crate::xml as s3_xml;

 const DUMMY_NAME: &str = "Dummy Key";
 const DUMMY_KEY: &str = "GKDummyKey";
@@ -53,7 +54,6 @@ pub struct ListMultipartUploadsQuery {
 #[derive(Debug)]
 pub struct ListPartsQuery {
     pub bucket_name: String,
-    pub bucket_id: Uuid,
     pub key: String,
     pub upload_id: String,
     pub part_number_marker: Option<u64>,
@@ -398,7 +398,7 @@ enum ExtractionResult {
         key: String,
     },
     // Fallback key is used for legacy APIs that only support
-    // exlusive pagination (and not inclusive one).
+    // exclusive pagination (and not inclusive one).
     SkipTo {
         key: String,
         fallback_key: Option<String>,
@@ -408,7 +408,7 @@
 #[derive(PartialEq, Clone, Debug)]
 enum RangeBegin {
     // Fallback key is used for legacy APIs that only support
-    // exlusive pagination (and not inclusive one).
+    // exclusive pagination (and not inclusive one).
     IncludingKey {
         key: String,
         fallback_key: Option<String>,
@@ -1244,10 +1244,8 @@ mod tests {
     #[test]
     fn test_fetch_part_info() -> Result<(), Error> {
-        let uuid = Uuid::from([0x08; 32]);
-
         let mut query = ListPartsQuery {
             bucket_name: "a".to_string(),
-            bucket_id: uuid,
             key: "a".to_string(),
             upload_id: "xx".to_string(),
             part_number_marker: None,
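
The fallback_key comments corrected above hinge on one distinction: an exclusive marker resumes a listing strictly after the given key, so to resume at a key, a legacy exclusive-only API must be handed the key that precedes it, which is what fallback_key records. A small illustrative sketch of the difference, not Garage's listing internals:

// An exclusive marker restarts *after* the given key; resuming *at* a key
// therefore requires remembering the preceding key (the "fallback key").
fn list_after<'a>(keys: &'a [&'a str], exclusive_marker: Option<&str>) -> &'a [&'a str] {
    match exclusive_marker {
        None => keys,
        Some(m) => {
            let idx = keys.iter().position(|k| *k > m).unwrap_or(keys.len());
            &keys[idx..]
        }
    }
}

fn main() {
    let keys = ["a", "b", "c", "d"];
    // Inclusive resume at "c" would yield ["c", "d"]; with an exclusive-only
    // API we pass the preceding key "b" as the marker to get the same page.
    assert_eq!(list_after(&keys, Some("b")), ["c", "d"]);
}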

View file

@@ -1,13 +1,20 @@
 use std::collections::HashMap;
-use std::convert::TryInto;
+use std::convert::{TryFrom, TryInto};
+use std::hash::Hasher;
 use std::sync::Arc;

 use base64::prelude::*;
+use crc32c::Crc32cHasher as Crc32c;
+use crc32fast::Hasher as Crc32;
 use futures::prelude::*;
 use hyper::{Request, Response};
+use md5::{Digest, Md5};
+use sha1::Sha1;
+use sha2::Sha256;

 use garage_table::*;
 use garage_util::data::*;
+use garage_util::error::OkOrMessage;

 use garage_model::garage::Garage;
 use garage_model::s3::block_ref_table::*;
@@ -15,14 +22,14 @@ use garage_model::s3::mpu_table::*;
 use garage_model::s3::object_table::*;
 use garage_model::s3::version_table::*;

-use crate::helpers::*;
-use crate::s3::api_server::{ReqBody, ResBody};
-use crate::s3::checksum::*;
-use crate::s3::encryption::EncryptionParams;
-use crate::s3::error::*;
-use crate::s3::put::*;
-use crate::s3::xml as s3_xml;
-use crate::signature::verify_signed_content;
+use garage_api_common::helpers::*;
+use garage_api_common::signature::checksum::*;
+
+use crate::api_server::{ReqBody, ResBody};
+use crate::encryption::EncryptionParams;
+use crate::error::*;
+use crate::put::*;
+use crate::xml as s3_xml;

 // ----
@@ -42,7 +49,7 @@ pub async fn handle_create_multipart_upload(
     let upload_id = gen_uuid();
     let timestamp = next_timestamp(existing_object.as_ref());

-    let headers = get_headers(req.headers())?;
+    let headers = extract_metadata_headers(req.headers())?;
     let meta = ObjectVersionMetaInner {
         headers,
         checksum: None,
@@ -93,7 +100,6 @@ pub async fn handle_put_part(
     key: &str,
     part_number: u64,
     upload_id: &str,
-    content_sha256: Option<Hash>,
 ) -> Result<Response<ResBody>, Error> {
     let ReqCtx { garage, .. } = &ctx;
@@ -104,17 +110,30 @@
             Some(x) => Some(x.to_str()?.to_string()),
             None => None,
         },
-        sha256: content_sha256,
+        sha256: None,
         extra: request_checksum_value(req.headers())?,
     };

-    // Read first chuck, and at the same time try to get object to see if it exists
     let key = key.to_string();

-    let (req_head, req_body) = req.into_parts();
-    let stream = body_stream(req_body);
+    let (req_head, mut req_body) = req.into_parts();
+
+    // Before we stream the body, configure the needed checksums.
+    req_body.add_expected_checksums(expected_checksums.clone());
+    // TODO: avoid parsing encryption headers twice...
+    if !EncryptionParams::new_from_headers(&garage, &req_head.headers)?.is_encrypted() {
+        // For non-encrypted objects, we need to compute the md5sum in all cases
+        // (even if content-md5 is not set), because it is used as an etag of the
+        // part, which is in turn used in the etag computation of the whole object
+        req_body.add_md5();
+    }
+
+    let (stream, stream_checksums) = req_body.streaming_with_checksums();
+    let stream = stream.map_err(Error::from);

     let mut chunker = StreamChunker::new(stream, garage.config.block_size);

+    // Read first chuck, and at the same time try to get object to see if it exists
     let ((_, object_version, mut mpu), first_block) =
         futures::try_join!(get_upload(&ctx, &key, &upload_id), chunker.next(),)?;
@@ -171,21 +190,21 @@ pub async fn handle_put_part(
     garage.version_table.insert(&version).await?;

     // Copy data to version
-    let checksummer =
-        Checksummer::init(&expected_checksums, !encryption.is_encrypted()).add(checksum_algorithm);
-    let (total_size, checksums, _) = read_and_put_blocks(
+    let (total_size, _, _) = read_and_put_blocks(
         &ctx,
         &version,
         encryption,
         part_number,
         first_block,
-        &mut chunker,
-        checksummer,
+        chunker,
+        Checksummer::new(),
     )
     .await?;

-    // Verify that checksums map
-    checksums.verify(&expected_checksums)?;
+    // Verify that checksums match
+    let checksums = stream_checksums
+        .await
+        .ok_or_internal_error("checksum calculation")??;

     // Store part etag in version
     let etag = encryption.etag_from_md5(&checksums.md5);
@@ -247,7 +266,6 @@ pub async fn handle_complete_multipart_upload(
     req: Request<ReqBody>,
     key: &str,
     upload_id: &str,
-    content_sha256: Option<Hash>,
 ) -> Result<Response<ResBody>, Error> {
     let ReqCtx {
         garage,
@@ -259,11 +277,7 @@
     let expected_checksum = request_checksum_value(&req_head.headers)?;

-    let body = http_body_util::BodyExt::collect(req_body).await?.to_bytes();
-    if let Some(content_sha256) = content_sha256 {
-        verify_signed_content(content_sha256, &body[..])?;
-    }
+    let body = req_body.collect().await?;

     let body_xml = roxmltree::Document::parse(std::str::from_utf8(&body)?)?;
     let body_list_of_parts = parse_complete_multipart_upload_body(&body_xml)
@@ -429,7 +443,16 @@
     // Send response saying ok we're done
     let result = s3_xml::CompleteMultipartUploadResult {
         xmlns: (),
-        location: None,
+        // FIXME: the location returned is not always correct:
+        // - we always return https, but maybe some people do http
+        // - if root_domain is not specified, a full URL is not returned
+        location: garage
+            .config
+            .s3_api
+            .root_domain
+            .as_ref()
+            .map(|rd| s3_xml::Value(format!("https://{}.{}/{}", bucket_name, rd, key)))
+            .or(Some(s3_xml::Value(format!("/{}/{}", bucket_name, key)))),
         bucket: s3_xml::Value(bucket_name.to_string()),
         key: s3_xml::Value(key),
         etag: s3_xml::Value(format!("\"{}\"", etag)),
@@ -592,3 +615,99 @@ fn parse_complete_multipart_upload_body(
     Some(parts)
 }
+
+// ====== checksummer ====
+
+#[derive(Default)]
+pub(crate) struct MultipartChecksummer {
+    pub md5: Md5,
+    pub extra: Option<MultipartExtraChecksummer>,
+}
+
+pub(crate) enum MultipartExtraChecksummer {
+    Crc32(Crc32),
+    Crc32c(Crc32c),
+    Sha1(Sha1),
+    Sha256(Sha256),
+}
+
+impl MultipartChecksummer {
+    pub(crate) fn init(algo: Option<ChecksumAlgorithm>) -> Self {
+        Self {
+            md5: Md5::new(),
+            extra: match algo {
+                None => None,
+                Some(ChecksumAlgorithm::Crc32) => {
+                    Some(MultipartExtraChecksummer::Crc32(Crc32::new()))
+                }
+                Some(ChecksumAlgorithm::Crc32c) => {
+                    Some(MultipartExtraChecksummer::Crc32c(Crc32c::default()))
+                }
+                Some(ChecksumAlgorithm::Sha1) => Some(MultipartExtraChecksummer::Sha1(Sha1::new())),
+                Some(ChecksumAlgorithm::Sha256) => {
+                    Some(MultipartExtraChecksummer::Sha256(Sha256::new()))
+                }
+            },
+        }
+    }
+
+    pub(crate) fn update(
+        &mut self,
+        etag: &str,
+        checksum: Option<ChecksumValue>,
+    ) -> Result<(), Error> {
+        self.md5
+            .update(&hex::decode(&etag).ok_or_message("invalid etag hex")?);
+        match (&mut self.extra, checksum) {
+            (None, _) => (),
+            (
+                Some(MultipartExtraChecksummer::Crc32(ref mut crc32)),
+                Some(ChecksumValue::Crc32(x)),
+            ) => {
+                crc32.update(&x);
+            }
+            (
+                Some(MultipartExtraChecksummer::Crc32c(ref mut crc32c)),
+                Some(ChecksumValue::Crc32c(x)),
+            ) => {
+                crc32c.write(&x);
+            }
+            (Some(MultipartExtraChecksummer::Sha1(ref mut sha1)), Some(ChecksumValue::Sha1(x))) => {
+                sha1.update(&x);
+            }
+            (
+                Some(MultipartExtraChecksummer::Sha256(ref mut sha256)),
+                Some(ChecksumValue::Sha256(x)),
+            ) => {
+                sha256.update(&x);
+            }
+            (Some(_), b) => {
+                return Err(Error::internal_error(format!(
+                    "part checksum was not computed correctly, got: {:?}",
+                    b
+                )))
+            }
+        }
+        Ok(())
+    }
+
+    pub(crate) fn finalize(self) -> (Md5Checksum, Option<ChecksumValue>) {
+        let md5 = self.md5.finalize()[..].try_into().unwrap();
+        let extra = match self.extra {
+            None => None,
+            Some(MultipartExtraChecksummer::Crc32(crc32)) => {
+                Some(ChecksumValue::Crc32(u32::to_be_bytes(crc32.finalize())))
+            }
+            Some(MultipartExtraChecksummer::Crc32c(crc32c)) => Some(ChecksumValue::Crc32c(
+                u32::to_be_bytes(u32::try_from(crc32c.finish()).unwrap()),
+            )),
+            Some(MultipartExtraChecksummer::Sha1(sha1)) => {
+                Some(ChecksumValue::Sha1(sha1.finalize()[..].try_into().unwrap()))
+            }
+            Some(MultipartExtraChecksummer::Sha256(sha256)) => Some(ChecksumValue::Sha256(
+                sha256.finalize()[..].try_into().unwrap(),
+            )),
+        };
+        (md5, extra)
+    }
+}
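
The outer md5 that update() feeds with hex-decoded part etags follows the familiar S3 multipart etag convention: the whole-object etag is the md5 of the concatenated binary part md5s, suffixed with the part count. A self-contained sketch of just that convention (the part digests below are md5("hello") and md5("world"), chosen for illustration):

use md5::{Digest, Md5};

// Compute an S3-style multipart etag from the parts' hex etags:
// md5 over the concatenation of the decoded part digests, plus "-<n>".
fn multipart_etag(part_etags_hex: &[&str]) -> String {
    let mut md5 = Md5::new();
    for etag in part_etags_hex {
        md5.update(hex::decode(etag).expect("etag is hex"));
    }
    format!("{}-{}", hex::encode(md5.finalize()), part_etags_hex.len())
}

fn main() {
    let parts = [
        "5d41402abc4b2a76b9719d911017c592", // md5("hello")
        "7d793037a0760186574b0282f2f435e7", // md5("world")
    ];
    println!("{}", multipart_etag(&parts));
}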

Some files were not shown because too many files have changed in this diff.