Age | Commit message | Author |
|
|
|
Tradefed cannot depend on dependencies built with a Java version
newer than 11.
Test: m remote-execution-java-proto remote-execution-java-grpc
Bug: 267831518
Change-Id: I1a8b8167870b17025f3cd7aa046672b8fbaa412c
|
|
This project was upgraded with external_updater.
Usage: tools/external_updater/updater.sh update external/bazelbuild-remote-apis
For more info, check https://cs.android.com/android/platform/superproject/+/main:tools/external_updater/README.md
Test: TreeHugger
Change-Id: I0050def086d535476e499619f141e88039e0830f
|
|
Downstream tooling wants to use remote_execution.proto to facilitate
caching.
Test: m remote-execution-java-proto remote-execution-java-grpc
Bug: 335918437
Change-Id: Ie872713cd575b31d023b01dd7ec806cd0c2f935c
|
|
Change-Id: Ic5945f7de8239515c6324428aa229f1b7ef98e09
|
|
* Support digest functions in remote_asset API
* Add digest_function to response and directory rpc
* change digest_function in responses to bool
* Revert "change digest_function in responses to bool"
This reverts commit a7496382b94f838b9001b6300ba515870b2ba05b.
---------
Co-authored-by: Tyler Williams <williams.tyler@gmail.com>
|
|
Match the capabilities of GetActionRequest to support hints for composing ActionResults for Execute
|
|
* Add readme notes for worker apis
* Update README.md
|
|
|
|
It seems bad form to special-case SHA-256 while not saying anything about other digest functions.
|
|
* Regenerate all .pb.go files
It looks like the current ones are out of sync.
* Allow emitting output directories as plain Directory messages
As part of #257 we're discussing adding support for storing directories
in Git's format. This means OutputDirectory.tree_digest will no longer
point to an actual recursive tree object (like REv2 Tree). Instead of
doing that, I would like to investigate whether we can add native
support for storing output directories in decomposed form.
|
|
|
|
|
|
|
|
Fix a typo
|
|
|
|
|
|
|
|
This is a follow up to #248 to ensure the git hook is running correctly.
|
|
It's tested against different implementations of Remote APIs
https://buck2.build/docs/remote_execution/.
|
|
As discussed during the monthly working group meeting, let's explicitly
document that people can also just submit issues to get constants added
in case they are unable to sign the CLA.
|
|
I.e., servers may impose limitations on the `instance_name` they accept (e.g., that it does not contain slashes, newlines, or emojis).
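A server enforcing such a restriction might sanity-check the instance name up front. The policy and function name below are purely illustrative, since the spec leaves the exact rules to each server:

```python
def is_acceptable_instance_name(name: str) -> bool:
    # Illustrative policy only: forbid slashes, newlines, control
    # characters, and anything outside printable ASCII (which rules
    # out emojis). A real server may allow slashes but reserve
    # certain path segments instead.
    return all(c not in "/\n" and 0x20 <= ord(c) < 0x7F for c in name)
```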
|
|
|
|
|
|
Even though the cache can announce support for multiple digest
functions, the remote execution system cannot. Let's address this
inconsistency by adding a new repeated field that should in the (very)
long term replace the singular field.
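Client-side negotiation under the new scheme might look like the following sketch, where the argument names are hypothetical stand-ins for the repeated field and the legacy singular field:

```python
def pick_digest_function(client_supported, server_digest_functions,
                         legacy_digest_function):
    """Choose a digest function from server execution capabilities.

    `server_digest_functions` models the new repeated field; servers
    that predate it populate only the legacy singular field.
    """
    advertised = list(server_digest_functions) or [legacy_digest_function]
    for fn in client_supported:
        if fn in advertised:
            return fn
    raise RuntimeError("no mutually supported digest function")
```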
|
|
* Add support for auxiliary operation metadata
* Hint at using ExecutedActionMetadata
* Embed ExecutedActionMetadata directly
|
|
In PR #233 I proposed the addition of two new ContentAddressableStorage
methods (ConcatenateBlobs and SplitBlobs) that allow one to gain random
access to large CAS objects, while still providing a way to verify data
integrity. As part of that change, I added a new digest function to help
with that, named SHA256TREE.
This PR adds just this digest function, without bringing in any support
for chunking. That will be done separately, as it was requested that
both features land independently.
I have also included test vectors for the SHA256TREE digest function.
I have derived these by implementing three different versions in the Go
programming language:
- One version that uses regular arithmetic in Go.
- One version for x86-64 that uses AVX2.
- One version for ARM64 that uses the ARMv8 cryptography extensions.
All three versions behave identically.
|
|
(#239)
This status value was already mentioned in the Fetch service comments,
but was not included in the FetchBlobResponse or FetchDirectoryResponse
status field comments alongside some other similar values. Let's make
this more clear.
|
|
This adds the base ISA [1] of the LoongArch architecture to the platform
lexicon.
[1] https://loongson.github.io/LoongArch-Documentation/LoongArch-toolchain-conventions-EN.html
Co-authored-by: Robin Lee <cheeselee@fedoraproject.org>
|
|
https://en.wikipedia.org/wiki/Brotli
|
|
|
|
|
|
* Regenerate the Go source code for the Remote Execution protocol
* Add a hint for indicating that a Tree is topologically sorted
I'm currently trying to improve the performance of handling large
output directories (Tree messages) with sizes on the order of
hundreds of megabytes. In the process, I have realised that there is a
lot of value in enforcing that the Directory messages contained in them
are topologically sorted. Two practical use cases:
- When instantiating the contents of a Tree on a local file system,
having the Tree be topologically sorted allows you to immediately
create files and directories in the right place.
- When needing to resolve the properties of a single file by path, a
topologically sorted Tree permits resolution by doing a simple forward
scan.
Especially when other features like compression are taken into account,
it's useful if Tree messages can be processed in a streaming manner.
One practical issue is that most Protobuf libraries don't offer APIs for
processing messages in a streaming manner. This means that implementors
who want to achieve these optimisations will need to write their own
message parsers; at least for the Tree itself. To make this as
painless as possible, we also require that the Tree is stored in some
normal form.
Fixes: #229
|
|
GetCapabilities (#226)
|
|
* Move language-specific targets for Go and Java to subdirectories.
* Add a `cc_grpc_codegen` rule for internal use.
* Move language-specific targets for C++ to subdirectories.
* Stub out switched_rules_by_language and update README.
|
|
This adds the two general-purpose RISC-V ISA standards to the platform
lexicon.
|
|
This commit updates .pb.go using hooks/pre-commit as it was forgotten
in the following commits:
2af1c43 Use fully-qualified import paths in `go_package` options. (#219)
5971c1e Add a note about ordering of Tree protos (#223)
|
|
|
|
|
|
The Protobuf documentation for [`go_package`][1] requires that it
contain a fully-qualified import path, with an optional package
name override.
As of [CL 301953][2] (released in [protobuf-go v1.26][3]), this
requirement is being enforced by the `protoc-gen-go` plugin.
I set the `go_package` options such that there is no change to
generated code compared to the previous version. This required
overriding the package names for the `remoteasset`, `remoteexecution`,
and `remotelogstream` packages, since those have import paths ending
in `/v1` or `/v2`.
Fixes #181
[1]: https://developers.google.com/protocol-buffers/docs/reference/go-generated#package
[2]: https://go-review.googlesource.com/c/protobuf/+/301953/
[3]: https://github.com/protocolbuffers/protobuf-go/releases/tag/v1.26.0
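As an illustration, the option for the remote execution package likely resembles the following, with the package-name override after the semicolon for import paths ending in a version segment (a sketch; confirm against the actual .proto files):

```proto
// Fully-qualified import path, then an explicit package-name override,
// since the import path ends in "/v2".
option go_package = "github.com/bazelbuild/remote-apis/build/bazel/remote/execution/v2;remoteexecution";
```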
|
|
Bazel Buildfarm, Buildgrid and Buildbarn all perform resolution relative
to the working directory of an action, not the input root directory.
Instead of requiring that all implementations are updated, we should
consider just altering the spec.
Performing resolution relative to the input root directory can also be
very tricky, as it means that argv[0] as visible to the calling process
must also be rewritten. Applications may get confused otherwise. For
example, consider the case where the working directory is "foo" and
argv[0] is "bar/baz". In that case argv[0] as visible to the calling
process must become "../bar/baz" or be made absolute. Making it absolute
is inconsistent with what Bazel does right now. Attempting to keep it
relative can be complex when symbolic links are involved.
Furthermore, the specification doesn't mention what kind of path
separators are used for argv[0]. The only reasonable solution here is to
use path separators that are native to the host, as successive arguments
also need to be provided in that form.
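The rewrite described above reduces to a relative-path computation (hypothetical helper; POSIX path semantics assumed):

```python
import posixpath

def rewrite_argv0(input_root_relative_argv0, working_directory):
    # With working directory "foo" and argv[0] "bar/baz", the path the
    # calling process actually needs to see is "../bar/baz".
    return posixpath.relpath(input_root_relative_argv0,
                             start=working_directory)
```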
|
|
committed_size -1 (#213)
We require that uncompressed bytestream uploads return committed_size
set to the size of the blob when returning early (if the blob already
exists on the server).
We also require that for compressed bytestream uploads committed_size
refers to the initial write offset plus the number of compressed bytes
uploaded. But if the server wants to return early in this case it doesn't
know how many compressed bytes would have been uploaded (the client might
not know this ahead of time either). So let's require that the server
set committed_size to -1 in this case.
For early return to work, we also need to ensure that the server does
not return an error code.
Resolves #212.
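A client might interpret a WriteResponse received before it finished uploading along these lines; the helper and its signature are illustrative, not part of the API:

```python
def is_early_return(committed_size, blob_size, compressed):
    """True if the server short-circuited because the blob already exists."""
    if compressed:
        # The server cannot know how many compressed bytes would have
        # been sent, so committed_size is required to be -1.
        return committed_size == -1
    # Uncompressed: an early return reports the full blob size.
    return committed_size == blob_size
```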
|
|
* Add support for inlined compressed data in batch CAS operations
This is a small API change which allows inlined data to be in
compressed form in BatchReadBlobs and BatchUpdateBlobs calls.
Refers to #201.
* Remove some stray parentheses
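A round trip for DEFLATE-compressed inlined data could look like this; whether the API's DEFLATE compressor means a raw RFC 1951 stream should be checked against the spec, so treat the `wbits=-15` choice as an assumption:

```python
import zlib

def compress_inline(data: bytes) -> bytes:
    # Raw DEFLATE stream (RFC 1951), no zlib header.
    c = zlib.compressobj(wbits=-15)
    return c.compress(data) + c.flush()

def decompress_inline(payload: bytes) -> bytes:
    return zlib.decompress(payload, wbits=-15)
```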
|
|
* Document that ExecutionStages don't always transition forward
As discussed during the 2021-05-11 working group meeting, we should make
it explicit that remote execution systems are permitted to report
ExecutionStages that don't transition forward (e.g., going from
EXECUTING back to QUEUED). The reason for this is twofold:
- It allows a remote execution system to retry actions in case of
hardware failures.
- If the remote execution system performs automatic selection of worker
sizes, it may need to rerun an action in case it picked a worker that
was too small to run the action properly.
* Update build/bazel/remote/execution/v2/remote_execution.proto
Co-authored-by: Sander Striker <s.striker@striker.nl>
Co-authored-by: Sander Striker <s.striker@striker.nl>
|
|
(#208)
This possibility was not originally considered, and we suspect that
most server implementations do not handle it properly. Let's forbid
this for now, to promote compatibility between implementations.
Resolves #206.
|
|
Buildbarn's FUSE file system is capable of lazily loading the input root
of an action as it's being executed. Because this adds a
non-deterministic amount of noise to execution times, there is an option
for automatically compensating the execution timeout for time spent
reading data from the CAS. Let's extend the spec to explicitly allow this.
While there, add a new field to ExecutedActionMetadata that contains the
execution time of the action, using a method of timekeeping that is
consistent with the timeout.
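The compensation idea reduces to subtracting CAS read time from wall time before comparing against the timeout; a minimal sketch, with all names invented here:

```python
def effective_execution_time(wall_time_s, cas_fetch_time_s):
    # Virtual execution time with input-fetch noise removed: only time
    # not spent reading from the CAS counts against the action timeout.
    return wall_time_s - cas_fetch_time_s

def timed_out(wall_time_s, cas_fetch_time_s, timeout_s):
    return effective_execution_time(wall_time_s, cas_fetch_time_s) > timeout_s
```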
|
|
The Protobuf style guide is pretty explicit about this: repeated fields
should use pluralized names:
https://developers.google.com/protocol-buffers/docs/style
|
|
Currently this repo exposes one function to install its dependencies,
through switched_rules_by_language which invokes remote_apis_go_deps
for golang.
The issue arises from remote_apis_go_deps declaring **and** registering
a go toolchain. We need to register a go toolchain so we can, eg, run
gazelle to clean this repo's BUILD rules. However, this is problematic
for other repos that may import this one but need a different golang version.
On v0.20.1, rules_go used the argument name `go_version` to specify a version.
Recent `rules_go` releases use `version` for specifying a golang version, so
projects importing this one before #195 would luckily not run into issues.
On v0.27.0, the same argument name is used and `rules_go` rejects attempts
to register two golang versions as the current toolchain.
This code exploits the fact that `rules_go` does not currently check whether
you're trying to install multiple `go_sdk`s with the same name (in this case,
the default name, which is coincidentally `go_sdk`). Luckily, the first one
declared takes precedence, allowing projects importing this one to use a
different go version.
(I couldn't actually find a bazel doc stating that registering the same
named toolchain twice would ignore the second registering, but I
managed to test this empirically by calling `go_register_toolchains` and
then loading this repo.)
Naturally, I think the best long-term option would be to split these go
dependencies into two separate functions: one that installs the go toolchain
in this repo's workspace for, e.g., running gazelle, and another that exposes
the necessary package dependencies for projects that import this one. But
in the interest of my own time, and bazel being "complicated", I went with
the easier solution for now, which effectively reverts functionality to
pre-#195. Besides, breaking the go dependencies install would have required
changing the `switched_rules_by_language` signature.
|
|
This is to use the protobuf v2 library for Go.
|
|
I don't think that it's a good idea to use Deflate in general. That
said, the reason I'm proposing that we add it is that it allows a
remote execution service to do light-weight "extraction" of ZIP files.
An existing ZIP file can be carved up into individual file payloads,
without decompressing/recompressing them.
Furthermore, adding this algorithm will allow Buildbarn to get rid of a
similar enum, which already includes Deflate.
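The "carving" idea can be sketched against Python's zipfile module: read a member's raw DEFLATE bytes straight out of the archive, without decompressing or recompressing. This handles only the simple local-file-header case (no data descriptors or zip64) and is not a complete implementation:

```python
import io
import struct
import zipfile
import zlib

def carve_raw_deflate(zip_bytes: bytes, member: str) -> bytes:
    """Extract a member's raw DEFLATE payload without recompressing."""
    zf = zipfile.ZipFile(io.BytesIO(zip_bytes))
    info = zf.getinfo(member)
    assert info.compress_type == zipfile.ZIP_DEFLATED
    # Local file header: 30 fixed bytes, then file name and extra field.
    header = zip_bytes[info.header_offset:info.header_offset + 30]
    name_len, extra_len = struct.unpack("<HH", header[26:30])
    start = info.header_offset + 30 + name_len + extra_len
    return zip_bytes[start:start + info.compress_size]
```

The carved payload is a raw RFC 1951 stream, so a service can serve it as a DEFLATE-compressed blob directly.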
|