From 6467198bff03b5e2d4381940bf1ba960b7857065 Mon Sep 17 00:00:00 2001
From: Alex Suraci
Date: Tue, 2 Apr 2024 20:15:09 -0400
Subject: [PATCH] Switch from Progrock to OpenTelemetry (#6835)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* progrock -> otel

* All Progrock plumbing is now gone, though we may want to bring it back for compatibility. Removing it was a useful exercise to find the many places where we're relying on it.

* TUI now supports -v, -vv, -vvv (configurable verbosity). Global flags like --debug, -v, --silent, etc. are processed anywhere in the command string and respected.

* CLI forwards engine traces and logs to configured exporters, no need to configure engine-side (we already need this flow for the TUI)

* "Live" spans are emitted to TUI and cloud, filtered out before sending to a traditional (non-Live) otel stack

* Engine supports pub/sub for traces and logs, can be exposed in the future as a GraphQL subscription

* Refactor context.Background usage to context.WithoutCancel. We usually don't want a total reset, since that drops the span context and any other telemetry-related things (loggers etc). Go 1.21 added context.WithoutCancel, which is more precise (see the sketch below).

* engine: don't include source in slogs. Added this prospectively and it doesn't seem worth the noise.

* idtui: DB can record multiple traces, polish

* multi traces is mostly for dagviz, so i can run it with a single DB

* add 'passthrough' UI flag which tells the UI to ignore a span and descend into its children

* add 'ignore' UI flag, to be used sparingly for things whose signal:noise ratio is irredeemably low (e.g. 'id' calls)

* make loadFooFromID calls passthrough

* make Buildkit gRPC calls passthrough

* Global Progrock logs are theoretically replaced with tracing.GlobalLogger, but it has yet to be integrated into anything.

* Module functions are pure after all. They're already cached per-session, so this makes DagQL reflect that, avoiding duplicate Buildkit work that would be deduped at the Buildkit layer. Cleans up the telemetry since previously you'd see duplicate queries.

* TODO: ensure draining is airtight

* TODO: global logging to TUI

* TODO: batch forwarded engine spans instead of emitting them "live"

* TODO: fix dagger terminal

Signed-off-by: Alex Suraci

* fix log draining, again, ish

previously we would cancel all subscribers for a trace whenever a client/server went away. but for modules/nesting this meant the inner call would cancel the whole trace early.

* TODO: looks like services still don't drain completely?

Signed-off-by: Alex Suraci

* don't set up logs if not configured

Signed-off-by: Alex Suraci

* respect configured level

Signed-off-by: Alex Suraci

* clean up shim early tracing remnants

Signed-off-by: Alex Suraci

* synchronously detach services on main client exit

previously service spans would be left incomplete on exit. now we'll detach from them on shutdown, which will only stop the service if we're the last depender on it. end result _should_ be that services are always completed through telemetry, but I've seen maybe 2 in 50 runs still leave it running. still troubleshooting, but without this change there is no hope at all.

fixes #6493

Signed-off-by: Alex Suraci

* flush telemetry before closing server clients

Honestly not 100% confirmed, but seems right. I think the final solution might be to get traces/logs out without going through a session in the first place.

Signed-off-by: Alex Suraci
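To make the context.WithoutCancel bullet above concrete, here is a minimal, self-contained sketch (an editorial illustration, not code from this PR) of why it preserves telemetry where context.Background() would not:

```go
// Sketch: detach cleanup work from request cancellation without losing the
// values (span context, loggers, ...) carried by the request's context.
package main

import (
	"context"
	"fmt"
	"time"
)

func handleRequest(ctx context.Context) {
	// ctx carries the span context and other telemetry via context values.

	// context.Background() would be a "total reset": no cancellation, but
	// also no span context, so telemetry emitted during cleanup is orphaned.
	//
	// context.WithoutCancel (Go 1.21+) keeps every context value but ignores
	// the parent's cancellation, so cleanup still lands in the right trace.
	cleanupCtx := context.WithoutCancel(ctx)

	go func() {
		time.Sleep(10 * time.Millisecond)
		// Still alive after ctx is canceled, and still carries its values.
		fmt.Println("cleanup canceled?", cleanupCtx.Err() != nil) // always false
	}()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	handleRequest(ctx)
	cancel()
	time.Sleep(50 * time.Millisecond)
}
```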
* switch from errgroup to conc for panic handling

seeing a panic in ExportSpans/UploadTraces, this should help avoid bringing the whole server down - I think - or at least give us hope.

Signed-off-by: Alex Suraci

* nest 'starting session' beneath 'connect'

Signed-off-by: Alex Suraci

* send logs out from engine to log exporter too

Signed-off-by: Alex Suraci

* bump midterm

Signed-off-by: Alex Suraci

* switch to server-side telemetry pub/sub

fetching the logs/traces over a session is really annoying with draining because the session itself gets closed before things can be fully flushed.

Signed-off-by: Alex Suraci

* show newer traces first

Signed-off-by: Alex Suraci

* cleanup

Signed-off-by: Alex Suraci

* send individual Calls over telemetry instead of IDs

More than a 10x efficiency increase. Frontend still super easy to implement.

Test:

    # in ~/src/bass
    $ with-dev dagger call -m ./ --src https://github.com/vito/bass unit --packages ./pkg/cli stdout --debug &> out
    $ rg measuring out | cut -d= -f2 | xargs | tr ' ' '+' | sed -e 's/0m//g' -e 's/[^0-9\+]//g' | cat -v | bc

Before: 8524838 (~8.1 MiB)
After: 727039 (~0.7 MiB)

Signed-off-by: Alex Suraci

* idtui: Base was correct in returning bool

Signed-off-by: Alex Suraci

* handle case where calls haven't been seen yet

kinda hacky, but it makes sense that we need to handle this, cause loadFooFromID or generally anything can take an ID that's never been seen by the server before, and the loadFooFromID span will come first.

Signed-off-by: Alex Suraci

* idtui: add space between progress and primary output

Signed-off-by: Alex Suraci

* swap -vvv and -vv, -vv now breaks encapsulation

Signed-off-by: Alex Suraci

* cleanups

Signed-off-by: Alex Suraci

* tidy mage

Signed-off-by: Alex Suraci

* tidy

Signed-off-by: Alex Suraci

* loosen go.mod constraints

Signed-off-by: Alex Suraci

* revive labels tests

Signed-off-by: Alex Suraci

* fix cachemap tests

Signed-off-by: Alex Suraci

* nuclear option: wait for all spans to complete

Rather than closing the telemetry connection and hoping the timing works out, we keep track of which traces have active spans and wait for that count to reach 0 (sketched below). A bit more complicated but not seeing a simpler solution really. Without this we can't ensure that the client sees the very outermost spans complete.

Signed-off-by: Alex Suraci

* pass-through all gRPC stuff

hasn't really been useful, it's available in the full trace for devs, or we can add a verbosity level.

Signed-off-by: Alex Suraci

* dagviz: tweaks to support visualizing a live trace

Signed-off-by: Alex Suraci

* better 'docker tag' parsing

Signed-off-by: Alex Suraci

* fixup docker tag check

Signed-off-by: Alex Suraci

* pass auth headers to OTLP logs too

Signed-off-by: Alex Suraci

* fix stdio not making it out of gateway containers

Signed-off-by: Alex Suraci

* fix terminal support

Signed-off-by: Alex Suraci

* drain immediately when interrupted

otherwise we can get stuck waiting for child spans of a nested process that got kill -9'd. not perfect but better than hanging on Ctrl+C, which is already an emergent situation where you're not likely that interested in any remaining data if you already had a reason to interrupt. in Cloud we'll clean up any orphaned spans based on keepalives anyway.

Signed-off-by: Alex Suraci
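The "wait for all spans to complete" approach above amounts to reference-counting in-flight spans and draining only once that count reaches zero. Below is a rough sketch of the idea as an OpenTelemetry span processor; it is illustrative only, with invented names, and is not the processor this PR actually ships.

```go
// Sketch: a SpanProcessor that tracks in-flight spans so the client can wait
// for everything to end before tearing telemetry down. Register it with
// sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(...)).
package telemetrysketch

import (
	"context"
	"sync"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

type activeSpanTracker struct {
	mu     sync.Mutex
	active int
	idle   chan struct{} // closed whenever the in-flight count is zero
}

var _ sdktrace.SpanProcessor = (*activeSpanTracker)(nil)

func newActiveSpanTracker() *activeSpanTracker {
	t := &activeSpanTracker{idle: make(chan struct{})}
	close(t.idle) // nothing in flight yet
	return t
}

func (t *activeSpanTracker) OnStart(_ context.Context, _ sdktrace.ReadWriteSpan) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.active == 0 {
		t.idle = make(chan struct{}) // no longer idle
	}
	t.active++
}

// OnEnd assumes every end has a matching start (true for spans created under
// the provider this processor is registered with).
func (t *activeSpanTracker) OnEnd(_ sdktrace.ReadOnlySpan) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.active--
	if t.active == 0 {
		close(t.idle) // wake anyone draining
	}
}

// Wait blocks until no spans are in flight, or until ctx is done, matching
// the "drain immediately when interrupted" behavior described above.
func (t *activeSpanTracker) Wait(ctx context.Context) error {
	t.mu.Lock()
	idle := t.idle
	t.mu.Unlock()
	select {
	case <-idle:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func (t *activeSpanTracker) Shutdown(context.Context) error   { return nil }
func (t *activeSpanTracker) ForceFlush(context.Context) error { return nil }
```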
* fix unintentionally HTTP-ifying gRPC otlp endpoint

Signed-off-by: Alex Suraci

* give up retrying connection if outer ctx canceled

Signed-off-by: Alex Suraci

* initiate draining only when main client goes away

Signed-off-by: Alex Suraci

* appease linter

Signed-off-by: Alex Suraci

* remove unnecessary wait

we don't need to try synchronizing here now that we just generically wait for all spans to complete

Signed-off-by: Alex Suraci

* fix panic if no telemetry

Signed-off-by: Alex Suraci

* remove debug log

Signed-off-by: Alex Suraci

* print final progress tree in plain mode

no substitute for live console streaming, but easier to implement for now, and probably easier to read in CI. probably needs more work, but might get some tests passing.

Signed-off-by: Alex Suraci

* fix Windows build

Signed-off-by: Alex Suraci

* propagate spans through dagger-in-dagger

Signed-off-by: Alex Suraci

* retry connecting to telemetry

Signed-off-by: Alex Suraci

* propagate span context through dagger run (see the sketch below)

Signed-off-by: Alex Suraci

* install default labels as otel resource attrs

Signed-off-by: Alex Suraci

* tidy

Signed-off-by: Alex Suraci

* remove pipeline tests

these are expected to fail now

Signed-off-by: Alex Suraci

* fail root span when command fails

Signed-off-by: Alex Suraci

* Container.import: add span for streaming image

Signed-off-by: Alex Suraci

* idtui: break encapsulation in case of errors

Signed-off-by: Alex Suraci

* fix schema-level logging not exporting

caught by TestDaggerUp/random

Signed-off-by: Alex Suraci

* update TestDaggerRun assertion

Signed-off-by: Alex Suraci

* fix test not syncing on progress completion

Signed-off-by: Alex Suraci

* add verbose debug log

Signed-off-by: Alex Suraci

* respect $DAGGER_CLOUD_URL and $DAGGER_CLOUD_TOKEN

promoting these from _EXPERIMENTAL along the way, which has already been done for _TOKEN; don't really see a strong reason to keep the _EXPERIMENTAL prefix, but low conviction

Signed-off-by: Alex Suraci

* port 'processor: support span keepalive'

originally https://github.com/aluzzardi/otel-in-flight/commit/2fc011fce99a4b21a2007d71ae760b72388e508a

Signed-off-by: Alex Suraci

* add 'watch' command

really helps with troubleshooting hanging tests!

Signed-off-by: Alex Suraci

* set a reasonable window size in plain mode

otherwise the terminals resize a ton of times when a long string is printed, absolutely tanking performance. would be nice if that were fast, but no time for that now.

Signed-off-by: Alex Suraci

* manually revert container.import change

i thought this wouldn't break it, but ... ?

Signed-off-by: Alex Suraci

* fix race

Signed-off-by: Alex Suraci

* mark watch command experimental

Signed-off-by: Alex Suraci

* fixup lock, more logging

Signed-off-by: Alex Suraci

* tidy

Signed-off-by: Alex Suraci

* fix data race in tests

Signed-off-by: Alex Suraci

* fix java SDK hang once again

really not sure what's writing to stderr even with --silent but this is just too brittle. redirect stderr to /dev/null instead.

Signed-off-by: Alex Suraci
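The `dagger run` span propagation above follows the W3C traceparent convention: the parent serializes its span context into $TRACEPARENT, and the child adopts it only if its own context has no live span. A hedged sketch follows; the helper names are invented, and the child-side logic mirrors the fallbackSpanContext helper that appears later in this diff.

```go
// Sketch: handing a span context to a subprocess via $TRACEPARENT and picking
// it back up on the other side. Helper names are illustrative, not from the PR.
package main

import (
	"context"
	"fmt"
	"os"
	"strings"

	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/trace"
)

// injectTraceParent returns KEY=VALUE pairs (e.g. TRACEPARENT=00-...) to
// append to a child process's environment so it can join the current trace.
func injectTraceParent(ctx context.Context) []string {
	carrier := propagation.MapCarrier{}
	propagation.TraceContext{}.Inject(ctx, carrier)
	var env []string
	for k, v := range carrier {
		env = append(env, fmt.Sprintf("%s=%s", strings.ToUpper(k), v))
	}
	return env
}

// fallbackToTraceParent is the child side: keep the incoming span context if
// it is valid, otherwise fall back to whatever $TRACEPARENT says.
func fallbackToTraceParent(ctx context.Context) context.Context {
	if trace.SpanContextFromContext(ctx).IsValid() {
		return ctx
	}
	if tp, ok := os.LookupEnv("TRACEPARENT"); ok {
		return propagation.TraceContext{}.Extract(ctx,
			propagation.MapCarrier{"traceparent": tp})
	}
	return ctx
}

func main() {
	ctx := fallbackToTraceParent(context.Background())
	fmt.Println(injectTraceParent(ctx)) // empty unless a span context is present
}
```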
* retire dagger.io/ui.primary, use root span instead

fixes Views test; frontend must have been getting confused because there were multiple "primary" spans

Signed-off-by: Alex Suraci

* take 2: just manually mark the 'primary' span

Signed-off-by: Alex Suraci

* merge tracing and telemetry packages

Signed-off-by: Alex Suraci

* cleanups

Signed-off-by: Alex Suraci

* roll back sync detach change

this was no longer needed with the change to wait for spans to finish, not worth the review-time distraction

Signed-off-by: Alex Suraci

* cleanups

Signed-off-by: Alex Suraci

* update comment

Signed-off-by: Alex Suraci

* remove dead code

Signed-off-by: Alex Suraci

* default primary span to root span

Signed-off-by: Alex Suraci

* remove unused module arg

Signed-off-by: Alex Suraci

* send engine traces/logs to cloud

Signed-off-by: Alex Suraci

* implement stub metrics pub/sub

Some clients presume this service is supported by the OTLP endpoint. So we can just have a stub implementation for now.

Signed-off-by: Alex Suraci

* sdk/go runtime: implement otel propagation

TODO: set up otel for you

Signed-off-by: Alex Suraci

* tidy

Signed-off-by: Alex Suraci

* add scary comment

Signed-off-by: Alex Suraci

* batch events that are sent from the engine

Previously we were just sending each individual update to the configured exporters, which was very expensive and would even slow down the TUI. When I originally tried to send it to span processors, nothing would be sent out; turns out that was because the transform.Spans call we were using didn't set the `Sampled` trace flag.

Now we forward engine traces and logs to all configured processors, so their individual batching settings should be respected.

Signed-off-by: Alex Suraci

* fix spans being deduped within single batch

* fix detection for in-flight spans; we need to check EndTime < StartTime since sometimes we end up with a 1754 timestamp
* when a span is already present in a batch, update it in-place rather than dropping it on the floor

Signed-off-by: Alex Suraci

* Add Python support

Signed-off-by: Helder Correia <174525+helderco@users.noreply.github.com>

* shim: proxy otel to 127.0.0.1:0

more universally compatible than unix://

Signed-off-by: Alex Suraci

* remove unnecessary fn

Signed-off-by: Alex Suraci

* attributes: add passthrough, bikeshed + document

also start cleaning up "tasks" cruft nonsense, these can just be plain old attributes on a single span i think

Signed-off-by: Alex Suraci

* fix janky flag parsing

parse global flags in two passes, ensuring the same flags are installed in both cases, and capturing the values before installing them into the real flag set, since that clobbers the values

Signed-off-by: Alex Suraci

* discard Buildkit progress

...just in case it gets buffered in memory forever otherwise

Signed-off-by: Alex Suraci

* sdk/go: somewhat gross support for opentelemetry

had to copy-paste a lot of the telemetry code into sdk/go/. would love to just move everything there so it can be shared between the shim, the Go runtime, and the engine, however it is currently a huge PITA to share code between all three, because of the way codegen works. saving that for another day. maybe tomorrow.

Signed-off-by: Alex Suraci

* send logs to function call span, not exec /runtime

Signed-off-by: Alex Suraci

* tui: respect dagger.io/ui.mask

no more exec /runtime!

Signed-off-by: Alex Suraci
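The dagger.io/ui.mask and dagger.io/ui.passthrough behaviors above are just span attributes interpreted by the frontend: mask hides the span itself, passthrough shows its children in its place. Here is a small sketch of a runtime span opting in, using the same attribute keys that appear in the generated Go runtime code later in this diff (the tracer name is illustrative):

```go
// Sketch: mark a span so the TUI hides it and renders only its children.
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

func runtimeSpan(ctx context.Context) (context.Context, trace.Span) {
	return otel.Tracer("dagger.io/sdk.go").Start(ctx, "Go runtime",
		trace.WithAttributes(
			// Hide this span itself (it replaces the parent in the UI)...
			attribute.Bool("dagger.io/ui.mask", true),
			// ...and descend straight into its children.
			attribute.Bool("dagger.io/ui.passthrough", true),
		))
}

func main() {
	ctx, span := runtimeSpan(context.Background())
	defer span.End()
	_ = ctx // function dispatch would happen under this context
}
```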
* silence linter

worth refactoring, but not now™

Signed-off-by: Alex Suraci

* ignore --help when parsing global flags

Signed-off-by: Alex Suraci

* Pin python requirements

Signed-off-by: Helder Correia <174525+helderco@users.noreply.github.com>

* revert Python SDK changes for now

looks like there's more to figure out with module dependencies? either way, don't want this to block the current PR, they can be re-introduced in another PR like the other SDKs

Revert "Pin python requirements"

This reverts commit b40c4115008a203e2529ce71c6b0a45d4e8a7f42.

Revert "Add Python support"

This reverts commit 08aa92cdbb49d9185fcb93daca186160e3a76884.

Signed-off-by: Alex Suraci

* fix race conditions in python SDK runtime

Signed-off-by: Alex Suraci

---------

Signed-off-by: Alex Suraci
Signed-off-by: Helder Correia <174525+helderco@users.noreply.github.com>
Co-authored-by: Helder Correia <174525+helderco@users.noreply.github.com>
---
 .gitignore | 1 -
 .gitmodules | 3 +
 analytics/analytics.go | 54 +-
 cmd/codegen/codegen.go | 24 +-
 cmd/codegen/generator/go/generator.go | 6 +-
 .../go/templates/module_interfaces.go | 2 +-
 cmd/codegen/generator/go/templates/modules.go | 72 +-
 .../src/_dagger.gen.go/alias.go.tmpl | 20 +-
 .../templates/src/_dagger.gen.go/defs.go.tmpl | 39 +
 .../go/templates/src/_types/object.go.tmpl | 2 +-
 cmd/codegen/main.go | 28 -
 cmd/dagger-graph/README.md | 14 -
 cmd/dagger-graph/example.svg | 59 --
 cmd/dagger-graph/main.go | 140 ---
 cmd/dagger-graph/vertex.go | 68 --
 cmd/dagger/config.go | 14 +-
 cmd/dagger/engine.go | 185 +---
 cmd/dagger/functions.go | 40 +-
 cmd/dagger/licenses.go | 11 +-
 cmd/dagger/listen.go | 21 +-
 cmd/dagger/main.go | 131 ++-
 cmd/dagger/module.go | 33 +-
 cmd/dagger/query.go | 6 +-
 cmd/dagger/run.go | 34 +-
 cmd/dagger/session.go | 80 +-
 cmd/dagger/shell.go | 104 ++-
 cmd/dagger/shell_nounix.go | 11 +
 cmd/dagger/shell_unix.go | 21 +
 cmd/dagger/watch.go | 33 +
 cmd/engine/logger.go | 74 +-
 cmd/engine/main.go | 133 +--
 cmd/otel-collector/logs.go | 143 ---
 cmd/otel-collector/loki/client.go | 567 ------------
 cmd/otel-collector/main.go | 118 ---
 cmd/otel-collector/summary.go | 86 --
 cmd/otel-collector/traces.go | 187 ----
 cmd/otel-collector/vertex.go | 133 ---
 cmd/shim/main.go | 205 ++++-
 cmd/upload-journal/main.go | 118 ---
 core/c2h.go | 18 +-
 core/container.go | 27 +-
 core/directory.go | 12 +-
 core/file.go | 10 +-
 core/healthcheck.go | 18 +-
 core/host.go | 5 -
 core/integration/engine_test.go | 8 +-
 core/integration/module_iface_test.go | 1 +
 core/integration/module_test.go | 1 +
 core/integration/pipeline_test.go | 108 ---
 core/integration/suite_test.go | 17 +
 core/integration/testdata/telemetry/main.go | 19 +-
 core/moddeps.go | 4 +-
 core/modfunc.go | 24 +-
 core/object.go | 9 +-
 core/pipeline/label.go | 690 ---------------
 core/pipeline/label_test.go | 481 ----------
 core/pipeline/pipeline.go | 55 +-
 core/query.go | 23 +-
 core/schema/container.go | 7 +-
 core/schema/deprecations.go | 15 +
 core/schema/directory.go | 7 +-
 core/schema/modulesource.go | 12 +-
 core/schema/query.go | 27 +-
 core/schema/sdk.go | 1 -
 core/schema/util.go | 11 -
 core/service.go | 46 +-
 core/services.go | 18 +-
 core/terminal.go | 8 +-
 core/tracing.go | 12 +
 core/typedef.go | 14 +-
 dagql/cachemap.go | 12 +-
 dagql/cachemap_test.go | 28 +-
 dagql/call/callpbv1/call.go | 25 +-
 dagql/call/id.go | 14 +-
 dagql/demo/main.go | 96 --
 dagql/idtui/db.go | 471 ++++++----
 dagql/idtui/frontend.go | 819 ++++++++----------
 dagql/idtui/sigquit.go | 10 +
 dagql/idtui/sigquit_windows.go | 5 +
dagql/idtui/spans.go | 206 +++++ dagql/idtui/steps.go | 274 ------ dagql/idtui/types.go | 138 +-- dagql/server.go | 22 +- dagql/tracing.go | 12 + docs/current_docs/reference/979596-cli.mdx | 42 +- engine/buildkit/auth.go | 5 + engine/buildkit/client.go | 74 +- engine/buildkit/containerimage.go | 4 +- engine/buildkit/filesync.go | 7 +- engine/buildkit/gateway.go | 92 ++ engine/buildkit/progrock.go | 261 ------ engine/buildkit/socket.go | 3 + engine/client/buildkit.go | 42 +- engine/client/client.go | 406 ++++++--- engine/client/drivers/dial.go | 3 +- engine/client/drivers/docker.go | 78 +- engine/client/drivers/driver.go | 4 +- engine/client/progrock.go | 35 - engine/client/tracing.go | 12 + engine/opts.go | 4 +- engine/server/buildkitcontroller.go | 40 +- engine/server/server.go | 189 ++-- engine/session/h2c.go | 31 +- go.mod | 96 +- go.sum | 513 ++--------- hack/with-dev | 1 + internal/mage/engine.go | 9 +- internal/mage/go.mod | 2 +- internal/mage/go.sum | 4 +- internal/tui/details.go | 97 --- internal/tui/editor.go | 68 -- internal/tui/group.go | 266 ------ internal/tui/item.go | 221 ----- internal/tui/keys.go | 105 --- internal/tui/model.go | 465 ---------- internal/tui/style.go | 124 --- internal/tui/tree.go | 406 --------- internal/tui/util.go | 41 - sdk/go/dagger.gen.go | 73 +- sdk/go/fs.go | 3 + sdk/go/go.mod | 21 +- sdk/go/go.sum | 54 +- sdk/go/internal/engineconn/engineconn.go | 22 + sdk/go/internal/engineconn/session.go | 12 + sdk/go/telemetry/attrs.go | 31 + sdk/go/telemetry/batch_processor.go | 455 ++++++++++ sdk/go/telemetry/init.go | 251 ++++++ sdk/go/telemetry/processor.go | 139 +++ sdk/go/telemetry/proxy.go | 94 ++ sdk/go/telemetry/span.go | 33 + .../io/dagger/codegen/DaggerCLIUtils.java | 4 +- sdk/python/runtime/discovery.go | 16 + telemetry/attrs.go | 64 ++ telemetry/env.go | 22 + telemetry/env/env.go | 173 ++++ telemetry/event.go | 68 -- telemetry/exporters.go | 140 +++ telemetry/generate.go | 3 + telemetry/graphql.go | 106 +++ telemetry/grpc.go | 63 ++ telemetry/inflight/batch_processor.go | 457 ++++++++++ telemetry/inflight/processor.go | 139 +++ telemetry/inflight/proxy.go | 94 ++ telemetry/init.go | 443 ++++++++++ telemetry/labels.go | 498 +++++++++++ telemetry/labels_test.go | 354 ++++++++ telemetry/legacy.go | 41 - telemetry/logging.go | 77 ++ telemetry/opentelemetry-proto | 1 + telemetry/pipeliner.go | 143 --- telemetry/pubsub.go | 351 ++++++++ telemetry/sdklog/batch_processor.go | 334 +++++++ telemetry/sdklog/exporter.go | 8 + telemetry/sdklog/logger.go | 36 + telemetry/sdklog/otlploggrpc/client.go | 298 +++++++ .../internal/envconfig/envconfig.go | 190 ++++ .../internal/otlpconfig/envconfig.go | 141 +++ .../internal/otlpconfig/options.go | 332 +++++++ .../internal/otlpconfig/optiontypes.go | 40 + .../otlploggrpc/internal/otlpconfig/tls.go | 27 + .../otlploggrpc/internal/partialsuccess.go | 46 + .../otlploggrpc/internal/retry/retry.go | 145 ++++ telemetry/sdklog/otlploggrpc/options.go | 202 +++++ telemetry/sdklog/otlploghttp/client.go | 250 ++++++ .../sdklog/otlploghttp/transform/resource.go | 137 +++ .../sdklog/otlploghttp/transform/tranform.go | 159 ++++ telemetry/sdklog/processor.go | 48 + telemetry/sdklog/provider.go | 80 ++ telemetry/servers.go | 231 +++++ telemetry/servers.pb.go | 182 ++++ telemetry/servers.proto | 24 + telemetry/servers_grpc.pb.go | 373 ++++++++ telemetry/span.go | 398 +++++++++ telemetry/telemetry.go | 173 ---- .../testdata/.gitattributes | 0 .../testdata/pull_request.synchronize.json | 0 .../pipeline => telemetry}/testdata/push.json | 0 
.../testdata/workflow_dispatch.json | 0 {core/pipeline => telemetry}/util.go | 2 +- telemetry/writer.go | 132 --- tracing/graphql.go | 143 --- tracing/tracing.go | 78 -- 182 files changed, 10546 insertions(+), 8718 deletions(-) create mode 100644 .gitmodules delete mode 100644 cmd/dagger-graph/README.md delete mode 100644 cmd/dagger-graph/example.svg delete mode 100644 cmd/dagger-graph/main.go delete mode 100644 cmd/dagger-graph/vertex.go create mode 100644 cmd/dagger/shell_nounix.go create mode 100644 cmd/dagger/shell_unix.go create mode 100644 cmd/dagger/watch.go delete mode 100644 cmd/otel-collector/logs.go delete mode 100644 cmd/otel-collector/loki/client.go delete mode 100644 cmd/otel-collector/main.go delete mode 100644 cmd/otel-collector/summary.go delete mode 100644 cmd/otel-collector/traces.go delete mode 100644 cmd/otel-collector/vertex.go delete mode 100644 cmd/upload-journal/main.go delete mode 100644 core/pipeline/label.go delete mode 100644 core/pipeline/label_test.go create mode 100644 core/schema/deprecations.go create mode 100644 core/tracing.go delete mode 100644 dagql/demo/main.go create mode 100644 dagql/idtui/sigquit.go create mode 100644 dagql/idtui/sigquit_windows.go create mode 100644 dagql/idtui/spans.go delete mode 100644 dagql/idtui/steps.go create mode 100644 dagql/tracing.go create mode 100644 engine/buildkit/gateway.go delete mode 100644 engine/buildkit/progrock.go delete mode 100644 engine/client/progrock.go create mode 100644 engine/client/tracing.go delete mode 100644 internal/tui/details.go delete mode 100644 internal/tui/editor.go delete mode 100644 internal/tui/group.go delete mode 100644 internal/tui/item.go delete mode 100644 internal/tui/keys.go delete mode 100644 internal/tui/model.go delete mode 100644 internal/tui/style.go delete mode 100644 internal/tui/tree.go delete mode 100644 internal/tui/util.go create mode 100644 sdk/go/telemetry/attrs.go create mode 100644 sdk/go/telemetry/batch_processor.go create mode 100644 sdk/go/telemetry/init.go create mode 100644 sdk/go/telemetry/processor.go create mode 100644 sdk/go/telemetry/proxy.go create mode 100644 sdk/go/telemetry/span.go create mode 100644 telemetry/attrs.go create mode 100644 telemetry/env.go create mode 100644 telemetry/env/env.go delete mode 100644 telemetry/event.go create mode 100644 telemetry/exporters.go create mode 100644 telemetry/generate.go create mode 100644 telemetry/graphql.go create mode 100644 telemetry/grpc.go create mode 100644 telemetry/inflight/batch_processor.go create mode 100644 telemetry/inflight/processor.go create mode 100644 telemetry/inflight/proxy.go create mode 100644 telemetry/init.go create mode 100644 telemetry/labels.go create mode 100644 telemetry/labels_test.go delete mode 100644 telemetry/legacy.go create mode 100644 telemetry/logging.go create mode 160000 telemetry/opentelemetry-proto delete mode 100644 telemetry/pipeliner.go create mode 100644 telemetry/pubsub.go create mode 100644 telemetry/sdklog/batch_processor.go create mode 100644 telemetry/sdklog/exporter.go create mode 100644 telemetry/sdklog/logger.go create mode 100644 telemetry/sdklog/otlploggrpc/client.go create mode 100644 telemetry/sdklog/otlploggrpc/internal/envconfig/envconfig.go create mode 100644 telemetry/sdklog/otlploggrpc/internal/otlpconfig/envconfig.go create mode 100644 telemetry/sdklog/otlploggrpc/internal/otlpconfig/options.go create mode 100644 telemetry/sdklog/otlploggrpc/internal/otlpconfig/optiontypes.go create mode 100644 
telemetry/sdklog/otlploggrpc/internal/otlpconfig/tls.go create mode 100644 telemetry/sdklog/otlploggrpc/internal/partialsuccess.go create mode 100644 telemetry/sdklog/otlploggrpc/internal/retry/retry.go create mode 100644 telemetry/sdklog/otlploggrpc/options.go create mode 100644 telemetry/sdklog/otlploghttp/client.go create mode 100644 telemetry/sdklog/otlploghttp/transform/resource.go create mode 100644 telemetry/sdklog/otlploghttp/transform/tranform.go create mode 100644 telemetry/sdklog/processor.go create mode 100644 telemetry/sdklog/provider.go create mode 100644 telemetry/servers.go create mode 100644 telemetry/servers.pb.go create mode 100644 telemetry/servers.proto create mode 100644 telemetry/servers_grpc.pb.go create mode 100644 telemetry/span.go delete mode 100644 telemetry/telemetry.go rename {core/pipeline => telemetry}/testdata/.gitattributes (100%) rename {core/pipeline => telemetry}/testdata/pull_request.synchronize.json (100%) rename {core/pipeline => telemetry}/testdata/push.json (100%) rename {core/pipeline => telemetry}/testdata/workflow_dispatch.json (100%) rename {core/pipeline => telemetry}/util.go (98%) delete mode 100644 telemetry/writer.go delete mode 100644 tracing/graphql.go delete mode 100644 tracing/tracing.go diff --git a/.gitignore b/.gitignore index 7411f51fd4a..82fc31613e3 100644 --- a/.gitignore +++ b/.gitignore @@ -45,4 +45,3 @@ go.work.sum # merged from dagger/examples repository **/node_modules -**/env diff --git a/.gitmodules b/.gitmodules new file mode 100644 index 00000000000..a6d49dfbae0 --- /dev/null +++ b/.gitmodules @@ -0,0 +1,3 @@ +[submodule "telemetry/opentelemetry-proto"] + path = telemetry/opentelemetry-proto + url = https://github.com/open-telemetry/opentelemetry-proto diff --git a/analytics/analytics.go b/analytics/analytics.go index 3f4d94404f2..2a7d8247f67 100644 --- a/analytics/analytics.go +++ b/analytics/analytics.go @@ -8,14 +8,14 @@ import ( "encoding/json" "fmt" "io" + "log/slog" "net/http" "os" "sync" "time" - "github.com/dagger/dagger/core/pipeline" "github.com/dagger/dagger/engine" - "github.com/vito/progrock" + "github.com/dagger/dagger/telemetry" ) const ( @@ -78,30 +78,20 @@ func DoNotTrack() bool { type Config struct { DoNotTrack bool - Labels pipeline.Labels + Labels telemetry.Labels CloudToken string } -func DefaultConfig() Config { +func DefaultConfig(labels telemetry.Labels) Config { cfg := Config{ DoNotTrack: DoNotTrack(), CloudToken: os.Getenv("DAGGER_CLOUD_TOKEN"), + Labels: labels, } // Backward compatibility with the old environment variable. if cfg.CloudToken == "" { cfg.CloudToken = os.Getenv("_EXPERIMENTAL_DAGGER_CLOUD_TOKEN") } - - workdir, err := os.Getwd() - if err != nil { - fmt.Fprintf(os.Stderr, "failed to get cwd: %v\n", err) - return cfg - } - - cfg.Labels.AppendCILabel() - cfg.Labels = append(cfg.Labels, pipeline.LoadVCSLabels(workdir)...) - cfg.Labels = append(cfg.Labels, pipeline.LoadClientLabels(engine.Version)...) 
- return cfg } @@ -111,8 +101,7 @@ type queuedEvent struct { } type CloudTracker struct { - cfg Config - labels map[string]string + cfg Config closed bool mu sync.Mutex @@ -128,15 +117,10 @@ func New(cfg Config) Tracker { t := &CloudTracker{ cfg: cfg, - labels: make(map[string]string), stopCh: make(chan struct{}), doneCh: make(chan struct{}), } - for _, l := range cfg.Labels { - t.labels[l.Name] = l.Value - } - go t.start() return t @@ -155,19 +139,19 @@ func (t *CloudTracker) Capture(ctx context.Context, event string, properties map Type: event, Properties: properties, - DeviceID: t.labels["dagger.io/client.machine_id"], + DeviceID: t.cfg.Labels["dagger.io/client.machine_id"], - ClientVersion: t.labels["dagger.io/client.version"], - ClientOS: t.labels["dagger.io/client.os"], - ClientArch: t.labels["dagger.io/client.arch"], + ClientVersion: t.cfg.Labels["dagger.io/client.version"], + ClientOS: t.cfg.Labels["dagger.io/client.os"], + ClientArch: t.cfg.Labels["dagger.io/client.arch"], - CI: t.labels["dagger.io/ci"] == "true", - CIVendor: t.labels["dagger.io/ci.vendor"], + CI: t.cfg.Labels["dagger.io/ci"] == "true", + CIVendor: t.cfg.Labels["dagger.io/ci.vendor"], } - if remote := t.labels["dagger.io/git.remote"]; remote != "" { + if remote := t.cfg.Labels["dagger.io/git.remote"]; remote != "" { ev.GitRemoteEncoded = fmt.Sprintf("%x", base64.StdEncoding.EncodeToString([]byte(remote))) } - if author := t.labels["dagger.io/git.author.email"]; author != "" { + if author := t.cfg.Labels["dagger.io/git.author.email"]; author != "" { ev.GitAuthorHashed = fmt.Sprintf("%x", sha256.Sum256([]byte(author))) } if clientMetadata, err := engine.ClientMetadataFromContext(ctx); err == nil { @@ -203,21 +187,19 @@ func (t *CloudTracker) send() { } // grab the progrock recorder from the last event in the queue - rec := progrock.FromContext(queue[len(queue)-1].ctx) - payload := bytes.NewBuffer([]byte{}) enc := json.NewEncoder(payload) for _, q := range queue { err := enc.Encode(q.event) if err != nil { - rec.Debug("analytics: encode failed", progrock.ErrorLabel(err)) + slog.Debug("analytics: encode failed", "error", err) continue } } req, err := http.NewRequest(http.MethodPost, trackURL, bytes.NewReader(payload.Bytes())) if err != nil { - rec.Debug("analytics: new request failed", progrock.ErrorLabel(err)) + slog.Debug("analytics: new request failed", "error", err) return } if t.cfg.CloudToken != "" { @@ -225,12 +207,12 @@ func (t *CloudTracker) send() { } resp, err := http.DefaultClient.Do(req) if err != nil { - rec.Debug("analytics: do request failed", progrock.ErrorLabel(err)) + slog.Debug("analytics: do request failed", "error", err) return } defer resp.Body.Close() if resp.StatusCode != http.StatusCreated { - rec.Debug("analytics: unexpected response", progrock.Labelf("status", resp.Status)) + slog.Debug("analytics: unexpected response", "status", resp.Status) } } diff --git a/cmd/codegen/codegen.go b/cmd/codegen/codegen.go index 5b87c9f8a3b..144b1106155 100644 --- a/cmd/codegen/codegen.go +++ b/cmd/codegen/codegen.go @@ -4,10 +4,8 @@ import ( "context" "encoding/json" "fmt" + "os" "strings" - "time" - - "github.com/vito/progrock" "dagger.io/dagger" "github.com/dagger/dagger/cmd/codegen/generator" @@ -17,18 +15,14 @@ import ( ) func Generate(ctx context.Context, cfg generator.Config, dag *dagger.Client) (err error) { - var vtxName string + logsW := os.Stdout + if cfg.ModuleName != "" { - vtxName = fmt.Sprintf("generating %s module: %s", cfg.Lang, cfg.ModuleName) + fmt.Fprintf(logsW, "generating %s module: 
%s\n", cfg.Lang, cfg.ModuleName) } else { - vtxName = fmt.Sprintf("generating %s SDK client", cfg.Lang) + fmt.Fprintf(logsW, "generating %s SDK client\n", cfg.Lang) } - ctx, vtx := progrock.Span(ctx, time.Now().String(), vtxName) - defer func() { vtx.Done(err) }() - - logsW := vtx.Stdout() - var introspectionSchema *introspection.Schema if cfg.IntrospectionJSON != "" { var resp introspection.Response @@ -55,12 +49,12 @@ func Generate(ctx context.Context, cfg generator.Config, dag *dagger.Client) (er for _, cmd := range generated.PostCommands { cmd.Dir = cfg.OutputDir - cmd.Stdout = vtx.Stdout() - cmd.Stderr = vtx.Stderr() - task := vtx.Task(strings.Join(cmd.Args, " ")) + cmd.Stdout = os.Stdout + cmd.Stderr = os.Stderr + fmt.Fprintln(logsW, "running post-command:", strings.Join(cmd.Args, " ")) err := cmd.Run() - task.Done(err) if err != nil { + fmt.Fprintln(logsW, "post-command failed:", err) return err } } diff --git a/cmd/codegen/generator/go/generator.go b/cmd/codegen/generator/go/generator.go index 0e43460821b..d215a286289 100644 --- a/cmd/codegen/generator/go/generator.go +++ b/cmd/codegen/generator/go/generator.go @@ -56,7 +56,11 @@ func (g *GoGenerator) Generate(ctx context.Context, schema *introspection.Schema var overlay fs.FS = mfs if g.Config.ModuleName != "" { - overlay = layerfs.New(mfs, &MountedFS{FS: dagger.QueryBuilder, Name: "internal"}) + overlay = layerfs.New( + mfs, + &MountedFS{FS: dagger.QueryBuilder, Name: "internal"}, + &MountedFS{FS: dagger.Telemetry, Name: "internal"}, + ) } genSt := &generator.GeneratedState{ diff --git a/cmd/codegen/generator/go/templates/module_interfaces.go b/cmd/codegen/generator/go/templates/module_interfaces.go index 5807d1d63d5..f283fb16b7d 100644 --- a/cmd/codegen/generator/go/templates/module_interfaces.go +++ b/cmd/codegen/generator/go/templates/module_interfaces.go @@ -346,7 +346,7 @@ func (spec *parsedIfaceType) marshalJSONMethodCode() *Statement { BlockFunc(func(g *Group) { g.If(Id("r").Op("==").Nil()).Block(Return(Index().Byte().Parens(Lit(`""`)), Nil())) - g.List(Id("id"), Id("err")).Op(":=").Id("r").Dot("ID").Call(Qual("context", "Background").Call()) + g.List(Id("id"), Id("err")).Op(":=").Id("r").Dot("ID").Call(Id("marshalCtx")) g.If(Id("err").Op("!=").Nil()).Block(Return(Nil(), Id("err"))) g.Return(Id("json").Dot("Marshal").Call(Id("id"))) }) diff --git a/cmd/codegen/generator/go/templates/modules.go b/cmd/codegen/generator/go/templates/modules.go index 555fefb13aa..0b8ad90df1b 100644 --- a/cmd/codegen/generator/go/templates/modules.go +++ b/cmd/codegen/generator/go/templates/modules.go @@ -44,7 +44,7 @@ from the Engine, calls the relevant function and returns the result. The generat on the object+function name, with each case doing json deserialization of the input arguments and calling the actual Go function. */ -func (funcs goTemplateFuncs) moduleMainSrc() (string, error) { +func (funcs goTemplateFuncs) moduleMainSrc() (string, error) { //nolint: gocyclo // HACK: the code in this func can be pretty flaky and tricky to debug - // it's much easier to debug when we actually have stack traces, so we grab // those on a panic @@ -93,6 +93,12 @@ func (funcs goTemplateFuncs) moduleMainSrc() (string, error) { tps := []types.Type{} for _, obj := range objs { + // ignore any private definitions, they may be part of the runtime itself + // e.g. 
marshalCtx + if !obj.Exported() { + continue + } + // check if this is the constructor func, save it for later if so if ok := ps.checkConstructor(obj); ok { continue @@ -230,58 +236,86 @@ const ( mainSrc = `func main() { ctx := context.Background() + // Direct slog to the new stderr. This is only for dev time debugging, and + // runtime errors/warnings. + slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{ + Level: slog.LevelWarn, + }))) + + if err := dispatch(ctx); err != nil { + fmt.Println(err.Error()) + os.Exit(2) + } +} + +func dispatch(ctx context.Context) error { + ctx = telemetry.InitEmbedded(ctx, resource.NewWithAttributes( + semconv.SchemaURL, + semconv.ServiceNameKey.String("dagger-go-sdk"), + // TODO version? + )) + defer telemetry.Close() + + ctx, span := Tracer().Start(ctx, "Go runtime", + trace.WithAttributes( + // In effect, the following two attributes hide the exec /runtime span. + // + // Replace the parent span, + attribute.Bool("dagger.io/ui.mask", true), + // and only show our children. + attribute.Bool("dagger.io/ui.passthrough", true), + )) + defer span.End() + + // A lot of the "work" actually happens when we're marshalling the return + // value, which entails getting object IDs, which happens in MarshalJSON, + // which has no ctx argument, so we use this lovely global variable. + setMarshalContext(ctx) + fnCall := dag.CurrentFunctionCall() parentName, err := fnCall.ParentName(ctx) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("get parent name: %w", err) } fnName, err := fnCall.Name(ctx) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("get fn name: %w", err) } parentJson, err := fnCall.Parent(ctx) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("get fn parent: %w", err) } fnArgs, err := fnCall.InputArgs(ctx) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("get fn args: %w", err) } inputArgs := map[string][]byte{} for _, fnArg := range fnArgs { argName, err := fnArg.Name(ctx) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("get fn arg name: %w", err) } argValue, err := fnArg.Value(ctx) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("get fn arg value: %w", err) } inputArgs[argName] = []byte(argValue) } result, err := invoke(ctx, []byte(parentJson), parentName, fnName, inputArgs) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("invoke: %w", err) } resultBytes, err := json.Marshal(result) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("marshal: %w", err) } _, err = fnCall.ReturnValue(ctx, JSON(resultBytes)) if err != nil { - fmt.Println(err.Error()) - os.Exit(2) + return fmt.Errorf("store return value: %w", err) } + return nil } ` parentJSONVar = "parentJSON" diff --git a/cmd/codegen/generator/go/templates/src/_dagger.gen.go/alias.go.tmpl b/cmd/codegen/generator/go/templates/src/_dagger.gen.go/alias.go.tmpl index e6acec2b70d..dd16e6b6520 100644 --- a/cmd/codegen/generator/go/templates/src/_dagger.gen.go/alias.go.tmpl +++ b/cmd/codegen/generator/go/templates/src/_dagger.gen.go/alias.go.tmpl @@ -1,15 +1,33 @@ import ( + "context" + "log/slog" + "{{.PackageImport}}/internal/dagger" + + "go.opentelemetry.io/otel/trace" ) var dag = dagger.Connect() +func Tracer() trace.Tracer { + return otel.Tracer("dagger.io/sdk.go") +} + +// used for local MarshalJSON implementations +var marshalCtx = context.Background() + +// called by 
main() +func setMarshalContext(ctx context.Context) { + marshalCtx = ctx + dagger.SetMarshalContext(ctx) +} + type DaggerObject = dagger.DaggerObject type ExecError = dagger.ExecError {{ range .Types }} -{{ $name := .Name | FormatName }} +{{ $name := .Name | FormatName }} {{ .Description | Comment }} type {{ $name }} = dagger.{{ $name }} diff --git a/cmd/codegen/generator/go/templates/src/_dagger.gen.go/defs.go.tmpl b/cmd/codegen/generator/go/templates/src/_dagger.gen.go/defs.go.tmpl index 832f862dec9..9309d777fb4 100644 --- a/cmd/codegen/generator/go/templates/src/_dagger.gen.go/defs.go.tmpl +++ b/cmd/codegen/generator/go/templates/src/_dagger.gen.go/defs.go.tmpl @@ -1,6 +1,7 @@ import ( "context" "encoding/json" + "log/slog" "errors" "fmt" "net" @@ -13,14 +14,34 @@ import ( "github.com/Khan/genqlient/graphql" "github.com/vektah/gqlparser/v2/gqlerror" + "go.opentelemetry.io/otel/propagation" + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/trace" {{ if IsModuleCode }} "{{.PackageImport}}/internal/querybuilder" + "{{.PackageImport}}/internal/telemetry" {{ else }} "{{.PackageImport}}/querybuilder" + "{{.PackageImport}}/telemetry" {{ end }} ) +func Tracer() trace.Tracer { + return otel.Tracer("dagger.io/sdk.go") +} + +// reassigned at runtime after the span is initialized +var marshalCtx = context.Background() + +{{ if IsModuleCode }} +// SetMarshalContext is a hack that lets us set the ctx to use for +// MarshalJSON implementations that get an object's ID. +func SetMarshalContext(ctx context.Context) { + marshalCtx = ctx +} +{{ end }} + // assertNotNil panic if the given value is nil. // This function is used to validate that input with pointer type are not nil. // See https://github.com/dagger/dagger/issues/5696 for more context. @@ -163,6 +184,13 @@ func getClientParams() (graphql.Client, *querybuilder.Selection) { httpClient := &http.Client{ Transport: roundTripperFunc(func(r *http.Request) (*http.Response, error) { r.SetBasicAuth(sessionToken, "") + + // detect $TRACEPARENT set by 'dagger run' + r = r.WithContext(fallbackSpanContext(r.Context())) + + // propagate span context via headers (i.e. 
for Dagger-in-Dagger) + propagation.TraceContext{}.Inject(r.Context(), propagation.HeaderCarrier(r.Header)) + return dialTransport.RoundTrip(r) }), } @@ -171,6 +199,17 @@ func getClientParams() (graphql.Client, *querybuilder.Selection) { return gqlClient, querybuilder.Query() } +func fallbackSpanContext(ctx context.Context) context.Context { + if trace.SpanContextFromContext(ctx).IsValid() { + return ctx + } + if p, ok := os.LookupEnv("TRACEPARENT"); ok { + slog.Debug("falling back to $TRACEPARENT", "value", p) + return propagation.TraceContext{}.Extract(ctx, propagation.MapCarrier{"traceparent": p}) + } + return ctx +} + // TODO: pollutes namespace, move to non internal package in dagger.io/dagger type roundTripperFunc func(*http.Request) (*http.Response, error) diff --git a/cmd/codegen/generator/go/templates/src/_types/object.go.tmpl b/cmd/codegen/generator/go/templates/src/_types/object.go.tmpl index f5b0d759c82..aa599b50af9 100644 --- a/cmd/codegen/generator/go/templates/src/_types/object.go.tmpl +++ b/cmd/codegen/generator/go/templates/src/_types/object.go.tmpl @@ -175,7 +175,7 @@ func (r *{{ $.Name | FormatName }}) XXX_GraphQLID(ctx context.Context) (string, } func (r *{{ $.Name | FormatName }}) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } diff --git a/cmd/codegen/main.go b/cmd/codegen/main.go index 5b33ab08013..039950034d4 100644 --- a/cmd/codegen/main.go +++ b/cmd/codegen/main.go @@ -5,8 +5,6 @@ import ( "os" "github.com/spf13/cobra" - "github.com/vito/progrock" - "github.com/vito/progrock/console" "dagger.io/dagger" "github.com/dagger/dagger/cmd/codegen/generator" @@ -15,7 +13,6 @@ import ( var ( outputDir string lang string - propagateLogs bool introspectionJSONPath string moduleContextPath string @@ -35,15 +32,12 @@ var rootCmd = &cobra.Command{ func init() { rootCmd.Flags().StringVar(&lang, "lang", "go", "language to generate") rootCmd.Flags().StringVarP(&outputDir, "output", "o", ".", "output directory") - rootCmd.Flags().BoolVar(&propagateLogs, "propagate-logs", false, "propagate logs directly to progrock.sock") rootCmd.Flags().StringVar(&introspectionJSONPath, "introspection-json-path", "", "optional path to file containing pre-computed graphql introspection JSON") rootCmd.Flags().StringVar(&moduleContextPath, "module-context", "", "path to context directory of the module") rootCmd.Flags().StringVar(&moduleName, "module-name", "", "name of module to generate code for") } -const nestedSock = "/.progrock.sock" - func ClientGen(cmd *cobra.Command, args []string) error { ctx := cmd.Context() dag, err := dagger.Connect(ctx, dagger.WithSkipCompatibilityCheck()) @@ -51,28 +45,6 @@ func ClientGen(cmd *cobra.Command, args []string) error { return err } - var progW progrock.Writer - var dialErr error - if propagateLogs { - progW, dialErr = progrock.DialRPC(ctx, "unix://"+nestedSock) - if dialErr != nil { - return fmt.Errorf("error connecting to progrock: %w; falling back to console output", dialErr) - } - } else { - progW = console.NewWriter(os.Stderr, console.WithMessageLevel(progrock.MessageLevel_DEBUG)) - } - - var rec *progrock.Recorder - if parent := os.Getenv("_DAGGER_PROGROCK_PARENT"); parent != "" { - rec = progrock.NewSubRecorder(progW, parent) - } else { - rec = progrock.NewRecorder(progW) - } - defer rec.Complete() - defer rec.Close() - - ctx = progrock.ToContext(ctx, rec) - cfg := generator.Config{ Lang: generator.SDKLang(lang), diff --git a/cmd/dagger-graph/README.md 
b/cmd/dagger-graph/README.md deleted file mode 100644 index bdbfd0954f0..00000000000 --- a/cmd/dagger-graph/README.md +++ /dev/null @@ -1,14 +0,0 @@ -# dagger-graph - -**Experimental** tool to generate graphs from a dagger journal file. Built using [D2](https://d2lang.com/tour/intro/). - -## Usage - -```console -_EXPERIMENTAL_DAGGER_JOURNAL="./journal.log" go run mycode.go -dagger-graph ./journal.log ./graph.svg -``` - -## Example - -![example](./example.svg) \ No newline at end of file diff --git a/cmd/dagger-graph/example.svg b/cmd/dagger-graph/example.svg deleted file mode 100644 index 0f0fb8d27c9..00000000000 --- a/cmd/dagger-graph/example.svg +++ /dev/null @@ -1,59 +0,0 @@ - -nodejsgolangrepositoryfrom node:18-alpinefrom golangci/golangci-lint:v1.48lintcopy /sdk/go /host.directory /Users/al/work/daggerlintbuildcopy / /workdirexec docker-entrypoint.sh yarn installcopy /yarn.lock /workdir/yarn.lockcopy /package.json /workdir/package.jsoncopy /sdk/nodejs /resolve image config for docker.io/library/node:18-alpineresolve image config for docker.io/golangci/golangci-lint:v1.48exec golangci-lint run -v --timeout 5mcopy /Users/al/work/daggerupload /Users/al/work/daggerpull docker.io/golangci/golangci-lint:v1.48exec docker-entrypoint.sh yarn lintexec docker-entrypoint.sh yarn buildpull docker.io/library/node:18-alpine - - - \ No newline at end of file diff --git a/cmd/dagger-graph/main.go b/cmd/dagger-graph/main.go deleted file mode 100644 index b02b5e9f683..00000000000 --- a/cmd/dagger-graph/main.go +++ /dev/null @@ -1,140 +0,0 @@ -package main - -import ( - "context" - "encoding/json" - "errors" - "fmt" - "io" - "os" - "strings" - "time" - - "github.com/dagger/dagger/telemetry" - "github.com/vito/progrock" - - "oss.terrastruct.com/d2/d2lib" - "oss.terrastruct.com/d2/d2renderers/d2svg" - "oss.terrastruct.com/d2/lib/textmeasure" - "oss.terrastruct.com/util-go/go2" -) - -func main() { - if len(os.Args) < 3 { - fmt.Fprintf(os.Stderr, "Usage: %s \n", os.Args[0]) - os.Exit(1) - } - var ( - input = os.Args[1] - output = os.Args[2] - ) - pl := loadEvents(input) - graph := generateGraph(pl.Vertices()) - svg, err := renderSVG(graph) - if err != nil { - fmt.Fprintf(os.Stderr, "%v\n", err) - os.Exit(1) - } - if err := os.WriteFile(output, svg, 0600); err != nil { - fmt.Fprintf(os.Stderr, "%v\n", err) - os.Exit(1) - } -} - -func loadEvents(journal string) *telemetry.Pipeliner { - f, err := os.Open(journal) - if err != nil { - panic(err) - } - - defer f.Close() - - pl := telemetry.NewPipeliner() - - dec := json.NewDecoder(f) - - for { - var update progrock.StatusUpdate - if err := dec.Decode(&update); err != nil { - if errors.Is(err, io.EOF) { - break - } - - panic(err) - } - - pl.TrackUpdate(&update) - } - - return pl -} - -func generateGraph(vertices []*telemetry.PipelinedVertex) string { - s := strings.Builder{} - - vertexToGraphID := map[string]string{} - for _, v := range vertices { - w := WrappedVertex{v} - - if w.Internal() { - continue - } - - graphPath := []string{} - for _, p := range w.Pipeline() { - graphPath = append(graphPath, fmt.Sprintf("%q", p.Name)) - } - graphPath = append(graphPath, fmt.Sprintf("%q", w.ID())) - graphID := strings.Join(graphPath, ".") - - duration := w.Duration().Round(time.Second / 10).String() - if w.Cached() { - duration = "CACHED" - } - - // `$` has special meaning in D2 - name := strings.ReplaceAll(w.Name(), "$", "") + " (" + duration + ")" - - vertexToGraphID[w.ID()] = graphID - s.WriteString(graphID + ": {\n") - s.WriteString(fmt.Sprintf(" label: %q\n", name)) - 
s.WriteString("}\n") - } - - for _, v := range vertices { - w := WrappedVertex{v} - if w.Internal() { - continue - } - - graphID := vertexToGraphID[w.ID()] - if graphID == "" { - fmt.Printf("id %s not found\n", w.ID()) - continue - } - for _, input := range w.Inputs() { - source := vertexToGraphID[input] - if source == "" { - continue - } - s.WriteString(fmt.Sprintf("%s <- %s\n", graphID, source)) - } - } - - return s.String() -} - -func renderSVG(graph string) ([]byte, error) { - ruler, err := textmeasure.NewRuler() - if err != nil { - return nil, err - } - diagram, _, err := d2lib.Compile(context.Background(), graph, &d2lib.CompileOptions{ - Layout: go2.Pointer("dagre"), - Ruler: ruler, - }, &d2svg.RenderOpts{}) - if err != nil { - return nil, err - } - return d2svg.Render(diagram, &d2svg.RenderOpts{}) -} diff --git a/cmd/dagger-graph/vertex.go b/cmd/dagger-graph/vertex.go deleted file mode 100644 index 3dce5e77859..00000000000 --- a/cmd/dagger-graph/vertex.go +++ /dev/null @@ -1,68 +0,0 @@ -package main - -import ( - "fmt" - "strings" - "time" - - "github.com/dagger/dagger/core/pipeline" - "github.com/dagger/dagger/telemetry" -) - -type WrappedVertex struct { - v *telemetry.PipelinedVertex -} - -func (w WrappedVertex) ID() string { - return w.v.Id -} - -func (w WrappedVertex) FullName() string { - path := []string{} - for _, p := range w.Pipeline() { - path = append(path, p.Name) - } - path = append(path, fmt.Sprintf("%q", w.ID())) - return strings.Join(path, ".") -} - -func (w WrappedVertex) Name() string { - return w.v.Name -} - -func (w WrappedVertex) Pipeline() pipeline.Path { - if len(w.v.Pipelines) == 0 { - return pipeline.Path{} - } - return w.v.Pipelines[0] -} - -func (w WrappedVertex) Internal() bool { - return w.v.Internal -} - -func (w WrappedVertex) Inputs() []string { - return w.v.Inputs -} - -func (w WrappedVertex) Started() time.Time { - if w.v.Started == nil { - return time.Time{} - } - return w.v.Started.AsTime() -} - -func (w WrappedVertex) Completed() time.Time { - if w.v.Completed == nil { - return time.Time{} - } - return w.v.Completed.AsTime() -} - -func (w WrappedVertex) Duration() time.Duration { - return w.Completed().Sub(w.Started()) -} - -func (w WrappedVertex) Cached() bool { - return w.v.Cached -} diff --git a/cmd/dagger/config.go b/cmd/dagger/config.go index 1d4b64bd35c..8ab4603adde 100644 --- a/cmd/dagger/config.go +++ b/cmd/dagger/config.go @@ -8,13 +8,11 @@ import ( "dagger.io/dagger" "github.com/dagger/dagger/core/modules" - "github.com/dagger/dagger/dagql/idtui" "github.com/dagger/dagger/engine/client" "github.com/juju/ansiterm/tabwriter" "github.com/muesli/termenv" "github.com/spf13/cobra" "github.com/spf13/pflag" - "github.com/vito/progrock" ) var configJSONOutput bool @@ -40,11 +38,6 @@ dagger config -m github.com/dagger/hello-dagger Args: cobra.NoArgs, GroupID: moduleGroup.ID, RunE: configSubcmdRun(func(ctx context.Context, cmd *cobra.Command, _ []string, modConf *configuredModule) (err error) { - ctx, vtx := progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer func() { vtx.Done(err) }() - cmd.SetContext(ctx) - setCmdOutput(cmd, vtx) - if configJSONOutput { cfgContents, err := modConf.Source.Directory(".").File(modules.Filename).Contents(ctx) if err != nil { @@ -440,12 +433,7 @@ func (run configSubcmdRun) runE(localOnly bool) cobraRunE { return func(cmd *cobra.Command, args []string) error { ctx := cmd.Context() - return withEngineAndTUI(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { - ctx, 
vtx := progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer func() { vtx.Done(err) }() - cmd.SetContext(ctx) - setCmdOutput(cmd, vtx) - + return withEngine(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { modConf, err := getDefaultModuleConfiguration(ctx, engineClient.Dagger(), true, true) if err != nil { return fmt.Errorf("failed to load module: %w", err) diff --git a/cmd/dagger/engine.go b/cmd/dagger/engine.go index d63cae91f15..39453fe53a2 100644 --- a/cmd/dagger/engine.go +++ b/cmd/dagger/engine.go @@ -2,61 +2,15 @@ package main import ( "context" - "errors" - "fmt" - "os" - tea "github.com/charmbracelet/bubbletea" - "github.com/dagger/dagger/dagql/idtui" "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/client" - "github.com/dagger/dagger/internal/tui" "github.com/dagger/dagger/telemetry" - "github.com/mattn/go-isatty" - "github.com/vito/progrock" - "github.com/vito/progrock/console" ) -var silent bool - -var progress string -var stdoutIsTTY = isatty.IsTerminal(os.Stdout.Fd()) -var stderrIsTTY = isatty.IsTerminal(os.Stderr.Fd()) - -var autoTTY = stdoutIsTTY || stderrIsTTY - -func init() { - rootCmd.PersistentFlags().BoolVarP( - &silent, - "silent", - "s", - false, - "disable terminal UI and progress output", - ) - - rootCmd.PersistentFlags().StringVar( - &progress, - "progress", - "auto", - "progress output format (auto, plain, tty)", - ) -} - -// show only focused vertices -var focus bool - -// show errored vertices even if focused -// -// set this to false if your command handles errors (e.g. dagger checks) -var revealErrored = true - -var interactive = os.Getenv("_EXPERIMENTAL_DAGGER_INTERACTIVE_TUI") != "" - type runClientCallback func(context.Context, *client.Client) error -var useLegacyTUI = os.Getenv("_EXPERIMENTAL_DAGGER_LEGACY_TUI") != "" - -func withEngineAndTUI( +func withEngine( ctx context.Context, params client.Params, fn runClientCallback, @@ -71,141 +25,22 @@ func withEngineAndTUI( params.DisableHostRW = disableHostRW - if params.JournalFile == "" { - params.JournalFile = os.Getenv("_EXPERIMENTAL_DAGGER_JOURNAL") - } + params.EngineNameCallback = Frontend.ConnectedToEngine - if interactive { - return interactiveTUI(ctx, params, fn) - } - if useLegacyTUI { - if progress == "auto" && autoTTY || progress == "tty" { - return legacyTUI(ctx, params, fn) - } else { - return plainConsole(ctx, params, fn) - } - } - return runWithFrontend(ctx, params, fn) -} + params.CloudURLCallback = Frontend.ConnectedToCloud -// TODO remove when legacy TUI is no longer supported; this has been -// assimilated into idtui.Frontend -func plainConsole(ctx context.Context, params client.Params, fn runClientCallback) error { - opts := []console.WriterOpt{ - console.ShowInternal(debug), + params.EngineTrace = telemetry.SpanForwarder{ + Processors: telemetry.SpanProcessors, } - if debug { - opts = append(opts, console.WithMessageLevel(progrock.MessageLevel_DEBUG)) - } - progW := console.NewWriter(os.Stderr, opts...) 
- params.ProgrockWriter = progW - params.EngineNameCallback = func(name string) { - fmt.Fprintln(os.Stderr, "Connected to engine", name) - } - params.CloudURLCallback = func(cloudURL string) { - fmt.Fprintln(os.Stderr, "Dagger Cloud URL:", cloudURL) - } - engineClient, ctx, err := client.Connect(ctx, params) - if err != nil { - return err - } - defer engineClient.Close() - return fn(ctx, engineClient) -} - -func runWithFrontend( - ctx context.Context, - params client.Params, - fn runClientCallback, -) error { - frontend := idtui.New() - frontend.Debug = debug - frontend.Plain = progress == "plain" - frontend.Silent = silent - params.ProgrockWriter = frontend - params.EngineNameCallback = frontend.ConnectedToEngine - params.CloudURLCallback = frontend.ConnectedToCloud - return frontend.Run(ctx, func(ctx context.Context) error { - sess, ctx, err := client.Connect(ctx, params) - if err != nil { - return err - } - defer sess.Close() - return fn(ctx, sess) - }) -} - -func legacyTUI( - ctx context.Context, - params client.Params, - fn runClientCallback, -) error { - tape := progrock.NewTape() - tape.ShowInternal(debug) - tape.Focus(focus) - tape.RevealErrored(revealErrored) - - if debug { - tape.MessageLevel(progrock.MessageLevel_DEBUG) + params.EngineLogs = telemetry.LogForwarder{ + Processors: telemetry.LogProcessors, } - params.ProgrockWriter = telemetry.NewLegacyIDInternalizer(tape) - - return progrock.DefaultUI().Run(ctx, tape, func(ctx context.Context, ui progrock.UIClient) error { - params.CloudURLCallback = func(cloudURL string) { - ui.SetStatusInfo(progrock.StatusInfo{ - Name: "Cloud URL", - Value: cloudURL, - Order: 1, - }) - } - - params.EngineNameCallback = func(name string) { - ui.SetStatusInfo(progrock.StatusInfo{ - Name: "Engine", - Value: name, - Order: 2, - }) - } - - sess, ctx, err := client.Connect(ctx, params) - if err != nil { - return err - } - defer sess.Close() - return fn(ctx, sess) - }) -} - -func interactiveTUI( - ctx context.Context, - params client.Params, - fn runClientCallback, -) error { - progR, progW := progrock.Pipe() - params.ProgrockWriter = telemetry.NewLegacyIDInternalizer(progW) - - ctx, quit := context.WithCancel(ctx) - defer quit() - - program := tea.NewProgram(tui.New(quit, progR), tea.WithAltScreen()) - - tuiDone := make(chan error, 1) - go func() { - _, err := program.Run() - tuiDone <- err - }() - sess, ctx, err := client.Connect(ctx, params) if err != nil { - tuiErr := <-tuiDone - return errors.Join(tuiErr, err) + return err } + defer sess.Close() - err = fn(ctx, sess) - - closeErr := sess.Close() - - tuiErr := <-tuiDone - return errors.Join(tuiErr, closeErr, err) + return fn(ctx, sess) } diff --git a/cmd/dagger/functions.go b/cmd/dagger/functions.go index e4b9296485c..c708264dcbc 100644 --- a/cmd/dagger/functions.go +++ b/cmd/dagger/functions.go @@ -4,18 +4,18 @@ import ( "context" "errors" "fmt" + "log/slog" "sort" "strings" "dagger.io/dagger" "dagger.io/dagger/querybuilder" - "github.com/dagger/dagger/dagql/idtui" "github.com/dagger/dagger/engine/client" + "github.com/dagger/dagger/telemetry" "github.com/juju/ansiterm/tabwriter" "github.com/muesli/termenv" "github.com/spf13/cobra" "github.com/spf13/pflag" - "github.com/vito/progrock" ) const ( @@ -123,11 +123,6 @@ func (fcs FuncCommands) All() []*cobra.Command { return cmds } -func setCmdOutput(cmd *cobra.Command, vtx *progrock.VertexRecorder) { - cmd.SetOut(vtx.Stdout()) - cmd.SetErr(vtx.Stderr()) -} - // FuncCommand is a config object used to create a dynamic set of commands // for querying a 
module's functions. type FuncCommand struct { @@ -254,7 +249,7 @@ func (fc *FuncCommand) Command() *cobra.Command { // Between PreRunE and RunE, flags are validated. RunE: func(c *cobra.Command, a []string) error { - return withEngineAndTUI(c.Context(), client.Params{}, func(ctx context.Context, engineClient *client.Client) (rerr error) { + return withEngine(c.Context(), client.Params{}, func(ctx context.Context, engineClient *client.Client) (rerr error) { fc.c = engineClient // withEngineAndTUI changes the context. @@ -280,27 +275,11 @@ func (fc *FuncCommand) Command() *cobra.Command { func (fc *FuncCommand) execute(c *cobra.Command, a []string) (rerr error) { ctx := c.Context() - var primaryVtx *progrock.VertexRecorder var cmd *cobra.Command - - // The following is a little complicated because it needs to handle the case - // where we fail to load the modules or parse the CLI. - // - // In the happy path we want to initialize the PrimaryVertex with the parsed - // command string, but we can't have that until we load the command. - // - // So we just detect if we failed before getting to that point and fall back - // to the outer command. defer func() { if cmd == nil { // errored during loading cmd = c } - if primaryVtx == nil { // errored during loading - ctx, primaryVtx = progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer func() { primaryVtx.Done(rerr) }() - cmd.SetContext(ctx) - setCmdOutput(cmd, primaryVtx) - } if ctx.Err() != nil { cmd.PrintErrln("Canceled.") } else if rerr != nil { @@ -326,12 +305,6 @@ func (fc *FuncCommand) execute(c *cobra.Command, a []string) (rerr error) { return err } - // Ok, we've loaded the command, now we can initialize the PrimaryVertex. - ctx, primaryVtx = progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer func() { primaryVtx.Done(rerr) }() - cmd.SetContext(ctx) - setCmdOutput(cmd, primaryVtx) - if fc.showHelp { // Hide aliases for sub-commands. They just allow using the SDK's // casing for functions but there's no need to advertise. 
@@ -367,8 +340,8 @@ func (fc *FuncCommand) load(c *cobra.Command, a []string) (cmd *cobra.Command, _ ctx := c.Context() dag := fc.c.Dagger() - ctx, vtx := progrock.Span(ctx, idtui.InitVertex, "initialize") - defer func() { vtx.Done(rerr) }() + ctx, span := Tracer().Start(ctx, "initialize", telemetry.Encapsulate()) + defer telemetry.End(span, func() error { return rerr }) modConf, err := getDefaultModuleConfiguration(ctx, dag, true, true) if err != nil { @@ -528,8 +501,7 @@ func (fc *FuncCommand) makeSubCmd(dag *dagger.Client, fn *modFunction) *cobra.Co ctx := cmd.Context() query, _ := fc.q.Build(ctx) - rec := progrock.FromContext(ctx) - rec.Debug("executing", progrock.Labelf("query", "%+v", query)) + slog.Debug("executing query", "query", query) var response any diff --git a/cmd/dagger/licenses.go b/cmd/dagger/licenses.go index 5a86e91f1fe..ef631480a77 100644 --- a/cmd/dagger/licenses.go +++ b/cmd/dagger/licenses.go @@ -4,11 +4,12 @@ import ( "context" "errors" "fmt" + "log/slog" "os" "path/filepath" + "github.com/dagger/dagger/telemetry" "github.com/mitchellh/go-spdx" - "github.com/vito/progrock" ) const ( @@ -60,20 +61,20 @@ var licenseFiles = []string{ } func findOrCreateLicense(ctx context.Context, dir string) error { - rec := progrock.FromContext(ctx) + log := telemetry.ContextLogger(ctx, slog.LevelWarn) id := licenseID if id == "" { if foundLicense, err := searchForLicense(dir); err == nil { - rec.Debug("found existing LICENSE file", progrock.Labelf("path", foundLicense)) + log.Debug("found existing LICENSE file", "path", foundLicense) return nil } id = defaultLicense } - rec.Warn("no LICENSE file found; generating one for you, feel free to change or remove", - progrock.Labelf("license", id)) + log.Warn("no LICENSE file found; generating one for you, feel free to change or remove", + "license", id) license, err := spdx.License(id) if err != nil { diff --git a/cmd/dagger/listen.go b/cmd/dagger/listen.go index f104fcae76f..227890500ac 100644 --- a/cmd/dagger/listen.go +++ b/cmd/dagger/listen.go @@ -3,18 +3,16 @@ package main import ( "context" "fmt" - "io" "net" "net/http" "os" "time" "dagger.io/dagger" - "github.com/dagger/dagger/dagql/idtui" "github.com/dagger/dagger/engine/client" "github.com/rs/cors" "github.com/spf13/cobra" - "github.com/vito/progrock" + "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" ) var ( @@ -38,15 +36,7 @@ func init() { } func Listen(ctx context.Context, engineClient *client.Client, _ *dagger.Module, cmd *cobra.Command, _ []string) error { - var stderr io.Writer - if silent { - stderr = os.Stderr - } else { - var vtx *progrock.VertexRecorder - ctx, vtx = progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer vtx.Done(nil) - stderr = vtx.Stderr() - } + stderr := cmd.OutOrStderr() sessionL, err := net.Listen("tcp", listenAddress) if err != nil { @@ -60,9 +50,14 @@ func Listen(ctx context.Context, engineClient *client.Client, _ *dagger.Module, } srv := &http.Server{ - Handler: handler, + Handler: otelhttp.NewHandler(handler, "listen", otelhttp.WithSpanNameFormatter(func(o string, r *http.Request) string { + return fmt.Sprintf("%s: HTTP %s %s", o, r.Method, r.URL.Path) + })), // Gosec G112: prevent slowloris attacks ReadHeaderTimeout: 10 * time.Second, + BaseContext: func(_ net.Listener) context.Context { + return ctx + }, } go func() { diff --git a/cmd/dagger/main.go b/cmd/dagger/main.go index a044a7310ed..9bcceb19ff1 100644 --- a/cmd/dagger/main.go +++ b/cmd/dagger/main.go @@ -1,22 +1,34 @@ package main import ( + "context" + 
"errors" "fmt" "io" "os" "path/filepath" "runtime/pprof" - "runtime/trace" + runtimetrace "runtime/trace" "strings" "unicode" "github.com/dagger/dagger/analytics" - "github.com/dagger/dagger/tracing" + "github.com/dagger/dagger/dagql/idtui" + "github.com/dagger/dagger/engine" + "github.com/dagger/dagger/telemetry" + "github.com/dagger/dagger/telemetry/sdklog" + "github.com/mattn/go-isatty" "github.com/muesli/reflow/indent" "github.com/muesli/reflow/wordwrap" "github.com/sirupsen/logrus" "github.com/spf13/cobra" "github.com/spf13/pflag" + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/sdk/resource" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + semconv "go.opentelemetry.io/otel/semconv/v1.24.0" + "go.opentelemetry.io/otel/trace" "golang.org/x/term" ) @@ -31,7 +43,17 @@ var ( workdir string - debug bool + debug bool + verbosity int + silent bool + progress string + + stdoutIsTTY = isatty.IsTerminal(os.Stdout.Fd()) + stderrIsTTY = isatty.IsTerminal(os.Stderr.Fd()) + + autoTTY = stdoutIsTTY || stderrIsTTY + + Frontend = idtui.New() ) func init() { @@ -40,21 +62,12 @@ func init() { // and prints unneeded warning logs. logrus.StandardLogger().SetOutput(io.Discard) - rootCmd.PersistentFlags().StringVar(&workdir, "workdir", ".", "The host workdir loaded into dagger") - rootCmd.PersistentFlags().BoolVar(&debug, "debug", false, "Show more information for debugging") - - for _, fl := range []string{"workdir"} { - if err := rootCmd.PersistentFlags().MarkHidden(fl); err != nil { - fmt.Println("Error hiding flag: "+fl, err) - os.Exit(1) - } - } - rootCmd.AddCommand( listenCmd, versionCmd, queryCmd, runCmd, + watchCmd, configCmd, moduleInitCmd, moduleInstallCmd, @@ -104,10 +117,10 @@ var rootCmd = &cobra.Command{ return fmt.Errorf("create trace: %w", err) } - if err := trace.Start(traceF); err != nil { + if err := runtimetrace.Start(traceF); err != nil { return fmt.Errorf("start trace: %w", err) } - cobra.OnFinalize(trace.Stop) + cobra.OnFinalize(runtimetrace.Stop) } if pprofAddr != "" { @@ -124,7 +137,8 @@ var rootCmd = &cobra.Command{ return err } - t := analytics.New(analytics.DefaultConfig()) + labels := telemetry.LoadDefaultLabels(workdir, engine.Version) + t := analytics.New(analytics.DefaultConfig(labels)) cmd.SetContext(analytics.WithContext(cmd.Context(), t)) cobra.OnFinalize(func() { t.Close() @@ -140,13 +154,90 @@ var rootCmd = &cobra.Command{ }, } +func installGlobalFlags(flags *pflag.FlagSet) { + flags.StringVar(&workdir, "workdir", ".", "The host workdir loaded into dagger") + flags.CountVarP(&verbosity, "verbose", "v", "increase verbosity (use -vv or -vvv for more)") + flags.BoolVarP(&debug, "debug", "d", false, "show debug logs and full verbosity") + flags.BoolVarP(&silent, "silent", "s", false, "disable terminal UI and progress output") + flags.StringVar(&progress, "progress", "auto", "progress output format (auto, plain, tty)") + + for _, fl := range []string{"workdir"} { + if err := flags.MarkHidden(fl); err != nil { + fmt.Println("Error hiding flag: "+fl, err) + os.Exit(1) + } + } +} + +func parseGlobalFlags() { + flags := pflag.NewFlagSet("global", pflag.ContinueOnError) + flags.ParseErrorsWhitelist.UnknownFlags = true + installGlobalFlags(flags) + if err := flags.Parse(os.Args[1:]); err != nil && !errors.Is(err, pflag.ErrHelp) { + fmt.Fprintln(os.Stderr, err) + os.Exit(1) + } +} + +func Tracer() trace.Tracer { + return otel.Tracer("dagger.io/cli") +} + +func Resource() *resource.Resource { + attrs := []attribute.KeyValue{ + 
semconv.ServiceName("dagger-cli"), + semconv.ServiceVersion(engine.Version), + semconv.ProcessCommandArgs(os.Args...), + } + for k, v := range telemetry.LoadDefaultLabels(workdir, engine.Version) { + attrs = append(attrs, attribute.String(k, v)) + } + return resource.NewWithAttributes(semconv.SchemaURL, attrs...) +} + func main() { - closer := tracing.Init() - if err := rootCmd.Execute(); err != nil { - closer.Close() + parseGlobalFlags() + + Frontend.Debug = debug + Frontend.Plain = progress == "plain" + Frontend.Silent = silent + Frontend.Verbosity = verbosity + + installGlobalFlags(rootCmd.PersistentFlags()) + + ctx := context.Background() + + if err := Frontend.Run(ctx, func(ctx context.Context) (rerr error) { + // Init tracing as early as possible and shutdown after the command + // completes, ensuring progress is fully flushed to the frontend. + ctx = telemetry.Init(ctx, telemetry.Config{ + Detect: true, + Resource: Resource(), + LiveTraceExporters: []sdktrace.SpanExporter{Frontend}, + LiveLogExporters: []sdklog.LogExporter{Frontend}, + }) + defer telemetry.Close() + + // Set the full command string as the name of the root span. + // + // If you pass credentials in plaintext, yes, they will be leaked; don't do + // that, since they will also be leaked in various other places (like the + // process tree). Use Secret arguments instead. + ctx, span := Tracer().Start(ctx, strings.Join(os.Args, " ")) + defer telemetry.End(span, func() error { return rerr }) + + // Set the span as the primary span for the frontend. + Frontend.SetPrimary(span.SpanContext().SpanID()) + + // Direct command stdout/stderr to span logs via OpenTelemetry. + ctx, stdout, stderr := telemetry.WithStdioToOtel(ctx, "dagger") + rootCmd.SetOut(stdout) + rootCmd.SetErr(stderr) + + return rootCmd.ExecuteContext(ctx) + }); err != nil { os.Exit(1) } - closer.Close() } func NormalizeWorkdir(workdir string) (string, error) { diff --git a/cmd/dagger/module.go b/cmd/dagger/module.go index bbad1e751d4..2844f0ed3c0 100644 --- a/cmd/dagger/module.go +++ b/cmd/dagger/module.go @@ -14,14 +14,13 @@ import ( "dagger.io/dagger" "github.com/dagger/dagger/analytics" "github.com/dagger/dagger/core/modules" - "github.com/dagger/dagger/dagql/idtui" "github.com/dagger/dagger/engine/client" + "github.com/dagger/dagger/telemetry" "github.com/go-git/go-git/v5" "github.com/iancoleman/strcase" "github.com/moby/buildkit/util/gitutil" "github.com/spf13/cobra" "github.com/spf13/pflag" - "github.com/vito/progrock" ) var ( @@ -55,7 +54,6 @@ const ( func init() { moduleFlags.StringVarP(&moduleURL, "mod", "m", "", "Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. \"/path/to/some/dir\") or a github repo (e.g. 
\"github.com/dagger/dagger/path/to/some/subdir\")") - moduleFlags.BoolVar(&focus, "focus", true, "Only show output for focused commands") listenCmd.PersistentFlags().AddFlagSet(moduleFlags) queryCmd.PersistentFlags().AddFlagSet(moduleFlags) @@ -98,14 +96,9 @@ The "--source" flag allows controlling the directory in which the actual module RunE: func(cmd *cobra.Command, extraArgs []string) (rerr error) { ctx := cmd.Context() - return withEngineAndTUI(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { + return withEngine(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { dag := engineClient.Dagger() - ctx, vtx := progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer func() { vtx.Done(err) }() - cmd.SetContext(ctx) - setCmdOutput(cmd, vtx) - // default the module source root to the current working directory if it doesn't exist yet cwd, err := os.Getwd() if err != nil { @@ -189,7 +182,7 @@ var moduleInstallCmd = &cobra.Command{ Args: cobra.ExactArgs(1), RunE: func(cmd *cobra.Command, extraArgs []string) (rerr error) { ctx := cmd.Context() - return withEngineAndTUI(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { + return withEngine(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { dag := engineClient.Dagger() modConf, err := getDefaultModuleConfiguration(ctx, dag, true, false) if err != nil { @@ -313,7 +306,7 @@ If not updating source or SDK, this is only required for IDE auto-completion/LSP GroupID: moduleGroup.ID, RunE: func(cmd *cobra.Command, extraArgs []string) (rerr error) { ctx := cmd.Context() - return withEngineAndTUI(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { + return withEngine(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { dag := engineClient.Dagger() modConf, err := getDefaultModuleConfiguration(ctx, dag, true, false) if err != nil { @@ -403,13 +396,8 @@ forced), to avoid mistakenly depending on uncommitted files. GroupID: moduleGroup.ID, RunE: func(cmd *cobra.Command, extraArgs []string) (rerr error) { ctx := cmd.Context() - return withEngineAndTUI(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { - rec := progrock.FromContext(ctx) - - ctx, vtx := progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer func() { vtx.Done(err) }() - cmd.SetContext(ctx) - setCmdOutput(cmd, vtx) + return withEngine(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) (err error) { + log := telemetry.GlobalLogger(ctx) dag := engineClient.Dagger() modConf, err := getDefaultModuleConfiguration(ctx, dag, true, true) @@ -442,7 +430,7 @@ forced), to avoid mistakenly depending on uncommitted files. 
} commit := head.Hash() - rec.Debug("git commit", progrock.Labelf("commit", commit.String())) + log.Debug("git commit", "commit", commit.String()) orig, err := repo.Remote("origin") if err != nil { @@ -717,7 +705,7 @@ func optionalModCmdWrapper( presetSecretToken string, ) func(*cobra.Command, []string) error { return func(cmd *cobra.Command, cmdArgs []string) error { - return withEngineAndTUI(cmd.Context(), client.Params{ + return withEngine(cmd.Context(), client.Params{ SecretToken: presetSecretToken, }, func(ctx context.Context, engineClient *client.Client) (err error) { _, explicitModRefSet := getExplicitModuleSourceRef() @@ -801,7 +789,7 @@ fragment FieldParts on FieldTypeDef { } } -query TypeDefs($module: ModuleID!) { +query TypeDefs { typeDefs: currentTypeDefs { kind optional @@ -837,9 +825,6 @@ query TypeDefs($module: ModuleID!) { err := dag.Do(ctx, &dagger.Request{ Query: query, - Variables: map[string]interface{}{ - "module": mod, - }, }, &dagger.Response{ Data: &res, }) diff --git a/cmd/dagger/query.go b/cmd/dagger/query.go index f8a1aed2215..1727e04d9ea 100644 --- a/cmd/dagger/query.go +++ b/cmd/dagger/query.go @@ -9,10 +9,8 @@ import ( "strings" "dagger.io/dagger" - "github.com/dagger/dagger/dagql/idtui" "github.com/dagger/dagger/engine/client" "github.com/spf13/cobra" - "github.com/vito/progrock" "golang.org/x/term" ) @@ -53,8 +51,6 @@ EOF } func Query(ctx context.Context, engineClient *client.Client, _ *dagger.Module, cmd *cobra.Command, args []string) (rerr error) { - ctx, vtx := progrock.Span(ctx, idtui.PrimaryVertex, cmd.CommandPath()) - defer func() { vtx.Done(rerr) }() res, err := runQuery(ctx, engineClient, args) if err != nil { return err @@ -63,7 +59,7 @@ func Query(ctx context.Context, engineClient *client.Client, _ *dagger.Module, c if err != nil { return err } - fmt.Fprintf(vtx.Stdout(), "%s\n", result) + fmt.Fprintf(cmd.OutOrStdout(), "%s\n", result) return nil } diff --git a/cmd/dagger/run.go b/cmd/dagger/run.go index ba7b8ad7b5b..caaabfceba5 100644 --- a/cmd/dagger/run.go +++ b/cmd/dagger/run.go @@ -11,11 +11,11 @@ import ( "strings" "time" - "github.com/dagger/dagger/dagql/idtui" + "github.com/dagger/dagger/dagql/ioctx" "github.com/dagger/dagger/engine/client" + "github.com/dagger/dagger/telemetry" "github.com/google/uuid" "github.com/spf13/cobra" - "github.com/vito/progrock" ) var runCmd = &cobra.Command{ @@ -71,7 +71,7 @@ func init() { } func Run(cmd *cobra.Command, args []string) { - ctx := context.Background() + ctx := cmd.Context() err := run(ctx, args) if err != nil { @@ -95,9 +95,7 @@ func run(ctx context.Context, args []string) error { sessionToken := u.String() - focus = runFocus - useLegacyTUI = true - return withEngineAndTUI(ctx, client.Params{ + return withEngine(ctx, client.Params{ SecretToken: sessionToken, }, func(ctx context.Context, engineClient *client.Client) error { sessionL, err := net.Listen("tcp", "127.0.0.1:0") @@ -106,12 +104,16 @@ func run(ctx context.Context, args []string) error { } defer sessionL.Close() + env := os.Environ() sessionPort := fmt.Sprintf("%d", sessionL.Addr().(*net.TCPAddr).Port) - os.Setenv("DAGGER_SESSION_PORT", sessionPort) - os.Setenv("DAGGER_SESSION_TOKEN", sessionToken) + env = append(env, "DAGGER_SESSION_PORT="+sessionPort) + env = append(env, "DAGGER_SESSION_TOKEN="+sessionToken) + env = append(env, telemetry.PropagationEnv(ctx)...) subCmd := exec.CommandContext(ctx, args[0], args[1:]...) 
// #nosec + subCmd.Env = env + // allow piping to the command subCmd.Stdin = os.Stdin @@ -120,26 +122,30 @@ func run(ctx context.Context, args []string) error { // shell because Ctrl+C sends to the process group.) ensureChildProcessesAreKilled(subCmd) - go http.Serve(sessionL, engineClient) //nolint:gosec + srv := &http.Server{ //nolint:gosec + Handler: engineClient, + BaseContext: func(listener net.Listener) context.Context { + return ctx + }, + } + + go srv.Serve(sessionL) var cmdErr error if !silent { - cmdline := strings.Join(subCmd.Args, " ") - _, cmdVtx := progrock.Span(ctx, idtui.PrimaryVertex, cmdline) if stdoutIsTTY { - subCmd.Stdout = cmdVtx.Stdout() + subCmd.Stdout = ioctx.Stdout(ctx) } else { subCmd.Stdout = os.Stdout } if stderrIsTTY { - subCmd.Stderr = cmdVtx.Stderr() + subCmd.Stderr = ioctx.Stderr(ctx) } else { subCmd.Stderr = os.Stderr } cmdErr = subCmd.Run() - cmdVtx.Done(cmdErr) } else { subCmd.Stdout = os.Stdout subCmd.Stderr = os.Stderr diff --git a/cmd/dagger/session.go b/cmd/dagger/session.go index 6097f1562eb..e95dd54960b 100644 --- a/cmd/dagger/session.go +++ b/cmd/dagger/session.go @@ -12,16 +12,13 @@ import ( "syscall" "time" - "github.com/dagger/dagger/core/pipeline" - "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/client" "github.com/dagger/dagger/telemetry" "github.com/google/uuid" "github.com/spf13/cobra" - "github.com/vito/progrock/console" ) -var sessionLabels pipeline.Labels +var sessionLabels = telemetry.NewLabelFlag() func sessionCmd() *cobra.Command { cmd := &cobra.Command{ @@ -41,14 +38,14 @@ type connectParams struct { } func EngineSession(cmd *cobra.Command, args []string) error { - ctx := context.Background() + ctx := cmd.Context() sessionToken, err := uuid.NewRandom() if err != nil { return err } - labels := &sessionLabels + labelsFlag := &sessionLabels signalCh := make(chan os.Signal, 1) signal.Notify(signalCh, syscall.SIGINT, syscall.SIGTERM) @@ -72,46 +69,37 @@ func EngineSession(cmd *cobra.Command, args []string) error { port := l.Addr().(*net.TCPAddr).Port - runnerHost, err := engine.RunnerHost() - if err != nil { - return err - } - - sess, _, err := client.Connect(ctx, client.Params{ - SecretToken: sessionToken.String(), - RunnerHost: runnerHost, - UserAgent: labels.AppendCILabel().AppendAnonymousGitLabels(workdir).String(), - ProgrockWriter: telemetry.NewLegacyIDInternalizer(console.NewWriter(os.Stderr)), - JournalFile: os.Getenv("_EXPERIMENTAL_DAGGER_JOURNAL"), - }) - if err != nil { - return err - } - defer sess.Close() - - srv := http.Server{ - Handler: sess, - ReadHeaderTimeout: 30 * time.Second, - } - - paramBytes, err := json.Marshal(connectParams{ - Port: port, - SessionToken: sessionToken.String(), - }) - if err != nil { - return err - } - paramBytes = append(paramBytes, '\n') - go func() { - if _, err := os.Stdout.Write(paramBytes); err != nil { - panic(err) + return withEngine(ctx, client.Params{ + SecretToken: sessionToken.String(), + UserAgent: labelsFlag.Labels.WithCILabels().WithAnonymousGitLabels(workdir).UserAgent(), + }, func(ctx context.Context, sess *client.Client) error { + srv := http.Server{ + Handler: sess, + ReadHeaderTimeout: 30 * time.Second, + BaseContext: func(net.Listener) context.Context { + return ctx + }, } - }() - err = srv.Serve(l) - // if error is "use of closed network connection", it's expected - if err != nil && !errors.Is(err, net.ErrClosed) { - return err - } - return nil + paramBytes, err := json.Marshal(connectParams{ + Port: port, + SessionToken: sessionToken.String(), + }) + if 
err != nil { + return err + } + paramBytes = append(paramBytes, '\n') + go func() { + if _, err := os.Stdout.Write(paramBytes); err != nil { + panic(err) + } + }() + + err = srv.Serve(l) + // if error is "use of closed network connection", it's expected + if err != nil && !errors.Is(err, net.ErrClosed) { + return err + } + return nil + }) } diff --git a/cmd/dagger/shell.go b/cmd/dagger/shell.go index 7d81782e9d8..e313e172d17 100644 --- a/cmd/dagger/shell.go +++ b/cmd/dagger/shell.go @@ -7,29 +7,62 @@ import ( "errors" "fmt" "io" + "log/slog" "net/http" "os" "strconv" - "github.com/dagger/dagger/dagql/idtui" + tea "github.com/charmbracelet/bubbletea" "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/client" "github.com/gorilla/websocket" - "github.com/vito/midterm" - "github.com/vito/progrock" + "github.com/mattn/go-isatty" + "golang.org/x/term" ) func attachToShell(ctx context.Context, engineClient *client.Client, shellEndpoint string) (rerr error) { + return Frontend.Background(&terminalSession{ + Ctx: ctx, + Client: engineClient, + Endpoint: shellEndpoint, + }) +} + +type terminalSession struct { + Ctx context.Context + Client *client.Client + Endpoint string + + stdin io.Reader + stdout io.Writer + stderr io.Writer +} + +var _ tea.ExecCommand = (*terminalSession)(nil) + +func (ts *terminalSession) SetStdin(r io.Reader) { + ts.stdin = r +} + +func (ts *terminalSession) SetStdout(w io.Writer) { + ts.stdout = w +} + +func (ts *terminalSession) SetStderr(w io.Writer) { + ts.stderr = w +} + +func (ts *terminalSession) Run() error { dialer := &websocket.Dialer{ - NetDialContext: engineClient.DialContext, + NetDialContext: ts.Client.DialContext, } reqHeader := http.Header{} - if engineClient.SecretToken != "" { - reqHeader["Authorization"] = []string{"Basic " + base64.StdEncoding.EncodeToString([]byte(engineClient.SecretToken+":"))} + if ts.Client.SecretToken != "" { + reqHeader["Authorization"] = []string{"Basic " + base64.StdEncoding.EncodeToString([]byte(ts.Client.SecretToken+":"))} } - wsconn, errResp, err := dialer.DialContext(ctx, shellEndpoint, reqHeader) + wsconn, errResp, err := dialer.DialContext(ts.Ctx, ts.Endpoint, reqHeader) if err != nil { if errors.Is(err, websocket.ErrBadHandshake) { return fmt.Errorf("dial error %d: %w", errResp.StatusCode, err) @@ -37,32 +70,20 @@ func attachToShell(ctx context.Context, engineClient *client.Client, shellEndpoi return fmt.Errorf("dial: %w", err) } - // wsconn is closed as part of the caller closing engineClient + if err := ts.sendSize(wsconn); err != nil { + return fmt.Errorf("sending initial size: %w", err) + } + + go ts.listenForResize(wsconn) + + // wsconn is closed as part of the caller closing ts.client if errResp != nil { defer errResp.Body.Close() } - shellStdinR, shellStdinW := io.Pipe() - - // NB: this is not idtui.PrimaryVertex because instead of spitting out the - // raw TTY output, we want to render the post-processed vterm. - _, vtx := progrock.Span(ctx, shellEndpoint, "terminal", - idtui.Zoomed(func(term *midterm.Terminal) io.Writer { - term.ForwardRequests = os.Stderr - term.ForwardResponses = shellStdinW - term.CursorVisible = true - term.OnResize(func(h, w int) { - message := []byte(engine.ResizePrefix) - message = append(message, []byte(fmt.Sprintf("%d;%d", w, h))...) 
- // best effort - _ = wsconn.WriteMessage(websocket.BinaryMessage, message) - }) - return shellStdinW - })) - defer func() { vtx.Done(rerr) }() - - stdout := vtx.Stdout() - stderr := vtx.Stderr() + buf := new(bytes.Buffer) + stdout := io.MultiWriter(buf, ts.stdout) + stderr := io.MultiWriter(buf, ts.stdout) // Handle incoming messages errCh := make(chan error) @@ -100,7 +121,7 @@ func attachToShell(ctx context.Context, engineClient *client.Client, shellEndpoi b := make([]byte, 512) for { - n, err := shellStdinR.Read(b) + n, err := ts.stdin.Read(b) if err != nil { fmt.Fprintf(os.Stderr, "read: %v\n", err) continue @@ -120,7 +141,28 @@ func attachToShell(ctx context.Context, engineClient *client.Client, shellEndpoi } if exitCode != 0 { - return fmt.Errorf("exited with code %d", exitCode) + return fmt.Errorf("exited with code %d\n\nOutput:\n\n%s", exitCode, buf.String()) + } + + return nil +} + +func (ts *terminalSession) sendSize(wsconn *websocket.Conn) error { + f, ok := ts.stdin.(*os.File) + if !ok || !isatty.IsTerminal(f.Fd()) { + slog.Debug("stdin is not a terminal; cannot get terminal size") + return nil + } + + w, h, err := term.GetSize(int(f.Fd())) + if err != nil { + return fmt.Errorf("get terminal size: %w", err) + } + + message := []byte(engine.ResizePrefix) + message = append(message, []byte(fmt.Sprintf("%d;%d", w, h))...) + if err := wsconn.WriteMessage(websocket.BinaryMessage, message); err != nil { + return fmt.Errorf("send resize message: %w", err) } return nil diff --git a/cmd/dagger/shell_nounix.go b/cmd/dagger/shell_nounix.go new file mode 100644 index 00000000000..a9f4898e3f2 --- /dev/null +++ b/cmd/dagger/shell_nounix.go @@ -0,0 +1,11 @@ +//go:build !unix +// +build !unix + +package main + +import ( + "github.com/gorilla/websocket" +) + +func (ts *terminalSession) listenForResize(wsconn *websocket.Conn) { +} diff --git a/cmd/dagger/shell_unix.go b/cmd/dagger/shell_unix.go new file mode 100644 index 00000000000..b76a600467e --- /dev/null +++ b/cmd/dagger/shell_unix.go @@ -0,0 +1,21 @@ +//go:build unix +// +build unix + +package main + +import ( + "os" + "os/signal" + "syscall" + + "github.com/gorilla/websocket" +) + +func (ts *terminalSession) listenForResize(wsconn *websocket.Conn) { + sig := make(chan os.Signal, 1) + signal.Notify(sig, syscall.SIGWINCH) + defer signal.Stop(sig) + for range sig { + ts.sendSize(wsconn) + } +} diff --git a/cmd/dagger/watch.go b/cmd/dagger/watch.go new file mode 100644 index 00000000000..f6f26877c08 --- /dev/null +++ b/cmd/dagger/watch.go @@ -0,0 +1,33 @@ +package main + +import ( + "context" + + "github.com/dagger/dagger/engine/client" + "github.com/spf13/cobra" + "go.opentelemetry.io/otel/trace" +) + +var watchCmd = &cobra.Command{ + Use: "watch [flags] COMMAND", + Hidden: true, + Annotations: map[string]string{ + "experimental": "true", + }, + Aliases: []string{"w"}, + Short: "Watch activity across all Dagger sessions.", + Example: `dagger watch`, + RunE: Watch, +} + +func Watch(cmd *cobra.Command, _ []string) error { + // HACK: the PubSub service treats the 000000000 trace ID as "subscribe to + // everything", and the client subscribes to its current trace ID, so let's + // just zero it out. 
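As context for the hack above: the zero-value trace.SpanContext carries an all-zero, invalid trace ID, which is what the engine-side PubSub treats as a wildcard subscription. A standalone sketch using only the public go.opentelemetry.io/otel/trace API:

package main

import (
	"fmt"

	"go.opentelemetry.io/otel/trace"
)

func main() {
	// The zero value has no trace ID set, so it is reported as invalid and
	// stringifies to 32 zeros.
	sc := trace.SpanContext{}
	fmt.Println(sc.TraceID()) // 00000000000000000000000000000000
	fmt.Println(sc.IsValid()) // false
}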
+ ctx := trace.ContextWithSpanContext(cmd.Context(), trace.SpanContext{}) + + return withEngine(ctx, client.Params{}, func(ctx context.Context, engineClient *client.Client) error { + <-ctx.Done() + return ctx.Err() + }) +} diff --git a/cmd/engine/logger.go b/cmd/engine/logger.go index 018481debde..d0f8a3662c5 100644 --- a/cmd/engine/logger.go +++ b/cmd/engine/logger.go @@ -3,14 +3,11 @@ package main import ( "os" - "github.com/dagger/dagger/telemetry" "github.com/moby/buildkit/identity" - "github.com/sirupsen/logrus" ) var ( engineName string - tel *telemetry.Telemetry ) func init() { @@ -26,49 +23,50 @@ func init() { } } - tel = telemetry.New() + // TODO(vito): send engine logs over OTLP + // tel = telemetry.New() - logrus.AddHook(&cloudHook{}) + // logrus.AddHook(&cloudHook{}) } -type cloudHook struct{} +// type cloudHook struct{} -var _ logrus.Hook = (*cloudHook)(nil) +// var _ logrus.Hook = (*cloudHook)(nil) -func (h *cloudHook) Levels() []logrus.Level { - return logrus.AllLevels -} +// func (h *cloudHook) Levels() []logrus.Level { +// return logrus.AllLevels +// } -func (h *cloudHook) Fire(entry *logrus.Entry) error { - payload := &engineLogPayload{ - Engine: engineMetadata{ - Name: engineName, - }, - Message: entry.Message, - Level: entry.Level.String(), - Fields: entry.Data, - } +// func (h *cloudHook) Fire(entry *logrus.Entry) error { +// payload := &engineLogPayload{ +// Engine: engineMetadata{ +// Name: engineName, +// }, +// Message: entry.Message, +// Level: entry.Level.String(), +// Fields: entry.Data, +// } - tel.Push(payload, entry.Time) - return nil -} +// tel.Push(payload, entry.Time) +// return nil +// } -type engineLogPayload struct { - Engine engineMetadata `json:"engine"` - Message string `json:"message"` - Level string `json:"level"` - // NOTE: fields includes traceID and spanID, can we use that to correlate with clients? - Fields map[string]any `json:"fields"` -} +// type engineLogPayload struct { +// Engine engineMetadata `json:"engine"` +// Message string `json:"message"` +// Level string `json:"level"` +// // NOTE: fields includes traceID and spanID, can we use that to correlate with clients? 
+// Fields map[string]any `json:"fields"` +// } -func (engineLogPayload) Type() telemetry.EventType { - return telemetry.EventType("engine_log") -} +// func (engineLogPayload) Type() telemetry.EventType { +// return telemetry.EventType("engine_log") +// } -func (engineLogPayload) Scope() telemetry.EventScope { - return telemetry.EventScopeSystem -} +// func (engineLogPayload) Scope() telemetry.EventScope { +// return telemetry.EventScopeSystem +// } -type engineMetadata struct { - Name string `json:"name"` -} +// type engineMetadata struct { +// Name string `json:"name"` +// } diff --git a/cmd/engine/main.go b/cmd/engine/main.go index 09f21ae5ff4..9577735b45e 100644 --- a/cmd/engine/main.go +++ b/cmd/engine/main.go @@ -22,13 +22,15 @@ import ( "github.com/containerd/containerd/remotes/docker" "github.com/containerd/containerd/sys" sddaemon "github.com/coreos/go-systemd/v22/daemon" + "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/cache" "github.com/dagger/dagger/engine/server" "github.com/dagger/dagger/network" "github.com/dagger/dagger/network/netinst" + "github.com/dagger/dagger/telemetry" + "github.com/dagger/dagger/telemetry/sdklog" "github.com/docker/docker/pkg/reexec" "github.com/gofrs/flock" - grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware" "github.com/moby/buildkit/cache/remotecache" "github.com/moby/buildkit/cache/remotecache/azblob" "github.com/moby/buildkit/cache/remotecache/gha" @@ -52,14 +54,9 @@ import ( "github.com/moby/buildkit/util/appdefaults" "github.com/moby/buildkit/util/archutil" "github.com/moby/buildkit/util/bklog" - "github.com/moby/buildkit/util/grpcerrors" "github.com/moby/buildkit/util/profiler" "github.com/moby/buildkit/util/resolver" "github.com/moby/buildkit/util/stack" - "github.com/moby/buildkit/util/tracing/detect" - _ "github.com/moby/buildkit/util/tracing/detect/jaeger" - _ "github.com/moby/buildkit/util/tracing/env" - "github.com/moby/buildkit/util/tracing/transform" "github.com/moby/buildkit/version" "github.com/moby/buildkit/worker" ocispecs "github.com/opencontainers/image-spec/specs-go/v1" @@ -68,9 +65,11 @@ import ( "github.com/sirupsen/logrus" "github.com/urfave/cli" "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc" - "go.opentelemetry.io/otel/propagation" - sdktrace "go.opentelemetry.io/otel/sdk/trace" - "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/sdk/resource" + "go.opentelemetry.io/otel/sdk/trace" + semconv "go.opentelemetry.io/otel/semconv/v1.24.0" + logsv1 "go.opentelemetry.io/proto/otlp/collector/logs/v1" + metricsv1 "go.opentelemetry.io/proto/otlp/collector/metrics/v1" tracev1 "go.opentelemetry.io/proto/otlp/collector/trace/v1" "golang.org/x/sync/errgroup" "google.golang.org/grpc" @@ -90,13 +89,8 @@ func init() { if reexec.Init() { os.Exit(0) } - - // enable in memory recording for buildkitd traces - detect.Recorder = detect.NewTraceRecorder() } -var propagators = propagation.NewCompositeTextMapPropagator(propagation.TraceContext{}, propagation.Baggage{}) - type workerInitializerOpt struct { config *config.Config sessionManager *session.Manager @@ -233,6 +227,19 @@ func main() { //nolint:gocyclo ctx, cancel := context.WithCancel(appcontext.Context()) defer cancel() + pubsub := telemetry.NewPubSub() + + ctx = telemetry.Init(ctx, telemetry.Config{ + Detect: true, + Resource: resource.NewWithAttributes( + semconv.SchemaURL, + semconv.ServiceNameKey.String("dagger-engine"), + semconv.ServiceVersionKey.String(engine.Version), + ), + LiveTraceExporters: 
[]trace.SpanExporter{pubsub}, + LiveLogExporters: []sdklog.LogExporter{pubsub}, + }) + bklog.G(ctx).Debug("loading engine config file") cfg, err := config.LoadFile(c.GlobalString("config")) if err != nil { @@ -265,9 +272,7 @@ func main() { //nolint:gocyclo // Wire slog up to send to Logrus so engine logs using slog also get sent // to Cloud - slogOpts := sloglogrus.Option{ - AddSource: true, - } + slogOpts := sloglogrus.Option{} if cfg.Debug { slogOpts.Level = slog.LevelDebug logrus.SetLevel(logrus.DebugLevel) @@ -283,27 +288,8 @@ func main() { //nolint:gocyclo } } - bklog.G(ctx).Debug("setting up engine tracing") - - tp, err := detect.TracerProvider() - if err != nil { - // just log it, this can happen when there's mismatching versions of otel libraries in your - // module dependency DAG... - bklog.G(ctx).WithError(err).Error("failed to create tracer provider") - } - - // FIXME: continuing to use the deprecated interceptor until/unless there's a replacement that works w/ grpc_middleware - //nolint:staticcheck // SA1019 deprecated - streamTracer := otelgrpc.StreamServerInterceptor(otelgrpc.WithTracerProvider(tp), otelgrpc.WithPropagators(propagators)) - - // NOTE: using context.Background because otherwise when the outer context is cancelled the server - // stops working. Server shutdown based on context cancellation is handled later in this func. - unary := grpc_middleware.ChainUnaryServer(unaryInterceptor(context.Background(), tp), grpcerrors.UnaryServerInterceptor) - stream := grpc_middleware.ChainStreamServer(streamTracer, grpcerrors.StreamServerInterceptor) - bklog.G(ctx).Debug("creating engine GRPC server") - grpcOpts := []grpc.ServerOption{grpc.UnaryInterceptor(unary), grpc.StreamInterceptor(stream)} - server := grpc.NewServer(grpcOpts...) + server := grpc.NewServer(grpc.StatsHandler(otelgrpc.NewServerHandler())) // relative path does not work with nightlyone/lockfile root, err := filepath.Abs(cfg.Root) @@ -332,7 +318,7 @@ func main() { //nolint:gocyclo }() bklog.G(ctx).Debug("creating engine controller") - controller, cacheManager, err := newController(ctx, c, &cfg) + controller, cacheManager, err := newController(ctx, c, &cfg, pubsub) if err != nil { return err } @@ -407,8 +393,8 @@ func main() { //nolint:gocyclo } app.After = func(_ *cli.Context) error { - tel.Close() - return detect.Shutdown(context.TODO()) + telemetry.Close() + return nil } profiler.Attach(app) @@ -650,38 +636,6 @@ func getListener(addr string, uid, gid int, tlsConfig *tls.Config) (net.Listener } } -func unaryInterceptor(globalCtx context.Context, tp trace.TracerProvider) grpc.UnaryServerInterceptor { - // FIXME: continuing to use the deprecated interceptor until/unless there's a replacement that works w/ grpc_middleware - //nolint:staticcheck // SA1019 deprecated - withTrace := otelgrpc.UnaryServerInterceptor(otelgrpc.WithTracerProvider(tp), otelgrpc.WithPropagators(propagators)) - - return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) { - ctx, cancel := context.WithCancel(ctx) - defer cancel() - - go func() { - select { - case <-ctx.Done(): - case <-globalCtx.Done(): - cancel() - } - }() - - if strings.HasSuffix(info.FullMethod, "opentelemetry.proto.collector.trace.v1.TraceService/Export") { - return handler(ctx, req) - } - - resp, err = withTrace(ctx, req, info, handler) - if err != nil { - logrus.Errorf("%s returned error: %v", info.FullMethod, err) - if logrus.GetLevel() >= logrus.DebugLevel { - fmt.Fprintf(os.Stderr, "%+v", 
stack.Formatter(grpcerrors.FromGRPC(err)))
-			}
-		}
-		return
-	}
-}
-
 func serverCredentials(cfg config.TLSConfig) (*tls.Config, error) {
 	certFile := cfg.Cert
 	keyFile := cfg.Key
@@ -720,23 +674,16 @@ func serverCredentials(cfg config.TLSConfig) (*tls.Config, error) {
 	return tlsConf, nil
 }
 
-func newController(ctx context.Context, c *cli.Context, cfg *config.Config) (*server.BuildkitController, cache.Manager, error) {
+func newController(ctx context.Context, c *cli.Context, cfg *config.Config, pubsub *telemetry.PubSub) (*server.BuildkitController, cache.Manager, error) {
 	sessionManager, err := session.NewManager()
 	if err != nil {
 		return nil, nil, err
 	}
 
-	tc, _, err := detect.Exporter()
-	if err != nil {
-		// just log it, this can happen when there's mismatching versions of otel libraries in your
-		// module dependency DAG...
-		bklog.G(ctx).WithError(err).Error("failed to create tracer exporter")
-	}
-
 	var traceSocket string
-	if tc != nil {
+	if pubsub != nil {
 		traceSocket = filepath.Join(cfg.Root, "otel-grpc.sock")
-		if err := runTraceController(traceSocket, tc); err != nil {
+		if err := runOtelController(traceSocket, pubsub); err != nil {
 			logrus.Warnf("failed set up otel-grpc controller: %v", err)
 			traceSocket = ""
 		}
@@ -824,7 +771,7 @@ func newController(ctx context.Context, c *cli.Context, cfg *config.Config) (*se
 		Entitlements:           cfg.Entitlements,
 		EngineName:             engineName,
 		Frontends:              frontends,
-		TraceCollector:         tc,
+		TelemetryPubSub:        pubsub,
 		UpstreamCacheExporters: remoteCacheExporterFuncs,
 		UpstreamCacheImporters: remoteCacheImporterFuncs,
 		DNSConfig:              getDNSConfig(cfg.DNS),
@@ -950,9 +897,12 @@ func parseBoolOrAuto(s string) (*bool, error) {
 	return &b, err
 }
 
-func runTraceController(p string, exp sdktrace.SpanExporter) error {
+// Run a separate gRPC server serving _only_ the trace/log exporter services.
+func runOtelController(p string, pubsub *telemetry.PubSub) error { server := grpc.NewServer() - tracev1.RegisterTraceServiceServer(server, &traceCollector{exporter: exp}) + tracev1.RegisterTraceServiceServer(server, &telemetry.TraceServer{PubSub: pubsub}) + logsv1.RegisterLogsServiceServer(server, &telemetry.LogsServer{PubSub: pubsub}) + metricsv1.RegisterMetricsServiceServer(server, &telemetry.MetricsServer{PubSub: pubsub}) uid := os.Getuid() l, err := sys.GetLocalListener(p, uid, uid) if err != nil { @@ -966,19 +916,6 @@ func runTraceController(p string, exp sdktrace.SpanExporter) error { return nil } -type traceCollector struct { - *tracev1.UnimplementedTraceServiceServer - exporter sdktrace.SpanExporter -} - -func (t *traceCollector) Export(ctx context.Context, req *tracev1.ExportTraceServiceRequest) (*tracev1.ExportTraceServiceResponse, error) { - err := t.exporter.ExportSpans(ctx, transform.Spans(req.GetResourceSpans())) - if err != nil { - return nil, err - } - return &tracev1.ExportTraceServiceResponse{}, nil -} - type networkConfig struct { NetName string NetCIDR string diff --git a/cmd/otel-collector/logs.go b/cmd/otel-collector/logs.go deleted file mode 100644 index a424c6a0f1d..00000000000 --- a/cmd/otel-collector/logs.go +++ /dev/null @@ -1,143 +0,0 @@ -package main - -import ( - "encoding/json" - "fmt" - "os" - "time" - - "github.com/dagger/dagger/cmd/otel-collector/loki" -) - -type Event struct { - Name string `json:"name"` - Duration int64 `json:"duration"` - Error string `json:"error,omitempty"` - Tags map[string]string `json:"tag,omitempty"` - TraceID string `json:"trace_id,omitempty"` - Hostname string `json:"hostname,omitempty"` -} - -func (e Event) Errored() bool { - return e.Error != "" -} - -type Label struct { - Type string `json:"type"` - Cached bool `json:"cached"` - Errored bool `json:"errored"` -} - -const ( - TypeRun = "run" - TypePipeline = "pipeline" - TypeOp = "op" -) - -func logSummary(name string, vertices VertexList, tags map[string]string, traceID string) error { - client := loki.New( - env("GRAFANA_CLOUD_USER_ID"), - env("GRAFANA_CLOUD_API_KEY"), - env("GRAFANA_CLOUD_URL"), - ) - defer client.Flush() - - hostname, err := os.Hostname() - if err != nil { - hostname = "" - } - - runEvent := Event{ - Name: name, - Duration: vertices.Duration().Microseconds(), - Error: errorString(vertices.Error()), - Tags: tags, - TraceID: traceID, - Hostname: hostname, - } - runLabel := Label{ - Type: TypeRun, - Cached: vertices.Cached(), - Errored: runEvent.Errored(), - } - err = pushEvent(client, runEvent, runLabel, vertices.Started()) - if err != nil { - return err - } - - for pipeline, vertices := range vertices.ByPipeline() { - pipelineEvent := Event{ - Name: pipeline, - Duration: vertices.Duration().Microseconds(), - Error: errorString(vertices.Error()), - Tags: tags, - TraceID: traceID, - Hostname: hostname, - } - pipelineLabel := Label{ - Type: TypePipeline, - Cached: vertices.Cached(), - Errored: pipelineEvent.Errored(), - } - err := pushEvent(client, pipelineEvent, pipelineLabel, vertices.Started()) - if err != nil { - return err - } - } - - for _, vertex := range vertices { - opEvent := Event{ - Name: vertex.Name(), - Duration: vertex.Duration().Microseconds(), - Error: errorString(vertex.Error()), - Tags: tags, - TraceID: traceID, - Hostname: hostname, - } - opLabel := Label{ - Type: TypeOp, - Cached: vertex.Cached(), - Errored: opEvent.Errored(), - } - err := pushEvent(client, opEvent, opLabel, vertex.Started()) - if err != nil { - return err - } - } - - 
return nil -} - -func pushEvent(client *loki.Client, event Event, label Label, ts time.Time) error { - marshalled, err := json.Marshal(event) - if err != nil { - return err - } - return client.PushLogLineWithTimestamp( - string(marshalled), - ts, - map[string]string{ - "user": os.Getenv("USER"), - "version": "2023-01-26.1540", - "type": label.Type, - "cached": fmt.Sprintf("%t", label.Cached), - "errored": fmt.Sprintf("%t", label.Errored), - }, - ) -} - -func errorString(err error) string { - if err == nil { - return "" - } - return err.Error() -} - -func env(varName string) string { - env := os.Getenv(varName) - if env == "" { - fmt.Fprintf(os.Stderr, "env var %s must be set\n", varName) - os.Exit(1) - } - return env -} diff --git a/cmd/otel-collector/loki/client.go b/cmd/otel-collector/loki/client.go deleted file mode 100644 index 440fc8dba06..00000000000 --- a/cmd/otel-collector/loki/client.go +++ /dev/null @@ -1,567 +0,0 @@ -// https://github.com/grafana/loki/blob/2dc5a71a6707383aadebe0b10c23c9e09c4f0ce7/integration/client/client.go -package loki - -import ( - "bytes" - "context" - "encoding/json" - "errors" - "fmt" - "io" - "net/http" - "net/url" - "strconv" - "strings" - "time" - - "github.com/weaveworks/common/user" -) - -const requestTimeout = 30 * time.Second - -type roundTripper struct { - instanceID string - token string - injectHeaders map[string][]string - next http.RoundTripper -} - -func (r *roundTripper) RoundTrip(req *http.Request) (*http.Response, error) { - req.Header.Set("X-Scope-OrgID", r.instanceID) - if r.token != "" { - req.SetBasicAuth(r.instanceID, r.token) - } - - for key, values := range r.injectHeaders { - for _, v := range values { - req.Header.Add(key, v) - } - - fmt.Println(req.Header.Values(key)) - } - - return r.next.RoundTrip(req) -} - -type Option interface { - Type() string -} - -type InjectHeadersOption map[string][]string - -func (n InjectHeadersOption) Type() string { - return "headerinject" -} - -// Client is a HTTP client that adds basic auth and scope -type Client struct { - Now time.Time - - httpClient *http.Client - baseURL string - instanceID string -} - -// NewLogsClient creates a new client -func New(instanceID, token, baseURL string, opts ...Option) *Client { - rt := &roundTripper{ - instanceID: instanceID, - token: token, - next: http.DefaultTransport, - } - - for _, opt := range opts { - if opt.Type() == "headerinject" { - rt.injectHeaders = opt.(InjectHeadersOption) - } - } - - return &Client{ - Now: time.Now(), - httpClient: &http.Client{ - Transport: rt, - }, - baseURL: baseURL, - instanceID: instanceID, - } -} - -// PushLogLine creates a new logline with the current time as timestamp -func (c *Client) PushLogLine(line string, extraLabels ...map[string]string) error { - return c.pushLogLine(line, c.Now, extraLabels...) -} - -// PushLogLineWithTimestamp creates a new logline at the given timestamp -// The timestamp has to be a Unix timestamp (epoch seconds) -func (c *Client) PushLogLineWithTimestamp(line string, timestamp time.Time, extraLabelList ...map[string]string) error { - return c.pushLogLine(line, timestamp, extraLabelList...) 
-} - -func formatTS(ts time.Time) string { - return strconv.FormatInt(ts.UnixNano(), 10) -} - -type stream struct { - Stream map[string]string `json:"stream"` - Values [][]string `json:"values"` -} - -// pushLogLine creates a new logline -func (c *Client) pushLogLine(line string, timestamp time.Time, extraLabelList ...map[string]string) error { - apiEndpoint := fmt.Sprintf("%s/loki/api/v1/push", c.baseURL) - - s := stream{ - Stream: map[string]string{ - "job": "dagger", - }, - Values: [][]string{ - { - formatTS(timestamp), - line, - }, - }, - } - // add extra labels - for _, labelList := range extraLabelList { - for k, v := range labelList { - s.Stream[k] = v - } - } - - data, err := json.Marshal(&struct { - Streams []stream `json:"streams"` - }{ - Streams: []stream{s}, - }) - if err != nil { - return err - } - req, err := http.NewRequest("POST", apiEndpoint, bytes.NewReader(data)) - if err != nil { - return err - } - req.Header.Set("Content-Type", "application/json") - req.Header.Set("X-Scope-OrgID", c.instanceID) - - // Execute HTTP request - res, err := c.httpClient.Do(req) - if err != nil { - return err - } - - if res.StatusCode/100 == 2 { - defer res.Body.Close() - return nil - } - - buf, err := io.ReadAll(res.Body) - if err != nil { - return fmt.Errorf("reading request failed with status code %v: %w", res.StatusCode, err) - } - - return fmt.Errorf("request failed with status code %v: %w", res.StatusCode, errors.New(string(buf))) -} - -func (c *Client) Get(path string) (*http.Response, error) { - url := fmt.Sprintf("%s%s", c.baseURL, path) - req, err := http.NewRequest("GET", url, nil) - if err != nil { - return nil, err - } - return c.httpClient.Do(req) -} - -// Get all the metrics -func (c *Client) Metrics() (string, error) { - url := fmt.Sprintf("%s/metrics", c.baseURL) - res, err := http.Get(url) //nolint - if err != nil { - return "", err - } - - var sb strings.Builder - if _, err := io.Copy(&sb, res.Body); err != nil { - return "", err - } - - if res.StatusCode != http.StatusOK { - return "", fmt.Errorf("request failed with status code %d", res.StatusCode) - } - return sb.String(), nil -} - -// Flush all in-memory chunks held by the ingesters to the backing store -func (c *Client) Flush() error { - req, err := c.request(context.Background(), "POST", fmt.Sprintf("%s/flush", c.baseURL)) - if err != nil { - return err - } - - req.Header.Set("Content-Type", "application/json") - - res, err := c.httpClient.Do(req) - if err != nil { - return err - } - defer res.Body.Close() - - if res.StatusCode/100 == 2 { - return nil - } - return fmt.Errorf("request failed with status code %d", res.StatusCode) -} - -type DeleteRequestParams struct { - Query string `json:"query"` - Start string `json:"start,omitempty"` - End string `json:"end,omitempty"` -} - -// AddDeleteRequest adds a new delete request -func (c *Client) AddDeleteRequest(params DeleteRequestParams) error { - apiEndpoint := fmt.Sprintf("%s/loki/api/v1/delete", c.baseURL) - - req, err := http.NewRequest("POST", apiEndpoint, nil) - if err != nil { - return err - } - - q := req.URL.Query() - q.Add("query", params.Query) - q.Add("start", params.Start) - q.Add("end", params.End) - req.URL.RawQuery = q.Encode() - fmt.Printf("Delete request URL: %v\n", req.URL.String()) - - res, err := c.httpClient.Do(req) - if err != nil { - return err - } - - if res.StatusCode != http.StatusNoContent { - buf, err := io.ReadAll(res.Body) - if err != nil { - return fmt.Errorf("reading request failed with status code %v: %w", res.StatusCode, err) - } - 
defer res.Body.Close() - return fmt.Errorf("request failed with status code %v: %w", res.StatusCode, errors.New(string(buf))) - } - - return nil -} - -type DeleteRequests []DeleteRequest -type DeleteRequest struct { - RequestID string `json:"request_id"` - StartTime float64 `json:"start_time"` - EndTime float64 `json:"end_time"` - Query string `json:"query"` - Status string `json:"status"` - CreatedAt float64 `json:"created_at"` -} - -// GetDeleteRequest gets a delete request using the request ID -func (c *Client) GetDeleteRequests() (DeleteRequests, error) { - resp, err := c.Get("/loki/api/v1/delete") - if err != nil { - return nil, err - } - defer resp.Body.Close() - - buf, err := io.ReadAll(resp.Body) - if err != nil { - return nil, fmt.Errorf("reading request failed with status code %v: %w", resp.StatusCode, err) - } - - var deleteReqs DeleteRequests - err = json.Unmarshal(buf, &deleteReqs) - if err != nil { - return nil, fmt.Errorf("parsing json output failed: %w", err) - } - - return deleteReqs, nil -} - -// StreamValues holds a label key value pairs for the Stream and a list of a list of values -type StreamValues struct { - Stream map[string]string - Values [][]string -} - -// MatrixValues holds a label key value pairs for the metric and a list of a list of values -type MatrixValues struct { - Metric map[string]string - Values [][]interface{} -} - -// VectorValues holds a label key value pairs for the metric and single timestamp and value -type VectorValues struct { - Metric map[string]string `json:"metric"` - Time time.Time - Value string -} - -func (a *VectorValues) UnmarshalJSON(b []byte) error { - var s struct { - Metric map[string]string `json:"metric"` - Value []interface{} `json:"value"` - } - if err := json.Unmarshal(b, &s); err != nil { - return err - } - a.Metric = s.Metric - if len(s.Value) != 2 { - return fmt.Errorf("unexpected value length %d", len(s.Value)) - } - if ts, ok := s.Value[0].(int64); ok { - a.Time = time.Unix(ts, 0) - } - if val, ok := s.Value[1].(string); ok { - a.Value = val - } - return nil -} - -// DataType holds the result type and a list of StreamValues -type DataType struct { - ResultType string - Stream []StreamValues - Matrix []MatrixValues - Vector []VectorValues -} - -func (a *DataType) UnmarshalJSON(b []byte) error { - // get the result type - var s struct { - ResultType string `json:"resultType"` - Result json.RawMessage `json:"result"` - } - if err := json.Unmarshal(b, &s); err != nil { - return err - } - - switch s.ResultType { - case "streams": - if err := json.Unmarshal(s.Result, &a.Stream); err != nil { - return err - } - case "matrix": - if err := json.Unmarshal(s.Result, &a.Matrix); err != nil { - return err - } - case "vector": - if err := json.Unmarshal(s.Result, &a.Vector); err != nil { - return err - } - default: - return fmt.Errorf("unknown result type %s", s.ResultType) - } - a.ResultType = s.ResultType - return nil -} - -// Response holds the status and data -type Response struct { - Status string - Data DataType -} - -type RulesResponse struct { - Status string - Data RulesData -} - -type RulesData struct { - Groups []Rules -} - -type Rules struct { - Name string - File string - Rules []interface{} -} - -// RunRangeQuery runs a query and returns an error if anything went wrong -func (c *Client) RunRangeQuery(ctx context.Context, query string) (*Response, error) { - ctx, cancelFunc := context.WithTimeout(ctx, requestTimeout) - defer cancelFunc() - - buf, statusCode, err := c.run(ctx, c.rangeQueryURL(query)) - if err != nil { - 
return nil, err - } - - return c.parseResponse(buf, statusCode) -} - -// RunQuery runs a query and returns an error if anything went wrong -func (c *Client) RunQuery(ctx context.Context, query string) (*Response, error) { - ctx, cancelFunc := context.WithTimeout(ctx, requestTimeout) - defer cancelFunc() - - v := url.Values{} - v.Set("query", query) - v.Set("time", formatTS(c.Now.Add(time.Second))) - - u, err := url.Parse(c.baseURL) - if err != nil { - return nil, err - } - u.Path = "/loki/api/v1/query" - u.RawQuery = v.Encode() - - buf, statusCode, err := c.run(ctx, u.String()) - if err != nil { - return nil, err - } - - return c.parseResponse(buf, statusCode) -} - -// GetRules returns the loki ruler rules -func (c *Client) GetRules(ctx context.Context) (*RulesResponse, error) { - ctx, cancelFunc := context.WithTimeout(ctx, requestTimeout) - defer cancelFunc() - - u, err := url.Parse(c.baseURL) - if err != nil { - return nil, err - } - u.Path = "/prometheus/api/v1/rules" - - buf, _, err := c.run(ctx, u.String()) - if err != nil { - return nil, err - } - - resp := RulesResponse{} - err = json.Unmarshal(buf, &resp) - if err != nil { - return nil, fmt.Errorf("error parsing response data: %w", err) - } - - return &resp, err -} - -func (c *Client) parseResponse(buf []byte, statusCode int) (*Response, error) { - lokiResp := Response{} - err := json.Unmarshal(buf, &lokiResp) - if err != nil { - return nil, fmt.Errorf("error parsing response data: %w", err) - } - - if statusCode/100 == 2 { - return &lokiResp, nil - } - return nil, fmt.Errorf("request failed with status code %d: %w", statusCode, errors.New(string(buf))) -} - -func (c *Client) rangeQueryURL(query string) string { - v := url.Values{} - v.Set("query", query) - v.Set("start", formatTS(c.Now.Add(-2*time.Hour))) - v.Set("end", formatTS(c.Now.Add(time.Second))) - - u, err := url.Parse(c.baseURL) - if err != nil { - panic(err) - } - u.Path = "/loki/api/v1/query_range" - u.RawQuery = v.Encode() - - return u.String() -} - -func (c *Client) LabelNames(ctx context.Context) ([]string, error) { - ctx, cancelFunc := context.WithTimeout(ctx, requestTimeout) - defer cancelFunc() - - url := fmt.Sprintf("%s/loki/api/v1/labels", c.baseURL) - - req, err := c.request(ctx, "GET", url) - if err != nil { - return nil, err - } - - res, err := c.httpClient.Do(req) - if err != nil { - return nil, err - } - defer res.Body.Close() - - if res.StatusCode/100 != 2 { - return nil, fmt.Errorf("unexpected status code of %d", res.StatusCode) - } - - var values struct { - Data []string `json:"data"` - } - if err := json.NewDecoder(res.Body).Decode(&values); err != nil { - return nil, err - } - - return values.Data, nil -} - -// LabelValues return a LabelValues query -func (c *Client) LabelValues(ctx context.Context, labelName string) ([]string, error) { - ctx, cancelFunc := context.WithTimeout(ctx, requestTimeout) - defer cancelFunc() - - url := fmt.Sprintf("%s/loki/api/v1/label/%s/values", c.baseURL, url.PathEscape(labelName)) - - req, err := c.request(ctx, "GET", url) - if err != nil { - return nil, err - } - - res, err := c.httpClient.Do(req) - if err != nil { - return nil, err - } - defer res.Body.Close() - - if res.StatusCode/100 != 2 { - return nil, fmt.Errorf("unexpected status code of %d", res.StatusCode) - } - - var values struct { - Data []string `json:"data"` - } - if err := json.NewDecoder(res.Body).Decode(&values); err != nil { - return nil, err - } - - return values.Data, nil -} - -func (c *Client) request(ctx context.Context, method string, url string) 
(*http.Request, error) { - ctx = user.InjectOrgID(ctx, c.instanceID) - req, err := http.NewRequestWithContext(ctx, method, url, nil) - if err != nil { - return nil, err - } - req.Header.Set("X-Scope-OrgID", c.instanceID) - return req, nil -} - -func (c *Client) run(ctx context.Context, u string) ([]byte, int, error) { - req, err := c.request(ctx, "GET", u) - if err != nil { - return nil, 0, err - } - - // Execute HTTP request - res, err := c.httpClient.Do(req) - if err != nil { - return nil, 0, err - } - defer res.Body.Close() - - buf, err := io.ReadAll(res.Body) - if err != nil { - return nil, 0, fmt.Errorf("request failed with status code %v: %w", res.StatusCode, err) - } - - return buf, res.StatusCode, nil -} diff --git a/cmd/otel-collector/main.go b/cmd/otel-collector/main.go deleted file mode 100644 index f2d650b842d..00000000000 --- a/cmd/otel-collector/main.go +++ /dev/null @@ -1,118 +0,0 @@ -package main - -import ( - "encoding/json" - "fmt" - "io" - "os" - "strings" - "time" - - "github.com/dagger/dagger/telemetry" - "github.com/spf13/cobra" - "github.com/vito/progrock" -) - -func main() { - cmd.Flags().String("name", "pipeline", "name") - cmd.Flags().StringArray("tag", []string{}, "tags") - - if err := cmd.Execute(); err != nil { - fmt.Fprintf(os.Stderr, "%v\n", err) - os.Exit(1) - } -} - -var cmd = &cobra.Command{ - Use: "otel-collector ", - Args: cobra.ExactArgs(1), - RunE: func(cmd *cobra.Command, args []string) error { - ctx := cmd.Context() - - name, err := cmd.Flags().GetString("name") - if err != nil { - return err - } - tagList, err := cmd.Flags().GetStringArray("tag") - if err != nil { - return err - } - tags, err := parseTags(tagList) - if err != nil { - return err - } - - ch := loadEvents(args[0]) - vertices := completedVertices(ch) - trace := NewTraceExporter(name, vertices, tags) - - now := time.Now() - err = trace.Run(ctx) - if err != nil { - return err - } - fmt.Fprintf(os.Stderr, "=> traces completed in %s\n", time.Since(now)) - - now = time.Now() - if err := logSummary(name, vertices, tags, trace.TraceID()); err != nil { - return err - } - fmt.Fprintf(os.Stderr, "=> logs completed in %s\n", time.Since(now)) - - now = time.Now() - printSummary(os.Stdout, trace) - fmt.Fprintf(os.Stderr, "=> summary completed in %s\n", time.Since(now)) - return nil - }, -} - -func completedVertices(pl *telemetry.Pipeliner) VertexList { - list := VertexList{} - for _, v := range pl.Vertices() { - if v.Completed == nil { - continue - } - - list = append(list, Vertex{v}) - } - - return list -} - -func loadEvents(journal string) *telemetry.Pipeliner { - pl := telemetry.NewPipeliner() - - f, err := os.Open(journal) - if err != nil { - panic(err) - } - - decoder := json.NewDecoder(f) - - for { - var entry progrock.StatusUpdate - err := decoder.Decode(&entry) - if err == io.EOF { - break - } - if err != nil { - panic(err) - } - - pl.TrackUpdate(&entry) - } - - return pl -} - -func parseTags(tags []string) (map[string]string, error) { - res := make(map[string]string) - for _, l := range tags { - parts := strings.SplitN(l, "=", 2) - if len(parts) != 2 { - return nil, fmt.Errorf("malformed tag: %q", l) - } - res[parts[0]] = parts[1] - } - return res, nil -} diff --git a/cmd/otel-collector/summary.go b/cmd/otel-collector/summary.go deleted file mode 100644 index b879310ee8a..00000000000 --- a/cmd/otel-collector/summary.go +++ /dev/null @@ -1,86 +0,0 @@ -package main - -import ( - "fmt" - "io" - "sort" - "strconv" - "strings" - "text/tabwriter" - "time" -) - -const ( - traceURL = 
"https://daggerboard.grafana.net/explore?orgId=1&left=%7B%22datasource%22:%22grafanacloud-traces%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22datasource%22:%7B%22type%22:%22tempo%22,%22uid%22:%22grafanacloud-traces%22%7D,%22queryType%22:%22traceId%22,%22query%22:%22{TRACE_ID}%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D" - metricsURL = "https://daggerboard.grafana.net/d/SyaItlTVk/dagger-overview?from={FROM}&to={TO}&var-detail=pipeline&var-micros=1000000" -) - -func printSummary(w io.Writer, exporter *TraceExporter) { - vertices := exporter.Vertices() - duration := vertices.Duration().Round(time.Second / 10).String() - - fmt.Fprintf(w, "🚀 Dagger pipeline completed in **%s**\n\n", duration) - - printBreakdown(w, exporter.Vertices()) - - traceRunURL := strings.ReplaceAll(traceURL, "{TRACE_ID}", exporter.TraceID()) - metricsRunURL := strings.ReplaceAll(metricsURL, "{FROM}", strconv.FormatInt(vertices.Started().UnixMilli(), 10)) - metricsRunURL = strings.ReplaceAll(metricsRunURL, "{TO}", strconv.FormatInt(vertices.Completed().UnixMilli(), 10)) - fmt.Fprintf(w, "\n- 📈 [Explore metrics](%s)\n", metricsRunURL) - fmt.Fprintf(w, "\n- 🔍 [Explore traces](%s)\n", traceRunURL) - - fmt.Fprintf(w, "\n\n### DAG\n") - fmt.Fprintf(w, "```mermaid\n") - fmt.Fprint(w, printGraph(exporter.Vertices())) - fmt.Fprintf(w, "```\n") -} - -func printBreakdown(w io.Writer, vertices VertexList) { - tw := tabwriter.NewWriter(w, 4, 4, 1, ' ', 0) - defer tw.Flush() - - pipelines := vertices.ByPipeline() - pipelineNames := []string{} - for p := range pipelines { - pipelineNames = append(pipelineNames, p) - } - sort.Strings(pipelineNames) - - fmt.Fprintf(tw, "| **Pipeline** \t| **Duration** \t|\n") - fmt.Fprintf(tw, "| --- \t| --- \t|\n") - for _, pipeline := range pipelineNames { - vertices := pipelines[pipeline] - status := "✅" - if vertices.Error() != nil { - status = "❌" - } - duration := vertices.Duration().Round(time.Second / 10).String() - if vertices.Cached() { - duration = "CACHED" - } - - fmt.Fprintf(tw, "| %s **%s** \t| %s \t|\n", status, pipeline, duration) - } -} - -func printGraph(vertices VertexList) string { - s := strings.Builder{} - s.WriteString("flowchart TD\n") - - for _, v := range vertices { - duration := v.Duration().Round(time.Second / 10).String() - if v.Cached() { - duration = "CACHED" - } - name := strings.ReplaceAll(v.Name(), "\"", "") + " (" + duration + ")" - s.WriteString(fmt.Sprintf(" %s[%q]\n", v.ID(), name)) - } - - for _, v := range vertices { - for _, input := range v.Inputs() { - s.WriteString(fmt.Sprintf(" %s --> %s\n", input, v.ID())) - } - } - - return s.String() -} diff --git a/cmd/otel-collector/traces.go b/cmd/otel-collector/traces.go deleted file mode 100644 index d631ebad893..00000000000 --- a/cmd/otel-collector/traces.go +++ /dev/null @@ -1,187 +0,0 @@ -package main - -import ( - "context" - "fmt" - "os" - - "github.com/dagger/dagger/core/pipeline" - "go.opentelemetry.io/otel" - "go.opentelemetry.io/otel/attribute" - "go.opentelemetry.io/otel/codes" - "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc" - otrace "go.opentelemetry.io/otel/trace" - - "go.opentelemetry.io/otel/sdk/resource" - "go.opentelemetry.io/otel/sdk/trace" - semconv "go.opentelemetry.io/otel/semconv/v1.4.0" -) - -const ( - tracer = "dagger" -) - -type TraceExporter struct { - name string - vertices VertexList - tags map[string]string - - tr otrace.Tracer - - rootSpan otrace.Span - rootCtx context.Context - - contextByVertex map[string]context.Context - 
contextByPipeline map[string]context.Context -} - -func NewTraceExporter(name string, vertices VertexList, tags map[string]string) *TraceExporter { - return &TraceExporter{ - name: name, - vertices: vertices, - tags: tags, - - contextByVertex: make(map[string]context.Context), - contextByPipeline: make(map[string]context.Context), - } -} - -func (c *TraceExporter) Vertices() VertexList { - return c.vertices -} - -func (c *TraceExporter) Run(ctx context.Context) error { - exp, err := newExporter() - if err != nil { - return err - } - - tp := trace.NewTracerProvider( - trace.WithBatcher(exp), - trace.WithResource(newResource()), - ) - defer func() { - _ = tp.Shutdown(context.Background()) - }() - otel.SetTracerProvider(tp) - - c.tr = otel.Tracer(tracer) - - c.rootCtx, c.rootSpan = c.tr.Start(ctx, - c.name, - otrace.WithTimestamp(c.vertices.Started()), - ) - c.rootSpan.SetAttributes(c.attributes()...) - c.rootSpan.End(otrace.WithTimestamp(c.vertices.Completed())) - - for _, v := range c.vertices { - c.sendVertex(v) - } - - return nil -} - -func (c *TraceExporter) sendVertex(v Vertex) { - // Register links for vertex inputs - links := []otrace.Link{} - for _, input := range v.Inputs() { - inputCtx := c.contextByVertex[input] - if inputCtx == nil { - fmt.Fprintf(os.Stderr, "input %s not found\n", input) - continue - } - inputLink := otrace.LinkFromContext(inputCtx) - links = append(links, inputLink) - } - - pipelineCtx := c.pipelineContext(v) - vertexCtx, vertexSpan := c.tr.Start( - pipelineCtx, - v.Name(), - otrace.WithTimestamp(v.Started()), - otrace.WithLinks(links...), - ) - c.contextByVertex[v.ID()] = vertexCtx - vertexSpan.SetAttributes(c.attributes(attribute.Bool("cached", v.Cached()), attribute.String("digest", v.ID()))...) - - if err := v.Error(); err != nil { - vertexSpan.RecordError(err) - vertexSpan.SetStatus(codes.Error, err.Error()) - } - vertexSpan.End(otrace.WithTimestamp(v.Completed())) -} - -func (c *TraceExporter) pipelineContext(v Vertex) context.Context { - ctx := c.rootCtx - pipeline := v.Pipeline() - for i := range pipeline { - parent := pipeline[0 : i+1] - parentCtx := c.contextByPipeline[parent.ID()] - if parentCtx == nil { - parentVertices := c.verticesForPipeline(parent) - var parentSpan otrace.Span - parentCtx, parentSpan = c.tr.Start(ctx, - pipeline[i].Name, - otrace.WithTimestamp(parentVertices.Started()), - ) - parentSpan.SetAttributes(c.attributes()...) 
- parentSpan.End(otrace.WithTimestamp(parentVertices.Completed())) - - c.contextByPipeline[parent.ID()] = parentCtx - } - ctx = parentCtx - } - - return ctx -} - -func (c *TraceExporter) attributes(attributes ...attribute.KeyValue) []attribute.KeyValue { - for k, v := range c.tags { - attributes = append(attributes, attribute.String(k, v)) - } - return attributes -} - -func (c *TraceExporter) verticesForPipeline(selector pipeline.Path) VertexList { - matches := VertexList{} - for _, v := range c.vertices { - if matchPipeline(v, selector) { - matches = append(matches, v) - } - } - return matches -} - -func matchPipeline(v Vertex, selector pipeline.Path) bool { - pipeline := v.Pipeline() - if len(selector) > len(pipeline) { - return false - } - for i, sel := range selector { - if pipeline[i].Name != sel.Name { - return false - } - } - - return true -} - -func (c *TraceExporter) TraceID() string { - if c.rootSpan == nil { - return "" - } - return c.rootSpan.SpanContext().TraceID().String() -} - -func newExporter() (trace.SpanExporter, error) { - return otlptracegrpc.New(context.Background()) -} - -func newResource() *resource.Resource { - return resource.NewWithAttributes( - semconv.SchemaURL, - semconv.ServiceNameKey.String("dagger"), - semconv.ServiceVersionKey.String("v0.1.0"), - attribute.String("environment", "test"), - ) -} diff --git a/cmd/otel-collector/vertex.go b/cmd/otel-collector/vertex.go deleted file mode 100644 index a9a2a612ed0..00000000000 --- a/cmd/otel-collector/vertex.go +++ /dev/null @@ -1,133 +0,0 @@ -package main - -import ( - "errors" - "strings" - "time" - - "github.com/dagger/dagger/core/pipeline" - "github.com/dagger/dagger/telemetry" -) - -type Vertex struct { - v *telemetry.PipelinedVertex -} - -func (w Vertex) ID() string { - return w.v.Id -} - -func (w Vertex) Name() string { - return w.v.Name -} - -func (w Vertex) Pipeline() pipeline.Path { - if len(w.v.Pipelines) == 0 { - return pipeline.Path{} - } - return w.v.Pipelines[0] -} - -func (w Vertex) Internal() bool { - return w.v.Internal -} - -func (w Vertex) Started() time.Time { - if w.v.Started == nil { - return time.Time{} - } - return w.v.Started.AsTime() -} - -func (w Vertex) Completed() time.Time { - if w.v.Completed == nil { - return time.Time{} - } - return w.v.Completed.AsTime() -} - -func (w Vertex) Duration() time.Duration { - return w.Completed().Sub(w.Started()) -} - -func (w Vertex) Cached() bool { - return w.v.Cached -} - -func (w Vertex) Error() error { - if w.v.Error == nil { - return nil - } - return errors.New(*w.v.Error) -} - -func (w Vertex) Inputs() []string { - return w.v.Inputs -} - -type VertexList []Vertex - -func (l VertexList) Started() time.Time { - var first time.Time - for _, v := range l { - if first.IsZero() || v.Started().Before(first) { - first = v.Started() - } - } - return first -} - -func (l VertexList) Completed() time.Time { - var last time.Time - for _, v := range l { - if last.IsZero() || v.Completed().After(last) { - last = v.Completed() - } - } - return last -} - -func (l VertexList) Cached() bool { - for _, v := range l { - if !v.Cached() { - return false - } - } - - // Return true if there is more than one vertex and they're all cached - return len(l) > 0 -} - -func (l VertexList) Duration() time.Duration { - return l.Completed().Sub(l.Started()) -} - -func (l VertexList) Error() error { - for _, v := range l { - if err := v.Error(); err != nil { - return err - } - } - return nil -} - -func (l VertexList) ByPipeline() map[string]VertexList { - breakdown := 
map[string]VertexList{} - for _, v := range l { - pipeline := v.Pipeline() - if len(pipeline) == 0 { - continue - } - // FIXME: events should indicate if this is a "built-in" pipeline - name := pipeline.Name() - if strings.HasPrefix(name, "from ") || - strings.HasPrefix(name, "host.directory") || - name == "docker build" { - continue - } - - breakdown[pipeline.String()] = append(breakdown[pipeline.String()], v) - } - - return breakdown -} diff --git a/cmd/shim/main.go b/cmd/shim/main.go index 496db0b9d1f..093c7b73fdb 100644 --- a/cmd/shim/main.go +++ b/cmd/shim/main.go @@ -7,6 +7,7 @@ import ( "errors" "fmt" "io" + "log/slog" "net" "net/http" "os" @@ -23,16 +24,21 @@ import ( "github.com/cenkalti/backoff/v4" "github.com/containerd/console" - "github.com/dagger/dagger/core" - "github.com/dagger/dagger/engine/buildkit" - "github.com/dagger/dagger/engine/client" - "github.com/dagger/dagger/network" "github.com/google/uuid" "github.com/opencontainers/go-digest" "github.com/opencontainers/runtime-spec/specs-go" - "github.com/vito/progrock" + "go.opentelemetry.io/otel/propagation" + "go.opentelemetry.io/otel/sdk/resource" + semconv "go.opentelemetry.io/otel/semconv/v1.24.0" "golang.org/x/sys/unix" "golang.org/x/term" + + "github.com/dagger/dagger/core" + "github.com/dagger/dagger/engine" + "github.com/dagger/dagger/engine/buildkit" + "github.com/dagger/dagger/engine/client" + "github.com/dagger/dagger/network" + "github.com/dagger/dagger/telemetry" ) const ( @@ -179,6 +185,42 @@ func shim() (returnExitCode int) { return errorExitCode } + // Set up slog initially to log directly to stderr, in case something goes + // wrong with the logging setup. + slog.SetDefault(telemetry.PrettyLogger(os.Stderr, slog.LevelWarn)) + + cleanup, err := proxyOtelToTCP() + if err == nil { + defer cleanup() + } else { + fmt.Fprintln(os.Stderr, "failed to set up opentelemetry proxy:", err) + } + + traceCfg := telemetry.Config{ + Detect: false, // false, since we want "live" exporting + Resource: resource.NewWithAttributes( + semconv.SchemaURL, + semconv.ServiceNameKey.String("dagger-shim"), + semconv.ServiceVersionKey.String(engine.Version), + ), + } + if exp, ok := telemetry.ConfiguredSpanExporter(ctx); ok { + traceCfg.LiveTraceExporters = append(traceCfg.LiveTraceExporters, exp) + } + if exp, ok := telemetry.ConfiguredLogExporter(ctx); ok { + traceCfg.LiveLogExporters = append(traceCfg.LiveLogExporters, exp) + } + + ctx = telemetry.Init(ctx, traceCfg) + defer telemetry.Close() + + logCtx := ctx + if p, ok := os.LookupEnv("DAGGER_FUNCTION_TRACEPARENT"); ok { + logCtx = propagation.TraceContext{}.Extract(ctx, propagation.MapCarrier{"traceparent": p}) + } + + ctx, stdoutOtel, stderrOtel := telemetry.WithStdioToOtel(logCtx, "dagger.io/shim") + name := os.Args[1] args := []string{} if len(os.Args) > 2 { @@ -248,8 +290,12 @@ func shim() (returnExitCode int) { stderrRedirect = stderrRedirectFile } - outWriter := io.MultiWriter(stdoutFile, stdoutRedirect, os.Stdout) - errWriter := io.MultiWriter(stderrFile, stderrRedirect, os.Stderr) + outWriter := io.MultiWriter(stdoutFile, stdoutRedirect, stdoutOtel, os.Stdout) + errWriter := io.MultiWriter(stderrFile, stderrRedirect, stderrOtel, os.Stderr) + + // Direct slog to the new stderr. This is only for dev time debugging, and + // runtime errors/warnings. 
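The shim code above extracts a W3C traceparent from the DAGGER_FUNCTION_TRACEPARENT env var so that function logs attach to the calling span rather than the hidden exec/runtime span; the matching Inject side lives in the core/modfunc.go hunk further down. A minimal sketch of that inject/extract round trip, using only stock go.opentelemetry.io/otel packages (the helper names here are illustrative, not Dagger's actual API):

package main

import (
	"context"
	"fmt"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

// caller side: serialize the current span context into an env var.
func injectTraceparent(ctx context.Context) []string {
	carrier := propagation.MapCarrier{}
	propagation.TraceContext{}.Inject(ctx, carrier)
	env := []string{}
	if tp, ok := carrier["traceparent"]; ok {
		env = append(env, "DAGGER_FUNCTION_TRACEPARENT="+tp)
	}
	return env
}

// shim side: restore the remote span context from the env var, so spans and
// logs started from the returned ctx join the caller's trace.
func extractTraceparent(ctx context.Context) context.Context {
	if tp, ok := os.LookupEnv("DAGGER_FUNCTION_TRACEPARENT"); ok {
		ctx = propagation.TraceContext{}.Extract(ctx, propagation.MapCarrier{"traceparent": tp})
	}
	return ctx
}

func main() {
	ctx, span := otel.Tracer("example").Start(context.Background(), "parent")
	defer span.End()
	// With a real TracerProvider configured, this prints something like
	// [DAGGER_FUNCTION_TRACEPARENT=00-<trace-id>-<span-id>-01].
	fmt.Println(injectTraceparent(ctx))
	_ = extractTraceparent(context.Background())
}
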
+ slog.SetDefault(telemetry.PrettyLogger(errWriter, slog.LevelWarn)) if len(secretsToScrub.Envs) == 0 && len(secretsToScrub.Files) == 0 { cmd.Stdout = outWriter @@ -449,6 +495,8 @@ func setupBundle() int { } var gpuParams string + var otelEndpoint string + var otelProto string keepEnv := []string{} for _, env := range spec.Process.Env { switch { @@ -463,9 +511,6 @@ func setupBundle() int { } keepEnv = append(keepEnv, "_DAGGER_SERVER_ID="+execMetadata.ServerID) - // propagate parent vertex ID - keepEnv = append(keepEnv, "_DAGGER_PROGROCK_PARENT="+execMetadata.ProgParent) - // mount buildkit sock since it's nesting spec.Mounts = append(spec.Mounts, specs.Mount{ Destination: "/.runner.sock", @@ -473,17 +518,6 @@ func setupBundle() int { Options: []string{"rbind"}, Source: "/run/buildkit/buildkitd.sock", }) - // also need the progsock path for forwarding progress - if execMetadata.ProgSockPath == "" { - fmt.Fprintln(os.Stderr, "missing progsock path") - return errorExitCode - } - spec.Mounts = append(spec.Mounts, specs.Mount{ - Destination: "/.progrock.sock", - Type: "bind", - Options: []string{"rbind"}, - Source: execMetadata.ProgSockPath, - }) case strings.HasPrefix(env, "_DAGGER_SERVER_ID="): case strings.HasPrefix(env, aliasPrefix): // NB: don't keep this env var, it's only for the bundling step @@ -493,15 +527,44 @@ func setupBundle() int { fmt.Fprintln(os.Stderr, "host alias:", err) return errorExitCode } - case strings.HasPrefix(env, "_EXPERIMENTAL_DAGGER_GPU_PARAMS"): - splits := strings.Split(env, "=") - gpuParams = splits[1] + case strings.HasPrefix(env, "_EXPERIMENTAL_DAGGER_GPU_PARAMS="): + _, gpuParams, _ = strings.Cut(env, "=") + + // filter out Buildkit's OTLP env vars, we have our own + case strings.HasPrefix(env, "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="): + _, otelEndpoint, _ = strings.Cut(env, "=") + + case strings.HasPrefix(env, "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="): + _, otelProto, _ = strings.Cut(env, "=") + default: keepEnv = append(keepEnv, env) } } spec.Process.Env = keepEnv + if otelEndpoint != "" { + if strings.HasPrefix(otelEndpoint, "/") { + // Buildkit currently sets this to /dev/otel-grpc.sock which is not a valid + // endpoint URL despite being set in an OTEL_* env var. + otelEndpoint = "unix://" + otelEndpoint + } + spec.Process.Env = append(spec.Process.Env, + "OTEL_EXPORTER_OTLP_ENDPOINT="+otelEndpoint, + // Re-set the otel env vars, but with a corrected otelEndpoint. + "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="+otelProto, + "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="+otelEndpoint, + // Dagger sets up a log exporter too. Explicitly set it so things can + // detect support for it. + "OTEL_EXPORTER_OTLP_LOGS_PROTOCOL="+otelProto, + "OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="+otelEndpoint, + // Dagger doesn't set up metrics yet, but we should set this anyway, + // since otherwise some tools default to localhost. 
+ "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL="+otelProto, + "OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="+otelEndpoint, + ) + } + if gpuParams != "" { spec.Process.Env = append(spec.Process.Env, fmt.Sprintf("NVIDIA_VISIBLE_DEVICES=%s", gpuParams)) } @@ -650,16 +713,6 @@ func runWithNesting(ctx context.Context, cmd *exec.Cmd) error { clientParams.ModuleCallerDigest = digest.Digest(moduleCallerDigest) } - progW, err := progrock.DialRPC(ctx, "unix:///.progrock.sock") - if err != nil { - return fmt.Errorf("error connecting to progrock: %w", err) - } - clientParams.ProgrockWriter = progW - - if parentID := os.Getenv("_DAGGER_PROGROCK_PARENT"); parentID != "" { - clientParams.ProgrockParent = parentID - } - sess, ctx, err := client.Connect(ctx, clientParams) if err != nil { return fmt.Errorf("error connecting to engine: %w", err) @@ -668,7 +721,11 @@ func runWithNesting(ctx context.Context, cmd *exec.Cmd) error { _ = ctx // avoid ineffasign lint - go http.Serve(l, sess) //nolint:gosec + srv := &http.Server{ //nolint:gosec + Handler: sess, + BaseContext: func(net.Listener) context.Context { return ctx }, + } + go srv.Serve(l) // pass dagger session along to any SDKs that run in the container os.Setenv("DAGGER_SESSION_PORT", strconv.Itoa(sessionPort)) @@ -770,3 +827,81 @@ func toggleONLCR(enable bool) error { return console.ClearONLCR(fd) } } + +// Some OpenTelemetry clients don't support unix:// endpoints, so we proxy them +// through a TCP endpoint instead. +func proxyOtelToTCP() (cleanup func(), rerr error) { + endpoints := map[string][]string{} + for _, env := range []string{ + "OTEL_EXPORTER_OTLP_ENDPOINT", + "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT", + "OTEL_EXPORTER_OTLP_LOGS_ENDPOINT", + "OTEL_EXPORTER_OTLP_METRICS_ENDPOINT", + } { + if val := os.Getenv(env); val != "" { + slog.Debug("found otel endpoint", "env", env, "endpoint", val) + endpoints[val] = append(endpoints[val], env) + } + } + closers := []func() error{} + cleanup = func() { + for _, closer := range closers { + closer() + } + } + defer func() { + if rerr != nil { + cleanup() + } + }() + for endpoint, envs := range endpoints { + if !strings.HasPrefix(endpoint, "unix://") { + // We only need to fix up unix:// endpoints. 
+ continue + } + + l, err := net.Listen("tcp", "127.0.0.1:0") + if err != nil { + return func() {}, fmt.Errorf("listen: %w", err) + } + closers = append(closers, l.Close) + + slog.Debug("listening for otel proxy", "endpoint", endpoint, "proxy", l.Addr().String()) + go proxyOtelSocket(l, endpoint) + + for _, env := range envs { + slog.Debug("proxying otel endpoint", "env", env, "endpoint", endpoint) + os.Setenv(env, "http://"+l.Addr().String()) + } + } + return cleanup, nil +} + +func proxyOtelSocket(l net.Listener, endpoint string) { + sockPath := strings.TrimPrefix(endpoint, "unix://") + for { + conn, err := l.Accept() + if err != nil { + if !errors.Is(err, net.ErrClosed) { + slog.Error("failed to accept connection", "error", err) + } + return + } + + slog.Debug("accepting otel connection", "endpoint", endpoint) + + go func() { + defer conn.Close() + + remote, err := net.Dial("unix", sockPath) + if err != nil { + slog.Error("failed to dial socket", "error", err) + return + } + defer remote.Close() + + go io.Copy(remote, conn) + io.Copy(conn, remote) + }() + } +} diff --git a/cmd/upload-journal/main.go b/cmd/upload-journal/main.go deleted file mode 100644 index b27e8c86fa5..00000000000 --- a/cmd/upload-journal/main.go +++ /dev/null @@ -1,118 +0,0 @@ -package main - -import ( - "encoding/json" - "flag" - "fmt" - "os" - "os/signal" - "syscall" - "time" - - "github.com/dagger/dagger/telemetry" - bkclient "github.com/moby/buildkit/client" - "github.com/nxadm/tail" - "github.com/vito/progrock" -) - -func main() { - var followFlag bool - flag.BoolVar(&followFlag, "f", false, "follow") - - flag.Parse() - - args := flag.Args() - if len(args) < 1 { - fmt.Fprintf(os.Stderr, "usage: %s [-f] \n", os.Args[0]) - os.Exit(1) - } - journal := args[0] - - t := telemetry.New() - - if !t.Enabled() { - fmt.Fprintln(os.Stderr, "telemetry token not configured") - os.Exit(1) - return - } - - w := telemetry.NewWriter(t) - - fmt.Println("Dagger Cloud url:", t.URL()) - - sigCh := make(chan os.Signal, 1) - signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM) - stopCh := make(chan struct{}) - go func() { - defer close(stopCh) - <-sigCh - }() - - entries, err := tailJournal(journal, followFlag, stopCh) - if err != nil { - fmt.Fprintf(os.Stderr, "err: %v\n", err) - os.Exit(1) - } - err = processJournal(w, entries) - w.Close() - if err != nil { - fmt.Fprintf(os.Stderr, "err: %v\n", err) - os.Exit(1) - } -} - -func processJournal(w progrock.Writer, updates chan *progrock.StatusUpdate) error { - for update := range updates { - if err := w.WriteStatus(update); err != nil { - return err - } - } - - return nil -} - -type JournalEntry struct { - Event *bkclient.SolveStatus - TS time.Time -} - -func tailJournal(journal string, follow bool, stopCh chan struct{}) (chan *progrock.StatusUpdate, error) { - f, err := tail.TailFile(journal, tail.Config{Follow: follow}) - if err != nil { - return nil, err - } - - ch := make(chan *progrock.StatusUpdate) - - go func() { - if stopCh == nil { - return - } - <-stopCh - fmt.Fprintf(os.Stderr, "quitting\n") - if err := f.StopAtEOF(); err != nil { - fmt.Fprintf(os.Stderr, "err: %v\n", err) - } - }() - - go func() { - defer close(ch) - defer f.Cleanup() - - for line := range f.Lines { - if err := line.Err; err != nil { - fmt.Fprintf(os.Stderr, "err: %v\n", err) - return - } - var entry progrock.StatusUpdate - if err := json.Unmarshal([]byte(line.Text), &entry); err != nil { - fmt.Fprintf(os.Stderr, "err: %v\n", err) - return - } - - ch <- &entry - } - }() - - return ch, nil -} diff --git 
a/core/c2h.go b/core/c2h.go index 904372e14a7..d0b200e466d 100644 --- a/core/c2h.go +++ b/core/c2h.go @@ -8,11 +8,10 @@ import ( "time" "github.com/dagger/dagger/engine/buildkit" + "github.com/dagger/dagger/telemetry" "github.com/moby/buildkit/client/llb" bkgw "github.com/moby/buildkit/frontend/gateway/client" - "github.com/moby/buildkit/identity" "github.com/moby/buildkit/solver/pb" - "github.com/vito/progrock" ) type c2hTunnel struct { @@ -22,7 +21,7 @@ type c2hTunnel struct { tunnelServicePorts []PortForward } -func (d *c2hTunnel) Tunnel(ctx context.Context) (err error) { +func (d *c2hTunnel) Tunnel(ctx context.Context) (rerr error) { scratchDef, err := llb.Scratch().Marshal(ctx) if err != nil { return err @@ -76,8 +75,9 @@ func (d *c2hTunnel) Tunnel(ctx context.Context) (err error) { )) } - ctx, vtx := progrock.Span(ctx, identity.NewID(), strings.Join(args, " ")) - defer func() { vtx.Done(err) }() + ctx, span := Tracer().Start(ctx, strings.Join(args, " ")) + defer telemetry.End(span, func() error { return rerr }) + ctx, stdout, stderr := telemetry.WithStdioToOtel(ctx, InstrumentationLibrary) container, err := d.bk.NewContainer(ctx, bkgw.NewContainerRequest{ Hostname: d.tunnelServiceHost, @@ -92,16 +92,16 @@ func (d *c2hTunnel) Tunnel(ctx context.Context) (err error) { // // set a reasonable timeout on this since there have been funky hangs in the // past - cleanupCtx, cleanupCancel := context.WithTimeout(context.Background(), 30*time.Second) + cleanupCtx, cleanupCancel := context.WithTimeout(context.WithoutCancel(ctx), 30*time.Second) defer cleanupCancel() defer container.Release(cleanupCtx) proc, err := container.Start(ctx, bkgw.StartRequest{ Args: args, - Env: []string{"_DAGGER_INTERNAL_COMMAND="}, - Stdout: nopCloser{vtx.Stdout()}, - Stderr: nopCloser{vtx.Stderr()}, + Env: append(telemetry.PropagationEnv(ctx), "_DAGGER_INTERNAL_COMMAND="), + Stdout: nopCloser{stdout}, + Stderr: nopCloser{stderr}, }) if err != nil { return err diff --git a/core/container.go b/core/container.go index 4b97e9dfca9..cef55cf40dd 100644 --- a/core/container.go +++ b/core/container.go @@ -17,11 +17,6 @@ import ( "github.com/containerd/containerd/images" "github.com/containerd/containerd/pkg/transfer/archive" "github.com/containerd/containerd/platforms" - "github.com/vektah/gqlparser/v2/ast" - - "github.com/dagger/dagger/dagql" - "github.com/dagger/dagger/dagql/call" - "github.com/dagger/dagger/engine" "github.com/docker/distribution/reference" "github.com/moby/buildkit/client/llb" "github.com/moby/buildkit/exporter/containerimage/exptypes" @@ -33,9 +28,12 @@ import ( "github.com/opencontainers/go-digest" specs "github.com/opencontainers/image-spec/specs-go/v1" "github.com/pkg/errors" - "github.com/vito/progrock" + "github.com/vektah/gqlparser/v2/ast" "github.com/dagger/dagger/core/pipeline" + "github.com/dagger/dagger/dagql" + "github.com/dagger/dagger/dagql/call" + "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/buildkit" ) @@ -287,10 +285,6 @@ func (container *Container) From(ctx context.Context, addr string) (*Container, platform := container.Platform - // `From` creates 2 vertices: fetching the image config and actually pulling the image. - // We create a sub-pipeline to encapsulate both. 
- ctx, subRecorder := progrock.WithGroup(ctx, fmt.Sprintf("from %s", addr), progrock.Weak()) - refName, err := reference.ParseNormalizedNamed(addr) if err != nil { return nil, err @@ -328,9 +322,6 @@ func (container *Container) From(ctx context.Context, addr string) (*Container, container.FS = def.ToPB() - // associate vertexes to the 'from' sub-pipeline - buildkit.RecordVertexes(subRecorder, container.FS) - container.Config = mergeImageConfig(container.Config, imgSpec.Config) container.ImageRef = digested.String() @@ -364,9 +355,6 @@ func (container *Container) Build( svcs := container.Query.Services bk := container.Query.Buildkit - // add a weak group for the docker build vertices - ctx, subRecorder := progrock.WithGroup(ctx, "docker build", progrock.Weak()) - detach, _, err := svcs.StartBindings(ctx, container.Services) if err != nil { return nil, err @@ -434,9 +422,6 @@ func (container *Container) Build( return nil, err } - // associate vertexes to the 'docker build' sub-pipeline - buildkit.RecordVertexes(subRecorder, def.ToPB()) - container.FS = def.ToPB() container.FS.Source = nil @@ -976,9 +961,9 @@ func (container *Container) UpdateImageConfig(ctx context.Context, updateFn func return container, nil } -func (container *Container) WithPipeline(ctx context.Context, name, description string, labels []pipeline.Label) (*Container, error) { +func (container *Container) WithPipeline(ctx context.Context, name, description string) (*Container, error) { container = container.Clone() - container.Query = container.Query.WithPipeline(name, description, labels) + container.Query = container.Query.WithPipeline(name, description) return container, nil } diff --git a/core/directory.go b/core/directory.go index 3d878433529..e7246eb57db 100644 --- a/core/directory.go +++ b/core/directory.go @@ -12,17 +12,16 @@ import ( "github.com/moby/buildkit/client/llb" bkgw "github.com/moby/buildkit/frontend/gateway/client" - "github.com/moby/buildkit/identity" "github.com/moby/buildkit/solver/pb" "github.com/moby/patternmatcher" "github.com/pkg/errors" fstypes "github.com/tonistiigi/fsutil/types" "github.com/vektah/gqlparser/v2/ast" - "github.com/vito/progrock" "github.com/dagger/dagger/core/pipeline" "github.com/dagger/dagger/dagql" "github.com/dagger/dagger/engine/buildkit" + "github.com/dagger/dagger/telemetry" ) // Directory is a content-addressed directory. 
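The hunks above and below swap progrock.Span/vtx.Done for an OTel span plus a deferred, error-aware end helper on a named return value. Dagger's Tracer() and telemetry.End helpers are defined outside these hunks; assuming telemetry.End roughly records the returned error and ends the span, the pattern looks like this sketch built only on stock OpenTelemetry APIs:

package main

import (
	"context"
	"errors"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
	"go.opentelemetry.io/otel/trace"
)

// endSpan approximates an error-aware span-ending helper: record a non-nil
// error on the span, mark it failed, then end it.
func endSpan(span trace.Span, errFn func() error) {
	if err := errFn(); err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
	}
	span.End()
}

// export mirrors the shape of the rewritten methods: a named return error
// lets the deferred helper observe whatever the function ultimately returns.
func export(ctx context.Context, dest string) (rerr error) {
	ctx, span := otel.Tracer("example").Start(ctx, "export to "+dest)
	defer endSpan(span, func() error { return rerr })

	_ = ctx // real code would pass ctx on to the work being traced
	return errors.New("boom") // recorded on the span by the deferred helper
}

func main() {
	_ = export(context.Background(), "/tmp/out")
}
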
@@ -154,9 +153,9 @@ func (dir *Directory) SetState(ctx context.Context, st llb.State) error { return nil } -func (dir *Directory) WithPipeline(ctx context.Context, name, description string, labels []pipeline.Label) (*Directory, error) { +func (dir *Directory) WithPipeline(ctx context.Context, name, description string) (*Directory, error) { dir = dir.Clone() - dir.Query = dir.Query.WithPipeline(name, description, labels) + dir.Query = dir.Query.WithPipeline(name, description) return dir, nil } @@ -716,9 +715,8 @@ func (dir *Directory) Export(ctx context.Context, destPath string, merge bool) ( defPB = dir.LLB } - ctx, vtx := progrock.Span(ctx, identity.NewID(), - fmt.Sprintf("export directory %s to host %s", dir.Dir, destPath)) - defer func() { vtx.Done(rerr) }() + ctx, span := Tracer().Start(ctx, fmt.Sprintf("export directory %s to host %s", dir.Dir, destPath)) + defer telemetry.End(span, func() error { return rerr }) detach, _, err := svcs.StartBindings(ctx, dir.Services) if err != nil { diff --git a/core/file.go b/core/file.go index b81ce24318a..20fb81fbbf9 100644 --- a/core/file.go +++ b/core/file.go @@ -11,15 +11,14 @@ import ( "github.com/moby/buildkit/client/llb" bkgw "github.com/moby/buildkit/frontend/gateway/client" - "github.com/moby/buildkit/identity" "github.com/moby/buildkit/solver/pb" fstypes "github.com/tonistiigi/fsutil/types" "github.com/vektah/gqlparser/v2/ast" - "github.com/vito/progrock" "github.com/dagger/dagger/core/pipeline" "github.com/dagger/dagger/core/reffs" "github.com/dagger/dagger/engine/buildkit" + "github.com/dagger/dagger/telemetry" ) // File is a content-addressed file. @@ -249,7 +248,7 @@ func (file *File) Open(ctx context.Context) (io.ReadCloser, error) { return fs.Open(file.File) } -func (file *File) Export(ctx context.Context, dest string, allowParentDirPath bool) error { +func (file *File) Export(ctx context.Context, dest string, allowParentDirPath bool) (rerr error) { svcs := file.Query.Services bk := file.Query.Buildkit @@ -262,9 +261,8 @@ func (file *File) Export(ctx context.Context, dest string, allowParentDirPath bo return err } - ctx, vtx := progrock.Span(ctx, identity.NewID(), - fmt.Sprintf("export file %s to host %s", file.File, dest)) - defer vtx.Done(err) + ctx, vtx := Tracer().Start(ctx, fmt.Sprintf("export file %s to host %s", file.File, dest)) + defer telemetry.End(vtx, func() error { return rerr }) detach, _, err := svcs.StartBindings(ctx, file.Services) if err != nil { diff --git a/core/healthcheck.go b/core/healthcheck.go index 214b743a131..ac6bc206fbe 100644 --- a/core/healthcheck.go +++ b/core/healthcheck.go @@ -7,11 +7,10 @@ import ( "syscall" "github.com/dagger/dagger/engine/buildkit" + "github.com/dagger/dagger/telemetry" "github.com/moby/buildkit/client/llb" bkgw "github.com/moby/buildkit/frontend/gateway/client" - "github.com/moby/buildkit/identity" "github.com/moby/buildkit/solver/pb" - "github.com/vito/progrock" ) type portHealthChecker struct { @@ -28,7 +27,7 @@ func newHealth(bk *buildkit.Client, host string, ports []Port) *portHealthChecke } } -func (d *portHealthChecker) Check(ctx context.Context) (err error) { +func (d *portHealthChecker) Check(ctx context.Context) (rerr error) { args := []string{"check", d.host} allPortsSkipped := true for _, port := range d.ports { @@ -42,8 +41,9 @@ func (d *portHealthChecker) Check(ctx context.Context) (err error) { } // always show health checks - ctx, vtx := progrock.Span(ctx, identity.NewID(), strings.Join(args, " ")) - defer func() { vtx.Done(err) }() + ctx, span := 
Tracer().Start(ctx, strings.Join(args, " ")) + defer telemetry.End(span, func() error { return rerr }) + ctx, stdout, stderr := telemetry.WithStdioToOtel(ctx, InstrumentationLibrary) scratchDef, err := llb.Scratch().Marshal(ctx) if err != nil { @@ -72,15 +72,15 @@ func (d *portHealthChecker) Check(ctx context.Context) (err error) { // NB: use a different ctx than the one that'll be interrupted for anything // that needs to run as part of post-interruption cleanup - cleanupCtx := context.Background() + cleanupCtx := context.WithoutCancel(ctx) defer container.Release(cleanupCtx) proc, err := container.Start(ctx, bkgw.StartRequest{ Args: args, - Env: []string{"_DAGGER_INTERNAL_COMMAND="}, - Stdout: nopCloser{vtx.Stdout()}, - Stderr: nopCloser{vtx.Stderr()}, + Env: append(telemetry.PropagationEnv(ctx), "_DAGGER_INTERNAL_COMMAND="), + Stdout: nopCloser{stdout}, + Stderr: nopCloser{stderr}, }) if err != nil { return err diff --git a/core/host.go b/core/host.go index 75e702a3361..414fed55d33 100644 --- a/core/host.go +++ b/core/host.go @@ -9,7 +9,6 @@ import ( "github.com/dagger/dagger/dagql" specs "github.com/opencontainers/image-spec/specs-go/v1" "github.com/vektah/gqlparser/v2/ast" - "github.com/vito/progrock" ) type Host struct { @@ -71,12 +70,8 @@ func (host *Host) Directory( // TODO: enforcement that requester session is granted access to source session at this path // Create a sub-pipeline to group llb.Local instructions - pipelineName := fmt.Sprintf("%s %s", pipelineNamePrefix, dirPath) - ctx, subRecorder := progrock.WithGroup(ctx, pipelineName, progrock.Weak()) - _, desc, err := host.Query.Buildkit.LocalImport( ctx, - subRecorder, host.Query.Platform.Spec(), dirPath, filter.Exclude, diff --git a/core/integration/engine_test.go b/core/integration/engine_test.go index f7dc0bcb2aa..83a065c521f 100644 --- a/core/integration/engine_test.go +++ b/core/integration/engine_test.go @@ -164,7 +164,7 @@ func TestDaggerRun(t *testing.T) { stderr, err := clientCtr.Stderr(ctx) require.NoError(t, err) // verify we got some progress output - require.Contains(t, stderr, "resolve image config for") + require.Contains(t, stderr, "Container.from") } func TestClientSendsLabelsInTelemetry(t *testing.T) { @@ -222,8 +222,8 @@ func TestClientSendsLabelsInTelemetry(t *testing.T) { WithEnvVariable("_EXPERIMENTAL_DAGGER_CLI_BIN", "/bin/dagger"). WithEnvVariable("_EXPERIMENTAL_DAGGER_RUNNER_HOST", "tcp://dev-engine:1234"). WithServiceBinding("cloud", fakeCloud). - WithEnvVariable("_EXPERIMENTAL_DAGGER_CLOUD_URL", "http://cloud:8080/"+eventsID). - WithEnvVariable("_EXPERIMENTAL_DAGGER_CLOUD_TOKEN", "test"). + WithEnvVariable("DAGGER_CLOUD_URL", "http://cloud:8080/"+eventsID). + WithEnvVariable("DAGGER_CLOUD_TOKEN", "test"). WithExec([]string{"git", "config", "--global", "init.defaultBranch", "main"}). WithExec([]string{"git", "config", "--global", "user.email", "test@example.com"}). // make sure we handle non-ASCII usernames @@ -238,7 +238,7 @@ func TestClientSendsLabelsInTelemetry(t *testing.T) { receivedEvents, err := withCode. WithMountedCache("/events", eventsVol). WithExec([]string{ - "cat", fmt.Sprintf("/events/%s.json", eventsID), + "sh", "-c", "cat $0", fmt.Sprintf("/events/%s/**/*.json", eventsID), }). 
Stdout(ctx) require.NoError(t, err) diff --git a/core/integration/module_iface_test.go b/core/integration/module_iface_test.go index 6383d8d45a2..8a6c3080152 100644 --- a/core/integration/module_iface_test.go +++ b/core/integration/module_iface_test.go @@ -50,6 +50,7 @@ func (m *Test) Fn() BadIface { With(daggerFunctions()). Sync(ctx) require.Error(t, err) + require.NoError(t, c.Close()) require.Regexp(t, `missing method .* from DaggerObject interface, which must be embedded in interfaces used in Functions and Objects`, logs.String()) }) } diff --git a/core/integration/module_test.go b/core/integration/module_test.go index 3344a6522d7..f392d19dd28 100644 --- a/core/integration/module_test.go +++ b/core/integration/module_test.go @@ -1000,6 +1000,7 @@ func (m *Minimal) Hello(name string, opts struct{}, opts2 struct{}) string { _, err := modGen.With(daggerQuery(`{minimal{hello}}`)).Stdout(ctx) require.Error(t, err) require.NoError(t, c.Close()) + t.Log(logs.String()) require.Contains(t, logs.String(), "nested structs are not supported") } diff --git a/core/integration/pipeline_test.go b/core/integration/pipeline_test.go index 3448507833f..dd5cc4be398 100644 --- a/core/integration/pipeline_test.go +++ b/core/integration/pipeline_test.go @@ -9,114 +9,6 @@ import ( "github.com/stretchr/testify/require" ) -func TestPipeline(t *testing.T) { - t.Parallel() - - cacheBuster := fmt.Sprintf("%d", time.Now().UTC().UnixNano()) - - t.Run("client pipeline", func(t *testing.T) { - t.Parallel() - - var logs safeBuffer - c, ctx := connect(t, dagger.WithLogOutput(&logs)) - - _, err := c. - Pipeline("client pipeline"). - Container(). - From(alpineImage). - WithExec([]string{"echo", cacheBuster}). - Sync(ctx) - - require.NoError(t, err) - - require.NoError(t, c.Close()) // close + flush logs - - t.Log(logs.String()) - require.Contains(t, logs.String(), "client pipeline") - }) - - t.Run("container pipeline", func(t *testing.T) { - t.Parallel() - - var logs safeBuffer - c, ctx := connect(t, dagger.WithLogOutput(&logs)) - - _, err := c. - Container(). - Pipeline("container pipeline"). - From(alpineImage). - WithExec([]string{"echo", cacheBuster}). - Sync(ctx) - - require.NoError(t, err) - - require.NoError(t, c.Close()) // close + flush logs - - t.Log(logs.String()) - require.Contains(t, logs.String(), "container pipeline") - }) - - t.Run("directory pipeline", func(t *testing.T) { - t.Parallel() - - var logs safeBuffer - c, ctx := connect(t, dagger.WithLogOutput(&logs)) - - contents, err := c. - Directory(). - Pipeline("directory pipeline"). - WithNewFile("/foo", cacheBuster). - File("/foo"). - Contents(ctx) - - require.NoError(t, err) - require.Equal(t, contents, cacheBuster) - - require.NoError(t, c.Close()) // close + flush logs - - t.Log(logs.String()) - require.Contains(t, logs.String(), "directory pipeline") - }) - - t.Run("service pipeline", func(t *testing.T) { - t.Parallel() - - var logs safeBuffer - c, ctx := connect(t, dagger.WithLogOutput(&logs)) - - srv, url := httpService(ctx, t, c, "Hello, world!") - - hostname, err := srv.Hostname(ctx) - require.NoError(t, err) - - client := c.Container(). - From(alpineImage). - WithServiceBinding("www", srv). - WithExec([]string{"apk", "add", "curl"}). 
- WithExec([]string{"curl", "-v", url}) - - _, err = client.Sync(ctx) - require.NoError(t, err) - - _, err = srv.Stop(ctx) // FIXME: shouldn't need this, but test is flaking - require.NoError(t, err) - - require.NoError(t, c.Close()) // close + flush logs - - t.Log(logs.String()) - require.Contains(t, logs.String(), "service "+hostname) - require.Regexp(t, `start python -m http.server.*DONE`, logs.String()) - }) -} - -func TestPipelineGraphQLClient(t *testing.T) { - t.Parallel() - - c, _ := connect(t) - require.NotNil(t, c.GraphQLClient()) - require.NotNil(t, c.Pipeline("client pipeline").GraphQLClient()) -} - func TestInternalVertexes(t *testing.T) { t.Parallel() diff --git a/core/integration/suite_test.go b/core/integration/suite_test.go index 23b4a90f8e6..bb05ad7662c 100644 --- a/core/integration/suite_test.go +++ b/core/integration/suite_test.go @@ -18,14 +18,31 @@ import ( "dagger.io/dagger" "github.com/dagger/dagger/core" "github.com/dagger/dagger/internal/testutil" + "github.com/dagger/dagger/telemetry" "github.com/moby/buildkit/identity" "github.com/stretchr/testify/require" + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/trace" ) +func init() { + telemetry.Init(context.Background(), telemetry.Config{ + Detect: true, + Resource: telemetry.FallbackResource(), + }) +} + +func Tracer() trace.Tracer { + return otel.Tracer("test") +} + func connect(t testing.TB, opts ...dagger.ClientOpt) (*dagger.Client, context.Context) { ctx, cancel := context.WithCancel(context.Background()) t.Cleanup(cancel) + ctx, span := Tracer().Start(ctx, t.Name()) + t.Cleanup(func() { span.End() }) + opts = append([]dagger.ClientOpt{ dagger.WithLogOutput(newTWriter(t)), }, opts...) diff --git a/core/integration/testdata/telemetry/main.go b/core/integration/testdata/telemetry/main.go index 180fbc952b9..84cc84750b2 100644 --- a/core/integration/testdata/telemetry/main.go +++ b/core/integration/testdata/telemetry/main.go @@ -11,21 +11,16 @@ import ( ) func main() { - err := http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - u, p, ok := r.BasicAuth() - if !ok { - panic("no basic auth") - } - - if p != "" { - panic("password should be empty") - } - - if u != "test" { - panic("token must be set to 'test'") + err := http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { //nolint: gosec + auth := r.Header.Get("Authorization") + if auth != "Bearer test" { + panic(fmt.Sprintf("authorization header must be %q, got %q", "Bearer test", auth)) } eventsFp := filepath.Join("/events", fmt.Sprintf("%s.json", r.URL.Path)) + if err := os.MkdirAll(filepath.Dir(eventsFp), 0755); err != nil { + panic(err) + } eventsF, err := os.OpenFile(eventsFp, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644) if err != nil { diff --git a/core/moddeps.go b/core/moddeps.go index 60a814e8e51..7d2346bcae2 100644 --- a/core/moddeps.go +++ b/core/moddeps.go @@ -9,7 +9,7 @@ import ( "github.com/dagger/dagger/cmd/codegen/introspection" "github.com/dagger/dagger/dagql" dagintro "github.com/dagger/dagger/dagql/introspection" - "github.com/dagger/dagger/tracing" + "github.com/dagger/dagger/telemetry" ) const ( @@ -139,7 +139,7 @@ func (d *ModDeps) lazilyLoadSchema(ctx context.Context) (loadedSchema *dagql.Ser dag := dagql.NewServer[*Query](d.root) - dag.Around(tracing.AroundFunc) + dag.Around(telemetry.AroundFunc) // share the same cache session-wide dag.Cache = d.root.Cache diff --git a/core/modfunc.go b/core/modfunc.go index a131036a04a..8584ec50c4c 100644 --- 
a/core/modfunc.go +++ b/core/modfunc.go @@ -12,11 +12,12 @@ import ( "github.com/dagger/dagger/dagql/call" "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/buildkit" + "github.com/iancoleman/strcase" bkgw "github.com/moby/buildkit/frontend/gateway/client" "github.com/moby/buildkit/util/bklog" "github.com/opencontainers/go-digest" ocispecs "github.com/opencontainers/image-spec/specs-go/v1" - "github.com/vito/progrock" + "go.opentelemetry.io/otel/propagation" ) type ModuleFunction struct { @@ -197,6 +198,24 @@ func (fn *ModuleFunction) Call(ctx context.Context, caller *call.ID, opts *CallO return nil, fmt.Errorf("failed to mount mod metadata directory: %w", err) } + ctr, err = ctr.UpdateImageConfig(ctx, func(cfg ocispecs.ImageConfig) ocispecs.ImageConfig { + // Used by the shim to associate logs to the function call instead of the + // exec /runtime process, which we hide. + tc := propagation.TraceContext{} + carrier := propagation.MapCarrier{} + tc.Inject(ctx, carrier) + for _, f := range tc.Fields() { + name := "DAGGER_FUNCTION_" + strcase.ToScreamingSnake(f) + if val, ok := carrier[f]; ok { + cfg.Env = append(cfg.Env, name+"="+val) + } + } + return cfg + }) + if err != nil { + return nil, fmt.Errorf("failed to update image config: %w", err) + } + // Setup the Exec for the Function call and evaluate it ctr, err = ctr.WithExec(ctx, ContainerExecOpts{ ModuleCallerDigest: callerDigest, @@ -233,8 +252,7 @@ func (fn *ModuleFunction) Call(ctx context.Context, caller *call.ID, opts *CallO deps = mod.Deps.Prepend(mod) } - err = mod.Query.RegisterFunctionCall(ctx, callerDigest, deps, fn.mod, callMeta, - progrock.FromContext(ctx).Parent) + err = mod.Query.RegisterFunctionCall(ctx, callerDigest, deps, fn.mod, callMeta) if err != nil { return nil, fmt.Errorf("failed to register function call: %w", err) } diff --git a/core/object.go b/core/object.go index 7222457dca8..e6caa9c29b9 100644 --- a/core/object.go +++ b/core/object.go @@ -241,7 +241,14 @@ func (obj *ModuleObject) installConstructor(ctx context.Context, dag *dagql.Serv } spec.Name = gqlFieldName(mod.Name()) - spec.ImpurityReason = "Module functions are currently always impure." + + // NB: functions actually _are_ cached per-session, which matches the + // lifetime of the server, so we might as well consider them pure. + // That way there will be locking around concurrent calls, so the user won't + // see multiple in parallel. Reconsider if/when we have a global cache and/or + // figure out function caching. + spec.ImpurityReason = "" + spec.Module = obj.Module.IDModule() dag.Root().ObjectType().Extend( diff --git a/core/pipeline/label.go b/core/pipeline/label.go deleted file mode 100644 index 788b8c03e58..00000000000 --- a/core/pipeline/label.go +++ /dev/null @@ -1,690 +0,0 @@ -package pipeline - -import ( - "crypto/sha256" - "encoding/base64" - "errors" - "fmt" - "os" - "os/exec" - "regexp" - "runtime" - "strconv" - "strings" - - "github.com/denisbrodbeck/machineid" - "github.com/go-git/go-git/v5" - "github.com/go-git/go-git/v5/plumbing" - "github.com/go-git/go-git/v5/plumbing/object" - "github.com/google/go-github/v50/github" - "github.com/sirupsen/logrus" -) - -type Label struct { - Name string `json:"name" field:"true" doc:"Label name."` - Value string `json:"value" field:"true" doc:"Label value."` -} - -func (Label) TypeName() string { - return "PipelineLabel" -} - -func (Label) TypeDescription() string { - return "Key value object that represents a pipeline label." 
-} - -type Labels []Label - -func EngineLabel(engineName string) Label { - return Label{ - Name: "dagger.io/engine", - Value: engineName, - } -} - -func LoadServerLabels(engineVersion, os, arch string, cacheEnabled bool) Labels { - labels := []Label{ - { - Name: "dagger.io/server.os", - Value: os, - }, - { - Name: "dagger.io/server.arch", - Value: arch, - }, - { - Name: "dagger.io/server.version", - Value: engineVersion, - }, - - { - Name: "dagger.io/server.cache.enabled", - Value: strconv.FormatBool(cacheEnabled), - }, - } - - return labels -} - -func LoadClientLabels(engineVersion string) Labels { - labels := []Label{ - { - Name: "dagger.io/client.os", - Value: runtime.GOOS, - }, - { - Name: "dagger.io/client.arch", - Value: runtime.GOARCH, - }, - { - Name: "dagger.io/client.version", - Value: engineVersion, - }, - } - - machineID, err := machineid.ProtectedID("dagger") - if err == nil { - labels = append(labels, Label{ - Name: "dagger.io/client.machine_id", - Value: machineID, - }) - } - - return labels -} - -func LoadVCSLabels(workdir string) Labels { - labels := []Label{} - - if gitLabels, err := LoadGitLabels(workdir); err == nil { - labels = append(labels, gitLabels...) - } else { - logrus.Warnf("failed to collect git labels: %s", err) - } - - if githubLabels, err := LoadGitHubLabels(); err == nil { - labels = append(labels, githubLabels...) - } else { - logrus.Warnf("failed to collect GitHub labels: %s", err) - } - - if gitlabLabels, err := LoadGitLabLabels(); err == nil { - labels = append(labels, gitlabLabels...) - } else { - logrus.Warnf("failed to collect Gitlab labels: %s", err) - } - - if CircleCILabels, err := LoadCircleCILabels(); err == nil { - labels = append(labels, CircleCILabels...) - } else { - logrus.Warnf("failed to collect CircleCI labels: %s", err) - } - - return labels -} - -// Define a type for functions that fetch a branch commit -type fetchFunc func(repo *git.Repository, branch string) (*object.Commit, error) - -// Function to fetch from the origin remote -func fetchFromOrigin(repo *git.Repository, branch string) (*object.Commit, error) { - // Fetch from the origin remote - cmd := exec.Command("git", "fetch", "--depth", "1", "origin", branch) - err := cmd.Run() - if err != nil { - return nil, fmt.Errorf("error fetching branch from origin: %w", err) - } - - // Get the reference of the fetched branch - refName := plumbing.ReferenceName(fmt.Sprintf("refs/remotes/origin/%s", branch)) - ref, err := repo.Reference(refName, true) - if err != nil { - return nil, fmt.Errorf("error getting reference: %w", err) - } - - // Get the commit object of the fetched branch - branchCommit, err := repo.CommitObject(ref.Hash()) - if err != nil { - return nil, fmt.Errorf("error getting commit: %w", err) - } - - return branchCommit, nil -} - -// Function to fetch from the fork remote -// GitHub forks are not added as remotes by default, so we need to guess the fork URL -// This is a heuristic approach, as the fork might not exist from the information we have -func fetchFromFork(repo *git.Repository, branch string) (*object.Commit, error) { - // Get the username of the person who initiated the workflow run - username := os.Getenv("GITHUB_ACTOR") - - // Get the repository name (owner/repo) - repository := os.Getenv("GITHUB_REPOSITORY") - parts := strings.Split(repository, "/") - if len(parts) < 2 { - return nil, fmt.Errorf("invalid repository format: %s", repository) - } - - // Get the server URL: "https://github.com/" in general, - // but can be different for GitHub Enterprise - 
serverURL := os.Getenv("GITHUB_SERVER_URL") - - forkURL := fmt.Sprintf("%s/%s/%s", serverURL, username, parts[1]) - - cmd := exec.Command("git", "remote", "add", "fork", forkURL) - err := cmd.Run() - if err != nil { - return nil, fmt.Errorf("error adding fork as remote: %w", err) - } - - cmd = exec.Command("git", "fetch", "--depth", "1", "fork", branch) - err = cmd.Run() - if err != nil { - return nil, fmt.Errorf("error fetching branch from fork: %w", err) - } - - // Get the reference of the fetched branch - refName := plumbing.ReferenceName(fmt.Sprintf("refs/remotes/fork/%s", branch)) - ref, err := repo.Reference(refName, true) - if err != nil { - return nil, fmt.Errorf("error getting reference: %w", err) - } - - // Get the commit object of the fetched branch - branchCommit, err := repo.CommitObject(ref.Hash()) - if err != nil { - return nil, fmt.Errorf("error getting commit: %w", err) - } - - return branchCommit, nil -} - -func LoadGitLabels(workdir string) (Labels, error) { - repo, err := git.PlainOpenWithOptions(workdir, &git.PlainOpenOptions{ - DetectDotGit: true, - }) - if err != nil { - if errors.Is(err, git.ErrRepositoryNotExists) { - return nil, nil - } - - return nil, err - } - - labels := []Label{} - - origin, err := repo.Remote("origin") - if err == nil { - urls := origin.Config().URLs - if len(urls) == 0 { - return []Label{}, nil - } - - endpoint, err := parseGitURL(urls[0]) - if err != nil { - return nil, err - } - - labels = append(labels, Label{ - Name: "dagger.io/git.remote", - Value: endpoint, - }) - } - - head, err := repo.Head() - if err != nil { - return nil, err - } - - commit, err := repo.CommitObject(head.Hash()) - if err != nil { - return nil, err - } - - // Checks if the commit is a merge commit in the context of pull request - // Only GitHub needs to be handled, as GitLab doesn't detach the head in MR context - if os.Getenv("GITHUB_EVENT_NAME") == "pull_request" && commit.NumParents() > 1 { - // Get the pull request's origin branch name - branch := os.Getenv("GITHUB_HEAD_REF") - - // List of remotes function to try fetching from: origin and fork - fetchFuncs := []fetchFunc{fetchFromOrigin, fetchFromFork} - - var branchCommit *object.Commit - var err error - - for _, fetch := range fetchFuncs { - branchCommit, err = fetch(repo, branch) - if err == nil { - commit = branchCommit - break - } else { - fmt.Fprintf(os.Stderr, "Error fetching branch: %s", err.Error()) - } - } - } - - title, _, _ := strings.Cut(commit.Message, "\n") - - labels = append(labels, - Label{ - Name: "dagger.io/git.ref", - Value: commit.Hash.String(), - }, - Label{ - Name: "dagger.io/git.author.name", - Value: commit.Author.Name, - }, - Label{ - Name: "dagger.io/git.author.email", - Value: commit.Author.Email, - }, - Label{ - Name: "dagger.io/git.committer.name", - Value: commit.Committer.Name, - }, - Label{ - Name: "dagger.io/git.committer.email", - Value: commit.Committer.Email, - }, - Label{ - Name: "dagger.io/git.title", - Value: title, // first line from commit message - }, - ) - - // check if ref is a tag or branch - refs, _ := repo.References() - err = refs.ForEach(func(ref *plumbing.Reference) error { - if ref.Hash() == commit.Hash { - if ref.Name().IsTag() { - labels = append(labels, Label{ - Name: "dagger.io/git.tag", - Value: ref.Name().Short(), - }) - } - if ref.Name().IsBranch() { - labels = append(labels, Label{ - Name: "dagger.io/git.branch", - Value: ref.Name().Short(), - }) - } - } - return nil - }) - if err != nil { - return nil, err - } - - return labels, nil -} - -func 
LoadCircleCILabels() (Labels, error) { - if os.Getenv("CIRCLECI") != "true" { //nolint:goconst - return []Label{}, nil - } - - labels := []Label{ - { - Name: "dagger.io/vcs.change.branch", - Value: os.Getenv("CIRCLE_BRANCH"), - }, - { - Name: "dagger.io/vcs.change.head_sha", - Value: os.Getenv("CIRCLE_SHA1"), - }, - { - Name: "dagger.io/vcs.job.name", - Value: os.Getenv("CIRCLE_JOB"), - }, - } - - firstEnvLabel := func(label string, envVar []string) (Label, bool) { - for _, envVar := range envVar { - triggererLogin := os.Getenv(envVar) - if triggererLogin != "" { - return Label{ - Name: label, - Value: triggererLogin, - }, true - } - } - return Label{}, false - } - - // environment variables beginning with "CIRCLE_PIPELINE_" are set in `.circle-ci` pipeline config - pipelineNumber := []string{ - "CIRCLE_PIPELINE_NUMBER", - } - if label, found := firstEnvLabel("dagger.io/vcs.change.number", pipelineNumber); found { - labels = append(labels, label) - } - - triggererLabels := []string{ - "CIRCLE_USERNAME", // all, but account needs to exist on circleCI - "CIRCLE_PROJECT_USERNAME", // github / bitbucket - "CIRCLE_PIPELINE_TRIGGER_LOGIN", // gitlab - } - if label, found := firstEnvLabel("dagger.io/vcs.triggerer.login", triggererLabels); found { - labels = append(labels, label) - } - - repoNameLabels := []string{ - "CIRCLE_PROJECT_REPONAME", // github / bitbucket - "CIRCLE_PIPELINE_REPO_FULL_NAME", // gitlab - } - if label, found := firstEnvLabel("dagger.io/vcs.repo.full_name", repoNameLabels); found { - labels = append(labels, label) - } - - vcsChangeURL := []string{ - "CIRCLE_PULL_REQUEST", // github / bitbucket, only from forks - } - if label, found := firstEnvLabel("dagger.io/vcs.change.url", vcsChangeURL); found { - labels = append(labels, label) - } - - pipelineRepoURL := os.Getenv("CIRCLE_PIPELINE_REPO_URL") - repositoryURL := os.Getenv("CIRCLE_REPOSITORY_URL") - if pipelineRepoURL != "" { // gitlab - labels = append(labels, Label{ - Name: "dagger.io/vcs.repo.url", - Value: pipelineRepoURL, - }) - } else if repositoryURL != "" { // github / bitbucket (returns the remote) - transformedURL := repositoryURL - if strings.Contains(repositoryURL, "@") { // from ssh to https - re := regexp.MustCompile(`git@(.*?):(.*?)/(.*)\.git`) - transformedURL = re.ReplaceAllString(repositoryURL, "https://$1/$2/$3") - } - labels = append(labels, Label{ - Name: "dagger.io/vcs.repo.url", - Value: transformedURL, - }) - } - - return labels, nil -} - -func LoadGitLabLabels() (Labels, error) { - if os.Getenv("GITLAB_CI") != "true" { //nolint:goconst - return []Label{}, nil - } - - branchName := os.Getenv("CI_MERGE_REQUEST_SOURCE_BRANCH_NAME") - if branchName == "" { - // for a branch job, CI_MERGE_REQUEST_SOURCE_BRANCH_NAME is empty - branchName = os.Getenv("CI_COMMIT_BRANCH") - } - - changeTitle := os.Getenv("CI_MERGE_REQUEST_TITLE") - if changeTitle == "" { - changeTitle = os.Getenv("CI_COMMIT_TITLE") - } - - labels := []Label{ - { - Name: "dagger.io/vcs.repo.url", - Value: os.Getenv("CI_PROJECT_URL"), - }, - { - Name: "dagger.io/vcs.repo.full_name", - Value: os.Getenv("CI_PROJECT_PATH"), - }, - { - Name: "dagger.io/vcs.change.branch", - Value: branchName, - }, - { - Name: "dagger.io/vcs.change.title", - Value: changeTitle, - }, - { - Name: "dagger.io/vcs.change.head_sha", - Value: os.Getenv("CI_COMMIT_SHA"), - }, - { - Name: "dagger.io/vcs.triggerer.login", - Value: os.Getenv("GITLAB_USER_LOGIN"), - }, - { - Name: "dagger.io/vcs.event.type", - Value: os.Getenv("CI_PIPELINE_SOURCE"), - }, - { - Name: 
"dagger.io/vcs.job.name", - Value: os.Getenv("CI_JOB_NAME"), - }, - { - Name: "dagger.io/vcs.workflow.name", - Value: os.Getenv("CI_PIPELINE_NAME"), - }, - { - Name: "dagger.io/vcs.change.label", - Value: os.Getenv("CI_MERGE_REQUEST_LABELS"), - }, - { - Name: "gitlab.com/job.id", - Value: os.Getenv("CI_JOB_ID"), - }, - { - Name: "gitlab.com/triggerer.id", - Value: os.Getenv("GITLAB_USER_ID"), - }, - { - Name: "gitlab.com/triggerer.email", - Value: os.Getenv("GITLAB_USER_EMAIL"), - }, - { - Name: "gitlab.com/triggerer.name", - Value: os.Getenv("GITLAB_USER_NAME"), - }, - } - - projectURL := os.Getenv("CI_MERGE_REQUEST_PROJECT_URL") - mrIID := os.Getenv("CI_MERGE_REQUEST_IID") - if projectURL != "" && mrIID != "" { - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.url", - Value: fmt.Sprintf("%s/-/merge_requests/%s", projectURL, mrIID), - }) - - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.number", - Value: mrIID, - }) - } - - return labels, nil -} - -func LoadGitHubLabels() (Labels, error) { - if os.Getenv("GITHUB_ACTIONS") != "true" { //nolint:goconst - return []Label{}, nil - } - - eventType := os.Getenv("GITHUB_EVENT_NAME") - - labels := []Label{ - { - Name: "dagger.io/vcs.event.type", - Value: eventType, - }, - { - Name: "dagger.io/vcs.job.name", - Value: os.Getenv("GITHUB_JOB"), - }, - { - Name: "dagger.io/vcs.triggerer.login", - Value: os.Getenv("GITHUB_ACTOR"), - }, - { - Name: "dagger.io/vcs.workflow.name", - Value: os.Getenv("GITHUB_WORKFLOW"), - }, - } - - eventPath := os.Getenv("GITHUB_EVENT_PATH") - if eventPath != "" { - payload, err := os.ReadFile(eventPath) - if err != nil { - return nil, fmt.Errorf("read $GITHUB_EVENT_PATH: %w", err) - } - - event, err := github.ParseWebHook(eventType, payload) - if err != nil { - return nil, fmt.Errorf("unmarshal $GITHUB_EVENT_PATH: %w", err) - } - - if event, ok := event.(interface { - GetAction() string - }); ok && event.GetAction() != "" { - labels = append(labels, Label{ - Name: "github.com/event.action", - Value: event.GetAction(), - }) - } - - if repo, ok := getRepoIsh(event); ok { - labels = append(labels, Label{ - Name: "dagger.io/vcs.repo.full_name", - Value: repo.GetFullName(), - }) - - labels = append(labels, Label{ - Name: "dagger.io/vcs.repo.url", - Value: repo.GetHTMLURL(), - }) - } - - if event, ok := event.(interface { - GetPullRequest() *github.PullRequest - }); ok && event.GetPullRequest() != nil { - pr := event.GetPullRequest() - - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.number", - Value: fmt.Sprintf("%d", pr.GetNumber()), - }) - - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.title", - Value: pr.GetTitle(), - }) - - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.url", - Value: pr.GetHTMLURL(), - }) - - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.branch", - Value: pr.GetHead().GetRef(), - }) - - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.head_sha", - Value: pr.GetHead().GetSHA(), - }) - - labels = append(labels, Label{ - Name: "dagger.io/vcs.change.label", - Value: pr.GetHead().GetLabel(), - }) - } - } - - return labels, nil -} - -type repoIsh interface { - GetFullName() string - GetHTMLURL() string -} - -func getRepoIsh(event any) (repoIsh, bool) { - switch x := event.(type) { - case *github.PushEvent: - // push event repositories aren't quite a *github.Repository for silly - // legacy reasons - return x.GetRepo(), true - case interface{ GetRepo() *github.Repository }: - return x.GetRepo(), true - default: - 
return nil, false - } -} - -func (labels *Labels) Type() string { - return "labels" -} - -func (labels *Labels) Set(s string) error { - parts := strings.Split(s, ":") - if len(parts) != 2 { - return fmt.Errorf("bad format: '%s' (expected name:value)", s) - } - - labels.Add(parts[0], parts[1]) - - return nil -} - -func (labels *Labels) Add(name string, value string) { - *labels = append(*labels, Label{Name: name, Value: value}) -} - -func (labels *Labels) String() string { - var ls string - for _, l := range *labels { - ls += fmt.Sprintf("%s:%s,", l.Name, l.Value) - } - return ls -} - -func (labels *Labels) AppendCILabel() *Labels { - isCIValue := "false" - if isCI() { - isCIValue = "true" - } - labels.Add("dagger.io/ci", isCIValue) - - vendor := "" - switch { - case os.Getenv("GITHUB_ACTIONS") == "true": //nolint:goconst - vendor = "GitHub" - case os.Getenv("CIRCLECI") == "true": //nolint:goconst - vendor = "CircleCI" - case os.Getenv("GITLAB_CI") == "true": //nolint:goconst - vendor = "GitLab" - } - if vendor != "" { - labels.Add("dagger.io/ci.vendor", vendor) - } - - return labels -} - -func isCI() bool { - return os.Getenv("CI") != "" || // GitHub Actions, Travis CI, CircleCI, Cirrus CI, GitLab CI, AppVeyor, CodeShip, dsari - os.Getenv("BUILD_NUMBER") != "" || // Jenkins, TeamCity - os.Getenv("RUN_ID") != "" // TaskCluster, dsari -} - -func (labels *Labels) AppendAnonymousGitLabels(workdir string) *Labels { - gitLabels, err := LoadGitLabels(workdir) - if err != nil { - return labels - } - - for _, gitLabel := range gitLabels { - if gitLabel.Name == "dagger.io/git.author.email" { - labels.Add(gitLabel.Name, fmt.Sprintf("%x", sha256.Sum256([]byte(gitLabel.Value)))) - } - if gitLabel.Name == "dagger.io/git.remote" { - labels.Add(gitLabel.Name, base64.StdEncoding.EncodeToString([]byte(gitLabel.Value))) - } - } - - return labels -} diff --git a/core/pipeline/label_test.go b/core/pipeline/label_test.go deleted file mode 100644 index f4a0f8fee67..00000000000 --- a/core/pipeline/label_test.go +++ /dev/null @@ -1,481 +0,0 @@ -package pipeline_test - -import ( - "os" - "os/exec" - "runtime" - "strings" - "testing" - - "github.com/dagger/dagger/core/pipeline" - "github.com/dagger/dagger/engine" - "github.com/stretchr/testify/require" -) - -func TestLoadClientLabels(t *testing.T) { - labels := pipeline.LoadClientLabels(engine.Version) - - expected := []pipeline.Label{ - {"dagger.io/client.os", runtime.GOOS}, - {"dagger.io/client.arch", runtime.GOARCH}, - {"dagger.io/client.version", engine.Version}, - } - - require.ElementsMatch(t, expected, labels) -} - -func TestLoadServerLabels(t *testing.T) { - labels := pipeline.LoadServerLabels("0.8.4", "linux", "amd64", false) - - expected := []pipeline.Label{ - {"dagger.io/server.os", "linux"}, - {"dagger.io/server.arch", "amd64"}, - {"dagger.io/server.version", "0.8.4"}, - {"dagger.io/server.cache.enabled", "false"}, - } - - require.ElementsMatch(t, expected, labels) -} - -func TestLoadGitLabels(t *testing.T) { - normalRepo := setupRepo(t) - repoHead := run(t, "git", "-C", normalRepo, "rev-parse", "HEAD") - - detachedRepo := setupRepo(t) - run(t, "git", "-C", detachedRepo, "commit", "--allow-empty", "-m", "second") - run(t, "git", "-C", detachedRepo, "commit", "--allow-empty", "-m", "third") - run(t, "git", "-C", detachedRepo, "checkout", "HEAD~2") - run(t, "git", "-C", detachedRepo, "merge", "main") - detachedHead := run(t, "git", "-C", detachedRepo, "rev-parse", "HEAD") - - type Example struct { - Name string - Repo string - Labels []pipeline.Label - } - 
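Labels implements Set/String/Type above so it can be bound directly to a CLI flag (Type is the extra method the pflag contract wants). A minimal sketch of the wiring against the standard library flag package, using a simplified stand-in type rather than the real Labels:

package main

import (
	"flag"
	"fmt"
	"strings"
)

// labelFlag mirrors the Set/String contract of Labels so it satisfies flag.Value.
type labelFlag []string

func (l *labelFlag) String() string { return strings.Join(*l, ",") }

func (l *labelFlag) Set(s string) error {
	if !strings.Contains(s, ":") {
		return fmt.Errorf("bad format: %q (expected name:value)", s)
	}
	*l = append(*l, s)
	return nil
}

func main() {
	var labels labelFlag
	fs := flag.NewFlagSet("dagger", flag.ContinueOnError)
	fs.Var(&labels, "label", "name:value label, may be repeated")
	_ = fs.Parse([]string{"--label", "env:ci", "--label", "team:core"})
	fmt.Println(labels) // [env:ci team:core]
}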
- for _, example := range []Example{ - { - Name: "normal branch state", - Repo: normalRepo, - Labels: []pipeline.Label{ - { - Name: "dagger.io/git.remote", - Value: "example.com", - }, - { - Name: "dagger.io/git.branch", - Value: "main", - }, - { - Name: "dagger.io/git.ref", - Value: repoHead, - }, - { - Name: "dagger.io/git.author.name", - Value: "Test User", - }, - { - Name: "dagger.io/git.author.email", - Value: "test@example.com", - }, - { - Name: "dagger.io/git.committer.name", - Value: "Test User", - }, - { - Name: "dagger.io/git.committer.email", - Value: "test@example.com", - }, - { - Name: "dagger.io/git.title", - Value: "init", - }, - }, - }, - { - Name: "detached HEAD state", - Repo: detachedRepo, - Labels: []pipeline.Label{ - { - Name: "dagger.io/git.remote", - Value: "example.com", - }, - { - Name: "dagger.io/git.branch", - Value: "main", - }, - { - Name: "dagger.io/git.ref", - Value: detachedHead, - }, - { - Name: "dagger.io/git.author.name", - Value: "Test User", - }, - { - Name: "dagger.io/git.author.email", - Value: "test@example.com", - }, - { - Name: "dagger.io/git.committer.name", - Value: "Test User", - }, - { - Name: "dagger.io/git.committer.email", - Value: "test@example.com", - }, - { - Name: "dagger.io/git.title", - Value: "third", - }, - }, - }, - } { - example := example - t.Run(example.Name, func(t *testing.T) { - labels, err := pipeline.LoadGitLabels(example.Repo) - require.NoError(t, err) - require.ElementsMatch(t, example.Labels, labels) - }) - } -} - -func TestLoadGitHubLabels(t *testing.T) { - type Example struct { - Name string - Env []string - Labels []pipeline.Label - } - - for _, example := range []Example{ - { - Name: "workflow_dispatch", - Env: []string{ - "GITHUB_ACTIONS=true", - "GITHUB_ACTOR=vito", - "GITHUB_WORKFLOW=some-workflow", - "GITHUB_JOB=some-job", - "GITHUB_EVENT_NAME=workflow_dispatch", - "GITHUB_EVENT_PATH=testdata/workflow_dispatch.json", - }, - Labels: []pipeline.Label{ - { - Name: "dagger.io/vcs.triggerer.login", - Value: "vito", - }, - { - Name: "dagger.io/vcs.event.type", - Value: "workflow_dispatch", - }, - { - Name: "dagger.io/vcs.workflow.name", - Value: "some-workflow", - }, - { - Name: "dagger.io/vcs.job.name", - Value: "some-job", - }, - { - Name: "dagger.io/vcs.repo.full_name", - Value: "dagger/testdata", - }, - { - Name: "dagger.io/vcs.repo.url", - Value: "https://github.com/dagger/testdata", - }, - }, - }, - { - Name: "pull_request.synchronize", - Env: []string{ - "GITHUB_ACTIONS=true", - "GITHUB_ACTOR=vito", - "GITHUB_WORKFLOW=some-workflow", - "GITHUB_JOB=some-job", - "GITHUB_EVENT_NAME=pull_request", - "GITHUB_EVENT_PATH=testdata/pull_request.synchronize.json", - }, - Labels: []pipeline.Label{ - { - Name: "dagger.io/vcs.triggerer.login", - Value: "vito", - }, - { - Name: "dagger.io/vcs.event.type", - Value: "pull_request", - }, - { - Name: "dagger.io/vcs.workflow.name", - Value: "some-workflow", - }, - { - Name: "dagger.io/vcs.job.name", - Value: "some-job", - }, - { - Name: "github.com/event.action", - Value: "synchronize", - }, - { - Name: "dagger.io/vcs.repo.full_name", - Value: "dagger/testdata", - }, - { - Name: "dagger.io/vcs.repo.url", - Value: "https://github.com/dagger/testdata", - }, - { - Name: "dagger.io/vcs.change.number", - Value: "2018", - }, - { - Name: "dagger.io/vcs.change.title", - Value: "dump env, use session binary from submodule", - }, - { - Name: "dagger.io/vcs.change.url", - Value: "https://github.com/dagger/testdata/pull/2018", - }, - { - Name: "dagger.io/vcs.change.head_sha", - Value: 
"81be07d3103b512159628bfa3aae2fbb5d255964", - }, - { - Name: "dagger.io/vcs.change.branch", - Value: "dump-env", - }, - { - Name: "dagger.io/vcs.change.label", - Value: "vito:dump-env", - }, - }, - }, - { - Name: "push", - Env: []string{ - "GITHUB_ACTIONS=true", - "GITHUB_ACTOR=vito", - "GITHUB_WORKFLOW=some-workflow", - "GITHUB_JOB=some-job", - "GITHUB_EVENT_NAME=push", - "GITHUB_EVENT_PATH=testdata/push.json", - }, - Labels: []pipeline.Label{ - { - Name: "dagger.io/vcs.triggerer.login", - Value: "vito", - }, - { - Name: "dagger.io/vcs.event.type", - Value: "push", - }, - { - Name: "dagger.io/vcs.workflow.name", - Value: "some-workflow", - }, - { - Name: "dagger.io/vcs.job.name", - Value: "some-job", - }, - { - Name: "dagger.io/vcs.repo.full_name", - Value: "vito/bass", - }, - { - Name: "dagger.io/vcs.repo.url", - Value: "https://github.com/vito/bass", - }, - }, - }, - } { - example := example - t.Run(example.Name, func(t *testing.T) { - for _, e := range example.Env { - k, v, _ := strings.Cut(e, "=") - os.Setenv(k, v) - } - - labels, err := pipeline.LoadGitHubLabels() - require.NoError(t, err) - require.ElementsMatch(t, example.Labels, labels) - }) - } -} - -func TestLoadGitLabLabels(t *testing.T) { - type Example struct { - Name string - Env map[string]string - Labels []pipeline.Label - } - - for _, example := range []Example{ - { - Name: "GitLab CI merge request job", - Env: map[string]string{ - "GITLAB_CI": "true", - "CI_PROJECT_URL": "https://gitlab.com/dagger/testdata", - "CI_PROJECT_PATH": "dagger/testdata", - "CI_MERGE_REQUEST_SOURCE_BRANCH_NAME": "feature-branch", - "CI_MERGE_REQUEST_TITLE": "Some title", - "CI_MERGE_REQUEST_LABELS": "label1,label2", - "CI_COMMIT_SHA": "123abc", - "CI_PIPELINE_SOURCE": "push", - "CI_PIPELINE_NAME": "pipeline-name", - "CI_JOB_ID": "123", - "CI_JOB_NAME": "test-job", - "GITLAB_USER_ID": "789", - "GITLAB_USER_EMAIL": "user@gitlab.com", - "GITLAB_USER_NAME": "Gitlab User", - "GITLAB_USER_LOGIN": "gitlab-user", - }, - Labels: []pipeline.Label{ - {Name: "dagger.io/vcs.repo.url", Value: "https://gitlab.com/dagger/testdata"}, - {Name: "dagger.io/vcs.repo.full_name", Value: "dagger/testdata"}, - {Name: "dagger.io/vcs.change.branch", Value: "feature-branch"}, - {Name: "dagger.io/vcs.change.title", Value: "Some title"}, - {Name: "dagger.io/vcs.change.head_sha", Value: "123abc"}, - {Name: "dagger.io/vcs.triggerer.login", Value: "gitlab-user"}, - {Name: "dagger.io/vcs.event.type", Value: "push"}, - {Name: "dagger.io/vcs.job.name", Value: "test-job"}, - {Name: "dagger.io/vcs.workflow.name", Value: "pipeline-name"}, - {Name: "dagger.io/vcs.change.label", Value: "label1,label2"}, - {Name: "gitlab.com/job.id", Value: "123"}, - {Name: "gitlab.com/triggerer.id", Value: "789"}, - {Name: "gitlab.com/triggerer.email", Value: "user@gitlab.com"}, - {Name: "gitlab.com/triggerer.name", Value: "Gitlab User"}, - }, - }, - { - Name: "GitLab CI branch job", - Env: map[string]string{ - "GITLAB_CI": "true", - "CI_PROJECT_URL": "https://gitlab.com/dagger/testdata", - "CI_PROJECT_PATH": "dagger/testdata", - "CI_COMMIT_BRANCH": "feature-branch", - "CI_COMMIT_TITLE": "Some title", - "CI_COMMIT_SHA": "123abc", - "CI_PIPELINE_SOURCE": "push", - "CI_PIPELINE_NAME": "pipeline-name", - "CI_JOB_ID": "123", - "CI_JOB_NAME": "test-job", - "GITLAB_USER_ID": "789", - "GITLAB_USER_EMAIL": "user@gitlab.com", - "GITLAB_USER_NAME": "Gitlab User", - "GITLAB_USER_LOGIN": "gitlab-user", - }, - Labels: []pipeline.Label{ - {Name: "dagger.io/vcs.repo.url", Value: "https://gitlab.com/dagger/testdata"}, 
- {Name: "dagger.io/vcs.repo.full_name", Value: "dagger/testdata"}, - {Name: "dagger.io/vcs.change.branch", Value: "feature-branch"}, - {Name: "dagger.io/vcs.change.title", Value: "Some title"}, - {Name: "dagger.io/vcs.change.head_sha", Value: "123abc"}, - {Name: "dagger.io/vcs.triggerer.login", Value: "gitlab-user"}, - {Name: "dagger.io/vcs.event.type", Value: "push"}, - {Name: "dagger.io/vcs.job.name", Value: "test-job"}, - {Name: "dagger.io/vcs.workflow.name", Value: "pipeline-name"}, - {Name: "dagger.io/vcs.change.label", Value: ""}, - {Name: "gitlab.com/job.id", Value: "123"}, - {Name: "gitlab.com/triggerer.id", Value: "789"}, - {Name: "gitlab.com/triggerer.email", Value: "user@gitlab.com"}, - {Name: "gitlab.com/triggerer.name", Value: "Gitlab User"}, - }, - }, - } { - example := example - t.Run(example.Name, func(t *testing.T) { - // Set environment variables - for k, v := range example.Env { - os.Setenv(k, v) - } - - // Run the function and collect the result - labels, err := pipeline.LoadGitLabLabels() - - // Clean up environment variables - for k := range example.Env { - os.Unsetenv(k) - } - - // Make assertions - require.NoError(t, err) - require.ElementsMatch(t, example.Labels, labels) - }) - } -} - -func TestLoadCircleCILabels(t *testing.T) { - type Example struct { - Name string - Env map[string]string - Labels []pipeline.Label - } - - for _, example := range []Example{ - { - Name: "CircleCI", - Env: map[string]string{ - "CIRCLECI": "true", - "CIRCLE_BRANCH": "main", - "CIRCLE_SHA1": "abc123", - "CIRCLE_JOB": "build", - "CIRCLE_PIPELINE_NUMBER": "42", - "CIRCLE_PIPELINE_TRIGGER_LOGIN": "circle-user", - "CIRCLE_REPOSITORY_URL": "git@github.com:user/repo.git", - "CIRCLE_PROJECT_REPONAME": "repo", - "CIRCLE_PULL_REQUEST": "https://github.com/circle/repo/pull/1", - }, - Labels: []pipeline.Label{ - {Name: "dagger.io/vcs.change.branch", Value: "main"}, - {Name: "dagger.io/vcs.change.head_sha", Value: "abc123"}, - {Name: "dagger.io/vcs.job.name", Value: "build"}, - {Name: "dagger.io/vcs.change.number", Value: "42"}, - {Name: "dagger.io/vcs.triggerer.login", Value: "circle-user"}, - {Name: "dagger.io/vcs.repo.url", Value: "https://github.com/user/repo"}, - {Name: "dagger.io/vcs.repo.full_name", Value: "repo"}, - {Name: "dagger.io/vcs.change.url", Value: "https://github.com/circle/repo/pull/1"}, - }, - }, - } { - example := example - t.Run(example.Name, func(t *testing.T) { - // Set environment variables - for k, v := range example.Env { - os.Setenv(k, v) - } - - // Run the function and collect the result - labels, err := pipeline.LoadCircleCILabels() - - // Clean up environment variables - for k := range example.Env { - os.Unsetenv(k) - } - - // Make assertions - require.NoError(t, err) - require.ElementsMatch(t, example.Labels, labels) - }) - } -} - -func run(t *testing.T, exe string, args ...string) string { //nolint: unparam - t.Helper() - cmd := exec.Command(exe, args...) 
- cmd.Stderr = os.Stderr - out, err := cmd.Output() - require.NoError(t, err) - return strings.TrimSpace(string(out)) -} - -func setupRepo(t *testing.T) string { - repo := t.TempDir() - run(t, "git", "-C", repo, "init") - run(t, "git", "-C", repo, "config", "--local", "--add", "user.name", "Test User") - run(t, "git", "-C", repo, "config", "--local", "--add", "user.email", "test@example.com") - run(t, "git", "-C", repo, "remote", "add", "origin", "https://example.com") - run(t, "git", "-C", repo, "checkout", "-b", "main") - run(t, "git", "-C", repo, "commit", "--allow-empty", "-m", "init") - return repo -} diff --git a/core/pipeline/pipeline.go b/core/pipeline/pipeline.go index fe912add9e7..7ee79882928 100644 --- a/core/pipeline/pipeline.go +++ b/core/pipeline/pipeline.go @@ -1,18 +1,14 @@ package pipeline import ( - "context" "encoding/json" "strings" - - "github.com/vito/progrock" ) type Pipeline struct { - Name string `json:"name"` - Description string `json:"description,omitempty"` - Labels []Label `json:"labels,omitempty"` - Weak bool `json:"weak,omitempty"` + Name string `json:"name"` + Description string `json:"description,omitempty"` + Weak bool `json:"weak,omitempty"` } // Pipelineable is any object which can return a pipeline.Path. @@ -58,48 +54,3 @@ func (g Path) String() string { } return strings.Join(parts, " / ") } - -const ProgrockDescriptionLabel = "dagger.io/pipeline.description" - -// RecorderGroup converts the path to a Progrock recorder for the group. -func (g Path) WithGroups(ctx context.Context) context.Context { - if len(g) == 0 { - return ctx - } - - rec := progrock.FromContext(ctx) - - for _, p := range g { - var labels []*progrock.Label - - if p.Description != "" { - labels = append(labels, &progrock.Label{ - Name: ProgrockDescriptionLabel, - Value: p.Description, - }) - } - - for _, l := range p.Labels { - labels = append(labels, &progrock.Label{ - Name: l.Name, - Value: l.Value, - }) - } - - opts := []progrock.GroupOpt{} - - if len(labels) > 0 { - opts = append(opts, progrock.WithLabels(labels...)) - } - - if p.Weak { - opts = append(opts, progrock.Weak()) - } - - // WithGroup stores an internal hierarchy of groups by name, so this will - // always return the same group ID throughout the session. - rec = rec.WithGroup(p.Name, opts...) - } - - return progrock.ToContext(ctx, rec) -} diff --git a/core/query.go b/core/query.go index 4470a68838a..a3516efa5ea 100644 --- a/core/query.go +++ b/core/query.go @@ -16,7 +16,6 @@ import ( "github.com/moby/buildkit/util/leaseutil" "github.com/opencontainers/go-digest" "github.com/vektah/gqlparser/v2/ast" - "github.com/vito/progrock" ) // Query forms the root of the DAG and houses all necessary state and @@ -33,8 +32,6 @@ var ErrNoCurrentModule = fmt.Errorf("no current module") // Settings for Query that are shared across all instances for a given DaggerServer type QueryOpts struct { - ProgrockSocketPath string - Services *Services Secrets *SecretStore @@ -54,7 +51,6 @@ type QueryOpts struct { Cache dagql.Cache BuildkitOpts *buildkit.Opts - Recorder *progrock.Recorder // The metadata of client calls. // For the special case of the main client caller, the key is just empty string. @@ -74,10 +70,6 @@ func NewRoot(ctx context.Context, opts QueryOpts) (*Query, error) { return nil, fmt.Errorf("buildkit client: %w", err) } - // NOTE: context.WithoutCancel is used because if the provided context is canceled, buildkit can - // leave internal progress contexts open and leak goroutines. 
- bk.WriteStatusesTo(context.WithoutCancel(ctx), opts.Recorder) - return &Query{ QueryOpts: opts, Buildkit: bk, @@ -116,8 +108,6 @@ type ClientCallContext struct { // metadata of that ongoing function call Module *Module FnCall *FunctionCall - - ProgrockParent string } func (q *Query) ServeModuleToMainClient(ctx context.Context, modMeta dagql.Instance[*Module]) error { @@ -147,7 +137,6 @@ func (q *Query) RegisterFunctionCall( deps *ModDeps, mod *Module, call *FunctionCall, - progrockParent string, ) error { if dgst == "" { return fmt.Errorf("cannot register function call with empty digest") @@ -164,11 +153,10 @@ func (q *Query) RegisterFunctionCall( return err } q.ClientCallContext[dgst] = &ClientCallContext{ - Root: newRoot, - Deps: deps, - Module: mod, - FnCall: call, - ProgrockParent: progrockParent, + Root: newRoot, + Deps: deps, + Module: mod, + FnCall: call, } return nil } @@ -222,12 +210,11 @@ func (q *Query) CurrentServedDeps(ctx context.Context) (*ModDeps, error) { return callCtx.Deps, nil } -func (q *Query) WithPipeline(name, desc string, labels []pipeline.Label) *Query { +func (q *Query) WithPipeline(name, desc string) *Query { q = q.Clone() q.Pipeline = q.Pipeline.Add(pipeline.Pipeline{ Name: name, Description: desc, - Labels: labels, }) return q } diff --git a/core/schema/container.go b/core/schema/container.go index b3432b80ef6..4a71e552286 100644 --- a/core/schema/container.go +++ b/core/schema/container.go @@ -12,7 +12,6 @@ import ( "time" "github.com/dagger/dagger/core" - "github.com/dagger/dagger/core/pipeline" "github.com/dagger/dagger/dagql" "github.com/moby/buildkit/frontend/dockerfile/shell" specs "github.com/opencontainers/image-spec/specs-go/v1" @@ -560,12 +559,12 @@ func (s *containerSchema) withRootfs(ctx context.Context, parent *core.Container type containerPipelineArgs struct { Name string - Description string `default:""` - Labels []dagql.InputObject[pipeline.Label] `default:"[]"` + Description string `default:""` + Labels []dagql.InputObject[PipelineLabel] `default:"[]"` } func (s *containerSchema) pipeline(ctx context.Context, parent *core.Container, args containerPipelineArgs) (*core.Container, error) { - return parent.WithPipeline(ctx, args.Name, args.Description, collectInputsSlice(args.Labels)) + return parent.WithPipeline(ctx, args.Name, args.Description) } func (s *containerSchema) rootfs(ctx context.Context, parent *core.Container, args struct{}) (*core.Directory, error) { diff --git a/core/schema/deprecations.go b/core/schema/deprecations.go new file mode 100644 index 00000000000..af9670863b1 --- /dev/null +++ b/core/schema/deprecations.go @@ -0,0 +1,15 @@ +package schema + +// PipelineLabel is deprecated and has no effect. +type PipelineLabel struct { + Name string `field:"true" doc:"Label name."` + Value string `field:"true" doc:"Label value."` +} + +func (PipelineLabel) TypeName() string { + return "PipelineLabel" +} + +func (PipelineLabel) TypeDescription() string { + return "Key value object that represents a pipeline label." 
+} diff --git a/core/schema/directory.go b/core/schema/directory.go index 730d14cc95c..f1519d42ca3 100644 --- a/core/schema/directory.go +++ b/core/schema/directory.go @@ -7,7 +7,6 @@ import ( "github.com/dagger/dagger/dagql" "github.com/dagger/dagger/core" - "github.com/dagger/dagger/core/pipeline" ) type directorySchema struct { @@ -99,12 +98,12 @@ func (s *directorySchema) Install() { type directoryPipelineArgs struct { Name string - Description string `default:""` - Labels []dagql.InputObject[pipeline.Label] `default:"[]"` + Description string `default:""` + Labels []dagql.InputObject[PipelineLabel] `default:"[]"` } func (s *directorySchema) pipeline(ctx context.Context, parent *core.Directory, args directoryPipelineArgs) (*core.Directory, error) { - return parent.WithPipeline(ctx, args.Name, args.Description, collectInputsSlice(args.Labels)) + return parent.WithPipeline(ctx, args.Name, args.Description) } type directoryArgs struct { diff --git a/core/schema/modulesource.go b/core/schema/modulesource.go index df94e9c3b39..87f63130fe8 100644 --- a/core/schema/modulesource.go +++ b/core/schema/modulesource.go @@ -13,7 +13,6 @@ import ( "github.com/dagger/dagger/core/modules" "github.com/dagger/dagger/dagql" "github.com/dagger/dagger/engine/buildkit" - "github.com/vito/progrock" "golang.org/x/sync/errgroup" ) @@ -762,10 +761,9 @@ func (s *moduleSchema) moduleSourceResolveFromCaller( excludes = append(excludes, exclude) } - pipelineName := fmt.Sprintf("load local module context %s", contextAbsPath) - ctx, subRecorder := progrock.WithGroup(ctx, pipelineName, progrock.Weak()) _, desc, err := src.Query.Buildkit.LocalImport( - ctx, subRecorder, src.Query.Platform.Spec(), + ctx, + src.Query.Platform.Spec(), contextAbsPath, excludes, includes, @@ -907,7 +905,7 @@ func (s *moduleSchema) collectCallerLocalDeps( // cache of sourceRootAbsPath -> *callerLocalDep collectedDeps dagql.CacheMap[string, *callerLocalDep], ) error { - _, err := collectedDeps.GetOrInitialize(ctx, sourceRootAbsPath, func(ctx context.Context) (*callerLocalDep, error) { + _, _, err := collectedDeps.GetOrInitialize(ctx, sourceRootAbsPath, func(ctx context.Context) (*callerLocalDep, error) { sourceRootRelPath, err := filepath.Rel(contextAbsPath, sourceRootAbsPath) if err != nil { return nil, fmt.Errorf("failed to get source root relative path: %s", err) @@ -1108,10 +1106,8 @@ func (s *moduleSchema) moduleSourceResolveDirectoryFromCaller( } } - pipelineName := fmt.Sprintf("load local directory module arg %s", path) - ctx, subRecorder := progrock.WithGroup(ctx, pipelineName, progrock.Weak()) _, desc, err := src.Query.Buildkit.LocalImport( - ctx, subRecorder, src.Query.Platform.Spec(), + ctx, src.Query.Platform.Spec(), path, excludes, includes, diff --git a/core/schema/query.go b/core/schema/query.go index f5a30df41dc..b9a3ed53c2e 100644 --- a/core/schema/query.go +++ b/core/schema/query.go @@ -6,13 +6,12 @@ import ( "strings" "github.com/blang/semver" - "github.com/dagger/dagger/dagql" - "github.com/dagger/dagger/dagql/introspection" - "github.com/vito/progrock" "github.com/dagger/dagger/core" - "github.com/dagger/dagger/core/pipeline" + "github.com/dagger/dagger/dagql" + "github.com/dagger/dagger/dagql/introspection" "github.com/dagger/dagger/engine" + "github.com/dagger/dagger/telemetry" ) type querySchema struct { @@ -34,7 +33,7 @@ func (s *querySchema) Install() { core.TypeDefKinds.Install(s.srv) core.ModuleSourceKindEnum.Install(s.srv) - dagql.MustInputSpec(pipeline.Label{}).Install(s.srv) + 
dagql.MustInputSpec(PipelineLabel{}).Install(s.srv) dagql.MustInputSpec(core.PortForward{}).Install(s.srv) dagql.MustInputSpec(core.BuildArg{}).Install(s.srv) @@ -60,15 +59,11 @@ func (s *querySchema) Install() { type pipelineArgs struct { Name string Description string `default:""` - Labels dagql.Optional[dagql.ArrayInput[dagql.InputObject[pipeline.Label]]] + Labels dagql.Optional[dagql.ArrayInput[dagql.InputObject[PipelineLabel]]] } func (s *querySchema) pipeline(ctx context.Context, parent *core.Query, args pipelineArgs) (*core.Query, error) { - return parent.WithPipeline( - args.Name, - args.Description, - collectInputs(args.Labels), - ), nil + return parent.WithPipeline(args.Name, args.Description), nil } type checkVersionCompatibilityArgs struct { @@ -76,11 +71,11 @@ type checkVersionCompatibilityArgs struct { } func (s *querySchema) checkVersionCompatibility(ctx context.Context, _ *core.Query, args checkVersionCompatibilityArgs) (dagql.Boolean, error) { - recorder := progrock.FromContext(ctx) + logger := telemetry.GlobalLogger(ctx) // Skip development version if _, err := semver.Parse(engine.Version); err != nil { - recorder.Debug("Using development engine; skipping version compatibility check.") + logger.Debug("Using development engine; skipping version compatibility check.") return true, nil } @@ -99,7 +94,7 @@ func (s *querySchema) checkVersionCompatibility(ctx context.Context, _ *core.Que // If the Engine is a major version above the SDK version, fails // TODO: throw an error and abort the session if engineVersion.Major > sdkVersion.Major { - recorder.Warn(fmt.Sprintf("Dagger engine version (%s) is significantly newer than the SDK's required version (%s). Please update your SDK.", engineVersion, sdkVersion)) + logger.Warn(fmt.Sprintf("Dagger engine version (%s) is significantly newer than the SDK's required version (%s). Please update your SDK.", engineVersion, sdkVersion)) // return false, fmt.Errorf("Dagger engine version (%s) is not compatible with the SDK (%s)", engineVersion, sdkVersion) return false, nil @@ -108,7 +103,7 @@ func (s *querySchema) checkVersionCompatibility(ctx context.Context, _ *core.Que // If the Engine is older than the SDK, fails // TODO: throw an error and abort the session if engineVersion.LT(sdkVersion) { - recorder.Warn(fmt.Sprintf("Dagger engine version (%s) is older than the SDK's required version (%s). Please update your Dagger CLI.", engineVersion, sdkVersion)) + logger.Warn(fmt.Sprintf("Dagger engine version (%s) is older than the SDK's required version (%s). Please update your Dagger CLI.", engineVersion, sdkVersion)) // return false, fmt.Errorf("API version is older than the SDK, please update your Dagger CLI") return false, nil @@ -116,7 +111,7 @@ func (s *querySchema) checkVersionCompatibility(ctx context.Context, _ *core.Que // If the Engine is a minor version newer, warn if engineVersion.Minor > sdkVersion.Minor { - recorder.Warn(fmt.Sprintf("Dagger engine version (%s) is newer than the SDK's required version (%s). Consider updating your SDK.", engineVersion, sdkVersion)) + logger.Warn(fmt.Sprintf("Dagger engine version (%s) is newer than the SDK's required version (%s). 
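The version-compatibility checks above reduce to a pure comparison over the two parsed versions. A sketch using the same blang/semver calls; the returned messages are illustrative, not the exact warnings emitted here:

package main

import (
	"fmt"

	"github.com/blang/semver"
)

// compatibility reproduces the shape of the check above: skip unparseable
// (development) versions, then compare major/minor.
func compatibility(engineVer, sdkVer string) (bool, string) {
	engine, err := semver.Parse(engineVer)
	if err != nil {
		return true, "development engine; skipping check"
	}
	sdk, err := semver.Parse(sdkVer)
	if err != nil {
		return true, "development SDK; skipping check"
	}
	switch {
	case engine.Major > sdk.Major:
		return false, "engine is a major version ahead of the SDK"
	case engine.LT(sdk):
		return false, "engine is older than the SDK"
	case engine.Minor > sdk.Minor:
		return true, "engine is a minor version ahead of the SDK"
	default:
		return true, "compatible"
	}
}

func main() {
	fmt.Println(compatibility("0.10.0", "0.9.3"))
}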
Consider updating your SDK.", engineVersion, sdkVersion)) } return true, nil diff --git a/core/schema/sdk.go b/core/schema/sdk.go index 77de2f54f90..e98714b725d 100644 --- a/core/schema/sdk.go +++ b/core/schema/sdk.go @@ -512,7 +512,6 @@ func (sdk *goSDK) baseWithCodegen( Value: dagql.ArrayInput[dagql.String]{ "--module-context", goSDKUserModContextDirPath, "--module-name", dagql.String(modName), - "--propagate-logs=true", "--introspection-json-path", goSDKIntrospectionJSONPath, }, }, diff --git a/core/schema/util.go b/core/schema/util.go index bca52015cf4..9682ac09395 100644 --- a/core/schema/util.go +++ b/core/schema/util.go @@ -31,17 +31,6 @@ func Syncer[T Evaluatable]() dagql.Field[T] { }) } -func collectInputs[T dagql.Type](inputs dagql.Optional[dagql.ArrayInput[dagql.InputObject[T]]]) []T { - if !inputs.Valid { - return nil - } - ts := make([]T, len(inputs.Value)) - for i, input := range inputs.Value { - ts[i] = input.Value - } - return ts -} - func collectInputsSlice[T dagql.Type](inputs []dagql.InputObject[T]) []T { ts := make([]T, len(inputs)) for i, input := range inputs { diff --git a/core/service.go b/core/service.go index e0fa1fd5072..fe2c9aca29c 100644 --- a/core/service.go +++ b/core/service.go @@ -17,11 +17,10 @@ import ( "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/buildkit" "github.com/dagger/dagger/network" + "github.com/dagger/dagger/telemetry" bkgw "github.com/moby/buildkit/frontend/gateway/client" - "github.com/moby/buildkit/identity" "github.com/moby/buildkit/solver/pb" "github.com/vektah/gqlparser/v2/ast" - "github.com/vito/progrock" ) const ( @@ -206,7 +205,7 @@ func (svc *Service) startContainer( forwardStdin func(io.Writer, bkgw.ContainerProcess), forwardStdout func(io.Reader), forwardStderr func(io.Reader), -) (running *RunningService, err error) { +) (running *RunningService, rerr error) { dig := id.Digest() host, err := svc.Hostname(ctx, id) @@ -214,12 +213,6 @@ func (svc *Service) startContainer( return nil, err } - rec := progrock.FromContext(ctx).WithGroup( - fmt.Sprintf("service %s", host), - progrock.Weak(), - ) - ctx = progrock.ToContext(ctx, rec) - clientMetadata, err := engine.ClientMetadataFromContext(ctx) if err != nil { return nil, err @@ -256,12 +249,13 @@ func (svc *Service) startContainer( } }() - ctx, vtx := progrock.Span(ctx, dig.String()+"."+identity.NewID(), "start "+strings.Join(execOp.Meta.Args, " ")) + ctx, span := Tracer().Start(ctx, "start "+strings.Join(execOp.Meta.Args, " ")) + ctx, stdout, stderr := telemetry.WithStdioToOtel(ctx, InstrumentationLibrary) defer func() { - if err != nil { + if rerr != nil { // NB: this is intentionally conditional; we only complete if there was - // an error starting. vtx.Done is called elsewhere. - vtx.Error(err) + // an error starting. span.End is called when the service exits. + telemetry.End(span, func() error { return rerr }) } }() @@ -319,7 +313,7 @@ func (svc *Service) startContainer( defer func() { if err != nil { - gc.Release(context.Background()) + gc.Release(context.WithoutCancel(ctx)) } }() @@ -335,8 +329,6 @@ func (svc *Service) startContainer( execMeta := buildkit.ContainerExecUncachedMetadata{ ParentClientIDs: clientMetadata.ClientIDs(), ServerID: clientMetadata.ServerID, - ProgSockPath: bk.ProgSockPath, - ProgParent: rec.Parent, } execOp.Meta.ProxyEnv.FtpProxy, err = execMeta.ToPBFtpProxyVal() if err != nil { @@ -345,6 +337,7 @@ func (svc *Service) startContainer( env := append([]string{}, execOp.Meta.Env...) env = append(env, proxyEnvList(execOp.Meta.ProxyEnv)...) 
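startContainer above deliberately keeps the service span open past the function's return: it is only ended inline when startup fails, otherwise the goroutine that waits on the process ends it. A minimal sketch of that lifecycle using plain OpenTelemetry calls; runService and waitService are placeholders, not real APIs:

package main

import (
	"context"
	"fmt"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

// startService ends the span inline only on startup error; otherwise the span
// stays open and is ended when the service exits.
func startService(ctx context.Context, name string) (rerr error) {
	ctx, span := otel.Tracer("dagger.io/core").Start(ctx, "start "+name)
	defer func() {
		if rerr != nil {
			span.SetStatus(codes.Error, rerr.Error())
			span.End()
		}
	}()

	stop, err := runService(ctx, name) // placeholder: launch the process
	if err != nil {
		return err
	}

	go func() {
		waitService(stop) // placeholder: block until the service exits
		span.End()        // benign exit codes are not recorded as errors
	}()
	return nil
}

func runService(ctx context.Context, name string) (chan struct{}, error) {
	stop := make(chan struct{})
	close(stop)
	return stop, nil
}

func waitService(stop chan struct{}) { <-stop }

func main() {
	fmt.Println(startService(context.Background(), "db"))
	time.Sleep(10 * time.Millisecond) // let the exit goroutine end the span
}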
+ env = append(env, telemetry.PropagationEnv(ctx)...) if interactive { env = append(env, ShimEnableTTYEnvVar+"=1") } @@ -359,13 +352,13 @@ func (svc *Service) startContainer( if forwardStdout != nil { stdoutClient, stdoutCtr = io.Pipe() } else { - stdoutCtr = nopCloser{io.MultiWriter(vtx.Stdout(), outBuf)} + stdoutCtr = nopCloser{io.MultiWriter(stdout, outBuf)} } if forwardStderr != nil { stderrClient, stderrCtr = io.Pipe() } else { - stderrCtr = nopCloser{io.MultiWriter(vtx.Stderr(), outBuf)} + stderrCtr = nopCloser{io.MultiWriter(stderr, outBuf)} } svcProc, err := gc.Start(ctx, bkgw.StartRequest{ @@ -422,7 +415,9 @@ func (svc *Service) startContainer( } } - vtx.Done(err) + // terminate the span; we're not interested in setting an error, since + // services return a benign error like `exit status 1` on exit + span.End() }() stopSvc := func(ctx context.Context, force bool) error { @@ -499,7 +494,7 @@ func proxyEnvList(p *pb.ProxyEnv) []string { } func (svc *Service) startTunnel(ctx context.Context, id *call.ID) (running *RunningService, rerr error) { - svcCtx, stop := context.WithCancel(context.Background()) + svcCtx, stop := context.WithCancel(context.WithoutCancel(ctx)) defer func() { if rerr != nil { stop() @@ -512,8 +507,6 @@ func (svc *Service) startTunnel(ctx context.Context, id *call.ID) (running *Runn } svcCtx = engine.ContextWithClientMetadata(svcCtx, clientMetadata) - svcCtx = progrock.ToContext(svcCtx, progrock.FromContext(ctx)) - svcs := svc.Query.Services bk := svc.Query.Buildkit @@ -602,12 +595,6 @@ func (svc *Service) startReverseTunnel(ctx context.Context, id *call.ID) (runnin return nil, err } - rec := progrock.FromContext(ctx) - - svcCtx, stop := context.WithCancel(context.Background()) - svcCtx = engine.ContextWithClientMetadata(svcCtx, clientMetadata) - svcCtx = progrock.ToContext(svcCtx, rec) - fullHost := host + "." + network.ClientDomain(clientMetadata.ClientID) bk := svc.Query.Buildkit @@ -631,6 +618,9 @@ func (svc *Service) startReverseTunnel(ctx context.Context, id *call.ID) (runnin check := newHealth(bk, fullHost, checkPorts) + // NB: decouple from the incoming ctx cancel and add our own + svcCtx, stop := context.WithCancel(context.WithoutCancel(ctx)) + exited := make(chan error, 1) go func() { exited <- tunnel.Tunnel(svcCtx) diff --git a/core/services.go b/core/services.go index f16643f3ed1..3c6b79ac887 100644 --- a/core/services.go +++ b/core/services.go @@ -4,6 +4,7 @@ import ( "context" "fmt" "io" + "log/slog" "sync" "time" @@ -14,7 +15,6 @@ import ( "github.com/moby/buildkit/util/bklog" "github.com/opencontainers/go-digest" "github.com/pkg/errors" - "github.com/vito/progrock" "golang.org/x/sync/errgroup" ) @@ -172,11 +172,7 @@ dance: } } - svcCtx, stop := context.WithCancel(context.Background()) - svcCtx = progrock.ToContext(svcCtx, progrock.FromContext(ctx)) - if clientMetadata, err := engine.ClientMetadataFromContext(ctx); err == nil { - svcCtx = engine.ContextWithClientMetadata(svcCtx, clientMetadata) - } + svcCtx, stop := context.WithCancel(context.WithoutCancel(ctx)) running, err := svc.Start(svcCtx, id, false, nil, nil, nil) if err != nil { @@ -282,8 +278,8 @@ func (ss *Services) Stop(ctx context.Context, id *call.ID, kill bool) error { } } -// StopClientServices stops all of the services being run by the given client. -// It is called when a client is closing. +// StopClientServices stops all of the services being run by the given server. +// It is called when a server is closing. 
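The services above are detached from the request context with context.WithoutCancel so they outlive the caller while keeping its values (trace context, client metadata), and then receive their own cancel func so they can still be stopped explicitly. A small self-contained sketch of that pattern:

package main

import (
	"context"
	"fmt"
	"time"
)

// startDetached mirrors the services changes above: the service context keeps
// the request's values but not its cancellation.
func startDetached(reqCtx context.Context) (stop context.CancelFunc) {
	svcCtx, stop := context.WithCancel(context.WithoutCancel(reqCtx))
	go func() {
		<-svcCtx.Done()
		fmt.Println("service stopped:", context.Cause(svcCtx))
	}()
	return stop
}

func main() {
	reqCtx, cancelReq := context.WithCancel(context.Background())
	stop := startDetached(reqCtx)

	cancelReq() // the request ending does NOT stop the service
	time.Sleep(10 * time.Millisecond)

	stop() // only an explicit stop does
	time.Sleep(10 * time.Millisecond)
}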
func (ss *Services) StopClientServices(ctx context.Context, serverID string) error { ss.l.Lock() var svcs []*RunningService @@ -317,9 +313,12 @@ func (ss *Services) StopClientServices(ctx context.Context, serverID string) err func (ss *Services) Detach(ctx context.Context, svc *RunningService) { ss.l.Lock() + slog := slog.With("service", svc.Host, "bindings", ss.bindings[svc.Key]) + running, found := ss.running[svc.Key] if !found { ss.l.Unlock() + slog.Debug("detach: service not running") // not even running; ignore return } @@ -328,12 +327,15 @@ func (ss *Services) Detach(ctx context.Context, svc *RunningService) { if ss.bindings[svc.Key] > 0 { ss.l.Unlock() + slog.Debug("detach: service still has binders") // detached, but other instances still active return } ss.l.Unlock() + slog.Debug("detach: stopping") + // we should avoid blocking, and return immediately go ss.stopGraceful(ctx, running, TerminateGracePeriod) } diff --git a/core/terminal.go b/core/terminal.go index c714a90fc4f..8d5f76b24c3 100644 --- a/core/terminal.go +++ b/core/terminal.go @@ -56,11 +56,6 @@ func (container *Container) Terminal(svcID *call.ID, args *TerminalArgs) (*Termi endpoint := "terminals/" + termID.Encoded() term := &Terminal{Endpoint: endpoint} return term, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - clientMetadata, err := engine.ClientMetadataFromContext(r.Context()) - if err != nil { - panic(err) - } - var upgrader = websocket.Upgrader{} ws, err := upgrader.Upgrade(w, r, nil) if err != nil { @@ -73,7 +68,7 @@ func (container *Container) Terminal(svcID *call.ID, args *TerminalArgs) (*Termi bklog.G(r.Context()).Debugf("terminal handler for %s has been upgraded", endpoint) defer bklog.G(context.Background()).Debugf("terminal handler for %s finished", endpoint) - if err := container.runTerminal(r.Context(), svcID, ws, clientMetadata, args); err != nil { + if err := container.runTerminal(r.Context(), svcID, ws, args); err != nil { bklog.G(r.Context()).WithError(err).Error("terminal handler failed") err = ws.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, "")) if err != nil { @@ -87,7 +82,6 @@ func (container *Container) runTerminal( ctx context.Context, svcID *call.ID, conn *websocket.Conn, - clientMetadata *engine.ClientMetadata, args *TerminalArgs, ) error { container = container.Clone() diff --git a/core/tracing.go b/core/tracing.go new file mode 100644 index 00000000000..67d19a5f935 --- /dev/null +++ b/core/tracing.go @@ -0,0 +1,12 @@ +package core + +import ( + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/trace" +) + +const InstrumentationLibrary = "dagger.io/core" + +func Tracer() trace.Tracer { + return otel.Tracer(InstrumentationLibrary) +} diff --git a/core/typedef.go b/core/typedef.go index c01d1b67306..bfa8e6ea9d2 100644 --- a/core/typedef.go +++ b/core/typedef.go @@ -67,10 +67,16 @@ func (fn Function) Clone() *Function { func (fn *Function) FieldSpec() (dagql.FieldSpec, error) { spec := dagql.FieldSpec{ - Name: fn.Name, - Description: formatGqlDescription(fn.Description), - Type: fn.ReturnType.ToTyped(), - ImpurityReason: "Module functions are currently always impure.", // TODO + Name: fn.Name, + Description: formatGqlDescription(fn.Description), + Type: fn.ReturnType.ToTyped(), + + // NB: functions actually _are_ cached per-session, which matches the + // lifetime of the server, so we might as well consider them pure. 
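core/tracing.go above exposes one named tracer for the core instrumentation library. A sketch of the shape of a typical call site for that kind of helper, with the usual end-and-record-error pattern; the function here is illustrative, not taken from the patch:

package main

import (
	"context"
	"errors"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
	"go.opentelemetry.io/otel/trace"
)

// Tracer mirrors the core.Tracer helper: one named tracer per instrumentation
// library, looked up lazily from the global provider.
func Tracer() trace.Tracer {
	return otel.Tracer("dagger.io/core")
}

// doWork starts a child span, records the error status, and always ends it.
func doWork(ctx context.Context) (rerr error) {
	ctx, span := Tracer().Start(ctx, "doWork")
	defer func() {
		if rerr != nil {
			span.SetStatus(codes.Error, rerr.Error())
		}
		span.End()
	}()
	_ = ctx // would be passed to nested operations
	return errors.New("example failure")
}

func main() { _ = doWork(context.Background()) }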
That way + // there will be locking around concurrent calls, so the user won't see + // multiple in parallel. Reconsider if/when we have a global cache and/or + // figure out function caching. + ImpurityReason: "", } for _, arg := range fn.Args { input := arg.TypeDef.ToInput() diff --git a/dagql/cachemap.go b/dagql/cachemap.go index 8c41b322e9d..8a39748c924 100644 --- a/dagql/cachemap.go +++ b/dagql/cachemap.go @@ -9,7 +9,7 @@ import ( ) type CacheMap[K comparable, T any] interface { - GetOrInitialize(context.Context, K, func(context.Context) (T, error)) (T, error) + GetOrInitialize(context.Context, K, func(context.Context) (T, error)) (T, bool, error) Get(context.Context, K) (T, error) Keys() []K } @@ -56,14 +56,14 @@ func (m *cacheMap[K, T]) Set(key K, val T) { m.l.Unlock() } -func (m *cacheMap[K, T]) GetOrInitialize(ctx context.Context, key K, fn func(ctx context.Context) (T, error)) (T, error) { +func (m *cacheMap[K, T]) GetOrInitialize(ctx context.Context, key K, fn func(ctx context.Context) (T, error)) (T, bool, error) { return m.GetOrInitializeOnHit(ctx, key, fn, func(T, error) {}) } -func (m *cacheMap[K, T]) GetOrInitializeOnHit(ctx context.Context, key K, fn func(ctx context.Context) (T, error), onHit func(T, error)) (T, error) { +func (m *cacheMap[K, T]) GetOrInitializeOnHit(ctx context.Context, key K, fn func(ctx context.Context) (T, error), onHit func(T, error)) (T, bool, error) { if v := ctx.Value(cacheMapContextKey[K, T]{key: key, m: m}); v != nil { var zero T - return zero, ErrCacheMapRecursiveCall + return zero, false, ErrCacheMapRecursiveCall } m.l.Lock() @@ -73,7 +73,7 @@ func (m *cacheMap[K, T]) GetOrInitializeOnHit(ctx context.Context, key K, fn fun if onHit != nil { onHit(c.val, c.err) } - return c.val, c.err + return c.val, true, c.err } c := &cache[T]{} @@ -91,7 +91,7 @@ func (m *cacheMap[K, T]) GetOrInitializeOnHit(ctx context.Context, key K, fn fun m.l.Unlock() } - return c.val, c.err + return c.val, false, c.err } func (m *cacheMap[K, T]) Get(ctx context.Context, key K) (T, error) { diff --git a/dagql/cachemap_test.go b/dagql/cachemap_test.go index 0428eeb5fab..a81a7415b26 100644 --- a/dagql/cachemap_test.go +++ b/dagql/cachemap_test.go @@ -24,7 +24,7 @@ func TestCacheMapConcurrent(t *testing.T) { wg.Add(1) go func() { defer wg.Done() - val, err := c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { + val, _, err := c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { initialized[i] = true return i, nil }) @@ -47,27 +47,29 @@ func TestCacheMapErrors(t *testing.T) { commonKey := 42 myErr := errors.New("nope") - _, err := c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { + _, _, err := c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { return 0, myErr }) assert.Assert(t, is.ErrorIs(err, myErr)) otherErr := errors.New("nope 2") - _, err = c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { + _, _, err = c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { return 0, otherErr }) assert.Assert(t, is.ErrorIs(err, otherErr)) - res, err := c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { + res, cached, err := c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { return 1, nil }) assert.NilError(t, err) + assert.Assert(t, !cached) assert.Equal(t, 1, res) - res, err = c.GetOrInitialize(ctx, commonKey, func(_ context.Context) (int, error) { + res, cached, err = c.GetOrInitialize(ctx, commonKey, func(_ 
context.Context) (int, error) { return 0, errors.New("ignored") }) assert.NilError(t, err) + assert.Assert(t, cached) assert.Equal(t, 1, res) } @@ -77,29 +79,33 @@ func TestCacheMapRecursiveCall(t *testing.T) { ctx := context.Background() // recursive calls that are guaranteed to result in deadlock should error out - _, err := c.GetOrInitialize(ctx, 1, func(ctx context.Context) (int, error) { - return c.GetOrInitialize(ctx, 1, func(ctx context.Context) (int, error) { + _, _, err := c.GetOrInitialize(ctx, 1, func(ctx context.Context) (int, error) { + res, _, err := c.GetOrInitialize(ctx, 1, func(ctx context.Context) (int, error) { return 2, nil }) + return res, err }) assert.Assert(t, is.ErrorIs(err, ErrCacheMapRecursiveCall)) // verify same cachemap can be called recursively w/ different keys - v, err := c.GetOrInitialize(ctx, 10, func(ctx context.Context) (int, error) { - return c.GetOrInitialize(ctx, 11, func(ctx context.Context) (int, error) { + v, _, err := c.GetOrInitialize(ctx, 10, func(ctx context.Context) (int, error) { + res, _, err := c.GetOrInitialize(ctx, 11, func(ctx context.Context) (int, error) { return 12, nil }) + return res, err }) assert.NilError(t, err) assert.Equal(t, 12, v) // verify other cachemaps can be called w/ same keys c2 := newCacheMap[int, int]() - v, err = c.GetOrInitialize(ctx, 100, func(ctx context.Context) (int, error) { - return c2.GetOrInitialize(ctx, 100, func(ctx context.Context) (int, error) { + v, cached, err := c.GetOrInitialize(ctx, 100, func(ctx context.Context) (int, error) { + res, _, err := c2.GetOrInitialize(ctx, 100, func(ctx context.Context) (int, error) { return 101, nil }) + return res, err }) assert.NilError(t, err) assert.Equal(t, 101, v) + assert.Assert(t, !cached) } diff --git a/dagql/call/callpbv1/call.go b/dagql/call/callpbv1/call.go index a1e946bd73c..4a37d0d38a1 100644 --- a/dagql/call/callpbv1/call.go +++ b/dagql/call/callpbv1/call.go @@ -1,6 +1,29 @@ package callpbv1 -import "github.com/vektah/gqlparser/v2/ast" +import ( + "encoding/base64" + "fmt" + + "github.com/vektah/gqlparser/v2/ast" + "google.golang.org/protobuf/proto" +) + +func (call *Call) Encode() (string, error) { + // Deterministic is strictly needed so the CallsByDigest map is sorted in the serialized proto + proto, err := proto.MarshalOptions{Deterministic: true}.Marshal(call) + if err != nil { + return "", fmt.Errorf("failed to marshal ID proto: %w", err) + } + return base64.StdEncoding.EncodeToString(proto), nil +} + +func (call *Call) Decode(str string) error { + bytes, err := base64.StdEncoding.DecodeString(str) + if err != nil { + return fmt.Errorf("failed to decode base64: %w", err) + } + return proto.Unmarshal(bytes, call) +} func (t *Type) ToAST() *ast.Type { a := &ast.Type{ diff --git a/dagql/call/id.go b/dagql/call/id.go index 7c8053bc838..6c214554184 100644 --- a/dagql/call/id.go +++ b/dagql/call/id.go @@ -56,6 +56,16 @@ func (id *ID) Base() *ID { return id.base } +// The root Call of the ID, with its Digest set. Exposed so that Calls can be +// streamed over the wire one-by-one, rather than emitting full DAGs, which +// would involve a ton of duplication. +// +// WARRANTY VOID IF MUTATIONS ARE MADE TO THE INNER PROTOBUF. Perform a +// proto.Clone before mutating. +func (id *ID) Call() *callpbv1.Call { + return id.pb +} + // The GraphQL type of the value. 
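Call.Encode above marshals with Deterministic set so that the digest computed over the serialized bytes is stable even though CallsByDigest is a map field, whose entries may otherwise be written in an unspecified order. A sketch of why that matters, using structpb (which is also map-backed) as a stand-in message rather than callpbv1:

package main

import (
	"crypto/sha256"
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	msg, err := structpb.NewStruct(map[string]any{
		"b": "second",
		"a": "first",
	})
	if err != nil {
		panic(err)
	}

	opts := proto.MarshalOptions{Deterministic: true}
	first, _ := opts.Marshal(msg)
	second, _ := opts.Marshal(msg)

	// With Deterministic set, repeated marshals agree byte-for-byte, so a
	// digest computed over the bytes is stable across processes.
	fmt.Println(sha256.Sum256(first) == sha256.Sum256(second)) // true
}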
func (id *ID) Type() *Type { return id.typ @@ -254,7 +264,7 @@ func (id *ID) Encode() (string, error) { return "", fmt.Errorf("failed to marshal ID proto: %w", err) } - return base64.URLEncoding.EncodeToString(proto), nil + return base64.StdEncoding.EncodeToString(proto), nil } // NOTE: use with caution, any mutations to the returned proto can corrupt the ID @@ -293,7 +303,7 @@ func (id *ID) FromAnyPB(data *anypb.Any) error { } func (id *ID) Decode(str string) error { - bytes, err := base64.URLEncoding.DecodeString(str) + bytes, err := base64.StdEncoding.DecodeString(str) if err != nil { return fmt.Errorf("failed to decode base64: %w", err) } diff --git a/dagql/demo/main.go b/dagql/demo/main.go deleted file mode 100644 index b431541c03d..00000000000 --- a/dagql/demo/main.go +++ /dev/null @@ -1,96 +0,0 @@ -package main - -import ( - "context" - "fmt" - "net" - "net/http" - "os" - - "github.com/99designs/gqlgen/graphql/handler" - "github.com/99designs/gqlgen/graphql/playground" - "github.com/dagger/dagger/dagql" - "github.com/dagger/dagger/dagql/call" - "github.com/dagger/dagger/dagql/internal/pipes" - "github.com/dagger/dagger/dagql/internal/points" - "github.com/dagger/dagger/dagql/introspection" - "github.com/dagger/dagger/dagql/ioctx" - "github.com/vektah/gqlparser/v2/ast" - "github.com/vito/progrock" -) - -type Query struct { -} - -func (Query) Type() *ast.Type { - return &ast.Type{ - NamedType: "Query", - NonNull: true, - } -} - -func (Query) TypeDefinition() *ast.Definition { - return &ast.Definition{ - Kind: ast.Object, - Name: "Query", - } -} - -func main() { - ctx := context.Background() - tape := progrock.NewTape() - rec := progrock.NewRecorder(tape) - ctx = progrock.ToContext(ctx, rec) - - port := os.Getenv("PORT") - if port == "" { - port = "8080" - } - - srv := dagql.NewServer(Query{}) - srv.Around(TelemetryFunc(rec)) - points.Install[Query](srv) - pipes.Install[Query](srv) - introspection.Install[Query](srv) - - http.Handle("/", playground.Handler("GraphQL playground", "/query")) - http.Handle("/query", handler.NewDefaultServer(srv)) - - l, err := net.Listen("tcp", ":"+port) - if err != nil { - panic(err) - } - defer l.Close() - - if err := progrock.DefaultUI().Run(ctx, tape, func(ctx context.Context, ui progrock.UIClient) (err error) { - vtx := rec.Vertex("dagql", "server") - fmt.Fprintf(vtx.Stdout(), "connect to http://localhost:%s for GraphQL playground", port) - defer vtx.Done(err) - go func() { - <-ctx.Done() - l.Close() - }() - return http.Serve(l, nil) //nolint: gosec - }); err != nil { - panic(err) - } -} - -func TelemetryFunc(rec *progrock.Recorder) dagql.AroundFunc { - return func( - ctx context.Context, - obj dagql.Object, - id *call.ID, - next func(context.Context) (dagql.Typed, error), - ) func(context.Context) (dagql.Typed, error) { - dig := id.Digest() - return func(context.Context) (dagql.Typed, error) { - vtx := rec.Vertex(dig, id.Display()) - ctx = ioctx.WithStdout(ctx, vtx.Stdout()) - ctx = ioctx.WithStderr(ctx, vtx.Stderr()) - res, err := next(ctx) - vtx.Done(err) - return res, err - } - } -} diff --git a/dagql/idtui/db.go b/dagql/idtui/db.go index aa7e0703228..756de7a6b97 100644 --- a/dagql/idtui/db.go +++ b/dagql/idtui/db.go @@ -1,209 +1,319 @@ package idtui import ( + "context" "fmt" + "log/slog" "sort" - "strings" "time" - "github.com/dagger/dagger/dagql/call" - "github.com/dagger/dagger/tracing" - "github.com/vito/progrock" -) + "go.opentelemetry.io/otel/attribute" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" -func 
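Encode/Decode above switch from the URL-safe to the standard base64 alphabet to match the new Call.Encode. The two alphabets differ in only two characters, but a mismatched decoder fails exactly when those characters appear:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	raw := []byte{0xfb, 0xff, 0xfe}

	std := base64.StdEncoding.EncodeToString(raw)
	url := base64.URLEncoding.EncodeToString(raw)
	fmt.Println(std, url) // +//+ -__-

	_, err := base64.URLEncoding.DecodeString(std)
	fmt.Println(err != nil) // true: decoding with the wrong alphabet fails
}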
init() { -} + "github.com/dagger/dagger/dagql/call/callpbv1" + "github.com/dagger/dagger/telemetry" + "github.com/dagger/dagger/telemetry/sdklog" +) type DB struct { - Epoch, End time.Time + Traces map[trace.TraceID]*Trace + Spans map[trace.SpanID]*Span + Children map[trace.SpanID]map[trace.SpanID]struct{} - IDs map[string]*call.ID - Vertices map[string]*progrock.Vertex - Tasks map[string][]*progrock.VertexTask + Logs map[trace.SpanID]*Vterm + LogWidth int + PrimarySpan trace.SpanID + PrimaryLogs map[trace.SpanID][]*sdklog.LogData + + Calls map[string]*callpbv1.Call Outputs map[string]map[string]struct{} OutputOf map[string]map[string]struct{} - Children map[string]map[string]struct{} - Intervals map[string]map[time.Time]*progrock.Vertex + Intervals map[string]map[time.Time]*Span } func NewDB() *DB { return &DB{ - Epoch: time.Now(), // replaced at runtime - End: time.Time{}, // replaced at runtime + Traces: make(map[trace.TraceID]*Trace), + Spans: make(map[trace.SpanID]*Span), + Children: make(map[trace.SpanID]map[trace.SpanID]struct{}), + + Logs: make(map[trace.SpanID]*Vterm), + LogWidth: -1, + PrimaryLogs: make(map[trace.SpanID][]*sdklog.LogData), - IDs: make(map[string]*call.ID), - Vertices: make(map[string]*progrock.Vertex), - Tasks: make(map[string][]*progrock.VertexTask), + Calls: make(map[string]*callpbv1.Call), OutputOf: make(map[string]map[string]struct{}), Outputs: make(map[string]map[string]struct{}), - Children: make(map[string]map[string]struct{}), - Intervals: make(map[string]map[time.Time]*progrock.Vertex), + Intervals: make(map[string]map[time.Time]*Span), } } -var _ progrock.Writer = (*DB)(nil) +func (db *DB) AllTraces() []*Trace { + traces := make([]*Trace, 0, len(db.Traces)) + for _, traceData := range db.Traces { + traces = append(traces, traceData) + } + sort.Slice(traces, func(i, j int) bool { + return traces[i].Epoch.After(traces[j].Epoch) + }) + return traces +} -func (db *DB) WriteStatus(status *progrock.StatusUpdate) error { - // collect IDs - for _, meta := range status.Metas { - if meta.Name == "id" { - var id call.ID - if err := id.FromAnyPB(meta.Data); err != nil { - return fmt.Errorf("unmarshal payload: %w", err) - } - db.IDs[meta.Vertex] = &id - } +func (db *DB) SetWidth(width int) { + db.LogWidth = width + for _, vt := range db.Logs { + vt.SetWidth(width) } +} - for _, v := range status.Vertexes { - // track the earliest start time and latest end time - if v.Started != nil && v.Started.AsTime().Before(db.Epoch) { - db.Epoch = v.Started.AsTime() - } - if v.Completed != nil && v.Completed.AsTime().After(db.End) { - db.End = v.Completed.AsTime() - } +var _ sdktrace.SpanExporter = (*DB)(nil) - // keep track of vertices, just so we track everything, not just IDs - db.Vertices[v.Id] = v +func (db *DB) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error { + for _, span := range spans { + traceID := span.SpanContext().TraceID() - // keep track of outputs - for _, out := range v.Outputs { - if strings.HasPrefix(v.Name, "load") && strings.HasSuffix(v.Name, "FromID") { - // don't consider loadFooFromID to be a 'creator' - continue + traceData, found := db.Traces[traceID] + if !found { + traceData = &Trace{ + ID: traceID, + Epoch: span.StartTime(), + End: span.EndTime(), + db: db, } - if db.Outputs[v.Id] == nil { - db.Outputs[v.Id] = make(map[string]struct{}) - } - db.Outputs[v.Id][out] = struct{}{} - if db.OutputOf[out] == nil { - db.OutputOf[out] = make(map[string]struct{}) - } - db.OutputOf[out][v.Id] = struct{}{} + db.Traces[traceID] = traceData } 
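DB above satisfies sdktrace.SpanExporter, so it can be installed like any other exporter. A sketch of the wiring with a stand-in in-memory exporter in place of DB:

package main

import (
	"context"

	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// memExporter stands in for the DB type; anything implementing
// sdktrace.SpanExporter can be plugged into a TracerProvider this way.
type memExporter struct{ spans []sdktrace.ReadOnlySpan }

func (e *memExporter) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error {
	e.spans = append(e.spans, spans...)
	return nil
}

func (e *memExporter) Shutdown(ctx context.Context) error { return nil }

func main() {
	exp := &memExporter{}
	// WithSyncer delivers spans to the exporter as they end, which suits an
	// interactive frontend; WithBatcher is the usual choice for a remote
	// OTLP backend.
	tp := sdktrace.NewTracerProvider(sdktrace.WithSyncer(exp))
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	_, span := otel.Tracer("example").Start(context.Background(), "hello")
	span.End()
}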
- // keep track of intervals seen for a digest - if v.Started != nil { - if db.Intervals[v.Id] == nil { - db.Intervals[v.Id] = make(map[time.Time]*progrock.Vertex) - } - db.Intervals[v.Id][v.Started.AsTime()] = v + if span.StartTime().Before(traceData.Epoch) { + slog.Debug("new epoch", "old", traceData.Epoch, "new", span.StartTime()) + traceData.Epoch = span.StartTime() } - } - // track vertex sub-tasks - for _, t := range status.Tasks { - db.recordTask(t) - } - - // track parent/child vertices - for _, v := range status.Children { - if db.Children[v.Vertex] == nil { - db.Children[v.Vertex] = make(map[string]struct{}) + if span.EndTime().Before(span.StartTime()) { + traceData.IsRunning = true } - for _, out := range v.Vertexes { - db.Children[v.Vertex][out] = struct{}{} + + if span.EndTime().After(traceData.End) { + slog.Debug("new end", "old", traceData.End, "new", span.EndTime()) + traceData.End = span.EndTime() } - } + db.maybeRecordSpan(traceData, span) + } return nil } -func (db *DB) recordTask(t *progrock.VertexTask) { - tasks := db.Tasks[t.Vertex] - var updated bool - for i, task := range tasks { - if task.Name == t.Name { - tasks[i] = t - updated = true +var _ sdklog.LogExporter = (*DB)(nil) + +func (db *DB) ExportLogs(ctx context.Context, logs []*sdklog.LogData) error { + for _, log := range logs { + slog.Debug("exporting log", "span", log.SpanID, "body", log.Body().AsString()) + + // render vterm for TUI + _, _ = fmt.Fprint(db.spanLogs(log.SpanID), log.Body().AsString()) + + if log.SpanID == db.PrimarySpan { + // buffer raw logs so we can replay them later + db.PrimaryLogs[log.SpanID] = append(db.PrimaryLogs[log.SpanID], log) } } - if !updated { - tasks = append(tasks, t) - db.Tasks[t.Vertex] = tasks + return nil +} + +func (db *DB) Shutdown(ctx context.Context) error { + return nil // noop +} + +func (db *DB) spanLogs(id trace.SpanID) *Vterm { + term, found := db.Logs[id] + if !found { + term = NewVterm() + if db.LogWidth > -1 { + term.SetWidth(db.LogWidth) + } + db.Logs[id] = term } + return term } -// Step returns a Step for the given digest if and only if the step should be -// displayed. -// -// Currently this means: -// -// - We don't show `id` selections, since that would be way too much noise. -// - We don't show internal non-ID vertices, since they're not interesting. -// - We DO show internal ID vertices, since they're currently marked internal -// just to hide them from the old TUI. -func (db *DB) Step(dig string) (*Step, bool) { - step := &Step{ - Digest: dig, - db: db, +// SetPrimarySpan allows the primary span to be explicitly set to a particular +// span. normally we assume the root span is the primary span, but in a nested +// scenario we never actually see the root span, so the CLI explicitly sets it +// to the span it created. +func (db *DB) SetPrimarySpan(span trace.SpanID) { + db.PrimarySpan = span +} + +func (db *DB) maybeRecordSpan(traceData *Trace, span sdktrace.ReadOnlySpan) { + spanID := span.SpanContext().SpanID() + + spanData := &Span{ + ReadOnlySpan: span, + + // All root spans are Primary, unless we're explicitly told a different + // span to treat as the "primary" as with Dagger-in-Dagger. 
+ Primary: !span.Parent().SpanID().IsValid() || + spanID == db.PrimarySpan, + + db: db, + trace: traceData, + } + + slog.Debug("recording span", "span", span.Name(), "id", spanID) + + db.Spans[spanID] = spanData + + // track parent/child relationships + if parent := span.Parent(); parent.IsValid() { + if db.Children[parent.SpanID()] == nil { + db.Children[parent.SpanID()] = make(map[trace.SpanID]struct{}) + } + slog.Debug("recording span child", "span", span.Name(), "parent", parent.SpanID(), "child", spanID) + db.Children[parent.SpanID()][spanID] = struct{}{} + } else if !db.PrimarySpan.IsValid() { + // default primary to "root" span, but we might never see it in a nested + // scenario. + db.PrimarySpan = spanID } - ivals := db.Intervals[dig] - if len(ivals) == 0 { - // no vertices seen; give up - return nil, false + + attrs := span.Attributes() + + var digest string + if digestAttr, ok := getAttr(attrs, telemetry.DagDigestAttr); ok { + digest = digestAttr.AsString() + spanData.Digest = digest + + // keep track of intervals seen for a digest + if db.Intervals[digest] == nil { + db.Intervals[digest] = make(map[time.Time]*Span) + } + + db.Intervals[digest][span.StartTime()] = spanData } - outID := db.IDs[dig] - switch { - case outID != nil && outID.Field() == "id": - // ignore 'id' field selections, they're everywhere and not interesting - return nil, false - case !step.HasStarted(): - // ignore anything in pending state; not interesting, easier to assume - // things have always started - return nil, false - case outID == nil: - // no ID; check if we're a regular vertex, or if we're supposed to have an - // ID (arrives later via VertexMeta event) - for _, vtx := range ivals { - if vtx.Label(tracing.IDLabel) == "true" { - // no ID yet, but it's an ID vertex; ignore it until we get the ID so - // we never have to deal with the intermediate state - return nil, false + + for _, attr := range attrs { + switch attr.Key { + case telemetry.DagCallAttr: + var call callpbv1.Call + if err := call.Decode(attr.Value.AsString()); err != nil { + slog.Warn("failed to decode id", "err", err) + continue + } + + spanData.Call = &call + + // Seeing loadFooFromID is only really interesting if it actually + // resulted in evaluating the ID, so we set Passthrough, which will only + // show its children. + if call.Field == fmt.Sprintf("load%sFromID", call.Type.ToAST().Name()) { + spanData.Passthrough = true } + + // We also don't care about seeing the id field selection itself, since + // it's more noisy and confusing than helpful. We'll still show all the + // spans leadning up to it, just not the ID selection. 
+ if call.Field == "id" { + spanData.Ignore = true + } + + if digest != "" { + db.Calls[digest] = &call + } + + case telemetry.LLBOpAttr: + // TODO + + case telemetry.CachedAttr: + spanData.Cached = attr.Value.AsBool() + + case telemetry.CanceledAttr: + spanData.Canceled = attr.Value.AsBool() + + case telemetry.UIEncapsulateAttr: + spanData.Encapsulate = attr.Value.AsBool() + + case telemetry.UIInternalAttr: + spanData.Internal = attr.Value.AsBool() + + case telemetry.UIMaskAttr: + spanData.Mask = attr.Value.AsBool() + + case telemetry.UIPassthroughAttr: + spanData.Passthrough = attr.Value.AsBool() + + case telemetry.DagInputsAttr: + spanData.Inputs = attr.Value.AsStringSlice() + + case telemetry.DagOutputAttr: + output := attr.Value.AsString() + if digest == "" { + slog.Warn("output attribute is set, but a digest is not?") + } else { + slog.Debug("recording output", "digest", digest, "output", output) + + // parent -> child + if db.Outputs[digest] == nil { + db.Outputs[digest] = make(map[string]struct{}) + } + db.Outputs[digest][output] = struct{}{} + + // child -> parent + if db.OutputOf[output] == nil { + db.OutputOf[output] = make(map[string]struct{}) + } + db.OutputOf[output][digest] = struct{}{} + } + + case "rpc.service": + spanData.Passthrough = true } } - if outID != nil && outID.Base() != nil { - parentDig := outID.Base().Digest() - step.BaseDigest = db.Simplify(parentDig.String()) +} + +func (db *DB) PrimarySpanForTrace(traceID trace.TraceID) *Span { + for _, span := range db.Spans { + spanCtx := span.SpanContext() + if span.Primary && spanCtx.TraceID() == traceID { + return span + } } - return step, true + return nil } -func (db *DB) HighLevelStep(id *call.ID) (*Step, bool) { - parentDig := id.Digest() - return db.Step(db.Simplify(parentDig.String())) +func (db *DB) HighLevelSpan(call *callpbv1.Call) *Span { + return db.MostInterestingSpan(db.Simplify(call).Digest) } -func (db *DB) MostInterestingVertex(dig string) *progrock.Vertex { - var earliest *progrock.Vertex - vs := make([]*progrock.Vertex, 0, len(db.Intervals[dig])) - for _, vtx := range db.Intervals[dig] { - vs = append(vs, vtx) +func (db *DB) MostInterestingSpan(dig string) *Span { + var earliest *Span + var earliestCached bool + vs := make([]sdktrace.ReadOnlySpan, 0, len(db.Intervals[dig])) + for _, span := range db.Intervals[dig] { + vs = append(vs, span) } sort.Slice(vs, func(i, j int) bool { - return vs[i].Started.AsTime().Before(vs[j].Started.AsTime()) + return vs[i].StartTime().Before(vs[j].StartTime()) }) - for _, vtx := range db.Intervals[dig] { + for _, span := range db.Intervals[dig] { // a running vertex is always most interesting, and these are already in // order - if vtx.Completed == nil { - return vtx + if span.IsRunning() { + return span } switch { case earliest == nil: // always show _something_ - earliest = vtx - case vtx.Cached: + earliest = span + earliestCached = span.Cached + case span.Cached: // don't allow a cached vertex to override a non-cached one - case earliest.Cached: + case earliestCached: // unclear how this would happen, but non-cached versions are always more // interesting - earliest = vtx - case vtx.Started.AsTime().Before(earliest.Started.AsTime()): + earliest = span + case span.StartTime().Before(earliest.StartTime()): // prefer the earliest active interval - earliest = vtx + earliest = span } } return earliest @@ -229,52 +339,69 @@ func (*DB) Close() error { return nil } -func litSize(lit call.Literal) int { - switch x := lit.(type) { - case *call.LiteralID: - return 
idSize(x.Value()) - case *call.LiteralList: +func (db *DB) MustCall(dig string) *callpbv1.Call { + call, ok := db.Calls[dig] + if !ok { + // Sometimes may see a call's digest before the call itself. + // + // The loadFooFromID APIs for example will emit their call via their span + // before loading the ID, and its ID argument will just be a digest like + // anything else. + return &callpbv1.Call{ + Field: "unknown", + Type: &callpbv1.Type{ + NamedType: "Void", + }, + Digest: dig, + } + } + return call +} + +func (db *DB) litSize(lit *callpbv1.Literal) int { + switch x := lit.GetValue().(type) { + case *callpbv1.Literal_CallDigest: + return db.idSize(db.MustCall(x.CallDigest)) + case *callpbv1.Literal_List: size := 0 - x.Range(func(_ int, lit call.Literal) error { - size += litSize(lit) - return nil - }) + for _, lit := range x.List.GetValues() { + size += db.litSize(lit) + } return size - case *call.LiteralObject: + case *callpbv1.Literal_Object: size := 0 - x.Range(func(_ int, _ string, value call.Literal) error { - size += litSize(value) - return nil - }) + for _, lit := range x.Object.GetValues() { + size += db.litSize(lit.GetValue()) + } return size } return 1 } -func idSize(id *call.ID) int { +func (db *DB) idSize(id *callpbv1.Call) int { size := 0 - for id := id; id != nil; id = id.Base() { + for id := id; id != nil; id = db.Calls[id.ReceiverDigest] { size++ - size += len(id.Args()) - for _, arg := range id.Args() { - size += litSize(arg.Value()) + size += len(id.Args) + for _, arg := range id.Args { + size += db.litSize(arg.GetValue()) } } return size } -func (db *DB) Simplify(dig string) string { - creators, ok := db.OutputOf[dig] +func (db *DB) Simplify(call *callpbv1.Call) (smallest *callpbv1.Call) { + smallest = call + creators, ok := db.OutputOf[call.Digest] if !ok { - return dig + return } - var smallest = db.IDs[dig] - var smallestSize = idSize(smallest) + var smallestSize = db.idSize(smallest) var simplified bool for creatorDig := range creators { - creator, ok := db.IDs[creatorDig] + creator, ok := db.Calls[creatorDig] if ok { - if size := idSize(creator); smallest == nil || size < smallestSize { + if size := db.idSize(creator); size < smallestSize { smallest = creator smallestSize = size simplified = true @@ -282,8 +409,16 @@ func (db *DB) Simplify(dig string) string { } } if simplified { - smallestDig := smallest.Digest() - return db.Simplify(smallestDig.String()) + return db.Simplify(smallest) + } + return +} + +func getAttr(attrs []attribute.KeyValue, key attribute.Key) (attribute.Value, bool) { + for _, attr := range attrs { + if attr.Key == key { + return attr.Value, true + } } - return dig + return attribute.Value{}, false } diff --git a/dagql/idtui/frontend.go b/dagql/idtui/frontend.go index dbb72edaa57..94e76174972 100644 --- a/dagql/idtui/frontend.go +++ b/dagql/idtui/frontend.go @@ -1,29 +1,29 @@ package idtui import ( - "bytes" "context" "fmt" "io" + "log/slog" "os" "strings" "sync" + "time" tea "github.com/charmbracelet/bubbletea" - "github.com/dagger/dagger/dagql/call" - "github.com/dagger/dagger/telemetry" "github.com/muesli/termenv" "github.com/opencontainers/go-digest" - "github.com/vito/midterm" - "github.com/vito/progrock" - "github.com/vito/progrock/console" "github.com/vito/progrock/ui" + "go.opentelemetry.io/otel/codes" + "go.opentelemetry.io/otel/log" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" "golang.org/x/term" -) -const ( - InitVertex = "init" - PrimaryVertex = "primary" + 
"github.com/dagger/dagger/dagql/call" + "github.com/dagger/dagger/dagql/call/callpbv1" + "github.com/dagger/dagger/telemetry" + "github.com/dagger/dagger/telemetry/sdklog" ) var consoleSink = os.Stderr @@ -39,74 +39,68 @@ type Frontend struct { // Silent tells the frontend to not display progress at all. Silent bool + // Verbosity is the level of detail to show in the TUI. + Verbosity int + // updated by Run - program *tea.Program - in *swappableWriter - out *termenv.Output - run func(context.Context) error - runCtx context.Context - interrupt func() - done bool - err error - - // updated via progrock.Writer interface - db *DB - eof bool - logsView *LogsView - - // progrock logging - messages *Vterm - messagesW *termenv.Output - - // primaryVtx is the primary vertex whose output is printed directly to - // stdout/stderr on exit after cleaning up the TUI. - primaryVtx *progrock.Vertex - primaryLogs []*progrock.VertexLog - - // plainConsole is the writer to forward events to when in plain mode. - plainConsole progrock.Writer + program *tea.Program + out *termenv.Output + run func(context.Context) error + runCtx context.Context + interrupt func() + interrupted bool + done bool + err error + + // updated as events are written + db *DB + eof bool + backgrounded bool + logsView *LogsView + + // global logs + messagesView *Vterm + messagesBuf *strings.Builder + messagesW *termenv.Output // TUI state/config - restore func() // restore terminal - fps float64 // frames per second - profile termenv.Profile - window tea.WindowSizeMsg // set by BubbleTea - view *strings.Builder // rendered async - logs map[string]*Vterm // vertex logs - zoomed map[string]*zoomState // interactive zoomed terminals - currentZoom *zoomState // current zoomed terminal - scrollbackQueue []tea.Cmd // queue of tea.Printlns for scrollback - scrollbackQueueMu sync.Mutex // need a separate lock for this - - // held to synchronize tea.Model and progrock.Writer - mu sync.Mutex -} + restore func() // restore terminal + fps float64 // frames per second + profile termenv.Profile + window tea.WindowSizeMsg // set by BubbleTea + view *strings.Builder // rendered async -type zoomState struct { - Input io.Writer - Output *midterm.Terminal + // held to synchronize tea.Model with updates + mu sync.Mutex } func New() *Frontend { - logs := NewVterm() profile := ui.ColorProfile() + logsView := NewVterm() + logsOut := new(strings.Builder) return &Frontend{ db: NewDB(), - fps: 30, // sane default, fine-tune if needed - profile: profile, - window: tea.WindowSizeMsg{Width: -1, Height: -1}, // be clear that it's not set - view: new(strings.Builder), - logs: make(map[string]*Vterm), - zoomed: make(map[string]*zoomState), - messages: logs, - messagesW: ui.NewOutput(logs, termenv.WithProfile(profile)), + fps: 30, // sane default, fine-tune if needed + profile: profile, + window: tea.WindowSizeMsg{Width: -1, Height: -1}, // be clear that it's not set + view: new(strings.Builder), + messagesView: logsView, + messagesBuf: logsOut, + messagesW: ui.NewOutput(io.MultiWriter(logsView, logsOut), termenv.WithProfile(profile)), } } // Run starts the TUI, calls the run function, stops the TUI, and finally // prints the primary output to the appropriate stdout/stderr streams. func (fe *Frontend) Run(ctx context.Context, run func(context.Context) error) error { + // redirect slog to the logs pane + level := slog.LevelWarn + if fe.Debug { + level = slog.LevelDebug + } + slog.SetDefault(telemetry.PrettyLogger(fe.messagesW, level)) + // find a TTY anywhere in stdio. 
stdout might be redirected, in which case we // can show the TUI on stderr. tty, isTTY := findTTY() @@ -117,8 +111,13 @@ func (fe *Frontend) Run(ctx context.Context, run func(context.Context) error) er var runErr error if fe.Plain || fe.Silent { - // no TTY found; default to console - runErr = fe.runWithoutTUI(ctx, tty, run) + // no TTY found; set a reasonable screen size for logs, and just run the + // function + fe.SetWindowSize(tea.WindowSizeMsg{ + Width: 300, // influences vterm width + Height: 100, // theoretically noop, since we always render full logs + }) + runErr = run(ctx) } else { // run the TUI until it exits and cleans up the TTY runErr = fe.runWithTUI(ctx, tty, run) @@ -148,17 +147,19 @@ func (fe *Frontend) ConnectedToCloud(cloudURL string) { } } +// SetPrimary tells the frontend which span should be treated like the focal +// point of the command. Its output will be displayed at the end, and its +// children will be promoted to the "top-level" of the TUI. +func (fe *Frontend) SetPrimary(spanID trace.SpanID) { + fe.mu.Lock() + fe.db.PrimarySpan = spanID + fe.mu.Unlock() +} + func (fe *Frontend) runWithTUI(ctx context.Context, tty *os.File, run func(context.Context) error) error { // NOTE: establish color cache before we start consuming stdin fe.out = ui.NewOutput(tty, termenv.WithProfile(fe.profile), termenv.WithColorCache(true)) - // in order to allow the TUI to receive user input but _also_ allow an - // interactive terminal to receive keyboard input, we pipe the user input - // to an io.Writer that can have its destination swapped between the TUI - // and the remote terminal. - inR, inW := io.Pipe() - fe.in = &swappableWriter{original: inW} - // Bubbletea will just receive an `io.Reader` for its input rather than the // raw TTY *os.File, so we need to set up the TTY ourselves. ttyFd := int(tty.Fd()) @@ -169,12 +170,6 @@ func (fe *Frontend) runWithTUI(ctx context.Context, tty *os.File, run func(conte fe.restore = func() { _ = term.Restore(ttyFd, oldState) } defer fe.restore() - // start piping from the TTY to our swappable writer. - go io.Copy(fe.in, tty) //nolint: errcheck - - // support scrollable viewport - // fe.out.EnableMouseCellMotion() - // wire up the run so we can call it asynchronously with the TUI running fe.run = run // set up ctx cancellation so the TUI can interrupt via keypresses @@ -182,7 +177,7 @@ func (fe *Frontend) runWithTUI(ctx context.Context, tty *os.File, run func(conte // keep program state so we can send messages to it fe.program = tea.NewProgram(fe, - tea.WithInput(inR), + tea.WithInput(tty), tea.WithOutput(fe.out), // We set up the TTY ourselves, so Bubbletea's panic handler becomes // counter-productive. @@ -204,82 +199,70 @@ func (fe *Frontend) runWithTUI(ctx context.Context, tty *os.File, run func(conte return fe.err } -func (fe *Frontend) runWithoutTUI(ctx context.Context, tty *os.File, run func(context.Context) error) error { - if !fe.Silent { - opts := []console.WriterOpt{ - console.ShowInternal(fe.Debug), - } - if fe.Debug { - opts = append(opts, console.WithMessageLevel(progrock.MessageLevel_DEBUG)) - } - fe.plainConsole = telemetry.NewLegacyIDInternalizer( - console.NewWriter(consoleSink, opts...), - ) - } - return run(ctx) -} - // finalRender is called after the program has finished running and prints the // final output after the TUI has exited. 
func (fe *Frontend) finalRender() error { fe.mu.Lock() defer fe.mu.Unlock() + + fe.recalculateView() + out := termenv.NewOutput(os.Stderr) - if fe.Debug || fe.err != nil { - if renderedAny, err := fe.renderProgress(out); err != nil { - return err - } else if renderedAny { - fmt.Fprintln(out) - } + if fe.messagesBuf.Len() > 0 { + fmt.Fprintln(out, fe.messagesBuf.String()) } - if zoom := fe.currentZoom; zoom != nil { - if renderedAny, err := fe.renderZoomed(out, zoom); err != nil { + if fe.Plain || fe.Debug || fe.Verbosity > 0 || fe.err != nil { + if renderedAny, err := fe.renderProgress(out); err != nil { return err } else if renderedAny { fmt.Fprintln(out) } } - if renderedAny, err := fe.renderMessages(out, true); err != nil { - return err - } else if renderedAny { - fmt.Fprintln(out) - } - return fe.renderPrimaryOutput() } func (fe *Frontend) renderMessages(out *termenv.Output, full bool) (bool, error) { - if fe.messages.UsedHeight() == 0 { + if fe.messagesView.UsedHeight() == 0 { return false, nil } if full { - fe.messages.SetHeight(fe.messages.UsedHeight()) + fe.messagesView.SetHeight(fe.messagesView.UsedHeight()) } else { - fe.messages.SetHeight(10) + fe.messagesView.SetHeight(10) } - _, err := fmt.Fprint(out, fe.messages.View()) + _, err := fmt.Fprint(out, fe.messagesView.View()) return true, err } func (fe *Frontend) renderPrimaryOutput() error { - if len(fe.primaryLogs) == 0 { + logs := fe.db.PrimaryLogs[fe.db.PrimarySpan] + if len(logs) == 0 { return nil } var trailingLn bool - for _, l := range fe.primaryLogs { - if bytes.HasSuffix(l.Data, []byte("\n")) { + for _, l := range logs { + data := l.Body().AsString() + if strings.HasSuffix(data, "\n") { trailingLn = true } - switch l.Stream { - case progrock.LogStream_STDOUT: - if _, err := os.Stdout.Write(l.Data); err != nil { + var stream int + l.WalkAttributes(func(attr log.KeyValue) bool { + if attr.Key == telemetry.LogStreamAttr { + stream = int(attr.Value.AsInt64()) + return false + } + return true + }) + switch stream { + case 1: + if _, err := fmt.Fprint(os.Stdout, data); err != nil { return err } - case progrock.LogStream_STDERR: - if _, err := os.Stderr.Write(l.Data); err != nil { + case 2: + if _, err := fmt.Fprint(os.Stderr, data); err != nil { return err } } @@ -292,20 +275,6 @@ func (fe *Frontend) renderPrimaryOutput() error { return nil } -func (fe *Frontend) redirectStdin(st *zoomState) { - if st == nil { - fe.in.Restore() - // restore scrolling as we transition back to the DAG UI, since an app - // may have disabled it - // fe.out.EnableMouseCellMotion() - } else { - // disable mouse events, can't assume zoomed input wants it (might be - // regular shell like sh) - // fe.out.DisableMouseCellMotion() - fe.in.SetOverride(st.Input) - } -} - func findTTY() (*os.File, bool) { // some of these may be redirected for _, f := range []*os.File{os.Stderr, os.Stdout, os.Stdin} { @@ -316,174 +285,65 @@ func findTTY() (*os.File, bool) { return nil, false } -type swappableWriter struct { - original io.Writer - override io.Writer - sync.Mutex -} - -func (w *swappableWriter) SetOverride(to io.Writer) { - w.Lock() - w.override = to - w.Unlock() -} +var _ sdktrace.SpanExporter = (*Frontend)(nil) -func (w *swappableWriter) Restore() { - w.SetOverride(nil) -} - -func (w *swappableWriter) Write(p []byte) (int, error) { - w.Lock() - defer w.Unlock() - if w.override != nil { - return w.override.Write(p) +func (fe *Frontend) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error { + fe.mu.Lock() + defer fe.mu.Unlock() + 
slog.Debug("frontend exporting", "spans", len(spans)) + for _, span := range spans { + slog.Debug("frontend exporting span", + "trace", span.SpanContext().TraceID(), + "id", span.SpanContext().SpanID(), + "parent", span.Parent().SpanID(), + "span", span.Name(), + ) } - return w.original.Write(p) + return fe.db.ExportSpans(ctx, spans) } -var _ progrock.Writer = (*Frontend)(nil) +var _ sdklog.LogExporter = (*Frontend)(nil) -func (fe *Frontend) WriteStatus(update *progrock.StatusUpdate) error { +func (fe *Frontend) ExportLogs(ctx context.Context, logs []*sdklog.LogData) error { fe.mu.Lock() defer fe.mu.Unlock() - if err := fe.db.WriteStatus(update); err != nil { - return err - } - if fe.plainConsole != nil { - if err := fe.plainConsole.WriteStatus(update); err != nil { - return err - } - } - for _, v := range update.Vertexes { - _, isZoomed := fe.zoomed[v.Id] - if v.Zoomed && !isZoomed { - fe.initZoom(v) - } else if isZoomed { - fe.releaseZoom(v) - } - if v.Id == PrimaryVertex { - fe.primaryVtx = v - } - } - for _, l := range update.Logs { - if l.Vertex == PrimaryVertex { - fe.primaryLogs = append(fe.primaryLogs, l) - } - var w io.Writer - if t, found := fe.zoomed[l.Vertex]; found { - w = t.Output - } else { - w = fe.vertexLogs(l.Vertex) - } - _, err := w.Write(l.Data) - if err != nil { - return fmt.Errorf("write logs: %w", err) - } - } - for _, msg := range update.Messages { - if fe.Debug || msg.Level > progrock.MessageLevel_DEBUG { - progrock.WriteMessage(fe.messagesW, msg) - } - } - if len(update.Vertexes) > 0 { - steps := CollectSteps(fe.db) - rows := CollectRows(steps) - fe.logsView = CollectLogsView(rows) - } - return nil + slog.Debug("frontend exporting logs", "logs", len(logs)) + return fe.db.ExportLogs(ctx, logs) } -func (fe *Frontend) vertexLogs(id string) *Vterm { - term, found := fe.logs[id] - if !found { - term = NewVterm() - if fe.window.Width != -1 { - term.SetWidth(fe.window.Width) - } - fe.logs[id] = term +func (fe *Frontend) Shutdown(ctx context.Context) error { + // TODO this gets called twice (once for traces, once for logs) + if err := fe.db.Shutdown(ctx); err != nil { + return err } - return term + return fe.Close() } -var ( - // what's a little global state between friends? - termSetups = map[string]progrock.TermSetupFunc{} - termSetupsL = new(sync.Mutex) -) +type eofMsg struct{} -func setupTerm(vtxID string, vt *midterm.Terminal) io.Writer { - termSetupsL.Lock() - defer termSetupsL.Unlock() - setup, ok := termSetups[vtxID] - if ok && setup != nil { - return setup(vt) +func (fe *Frontend) Close() error { + if fe.program != nil { + fe.program.Send(eofMsg{}) } return nil } -// Zoomed marks the vertex as zoomed, indicating it should take up as much -// screen space as possible. 
-func Zoomed(setup progrock.TermSetupFunc) progrock.VertexOpt { - return progrock.VertexOptFunc(func(vertex *progrock.Vertex) { - termSetupsL.Lock() - termSetups[vertex.Id] = setup - termSetupsL.Unlock() - vertex.Zoomed = true - }) +type backgroundMsg struct { + cmd tea.ExecCommand + errs chan<- error } -type scrollbackMsg struct { - Line string -} - -func (fe *Frontend) initZoom(v *progrock.Vertex) { - var vt *midterm.Terminal - if fe.window.Height == -1 || fe.window.Width == -1 { - vt = midterm.NewAutoResizingTerminal() - } else { - vt = midterm.NewTerminal(fe.window.Height, fe.window.Width) - } - vt.OnScrollback(func(line midterm.Line) { - fe.scrollbackQueueMu.Lock() - fe.scrollbackQueue = append(fe.scrollbackQueue, tea.Println(line.Display())) - fe.scrollbackQueueMu.Unlock() +func (fe *Frontend) Background(cmd tea.ExecCommand) error { + errs := make(chan error, 1) + fe.program.Send(backgroundMsg{ + cmd: cmd, + errs: errs, }) - vt.Raw = true - w := setupTerm(v.Id, vt) - st := &zoomState{ - Output: vt, - Input: w, - } - fe.zoomed[v.Id] = st - fe.currentZoom = st - fe.redirectStdin(st) -} - -func (fe *Frontend) releaseZoom(vtx *progrock.Vertex) { - delete(fe.zoomed, vtx.Id) -} - -type eofMsg struct{} - -func (fe *Frontend) Close() error { - if fe.program != nil { - fe.program.Send(eofMsg{}) - } else if fe.plainConsole != nil { - if err := fe.plainConsole.Close(); err != nil { - return err - } - fmt.Fprintln(consoleSink) - } - return nil + return <-errs } func (fe *Frontend) Render(out *termenv.Output) error { - // if we're zoomed, render the zoomed terminal and nothing else, but only - // after we've actually seen output from it. - if fe.currentZoom != nil && fe.currentZoom.Output.UsedHeight() > 0 { - _, err := fe.renderZoomed(out, fe.currentZoom) - return err - } + fe.recalculateView() if _, err := fe.renderProgress(out); err != nil { return err } @@ -493,43 +353,58 @@ func (fe *Frontend) Render(out *termenv.Output) error { return nil } +func (fe *Frontend) recalculateView() { + steps := CollectSpans(fe.db, trace.TraceID{}) + rows := CollectRows(steps) + fe.logsView = CollectLogsView(rows) +} + func (fe *Frontend) renderProgress(out *termenv.Output) (bool, error) { var renderedAny bool if fe.logsView == nil { return false, nil } - if init := fe.logsView.Init; init != nil && (fe.Debug || init.IsInteresting()) { - if err := fe.renderRow(out, init); err != nil { - return renderedAny, err - } - } for _, row := range fe.logsView.Body { - if fe.Debug || row.IsInteresting() { - if err := fe.renderRow(out, row); err != nil { + if fe.Debug || fe.ShouldShow(row) { + if err := fe.renderRow(out, row, 0); err != nil { return renderedAny, err } renderedAny = true } } - if fe.logsView.Primary != nil && !fe.done { - fe.renderLogs(out, fe.logsView.Primary.Digest, -1) + if fe.Plain || (fe.logsView.Primary != nil && !fe.done) { + if renderedAny { + fmt.Fprintln(out) + } + fe.renderLogs(out, fe.logsView.Primary, -1) renderedAny = true } return renderedAny, nil } -func (fe *Frontend) renderZoomed(out *termenv.Output, st *zoomState) (bool, error) { - var renderedAny bool - for i := 0; i < st.Output.UsedHeight(); i++ { - if i > 0 { - fmt.Fprintln(out) - } - if err := st.Output.RenderLine(out, i); err != nil { - return renderedAny, err - } - renderedAny = true +func (fe *Frontend) ShouldShow(row *TraceRow) bool { + span := row.Span + if span.Err() != nil { + // show errors always + return true } - return renderedAny, nil + if span.IsInternal() && fe.Verbosity < 2 { + // internal steps are hidden by default + 
return false + } + if span.Duration() < TooFastThreshold && fe.Verbosity < 3 { + // ignore fast steps; signal:noise is too poor + return false + } + if row.IsRunning { + return true + } + if time.Since(span.EndTime()) < GCThreshold || + fe.Plain || + fe.Verbosity >= 1 { + return true + } + return false } var _ tea.Model = (*Frontend)(nil) @@ -555,9 +430,12 @@ func (fe *Frontend) spawn() (msg tea.Msg) { return doneMsg{fe.run(fe.runCtx)} } +type backgroundDoneMsg struct{} + func (fe *Frontend) Update(msg tea.Msg) (tea.Model, tea.Cmd) { switch msg := msg.(type) { case doneMsg: // run finished + slog.Debug("run finished", "err", msg.err) fe.done = true fe.err = msg.err if fe.eof { @@ -566,33 +444,45 @@ func (fe *Frontend) Update(msg tea.Msg) (tea.Model, tea.Cmd) { return fe, nil case eofMsg: // received end of updates + slog.Debug("got EOF") fe.eof = true if fe.done { return fe, tea.Quit } return fe, nil - case scrollbackMsg: - return fe, tea.Println(msg.Line) + case backgroundMsg: + fe.backgrounded = true + return fe, tea.Exec(msg.cmd, func(err error) tea.Msg { + msg.errs <- err + return backgroundDoneMsg{} + }) + + case backgroundDoneMsg: + return fe, nil case tea.KeyMsg: switch msg.String() { case "q", "esc", "ctrl+c": + if fe.interrupted { + slog.Warn("exiting immediately") + return fe, tea.Quit + } else { + slog.Warn("canceling... (press again to exit immediately)") + } fe.interrupt() + fe.interrupted = true return fe, nil // tea.Quit is deferred until we receive doneMsg + case "ctrl+\\": // SIGQUIT + fe.restore() + sigquit() + return fe, nil default: return fe, nil } case tea.WindowSizeMsg: - fe.window = msg - for _, st := range fe.zoomed { - st.Output.Resize(msg.Height, msg.Width) - } - for _, vt := range fe.logs { - vt.SetWidth(msg.Width) - } - fe.messages.SetWidth(msg.Width) + fe.SetWindowSize(msg) return fe, nil case ui.FrameMsg: @@ -600,17 +490,19 @@ func (fe *Frontend) Update(msg tea.Msg) (tea.Model, tea.Cmd) { // NB: take care not to forward Frame downstream, since that will result // in runaway ticks. instead inner components should send a SetFpsMsg to // adjust the outermost layer. - fe.scrollbackQueueMu.Lock() - queue := fe.scrollbackQueue - fe.scrollbackQueue = nil - fe.scrollbackQueueMu.Unlock() - return fe, tea.Sequence(append(queue, ui.Frame(fe.fps))...) 
+ return fe, ui.Frame(fe.fps) default: return fe, nil } } +func (fe *Frontend) SetWindowSize(msg tea.WindowSizeMsg) { + fe.window = msg + fe.db.SetWidth(msg.Width) + fe.messagesView.SetWidth(msg.Width) +} + func (fe *Frontend) render() { fe.mu.Lock() fe.view.Reset() @@ -622,6 +514,11 @@ func (fe *Frontend) View() string { fe.mu.Lock() defer fe.mu.Unlock() view := fe.view.String() + if fe.backgrounded { + // if we've been backgrounded, show nothing, so a user's shell session + // doesn't have any garbage before/after + return "" + } if fe.done && fe.eof { // print nothing; make way for the pristine output in the final render return "" @@ -636,45 +533,69 @@ func (fe *Frontend) DumpID(out *termenv.Output, id *call.ID) error { return err } } - return fe.renderID(out, nil, id, 0, false) + dag, err := id.ToProto() + if err != nil { + return err + } + for dig, call := range dag.CallsByDigest { + fe.db.Calls[dig] = call + } + return fe.renderCall(out, nil, id.Call(), 0, false) } -func (fe *Frontend) renderRow(out *termenv.Output, row *TraceRow) error { - if !row.IsInteresting() && !fe.Debug { +func (fe *Frontend) renderRow(out *termenv.Output, row *TraceRow, depth int) error { + if !fe.ShouldShow(row) && !fe.Debug { return nil } - fe.renderStep(out, row.Step, row.Depth()) - fe.renderLogs(out, row.Step.Digest, row.Depth()) - for _, child := range row.Children { - if err := fe.renderRow(out, child); err != nil { - return err + if !row.Span.Passthrough { + fe.renderStep(out, row.Span, depth) + fe.renderLogs(out, row.Span, depth) + depth++ + } + if !row.Span.Encapsulate || row.Span.Status().Code == codes.Error || fe.Verbosity >= 2 { + for _, child := range row.Children { + if err := fe.renderRow(out, child, depth); err != nil { + return err + } } } return nil } -func (fe *Frontend) renderStep(out *termenv.Output, step *Step, depth int) error { - id := step.ID() - vtx := step.db.MostInterestingVertex(step.Digest) +func (fe *Frontend) renderStep(out *termenv.Output, span *Span, depth int) error { + id := span.Call if id != nil { - if err := fe.renderID(out, vtx, id, depth, false); err != nil { + if err := fe.renderCall(out, span, id, depth, false); err != nil { return err } - } else if vtx != nil { - if err := fe.renderVertex(out, vtx, depth); err != nil { + } else if span != nil { + if err := fe.renderVertex(out, span, depth); err != nil { return err } } + if span.Status().Code == codes.Error && span.Status().Description != "" { + indent(out, depth) + // print error description above it + fmt.Fprintf(out, + out.String("! 
%s\n").Foreground(termenv.ANSIYellow).String(), + span.Status().Description, + ) + } return nil } -func (fe *Frontend) renderLogs(out *termenv.Output, dig string, depth int) { - if logs, ok := fe.logs[dig]; ok { +func (fe *Frontend) renderLogs(out *termenv.Output, span *Span, depth int) { + if logs, ok := fe.db.Logs[span.SpanContext().SpanID()]; ok { pipe := out.String(ui.VertBoldBar).Foreground(termenv.ANSIBrightBlack) if depth != -1 { logs.SetPrefix(strings.Repeat(" ", depth) + pipe.String() + " ") } - logs.SetHeight(fe.window.Height / 3) + if fe.Plain { + // print full logs in plain mode + logs.SetHeight(logs.UsedHeight()) + } else { + logs.SetHeight(fe.window.Height / 3) + } fmt.Fprint(out, logs.View()) } } @@ -689,39 +610,39 @@ const ( moduleColor = termenv.ANSIMagenta ) -func (fe *Frontend) renderIDBase(out *termenv.Output, id *call.ID) error { - typeName := id.Type().ToAST().Name() +func (fe *Frontend) renderIDBase(out *termenv.Output, call *callpbv1.Call) error { + typeName := call.Type.ToAST().Name() parent := out.String(typeName) - if id.Module() != nil { + if call.Module != nil { parent = parent.Foreground(moduleColor) } fmt.Fprint(out, parent.String()) return nil } -func (fe *Frontend) renderID(out *termenv.Output, vtx *progrock.Vertex, id *call.ID, depth int, inline bool) error { +func (fe *Frontend) renderCall(out *termenv.Output, span *Span, id *callpbv1.Call, depth int, inline bool) error { if !inline { indent(out, depth) } - if vtx != nil { - fe.renderStatus(out, vtx) + if span != nil { + fe.renderStatus(out, span, depth) } - if id.Base() != nil { - if err := fe.renderIDBase(out, id.Base()); err != nil { + if id.ReceiverDigest != "" { + if err := fe.renderIDBase(out, fe.db.MustCall(id.ReceiverDigest)); err != nil { return err } fmt.Fprint(out, ".") } - fmt.Fprint(out, out.String(id.Field()).Bold()) + fmt.Fprint(out, out.String(id.Field).Bold()) - if len(id.Args()) > 0 { + if len(id.Args) > 0 { fmt.Fprint(out, "(") var needIndent bool - for _, arg := range id.Args() { - if _, ok := arg.Value().ToInput().(*call.ID); ok { + for _, arg := range id.Args { + if arg.GetValue().GetCallDigest() != "" { needIndent = true break } @@ -730,24 +651,19 @@ func (fe *Frontend) renderID(out *termenv.Output, vtx *progrock.Vertex, id *call fmt.Fprintln(out) depth++ depth++ - for _, arg := range id.Args() { + for _, arg := range id.Args { indent(out, depth) - fmt.Fprintf(out, out.String("%s:").Foreground(kwColor).String(), arg.Name()) - val := arg.Value() + fmt.Fprintf(out, out.String("%s:").Foreground(kwColor).String(), arg.GetName()) + val := arg.GetValue() fmt.Fprint(out, " ") - switch x := val.(type) { - case *call.LiteralID: - argVertexID := x.Value().Digest() - argVtx := fe.db.Vertices[argVertexID.String()] - base := x.Value() - if baseStep, ok := fe.db.HighLevelStep(x.Value()); ok { - base = baseStep.ID() - } - if err := fe.renderID(out, argVtx, base, depth-1, true); err != nil { + if argDig := val.GetCallDigest(); argDig != "" { + argCall := fe.db.Simplify(fe.db.MustCall(argDig)) + span := fe.db.MostInterestingSpan(argDig) + if err := fe.renderCall(out, span, argCall, depth-1, true); err != nil { return err } - default: - fe.renderLiteral(out, arg.Value()) + } else { + fe.renderLiteral(out, arg.GetValue()) fmt.Fprintln(out) } } @@ -755,22 +671,22 @@ func (fe *Frontend) renderID(out *termenv.Output, vtx *progrock.Vertex, id *call indent(out, depth) depth-- //nolint:ineffassign } else { - for i, arg := range id.Args() { + for i, arg := range id.Args { if i > 0 { fmt.Fprint(out, ", ") } - 
fmt.Fprintf(out, out.String("%s:").Foreground(kwColor).String()+" ", arg.Name()) - fe.renderLiteral(out, arg.Value()) + fmt.Fprintf(out, out.String("%s:").Foreground(kwColor).String()+" ", arg.GetName()) + fe.renderLiteral(out, arg.GetValue()) } } fmt.Fprint(out, ")") } - typeStr := out.String(": " + id.Type().ToAST().String()).Faint() + typeStr := out.String(": " + id.Type.ToAST().String()).Faint() fmt.Fprint(out, typeStr) - if vtx != nil { - fe.renderDuration(out, vtx) + if span != nil { + fe.renderDuration(out, span) } fmt.Fprintln(out) @@ -778,83 +694,76 @@ func (fe *Frontend) renderID(out *termenv.Output, vtx *progrock.Vertex, id *call return nil } -func (fe *Frontend) renderVertex(out *termenv.Output, vtx *progrock.Vertex, depth int) error { +func (fe *Frontend) renderVertex(out *termenv.Output, span *Span, depth int) error { indent(out, depth) - fe.renderStatus(out, vtx) - fmt.Fprint(out, vtx.Name) - fe.renderVertexTasks(out, vtx, depth) - fe.renderDuration(out, vtx) + fe.renderStatus(out, span, depth) + fmt.Fprint(out, span.Name()) + // TODO: when a span has child spans that have progress, do 2-d progress + // fe.renderVertexTasks(out, span, depth) + fe.renderDuration(out, span) fmt.Fprintln(out) return nil } -func (fe *Frontend) renderLiteral(out *termenv.Output, lit call.Literal) { - var color termenv.Color - switch val := lit.(type) { - case *call.LiteralBool: - color = termenv.ANSIRed - case *call.LiteralInt: - color = termenv.ANSIRed - case *call.LiteralFloat: - color = termenv.ANSIRed - case *call.LiteralString: - color = termenv.ANSIYellow +func (fe *Frontend) renderLiteral(out *termenv.Output, lit *callpbv1.Literal) { + switch val := lit.GetValue().(type) { + case *callpbv1.Literal_Bool: + fmt.Fprint(out, out.String(fmt.Sprintf("%v", val.Bool)).Foreground(termenv.ANSIRed)) + case *callpbv1.Literal_Int: + fmt.Fprint(out, out.String(fmt.Sprintf("%d", val.Int)).Foreground(termenv.ANSIRed)) + case *callpbv1.Literal_Float: + fmt.Fprint(out, out.String(fmt.Sprintf("%f", val.Float)).Foreground(termenv.ANSIRed)) + case *callpbv1.Literal_String_: if fe.window.Width != -1 && len(val.Value()) > fe.window.Width { display := string(digest.FromString(val.Value())) - fmt.Fprint(out, out.String("ETOOBIG:"+display).Foreground(color)) + fmt.Fprint(out, out.String("ETOOBIG:"+display).Foreground(termenv.ANSIYellow)) return } - case *call.LiteralID: - color = termenv.ANSIMagenta - case *call.LiteralEnum: - color = termenv.ANSIYellow - case *call.LiteralNull: - color = termenv.ANSIBrightBlack - case *call.LiteralList: + fmt.Fprint(out, out.String(fmt.Sprintf("%q", val.String_)).Foreground(termenv.ANSIYellow)) + case *callpbv1.Literal_CallDigest: + fmt.Fprint(out, out.String(val.CallDigest).Foreground(termenv.ANSIMagenta)) + case *callpbv1.Literal_Enum: + fmt.Fprint(out, out.String(val.Enum).Foreground(termenv.ANSIYellow)) + case *callpbv1.Literal_Null: + fmt.Fprint(out, out.String("null").Foreground(termenv.ANSIBrightBlack)) + case *callpbv1.Literal_List: fmt.Fprint(out, "[") - val.Range(func(i int, item call.Literal) error { + for i, item := range val.List.GetValues() { if i > 0 { fmt.Fprint(out, ", ") } fe.renderLiteral(out, item) - return nil - }) + } fmt.Fprint(out, "]") - return - case *call.LiteralObject: + case *callpbv1.Literal_Object: fmt.Fprint(out, "{") - val.Range(func(i int, name string, value call.Literal) error { + for i, item := range val.Object.GetValues() { if i > 0 { fmt.Fprint(out, ", ") } - fmt.Fprintf(out, "%s: ", name) - fe.renderLiteral(out, value) - return nil - }) + 
fmt.Fprintf(out, "%s: ", item.GetName()) + fe.renderLiteral(out, item.GetValue()) + } fmt.Fprint(out, "}") - return } - fmt.Fprint(out, out.String(lit.ToAST().String()).Foreground(color)) } -func (fe *Frontend) renderStatus(out *termenv.Output, vtx *progrock.Vertex) { +func (fe *Frontend) renderStatus(out *termenv.Output, span *Span, depth int) { var symbol string var color termenv.Color - if vtx.Completed != nil { - switch { - case vtx.Error != nil: - symbol = ui.IconFailure - color = termenv.ANSIRed - case vtx.Canceled: - symbol = ui.IconSkipped - color = termenv.ANSIBrightBlack - default: - symbol = ui.IconSuccess - color = termenv.ANSIGreen - } - } else { + switch { + case span.IsRunning(): symbol = ui.DotFilled color = termenv.ANSIYellow + case span.Canceled: + symbol = ui.IconSkipped + color = termenv.ANSIBrightBlack + case span.Status().Code == codes.Error: + symbol = ui.IconFailure + color = termenv.ANSIRed + default: + symbol = ui.IconSuccess + color = termenv.ANSIGreen } symbol = out.String(symbol).Foreground(color).String() @@ -862,53 +771,53 @@ func (fe *Frontend) renderStatus(out *termenv.Output, vtx *progrock.Vertex) { fmt.Fprintf(out, "%s ", symbol) } -func (fe *Frontend) renderDuration(out *termenv.Output, vtx *progrock.Vertex) { +func (fe *Frontend) renderDuration(out *termenv.Output, span *Span) { fmt.Fprint(out, " ") - duration := out.String(fmtDuration(vtx.Duration())) - if vtx.Completed != nil { - duration = duration.Faint() - } else { + duration := out.String(fmtDuration(span.Duration())) + if span.IsRunning() { duration = duration.Foreground(termenv.ANSIYellow) + } else { + duration = duration.Faint() } fmt.Fprint(out, duration) } -var ( - progChars = []string{"⠀", "⡀", "⣀", "⣄", "⣤", "⣦", "⣶", "⣷", "⣿"} -) - -func (fe *Frontend) renderVertexTasks(out *termenv.Output, vtx *progrock.Vertex, depth int) error { - tasks := fe.db.Tasks[vtx.Id] - if len(tasks) == 0 { - return nil - } - var spaced bool - for _, t := range tasks { - var sym termenv.Style - if t.GetTotal() != 0 { - percent := int(100 * (float64(t.GetCurrent()) / float64(t.GetTotal()))) - idx := (len(progChars) - 1) * percent / 100 - chr := progChars[idx] - sym = out.String(chr) - } else { - // TODO: don't bother printing non-progress-bar tasks for now - // else if t.Completed != nil { - // sym = out.String(ui.IconSuccess) - // } else if t.Started != nil { - // sym = out.String(ui.DotFilled) - // } - continue - } - if t.Completed != nil { - sym = sym.Foreground(termenv.ANSIGreen) - } else if t.Started != nil { - sym = sym.Foreground(termenv.ANSIYellow) - } - if !spaced { - fmt.Fprint(out, " ") - spaced = true - } - fmt.Fprint(out, sym) - } - return nil -} +// var ( +// progChars = []string{"⠀", "⡀", "⣀", "⣄", "⣤", "⣦", "⣶", "⣷", "⣿"} +// ) + +// func (fe *Frontend) renderVertexTasks(out *termenv.Output, span *Span, depth int) error { +// tasks := fe.db.Tasks[span.SpanContext().SpanID()] +// if len(tasks) == 0 { +// return nil +// } +// var spaced bool +// for _, t := range tasks { +// var sym termenv.Style +// if t.Total != 0 { +// percent := int(100 * (float64(t.Current) / float64(t.Total))) +// idx := (len(progChars) - 1) * percent / 100 +// chr := progChars[idx] +// sym = out.String(chr) +// } else { +// // TODO: don't bother printing non-progress-bar tasks for now +// // else if t.Completed != nil { +// // sym = out.String(ui.IconSuccess) +// // } else if t.Started != nil { +// // sym = out.String(ui.DotFilled) +// // } +// continue +// } +// if t.Completed.IsZero() { +// sym = 
sym.Foreground(termenv.ANSIYellow) +// } else { +// sym = sym.Foreground(termenv.ANSIGreen) +// } +// if !spaced { +// fmt.Fprint(out, " ") +// spaced = true +// } +// fmt.Fprint(out, sym) +// } +// return nil +// } diff --git a/dagql/idtui/sigquit.go b/dagql/idtui/sigquit.go new file mode 100644 index 00000000000..39d901d1538 --- /dev/null +++ b/dagql/idtui/sigquit.go @@ -0,0 +1,10 @@ +//go:build !windows +// +build !windows + +package idtui + +import "syscall" + +func sigquit() { + syscall.Kill(syscall.Getpid(), syscall.SIGQUIT) +} diff --git a/dagql/idtui/sigquit_windows.go b/dagql/idtui/sigquit_windows.go new file mode 100644 index 00000000000..7a9ebf2dba1 --- /dev/null +++ b/dagql/idtui/sigquit_windows.go @@ -0,0 +1,5 @@ +package idtui + +func sigquit() { + // TODO? +} diff --git a/dagql/idtui/spans.go b/dagql/idtui/spans.go new file mode 100644 index 00000000000..59c5aa1ed91 --- /dev/null +++ b/dagql/idtui/spans.go @@ -0,0 +1,206 @@ +package idtui + +import ( + "errors" + "fmt" + "sort" + "strings" + "time" + + "github.com/a-h/templ" + "github.com/dagger/dagger/dagql/call/callpbv1" + "go.opentelemetry.io/otel/codes" + sdktrace "go.opentelemetry.io/otel/sdk/trace" +) + +type Span struct { + sdktrace.ReadOnlySpan + + Digest string + + Call *callpbv1.Call + + Internal bool + Cached bool + Canceled bool + Inputs []string + + Primary bool + Encapsulate bool + Mask bool + Passthrough bool + Ignore bool + + db *DB + trace *Trace +} + +func (span *Span) Base() (*callpbv1.Call, bool) { + if span.Call == nil { + return nil, false + } + if span.Call.ReceiverDigest == "" { + return nil, false + } + call, ok := span.db.Calls[span.Call.ReceiverDigest] + if !ok { + return nil, false + } + return span.db.Simplify(call), true +} + +func (span *Span) IsRunning() bool { + inner := span.ReadOnlySpan + return inner.EndTime().Before(inner.StartTime()) +} + +func (span *Span) Logs() *Vterm { + return span.db.Logs[span.SpanContext().SpanID()] +} + +func (span *Span) Name() string { + return span.ReadOnlySpan.Name() +} + +// func (step *Step) Inputs() []string { +// for _, vtx := range step.db.Intervals[step.Digest] { +// return vtx.Inputs // assume all names are equal +// } +// if step.ID() != nil { +// // TODO: in principle this could return arg ID digests, but not needed +// return nil +// } +// return nil +// } + +func (span *Span) Err() error { + status := span.Status() + if status.Code == codes.Error { + return errors.New(status.Description) + } + return nil +} + +func (span *Span) IsInternal() bool { + return span.Internal +} + +func (span *Span) Duration() time.Duration { + inner := span.ReadOnlySpan + var dur time.Duration + if span.IsRunning() { + dur = time.Since(inner.StartTime()) + } else { + dur = inner.EndTime().Sub(inner.StartTime()) + } + return dur +} + +func (span *Span) EndTime() time.Time { + if span.IsRunning() { + return time.Now() + } + return span.ReadOnlySpan.EndTime() +} + +func (span *Span) IsBefore(other *Span) bool { + return span.StartTime().Before(other.StartTime()) +} + +func (span *Span) Children() []*Span { + children := []*Span{} + for childID := range span.db.Children[span.SpanContext().SpanID()] { + child, ok := span.db.Spans[childID] + if !ok { + continue + } + if !child.Ignore { + children = append(children, child) + } + } + sort.Slice(children, func(i, j int) bool { + return children[i].IsBefore(children[j]) + }) + return children +} + +type SpanBar struct { + Span *Span + Duration time.Duration + OffsetPercent float64 + WidthPercent float64 +} + +func (span *Span) 
Bar() SpanBar { + epoch := span.trace.Epoch + end := span.trace.End + if span.trace.IsRunning { + end = time.Now() + } + total := end.Sub(epoch) + + started := span.StartTime() + + var bar SpanBar + bar.OffsetPercent = float64(started.Sub(epoch)) / float64(total) + if span.EndTime().IsZero() { + bar.WidthPercent = 1 - bar.OffsetPercent + } else { + bar.Duration = span.EndTime().Sub(started) + bar.WidthPercent = float64(bar.Duration) / float64(total) + } + bar.Span = span + + return bar +} + +func (bar SpanBar) Render() templ.Component { + var dur string + if bar.Duration > 10*time.Millisecond { + dur = fmtDuration(bar.Duration) + } + return templ.Raw( + fmt.Sprintf( + `
%s
`, + bar.Span.Classes(), + bar.OffsetPercent*100, + bar.WidthPercent*100, + dur, + ), + ) +} + +func (span *Span) Classes() string { + classes := []string{} + if span.Cached { + classes = append(classes, "cached") + } + if span.Canceled { + classes = append(classes, "canceled") + } + if span.Err() != nil { + classes = append(classes, "errored") + } + if span.Internal { + classes = append(classes, "internal") + } + return strings.Join(classes, " ") +} + +func fmtDuration(d time.Duration) string { + days := int64(d.Hours()) / 24 + hours := int64(d.Hours()) % 24 + minutes := int64(d.Minutes()) % 60 + seconds := d.Seconds() - float64(86400*days) - float64(3600*hours) - float64(60*minutes) + + switch { + case d < time.Minute: + return fmt.Sprintf("%.1fs", seconds) + case d < time.Hour: + return fmt.Sprintf("%dm%.1fs", minutes, seconds) + case d < 24*time.Hour: + return fmt.Sprintf("%dh%dm%.1fs", hours, minutes, seconds) + default: + return fmt.Sprintf("%dd%dh%dm%.1fs", days, hours, minutes, seconds) + } +} diff --git a/dagql/idtui/steps.go b/dagql/idtui/steps.go deleted file mode 100644 index 4f11b57fc38..00000000000 --- a/dagql/idtui/steps.go +++ /dev/null @@ -1,274 +0,0 @@ -package idtui - -import ( - "errors" - "fmt" - "sort" - "strings" - "time" - - "github.com/a-h/templ" - "github.com/dagger/dagger/dagql/call" - "github.com/vito/progrock" -) - -type Step struct { - BaseDigest string - Digest string - - db *DB -} - -func (step *Step) ID() *call.ID { - return step.db.IDs[step.Digest] -} - -func (step *Step) Base() (*Step, bool) { - return step.db.Step(step.BaseDigest) -} - -func (step *Step) HasStarted() bool { - return len(step.db.Intervals[step.Digest]) > 0 -} - -func (step *Step) MostInterestingVertex() *progrock.Vertex { - return step.db.MostInterestingVertex(step.Digest) -} - -func (step *Step) IsRunning() bool { - ivals := step.db.Intervals[step.Digest] - if len(ivals) == 0 { - return false - } - for _, vtx := range ivals { - if vtx.Completed == nil { - return true - } - } - return false -} - -func (step *Step) Name() string { - for _, vtx := range step.db.Intervals[step.Digest] { - return vtx.Name // assume all names are equal - } - if step.ID() != nil { - return step.ID().DisplaySelf() - } - return "" -} - -func (step *Step) Inputs() []string { - for _, vtx := range step.db.Intervals[step.Digest] { - return vtx.Inputs // assume all names are equal - } - if step.ID() != nil { - // TODO: in principle this could return arg ID digests, but not needed - return nil - } - return nil -} - -func (step *Step) Err() error { - for _, vtx := range step.db.Intervals[step.Digest] { - if vtx.Error != nil { - return errors.New(vtx.GetError()) - } - } - return nil -} - -func (step *Step) IsInternal() bool { - for _, vtx := range step.db.Intervals[step.Digest] { - if vtx.Internal { - return true - } - } - return false -} - -func (step *Step) Duration() time.Duration { - var d time.Duration - for _, vtx := range step.db.Intervals[step.Digest] { - d += vtx.Duration() - } - return d -} - -func (step *Step) FirstCompleted() *time.Time { - var completed *time.Time - for _, vtx := range step.db.Intervals[step.Digest] { - if vtx.Completed == nil { - continue - } - cmp := vtx.Completed.AsTime() - if completed == nil { - completed = &cmp - continue - } - if cmp.Before(*completed) { - completed = &cmp - } - } - return completed -} - -func (step *Step) StartTime() time.Time { - ivals := step.db.Intervals[step.Digest] - if len(ivals) == 0 { - return time.Time{} - } - lowest := time.Now() - for started := range 
ivals { - if started.Before(lowest) { - lowest = started - } - } - return lowest -} - -func (step *Step) EndTime() time.Time { - now := time.Now() - ivals := step.db.Intervals[step.Digest] - if len(ivals) == 0 { - return now - } - var highest time.Time - for _, vtx := range ivals { - if vtx.Completed == nil { - highest = now - } else if vtx.Completed.AsTime().After(highest) { - highest = vtx.Completed.AsTime() - } - } - return highest -} - -func (step *Step) IsBefore(other *Step) bool { - as, bs := step.StartTime(), other.StartTime() - switch { - case as.Before(bs): - return true - case bs.Before(as): - return false - case step.EndTime().Before(other.EndTime()): - return true - case other.EndTime().Before(step.EndTime()): - return false - default: - // equal start + end time; maybe a cache hit. break ties by seeing if one - // depends on the other. - // TODO: this isn't needed for the current TUI since we don't even show - // cached steps, but we do want this for the web UI. Bring this back once - // we can be sure it doesn't explode into quadratic complexity. (Memoize?) - // return step.db.IsTransitiveDependency(other.Digest, step.Digest) - return false - } -} - -func (step *Step) Children() []*Step { - children := []*Step{} - for out := range step.db.Children[step.Digest] { - child, ok := step.db.Step(out) - if !ok { - continue - } - children = append(children, child) - } - sort.Slice(children, func(i, j int) bool { - return children[i].IsBefore(children[j]) - }) - return children -} - -type Span struct { - Duration time.Duration - OffsetPercent float64 - WidthPercent float64 - Vertex *progrock.Vertex -} - -func (step *Step) Spans() (spans []Span) { - epoch := step.db.Epoch - end := step.db.End - - ivals := step.db.Intervals[step.Digest] - if len(ivals) == 0 { - return - } - - total := end.Sub(epoch) - - for started, vtx := range ivals { - var span Span - span.OffsetPercent = float64(started.Sub(epoch)) / float64(total) - if vtx.Completed != nil { - span.Duration = vtx.Completed.AsTime().Sub(started) - span.WidthPercent = float64(span.Duration) / float64(total) - } else { - span.WidthPercent = 1 - span.OffsetPercent - } - span.Vertex = vtx - spans = append(spans, span) - } - - sort.Slice(spans, func(i, j int) bool { - return spans[i].OffsetPercent < spans[j].OffsetPercent - }) - - return -} - -func (span Span) Bar() templ.Component { - var dur string - if span.Duration > 10*time.Millisecond { - dur = fmtDuration(span.Duration) - } - return templ.Raw( - fmt.Sprintf( - `
%s
`, - VertexClasses(span.Vertex), - span.OffsetPercent*100, - span.WidthPercent*100, - dur, - ), - ) -} - -func VertexClasses(vtx *progrock.Vertex) string { - classes := []string{} - if vtx.Cached { - classes = append(classes, "cached") - } - if vtx.Canceled { - classes = append(classes, "canceled") - } - if vtx.Error != nil { - classes = append(classes, "errored") - } - if vtx.Focused { - classes = append(classes, "focused") - } - if vtx.Internal { - classes = append(classes, "internal") - } - return strings.Join(classes, " ") -} - -func fmtDuration(d time.Duration) string { - days := int64(d.Hours()) / 24 - hours := int64(d.Hours()) % 24 - minutes := int64(d.Minutes()) % 60 - seconds := d.Seconds() - float64(86400*days) - float64(3600*hours) - float64(60*minutes) - - switch { - case d < time.Minute: - return fmt.Sprintf("%.1fs", seconds) - case d < time.Hour: - return fmt.Sprintf("%dm%.1fs", minutes, seconds) - case d < 24*time.Hour: - return fmt.Sprintf("%dh%dm%.1fs", hours, minutes, seconds) - default: - return fmt.Sprintf("%dd%dh%dm%.1fs", days, hours, minutes, seconds) - } -} diff --git a/dagql/idtui/types.go b/dagql/idtui/types.go index d1bd16ba321..51a4ed5b8f7 100644 --- a/dagql/idtui/types.go +++ b/dagql/idtui/types.go @@ -3,24 +3,63 @@ package idtui import ( "sort" "time" + + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" ) -func CollectSteps(db *DB) []*Step { - var steps []*Step //nolint:prealloc - for vID := range db.Vertices { - step, ok := db.Step(vID) - if !ok { +type Trace struct { + ID trace.TraceID + Epoch, End time.Time + IsRunning bool + db *DB +} + +func (trace *Trace) HexID() string { + return trace.ID.String() +} + +func (trace *Trace) Name() string { + if span := trace.db.PrimarySpanForTrace(trace.ID); span != nil { + return span.Name() + } + return "unknown" +} + +func (trace *Trace) PrimarySpan() *Span { + return trace.db.PrimarySpanForTrace(trace.ID) +} + +type Task struct { + Span sdktrace.ReadOnlySpan + Name string + Current int64 + Total int64 + Started time.Time + Completed time.Time +} + +func CollectSpans(db *DB, traceID trace.TraceID) []*Span { + var spans []*Span //nolint:prealloc + for _, span := range db.Spans { + if span.Ignore { + continue + } + if traceID.IsValid() && span.SpanContext().TraceID() != traceID { continue } - steps = append(steps, step) + if span.Mask && span.Parent().IsValid() { + db.Spans[span.Parent().SpanID()].Passthrough = true + } + spans = append(spans, span) } - sort.Slice(steps, func(i, j int) bool { - return steps[i].IsBefore(steps[j]) + sort.Slice(spans, func(i, j int) bool { + return spans[i].IsBefore(spans[j]) }) - return steps + return spans } -func CollectRows(steps []*Step) []*TraceRow { +func CollectRows(steps []*Span) []*TraceRow { var rows []*TraceRow WalkSteps(steps, func(row *TraceRow) { if row.Parent != nil { @@ -33,7 +72,7 @@ func CollectRows(steps []*Step) []*TraceRow { } type TraceRow struct { - Step *Step + Span *Span Parent *TraceRow @@ -67,25 +106,22 @@ func CollectPipelines(rows []*TraceRow) []Pipeline { } type LogsView struct { - Primary *Step + Primary *Span Body []*TraceRow - Init *TraceRow } func CollectLogsView(rows []*TraceRow) *LogsView { view := &LogsView{} for _, row := range rows { - switch { - case view.Primary == nil && row.Step.Digest == PrimaryVertex: + if row.Span.Primary { // promote children of primary vertex to the top-level for _, child := range row.Children { child.Parent = nil } - view.Primary = row.Step - view.Body = row.Children - case view.Primary == 
nil && row.Step.Digest == InitVertex: - view.Init = row - default: + view.Primary = row.Span + // reveal anything 'extra' below the primary content + view.Body = append(row.Children, view.Body...) + } else { // reveal anything 'extra' by default (fail open) view.Body = append(view.Body, row) } @@ -98,31 +134,6 @@ const ( GCThreshold = 1 * time.Second ) -func (row *TraceRow) IsInteresting() bool { - step := row.Step - if step.Err() != nil { - // show errors always - return true - } - if step.IsInternal() { - // internal steps are, by definition, not interesting - return false - } - if step.Duration() < TooFastThreshold { - // ignore fast steps; signal:noise is too poor - return false - } - if row.IsRunning { - // show things once they've been running for a while - return true - } - if completed := step.FirstCompleted(); completed != nil && time.Since(*completed) < GCThreshold { - // show things that just completed, to reduce flicker - return true - } - return false -} - func (row *TraceRow) Depth() int { if row.Parent == nil { return 0 @@ -137,33 +148,40 @@ func (row *TraceRow) setRunning() { } } -func WalkSteps(steps []*Step, f func(*TraceRow)) { - var lastSeen string - seen := map[string]bool{} - var walk func(*Step, *TraceRow) - walk = func(step *Step, parent *TraceRow) { - if seen[step.Digest] { +func WalkSteps(spans []*Span, f func(*TraceRow)) { + var lastRow *TraceRow + seen := map[trace.SpanID]bool{} + var walk func(*Span, *TraceRow) + walk = func(span *Span, parent *TraceRow) { + spanID := span.SpanContext().SpanID() + if seen[spanID] { + return + } + if span.Passthrough { + for _, child := range span.Children() { + walk(child, parent) + } return } row := &TraceRow{ - Step: step, + Span: span, Parent: parent, } - if step.BaseDigest != "" { - row.Chained = step.BaseDigest == lastSeen + if base, ok := span.Base(); ok && lastRow != nil { + row.Chained = base.Digest == lastRow.Span.Digest } - if step.IsRunning() { + if span.IsRunning() { row.setRunning() } f(row) - lastSeen = step.Digest - seen[step.Digest] = true - for _, child := range step.Children() { + lastRow = row + seen[spanID] = true + for _, child := range span.Children() { walk(child, row) } - lastSeen = step.Digest + lastRow = row } - for _, step := range steps { + for _, step := range spans { walk(step, nil) } } diff --git a/dagql/server.go b/dagql/server.go index b504f842715..33ffbb1160b 100644 --- a/dagql/server.go +++ b/dagql/server.go @@ -47,8 +47,7 @@ type AroundFunc func( context.Context, Object, *call.ID, - func(context.Context) (Typed, error), -) func(context.Context) (Typed, error) +) (context.Context, func(res Typed, cached bool, err error)) // Cache stores results of pure selections against Server. 
type Cache interface { @@ -56,7 +55,7 @@ type Cache interface { context.Context, digest.Digest, func(context.Context) (Typed, error), - ) (Typed, error) + ) (Typed, bool, error) } // TypeDef is a type whose sole practical purpose is to define a GraphQL type, @@ -568,23 +567,28 @@ func CurrentID(ctx context.Context) *call.ID { return val.(*call.ID) } +func NoopDone(res Typed, cached bool, rerr error) {} + func (s *Server) cachedSelect(ctx context.Context, self Object, sel Selector) (res Typed, chained *call.ID, rerr error) { chainedID, err := self.IDFor(ctx, sel) if err != nil { return nil, nil, err } ctx = idToContext(ctx, chainedID) - doSelect := func(ctx context.Context) (Typed, error) { + dig := chainedID.Digest() + var val Typed + doSelect := func(ctx context.Context) (innerVal Typed, innerErr error) { + if s.telemetry != nil { + wrappedCtx, done := s.telemetry(ctx, self, chainedID) + defer func() { done(innerVal, false, innerErr) }() + ctx = wrappedCtx + } return self.Select(ctx, sel) } - if s.telemetry != nil { - doSelect = s.telemetry(ctx, self, chainedID, doSelect) - } - var val Typed if chainedID.IsTainted() { val, err = doSelect(ctx) } else { - val, err = s.Cache.GetOrInitialize(ctx, chainedID.Digest(), doSelect) + val, _, err = s.Cache.GetOrInitialize(ctx, dig, doSelect) } if err != nil { return nil, nil, err diff --git a/dagql/tracing.go b/dagql/tracing.go new file mode 100644 index 00000000000..8cc2cdbdbf7 --- /dev/null +++ b/dagql/tracing.go @@ -0,0 +1,12 @@ +package dagql + +import ( + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/trace" +) + +const InstrumentationLibrary = "dagger.io/dagql" + +func Tracer() trace.Tracer { + return otel.Tracer(InstrumentationLibrary) +} diff --git a/docs/current_docs/reference/979596-cli.mdx b/docs/current_docs/reference/979596-cli.mdx index e8362c354eb..13abca18db7 100644 --- a/docs/current_docs/reference/979596-cli.mdx +++ b/docs/current_docs/reference/979596-cli.mdx @@ -12,9 +12,10 @@ The Dagger CLI provides a command-line interface to Dagger. ### Options ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -62,7 +63,6 @@ dagger call lint stdout ### Options ``` - --focus Only show output for focused commands (default true) --json Present result as JSON -m, --mod string Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. "/path/to/some/dir") or a github repo (e.g. "github.com/dagger/dagger/path/to/some/subdir") -o, --output string Path in the host to save the result to @@ -71,9 +71,10 @@ dagger call lint stdout ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -102,7 +103,6 @@ dagger config -m github.com/dagger/hello-dagger ### Options ``` - --focus Only show output for focused commands (default true) --json output in JSON format -m, --mod string Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. "/path/to/some/dir") or a github repo (e.g. 
"github.com/dagger/dagger/path/to/some/subdir") ``` @@ -110,9 +110,10 @@ dagger config -m github.com/dagger/hello-dagger ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -146,7 +147,6 @@ dagger develop [flags] ### Options ``` - --focus Only show output for focused commands (default true) -m, --mod string Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. "/path/to/some/dir") or a github repo (e.g. "github.com/dagger/dagger/path/to/some/subdir") --sdk string New SDK for the module --source string Directory to store the module implementation source code in @@ -155,9 +155,10 @@ dagger develop [flags] ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -183,16 +184,16 @@ dagger functions [flags] [FUNCTION]... ### Options ``` - --focus Only show output for focused commands (default true) -m, --mod string Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. "/path/to/some/dir") or a github repo (e.g. "github.com/dagger/dagger/path/to/some/subdir") ``` ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -239,9 +240,10 @@ dagger init --name=hello --sdk=python --source=some/subdir ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -269,7 +271,6 @@ dagger install github.com/shykes/daggerverse/ttlsh@16e40ec244966e55e36a13cb6e1ff ### Options ``` - --focus Only show output for focused commands (default true) -m, --mod string Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. "/path/to/some/dir") or a github repo (e.g. "github.com/dagger/dagger/path/to/some/subdir") -n, --name string Name to use for the dependency in the module. Defaults to the name of the module being installed. 
``` @@ -277,9 +278,10 @@ dagger install github.com/shykes/daggerverse/ttlsh@16e40ec244966e55e36a13cb6e1ff ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -297,9 +299,10 @@ dagger login [flags] ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -317,9 +320,10 @@ dagger logout [flags] ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -365,7 +369,6 @@ EOF ``` --doc string Read query from file (defaults to reading from stdin) - --focus Only show output for focused commands (default true) -m, --mod string Path to dagger.json config file for the module or a directory containing that file. Either local path (e.g. "/path/to/some/dir") or a github repo (e.g. "github.com/dagger/dagger/path/to/some/subdir") --var strings List of query variables, in key=value format --var-json string Query variables in JSON format (overrides --var) @@ -374,9 +377,10 @@ EOF ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -427,9 +431,10 @@ dagger run python main.py ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO @@ -447,9 +452,10 @@ dagger version [flags] ### Options inherited from parent commands ``` - --debug Show more information for debugging + -d, --debug show debug logs and full verbosity --progress string progress output format (auto, plain, tty) (default "auto") -s, --silent disable terminal UI and progress output + -v, --verbose count increase verbosity (use -vv or -vvv for more) ``` ### SEE ALSO diff --git a/engine/buildkit/auth.go b/engine/buildkit/auth.go index f6e59a9c064..ccee5c24fde 100644 --- a/engine/buildkit/auth.go +++ b/engine/buildkit/auth.go @@ -4,6 +4,7 @@ import ( "context" bkauth "github.com/moby/buildkit/session/auth" + "go.opentelemetry.io/otel/trace" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" @@ -20,6 +21,7 @@ func (p *authProxy) Register(srv *grpc.Server) { // TODO: reduce boilerplate w/ generics? 
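The proxied methods that follow all apply the same re-parenting step: swap the incoming request's span context for the one captured when the Client was constructed, so credential and token lookups land under the server's trace rather than the bare gRPC request context. A standalone sketch of that step, with hypothetical helper names:

package buildkit

import (
	"context"

	"go.opentelemetry.io/otel/trace"
)

// captureServerSpanCtx would run once where the client is constructed,
// mirroring how Client stores spanCtx.
func captureServerSpanCtx(ctx context.Context) trace.SpanContext {
	return trace.SpanContextFromContext(ctx)
}

// withServerSpanCtx would run at the top of each proxied request so spans and
// outgoing calls started from the returned context join the server's trace.
func withServerSpanCtx(ctx context.Context, serverSpanCtx trace.SpanContext) context.Context {
	return trace.ContextWithSpanContext(ctx, serverSpanCtx)
}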
func (p *authProxy) Credentials(ctx context.Context, req *bkauth.CredentialsRequest) (*bkauth.CredentialsResponse, error) { + ctx = trace.ContextWithSpanContext(ctx, p.c.spanCtx) // ensure server's span context is propagated resp, err := p.c.AuthProvider.Credentials(ctx, req) if err == nil { return resp, nil @@ -31,6 +33,7 @@ func (p *authProxy) Credentials(ctx context.Context, req *bkauth.CredentialsRequ } func (p *authProxy) FetchToken(ctx context.Context, req *bkauth.FetchTokenRequest) (*bkauth.FetchTokenResponse, error) { + ctx = trace.ContextWithSpanContext(ctx, p.c.spanCtx) // ensure server's span context is propagated resp, err := p.c.AuthProvider.FetchToken(ctx, req) if err == nil { return resp, nil @@ -42,6 +45,7 @@ func (p *authProxy) FetchToken(ctx context.Context, req *bkauth.FetchTokenReques } func (p *authProxy) GetTokenAuthority(ctx context.Context, req *bkauth.GetTokenAuthorityRequest) (*bkauth.GetTokenAuthorityResponse, error) { + ctx = trace.ContextWithSpanContext(ctx, p.c.spanCtx) // ensure server's span context is propagated resp, err := p.c.AuthProvider.GetTokenAuthority(ctx, req) if err == nil { return resp, nil @@ -53,6 +57,7 @@ func (p *authProxy) GetTokenAuthority(ctx context.Context, req *bkauth.GetTokenA } func (p *authProxy) VerifyTokenAuthority(ctx context.Context, req *bkauth.VerifyTokenAuthorityRequest) (*bkauth.VerifyTokenAuthorityResponse, error) { + ctx = trace.ContextWithSpanContext(ctx, p.c.spanCtx) // ensure server's span context is propagated resp, err := p.c.AuthProvider.VerifyTokenAuthority(ctx, req) if err == nil { return resp, nil diff --git a/engine/buildkit/client.go b/engine/buildkit/client.go index 50aa8243427..b262031e112 100644 --- a/engine/buildkit/client.go +++ b/engine/buildkit/client.go @@ -5,6 +5,7 @@ import ( "encoding/json" "errors" "fmt" + "io" "net" "strings" "sync" @@ -13,6 +14,7 @@ import ( "github.com/dagger/dagger/auth" "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/session" + "github.com/koron-go/prefixw" bkcache "github.com/moby/buildkit/cache" bkcacheconfig "github.com/moby/buildkit/cache/config" "github.com/moby/buildkit/cache/remotecache" @@ -32,9 +34,10 @@ import ( solverresult "github.com/moby/buildkit/solver/result" "github.com/moby/buildkit/util/bklog" "github.com/moby/buildkit/util/entitlements" + "github.com/moby/buildkit/util/progress/progressui" bkworker "github.com/moby/buildkit/worker" "github.com/opencontainers/go-digest" - "github.com/vito/progrock" + "go.opentelemetry.io/otel/trace" "golang.org/x/sync/errgroup" "google.golang.org/grpc/metadata" ) @@ -54,7 +57,6 @@ type Opts struct { AuthProvider *auth.RegistryAuthProvider PrivilegedExecEnabled bool UpstreamCacheImports []bkgw.CacheOptionsEntry - ProgSockPath string // MainClientCaller is the caller who initialized the server associated with this // client. 
It is special in that when it shuts down, the client will be closed and // that registry auth and sockets are currently only ever sourced from this caller, @@ -80,11 +82,13 @@ type ResolveCacheExporterFunc func(ctx context.Context, g bksession.Group) (remo // Client is dagger's internal interface to buildkit APIs type Client struct { *Opts - session *bksession.Session - job *bksolver.Job - llbBridge bkfrontend.FrontendLLBBridge - llbExec executor.Executor - bk2progrock BK2Progrock + + spanCtx trace.SpanContext + + session *bksession.Session + job *bksolver.Job + llbBridge bkfrontend.FrontendLLBBridge + llbExec executor.Executor containers map[bkgw.Container]struct{} containersMu sync.Mutex @@ -101,6 +105,7 @@ func NewClient(ctx context.Context, opts *Opts) (*Client, error) { ctx, cancel := context.WithCancel(context.WithoutCancel(ctx)) client := &Client{ Opts: opts, + spanCtx: trace.SpanContextFromContext(ctx), containers: make(map[bkgw.Container]struct{}), closeCtx: ctx, cancel: cancel, @@ -142,10 +147,9 @@ func NewClient(ctx context.Context, opts *Opts) (*Client, error) { bkfrontend.FrontendLLBBridge executor.Executor }) - gw := &recordingGateway{llbBridge: br} + gw := &opTrackingGateway{llbBridge: br} client.llbBridge = gw client.llbExec = br - client.bk2progrock = gw client.dialer = &net.Dialer{} @@ -173,9 +177,28 @@ func NewClient(ctx context.Context, opts *Opts) (*Client, error) { } } + // NB(vito): break glass (replace with os.Stderr) to troubleshoot otel + // logging issues, since it's otherwise hard to see a command's output + go client.WriteStatusesTo(ctx, io.Discard) + return client, nil } +func (c *Client) WriteStatusesTo(ctx context.Context, dest io.Writer) { + dest = prefixw.New(dest, fmt.Sprintf("[buildkit] [%s] ", c.ID())) + statusCh := make(chan *bkclient.SolveStatus, 8) + pw, err := progressui.NewDisplay(dest, progressui.PlainMode) + if err != nil { + bklog.G(ctx).WithError(err).Error("failed to initialize progress writer") + return + } + go pw.UpdateFrom(ctx, statusCh) + err = c.job.Status(ctx, statusCh) + if err != nil { + bklog.G(ctx).WithError(err).Error("failed to write status updates") + } +} + func (c *Client) ID() string { return c.session.ID() } @@ -291,8 +314,6 @@ func (c *Client) Solve(ctx context.Context, req bkgw.SolveRequest) (_ *Result, r execMeta = ContainerExecUncachedMetadata{ ParentClientIDs: clientMetadata.ClientIDs(), ServerID: clientMetadata.ServerID, - ProgSockPath: c.ProgSockPath, - ProgParent: progrock.FromContext(ctx).Parent, } c.execMetadata[*execOp.OpDigest] = execMeta } @@ -467,34 +488,6 @@ func (c *Client) NewContainer(ctx context.Context, req bkgw.NewContainerRequest) return ctr, nil } -func (c *Client) WriteStatusesTo(ctx context.Context, recorder *progrock.Recorder) { - statusCh := make(chan *bkclient.SolveStatus, 8) - go func() { - err := c.job.Status(ctx, statusCh) - if err != nil { - bklog.G(ctx).WithError(err).Error("failed to write status updates") - } - }() - go func() { - defer func() { - // drain channel on error - for range statusCh { - } - }() - for { - status, ok := <-statusCh - if !ok { - return - } - err := recorder.Record(c.bk2progrock.ConvertStatus(status)) - if err != nil { - bklog.G(ctx).WithError(err).Error("failed to record status update") - return - } - } - }() -} - // CombinedResult returns a buildkit result with all the refs solved by this client so far. // This is useful for constructing a result for upstream remote caching. 
func (c *Client) CombinedResult(ctx context.Context) (*Result, error) { @@ -763,9 +756,6 @@ func withOutgoingContext(ctx context.Context) context.Context { type ContainerExecUncachedMetadata struct { ParentClientIDs []string `json:"parentClientIDs,omitempty"` ServerID string `json:"serverID,omitempty"` - // Progrock propagation - ProgSockPath string `json:"progSockPath,omitempty"` - ProgParent string `json:"progParent,omitempty"` } func (md ContainerExecUncachedMetadata) ToPBFtpProxyVal() (string, error) { diff --git a/engine/buildkit/containerimage.go b/engine/buildkit/containerimage.go index 7e6c877848e..305b94e9741 100644 --- a/engine/buildkit/containerimage.go +++ b/engine/buildkit/containerimage.go @@ -17,7 +17,6 @@ import ( bksolverpb "github.com/moby/buildkit/solver/pb" solverresult "github.com/moby/buildkit/solver/result" specs "github.com/opencontainers/image-spec/specs-go/v1" - "github.com/vito/progrock" ) type ContainerExport struct { @@ -171,8 +170,7 @@ func (c *Client) ContainerImageToTarball( defer descRef.Release() } - ctx, recorder := progrock.WithGroup(ctx, "container image to tarball") - pbDef, _, err := c.EngineContainerLocalImport(ctx, recorder, engineHostPlatform, tmpDir, nil, []string{fileName}) + pbDef, _, err := c.EngineContainerLocalImport(ctx, engineHostPlatform, tmpDir, nil, []string{fileName}) if err != nil { return nil, fmt.Errorf("failed to import container tarball from engine container filesystem: %s", err) } diff --git a/engine/buildkit/filesync.go b/engine/buildkit/filesync.go index 90a61aaa1c9..05151304cc8 100644 --- a/engine/buildkit/filesync.go +++ b/engine/buildkit/filesync.go @@ -22,12 +22,10 @@ import ( "github.com/moby/buildkit/util/bklog" specs "github.com/opencontainers/image-spec/specs-go/v1" fsutiltypes "github.com/tonistiigi/fsutil/types" - "github.com/vito/progrock" ) func (c *Client) LocalImport( ctx context.Context, - recorder *progrock.Recorder, platform specs.Platform, srcPath string, excludePatterns []string, @@ -80,15 +78,12 @@ func (c *Client) LocalImport( } copyPB := copyDef.ToPB() - RecordVertexes(recorder, copyPB) - return c.DefToBlob(ctx, copyPB) } // Import a directory from the engine container, as opposed to from a client func (c *Client) EngineContainerLocalImport( ctx context.Context, - recorder *progrock.Recorder, platform specs.Platform, srcPath string, excludePatterns []string, @@ -102,7 +97,7 @@ func (c *Client) EngineContainerLocalImport( ClientID: c.ID(), ClientHostname: hostname, }) - return c.LocalImport(ctx, recorder, platform, srcPath, excludePatterns, includePatterns) + return c.LocalImport(ctx, platform, srcPath, excludePatterns, includePatterns) } func (c *Client) ReadCallerHostFile(ctx context.Context, path string) ([]byte, error) { diff --git a/engine/buildkit/gateway.go b/engine/buildkit/gateway.go new file mode 100644 index 00000000000..24fa3b13412 --- /dev/null +++ b/engine/buildkit/gateway.go @@ -0,0 +1,92 @@ +package buildkit + +import ( + "context" + "sync" + + "github.com/gogo/protobuf/proto" + "github.com/moby/buildkit/client/llb" + "github.com/moby/buildkit/frontend" + "github.com/moby/buildkit/solver/pb" + "github.com/opencontainers/go-digest" +) + +const ( + FocusPrefix = "[focus] " + InternalPrefix = "[internal] " +) + +type opTrackingGateway struct { + llbBridge frontend.FrontendLLBBridge + + ops map[digest.Digest]proto.Message + opsMu sync.Mutex +} + +var _ frontend.FrontendLLBBridge = &opTrackingGateway{} + +// ResolveImageConfig calls the inner ResolveImageConfig. 
+func (g *opTrackingGateway) ResolveImageConfig(ctx context.Context, ref string, opt llb.ResolveImageConfigOpt) (string, digest.Digest, []byte, error) { + return g.llbBridge.ResolveImageConfig(ctx, ref, opt) +} + +// Solve records the vertexes of the definition and frontend inputs as members +// of the current progress group, and calls the inner Solve. +func (g *opTrackingGateway) Solve(ctx context.Context, req frontend.SolveRequest, sessionID string) (*frontend.Result, error) { + if req.Definition != nil { + g.opsMu.Lock() + if g.ops == nil { + g.ops = make(map[digest.Digest]proto.Message) + } + for _, dt := range req.Definition.Def { + dgst := digest.FromBytes(dt) + if _, ok := g.ops[dgst]; ok { + continue + } + var op pb.Op + if err := (&op).Unmarshal(dt); err != nil { + g.opsMu.Unlock() + return nil, err + } + + // remove raw file contents (these can be kinda large) + if fileOp := op.GetFile(); fileOp != nil { + for _, action := range fileOp.Actions { + if mkfile := action.GetMkfile(); mkfile != nil { + mkfile.Data = nil + } + } + } + + switch op := op.Op.(type) { + case *pb.Op_Exec: + g.ops[dgst] = op.Exec + case *pb.Op_Source: + g.ops[dgst] = op.Source + case *pb.Op_File: + g.ops[dgst] = op.File + case *pb.Op_Build: + g.ops[dgst] = op.Build + case *pb.Op_Merge: + g.ops[dgst] = op.Merge + case *pb.Op_Diff: + g.ops[dgst] = op.Diff + } + } + g.opsMu.Unlock() + } + + for _, input := range req.FrontendInputs { + if input == nil { + // TODO(vito): we currently pass a nil def to Dockerfile inputs, should + // probably change that to llb.Scratch + continue + } + } + + return g.llbBridge.Solve(ctx, req, sessionID) +} + +func (g *opTrackingGateway) Warn(ctx context.Context, dgst digest.Digest, msg string, opts frontend.WarnOpts) error { + return g.llbBridge.Warn(ctx, dgst, msg, opts) +} diff --git a/engine/buildkit/progrock.go b/engine/buildkit/progrock.go deleted file mode 100644 index 9171d33b9a4..00000000000 --- a/engine/buildkit/progrock.go +++ /dev/null @@ -1,261 +0,0 @@ -package buildkit - -import ( - "context" - "net" - "os" - "path/filepath" - "strings" - "sync" - - "github.com/containerd/containerd/platforms" - "github.com/gogo/protobuf/proto" - "github.com/gogo/protobuf/types" - bkclient "github.com/moby/buildkit/client" - "github.com/moby/buildkit/client/llb" - "github.com/moby/buildkit/frontend" - "github.com/moby/buildkit/solver/pb" - "github.com/moby/buildkit/util/bklog" - "github.com/opencontainers/go-digest" - "github.com/vito/progrock" - "google.golang.org/protobuf/types/known/anypb" - "google.golang.org/protobuf/types/known/timestamppb" -) - -const ( - FocusPrefix = "[focus] " - InternalPrefix = "[internal] " -) - -type BK2Progrock interface { - ConvertStatus(*bkclient.SolveStatus) *progrock.StatusUpdate -} - -type recordingGateway struct { - llbBridge frontend.FrontendLLBBridge - - records map[digest.Digest]proto.Message - recordsMu sync.Mutex -} - -var _ frontend.FrontendLLBBridge = &recordingGateway{} - -var _ BK2Progrock = &recordingGateway{} - -// ResolveImageConfig records the image config resolution vertex as a member of -// the current progress group, and calls the inner ResolveImageConfig. -func (g *recordingGateway) ResolveImageConfig(ctx context.Context, ref string, opt llb.ResolveImageConfigOpt) (string, digest.Digest, []byte, error) { - rec := progrock.FromContext(ctx) - - // HACK(vito): this is how Buildkit determines the vertex digest. Keep this - // in sync with Buildkit until a better way to do this arrives. 
It hasn't - // changed in 5 years, surely it won't soon, right? - id := ref - if platform := opt.Platform; platform == nil { - id += platforms.Format(platforms.DefaultSpec()) - } else { - id += platforms.Format(*platform) - } - - rec.Join(digest.FromString(id)) - - return g.llbBridge.ResolveImageConfig(ctx, ref, opt) -} - -// Solve records the vertexes of the definition and frontend inputs as members -// of the current progress group, and calls the inner Solve. -func (g *recordingGateway) Solve(ctx context.Context, req frontend.SolveRequest, sessionID string) (*frontend.Result, error) { - rec := progrock.FromContext(ctx) - - if req.Definition != nil { - RecordVertexes(rec, req.Definition) - - g.recordsMu.Lock() - if g.records == nil { - g.records = make(map[digest.Digest]proto.Message) - } - for _, dt := range req.Definition.Def { - dgst := digest.FromBytes(dt) - if _, ok := g.records[dgst]; ok { - continue - } - var op pb.Op - if err := (&op).Unmarshal(dt); err != nil { - g.recordsMu.Unlock() - return nil, err - } - - // remove raw file contents (these can be kinda large) - if fileOp := op.GetFile(); fileOp != nil { - for _, action := range fileOp.Actions { - if mkfile := action.GetMkfile(); mkfile != nil { - mkfile.Data = nil - } - } - } - - switch op := op.Op.(type) { - case *pb.Op_Exec: - g.records[dgst] = op.Exec - case *pb.Op_Source: - g.records[dgst] = op.Source - case *pb.Op_File: - g.records[dgst] = op.File - case *pb.Op_Build: - g.records[dgst] = op.Build - case *pb.Op_Merge: - g.records[dgst] = op.Merge - case *pb.Op_Diff: - g.records[dgst] = op.Diff - } - } - g.recordsMu.Unlock() - } - - for _, input := range req.FrontendInputs { - if input == nil { - // TODO(vito): we currently pass a nil def to Dockerfile inputs, should - // probably change that to llb.Scratch - continue - } - - RecordVertexes(rec, input) - } - - return g.llbBridge.Solve(ctx, req, sessionID) -} - -func (g *recordingGateway) Warn(ctx context.Context, dgst digest.Digest, msg string, opts frontend.WarnOpts) error { - return g.llbBridge.Warn(ctx, dgst, msg, opts) -} - -func (g *recordingGateway) ConvertStatus(event *bkclient.SolveStatus) *progrock.StatusUpdate { - var status progrock.StatusUpdate - for _, v := range event.Vertexes { - vtx := &progrock.Vertex{ - Id: v.Digest.String(), - Name: v.Name, - Cached: v.Cached, - } - if strings.HasPrefix(v.Name, InternalPrefix) { - vtx.Internal = true - vtx.Name = strings.TrimPrefix(v.Name, InternalPrefix) - } - if strings.HasPrefix(v.Name, FocusPrefix) { - vtx.Focused = true - vtx.Name = strings.TrimPrefix(v.Name, FocusPrefix) - } - for _, input := range v.Inputs { - vtx.Inputs = append(vtx.Inputs, input.String()) - } - if v.Started != nil { - vtx.Started = timestamppb.New(*v.Started) - } - if v.Completed != nil { - vtx.Completed = timestamppb.New(*v.Completed) - } - if v.Error != "" { - if strings.HasSuffix(v.Error, context.Canceled.Error()) { - vtx.Canceled = true - } else { - msg := v.Error - vtx.Error = &msg - } - } - - g.recordsMu.Lock() - if op, ok := g.records[v.Digest]; ok { - if op != nil { - g.records[v.Digest] = nil // don't write out a record again - - if a, err := types.MarshalAny(op); err == nil { - status.Metas = append(status.Metas, &progrock.VertexMeta{Name: "op", Vertex: vtx.Id, Data: &anypb.Any{TypeUrl: a.TypeUrl, Value: a.Value}}) - } - } - } - g.recordsMu.Unlock() - - status.Vertexes = append(status.Vertexes, vtx) - } - - for _, s := range event.Statuses { - task := &progrock.VertexTask{ - Vertex: s.Vertex.String(), - Name: s.ID, // remap - Total: 
s.Total, - Current: s.Current, - } - if s.Started != nil { - task.Started = timestamppb.New(*s.Started) - } - if s.Completed != nil { - task.Completed = timestamppb.New(*s.Completed) - } - status.Tasks = append(status.Tasks, task) - } - - for _, s := range event.Logs { - status.Logs = append(status.Logs, &progrock.VertexLog{ - Vertex: s.Vertex.String(), - Stream: progrock.LogStream(s.Stream), - Data: s.Data, - Timestamp: timestamppb.New(s.Timestamp), - }) - } - - return &status -} - -type ProgrockLogrusWriter struct{} - -func (w ProgrockLogrusWriter) WriteStatus(ev *progrock.StatusUpdate) error { - l := bklog.G(context.TODO()) - for _, vtx := range ev.Vertexes { - l = l.WithField("vertex-"+vtx.Id, vtx) - } - for _, task := range ev.Tasks { - l = l.WithField("task-"+task.Vertex, task) - } - for _, log := range ev.Logs { - l = l.WithField("log-"+log.Vertex, log) - } - l.Trace() - return nil -} - -func (w ProgrockLogrusWriter) Close() error { - return nil -} - -func ProgrockForwarder(sockPath string, w progrock.Writer) (progrock.Writer, func() error, error) { - if err := os.MkdirAll(filepath.Dir(sockPath), 0700); err != nil { - return nil, nil, err - } - l, err := net.Listen("unix", sockPath) - if err != nil { - return nil, nil, err - } - - progW, err := progrock.ServeRPC(l, w) - if err != nil { - return nil, nil, err - } - - return progW, l.Close, nil -} - -func RecordVertexes(recorder *progrock.Recorder, def *pb.Definition) { - dgsts := []digest.Digest{} - for dgst, meta := range def.Metadata { - if meta.ProgressGroup != nil { - // Regular progress group, i.e. from Dockerfile; record it as a subgroup, - // with 'weak' annotation so it's distinct from user-configured - // pipelines. - recorder.WithGroup(meta.ProgressGroup.Name, progrock.Weak()).Join(dgst) - } else { - dgsts = append(dgsts, dgst) - } - } - - recorder.Join(dgsts...) 
-} diff --git a/engine/buildkit/socket.go b/engine/buildkit/socket.go index c780f615f73..3f8c6bf5020 100644 --- a/engine/buildkit/socket.go +++ b/engine/buildkit/socket.go @@ -4,6 +4,7 @@ import ( "context" "github.com/moby/buildkit/session/sshforward" + "go.opentelemetry.io/otel/trace" "google.golang.org/grpc" "google.golang.org/grpc/metadata" ) @@ -26,6 +27,8 @@ func (p *socketProxy) ForwardAgent(stream sshforward.SSH_ForwardAgentServer) err ctx, cancel := context.WithCancel(stream.Context()) defer cancel() + ctx = trace.ContextWithSpanContext(ctx, p.c.spanCtx) // ensure server's span context is propagated + incomingMD, _ := metadata.FromIncomingContext(ctx) ctx = metadata.NewOutgoingContext(ctx, incomingMD) diff --git a/engine/client/buildkit.go b/engine/client/buildkit.go index f925789eaac..22e9cc9c186 100644 --- a/engine/client/buildkit.go +++ b/engine/client/buildkit.go @@ -5,14 +5,12 @@ import ( "fmt" "net" "net/url" - "os" "time" "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/client/drivers" + "github.com/dagger/dagger/telemetry" bkclient "github.com/moby/buildkit/client" - "github.com/moby/buildkit/util/tracing/detect" - "github.com/vito/progrock" "go.opentelemetry.io/otel" ) @@ -21,46 +19,17 @@ const ( envDaggerCloudCachetoken = "_EXPERIMENTAL_DAGGER_CACHESERVICE_TOKEN" ) -func newBuildkitClient(ctx context.Context, rec *progrock.VertexRecorder, remote *url.URL, userAgent string) (_ *bkclient.Client, _ *bkclient.Info, rerr error) { - driver, err := drivers.GetDriver(remote.Scheme) - if err != nil { - return nil, nil, err - } - - var cloudToken string - if v, ok := os.LookupEnv(drivers.EnvDaggerCloudToken); ok { - cloudToken = v - } else if _, ok := os.LookupEnv(envDaggerCloudCachetoken); ok { - cloudToken = v - } - - connector, err := driver.Provision(ctx, rec, remote, &drivers.DriverOpts{ - UserAgent: userAgent, - DaggerCloudToken: cloudToken, - GPUSupport: os.Getenv(drivers.EnvGPUSupport), - }) - if err != nil { - return nil, nil, err - } - +func newBuildkitClient(ctx context.Context, remote *url.URL, connector drivers.Connector) (_ *bkclient.Client, _ *bkclient.Info, rerr error) { opts := []bkclient.ClientOpt{ + // TODO verify? bkclient.WithTracerProvider(otel.GetTracerProvider()), } opts = append(opts, bkclient.WithContextDialer(func(context.Context, string) (net.Conn, error) { return connector.Connect(ctx) })) - exp, _, err := detect.Exporter() - if err == nil { - if td, ok := exp.(bkclient.TracerDelegate); ok { - opts = append(opts, bkclient.WithTracerDelegate(td)) - } - } else { - fmt.Fprintln(rec.Stdout(), "failed to detect opentelemetry exporter: ", err) - } - - startTask := rec.Task("starting engine") - defer startTask.Done(rerr) + ctx, span := Tracer().Start(ctx, "starting engine") + defer telemetry.End(span, func() error { return rerr }) c, err := bkclient.New(ctx, remote.String(), opts...) 
if err != nil { @@ -77,6 +46,7 @@ func newBuildkitClient(ctx context.Context, rec *progrock.VertexRecorder, remote if err != nil { return nil, nil, err } + if info.BuildkitVersion.Package != engine.Package { return nil, nil, fmt.Errorf("remote is not a valid dagger server (expected %q, got %q)", engine.Package, info.BuildkitVersion.Package) } diff --git a/engine/client/client.go b/engine/client/client.go index dc2fac4dbc1..dfeaad7e2a4 100644 --- a/engine/client/client.go +++ b/engine/client/client.go @@ -7,6 +7,7 @@ import ( "errors" "fmt" "io" + "log/slog" "net" "net/http" "net/http/httptest" @@ -21,6 +22,9 @@ import ( "dagger.io/dagger" "github.com/Khan/genqlient/graphql" "github.com/cenkalti/backoff/v4" + "github.com/containerd/containerd/defaults" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" "github.com/docker/cli/cli/config" "github.com/google/uuid" @@ -35,20 +39,19 @@ import ( "github.com/opencontainers/go-digest" "github.com/tonistiigi/fsutil" fstypes "github.com/tonistiigi/fsutil/types" - "github.com/vito/progrock" "golang.org/x/sync/errgroup" "google.golang.org/grpc" "google.golang.org/grpc/codes" + "google.golang.org/grpc/credentials/insecure" "github.com/dagger/dagger/analytics" - "github.com/dagger/dagger/core/pipeline" "github.com/dagger/dagger/engine" + "github.com/dagger/dagger/engine/client/drivers" "github.com/dagger/dagger/engine/session" "github.com/dagger/dagger/telemetry" + "github.com/dagger/dagger/telemetry/sdklog" ) -const ProgrockParentHeader = "X-Progrock-Parent" - type Params struct { // The id of the server to connect to, or if blank a new one // should be started. @@ -67,9 +70,6 @@ type Params struct { DisableHostRW bool - JournalFile string - ProgrockWriter progrock.Writer - ProgrockParent string EngineNameCallback func(string) CloudURLCallback func(string) @@ -77,11 +77,18 @@ type Params struct { // grpc context metadata for any api requests back to the engine. It's used by the API // server to determine which schema to serve and other module context metadata. 
ModuleCallerDigest digest.Digest + + EngineTrace sdktrace.SpanExporter + EngineLogs sdklog.LogExporter } type Client struct { Params - eg *errgroup.Group + + rootCtx context.Context + + eg *errgroup.Group + internalCtx context.Context internalCancel context.CancelFunc @@ -89,7 +96,8 @@ type Client struct { closeRequests context.CancelFunc closeMu sync.RWMutex - Recorder *progrock.Recorder + telemetry *errgroup.Group + telemetryConn *grpc.ClientConn httpClient *http.Client bkClient *bkclient.Client @@ -106,7 +114,7 @@ type Client struct { nestedSessionPort int - labels []pipeline.Label + labels telemetry.Labels } func Connect(ctx context.Context, params Params) (_ *Client, _ context.Context, rerr error) { @@ -118,47 +126,28 @@ func Connect(ctx context.Context, params Params) (_ *Client, _ context.Context, c.ServerID = identity.NewID() } - c.internalCtx, c.internalCancel = context.WithCancel(context.Background()) + // keep the root ctx around so we can detect whether we've been interrupted, + // so we can drain immediately in that scenario + c.rootCtx = ctx + + // NB: decouple from the originator's cancel ctx + c.internalCtx, c.internalCancel = context.WithCancel(context.WithoutCancel(ctx)) + c.closeCtx, c.closeRequests = context.WithCancel(context.WithoutCancel(ctx)) + c.eg, c.internalCtx = errgroup.WithContext(c.internalCtx) + defer func() { if rerr != nil { c.internalCancel() } }() - c.closeCtx, c.closeRequests = context.WithCancel(context.Background()) - - // progress - progMultiW := progrock.MultiWriter{} - if c.ProgrockWriter != nil { - progMultiW = append(progMultiW, c.ProgrockWriter) - } - if c.JournalFile != "" { - fw, err := newProgrockFileWriter(c.JournalFile) - if err != nil { - return nil, nil, err - } - progMultiW = append(progMultiW, fw) - } - - tel := telemetry.New() - var cloudURL string - if tel.Enabled() { - cloudURL = tel.URL() - progMultiW = append(progMultiW, telemetry.NewWriter(tel)) - } - if c.ProgrockParent != "" { - c.Recorder = progrock.NewSubRecorder(progMultiW, c.ProgrockParent) - } else { - c.Recorder = progrock.NewRecorder(progMultiW) + workdir, err := os.Getwd() + if err != nil { + return nil, nil, fmt.Errorf("get workdir: %w", err) } - ctx = progrock.ToContext(ctx, c.Recorder) - defer func() { - if rerr != nil { - c.Recorder.Close() - } - }() + c.labels = telemetry.LoadDefaultLabels(workdir, engine.Version) nestedSessionPortVal, isNestedSession := os.LookupEnv("DAGGER_SESSION_PORT") if isNestedSession { @@ -180,65 +169,148 @@ func Connect(ctx context.Context, params Params) (_ *Client, _ context.Context, return c, ctx, nil } - // sneakily using ModuleCallerDigest here because it seems nicer than just - // making something up, and should be pretty much 1:1 I think (even - // non-cached things will have a different caller digest each time) - connectDigest := params.ModuleCallerDigest - var opts []progrock.VertexOpt - if connectDigest == "" { - connectDigest = digest.FromString("_root") // arbitrary - } else { - opts = append(opts, progrock.Internal()) - } - - // NB: don't propagate this ctx, we don't want everything tucked beneath connect - _, loader := progrock.Span(ctx, connectDigest.String(), "connect", opts...) - defer func() { loader.Done(rerr) }() - // Check if any of the upstream cache importers/exporters are enabled. // Note that this is not the cache service support in engine/cache/, that // is a different feature which is configured in the engine daemon. 
- - var err error c.upstreamCacheImportOptions, c.upstreamCacheExportOptions, err = allCacheConfigsFromEnv() if err != nil { return nil, nil, fmt.Errorf("cache config from env: %w", err) } - remote, err := url.Parse(c.RunnerHost) - if err != nil { - return nil, nil, fmt.Errorf("parse runner host: %w", err) + connectSpanOpts := []trace.SpanStartOption{ + telemetry.Encapsulate(), } - bkClient, bkInfo, err := newBuildkitClient(ctx, loader, remote, c.UserAgent) - if err != nil { - return nil, nil, fmt.Errorf("new client: %w", err) + if c.Params.ModuleCallerDigest != "" { + connectSpanOpts = append(connectSpanOpts, telemetry.Internal()) + } + + // NB: don't propagate this ctx, we don't want everything tucked beneath connect + connectCtx, span := Tracer().Start(ctx, "connect", connectSpanOpts...) + defer telemetry.End(span, func() error { return rerr }) + + if err := c.startEngine(connectCtx); err != nil { + return nil, nil, fmt.Errorf("start engine: %w", err) } - c.bkClient = bkClient defer func() { if rerr != nil { c.bkClient.Close() } }() + if err := c.startSession(connectCtx); err != nil { + return nil, nil, fmt.Errorf("start session: %w", err) + } + + defer func() { + if rerr != nil { + c.bkSession.Close() + } + }() + + if err := c.daggerConnect(ctx); err != nil { + return nil, nil, fmt.Errorf("failed to connect to dagger: %w", err) + } + + return c, ctx, nil +} + +func (c *Client) startEngine(ctx context.Context) (rerr error) { + remote, err := url.Parse(c.RunnerHost) + if err != nil { + return fmt.Errorf("parse runner host: %w", err) + } + + driver, err := drivers.GetDriver(remote.Scheme) + if err != nil { + return err + } + + var cloudToken string + if v, ok := os.LookupEnv(drivers.EnvDaggerCloudToken); ok { + cloudToken = v + } else if _, ok := os.LookupEnv(envDaggerCloudCachetoken); ok { + cloudToken = v + } + + connector, err := driver.Provision(ctx, remote, &drivers.DriverOpts{ + UserAgent: c.UserAgent, + DaggerCloudToken: cloudToken, + GPUSupport: os.Getenv(drivers.EnvGPUSupport), + }) + if err != nil { + return err + } + + if err := retry(ctx, func(elapsed time.Duration, ctx context.Context) error { + // Open a separate connection for telemetry. + telemetryConn, err := grpc.DialContext(c.internalCtx, remote.String(), + grpc.WithContextDialer(func(context.Context, string) (net.Conn, error) { + return connector.Connect(c.internalCtx) + }), + // Same defaults as Buildkit. I hit the default 4MB limit pretty quickly. + // Shrinking IDs might help. + grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)), + grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)), + // Uncomment to measure telemetry traffic. 
+ // grpc.WithUnaryInterceptor(telemetry.MeasuringUnaryClientInterceptor()), + // grpc.WithStreamInterceptor(telemetry.MeasuringStreamClientInterceptor()), + grpc.WithTransportCredentials(insecure.NewCredentials())) + if err != nil { + return fmt.Errorf("telemetry grpc dial: %w", err) + } + c.telemetryConn = telemetryConn + c.telemetry = new(errgroup.Group) + + if c.EngineTrace != nil { + if err := c.exportTraces(telemetry.NewTracesSourceClient(telemetryConn)); err != nil { + return fmt.Errorf("export traces: %w", err) + } + } + + if c.EngineLogs != nil { + if err := c.exportLogs(telemetry.NewLogsSourceClient(telemetryConn)); err != nil { + return fmt.Errorf("export logs: %w", err) + } + } + + return nil + }); err != nil { + return fmt.Errorf("attach to telemetry: %w", err) + } + + bkClient, bkInfo, err := newBuildkitClient(ctx, remote, connector) + if err != nil { + return fmt.Errorf("new client: %w", err) + } + + c.bkClient = bkClient + if c.EngineNameCallback != nil { engineName := fmt.Sprintf("%s (version %s)", bkInfo.BuildkitVersion.Revision, bkInfo.BuildkitVersion.Version) c.EngineNameCallback(engineName) } + return nil +} + +func (c *Client) startSession(ctx context.Context) (rerr error) { + ctx, sessionSpan := Tracer().Start(ctx, "starting session") + defer telemetry.End(sessionSpan, func() error { return rerr }) + + ctx, stdout, stderr := telemetry.WithStdioToOtel(ctx, InstrumentationLibrary) + hostname, err := os.Hostname() if err != nil { - return nil, nil, fmt.Errorf("get hostname: %w", err) + return fmt.Errorf("get hostname: %w", err) } c.hostname = hostname - sessionTask := loader.Task("starting session") - sharedKey := "" bkSession, err := bksession.NewSession(ctx, identity.NewID(), sharedKey) if err != nil { - return nil, nil, fmt.Errorf("new s: %w", err) + return fmt.Errorf("new session: %w", err) } c.bkSession = bkSession defer func() { @@ -247,18 +319,6 @@ func Connect(ctx context.Context, params Params) (_ *Client, _ context.Context, } }() - workdir, err := os.Getwd() - if err != nil { - return nil, nil, fmt.Errorf("get workdir: %w", err) - } - - labels := pipeline.Labels{} - labels.AppendCILabel() - labels = append(labels, pipeline.LoadVCSLabels(workdir)...) - labels = append(labels, pipeline.LoadClientLabels(engine.Version)...) 
- - c.labels = labels - c.internalCtx = engine.ContextWithClientMetadata(c.internalCtx, &engine.ClientMetadata{ ClientID: c.ID(), ClientSecretToken: c.SecretToken, @@ -269,9 +329,6 @@ func Connect(ctx context.Context, params Params) (_ *Client, _ context.Context, ModuleCallerDigest: c.ModuleCallerDigest, }) - // progress - bkSession.Allow(progRockAttachable{progMultiW}) - // filesync if !c.DisableHostRW { bkSession.Allow(AnyDirSource{}) @@ -287,8 +344,7 @@ func Connect(ctx context.Context, params Params) (_ *Client, _ context.Context, bkSession.Allow(authprovider.NewDockerAuthProvider(config.LoadDefaultConfigFile(os.Stderr), nil)) // host=>container networking - bkSession.Allow(session.NewTunnelListenerAttachable(c.Recorder)) - ctx = progrock.ToContext(ctx, c.Recorder) + bkSession.Allow(session.NewTunnelListenerAttachable(ctx)) // connect to the server, registering our session attachables and starting the server if not // already started @@ -320,51 +376,48 @@ func Connect(ctx context.Context, params Params) (_ *Client, _ context.Context, DisableKeepAlives: true, }} - bo := backoff.NewExponentialBackOff() - bo.InitialInterval = 10 * time.Millisecond - connectRetryCtx, connectRetryCancel := context.WithTimeout(ctx, 300*time.Second) - defer connectRetryCancel() - err = backoff.Retry(func() error { - nextBackoff := bo.NextBackOff() - ctx, cancel := context.WithTimeout(connectRetryCtx, nextBackoff) - defer cancel() - + if err := retry(ctx, func(elapsed time.Duration, ctx context.Context) error { // Make an introspection request, since those get ignored by telemetry and // we don't want this to show up, since it's just a health check. - innerErr := c.Do(ctx, `{__schema{description}}`, "", nil, nil) - if innerErr != nil { + err := c.Do(ctx, `{__schema{description}}`, "", nil, nil) + if err != nil { // only show errors once the time between attempts exceeds this threshold, otherwise common // cases of 1 or 2 retries become too noisy - if nextBackoff > time.Second { - fmt.Fprintln(loader.Stdout(), "Failed to connect; retrying...", progrock.ErrorLabel(innerErr)) + if elapsed > time.Second { + fmt.Fprintln(stderr, "Failed to connect; retrying...", err) } } else { - fmt.Fprintln(loader.Stdout(), "OK!") + fmt.Fprintln(stdout, "OK!") } - - return innerErr - }, backoff.WithContext(bo, connectRetryCtx)) - - sessionTask.Done(err) - - if err != nil { - return nil, nil, fmt.Errorf("connect: %w", err) - } - - if c.CloudURLCallback != nil && cloudURL != "" { - c.CloudURLCallback(cloudURL) + return err + }); err != nil { + return fmt.Errorf("poll for session: %w", err) } - if err := c.daggerConnect(ctx); err != nil { - return nil, nil, fmt.Errorf("failed to connect to dagger: %w", err) - } + return nil +} - return c, ctx, nil +func retry(ctx context.Context, fn func(time.Duration, context.Context) error) error { + bo := backoff.NewExponentialBackOff() + bo.InitialInterval = 10 * time.Millisecond + connectRetryCtx, connectRetryCancel := context.WithTimeout(ctx, 300*time.Second) + defer connectRetryCancel() + start := time.Now() + return backoff.Retry(func() error { + if ctx.Err() != nil { + return backoff.Permanent(ctx.Err()) + } + nextBackoff := bo.NextBackOff() + ctx, cancel := context.WithTimeout(connectRetryCtx, nextBackoff) + defer cancel() + return fn(time.Since(start), ctx) + }, backoff.WithContext(bo, connectRetryCtx)) } func (c *Client) daggerConnect(ctx context.Context) error { var err error - c.daggerClient, err = dagger.Connect(context.Background(), + c.daggerClient, err = dagger.Connect( + 
context.WithoutCancel(ctx), dagger.WithConn(EngineConn(c)), dagger.WithSkipCompatibilityCheck()) return err @@ -413,18 +466,106 @@ func (c *Client) Close() (rerr error) { rerr = errors.Join(rerr, err) } - // mark all groups completed - // close the recorder so the UI exits - if c.Recorder != nil { - c.Recorder.Complete() - c.Recorder.Close() + // Wait for telemetry to finish draining + if c.telemetry != nil { + if err := c.telemetry.Wait(); err != nil { + rerr = errors.Join(rerr, fmt.Errorf("flush telemetry: %w", err)) + } } return rerr } +func (c *Client) exportTraces(tracesClient telemetry.TracesSourceClient) error { + // NB: we never actually want to interrupt this, since it's relied upon for + // seeing what's going on, even during shutdown + ctx := context.WithoutCancel(c.internalCtx) + + traceID := trace.SpanContextFromContext(ctx).TraceID() + spans, err := tracesClient.Subscribe(ctx, &telemetry.TelemetryRequest{ + TraceId: traceID[:], + }) + if err != nil { + return fmt.Errorf("subscribe to spans: %w", err) + } + + slog.Debug("exporting spans from engine") + + c.telemetry.Go(func() error { + defer slog.Debug("done exporting spans from engine", "ctxErr", ctx.Err()) + + for { + data, err := spans.Recv() + if err != nil { + if errors.Is(err, context.Canceled) { + return nil + } + return fmt.Errorf("recv log: %w", err) + } + + spans := telemetry.SpansFromProto(data.GetResourceSpans()) + + slog.Debug("received spans from engine", "len", len(spans)) + + for _, span := range spans { + slog.Debug("received span from engine", "span", span.Name(), "id", span.SpanContext().SpanID(), "endTime", span.EndTime()) + } + + if err := c.Params.EngineTrace.ExportSpans(ctx, spans); err != nil { + return fmt.Errorf("export %d spans: %w", len(spans), err) + } + } + }) + + return nil +} + +func (c *Client) exportLogs(logsClient telemetry.LogsSourceClient) error { + // NB: we never actually want to interrupt this, since it's relied upon for + // seeing what's going on, even during shutdown + ctx := context.WithoutCancel(c.internalCtx) + + traceID := trace.SpanContextFromContext(ctx).TraceID() + logs, err := logsClient.Subscribe(ctx, &telemetry.TelemetryRequest{ + TraceId: traceID[:], + }) + if err != nil { + return fmt.Errorf("subscribe to logs: %w", err) + } + + slog.Debug("exporting logs from engine") + + c.telemetry.Go(func() error { + defer slog.Debug("done exporting logs from engine", "ctxErr", ctx.Err()) + + for { + data, err := logs.Recv() + if err != nil { + if errors.Is(err, context.Canceled) { + return nil + } + return fmt.Errorf("recv log: %w", err) + } + + logs := telemetry.TransformPBLogs(data.GetResourceLogs()) + + slog.Debug("received logs from engine", "len", len(logs)) + + if err := c.EngineLogs.ExportLogs(ctx, logs); err != nil { + return fmt.Errorf("export %d logs: %w", len(logs), err) + } + } + }) + + return nil +} + func (c *Client) shutdownServer() error { - ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) + // don't immediately cancel shutdown if we're shutting down because we were + // canceled + ctx := context.WithoutCancel(c.internalCtx) + + ctx, cancel := context.WithTimeout(ctx, 10*time.Second) defer cancel() req, err := http.NewRequestWithContext(ctx, "POST", "http://dagger/shutdown", nil) @@ -432,6 +573,12 @@ func (c *Client) shutdownServer() error { return fmt.Errorf("new request: %w", err) } + if c.rootCtx.Err() != nil { + req.URL.RawQuery = url.Values{ + "immediate": []string{"true"}, + }.Encode() + } + req.SetBasicAuth(c.SecretToken, "") resp, err := 
c.httpClient.Do(req) @@ -556,6 +703,8 @@ func (c *Client) ServeHTTP(w http.ResponseWriter, r *http.Request) { } func (c *Client) serveHijackedHTTP(ctx context.Context, cancel context.CancelFunc, w http.ResponseWriter, r *http.Request) { + slog.Warn("serving hijacked HTTP with trace", "ctx", trace.SpanContextFromContext(ctx).TraceID()) + serverConn, err := c.DialContext(ctx, "", "") if err != nil { w.WriteHeader(http.StatusBadGateway) @@ -845,14 +994,6 @@ func (AnyDirTarget) DiffCopy(stream filesync.FileSend_DiffCopyServer) (rerr erro } } -type progRockAttachable struct { - writer progrock.Writer -} - -func (a progRockAttachable) Register(srv *grpc.Server) { - progrock.RegisterProgressServiceServer(srv, progrock.NewRPCReceiver(a.writer)) -} - const ( // cache configs that should be applied to be import and export cacheConfigEnvName = "_EXPERIMENTAL_DAGGER_CACHE_CONFIG" @@ -946,7 +1087,6 @@ func (d doerWithHeaders) Do(req *http.Request) (*http.Response, error) { func EngineConn(engineClient *Client) DirectConn { return func(req *http.Request) (*http.Response, error) { - req.Header.Add(ProgrockParentHeader, progrock.FromContext(req.Context()).Parent) req.SetBasicAuth(engineClient.SecretToken, "") resp := httptest.NewRecorder() engineClient.ServeHTTP(resp, req) diff --git a/engine/client/drivers/dial.go b/engine/client/drivers/dial.go index 422cd7267cb..fe42d513c2b 100644 --- a/engine/client/drivers/dial.go +++ b/engine/client/drivers/dial.go @@ -7,7 +7,6 @@ import ( "strings" "github.com/pkg/errors" - "github.com/vito/progrock" connh "github.com/moby/buildkit/client/connhelper" connhDocker "github.com/moby/buildkit/client/connhelper/dockercontainer" @@ -30,7 +29,7 @@ type dialDriver struct { fn func(*url.URL) (*connh.ConnectionHelper, error) } -func (d *dialDriver) Provision(ctx context.Context, _ *progrock.VertexRecorder, target *url.URL, _ *DriverOpts) (Connector, error) { +func (d *dialDriver) Provision(ctx context.Context, target *url.URL, _ *DriverOpts) (Connector, error) { return dialConnector{dialDriver: d, target: target}, nil } diff --git a/engine/client/drivers/docker.go b/engine/client/drivers/docker.go index 7b0a61aca68..17866c32e7a 100644 --- a/engine/client/drivers/docker.go +++ b/engine/client/drivers/docker.go @@ -4,6 +4,8 @@ import ( "bytes" "context" "fmt" + "io" + "log/slog" "net" "net/url" "os/exec" @@ -13,11 +15,12 @@ import ( "github.com/google/go-containerregistry/pkg/authn" "github.com/google/go-containerregistry/pkg/name" "github.com/google/go-containerregistry/pkg/v1/remote" - "github.com/pkg/errors" - "github.com/vito/progrock" - connh "github.com/moby/buildkit/client/connhelper" connhDocker "github.com/moby/buildkit/client/connhelper/dockercontainer" + "github.com/pkg/errors" + "go.opentelemetry.io/otel" + + "github.com/dagger/dagger/telemetry" ) func init() { @@ -27,8 +30,8 @@ func init() { // dockerDriver creates and manages a container, then connects to it type dockerDriver struct{} -func (d *dockerDriver) Provision(ctx context.Context, rec *progrock.VertexRecorder, target *url.URL, opts *DriverOpts) (Connector, error) { - helper, err := d.create(ctx, rec, target.Host+target.Path, opts) +func (d *dockerDriver) Provision(ctx context.Context, target *url.URL, opts *DriverOpts) (Connector, error) { + helper, err := d.create(ctx, target.Host+target.Path, opts) if err != nil { return nil, err } @@ -55,7 +58,9 @@ const ( // previous executions of the engine at different versions (which // are identified by looking for containers with the prefix // "dagger-engine-"). 
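On the client side, the spans and logs streamed back by exportTraces and exportLogs only go somewhere if Params.EngineTrace and Params.EngineLogs are populated. A rough sketch of wiring a concrete span exporter in; the stdout exporter choice and the RunnerHost value are assumptions, not part of this patch:

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"

	"github.com/dagger/dagger/engine/client"
)

func main() {
	ctx := context.Background()

	// Any sdktrace.SpanExporter works here; stdout is just easy to inspect.
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}

	c, ctx, err := client.Connect(ctx, client.Params{
		RunnerHost:  "docker-image://registry.dagger.io/engine", // assumed driver URL
		EngineTrace: exp,                                        // receives spans re-exported from the engine
	})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	_ = ctx // run queries with the returned context; engine spans stream to exp
}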
-func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder, imageRef string, opts *DriverOpts) (helper *connh.ConnectionHelper, rerr error) { +func (d *dockerDriver) create(ctx context.Context, imageRef string, opts *DriverOpts) (helper *connh.ConnectionHelper, rerr error) { + log := telemetry.ContextLogger(ctx, slog.LevelWarn) // TODO + // Get the SHA digest of the image to use as an ID for the container we'll create var id string fallbackToLeftoverEngine := false @@ -71,9 +76,9 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder, // auth keychain parses the same docker credentials as used by the buildkit // session attachable. if img, err := remote.Get(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain), remote.WithUserAgent(opts.UserAgent)); err != nil { - vtx.Recorder.Warn("failed to resolve image; falling back to leftover engine", progrock.ErrorLabel(err)) + log.Warn("failed to resolve image; falling back to leftover engine", "error", err) if strings.Contains(err.Error(), "DENIED") { - vtx.Recorder.Warn("check your docker registry auth; it might be incorrect or expired") + log.Warn("check your docker registry auth; it might be incorrect or expired") } fallbackToLeftoverEngine = true } else { @@ -85,7 +90,7 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder, // And check if we are in a fallback case then perform fallback to most recent engine leftoverEngines, err := collectLeftoverEngines(ctx) if err != nil { - vtx.Recorder.Warn("failed to list containers", progrock.ErrorLabel(err)) + log.Warn("failed to list containers", "error", err) leftoverEngines = []string{} } if fallbackToLeftoverEngine { @@ -93,17 +98,14 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder, return nil, errors.Errorf("no fallback container found") } - startTask := vtx.Task("starting engine") - defer startTask.Done(rerr) - // the first leftover engine may not be running, so make sure to start it firstEngine := leftoverEngines[0] cmd := exec.CommandContext(ctx, "docker", "start", firstEngine) - if output, err := cmd.CombinedOutput(); err != nil { + if output, err := traceExec(ctx, cmd); err != nil { return nil, errors.Wrapf(err, "failed to start container: %s", output) } - garbageCollectEngines(ctx, vtx, leftoverEngines[1:]) + garbageCollectEngines(ctx, log, leftoverEngines[1:]) return connhDocker.Helper(&url.URL{ Scheme: "docker-container", @@ -123,14 +125,11 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder, for i, leftoverEngine := range leftoverEngines { // if we already have a container with that name, attempt to start it if leftoverEngine == containerName { - startTask := vtx.Task("starting engine") - defer startTask.Done(rerr) - cmd := exec.CommandContext(ctx, "docker", "start", leftoverEngine) - if output, err := cmd.CombinedOutput(); err != nil { + if output, err := traceExec(ctx, cmd); err != nil { return nil, errors.Wrapf(err, "failed to start container: %s", output) } - garbageCollectEngines(ctx, vtx, append(leftoverEngines[:i], leftoverEngines[i+1:]...)) + garbageCollectEngines(ctx, log, append(leftoverEngines[:i], leftoverEngines[i+1:]...)) return connhDocker.Helper(&url.URL{ Scheme: "docker-container", Host: containerName, @@ -139,16 +138,10 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder, } // ensure the image is pulled - if err := exec.CommandContext(ctx, "docker", "inspect", "--type=image", imageRef).Run(); err 
!= nil {
-		pullCmd := exec.CommandContext(ctx, "docker", "pull", imageRef)
-		pullCmd.Stdout = vtx.Stdout()
-		pullCmd.Stderr = vtx.Stderr()
-		pullTask := vtx.Task("pulling %s", imageRef)
-		if err := pullCmd.Run(); err != nil {
-			pullTask.Done(err)
+	if _, err := traceExec(ctx, exec.CommandContext(ctx, "docker", "inspect", "--type=image", imageRef)); err != nil {
+		if _, err := traceExec(ctx, exec.CommandContext(ctx, "docker", "pull", imageRef)); err != nil {
 			return nil, errors.Wrapf(err, "failed to pull image")
 		}
-		pullTask.Done(nil)
 	}
 
 	cmd := exec.CommandContext(ctx,
@@ -171,10 +164,8 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder,
 
 	cmd.Args = append(cmd.Args, imageRef, "--debug")
 
-	startTask := vtx.Task("starting engine")
-	defer startTask.Done(rerr)
-	if output, err := cmd.CombinedOutput(); err != nil {
-		if !isContainerAlreadyInUseOutput(string(output)) {
+	if output, err := traceExec(ctx, cmd); err != nil {
+		if !isContainerAlreadyInUseOutput(output) {
 			return nil, errors.Wrapf(err, "failed to run container: %s", output)
 		}
 	}
@@ -182,7 +173,7 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder,
 	// garbage collect any other containers with the same name pattern, which
 	// we assume to be leftover from previous runs of the engine using an older
 	// version
-	garbageCollectEngines(ctx, vtx, leftoverEngines)
+	garbageCollectEngines(ctx, log, leftoverEngines)
 
 	return connhDocker.Helper(&url.URL{
 		Scheme: "docker-container",
@@ -190,21 +181,34 @@ func (d *dockerDriver) create(ctx context.Context, vtx *progrock.VertexRecorder,
 	})
 }
 
-func garbageCollectEngines(ctx context.Context, rec *progrock.VertexRecorder, engines []string) {
+func garbageCollectEngines(ctx context.Context, log *slog.Logger, engines []string) {
 	for _, engine := range engines {
 		if engine == "" {
 			continue
 		}
-		if output, err := exec.CommandContext(ctx,
+		if output, err := traceExec(ctx, exec.CommandContext(ctx,
 			"docker", "rm", "-fv", engine,
-		).CombinedOutput(); err != nil {
-			if !strings.Contains(string(output), "already in progress") {
-				rec.Recorder.Warn("failed to remove old container", progrock.ErrorLabel(err), progrock.Labelf("container", engine))
+		)); err != nil {
+			if !strings.Contains(output, "already in progress") {
+				log.Warn("failed to remove old container", "container", engine, "error", err)
 			}
 		}
 	}
 }
 
+func traceExec(ctx context.Context, cmd *exec.Cmd) (out string, rerr error) {
+	ctx, span := otel.Tracer("").Start(ctx, fmt.Sprintf("exec %s", strings.Join(cmd.Args, " ")))
+	defer telemetry.End(span, func() error { return rerr })
+	_, stdout, stderr := telemetry.WithStdioToOtel(ctx, "")
+	outBuf := new(bytes.Buffer)
+	cmd.Stdout = io.MultiWriter(stdout, outBuf)
+	cmd.Stderr = stderr
+	if err := cmd.Run(); err != nil {
+		return outBuf.String(), errors.Wrap(err, "failed to run command")
+	}
+	return outBuf.String(), nil
+}
+
 func collectLeftoverEngines(ctx context.Context) ([]string, error) {
 	output, err := exec.CommandContext(ctx,
 		"docker", "ps",
diff --git a/engine/client/drivers/driver.go b/engine/client/drivers/driver.go
index 68720eabd9f..ee43a77736b 100644
--- a/engine/client/drivers/driver.go
+++ b/engine/client/drivers/driver.go
@@ -5,14 +5,12 @@ import (
 	"fmt"
 	"net"
 	"net/url"
-
-	"github.com/vito/progrock"
 )
 
 type Driver interface {
 	// Provision creates any underlying resources for a driver, and returns a
 	// Connector that can connect to it.
- Provision(ctx context.Context, rec *progrock.VertexRecorder, url *url.URL, opts *DriverOpts) (Connector, error) + Provision(ctx context.Context, url *url.URL, opts *DriverOpts) (Connector, error) } type Connector interface { diff --git a/engine/client/progrock.go b/engine/client/progrock.go deleted file mode 100644 index 339d3ed6357..00000000000 --- a/engine/client/progrock.go +++ /dev/null @@ -1,35 +0,0 @@ -package client - -import ( - "encoding/json" - "os" - - "github.com/vito/progrock" -) - -type progrockFileWriter struct { - f *os.File - enc *json.Encoder -} - -func newProgrockFileWriter(filePath string) (progrock.Writer, error) { - f, err := os.Create(filePath) - if err != nil { - return nil, err - } - - enc := json.NewEncoder(f) - - return progrockFileWriter{ - f: f, - enc: enc, - }, nil -} - -func (w progrockFileWriter) WriteStatus(ev *progrock.StatusUpdate) error { - return w.enc.Encode(ev) -} - -func (w progrockFileWriter) Close() error { - return w.f.Close() -} diff --git a/engine/client/tracing.go b/engine/client/tracing.go new file mode 100644 index 00000000000..8528dd43b50 --- /dev/null +++ b/engine/client/tracing.go @@ -0,0 +1,12 @@ +package client + +import ( + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/trace" +) + +const InstrumentationLibrary = "dagger.io/engine.client" + +func Tracer() trace.Tracer { + return otel.Tracer(InstrumentationLibrary) +} diff --git a/engine/opts.go b/engine/opts.go index b206e29d29c..795699bec66 100644 --- a/engine/opts.go +++ b/engine/opts.go @@ -10,7 +10,7 @@ import ( "strconv" "unicode" - "github.com/dagger/dagger/core/pipeline" + "github.com/dagger/dagger/telemetry" controlapi "github.com/moby/buildkit/api/services/control" "github.com/opencontainers/go-digest" "google.golang.org/grpc/metadata" @@ -57,7 +57,7 @@ type ClientMetadata struct { ClientHostname string `json:"client_hostname"` // (Optional) Pipeline labels for e.g. vcs info like branch, commit, etc. - Labels []pipeline.Label `json:"labels"` + Labels telemetry.Labels `json:"labels"` // ParentClientIDs is a list of session ids that are parents of the current // session. 
The first element is the direct parent, the second element is the diff --git a/engine/server/buildkitcontroller.go b/engine/server/buildkitcontroller.go index 4353b6905b8..f4d8fc7adba 100644 --- a/engine/server/buildkitcontroller.go +++ b/engine/server/buildkitcontroller.go @@ -5,11 +5,11 @@ import ( "errors" "fmt" "io" + "log/slog" "runtime/debug" "sync" "time" - "github.com/dagger/dagger/engine" controlapi "github.com/moby/buildkit/api/services/control" apitypes "github.com/moby/buildkit/api/types" "github.com/moby/buildkit/cache/remotecache" @@ -27,21 +27,21 @@ import ( "github.com/moby/buildkit/util/imageutil" "github.com/moby/buildkit/util/leaseutil" "github.com/moby/buildkit/util/throttle" - "github.com/moby/buildkit/util/tracing/transform" bkworker "github.com/moby/buildkit/worker" "github.com/moby/locker" "github.com/sirupsen/logrus" - "go.opentelemetry.io/otel/sdk/trace" + logsv1 "go.opentelemetry.io/proto/otlp/collector/logs/v1" + metricsv1 "go.opentelemetry.io/proto/otlp/collector/metrics/v1" tracev1 "go.opentelemetry.io/proto/otlp/collector/trace/v1" "golang.org/x/sync/errgroup" "google.golang.org/grpc" - "google.golang.org/grpc/codes" - "google.golang.org/grpc/status" + + "github.com/dagger/dagger/engine" + "github.com/dagger/dagger/telemetry" ) type BuildkitController struct { BuildkitControllerOpts - *tracev1.UnimplementedTraceServiceServer // needed for grpc service register to not complain llbSolver *llbsolver.Solver genericSolver *solver.Solver @@ -67,7 +67,7 @@ type BuildkitControllerOpts struct { Entitlements []string EngineName string Frontends map[string]frontend.Frontend - TraceCollector trace.SpanExporter + TelemetryPubSub *telemetry.PubSub UpstreamCacheExporters map[string]remotecache.ResolveCacheExporterFunc UpstreamCacheImporters map[string]remotecache.ResolveCacheImporterFunc DNSConfig *oci.DNSConfig @@ -188,6 +188,7 @@ func (e *BuildkitController) Session(stream controlapi.Control_SessionServer) (r bklog.G(ctx).Debug("session manager handling conn") err := e.SessionManager.HandleConn(egctx, conn, hijackmd) bklog.G(ctx).WithError(err).Debug("session manager handle conn done") + slog.Warn("session manager handle conn done", "err", err, "ctxErr", ctx.Err(), "egCtxErr", egctx.Err()) if err != nil { return fmt.Errorf("handleConn: %w", err) } @@ -369,20 +370,20 @@ func (e *BuildkitController) ListWorkers(ctx context.Context, r *controlapi.List return resp, nil } -func (e *BuildkitController) Export(ctx context.Context, req *tracev1.ExportTraceServiceRequest) (*tracev1.ExportTraceServiceResponse, error) { - if e.TraceCollector == nil { - return nil, status.Errorf(codes.Unavailable, "trace collector not configured") - } - err := e.TraceCollector.ExportSpans(ctx, transform.Spans(req.GetResourceSpans())) - if err != nil { - return nil, err - } - return &tracev1.ExportTraceServiceResponse{}, nil -} - func (e *BuildkitController) Register(server *grpc.Server) { controlapi.RegisterControlServer(server, e) - tracev1.RegisterTraceServiceServer(server, e) + + traceSrv := &telemetry.TraceServer{PubSub: e.TelemetryPubSub} + tracev1.RegisterTraceServiceServer(server, traceSrv) + telemetry.RegisterTracesSourceServer(server, traceSrv) + + logsSrv := &telemetry.LogsServer{PubSub: e.TelemetryPubSub} + logsv1.RegisterLogsServiceServer(server, logsSrv) + telemetry.RegisterLogsSourceServer(server, logsSrv) + + metricsSrv := &telemetry.MetricsServer{PubSub: e.TelemetryPubSub} + metricsv1.RegisterMetricsServiceServer(server, metricsSrv) + telemetry.RegisterMetricsSourceServer(server, 
metricsSrv) } func (e *BuildkitController) Close() error { @@ -438,7 +439,6 @@ func (e *BuildkitController) Solve(ctx context.Context, req *controlapi.SolveReq } func (e *BuildkitController) Status(req *controlapi.StatusRequest, stream controlapi.Control_StatusServer) error { - // we send status updates over progrock session attachables instead return fmt.Errorf("status not implemented") } diff --git a/engine/server/server.go b/engine/server/server.go index 4fdec56127a..dcc26da76ce 100644 --- a/engine/server/server.go +++ b/engine/server/server.go @@ -19,27 +19,26 @@ import ( "github.com/dagger/dagger/analytics" "github.com/dagger/dagger/auth" "github.com/dagger/dagger/core" - "github.com/dagger/dagger/core/pipeline" "github.com/dagger/dagger/core/schema" "github.com/dagger/dagger/dagql" "github.com/dagger/dagger/engine" "github.com/dagger/dagger/engine/buildkit" "github.com/dagger/dagger/engine/cache" - "github.com/dagger/dagger/engine/client" - "github.com/dagger/dagger/tracing" + "github.com/dagger/dagger/telemetry" "github.com/moby/buildkit/cache/remotecache" bkgw "github.com/moby/buildkit/frontend/gateway/client" - "github.com/moby/buildkit/identity" "github.com/moby/buildkit/session" "github.com/moby/buildkit/util/bklog" "github.com/opencontainers/go-digest" "github.com/sirupsen/logrus" "github.com/vektah/gqlparser/v2/gqlerror" - "github.com/vito/progrock" + "go.opentelemetry.io/otel/propagation" + "go.opentelemetry.io/otel/trace" ) type DaggerServer struct { serverID string + traceID trace.TraceID clientIDToSecretToken map[string]string connectedClients int @@ -58,9 +57,8 @@ type DaggerServer struct { services *core.Services - recorder *progrock.Recorder - analytics analytics.Tracker - progCleanup func() error + pubsub *telemetry.PubSub + analytics analytics.Tracker doneCh chan struct{} closeOnce sync.Once @@ -82,15 +80,25 @@ func (e *BuildkitController) newDaggerServer(ctx context.Context, clientMetadata doneCh: make(chan struct{}, 1), + pubsub: e.TelemetryPubSub, + services: core.NewServices(), mainClientCallerID: clientMetadata.ClientID, upstreamCacheExporters: e.UpstreamCacheExporters, } - labels := clientMetadata.Labels - labels = append(labels, pipeline.EngineLabel(e.EngineName)) - labels = append(labels, pipeline.LoadServerLabels(engine.Version, runtime.GOOS, runtime.GOARCH, e.cacheManager.ID() != cache.LocalCacheID)...) + if traceID := trace.SpanContextFromContext(ctx).TraceID(); traceID.IsValid() { + s.traceID = traceID + } else { + slog.Warn("invalid traceID", "traceID", traceID.String()) + } + + labels := clientMetadata.Labels. + WithEngineLabel(e.EngineName). + WithServerLabels(engine.Version, runtime.GOOS, runtime.GOARCH, + e.cacheManager.ID() != cache.LocalCacheID) + s.analytics = analytics.New(analytics.Config{ DoNotTrack: clientMetadata.DoNotTrack || analytics.DoNotTrack(), Labels: labels, @@ -103,35 +111,6 @@ func (e *BuildkitController) newDaggerServer(ctx context.Context, clientMetadata if err != nil { return nil, fmt.Errorf("get session: %w", err) } - clientConn := sessionCaller.Conn() - - // using a new random ID rather than server ID to squash any nefarious attempts to set - // a server id that has e.g. ../../.. 
or similar in it - progSockPath := fmt.Sprintf("/run/dagger/server-progrock-%s.sock", identity.NewID()) - - progClient := progrock.NewProgressServiceClient(clientConn) - progUpdates, err := progClient.WriteUpdates(ctx) - if err != nil { - return nil, err - } - - progWriter, progCleanup, err := buildkit.ProgrockForwarder(progSockPath, progrock.MultiWriter{ - progrock.NewRPCWriter(clientConn, progUpdates), - buildkit.ProgrockLogrusWriter{}, - }) - if err != nil { - return nil, err - } - s.progCleanup = progCleanup - - progrockLabels := []*progrock.Label{} - for _, label := range labels { - progrockLabels = append(progrockLabels, &progrock.Label{ - Name: label.Name, - Value: label.Value, - }) - } - s.recorder = progrock.NewRecorder(progWriter, progrock.WithLabels(progrockLabels...)) secretStore := core.NewSecretStore() authProvider := auth.NewRegistryAuthProvider() @@ -168,24 +147,21 @@ func (e *BuildkitController) newDaggerServer(ctx context.Context, clientMetadata AuthProvider: authProvider, PrivilegedExecEnabled: e.privilegedExecEnabled, UpstreamCacheImports: cacheImporterCfgs, - ProgSockPath: progSockPath, MainClientCaller: sessionCaller, MainClientCallerID: s.mainClientCallerID, DNSConfig: e.DNSConfig, Frontends: e.Frontends, }, - ProgrockSocketPath: progSockPath, - Services: s.services, - Platform: core.Platform(e.worker.Platforms(true)[0]), - Secrets: secretStore, - OCIStore: e.worker.ContentStore(), - LeaseManager: e.worker.LeaseManager(), - Auth: authProvider, - ClientCallContext: s.clientCallContext, - ClientCallMu: s.clientCallMu, - Endpoints: s.endpoints, - EndpointMu: s.endpointMu, - Recorder: s.recorder, + Services: s.services, + Platform: core.Platform(e.worker.Platforms(true)[0]), + Secrets: secretStore, + OCIStore: e.worker.ContentStore(), + LeaseManager: e.worker.LeaseManager(), + Auth: authProvider, + ClientCallContext: s.clientCallContext, + ClientCallMu: s.clientCallMu, + Endpoints: s.endpoints, + EndpointMu: s.endpointMu, }) if err != nil { return nil, err @@ -196,7 +172,7 @@ func (e *BuildkitController) newDaggerServer(ctx context.Context, clientMetadata // stash away the cache so we can share it between other servers root.Cache = dag.Cache - dag.Around(tracing.AroundFunc) + dag.Around(telemetry.AroundFunc) coreMod := &schema.CoreMod{Dag: dag} root.DefaultDeps = core.NewModDeps(root, []core.Mod{coreMod}) @@ -244,8 +220,12 @@ func (s *DaggerServer) ServeClientConn( Handler: s, ReadHeaderTimeout: 30 * time.Second, BaseContext: func(net.Listener) context.Context { - ctx := bklog.WithLogger(context.Background(), bklog.G(ctx)) - ctx = progrock.ToContext(ctx, s.recorder) + // FIXME(vito) not sure if this is right, being conservative and + // respecting original context.Background(). later things added to ctx + // might be redundant, or maybe we're OK with propagating cancellation + // too (seems less likely considering how delicate draining events is). + ctx := context.WithoutCancel(ctx) + ctx = bklog.WithLogger(ctx, bklog.G(ctx)) ctx = engine.ContextWithClientMetadata(ctx, clientMetadata) ctx = analytics.WithContext(ctx, s.analytics) return ctx @@ -257,6 +237,10 @@ func (s *DaggerServer) ServeClientConn( func (s *DaggerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) { ctx := r.Context() + + // propagate span context from the client (i.e. 
for Dagger-in-Dagger) + ctx = propagation.TraceContext{}.Extract(ctx, propagation.HeaderCarrier(r.Header)) + errorOut := func(err error, code int) { bklog.G(ctx).WithError(err).Error("failed to serve request") http.Error(w, err.Error(), code) @@ -274,14 +258,6 @@ func (s *DaggerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) { return } - rec := progrock.FromContext(ctx) - if header := r.Header.Get(client.ProgrockParentHeader); header != "" { - rec = rec.WithParent(header) - } else if callContext.ProgrockParent != "" { - rec = rec.WithParent(callContext.ProgrockParent) - } - ctx = progrock.ToContext(ctx, rec) - schema, err := callContext.Deps.Schema(ctx) if err != nil { // TODO: technically this is not *always* bad request, should ideally be more specific and differentiate @@ -328,32 +304,62 @@ func (s *DaggerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) { // return res // }) mux := http.NewServeMux() + mux.Handle("/query", srv) + mux.Handle("/shutdown", http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { ctx := req.Context() - if len(s.upstreamCacheExporterCfgs) > 0 && clientMetadata.ClientID == s.mainClientCallerID { - bklog.G(ctx).Debugf("running cache export for client %s", clientMetadata.ClientID) - cacheExporterFuncs := make([]buildkit.ResolveCacheExporterFunc, len(s.upstreamCacheExporterCfgs)) - for i, cacheExportCfg := range s.upstreamCacheExporterCfgs { - cacheExportCfg := cacheExportCfg - cacheExporterFuncs[i] = func(ctx context.Context, sessionGroup session.Group) (remotecache.Exporter, error) { - exporterFunc, ok := s.upstreamCacheExporters[cacheExportCfg.Type] - if !ok { - return nil, fmt.Errorf("unknown cache exporter type %q", cacheExportCfg.Type) + + immediate := req.URL.Query().Get("immediate") == "true" + + slog := slog.With( + "isImmediate", immediate, + "isMainClient", clientMetadata.ClientID == s.mainClientCallerID, + "isModule", clientMetadata.ModuleCallerDigest != "", + "serverID", s.serverID, + "traceID", s.traceID, + "clientID", clientMetadata.ClientID, + "mainClientID", s.mainClientCallerID, + "callerID", clientMetadata.ModuleCallerDigest) + + slog.Debug("shutting down server") + defer slog.Debug("done shutting down server") + + if clientMetadata.ClientID == s.mainClientCallerID { + // Stop services, since the main client is going away, and we + // want the client to see them stop. 
+ s.services.StopClientServices(ctx, s.serverID) + + // Start draining telemetry + s.pubsub.Drain(s.traceID, immediate) + + if len(s.upstreamCacheExporterCfgs) > 0 { + bklog.G(ctx).Debugf("running cache export for client %s", clientMetadata.ClientID) + cacheExporterFuncs := make([]buildkit.ResolveCacheExporterFunc, len(s.upstreamCacheExporterCfgs)) + for i, cacheExportCfg := range s.upstreamCacheExporterCfgs { + cacheExportCfg := cacheExportCfg + cacheExporterFuncs[i] = func(ctx context.Context, sessionGroup session.Group) (remotecache.Exporter, error) { + exporterFunc, ok := s.upstreamCacheExporters[cacheExportCfg.Type] + if !ok { + return nil, fmt.Errorf("unknown cache exporter type %q", cacheExportCfg.Type) + } + return exporterFunc(ctx, sessionGroup, cacheExportCfg.Attrs) } - return exporterFunc(ctx, sessionGroup, cacheExportCfg.Attrs) } + s.clientCallMu.RLock() + bk := s.clientCallContext[""].Root.Buildkit + s.clientCallMu.RUnlock() + err := bk.UpstreamCacheExport(ctx, cacheExporterFuncs) + if err != nil { + bklog.G(ctx).WithError(err).Errorf("error running cache export for client %s", clientMetadata.ClientID) + } + bklog.G(ctx).Debugf("done running cache export for client %s", clientMetadata.ClientID) } - s.clientCallMu.RLock() - bk := s.clientCallContext[""].Root.Buildkit - s.clientCallMu.RUnlock() - err := bk.UpstreamCacheExport(ctx, cacheExporterFuncs) - if err != nil { - bklog.G(ctx).WithError(err).Errorf("error running cache export for client %s", clientMetadata.ClientID) - } - bklog.G(ctx).Debugf("done running cache export for client %s", clientMetadata.ClientID) } + + telemetry.Flush(ctx) })) + s.endpointMu.RLock() for path, handler := range s.endpoints { mux.Handle(path, handler) @@ -429,27 +435,26 @@ func (s *DaggerServer) Close(ctx context.Context) error { close(s.doneCh) }) - var err error + var errs error + + slog.Debug("server closing; stopping client services and flushing", "server", s.serverID, "trace", s.traceID) if err := s.services.StopClientServices(ctx, s.serverID); err != nil { - slog.Error("failed to stop client services", "error", err) + errs = errors.Join(errs, fmt.Errorf("stop client services: %w", err)) } s.clientCallMu.RLock() for _, callCtx := range s.clientCallContext { - err = errors.Join(err, callCtx.Root.Buildkit.Close()) + errs = errors.Join(errs, callCtx.Root.Buildkit.Close()) } s.clientCallMu.RUnlock() - // mark all groups completed - s.recorder.Complete() - // close the recorder so the UI exits - err = errors.Join(err, s.recorder.Close()) - err = errors.Join(err, s.progCleanup()) // close the analytics recorder - err = errors.Join(err, s.analytics.Close()) + errs = errors.Join(errs, s.analytics.Close()) + + telemetry.Flush(ctx) - return err + return errs } func (s *DaggerServer) Wait(ctx context.Context) error { diff --git a/engine/session/h2c.go b/engine/session/h2c.go index b2fc353adf3..a732209e08d 100644 --- a/engine/session/h2c.go +++ b/engine/session/h2c.go @@ -7,21 +7,20 @@ import ( "net" "sync" + "github.com/dagger/dagger/telemetry" "github.com/moby/buildkit/util/grpcerrors" - "github.com/vito/progrock" "google.golang.org/grpc" codes "google.golang.org/grpc/codes" ) type TunnelListenerAttachable struct { - rec *progrock.Recorder + rootCtx context.Context + UnimplementedTunnelListenerServer } -func NewTunnelListenerAttachable(rec *progrock.Recorder) TunnelListenerAttachable { - return TunnelListenerAttachable{ - rec: rec, - } +func NewTunnelListenerAttachable(rootCtx context.Context) TunnelListenerAttachable { + return 
TunnelListenerAttachable{rootCtx: rootCtx} } func (s TunnelListenerAttachable) Register(srv *grpc.Server) { @@ -29,6 +28,8 @@ func (s TunnelListenerAttachable) Register(srv *grpc.Server) { } func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error { + log := telemetry.GlobalLogger(s.rootCtx) + req, err := srv.Recv() if err != nil { return err @@ -64,7 +65,7 @@ func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error conn, err := l.Accept() if err != nil { if !errors.Is(err, net.ErrClosed) { - s.rec.Warn("accept error", progrock.ErrorLabel(err)) + log.Warn("accept error", "error", err) } return } @@ -81,7 +82,7 @@ func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error }) sendL.Unlock() if err != nil { - s.rec.Warn("send connID error", progrock.ErrorLabel(err)) + log.Warn("send connID error", "error", err) return } @@ -96,7 +97,7 @@ func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error return } - s.rec.Warn("conn read error", progrock.ErrorLabel(err)) + log.Warn("conn read error", "error", err) return } @@ -107,7 +108,7 @@ func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error }) sendL.Unlock() if err != nil { - s.rec.Warn("listener send response error", progrock.ErrorLabel(err)) + log.Warn("listener send response error", "error", err) return } } @@ -133,13 +134,13 @@ func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error return nil } - s.rec.Error("listener receive request error", progrock.ErrorLabel(err)) + log.Error("listener receive request error", "error", err) return err } connID := req.GetConnId() if req.GetConnId() == "" { - s.rec.Warn("listener request with no connID") + log.Warn("listener request with no connID") continue } @@ -147,14 +148,14 @@ func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error conn, ok := conns[connID] connsL.Unlock() if !ok { - s.rec.Warn("listener request for unknown connID", progrock.Labelf("connID", connID)) + log.Warn("listener request for unknown connID", "connID", connID) continue } switch { case req.GetClose(): if err := conn.Close(); err != nil { - s.rec.Warn("conn close error", progrock.ErrorLabel(err)) + log.Warn("conn close error", "error", err) continue } connsL.Lock() @@ -168,7 +169,7 @@ func (s TunnelListenerAttachable) Listen(srv TunnelListener_ListenServer) error return nil } - s.rec.Warn("conn write error", progrock.ErrorLabel(err)) + log.Warn("conn write error", "error", err) continue } } diff --git a/go.mod b/go.mod index 056145a36fd..3bde5c04904 100644 --- a/go.mod +++ b/go.mod @@ -13,8 +13,8 @@ require ( github.com/a-h/templ v0.2.543 github.com/adrg/xdg v0.4.0 github.com/blang/semver v3.5.1+incompatible - github.com/cenkalti/backoff/v4 v4.2.1 - github.com/charmbracelet/bubbles v0.18.0 + github.com/cenkalti/backoff/v4 v4.3.0 + github.com/charmbracelet/bubbles v0.18.0 // indirect github.com/charmbracelet/bubbletea v0.25.0 github.com/charmbracelet/lipgloss v0.9.1 github.com/containerd/console v1.0.4-0.20230313162750-1ae8d489ac81 @@ -35,11 +35,10 @@ require ( github.com/gofrs/flock v0.8.1 github.com/gogo/protobuf v1.3.2 github.com/google/go-containerregistry v0.19.0 - github.com/google/go-github/v50 v50.2.0 - github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 + github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect github.com/google/uuid v1.6.0 github.com/gorilla/websocket v1.5.0 - github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 + 
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 // indirect github.com/iancoleman/strcase v0.3.0 github.com/jackpal/gateway v1.0.7 github.com/juju/ansiterm v1.0.0 @@ -53,7 +52,6 @@ require ( github.com/moby/sys/mount v0.3.3 github.com/muesli/reflow v0.3.0 github.com/muesli/termenv v0.15.2 - github.com/nxadm/tail v1.4.8 github.com/opencontainers/go-digest v1.0.0 github.com/opencontainers/image-spec v1.1.0-rc5 github.com/opencontainers/runc v1.1.12 @@ -74,40 +72,47 @@ require ( github.com/stretchr/testify v1.9.0 github.com/tidwall/gjson v1.17.0 github.com/tonistiigi/fsutil v0.0.0-20240301111122-7525a1af2bb5 - github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea + github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea // indirect github.com/urfave/cli v1.22.14 github.com/vektah/gqlparser/v2 v2.5.10 - github.com/vito/midterm v0.1.5-0.20240215023001-e649b2677bfa + github.com/vito/midterm v0.1.5-0.20240307214207-d0271a7ca452 github.com/vito/progrock v0.10.2-0.20240221152222-63c8df30db8d - github.com/weaveworks/common v0.0.0-20230119144549-0aaa5abd1e63 github.com/zeebo/xxh3 v1.0.2 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.0 - go.opentelemetry.io/otel v1.21.0 - go.opentelemetry.io/otel/exporters/jaeger v1.17.0 - go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 - go.opentelemetry.io/otel/sdk v1.21.0 - go.opentelemetry.io/otel/trace v1.21.0 - go.opentelemetry.io/proto/otlp v1.0.0 + go.opentelemetry.io/otel v1.24.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0 + go.opentelemetry.io/otel/sdk v1.24.0 + go.opentelemetry.io/otel/trace v1.24.0 + go.opentelemetry.io/proto/otlp v1.1.0 golang.org/x/crypto v0.20.0 golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa golang.org/x/mod v0.14.0 golang.org/x/net v0.21.0 golang.org/x/oauth2 v0.17.0 golang.org/x/sync v0.6.0 - golang.org/x/sys v0.17.0 + golang.org/x/sys v0.18.0 golang.org/x/term v0.17.0 golang.org/x/text v0.14.0 golang.org/x/tools v0.17.0 - google.golang.org/grpc v1.61.0 - google.golang.org/protobuf v1.32.0 + google.golang.org/grpc v1.62.1 + google.golang.org/protobuf v1.33.0 gopkg.in/yaml.v3 v3.0.1 gotest.tools/v3 v3.5.1 - oss.terrastruct.com/d2 v0.6.1 - oss.terrastruct.com/util-go v0.0.0-20231101220827-55b3812542c2 ) require ( - cdr.dev/slog v1.4.2 // indirect + github.com/google/go-github/v59 v59.0.0 + github.com/koron-go/prefixw v1.0.0 + github.com/lmittmann/tint v1.0.4 + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0 + go.opentelemetry.io/otel/log v0.0.1-alpha + go.opentelemetry.io/otel/sdk/metric v1.24.0 + google.golang.org/genproto/googleapis/rpc v0.0.0-20240325203815-454cdb8f5daa +) + +require ( dario.cat/mergo v1.0.0 // indirect github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.0 // indirect @@ -118,13 +123,9 @@ require ( github.com/Microsoft/go-winio v0.6.1 // indirect github.com/Microsoft/hcsshim v0.11.4 // indirect github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371 // indirect - github.com/PuerkitoBio/goquery v1.8.1 // indirect github.com/agext/levenshtein v1.2.3 // indirect github.com/agnivade/levenshtein v1.1.1 // indirect - github.com/alecthomas/chroma v0.10.0 // indirect - github.com/alecthomas/chroma/v2 v2.11.1 // indirect github.com/anchore/go-struct-converter 
v0.0.0-20221118182256-c68fdcfa2092 // indirect - github.com/andybalholm/cascadia v1.3.2 // indirect github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2 // indirect github.com/aws/aws-sdk-go-v2 v1.24.1 // indirect github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect @@ -148,7 +149,6 @@ require ( github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/cespare/xxhash/v2 v2.2.0 // indirect - github.com/charmbracelet/harmonica v0.2.0 // indirect github.com/cloudflare/circl v1.3.7 // indirect github.com/containerd/cgroups v1.1.0 // indirect github.com/containerd/fifo v1.1.0 // indirect @@ -165,33 +165,25 @@ require ( github.com/davecgh/go-spew v1.1.1 // indirect github.com/dimchansky/utfbom v1.1.1 // indirect github.com/distribution/reference v0.5.0 // indirect - github.com/dlclark/regexp2 v1.10.0 // indirect github.com/docker/docker-credential-helpers v0.8.0 // indirect github.com/docker/go-connections v0.5.0 // indirect github.com/docker/go-metrics v0.0.1 // indirect github.com/docker/go-units v0.5.0 // indirect - github.com/dop251/goja v0.0.0-20231027120936-b396bb4c349d // indirect github.com/emirpasic/gods v1.18.1 // indirect - github.com/fatih/color v1.16.0 // indirect github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fogleman/ease v0.0.0-20170301025033-8da417bf1776 // indirect - github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect github.com/go-git/go-billy/v5 v5.5.0 // indirect - github.com/go-kit/log v0.2.1 // indirect - github.com/go-logfmt/logfmt v0.5.1 // indirect - github.com/go-logr/logr v1.3.0 // indirect + github.com/go-logr/logr v1.4.1 // indirect github.com/go-logr/stdr v1.2.2 // indirect - github.com/go-sourcemap/sourcemap v2.1.3+incompatible // indirect github.com/gogo/googleapis v1.4.1 // indirect github.com/golang-jwt/jwt/v4 v4.4.2 // indirect - github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/protobuf v1.5.3 // indirect github.com/google/go-cmp v0.6.0 // indirect github.com/google/go-querystring v1.1.0 // indirect github.com/google/pprof v0.0.0-20231101202521-4ca4178f5c7a // indirect - github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0 // indirect github.com/hanwen/go-fuse/v2 v2.4.0 // indirect github.com/hashicorp/errwrap v1.1.0 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // indirect @@ -204,7 +196,6 @@ require ( github.com/inconshreveable/mousetrap v1.1.0 // indirect github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect github.com/jmespath/go-jmespath v0.4.0 // indirect - github.com/jonboulle/clockwork v0.4.0 // indirect github.com/kevinburke/ssh_config v1.2.0 // indirect github.com/klauspost/cpuid/v2 v2.0.9 // indirect github.com/kylelemons/godebug v1.1.0 // indirect @@ -213,11 +204,9 @@ require ( github.com/mattn/go-colorable v0.1.13 // indirect github.com/mattn/go-localereader v0.0.1 // indirect github.com/mattn/go-runewidth v0.0.15 // indirect - github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect - github.com/mazznoer/csscolorparser v0.1.3 // indirect github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/mitchellh/hashstructure/v2 v2.0.2 // indirect - github.com/mitchellh/mapstructure v1.5.0 // indirect + github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c // 
indirect github.com/moby/sys/mountinfo v0.7.1 // indirect github.com/moby/sys/sequential v0.5.0 // indirect github.com/moby/sys/signal v0.7.0 // indirect @@ -230,10 +219,10 @@ require ( github.com/pjbgf/sha1cd v0.3.0 // indirect github.com/pkg/profile v1.5.0 // indirect github.com/pmezard/go-difflib v1.0.0 // indirect - github.com/prometheus/client_golang v1.17.0 // indirect - github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16 // indirect - github.com/prometheus/common v0.44.0 // indirect - github.com/rivo/uniseg v0.4.6 // indirect + github.com/prometheus/client_golang v1.19.0 // indirect + github.com/prometheus/client_model v0.6.0 // indirect + github.com/prometheus/common v0.48.0 // indirect + github.com/rivo/uniseg v0.4.7 // indirect github.com/russross/blackfriday/v2 v2.1.0 // indirect github.com/samber/lo v1.38.1 // indirect github.com/samber/slog-common v0.14.0 // indirect @@ -251,31 +240,20 @@ require ( github.com/vbatts/tar-split v0.11.5 // indirect github.com/vishvananda/netlink v1.2.1-beta.2 // indirect github.com/vishvananda/netns v0.0.4 // indirect - github.com/weaveworks/promrus v1.2.0 // indirect github.com/xanzy/ssh-agent v0.3.3 // indirect - github.com/yuin/goldmark v1.6.0 // indirect github.com/zmb3/spotify/v2 v2.3.1 // indirect go.etcd.io/bbolt v1.3.7 // indirect go.opencensus.io v0.24.0 // indirect go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.45.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.45.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.42.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.42.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.42.0 // indirect - go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect - go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 // indirect - go.opentelemetry.io/otel/exporters/prometheus v0.42.0 // indirect - go.opentelemetry.io/otel/metric v1.21.0 // indirect - go.opentelemetry.io/otel/sdk/metric v1.19.0 // indirect + go.opentelemetry.io/otel/exporters/prometheus v0.46.0 // indirect + go.opentelemetry.io/otel/metric v1.24.0 // indirect go.uber.org/multierr v1.11.0 // indirect - golang.org/x/image v0.14.0 // indirect golang.org/x/time v0.3.0 // indirect - golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 // indirect - gonum.org/v1/plot v0.14.0 // indirect google.golang.org/appengine v1.6.8 // indirect - google.golang.org/genproto v0.0.0-20231106174013-bbf56f31fb17 // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20231106174013-bbf56f31fb17 // indirect - gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect + google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 // indirect gopkg.in/warnings.v0 v0.1.2 // indirect ) diff --git a/go.sum b/go.sum index b34632f6bed..c670cf5ea2d 100644 --- a/go.sum +++ b/go.sum @@ -1,7 +1,5 @@ bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8= bazil.org/fuse v0.0.0-20180421153158-65cc252bf669/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8= -cdr.dev/slog v1.4.2 h1:fIfiqASYQFJBZiASwL825atyzeA96NsqSxx2aL61P8I= -cdr.dev/slog v1.4.2/go.mod h1:0EkH+GkFNxizNR+GAXUEdUHanxUH5t9zqPILmPM/Vn8= cloud.google.com/go v0.25.0/go.mod 
h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.31.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= @@ -23,34 +21,13 @@ cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKV cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs= cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc= cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY= -cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI= -cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk= -cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg= -cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8= -cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0= -cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY= -cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM= -cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY= -cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ= -cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI= -cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4= -cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc= -cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA= -cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A= -cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc= -cloud.google.com/go v0.110.10 h1:LXy9GEO+timppncPIAZoOj3l58LIU9k+kn48AN7IO3Y= +cloud.google.com/go v0.112.0 h1:tpFCD7hpHFlQ8yPwT3x+QeXqc2T6+n6T+hmABHfDUSM= cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= -cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow= -cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM= -cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M= -cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s= -cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= -cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U= cloud.google.com/go/compute v1.23.3 h1:6sVlXXBmbd7jNX0Ipq0trII3e4n1/MsADLK6a+aiVlk= cloud.google.com/go/compute v1.23.3/go.mod h1:VCgBUoMnIVIR0CscqQiPJLAG25E3ZRZMzcFZeQ+h8CI= cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY= @@ -58,11 +35,6 @@ cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2Aawl cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= 
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk= -cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY= -cloud.google.com/go/logging v1.8.1 h1:26skQWPeYhvIasWKm48+Eq7oUqdcdbwsCVwz5Ys0FvU= -cloud.google.com/go/logging v1.8.1/go.mod h1:TJjR+SimHwuC8MZ9cjByQulAMgni+RkXeI3wwctHJEI= -cloud.google.com/go/longrunning v0.5.4 h1:w8xEcbZodnA2BbW6sVirkkoC+1gP8wS57EUUgGS0GVg= -cloud.google.com/go/longrunning v0.5.4/go.mod h1:zqNVncI0BOP8ST6XQD1+VcvuShMmq7+xFSzOL++V0dI= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= @@ -72,7 +44,6 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk= cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs= cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= -cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y= code.gitea.io/sdk/gitea v0.12.0/go.mod h1:z3uwDV/b9Ls47NGukYM9XhnHtqPh/J+t40lsUrR6JDY= contrib.go.opencensus.io/exporter/aws v0.0.0-20181029163544-2befc13012d0/go.mod h1:uu1P0UCM/6RbsMrgPa98ll8ZcHM858i/AD06a9aLRCA= contrib.go.opencensus.io/exporter/ocagent v0.5.0/go.mod h1:ImxhfLRpxoYiSq891pBrLVhN+qmP8BTVvdH2YLs7Gl0= @@ -84,8 +55,6 @@ dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg= git.apache.org/thrift.git v0.12.0/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg= -git.sr.ht/~sbinet/gg v0.5.0 h1:6V43j30HM623V329xA9Ntq+WJrMjDxRjuAB1LFWF5m8= -git.sr.ht/~sbinet/gg v0.5.0/go.mod h1:G2C0eRESqlKhS7ErsNey6HHrqU1PwsnCQlekFi9Q2Oo= github.com/99designs/gqlgen v0.17.41 h1:C1/zYMhGVP5TWNCNpmZ9Mb6CqT1Vr5SHEWoTOEJ3v3I= github.com/99designs/gqlgen v0.17.41/go.mod h1:GQ6SyMhwFbgHR0a8r2Wn8fYgEwPxxmndLFPhU63+cJE= github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU= @@ -178,8 +147,6 @@ github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAE github.com/OpenPeeDeeP/depguard v1.0.1/go.mod h1:xsIw86fROiiwelg+jB2uM9PiKihMMmUx/1V+TNhjQvM= github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371 h1:kkhsdkhsCvIsutKu5zLMgWtgh9YxGCNAw8Ad8hjwfYg= github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371/go.mod h1:EjAoLdwvbIOoOQr3ihjnSoLZRtE8azugULFRteWMNc0= -github.com/PuerkitoBio/goquery v1.8.1 h1:uQxhNlArOIdbrH1tr0UXwdVFgDcZDrZVdcpygAcwmWM= -github.com/PuerkitoBio/goquery v1.8.1/go.mod h1:Q8ICL1kNUJ2sXGoAhPGUdYDJvgQgHzJsnnd3H7Ho5jQ= github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= @@ -196,34 +163,18 @@ github.com/agext/levenshtein v1.2.3 
h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7l github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= github.com/agnivade/levenshtein v1.1.1 h1:QY8M92nrzkmr798gCo3kmMyqXFzdQVpxLlGPRBij0P8= github.com/agnivade/levenshtein v1.1.1/go.mod h1:veldBMzWxcCG2ZvUTKD2kJNRdCk5hVbJomOvKkmgYbo= -github.com/ajstarks/svgo v0.0.0-20211024235047-1546f124cd8b h1:slYM766cy2nI3BwyRiyQj/Ud48djTMtMebDqepE95rw= -github.com/ajstarks/svgo v0.0.0-20211024235047-1546f124cd8b/go.mod h1:1KcenG0jGWcpt8ov532z81sp/kMMUG485J2InIOyADM= -github.com/alecthomas/assert/v2 v2.2.1 h1:XivOgYcduV98QCahG8T5XTezV5bylXe+lBxLG2K2ink= -github.com/alecthomas/assert/v2 v2.2.1/go.mod h1:pXcQ2Asjp247dahGEmsZ6ru0UVwnkhktn7S0bBDLxvQ= -github.com/alecthomas/chroma v0.10.0 h1:7XDcGkCQopCNKjZHfYrNLraA+M7e0fMiJ/Mfikbfjek= -github.com/alecthomas/chroma v0.10.0/go.mod h1:jtJATyUxlIORhUOFNA9NZDWGAQ8wpxQQqNSB4rjA/1s= -github.com/alecthomas/chroma/v2 v2.11.1 h1:m9uUtgcdAwgfFNxuqj7AIG75jD2YmL61BBIJWtdzJPs= -github.com/alecthomas/chroma/v2 v2.11.1/go.mod h1:4TQu7gdfuPjSh76j78ietmqh9LiurGF0EpseFXdKMBw= github.com/alecthomas/kingpin v2.2.6+incompatible/go.mod h1:59OFYbFVLKQKq+mqrL6Rw5bR0c3ACQaawgXx0QYndlE= -github.com/alecthomas/repr v0.2.0 h1:HAzS41CIzNW5syS8Mf9UwXhNH1J9aix/BvDRf1Ml2Yk= -github.com/alecthomas/repr v0.2.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= -github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho= -github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE= github.com/anchore/go-struct-converter v0.0.0-20221118182256-c68fdcfa2092 h1:aM1rlcoLz8y5B2r4tTLMiVTrMtpfY0O8EScKJxaSaEc= github.com/anchore/go-struct-converter v0.0.0-20221118182256-c68fdcfa2092/go.mod h1:rYqSE9HbjzpHTI74vwPvae4ZVYZd1lue2ta6xHPdblA= github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883 h1:bvNMNQO63//z+xNgfBlViaCIJKLlCJ6/fmUseuG0wVQ= github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8= -github.com/andybalholm/cascadia v1.3.1/go.mod h1:R4bJ1UQfqADjvDa4P6HZHLh/3OxWWEqc0Sk8XGwHqvA= -github.com/andybalholm/cascadia v1.3.2 h1:3Xi6Dw5lHF15JtdcmAHD3i1+T8plmv7BQ/nsViSLyss= -github.com/andybalholm/cascadia v1.3.2/go.mod h1:7gtRlve5FxPPgIgX36uWBX58OdBsSS6lUvCFb+h7KvU= github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c= github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8= github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4= -github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ= github.com/apex/log v1.1.4/go.mod h1:AlpoD9aScyQfJDVHmLMEcx4oU6LqzkWp4Mg9GdAcEvQ= github.com/apex/log v1.3.0/go.mod h1:jd8Vpsr46WAe3EZSQ/IUMs2qQD/GOycT5rPWCO1yGcs= @@ -248,7 +199,6 @@ 
github.com/aws/aws-sdk-go v1.19.18/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpi github.com/aws/aws-sdk-go v1.19.45/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.20.6/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.25.11/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= -github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.27.1/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.31.6/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0= github.com/aws/aws-sdk-go-v2 v1.24.1 h1:xAojnj+ktS95YZlDf0zxWBkbFtymPeDP+rvUQIH3uAU= @@ -317,34 +267,25 @@ github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0Bsq github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE= github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= github.com/caarlos0/ctrlc v1.0.0/go.mod h1:CdXpj4rmq0q/1Eb44M9zi2nKB0QraNKuRGYGrrHhcQw= -github.com/campoy/embedmd v1.0.0 h1:V4kI2qTJJLf4J29RzI/MAt2c3Bl4dQSYPuflzwFH2hY= -github.com/campoy/embedmd v1.0.0/go.mod h1:oxyr9RCiSXg0M3VJ3ks0UGfp98BpSSGr0kpiX3MzVl8= github.com/campoy/unique v0.0.0-20180121183637-88950e537e7e/go.mod h1:9IOqJGCPMSc6E5ydlp5NIonxObaeu/Iub/X03EKPVYo= github.com/cavaliercoder/go-cpio v0.0.0-20180626203310-925f9528c45e/go.mod h1:oDpT4efm8tSYHXV5tHSdRvBet/b/QzxZ+XyyPehvm3A= -github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM= -github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= +github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= -github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44= github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/charmbracelet/bubbles v0.18.0 h1:PYv1A036luoBGroX6VWjQIE9Syf2Wby2oOl/39KLfy0= github.com/charmbracelet/bubbles v0.18.0/go.mod h1:08qhZhtIwzgrtBjAcJnij1t1H0ZRjwHyGsy6AL11PSw= github.com/charmbracelet/bubbletea v0.25.0 h1:bAfwk7jRz7FKFl9RzlIULPkStffg5k6pNt5dywy4TcM= github.com/charmbracelet/bubbletea v0.25.0/go.mod h1:EN3QDR1T5ZdWmdfDzYcqOCAps45+QIJbLOBxmVNWNNg= -github.com/charmbracelet/harmonica v0.2.0 h1:8NxJWRWg/bzKqqEaaeFNipOu77YR5t8aSwG4pgaUBiQ= -github.com/charmbracelet/harmonica v0.2.0/go.mod h1:KSri/1RMQOZLbw7AHqgcBycp8pgJnQMYYT8QZRqZ1Ao= github.com/charmbracelet/lipgloss v0.9.1 h1:PNyd3jvaJbg4jRHKWXnCj1akQm4rh8dbEzN1p/u1KWg= github.com/charmbracelet/lipgloss v0.9.1/go.mod h1:1mPmG4cxScwUQALAAnacHaigiiHB9Pmr+v1VEawJl6I= github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw= github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= -github.com/chzyer/logex v1.2.0/go.mod 
h1:9+9sk7u7pGNWYMkh0hdiL++6OeibzJccyQU4p4MedaY= github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= -github.com/chzyer/readline v1.5.0/go.mod h1:x22KAscuvRqlLoK9CsoYsmxoXZMMFVyOl86cAH8qUic= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= -github.com/chzyer/test v0.0.0-20210722231415-061457976a23/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg= github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= @@ -352,19 +293,10 @@ github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUK github.com/cloudflare/circl v1.3.7 h1:qlCDlTPz2n9fu58M0Nh1J/JzcFpfgkFHHX3O35r5vcU= github.com/cloudflare/circl v1.3.7/go.mod h1:sRTcRWXGLrKw6yIGJ+l7amYJFfAXbZG0kBSc8r4zxgA= github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI= -github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20231109132714-523115ebc101 h1:7To3pQ+pZo0i3dsWEbinPNFs5gPSBOsJtx3wTT94VBY= -github.com/cncf/xds/go v0.0.0-20231109132714-523115ebc101/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= +github.com/cncf/xds/go v0.0.0-20231128003011-0fa0005c9caa h1:jQCWAUqqlij9Pgj2i/PB79y4KOPYVyFYdROxgaCwdTQ= +github.com/cncf/xds/go v0.0.0-20231128003011-0fa0005c9caa/go.mod h1:x/1Gn8zydmfq8dk6e9PdstVsDgu9RuyIIJqAaF//0IM= github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8= github.com/codahale/hdrhistogram v0.0.0-20160425231609-f8ad88b59a58/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI= -github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI= github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb h1:EDmT6Q9Zs+SbUoc7Ik9EfrFqcylYqgPZ9ANSbTAntnE= github.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb/go.mod h1:ZjrT6AXHbDs86ZSdt/osfBi5qfexBrKUdONk989Wnk4= github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko= @@ -437,7 +369,6 @@ github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7 github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/go-systemd/v22 v22.0.0/go.mod 
h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk= github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk= -github.com/coreos/go-systemd/v22 v22.4.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc= github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs= github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc= github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= @@ -477,11 +408,6 @@ github.com/dimchansky/utfbom v1.1.1 h1:vV6w1AhK4VMnhBno/TPVCoK9U/LP0PkLCS9tbxHdi github.com/dimchansky/utfbom v1.1.1/go.mod h1:SxdoEBH5qIqFocHMyGOXVAybYJdr71b1Q/j0mACtrfE= github.com/distribution/reference v0.5.0 h1:/FUIFXtfc/x2gpa5/VGfiGLuOIdYa1t65IKK2OFGvA0= github.com/distribution/reference v0.5.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E= -github.com/dlclark/regexp2 v1.4.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc= -github.com/dlclark/regexp2 v1.4.1-0.20201116162257-a2a8dda75c91/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc= -github.com/dlclark/regexp2 v1.7.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= -github.com/dlclark/regexp2 v1.10.0 h1:+/GIL799phkJqYW+3YbOd8LCcbHzT0Pbo8zl70MHsq0= -github.com/dlclark/regexp2 v1.10.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E= github.com/dnaeon/go-vcr v1.1.0 h1:ReYa/UBrRyQdant9B4fNHGoCNKw6qh6P0fsdGmZpR7c= github.com/dnaeon/go-vcr v1.1.0/go.mod h1:M7tiix8f0r6mKKJ3Yq/kqU1OYf3MnfmBWVbPx/yU9ko= @@ -522,11 +448,6 @@ github.com/docker/libnetwork v0.8.0-dev.2.0.20200917202933-d0951081b35f/go.mod h github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE= github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM= github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE= -github.com/dop251/goja v0.0.0-20211022113120-dc8c55024d06/go.mod h1:R9ET47fwRVRPZnOGvHxxhuZcbrMCuiqOz3Rlrh4KSnk= -github.com/dop251/goja v0.0.0-20231027120936-b396bb4c349d h1:wi6jN5LVt/ljaBG4ue79Ekzb12QfJ52L9Q98tl8SWhw= -github.com/dop251/goja v0.0.0-20231027120936-b396bb4c349d/go.mod h1:QMWlm50DNe14hD7t24KEqZuUdC9sOTy8W6XbCU1mlw4= -github.com/dop251/goja_nodejs v0.0.0-20210225215109-d91c329300e7/go.mod h1:hn7BA7c8pLvoGndExHudxTDKZ84Pyvv+90pbBjbTz0Y= -github.com/dop251/goja_nodejs v0.0.0-20211022123610-8dd9abb0616d/go.mod h1:DngW8aVqWbuLRMHItjPUyqdj+HWPvnQe8V8y1nDpIbM= github.com/dschmidt/go-layerfs v0.1.0 h1:jE6aHDfjNzS/31DS48th6EkmELwTa1Uf+aO4jRkBs3U= github.com/dschmidt/go-layerfs v0.1.0/go.mod h1:m62aff0hn23Q/tQBRiNSeLD7EUuimDvsuCvCpzBr3Gw= github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= @@ -545,23 +466,15 @@ github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FM github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po= -github.com/envoyproxy/go-control-plane 
v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ= -github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0= -github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/envoyproxy/protoc-gen-validate v1.0.2 h1:QkIBuU5k+x7/QXPvPPnWXWlCdaBFApVqftFV6k087DA= -github.com/envoyproxy/protoc-gen-validate v1.0.2/go.mod h1:GpiZQP3dDbg4JouG/NNS7QWXpgx6x8QiMKdmN72jogE= +github.com/envoyproxy/protoc-gen-validate v1.0.4 h1:gVPz/FMfvh57HdSJQyvBtF00j8JU4zdyUgIUNhlgg0A= +github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew= github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= -github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk= github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM= github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE= -github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc= @@ -571,8 +484,6 @@ github.com/fortytw2/leaktest v1.2.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHqu github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= -github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= -github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY= github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= @@ -581,8 +492,6 @@ github.com/gliderlabs/ssh v0.3.5 h1:OcaySEmAQJgyYcArR+gGGTHCyE7nvhEMTlYY+Dp8CpY= github.com/gliderlabs/ssh v0.3.5/go.mod h1:8XB4KraRrX39qHhT6yxPsHedjA08I/uBVwj4xC+/+z4= github.com/go-critic/go-critic v0.4.1/go.mod h1:7/14rZGnZbY6E38VEGk2kVhoq6itzc1E68facVDK23g= github.com/go-critic/go-critic v0.4.3/go.mod h1:j4O3D4RoIwRqlZw5jJpx0BNfXWWbpcJoKu5cYSe4YmQ= -github.com/go-fonts/liberation v0.3.1 h1:9RPT2NhUpxQ7ukUvz3jeUckmN42T9D9TpjtQcqK/ceM= -github.com/go-fonts/liberation v0.3.1/go.mod 
h1:jdJ+cqF+F4SUL2V+qxBth8fvBpBDS7yloUL5Fi8GTGY= github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI= github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic= github.com/go-git/go-billy/v5 v5.5.0 h1:yEY4yhzCDuMGSv83oGxiBotRzhwhNr8VZyphhiu+mTU= @@ -597,23 +506,14 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2 github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= -github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY= -github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0= -github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU= -github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0= -github.com/go-latex/latex v0.0.0-20230307184459-12ec69307ad9 h1:NxXI5pTAtpEaU49bpLpQoDsu1zrteW/vxzTz8Cd2UAs= -github.com/go-latex/latex v0.0.0-20230307184459-12ec69307ad9/go.mod h1:gWuR/CrFDDeVRFQwHPvsv9soJVB/iqymhuZQuJ3a9OM= github.com/go-lintpack/lintpack v0.5.2/go.mod h1:NwZuYi2nUHho8XEIZ6SIxihrnPoqBTDqfpXvXAN0sXM= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= -github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A= -github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA= -github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs= github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas= github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.3.0 h1:2y3SDp0ZXuc6/cjLSZ+Q3ir+QB9T/iG5yYRXqsagWSY= -github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ= +github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= github.com/go-ole/go-ole v1.2.1/go.mod h1:7FAglXiTm7HKlQRDeOQ6ZNUHidzCWXuZWq/1dTyBNF8= @@ -628,10 +528,6 @@ github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8 github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I= github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= -github.com/go-pdf/fpdf v0.8.0 h1:IJKpdaagnWUeSkUFUjTcSzTppFxmv8ucGQyNPQWxYOQ= -github.com/go-pdf/fpdf v0.8.0/go.mod h1:gfqhcNwXrsd3XYKte9a7vM3smvU/jB4ZRDrmWSxpfdc= -github.com/go-sourcemap/sourcemap v2.1.3+incompatible h1:W1iEw64niKVGogNgBN3ePyLFfuisuzeidWPMPWmECqU= -github.com/go-sourcemap/sourcemap v2.1.3+incompatible/go.mod h1:F8jJfvm2KbVjc5NqelyYJmf/v5J0dwNLS2mL4sNA1Jg= github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= 
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= @@ -662,7 +558,6 @@ github.com/gofrs/flock v0.0.0-20190320160742-5135e617513b/go.mod h1:F1TvTiK9OcQq github.com/gofrs/flock v0.7.3/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU= github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw= github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU= -github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s= github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU= github.com/gogo/googleapis v1.3.2/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c= github.com/gogo/googleapis v1.4.1 h1:1Yx4Myt7BxzvUr5ldGSbwYiZG6t9wGBZ+8/fX3Wvtq0= @@ -671,16 +566,12 @@ github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7a github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= -github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/gogo/status v1.0.3/go.mod h1:SavQ51ycCLnc7dGyJxp8YAmudx8xqiVrRf+6IXRsugc= github.com/golang-jwt/jwt/v4 v4.1.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs= github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= -github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 h1:DACJavvAHhabrF08vX0COfcOBJRhZ8lUbR+ZWIs0Y5g= -github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= @@ -697,8 +588,6 @@ github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4= -github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8= -github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs= github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= @@ -715,12 +604,10 @@ github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QD github.com/golang/protobuf v1.4.2/go.mod 
h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM= github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= -github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2/go.mod h1:k9Qvh+8juN+UKMCS/3jFtGICgW8O96FVaZsaxdzDkR4= github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a/go.mod h1:ryS0uhF+x9jgbj/N71xsEqODy9BN81/GonCZiOzirOk= github.com/golangci/errcheck v0.0.0-20181223084120-ef45e06d44b6/go.mod h1:DbHgvLiFKX1Sh2T1w8Q/h4NAI8MHIpzCdnBUDTXU3I0= @@ -755,9 +642,6 @@ github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE= -github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= @@ -767,8 +651,8 @@ github.com/google/go-containerregistry v0.19.0 h1:uIsMRBV7m/HDkDxE/nXMnv1q+lOOSP github.com/google/go-containerregistry v0.19.0/go.mod h1:u0qB2l7mvtWVR5kNcbFIhFY1hLbf8eeGapA+vbFDCtQ= github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ= github.com/google/go-github/v28 v28.1.1/go.mod h1:bsqJWQX05omyWVmc00nEUql9mhQyv38lDZ8kPZcQVoM= -github.com/google/go-github/v50 v50.2.0 h1:j2FyongEHlO9nxXLc+LP3wuBSVU9mVxfpdYUexMpIfk= -github.com/google/go-github/v50 v50.2.0/go.mod h1:VBY8FB6yPIjrtKhozXv4FQupxKLS6H4m6xFZlT43q8Q= +github.com/google/go-github/v59 v59.0.0 h1:7h6bgpF5as0YQLLkEiVqpgtJqjimMYhBkD4jT5aN3VA= +github.com/google/go-github/v59 v59.0.0/go.mod h1:rJU4R0rQHFVFDOkqGWxfLNo6vEk4dv40oDjhV/gH6wM= github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8= github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU= @@ -781,8 +665,6 @@ github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/ github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= -github.com/google/martian/v3 v3.1.0/go.mod 
h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= -github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= @@ -790,15 +672,7 @@ github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hf github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= -github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= -github.com/google/pprof v0.0.0-20230207041349-798e818bf904/go.mod h1:uglQLonpP8qtYCYyzA+8c/9qtqgA3qsXGYqCPKARAFg= github.com/google/pprof v0.0.0-20231101202521-4ca4178f5c7a h1:fEBsGL/sjAuJrgah5XqmmYsTLzJp/TO9Lhy39gkverk= github.com/google/pprof v0.0.0-20231101202521-4ca4178f5c7a/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= @@ -813,20 +687,13 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/wire v0.3.0/go.mod h1:i1DMg/Lu8Sz5yYl25iOdmc5CT5qusaa+zmRWs16741s= github.com/google/wire v0.4.0/go.mod h1:ngWDr9Qvq3yZA10YrxfyGELY/AFWGVpy9c1LTRi1EoU= -github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8= github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY= github.com/googleapis/gax-go v2.0.2+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= -github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0= -github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM= -github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/OthfcblKl4IGNaM= -github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM= 
-github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c= github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= github.com/googleapis/gnostic v0.2.2/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg= -github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4= github.com/gookit/color v1.2.4/go.mod h1:AhIE+pS6D4Ql0SQWbBeXPHw7gY0/sjHoA4s/n1KB7xg= github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= @@ -860,9 +727,8 @@ github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.9.2/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= -github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1 h1:6UKoz5ujsI55KNpsJH3UwCq3T8kKbZwNZBNPuTTje8U= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1/go.mod h1:YvJ2f6MplWDhfxiUC3KpyTy76kYUZA4W3pTv/wdKQ9Y= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0 h1:Wqo399gCIufwto+VfwCSvsnfGpF/w5E9CNxSwbpD6No= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0/go.mod h1:qmOFXW2epJhM0qSnUUYpldc7gVz2KMQwJ/QYCDIa7XU= github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw= github.com/hanwen/go-fuse v1.0.0/go.mod h1:unqXarDXqzAk0rt98O2tVndEPIpUgLD9+rwFisZH3Ok= github.com/hanwen/go-fuse/v2 v2.0.3/go.mod h1:0EQM6aH2ctVpvZ6a+onrQ/vaykxh2GH7hy3e13vzTUY= @@ -914,14 +780,11 @@ github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0m github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I= github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc= github.com/hashicorp/uuid v0.0.0-20160311170451-ebb0a03e909c/go.mod h1:fHzc09UnyJyqyW+bFuq864eh+wC7dj65aXmXLRe5to0= -github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= -github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/iancoleman/strcase v0.3.0 h1:nTXanmYxhfFAMjZL34Ov6gkzEsSJZ5DbhxWjvSASxEI= github.com/iancoleman/strcase v0.3.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= -github.com/ianlancetaylor/demangle v0.0.0-20220319035150-800ac71e25c2/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w= github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= @@ -953,25 +816,19 @@ 
github.com/jmoiron/sqlx v1.2.1-0.20190826204134-d7d95172beb5/go.mod h1:1FEQNm3xl github.com/joefitzgerald/rainbow-reporter v0.1.0/go.mod h1:481CNgqmVHQZzdIbN52CupLJyoVwB10FQ/IQlF1pdL8= github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= -github.com/jonboulle/clockwork v0.4.0 h1:p4Cf1aMWXnXAUh8lVfewRBx1zaTSYKrKMF2g3ST4RZ4= -github.com/jonboulle/clockwork v0.4.0/go.mod h1:xgRqUGwRcjKCO1vbZUEtSLrqKoPSsUpK7fnezOII0kc= github.com/jpillora/backoff v0.0.0-20180909062703-3050d21c67d7/go.mod h1:2iMrUgbbvHEiQClaW2NsSzMyGHqN+rDFqY705q49KG0= -github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4= github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v0.0.0-20180701071628-ab8a2e0c74be/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= -github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= -github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk= github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/juju/ansiterm v1.0.0 h1:gmMvnZRq7JZJx6jkfSq9/+2LMrVEwGwt7UR6G+lmDEg= github.com/juju/ansiterm v1.0.0/go.mod h1:PyXUpnI3olx3bsPcHt98FGPX/KCFZ1Fi+hw1XLI6384= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= -github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM= github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4= github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM= github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q= @@ -989,11 +846,12 @@ github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa02 github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/koron-go/prefixw v1.0.0 h1:p7OC1ffZ/z+Miz0j/Ddt4fVYr8g4W9BKWkViAZ+1LmI= +github.com/koron-go/prefixw v1.0.0/go.mod h1:WZvD0yrbCrkJD23tq03BhCu1ucn5ZenktcXt39QbPyk= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= -github.com/kr/pretty v0.3.0/go.mod 
h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= @@ -1009,6 +867,8 @@ github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+ github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= github.com/lib/pq v1.1.1/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= +github.com/lmittmann/tint v1.0.4 h1:LeYihpJ9hyGvE0w+K2okPTGUdVLfng1+nDNVR4vWISc= +github.com/lmittmann/tint v1.0.4/go.mod h1:HIS3gSy7qNwGCj+5oRjAutErFBl4BzdQP6cJZ0NfMwE= github.com/logrusorgru/aurora v0.0.0-20181002194514-a7b3b318ed4e/go.mod h1:7rIyQOR62GCctdiQpZ/zOJlFyk6y+94wXzv6RNZgaR4= github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY= github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0= @@ -1030,7 +890,6 @@ github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcncea github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= -github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-colorable v0.1.10/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA= github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg= @@ -1059,11 +918,7 @@ github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOq github.com/mattn/go-zglob v0.0.1/go.mod h1:9fxibJccNxU2cnpIKLRRFA7zX7qhkJIQWBb449FYHOo= github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= -github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo= -github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4= github.com/maxbrunsfeld/counterfeiter/v6 v6.2.2/go.mod h1:eD9eIE7cdwcMi9rYluz88Jz2VyhSmden33/aXg4oVIY= -github.com/mazznoer/csscolorparser v0.1.3 h1:vug4zh6loQxAUxfU1DZEu70gTPufDPspamZlHAkKcxE= -github.com/mazznoer/csscolorparser v0.1.3/go.mod h1:Aj22+L/rYN/Y6bj3bYqO3N6g1dtdHtGfQ32xZ5PJQic= github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg= github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc= @@ -1082,8 +937,8 @@ github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0Qu github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.3.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= -github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= 
-github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= +github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c h1:cqn374mizHuIWj+OSJCajGr/phAmuMug9qIX3l9CflE= +github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A= github.com/moby/buildkit v0.8.1/go.mod h1:/kyU1hKy/aYCuP39GZA9MaKioovHku57N6cqlKZIaiQ= github.com/moby/buildkit v0.13.0-beta3 h1:eefOGE6SsWYHFfymc09Q7VU5i3L9vUs8ZCZVCDXWNOo= @@ -1114,7 +969,6 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= -github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A= github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc= github.com/mozilla/tls-observatory v0.0.0-20190404164649-a3c1b6cfecfd/go.mod h1:SrKMQvPiws7F7iqYp8/TX+IhxCYhzr6N/1yb8cwHsGk= @@ -1131,14 +985,12 @@ github.com/muesli/termenv v0.15.2/go.mod h1:Epx+iuz8sNs7mNKhxzH4fWXGNpZwUaJKRS1n github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= -github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw= github.com/nakabonne/nestif v0.3.0/go.mod h1:dI314BppzXjJ4HsCnbo7XzrJHPszZsjnk5wEBSYHI2c= github.com/nbutton23/zxcvbn-go v0.0.0-20180912185939-ae427f1e4c1d/go.mod h1:o96djdrsSGy3AWPyBgZMAGfxZNfgntdJG+11KU4QvbU= github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM= github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A= -github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE= github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo= @@ -1192,8 +1044,6 @@ github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mo github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE= github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU= github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec= -github.com/opentracing-contrib/go-grpc v0.0.0-20180928155321-4b5a12d3ff02/go.mod h1:JNdpVEzCpXBgIiv4ds+TzhN1hrtxq6ClLrTlT9OQRSc= 
-github.com/opentracing-contrib/go-stdlib v0.0.0-20190519235532-cf7a6c988dc9/go.mod h1:PLldrQSroqzH70Xl+1DQcGnefIbqsKR7UDaiux3zV+w= github.com/opentracing-contrib/go-stdlib v1.0.0/go.mod h1:qtI1ogk+2JhVPIXVc6q+NHziSmy2W5GbdQZFUHADCBU= github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o= github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc= @@ -1234,20 +1084,15 @@ github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g= -github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= -github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= -github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY= -github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ= -github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q= -github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY= +github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU= +github.com/prometheus/client_golang v1.19.0/go.mod h1:ZRM9uEAypZakd+q/x7+gmsvXdURP+DABIEIjnmDdp+k= github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16 h1:v7DLqVdK4VrYkVD5diGdl4sxJurKJEMnODWRJlxV9oM= -github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU= +github.com/prometheus/client_model v0.6.0 h1:k1v3CzpSRUTrKMppY35TLwPvxHqBu0bYgxZzqGIgaos= +github.com/prometheus/client_model v0.6.0/go.mod h1:NTQHnmxFpouOD0DpvP4XujX3CdOAGQPoaGhyTchlyt8= github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= @@ -1255,13 +1100,8 @@ github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8 github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc= -github.com/prometheus/common 
v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= -github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc= -github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls= -github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA= -github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY= -github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY= -github.com/prometheus/exporter-toolkit v0.8.2/go.mod h1:00shzmJL7KxcsabLWcONwpyNEuWhREOnFqZW7vadFS0= +github.com/prometheus/common v0.48.0 h1:QO8U2CdOzSn1BBsmXJXduaaW+dY/5QLjfB8svtSzKKE= +github.com/prometheus/common v0.48.0/go.mod h1:0/KsvlIEfPQCQ5I2iNSAWKPZziNCvRs5EC6ILDTlAPc= github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= @@ -1271,10 +1111,7 @@ github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7z github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= -github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= -github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= -github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4= github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo= github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= @@ -1286,14 +1123,12 @@ github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqn github.com/remyoudompheng/bigfft v0.0.0-20170806203942-52369c62f446/go.mod h1:uYEyJGbgTkfkS4+E/PavXkNJcbFIpEtjt2B0KDQ5+9M= github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= -github.com/rivo/uniseg v0.4.6 h1:Sovz9sDSwbOz9tgUy8JpT+KgCkPYJEN/oYzlJiYTNLg= -github.com/rivo/uniseg v0.4.6/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= +github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ= +github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v1.1.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= -github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.5.2/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= -github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= github.com/rogpeppe/go-internal v1.11.0 
h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= github.com/rs/cors v1.10.0 h1:62NOS1h+r8p1mW6FM0FSB0exioXLhd/sh15KpjWBZ+8= @@ -1328,7 +1163,6 @@ github.com/secure-systems-lab/go-securesystemslib v0.4.0/go.mod h1:FGBZgq2tXWICs github.com/securego/gosec v0.0.0-20200103095621-79fbf3af8d83/go.mod h1:vvbZ2Ae7AzSq3/kywjUDxSNq2SJ27RxCz2un0H3ePqE= github.com/securego/gosec v0.0.0-20200401082031-e946c8c39989/go.mod h1:i9l/TNj+yDFh9SZXUTvspXTjbFXgZGP/UvhU1S65A4A= github.com/securego/gosec/v2 v2.3.0/go.mod h1:UzeVyUXbxukhLeHKV3VVqo7HdoQR9MrRfFmZYotn8ME= -github.com/sercand/kuberesolver v2.4.0+incompatible/go.mod h1:lWF3GL0xptCB/vCiJPl/ZshwPsX/n4Y7u0CW9E7aQIQ= github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= github.com/sergi/go-diff v1.3.1 h1:xkr+Oxo4BOQKmkn/B9eMK0g5Kg/983T9DqqPHwYqD+8= github.com/sergi/go-diff v1.3.1/go.mod h1:aMJSSKb2lpPvRNec0+w3fl7LP9IOFzdc9Pa4NFbPK1I= @@ -1452,7 +1286,6 @@ github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea/go.mod h1:WPnis/6 github.com/tonistiigi/vt100 v0.0.0-20230623042737-f9a4f7ef6531 h1:Y/M5lygoNPKwVNLMPXgVfsRT40CSFKXCxuU8LoHySjs= github.com/tonistiigi/vt100 v0.0.0-20230623042737-f9a4f7ef6531/go.mod h1:ulncasL3N9uLrVann0m+CDlJKWsIAP34MPcOJF6VRvc= github.com/uber/jaeger-client-go v2.25.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk= -github.com/uber/jaeger-client-go v2.28.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk= github.com/uber/jaeger-lib v2.2.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U= github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= @@ -1483,15 +1316,11 @@ github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17 github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0= github.com/vishvananda/netns v0.0.4 h1:Oeaw1EM2JMxD51g9uhtC0D7erkIjgmj8+JZc26m1YX8= github.com/vishvananda/netns v0.0.4/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM= -github.com/vito/midterm v0.1.5-0.20240215023001-e649b2677bfa h1:2zKtb3ChJT0FBEt+AU64Rk3hT2kPjNf94QEyyDYaBaE= -github.com/vito/midterm v0.1.5-0.20240215023001-e649b2677bfa/go.mod h1:2ujYuyOObdWrQtnXAzwSBcPRKhC5Q96ex0nsf2Dmfzk= +github.com/vito/midterm v0.1.5-0.20240307214207-d0271a7ca452 h1:I5FdiUvkD++87hOiZYuDu0BqsaJXAnpOCed3kqkjCEE= +github.com/vito/midterm v0.1.5-0.20240307214207-d0271a7ca452/go.mod h1:2ujYuyOObdWrQtnXAzwSBcPRKhC5Q96ex0nsf2Dmfzk= github.com/vito/progrock v0.10.2-0.20240221152222-63c8df30db8d h1:hh9zh0tcr3/th/mtxEpzLVE8rvx+LejDrbWmqmi+RUM= github.com/vito/progrock v0.10.2-0.20240221152222-63c8df30db8d/go.mod h1:Q8hxIUXZW8vkezLwH4TnRmXv+XdRb4KfLtS9t5RtH9g= github.com/vmware/govmomi v0.20.3/go.mod h1:URlwyTFZX72RmxtxuaFL2Uj3fD1JTvZdx59bHWk6aFU= -github.com/weaveworks/common v0.0.0-20230119144549-0aaa5abd1e63 h1:qZcnPZbiX8gGs3VmipVc3ft29vPYBZzlox/04Von6+k= -github.com/weaveworks/common v0.0.0-20230119144549-0aaa5abd1e63/go.mod h1:KoQ+3z63GUJzQ7AhU0AWQNU+LPda2EwL/cx1PlbDzVQ= -github.com/weaveworks/promrus v1.2.0 h1:jOLf6pe6/vss4qGHjXmGz4oDJQA+AOCqEL3FvvZGz7M= -github.com/weaveworks/promrus v1.2.0/go.mod h1:SaE82+OJ91yqjrE1rsvBWVzNZKcHYFtMUyS1+Ogs/KA= github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod 
h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4= github.com/xanzy/go-gitlab v0.31.0/go.mod h1:sPLojNBn68fMUWSxIJtdVVIP8uSBYqesTfDUseX11Ug= github.com/xanzy/go-gitlab v0.32.0/go.mod h1:sPLojNBn68fMUWSxIJtdVVIP8uSBYqesTfDUseX11Ug= @@ -1507,10 +1336,7 @@ github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -github.com/yuin/goldmark v1.6.0 h1:boZcn2GTjpsynOsC0iJHnBWa4Bi0qzfJjthwauItG68= -github.com/yuin/goldmark v1.6.0/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs= github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA= github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg= @@ -1535,49 +1361,44 @@ go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk= -go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E= go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.0 h1:PzIubN4/sjByhDRHLviCjJuweBXWFZWhghjg7cS28+M= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.0/go.mod h1:Ct6zzQEuGK3WpJs2n4dn+wfJYzd/+hNnxMRTWjGn30M= go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.45.0 h1:2ea0IkZBsWH+HA2GkD+7+hRw2u97jzdFyRtXuO14a1s= go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.45.0/go.mod h1:4m3RnBBb+7dB9d21y510oO1pdB1V4J6smNf14WXcBFQ= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.45.0 h1:x8Z78aZx8cOF0+Kkazoc7lwUNMGy0LrzEMxTm4BbTxg= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.45.0/go.mod h1:62CPTSry9QZtOaSsE3tOzhx6LzDhHnXJ6xHeMNNiM6Q= -go.opentelemetry.io/otel v1.21.0 h1:hzLeKBZEL7Okw2mGzZ0cc4k/A7Fta0uoPgaJCr8fsFc= -go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo= -go.opentelemetry.io/otel/exporters/jaeger v1.17.0 h1:D7UpUy2Xc2wsi1Ras6V40q806WM07rqoCWzXu7Sqy+4= -go.opentelemetry.io/otel/exporters/jaeger v1.17.0/go.mod h1:nPCqOnEH9rNLKqH/+rrUjiMzHJdV1BlpKcTwRTyKkKI= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw= +go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo= +go.opentelemetry.io/otel v1.24.0/go.mod 
h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo= go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.42.0 h1:ZtfnDL+tUrs1F0Pzfwbg2d59Gru9NCH3bgSHBM6LDwU= go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.42.0/go.mod h1:hG4Fj/y8TR/tlEDREo8tWstl9fO9gcFkn4xrx0Io8xU= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.42.0 h1:NmnYCiR0qNufkldjVvyQfZTHSdzeHoZ41zggMsdMcLM= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.42.0/go.mod h1:UVAO61+umUsHLtYb8KXXRoHtxUkdOPkYidzW3gipRLQ= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.42.0 h1:wNMDy/LVGLj2h3p6zg4d0gypKfWKSWI14E1C4smOgl8= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.42.0/go.mod h1:YfbDdXAAkemWJK3H/DshvlrxqFB2rtW4rY6ky/3x/H0= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 h1:cl5P5/GIfFh4t6xyruOgJP5QiA1pw4fYYdv6nc6CBWw= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0/go.mod h1:zgBdWWAu7oEEMC06MMKc5NLbA/1YDXV1sMpSqEeLQLg= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 h1:tIqheXEFWAZ7O8A7m+J0aPTmpJN3YQ7qetUAdkkkKpk= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0/go.mod h1:nUeKExfxAQVbiVFn32YXpXZZHZ61Cc3s3Rn1pDBGAb0= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 h1:IeMeyr1aBvBiPVYihXIaeIZba6b8E1bYp7lbdxK8CQg= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0/go.mod h1:oVdCUtjq9MK9BlS7TtucsQwUcXcymNiEDjgDD2jMtZU= -go.opentelemetry.io/otel/exporters/prometheus v0.42.0 h1:jwV9iQdvp38fxXi8ZC+lNpxjK16MRcZlpDYvbuO1FiA= -go.opentelemetry.io/otel/exporters/prometheus v0.42.0/go.mod h1:f3bYiqNqhoPxkvI2LrXqQVC546K7BuRDL/kKuxkujhA= -go.opentelemetry.io/otel/metric v1.21.0 h1:tlYWfeo+Bocx5kLEloTjbcDwBuELRrIFxwdQ36PlJu4= -go.opentelemetry.io/otel/metric v1.21.0/go.mod h1:o1p3CA8nNHW8j5yuQLdc1eeqEaPfzug24uvsyIEJRWM= -go.opentelemetry.io/otel/sdk v1.21.0 h1:FTt8qirL1EysG6sTQRZ5TokkU8d0ugCj8htOgThZXQ8= -go.opentelemetry.io/otel/sdk v1.21.0/go.mod h1:Nna6Yv7PWTdgJHVRD9hIYywQBRx7pbox6nwBnZIxl/E= -go.opentelemetry.io/otel/sdk/metric v1.19.0 h1:EJoTO5qysMsYCa+w4UghwFV/ptQgqSL/8Ni+hx+8i1k= -go.opentelemetry.io/otel/sdk/metric v1.19.0/go.mod h1:XjG0jQyFJrv2PbMvwND7LwCEhsJzCzV5210euduKcKY= -go.opentelemetry.io/otel/trace v1.21.0 h1:WD9i5gzvoUPuXIXH24ZNBudiarZDKuekPqi/E8fpfLc= -go.opentelemetry.io/otel/trace v1.21.0/go.mod h1:LGbsEB0f9LGjN+OZaQQ26sohbOmiMR+BaslueVtS/qQ= -go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI= -go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I= -go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0 h1:t6wl9SPayj+c7lEIFgm4ooDBZVb01IhLB4InpomhRw8= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0/go.mod h1:iSDOcsnSA5INXzZtwaBPrKp/lWu/V14Dd+llD0oI2EA= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0 h1:Mw5xcxMwlqoJd97vwPxA8isEaIoxsta9/Q51+TTJLGE= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0/go.mod h1:CQNu9bj7o7mC6U7+CA/schKEYakYXWr79ucDHTMGhCM= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0 h1:Xw8U6u2f8DK2XAkGRFV7BBLENgnTGX9i4rQRxJf+/vs= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0/go.mod h1:6KW1Fm6R/s6Z3PGXwSJN2K4eT6wQB3vXX6CVnYX9NmM= +go.opentelemetry.io/otel/exporters/prometheus v0.46.0 h1:I8WIFXR351FoLJYuloU4EgXbtNX2URfU/85pUPheIEQ= 
+go.opentelemetry.io/otel/exporters/prometheus v0.46.0/go.mod h1:ztwVUHe5DTR/1v7PeuGRnU5Bbd4QKYwApWmuutKsJSs= +go.opentelemetry.io/otel/log v0.0.1-alpha h1:Gy4SxFnkHv2wmmzv//sblb4/PoCYVtuZbdFY/XamvHM= +go.opentelemetry.io/otel/log v0.0.1-alpha/go.mod h1:fg1zxLfxAyzlCLyULJTWXUbFVYyOwQZD/DgtGm7VvgA= +go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI= +go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco= +go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw= +go.opentelemetry.io/otel/sdk v1.24.0/go.mod h1:KVrIYw6tEubO9E96HQpcmpTKDVn9gdv35HoYiQWGDFg= +go.opentelemetry.io/otel/sdk/metric v1.24.0 h1:yyMQrPzF+k88/DbH7o4FMAs80puqd+9osbiBrJrz/w8= +go.opentelemetry.io/otel/sdk/metric v1.24.0/go.mod h1:I6Y5FjH6rvEnTTAYQz3Mmv2kl6Ek5IIrmwTLqMrrOE0= +go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI= +go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU= +go.opentelemetry.io/proto/otlp v1.1.0 h1:2Di21piLrCqJ3U3eXGCTPHE9R8Nh+0uglSnOyxikMeI= +go.opentelemetry.io/proto/otlp v1.1.0/go.mod h1:GpBHCBWiqvVLDqmHZsoMM3C5ySeKTC7ej/RNTae6MdY= go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= -go.uber.org/atomic v1.5.1/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ= -go.uber.org/goleak v1.2.1/go.mod h1:qlT2yGI9QafXHhZZLxlSuNsMw3FFLxBr+tBRlmO1xH4= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= @@ -1606,7 +1427,6 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh golang.org/x/crypto v0.0.0-20201117144127-c1f2f97bffc9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.0.0-20221012134737-56aed061732a/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4= golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= golang.org/x/crypto v0.20.0 h1:jmAMJJZXr5KiCw05dfYK9QnqaqKLYXijU23lsEdcQqg= @@ -1627,8 +1447,6 @@ golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa h1:FRnLl4eNAQl8hwxVVC17teOw8 golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa/go.mod h1:zk2irFbV9DP96SEBUUAy67IdHUaZuSnrz1n472HUCLE= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= -golang.org/x/image v0.14.0 h1:tNgSxAFe3jC4uYqvZdTr84SZoM1KfwdC9SKIFrLjFn4= -golang.org/x/image v0.14.0/go.mod h1:HUYqC05R2ZcZ3ejNQsIHQDQiwWM4JBqmm6MKANTp4LE= golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20181217174547-8f45f776aaf1/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= @@ -1641,8 +1459,6 @@ golang.org/x/lint 
v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHl golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs= golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= @@ -1651,9 +1467,6 @@ golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzB golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0= @@ -1705,33 +1518,15 @@ golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81R golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc= -golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk= -golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20210916014120-12bc252f5db8/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net 
v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= -golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= -golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns= golang.org/x/net v0.21.0 h1:AQyQV4dYCvJ7vGmJyKki9+PBdyvhkSd8EIx/qb0AYv4= golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44= golang.org/x/oauth2 v0.0.0-20180724155351-3d292e4d0cdc/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= @@ -1744,23 +1539,7 @@ golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4Iltr golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210810183815-faf39c7919d5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod 
h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE= -golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= golang.org/x/oauth2 v0.17.0 h1:6m3ZPmLEFdVxKKWnKq4VqZ60gutO35zm+zrAHVmHyDQ= golang.org/x/oauth2 v0.17.0/go.mod h1:OzPDGQiuQMguemayvdylqddI7qcD9lnSDb+1FiwQ5HA= golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw= @@ -1774,8 +1553,6 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ= @@ -1824,7 +1601,6 @@ golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1841,76 +1617,41 @@ golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200831180312-196b9ba8737a/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200917073148-efd3b9a0ff20/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201013081832-0aaa2718063a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y= -golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= +golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= -golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY= golang.org/x/term v0.17.0 h1:mkTF7LCd6WGJNL3K1Ad7kwxNfYAW6a8a8QqtMblp/4U= golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -1919,15 +1660,12 @@ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= -golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -1973,7 +1711,6 @@ golang.org/x/tools v0.0.0-20190910044552-dd2b5c81c578/go.mod h1:b+2E5dAYhXwXZwtn golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= @@ -2011,19 +1748,8 @@ golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roY golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE= -golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= -golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.2/go.mod 
h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= golang.org/x/tools v0.17.0 h1:FvmRgNOcs3kOa+T20R1uhfP9F6HgG2mfxDv1vrx1Htc= @@ -2032,17 +1758,9 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= -golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= -golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= -golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 h1:+cNy6SZtPcJQH3LJVLOSmiC7MMxXNOb3PU/VUEz+EhU= -golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90= gonum.org/v1/gonum v0.0.0-20190331200053-3d26580ed485/go.mod h1:2ltnJ7xHfj0zHS40VVPYEAAMTa3ZGguvHGBSJeRWqE0= gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw= gonum.org/v1/netlib v0.0.0-20190331212654-76723241ea4e/go.mod h1:kS+toOQn6AQKjmKJ7gzohV1XkqsFehRA2FbsbkopSuQ= -gonum.org/v1/plot v0.14.0 h1:+LBDVFYwFe4LHhdP8coW6296MBEY4nQ+Y4vuUpJopcE= -gonum.org/v1/plot v0.14.0/go.mod h1:MLdR9424SJed+5VqC6MsouEpig9pZX2VZ57H9ko2bXU= google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= google.golang.org/api v0.0.0-20181030000543-1d582fd0359e/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= @@ -2069,29 +1787,6 @@ google.golang.org/api v0.25.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0M google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM= google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc= -google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg= -google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE= -google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8= -google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU= -google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94= -google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo= -google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4= -google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw= -google.golang.org/api v0.51.0/go.mod 
h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU= -google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k= -google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE= -google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE= -google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI= -google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I= -google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo= -google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g= -google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/SkfA= -google.golang.org/api v0.71.0/go.mod h1:4PyU6e6JogV1f9eA4voyrTY2batOLdgZ5qZ5HOCc4j8= -google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRRyDs= -google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA= -google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw= -google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg= -google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -2135,7 +1830,6 @@ google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfG google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U= google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= google.golang.org/genproto v0.0.0-20200527145253-8367513e4ece/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA= @@ -2143,60 +1837,12 @@ google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7Fc google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto 
v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210329143202-679c6ae281ee/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= -google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= -google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A= -google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= -google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= -google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= -google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24= -google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k= -google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k= -google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48= -google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48= -google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w= -google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= -google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= -google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= -google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= -google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= -google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto 
v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI= -google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E= -google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220421151946-72621c1f0bd3/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo= -google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4= -google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4= -google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4= -google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA= -google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA= -google.golang.org/genproto v0.0.0-20231106174013-bbf56f31fb17 h1:wpZ8pe2x1Q3f2KyT5f8oP/fa9rHAKgFPr/HZdNuS+PQ= -google.golang.org/genproto v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:J7XzRzVy1+IPwWHZUzoD0IccYZIrXILAQpc+Qy9CMhY= -google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17 h1:JpwMPBpFN3uKhdaekDpiNlImDdkUAyiJ6ez/uxGaUSo= -google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:0xJLfVdJqpAPl8tDg1ujOCGzx6LFLttXT5NhllGOXY4= -google.golang.org/genproto/googleapis/rpc v0.0.0-20231106174013-bbf56f31fb17 h1:Jyp0Hsi0bmHXG6k9eATXoYtjd6e2UzZ1SCn/wIupY14= -google.golang.org/genproto/googleapis/rpc v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:oQ5rr10WTTMvP4A36n8JpR1OrO1BEiV4f78CneXZxkA= +google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ= +google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80/go.mod h1:cc8bqMqtv9gMOr0zHg2Vzff5ULhhL2IXP4sbcn32Dro= +google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 h1:Lj5rbfG876hIAYFjqiJnPHfhXbv+nzTWfm04Fg/XSVU= +google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80/go.mod h1:4jWUdICTdgc3Ibxmr8nAJiiLHwQBY0UI0XZcEMaFKaA= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240325203815-454cdb8f5daa h1:RBgMaUMP+6soRkik4VoN8ojR2nex2TqZwjSSogic+eo= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240325203815-454cdb8f5daa/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY= google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio= @@ -2215,28 +1861,9 @@ google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKa google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk= google.golang.org/grpc v1.30.0/go.mod 
h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= -google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8= -google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= -google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= -google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= -google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE= -google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE= -google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34= -google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34= -google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU= -google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ= -google.golang.org/grpc v1.46.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk= -google.golang.org/grpc v1.46.2/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk= -google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk= -google.golang.org/grpc v1.61.0 h1:TOvOcuXn30kRao+gfcvsebNEa5iZIiLkisYEkf7R7o0= -google.golang.org/grpc v1.61.0/go.mod h1:VUbo7IFqmF1QtCAstipjG0GIoq49KvMe9+h1jFLBNJs= -google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= +google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk= +google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= @@ -2250,10 +1877,8 @@ google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlba google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -google.golang.org/protobuf v1.32.0 h1:pPC6BG5ex8PDFnkbrGU3EixyhKcQ2aDuBS36lqK/C7I= -google.golang.org/protobuf v1.32.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= +google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod 
h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= @@ -2274,7 +1899,6 @@ gopkg.in/ini.v1 v1.56.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k= gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= -gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= gopkg.in/warnings.v0 v0.1.1/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME= @@ -2282,7 +1906,6 @@ gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRN gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= @@ -2352,14 +1975,8 @@ mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed/go.mod h1:Xkxe497xwlCKkIa mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b/go.mod h1:2odslEg/xrtNQqCYg2/jCoyKnw3vv5biOc3JnIcYfL4= mvdan.cc/unparam v0.0.0-20190720180237-d51796306d8f/go.mod h1:4G1h5nDURzA3bwVMZIVpwbkw+04kSxk3rAtzlimaUJw= mvdan.cc/unparam v0.0.0-20200501210554-b37ab49443f7/go.mod h1:HGC5lll35J70Y5v7vCGb9oLhHoScFwkHDJm/05RdSTc= -oss.terrastruct.com/d2 v0.6.1 h1:TEk7pl5yS1cnUxHOsIQ7bZCs+iRGsb5FaofaianfZk8= -oss.terrastruct.com/d2 v0.6.1/go.mod h1:ZyzsiefzsZ3w/BDnfF/hcDx9LKBlgieuolX8pXi7oJY= -oss.terrastruct.com/util-go v0.0.0-20231101220827-55b3812542c2 h1:n6y6RoZCgZDchN4gLGlzNRO1Jdf9xOGGqohDBph5BG8= -oss.terrastruct.com/util-go v0.0.0-20231101220827-55b3812542c2/go.mod h1:eMWv0sOtD9T2RUl90DLWfuShZCYp4NrsqNpI8eqO6U4= pack.ag/amqp v0.11.2/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= -rsc.io/pdf v0.1.1 h1:k1MczvYDUvJBe93bYd7wrZLLUEcLZAuF824/I4e5Xr4= -rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= sigs.k8s.io/structured-merge-diff v0.0.0-20190525122527-15d366b2352e/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI= diff --git a/hack/with-dev b/hack/with-dev index 2ff520fccb3..175be2f2d17 100755 --- a/hack/with-dev +++ b/hack/with-dev @@ -6,6 +6,7 @@ DAGGER_SRC_ROOT="$(cd $(dirname "${BASH_SOURCE[0]}")/.. 
&& pwd)" export _EXPERIMENTAL_DAGGER_CLI_BIN=$DAGGER_SRC_ROOT/bin/dagger export _EXPERIMENTAL_DAGGER_RUNNER_HOST=docker-container://dagger-engine.dev +export _DAGGER_TESTS_ENGINE_TAR=$DAGGER_SRC_ROOT/bin/engine.tar export PATH=$DAGGER_SRC_ROOT/bin:$PATH diff --git a/internal/mage/engine.go b/internal/mage/engine.go index 9560b0c1476..71c36958290 100644 --- a/internal/mage/engine.go +++ b/internal/mage/engine.go @@ -306,8 +306,9 @@ func (t Engine) Dev(ctx context.Context) error { if err != nil { return fmt.Errorf("docker load failed: %w: %s", err, output) } - _, imageID, ok := strings.Cut(string(output), "sha256:") + _, imageID, ok := strings.Cut(string(output), "Loaded image ID: sha256:") if !ok { + _, imageID, ok = strings.Cut(string(output), "Loaded image: sha256:") // podman return fmt.Errorf("unexpected output from docker load: %s", output) } imageID = strings.TrimSpace(imageID) @@ -317,7 +318,7 @@ func (t Engine) Dev(ctx context.Context) error { imageID, imageName, ).CombinedOutput(); err != nil { - return fmt.Errorf("docker tag: %w: %s", err, output) + return fmt.Errorf("docker tag %s %s: %w: %s", imageID, imageName, err, output) } if output, err := exec.CommandContext(ctx, "docker", @@ -339,8 +340,8 @@ func (t Engine) Dev(ctx context.Context) error { } runArgs = append(runArgs, []string{ "-e", util.CacheConfigEnvName, - "-e", "_EXPERIMENTAL_DAGGER_CLOUD_TOKEN", - "-e", "_EXPERIMENTAL_DAGGER_CLOUD_URL", + "-e", "DAGGER_CLOUD_TOKEN", + "-e", "DAGGER_CLOUD_URL", "-e", util.GPUSupportEnvName, "-v", volumeName + ":" + distconsts.EngineDefaultStateDir, "-p", "6060:6060", diff --git a/internal/mage/go.mod b/internal/mage/go.mod index da40ba4e657..b4bf0f2645c 100644 --- a/internal/mage/go.mod +++ b/internal/mage/go.mod @@ -25,7 +25,7 @@ require ( github.com/pkg/errors v0.9.1 // indirect github.com/sosodev/duration v1.1.0 // indirect github.com/vektah/gqlparser/v2 v2.5.10 // indirect - golang.org/x/sys v0.17.0 // indirect + golang.org/x/sys v0.18.0 // indirect ) replace github.com/dagger/dagger => ../../ diff --git a/internal/mage/go.sum b/internal/mage/go.sum index ad5dd782e8e..22fa138453d 100644 --- a/internal/mage/go.sum +++ b/internal/mage/go.sum @@ -48,8 +48,8 @@ golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ= golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y= -golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= +golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= diff --git a/internal/tui/details.go b/internal/tui/details.go deleted file mode 100644 index f8ae4b9e684..00000000000 --- a/internal/tui/details.go +++ /dev/null @@ -1,97 +0,0 @@ -package tui - -import ( - "fmt" - "strings" - - tea "github.com/charmbracelet/bubbletea" - "github.com/charmbracelet/lipgloss" -) - -type Details struct { - item TreeEntry - width int - height int - focus bool -} - -func (m 
Details) Init() tea.Cmd { - return nil -} - -func (m Details) Update(msg tea.Msg) (tea.Model, tea.Cmd) { - if m.item == nil { - return m, nil - } - - itemM, cmd := m.item.Update(msg) - m.item = itemM.(TreeEntry) - return m, cmd -} - -func (m *Details) SetItem(item TreeEntry) tea.Cmd { - if item == m.item { - return nil - } - m.item = item - return m.item.Init() -} - -func (m *Details) SetWidth(width int) { - m.width = width - if m.item != nil { - m.item.SetWidth(width) - } -} - -func (m *Details) SetHeight(height int) { - m.height = height -} - -func (m *Details) Focus(focus bool) { - m.focus = focus -} - -func (m *Details) Open() tea.Cmd { - return m.item.Open() -} - -func (m Details) headerView() string { - title := trunc(m.item.Name(), m.width) - info := fmt.Sprintf("%3.f%%", m.item.ScrollPercent()*100) - line := "" - borderWidth := lipgloss.Width(titleStyle.Render("")) - - if !m.focus { - info = infoStyle.Copy().Render(info) - title = trunc(title, m.width-lipgloss.Width(info)-borderWidth) - title = titleStyle.Copy().Render(title) - space := max(0, m.width-lipgloss.Width(title)-lipgloss.Width(info)) - line = titleBarStyle.Copy(). - Render(strings.Repeat("─", space)) - } else { - info = infoStyle.Copy().BorderForeground(colorSelected).Render(info) - title = trunc(title, m.width-lipgloss.Width(info)-borderWidth) - title = titleStyle.Copy().BorderForeground(colorSelected).Render(title) - space := max(0, m.width-lipgloss.Width(title)-lipgloss.Width(info)) - line = titleBarStyle.Copy(). - Foreground(colorSelected). - Render(strings.Repeat("─", space)) - } - - return lipgloss.JoinHorizontal(lipgloss.Center, - title, - line, - info) -} - -func (m Details) View() string { - if m.item == nil { - return strings.Repeat("\n", max(0, m.height-1)) - } - headerView := m.headerView() - - m.item.SetHeight(m.height - lipgloss.Height(headerView)) - - return fmt.Sprintf("%s\n%s", headerView, m.item.View()) -} diff --git a/internal/tui/editor.go b/internal/tui/editor.go deleted file mode 100644 index 00dd7a244d9..00000000000 --- a/internal/tui/editor.go +++ /dev/null @@ -1,68 +0,0 @@ -package tui - -import ( - "errors" - "os" - "os/exec" - - tea "github.com/charmbracelet/bubbletea" - "github.com/google/shlex" -) - -// I'm sure these 10 lines are controversial to some. If you find yourself -// caring about this, just set $EDITOR! -var editors = []string{ - // graphical editors - "code", - "subl", - "gedit", - "nodepad++", - - // editors that mere mortals might not remember how to exit - // - // also, editors that might not take kindly to being run from within a - // terminal in *another* editor - "vim", - "vi", - "emacs", - "helix", - - // everyone has these, right? - "nano", - "pico", -} - -func openEditor(filePath string) tea.Cmd { - editorCmd := os.Getenv("EDITOR") - if editorCmd == "" { - for _, editor := range editors { - if _, err := exec.LookPath(editor); err == nil { - editorCmd = editor - break - } - } - } - - if editorCmd == "" { - return func() tea.Msg { - return EditorExitMsg{errors.New("no $EDITOR available")} - } - } - - editorArgs, err := shlex.Split(editorCmd) - if err != nil { - return func() tea.Msg { - return EditorExitMsg{err} - } - } - editorArgs = append(editorArgs, filePath) - - cmd := exec.Command(editorArgs[0], editorArgs[1:]...) 
//nolint:gosec - cmd.Stdin = os.Stdin - cmd.Stdout = os.Stdout - cmd.Stderr = os.Stderr - - return tea.ExecProcess(cmd, func(err error) tea.Msg { - return EditorExitMsg{err} - }) -} diff --git a/internal/tui/group.go b/internal/tui/group.go deleted file mode 100644 index 2f524cde8ea..00000000000 --- a/internal/tui/group.go +++ /dev/null @@ -1,266 +0,0 @@ -package tui - -import ( - "os" - "path/filepath" - "sort" - "strings" - "time" - - tea "github.com/charmbracelet/bubbletea" - "github.com/dagger/dagger/dagql/idtui" -) - -type groupModel interface { - tea.Model - - SetHeight(int) - SetWidth(int) - ScrollPercent() float64 - - Save(dir string) (string, error) -} - -type Group struct { - groupModel - - id string - name string - entries []TreeEntry - entriesByID map[string]TreeEntry -} - -func NewGroup(id, name string) *Group { - return &Group{ - groupModel: &emptyGroup{}, - - id: id, - name: name, - entries: []TreeEntry{}, - entriesByID: map[string]TreeEntry{}, - } -} - -var _ TreeEntry = &Group{} - -func (g *Group) ID() string { - return g.id -} - -func (g *Group) Inputs() []string { - return nil -} - -func (g *Group) Name() string { - return g.name -} - -func (g *Group) Entries() []TreeEntry { - return g.entries -} - -func (g *Group) Save(dir string) (string, error) { - subDir := filepath.Join(dir, sanitizeFilename(g.Name())) - - if err := os.MkdirAll(subDir, 0700); err != nil { - return "", err - } - - if _, err := g.groupModel.Save(subDir); err != nil { - return "", err - } - - for _, e := range g.entries { - if _, err := e.Save(subDir); err != nil { - return "", err - } - } - - return subDir, nil -} - -func (g *Group) Open() tea.Cmd { - dir, err := os.MkdirTemp("", "dagger-logs.*") - if err != nil { - return func() tea.Msg { return EditorExitMsg{err} } - } - - subDir, err := g.Save(dir) - if err != nil { - return func() tea.Msg { return EditorExitMsg{err} } - } - - return openEditor(subDir) -} - -func (g *Group) Add(e TreeEntry) { - if e.ID() == idtui.PrimaryVertex { - g.name = e.Name() - g.groupModel = e - return - } - - _, has := g.entriesByID[e.ID()] - if has { - return - } - g.entriesByID[e.ID()] = e - g.entries = append(g.entries, e) - g.sort() -} - -func (g *Group) Cached() bool { - for _, e := range g.entries { - if !e.Cached() { - return false - } - } - return true -} - -func (g *Group) Error() *string { - for _, e := range g.entries { - if e.Error() != nil { - return e.Error() - } - } - return nil -} - -func (g *Group) Infinite() bool { - return false -} - -func (g *Group) Started() *time.Time { - timers := []*time.Time{} - for _, e := range g.entries { - timers = append(timers, e.Started()) - } - sort.Slice(timers, func(i, j int) bool { - if timers[i] == nil { - return false - } - if timers[j] == nil { - return true - } - return timers[i].Before(*timers[j]) - }) - - if len(timers) == 0 { - return nil - } - - return timers[0] -} - -func (g *Group) Completed() *time.Time { - timers := []*time.Time{} - for _, e := range g.entries { - timers = append(timers, e.Completed()) - } - sort.Slice(timers, func(i, j int) bool { - if timers[i] == nil { - return false - } - if timers[j] == nil { - return true - } - return timers[i].Before(*timers[j]) - }) - - if len(timers) == 0 { - return nil - } - - return timers[len(timers)-1] -} - -func (g *Group) SetWidth(w int) { - g.groupModel.SetWidth(w) -} - -func (g *Group) Update(msg tea.Msg) (tea.Model, tea.Cmd) { - m, cmd := g.groupModel.Update(msg) - g.groupModel = m.(groupModel) - return g, cmd -} - -func (g *Group) ScrollPercent() float64 { - 
return g.groupModel.ScrollPercent() -} - -func (g *Group) sort() { - sort.SliceStable(g.entries, func(i, j int) bool { - ie := g.entries[i] - je := g.entries[j] - switch { - case g.isAncestor(ie, je): - return true - case g.isAncestor(je, ie): - return false - case ie.Started() == nil && je.Started() == nil: - // both pending - return false - case ie.Started() == nil && je.Started() != nil: - // j started first - return false - case ie.Started() != nil && je.Started() == nil: - // i started first - return true - case ie.Started() != nil && je.Started() != nil: - return ie.Started().Before(*je.Started()) - default: - // impossible - return false - } - }) -} - -func (g *Group) isAncestor(i, j TreeEntry) bool { - if i == j { - return false - } - - id := i.ID() - - for _, d := range j.Inputs() { - if d == id { - return true - } - - e, ok := g.entriesByID[d] - if ok && g.isAncestor(i, e) { - return true - } - } - - return false -} - -type emptyGroup struct { - height int -} - -func (g *emptyGroup) SetHeight(height int) { - g.height = height -} - -func (g *emptyGroup) SetWidth(int) {} - -func (g *emptyGroup) ScrollPercent() float64 { return 1 } - -func (*emptyGroup) Init() tea.Cmd { - return nil -} - -func (g *emptyGroup) Update(tea.Msg) (tea.Model, tea.Cmd) { - return g, nil -} - -func (g emptyGroup) View() string { - return strings.Repeat("\n", g.height-1) -} - -func (g emptyGroup) Save(dir string) (string, error) { - return "", nil -} diff --git a/internal/tui/item.go b/internal/tui/item.go deleted file mode 100644 index b38860f0c8c..00000000000 --- a/internal/tui/item.go +++ /dev/null @@ -1,221 +0,0 @@ -package tui - -import ( - "bytes" - "fmt" - "os" - "path/filepath" - "strings" - "time" - - "github.com/charmbracelet/bubbles/progress" - "github.com/charmbracelet/bubbles/spinner" - "github.com/charmbracelet/bubbles/viewport" - tea "github.com/charmbracelet/bubbletea" - "github.com/charmbracelet/lipgloss" - "github.com/tonistiigi/units" - "github.com/vito/progrock" - "github.com/vito/progrock/ui" -) - -func NewItem(v *progrock.Vertex, width int) *Item { - saneName := strings.Join(strings.Fields(v.Name), " ") - - return &Item{ - id: v.Id, - inputs: v.Inputs, - name: saneName, - logs: &bytes.Buffer{}, - logsModel: ui.NewVterm(), - tasksModel: viewport.New(width, 1), - spinner: newSpinner(), - width: width, - } -} - -var _ TreeEntry = &Item{} - -type Item struct { - id string - inputs []string - name string - started *time.Time - completed *time.Time - cached bool - error *string - logs *bytes.Buffer - logsModel *ui.Vterm - tasks []*progrock.VertexTask - tasksModel viewport.Model - internal bool - spinner spinner.Model - width int - isInfinite bool -} - -func (i *Item) ID() string { return i.id } -func (i *Item) Inputs() []string { return i.inputs } -func (i *Item) Name() string { return i.name } -func (i *Item) Internal() bool { return i.internal } -func (i *Item) Entries() []TreeEntry { return nil } -func (i *Item) Started() *time.Time { return i.started } -func (i *Item) Completed() *time.Time { return i.completed } -func (i *Item) Cached() bool { return i.cached } -func (i *Item) Infinite() bool { return i.isInfinite } - -func (i *Item) Error() *string { - return i.error -} - -func (i *Item) Save(dir string) (string, error) { - filePath := filepath.Join(dir, sanitizeFilename(i.Name())) + ".log" - f, err := os.Create(filePath) - if err != nil { - return "", fmt.Errorf("save item to %s as %s: %w", dir, filePath, err) - } - - if err := i.logsModel.Print(f); err != nil { - return "", err - } - 
- if err := f.Close(); err != nil { - return "", err - } - - return filePath, nil -} - -func (i *Item) Open() tea.Cmd { - dir, err := os.MkdirTemp("", "dagger-logs.*") - if err != nil { - return func() tea.Msg { - return EditorExitMsg{err} - } - } - - filePath, err := i.Save(dir) - if err != nil { - return func() tea.Msg { - return EditorExitMsg{err} - } - } - - return openEditor(filePath) -} - -func (i *Item) UpdateVertex(v *progrock.Vertex) { - // Started clock might reset for each layer when pulling images. - // We want to keep the original started time and only updated the completed time. - if i.started == nil && v.Started != nil { - t := v.Started.AsTime() - i.started = &t - } - if v.Completed != nil { - t := v.Completed.AsTime() - i.completed = &t - } - i.cached = v.Cached - i.error = v.Error -} - -func (i *Item) UpdateLog(log *progrock.VertexLog) { - i.logsModel.Write(log.Data) -} - -func (i *Item) UpdateStatus(task *progrock.VertexTask) { - var current = -1 - for i, s := range i.tasks { - if s.Name == task.Name { - current = i - break - } - } - - if current == -1 { - i.tasks = append(i.tasks, task) - } else { - i.tasks[current] = task - } -} - -var _ tea.Model = &Item{} - -// Init is called when the item is first created _and_ when it is selected. -func (i *Item) Init() tea.Cmd { - return i.spinner.Tick -} - -func (i *Item) Update(msg tea.Msg) (tea.Model, tea.Cmd) { - switch msg := msg.(type) { - case spinner.TickMsg: - spinnerM, cmd := i.spinner.Update(msg) - i.spinner = spinnerM - return i, cmd - default: - if len(i.tasks) > 0 { - statusM, cmd := i.tasksModel.Update(msg) - i.tasksModel = statusM - return i, cmd - } - vtermM, cmd := i.logsModel.Update(msg) - i.logsModel = vtermM.(*ui.Vterm) - return i, cmd - } -} - -func (i *Item) View() string { - if len(i.tasks) > 0 { - i.tasksModel.SetContent(i.tasksView()) - return i.tasksModel.View() - } - - return i.logsModel.View() -} - -func (i *Item) SetHeight(height int) { - i.logsModel.SetHeight(height) - i.tasksModel.Height = height -} - -func (i *Item) SetWidth(width int) { - i.width = width - i.logsModel.SetWidth(width) - i.tasksModel.Width = width -} - -func (i *Item) ScrollPercent() float64 { - if len(i.tasks) > 0 { - return i.tasksModel.ScrollPercent() - } - return i.logsModel.ScrollPercent() -} - -func (i *Item) tasksView() string { - tasks := []string{} - - bar := progress.New(progress.WithSolidFill("2")) - bar.Width = i.width / 4 - - for _, t := range i.tasks { - status := completedStatus.String() + " " - if t.Completed == nil { - status = i.spinner.View() + " " - } - - name := t.Name - - progress := "" - if t.Total != 0 { - progress = fmt.Sprintf("%.2f / %.2f", units.Bytes(t.Current), units.Bytes(t.Total)) - progress += " " + bar.ViewAs(float64(t.Current)/float64(t.Total)) - } else if t.Current != 0 { - progress = fmt.Sprintf("%.2f", units.Bytes(t.Current)) - } - - pad := strings.Repeat(" ", max(0, i.width-lipgloss.Width(status)-lipgloss.Width(name)-lipgloss.Width(progress))) - view := status + name + pad + progress - tasks = append(tasks, view) - } - - return strings.Join(tasks, "\n") -} diff --git a/internal/tui/keys.go b/internal/tui/keys.go deleted file mode 100644 index 5eed4d2b5d3..00000000000 --- a/internal/tui/keys.go +++ /dev/null @@ -1,105 +0,0 @@ -package tui - -import "github.com/charmbracelet/bubbles/key" - -type keyMap struct { - Up key.Binding - Down key.Binding - - Home, End key.Binding - PageUp, PageDown key.Binding - - Switch key.Binding - - Collapse key.Binding - Expand key.Binding - CollapseAll key.Binding 
- ExpandAll key.Binding - - Follow key.Binding - - Open key.Binding - - Help key.Binding - Quit key.Binding -} - -func (k keyMap) ShortHelp() []key.Binding { - return []key.Binding{ - k.Up, k.Down, - k.Collapse, k.Expand, - k.Open, k.Switch, k.Follow, - k.Help, k.Quit, - } -} - -func (k keyMap) FullHelp() [][]key.Binding { - return [][]key.Binding{ - {k.Up, k.Down, k.Home, k.End, k.PageUp, k.PageDown}, - {k.Collapse, k.CollapseAll, k.Expand, k.ExpandAll}, - {k.Open, k.Switch, k.Follow, k.Help, k.Quit}, - } -} - -var keys = keyMap{ - Up: key.NewBinding( - key.WithKeys("up", "k"), - key.WithHelp("↑/k", "move up"), - ), - Down: key.NewBinding( - key.WithKeys("down", "j"), - key.WithHelp("↓/j", "move down"), - ), - Home: key.NewBinding( - key.WithKeys("home"), - key.WithHelp("home", "go to top"), - ), - End: key.NewBinding( - key.WithKeys("end"), - key.WithHelp("end", "go to bottom"), - ), - PageDown: key.NewBinding( - key.WithKeys("pgdown"), - key.WithHelp("pgdn", "page down"), - ), - PageUp: key.NewBinding( - key.WithKeys("pgup"), - key.WithHelp("pgup", "page up"), - ), - Switch: key.NewBinding( - key.WithKeys("tab"), - key.WithHelp("tab", "switch focus"), - ), - Collapse: key.NewBinding( - key.WithKeys("left", "h"), - key.WithHelp("←/h", "collapse"), - ), - Expand: key.NewBinding( - key.WithKeys("right", "l"), - key.WithHelp("→/l", "expand"), - ), - CollapseAll: key.NewBinding( - key.WithKeys("["), - key.WithHelp("[", "collapse all"), - ), - ExpandAll: key.NewBinding( - key.WithKeys("]"), - key.WithHelp("]", "expand all"), - ), - Follow: key.NewBinding( - key.WithKeys("f"), - key.WithHelp("f", "toggle follow"), - ), - Open: key.NewBinding( - key.WithKeys("o"), - key.WithHelp("o", "open logs in $EDITOR"), - ), - Help: key.NewBinding( - key.WithKeys("?"), - key.WithHelp("?", "toggle help"), - ), - Quit: key.NewBinding( - key.WithKeys("q", "esc", "ctrl+c"), - key.WithHelp("q", "quit"), - ), -} diff --git a/internal/tui/model.go b/internal/tui/model.go deleted file mode 100644 index b93aea592da..00000000000 --- a/internal/tui/model.go +++ /dev/null @@ -1,465 +0,0 @@ -package tui - -import ( - "fmt" - "strings" - "time" - - "github.com/charmbracelet/bubbles/help" - "github.com/charmbracelet/bubbles/key" - "github.com/charmbracelet/bubbles/spinner" - "github.com/charmbracelet/bubbles/viewport" - tea "github.com/charmbracelet/bubbletea" - "github.com/charmbracelet/lipgloss" - "github.com/dagger/dagger/telemetry" - "github.com/vito/progrock" - "google.golang.org/protobuf/proto" - "google.golang.org/protobuf/types/known/timestamppb" -) - -func New(quit func(), r progrock.Reader) *Model { - return &Model{ - quit: quit, - tree: &Tree{ - viewport: viewport.New(80, 1), - spinner: newSpinner(), - collapsed: make(map[TreeEntry]bool), - focus: true, - }, - itemsByID: make(map[string]*Item), - groupsByID: make(map[string]*Group), - futureMemberships: make(map[string][]*Group), - pipeliner: telemetry.NewPipeliner(), - details: Details{}, - follow: true, - updates: r, - help: help.New(), - } -} - -type Model struct { - quit func() - - updates progrock.Reader - itemsByID map[string]*Item - groupsByID map[string]*Group - futureMemberships map[string][]*Group - pipeliner *telemetry.Pipeliner - - tree *Tree - details Details - help help.Model - - width int - height int - - localTimeDiff time.Duration - done bool - - follow bool - detailsFocus bool - - errors []error -} - -func (m Model) Init() tea.Cmd { - return tea.Batch( - m.tree.Init(), - m.details.Init(), - m.waitForActivity(), - followTick(), - ) -} - -type 
CommandOutMsg struct { - Output []byte -} - -type CommandExitMsg struct { - Err error -} - -type EditorExitMsg struct { - Err error -} - -type endMsg struct{} - -func (m Model) adjustLocalTime(t *timestamppb.Timestamp) *timestamppb.Timestamp { - if t == nil { - return nil - } - - adjusted := t.AsTime().Add(m.localTimeDiff) - cp := proto.Clone(t).(*timestamppb.Timestamp) - cp.Seconds = adjusted.Unix() - cp.Nanos = int32(adjusted.Nanosecond()) - return cp -} - -type followMsg struct{} - -func Follow() tea.Msg { - return followMsg{} -} - -func followTick() tea.Cmd { - return tea.Tick(100*time.Millisecond, func(_ time.Time) tea.Msg { - return Follow() - }) -} - -func (m Model) IsDone() bool { - return m.done -} - -func (m Model) Update(msg tea.Msg) (tea.Model, tea.Cmd) { - switch msg := msg.(type) { - case tea.WindowSizeMsg: - m.width, m.height = msg.Width, msg.Height - m.tree.SetWidth(msg.Width) - m.details.SetWidth(msg.Width) - case tea.KeyMsg: - return m.processKeyMsg(msg) - case EditorExitMsg: - if msg.Err != nil { - m.errors = append(m.errors, msg.Err) - } - return m, nil - case followMsg: - if !m.follow { - return m, nil - } - - m.tree.Follow() - - return m, tea.Batch( - m.details.SetItem(m.tree.Current()), - followTick(), - ) - case *progrock.StatusUpdate: - return m.processUpdate(msg) - case spinner.TickMsg: - cmds := []tea.Cmd{} - - updatedDetails, cmd := m.details.Update(msg) - m.details = updatedDetails.(Details) - cmds = append(cmds, cmd) - - updatedTree, cmd := m.tree.Update(msg) - tree := updatedTree.(*Tree) - m.tree = tree - cmds = append(cmds, cmd) - - return m, tea.Batch(cmds...) - case endMsg: - // We've reached the end - m.done = true - // TODO(vito): print summary before exiting - // if m.follow { - // // automatically quit on completion in follow mode - // return m, tea.Quit - // } - return m, nil - default: - // ignore; we get an occasional message, not sure where it's from, - // but logging will disrupt the UI - } - - return m, nil -} - -func (m Model) processKeyMsg(msg tea.KeyMsg) (tea.Model, tea.Cmd) { - switch { - case key.Matches(msg, keys.Help): - m.help.ShowAll = !m.help.ShowAll - case key.Matches(msg, keys.Quit): - m.quit() - return m, tea.Quit - case key.Matches(msg, keys.Follow): - m.follow = !m.follow - return m, Follow - case key.Matches(msg, keys.Up): - if m.detailsFocus { - newDetails, cmd := m.details.Update(msg) - m.details = newDetails.(Details) - return m, cmd - } - m.follow = false - m.tree.MoveUp() - if m.tree.Current() != nil { - return m, m.details.SetItem(m.tree.Current()) - } - case key.Matches(msg, keys.Down): - if m.detailsFocus { - newDetails, cmd := m.details.Update(msg) - m.details = newDetails.(Details) - return m, cmd - } - m.follow = false - m.tree.MoveDown() - if m.tree.Current() != nil { - return m, m.details.SetItem(m.tree.Current()) - } - case key.Matches(msg, keys.Home): - if m.detailsFocus { - newDetails, cmd := m.details.Update(msg) - m.details = newDetails.(Details) - return m, cmd - } - m.follow = false - m.tree.MoveToTop() - if m.tree.Current() != nil { - return m, m.details.SetItem(m.tree.Current()) - } - case key.Matches(msg, keys.End): - if m.detailsFocus { - newDetails, cmd := m.details.Update(msg) - m.details = newDetails.(Details) - return m, cmd - } - m.follow = false - m.tree.MoveToBottom() - if m.tree.Current() != nil { - return m, m.details.SetItem(m.tree.Current()) - } - case key.Matches(msg, keys.PageUp): - if m.detailsFocus { - newDetails, cmd := m.details.Update(msg) - m.details = newDetails.(Details) - return m, cmd 
- } - m.follow = false - m.tree.PageUp() - if m.tree.Current() != nil { - return m, m.details.SetItem(m.tree.Current()) - } - case key.Matches(msg, keys.PageDown): - if m.detailsFocus { - newDetails, cmd := m.details.Update(msg) - m.details = newDetails.(Details) - return m, cmd - } - m.follow = false - m.tree.PageDown() - if m.tree.Current() != nil { - return m, m.details.SetItem(m.tree.Current()) - } - case key.Matches(msg, keys.Collapse): - m.tree.Collapse(m.tree.Current(), false) - case key.Matches(msg, keys.Expand): - m.tree.Expand(m.tree.Current(), false) - case key.Matches(msg, keys.CollapseAll): - m.tree.Collapse(m.tree.Root(), true) - case key.Matches(msg, keys.ExpandAll): - m.tree.Expand(m.tree.Root(), true) - case key.Matches(msg, keys.Switch): - m.detailsFocus = !m.detailsFocus - m.tree.Focus(!m.detailsFocus) - m.details.Focus(m.detailsFocus) - case key.Matches(msg, keys.Open): - return m, m.details.Open() - } - return m, nil -} - -func (m Model) processUpdate(msg *progrock.StatusUpdate) (tea.Model, tea.Cmd) { - m.pipeliner.TrackUpdate(msg) - - cmds := []tea.Cmd{ - m.waitForActivity(), - } - - for _, g := range msg.Groups { - grp, found := m.groupsByID[g.Id] - if !found { - grp = NewGroup(g.Id, g.Name) - m.groupsByID[g.Id] = grp - if g.Parent != nil { - parent := m.groupsByID[g.GetParent()] - parent.Add(grp) - } else { - m.tree.SetRoot(grp) - } - } - // TODO: update group completion - _ = grp - } - - for _, v := range msg.Vertexes { - if m.localTimeDiff == 0 && v.Started != nil { - m.localTimeDiff = time.Since(v.Started.AsTime()) - } - v.Started = m.adjustLocalTime(v.Started) - v.Completed = m.adjustLocalTime(v.Completed) - - if v.Internal { - // ignore - continue - } - - item := m.itemsByID[v.Id] - if item == nil { - item = NewItem(v, m.width) - cmds = append(cmds, item.Init()) - m.itemsByID[v.Id] = item - if !item.Internal() { - cmds = append(cmds, m.details.SetItem(m.tree.Current())) - } - } - - m.addToFirstGroup(v.Id) - - item.UpdateVertex(v) - } - - for _, mem := range msg.Memberships { - for _, id := range mem.Vertexes { - m.addToFirstGroup(id) - } - } - - for _, s := range msg.Tasks { - item := m.itemsByID[s.Vertex] - if item == nil { - continue - } - item.UpdateStatus(s) - } - - for _, l := range msg.Logs { - item := m.itemsByID[l.Vertex] - if item == nil { - continue - } - item.UpdateLog(l) - } - - return m, tea.Batch(cmds...) 
-} - -func (m Model) addToFirstGroup(id string) { - pipelineVertex, found := m.pipeliner.Vertex(id) - if !found { - return - } - - groups := pipelineVertex.Groups - if len(groups) == 0 { - return - } - - // always add vertex to the same group to avoid duplicating - g, found := m.groupsByID[groups[0]] - if !found { - panic("group not found: " + groups[0]) - } - - i, found := m.itemsByID[id] - if found { - g.Add(i) - } else { - m.futureMemberships[id] = append(m.futureMemberships[id], g) - } -} - -func (m Model) statusBarTimerView() string { - root := m.tree.Root() - if root == nil || root.Started() == nil { - return "0.0s" - } - now := time.Now() - if m.done && root.Completed() != nil { - now = *root.Completed() - } - diff := now.Sub(*root.Started()) - - prec := 1 - sec := diff.Seconds() - if sec < 10 { - prec = 2 - } else if sec < 100 { - prec = 1 - } - return strings.TrimSpace(fmt.Sprintf("%.[2]*[1]fs", sec, prec)) -} - -func (m Model) View() string { - maxTreeHeight := m.height / 2 - // hack: make the details header split the view evenly - // maxTreeHeight = max(0, maxTreeHeight-2) - treeHeight := min(maxTreeHeight, m.tree.UsedHeight()) - m.tree.SetHeight(treeHeight) - - helpView := m.helpView() - statusBarView := m.statusBarView() - errorsView := m.errorsView() - - detailsHeight := m.height - treeHeight - detailsHeight -= lipgloss.Height(helpView) - detailsHeight -= lipgloss.Height(statusBarView) - detailsHeight -= lipgloss.Height(errorsView) - detailsHeight = max(detailsHeight, 10) - m.details.SetHeight(detailsHeight) - - return lipgloss.JoinVertical(lipgloss.Left, - statusBarView, - m.tree.View(), - m.details.View(), - errorsView, - helpView, - ) -} - -func (m Model) errorsView() string { - if len(m.errors) == 0 { - return "" - } - - errs := make([]string, len(m.errors)) - for i, err := range m.errors { - errs[i] = errorStyle.Render(err.Error()) - } - - return lipgloss.JoinVertical(lipgloss.Left, errs...) -} - -func (m Model) statusBarView() string { - mode := browseMode.String() - if m.follow { - mode = followMode.String() - } - status := runningStatus.String() - if m.done { - status = completeStatus.String() - } - - timer := timerStyle.Render(m.statusBarTimerView()) - statusVal := statusText.Copy(). - Width(m.width - lipgloss.Width(mode) - lipgloss.Width(status) - lipgloss.Width(timer)). 
- Render("") - - return mode + statusVal + status + timer -} - -func (m Model) helpView() string { - return m.help.View(keys) -} - -func (m Model) waitForActivity() tea.Cmd { - return func() tea.Msg { - msg, ok := m.updates.ReadStatus() - if ok { - return msg - } - - return endMsg{} - } -} - -func newSpinner() spinner.Model { - return spinner.New( - spinner.WithStyle(lipgloss.NewStyle().Foreground(colorStarted)), - spinner.WithSpinner(spinner.MiniDot), - ) -} diff --git a/internal/tui/style.go b/internal/tui/style.go deleted file mode 100644 index daa5e505e61..00000000000 --- a/internal/tui/style.go +++ /dev/null @@ -1,124 +0,0 @@ -package tui - -import "github.com/charmbracelet/lipgloss" - -// palette -var ( - colorBackground = lipgloss.Color("0") // black - colorFailed = lipgloss.Color("1") // red - colorCompleted = lipgloss.Color("2") // green - colorStarted = lipgloss.Color("3") // yellow - colorSelected = lipgloss.Color("4") // blue - colorAccent1 = lipgloss.Color("5") // magenta - colorAccent2 = lipgloss.Color("6") // cyan - colorForeground = lipgloss.Color("7") // white - colorFaint = lipgloss.Color("8") // bright black - - colorLightForeground = lipgloss.AdaptiveColor{Light: "8", Dark: "15"} - colorLightBackground = lipgloss.AdaptiveColor{Light: "15", Dark: "8"} -) - -// status bar -var ( - statusNugget = lipgloss.NewStyle(). - Foreground(colorForeground). - Padding(0, 1) - - statusBarStyle = lipgloss.NewStyle(). - Foreground(colorLightForeground). - Background(colorLightBackground) - - followMode = lipgloss.NewStyle(). - Inherit(statusBarStyle). - Background(colorAccent2). - Foreground(colorBackground). - Padding(0, 1). - MarginRight(1). - SetString("FOLLOW") - - browseMode = followMode.Copy(). - Background(colorAccent1). - Foreground(colorBackground). - SetString("BROWSE") - - runningStatus = statusNugget.Copy(). - Background(colorStarted). - Foreground(colorBackground). - Align(lipgloss.Right). - PaddingRight(0). - SetString("RUNNING ") - - completeStatus = runningStatus.Copy(). - Background(colorAccent2). - Foreground(colorBackground). - Align(lipgloss.Right). - SetString("COMPLETE ") - - statusText = lipgloss.NewStyle().Inherit(statusBarStyle) - - timerStyle = statusNugget.Copy(). - Background(colorStarted). - Foreground(colorBackground) -) - -// tree -var ( - itemTimerStyle = lipgloss.NewStyle(). - Inline(true). - Foreground(colorFaint) - - selectedStyle = lipgloss.NewStyle(). - Inline(true). - Foreground(colorBackground). - Background(colorSelected). - Bold(false) - - selectedStyleBlur = lipgloss.NewStyle(). - Inline(true). - Background(colorFaint). - Foreground(colorBackground) - - completedStatus = lipgloss.NewStyle(). - Inline(true). - Foreground(colorCompleted). - SetString("✔") - failedStatus = lipgloss.NewStyle(). - Inline(true). - Foreground(colorFailed). - SetString("✖") - cachedStatus = lipgloss.NewStyle(). - Inline(true). - Foreground(colorFaint). - SetString("●") -) - -var ( - borderLeft = func() lipgloss.Border { - b := lipgloss.RoundedBorder() - b.Right = "├" - return b - }() - borderRight = func() lipgloss.Border { - b := lipgloss.RoundedBorder() - b.Left = "│" - return b - }() -) - -// details -var ( - titleStyle = lipgloss.NewStyle(). - Border(borderLeft). - Padding(0, 1). - Foreground(colorForeground) - - titleBarStyle = lipgloss.NewStyle(). - Foreground(colorForeground) - - infoStyle = lipgloss.NewStyle(). - Border(borderRight). - Padding(0, 1). 
- Foreground(colorForeground) - - errorStyle = lipgloss.NewStyle().Inline(true).Foreground(colorFailed) -) diff --git a/internal/tui/tree.go b/internal/tui/tree.go deleted file mode 100644 index 293875cf08f..00000000000 --- a/internal/tui/tree.go +++ /dev/null @@ -1,406 +0,0 @@ -package tui - -import ( - "fmt" - "strings" - "time" - - "github.com/charmbracelet/bubbles/spinner" - "github.com/charmbracelet/bubbles/viewport" - tea "github.com/charmbracelet/bubbletea" - "github.com/charmbracelet/lipgloss" -) - -type TreeEntry interface { - tea.Model - - ID() string - Inputs() []string - - Name() string - - Entries() []TreeEntry - - Infinite() bool - - Started() *time.Time - Completed() *time.Time - Cached() bool - Error() *string - - SetWidth(int) - SetHeight(int) - ScrollPercent() float64 - - Save(dir string) (string, error) - Open() tea.Cmd -} - -type Tree struct { - viewport viewport.Model - - root TreeEntry - currentOffset int - focus bool - - spinner spinner.Model - collapsed map[TreeEntry]bool -} - -func (m *Tree) Init() tea.Cmd { - return m.spinner.Tick -} - -func (m *Tree) Update(msg tea.Msg) (tea.Model, tea.Cmd) { - var cmd tea.Cmd - m.spinner, cmd = m.spinner.Update(msg) - return m, cmd -} - -func (m *Tree) SetRoot(root *Group) { - m.root = root -} - -func (m *Tree) SetWidth(width int) { - m.viewport.Width = width -} - -func (m *Tree) SetHeight(height int) { - m.viewport.Height = height -} - -func (m *Tree) UsedHeight() int { - if m.root == nil { - return 0 - } - - return m.height(m.root) -} - -func (m Tree) Root() TreeEntry { - return m.root -} - -func (m Tree) Current() TreeEntry { - return m.nth(m.root, m.currentOffset) -} - -func (m *Tree) Focus(focus bool) { - m.focus = focus -} - -func (m *Tree) Open() tea.Cmd { - return m.Current().Open() -} - -func (m *Tree) View() string { - if m.root == nil { - return "" - } - - offset := m.currentOffset - - views := m.itemView(0, m.root, []bool{}) - - m.viewport.SetContent(strings.Join(views, "\n")) - - if offset >= m.viewport.YOffset+m.viewport.Height { - m.viewport.SetYOffset(offset - m.viewport.Height + 1) - } - - if offset < m.viewport.YOffset { - m.viewport.SetYOffset(offset) - } - - return m.viewport.View() -} - -func (m *Tree) treePrefixView(padding []bool) string { - pad := strings.Builder{} - for i, last := range padding { - leaf := i == len(padding)-1 - - switch { - case leaf && !last: - pad.WriteString(" ├─") - case leaf && last: - pad.WriteString(" └─") - case !leaf && !last: - pad.WriteString(" │ ") - case !leaf && last: - pad.WriteString(" ") - } - } - return pad.String() -} - -func (m *Tree) statusView(item TreeEntry) string { - if item.Cached() { - return cachedStatus.String() - } - if item.Error() != nil { - return failedStatus.String() - } - if item.Started() != nil { - if item.Completed() != nil { - return completedStatus.String() - } - return m.spinner.View() - } - return " " -} - -func (m *Tree) timerView(item TreeEntry) string { - if item.Started() == nil { - return "" - } - if item.Cached() { - return itemTimerStyle.Render("CACHED ") - } - done := item.Completed() - if done == nil { - now := time.Now() - done = &now - } - diff := done.Sub(*item.Started()) - - prec := 1 - sec := diff.Seconds() - if sec < 10 { - prec = 2 - } else if sec < 100 { - prec = 1 - } - return itemTimerStyle.Render(fmt.Sprintf("%.[2]*[1]fs ", sec, prec)) -} - -func (m *Tree) height(item TreeEntry) int { - height := 1 - entries := item.Entries() - if entries == nil || m.collapsed[item] { - return height - } - - for _, e := range entries { - 
height += m.height(e) - } - - return height -} - -func (m *Tree) itemView(offset int, item TreeEntry, padding []bool) []string { - renderedItems := []string{} - - status := " " + m.statusView(item) + " " - treePrefix := m.treePrefixView(padding) - expandView := "" - if item.Entries() != nil { - if collapsed := m.collapsed[item]; collapsed { - expandView = "▶ " - } else { - expandView = "▼ " - } - } - timerView := m.timerView(item) - - itemWidth := m.viewport.Width - - lipgloss.Width(status) - - lipgloss.Width(treePrefix) - - lipgloss.Width(timerView) - - nameWidth := itemWidth - - lipgloss.Width(expandView) - - 2 // space on each side - - name := trunc(item.Name(), nameWidth) - - itemView := lipgloss.NewStyle(). - Inline(true). - Width(max(0, itemWidth)). - Render(" " + expandView + name + " ") - - view := status + treePrefix - if item == m.Current() { - if m.focus && offset == m.currentOffset { - view += selectedStyle.Render(itemView + timerView) - } else { - view += selectedStyleBlur.Render(itemView + timerView) - } - } else { - view += itemView + timerView - } - - renderedItems = append(renderedItems, view) - offset++ - - entries := item.Entries() - if entries == nil || m.collapsed[item] { - return renderedItems - } - - for i, s := range entries { - pad := append([]bool{}, padding...) - if i == len(entries)-1 { - pad = append(pad, true) - } else { - pad = append(pad, false) - } - - views := m.itemView(offset, s, pad) - offset += len(views) - renderedItems = append(renderedItems, views...) - } - - return renderedItems -} - -func (m *Tree) MoveUp() { - if m.currentOffset == 0 { - return - } - m.currentOffset-- -} - -func (m *Tree) MoveToTop() { - m.currentOffset = 0 -} - -func (m *Tree) MoveDown() { - if m.currentOffset == m.height(m.root)-1 { - return - } - m.currentOffset++ -} - -func (m *Tree) MoveToBottom() { - m.currentOffset = m.height(m.root) - 1 -} - -func (m *Tree) PageUp() { - for i := 0; i < m.viewport.Height; i++ { - m.MoveUp() - } -} - -func (m *Tree) PageDown() { - for i := 0; i < m.viewport.Height; i++ { - m.MoveDown() - } -} - -func (m *Tree) Collapse(entry TreeEntry, recursive bool) { - m.setCollapsed(entry, true, recursive) -} - -func (m *Tree) Expand(entry TreeEntry, recursive bool) { - m.setCollapsed(entry, false, recursive) -} - -func (m *Tree) setCollapsed(entry TreeEntry, collapsed, recursive bool) { - // Non collapsible - if entry == nil || entry.Entries() == nil { - return - } - m.collapsed[entry] = collapsed - if !recursive { - return - } - for _, e := range entry.Entries() { - m.setCollapsed(e, collapsed, recursive) - } -} - -func (m *Tree) Follow() { - if m.root == nil { - return - } - - if m.root.Completed() != nil { - // go back to the root node on completion - m.currentOffset = 0 - return - } - - current := m.Current() - if current == nil { - return - } - - if current.Completed() == nil && len(current.Entries()) == 0 { - return - } - - oldest := m.findOldestIncompleteEntry(m.root) - if oldest != -1 { - m.currentOffset = oldest - } -} - -func (m *Tree) findOldestIncompleteEntry(entry TreeEntry) int { - var oldestIncompleteEntry TreeEntry - oldestStartedTime := time.Time{} - - var search func(e TreeEntry) - - search = func(e TreeEntry) { - started := e.Started() - completed := e.Completed() - cached := e.Cached() - entries := e.Entries() - - if e.Infinite() { - // avoid following services, since they run forever - return - } - - if len(entries) == 0 && started != nil && completed == nil && !cached { - if oldestIncompleteEntry == nil || 
started.Before(oldestStartedTime) { - oldestStartedTime = *started - oldestIncompleteEntry = e - } - } - - for _, child := range entries { - search(child) - } - } - - search(entry) - - if oldestIncompleteEntry == nil { - return -1 - } - - return m.indexOf(0, entry, oldestIncompleteEntry) -} - -func (m *Tree) indexOf(offset int, entry TreeEntry, needle TreeEntry) int { - if entry == needle { - return offset - } - offset++ - for _, child := range entry.Entries() { - if found := m.indexOf(offset, child, needle); found != -1 { - return found - } - offset += m.height(child) - } - return -1 -} - -func (m *Tree) nth(entry TreeEntry, n int) TreeEntry { - if n == 0 { - return entry - } - if m.collapsed[entry] { - return nil - } - skipped := 1 - for _, child := range entry.Entries() { - if found := m.nth(child, n-skipped); found != nil { - return found - } - skipped += m.height(child) - } - return nil -} diff --git a/internal/tui/util.go b/internal/tui/util.go deleted file mode 100644 index 9592851bbba..00000000000 --- a/internal/tui/util.go +++ /dev/null @@ -1,41 +0,0 @@ -package tui - -import ( - "strings" - - "golang.org/x/exp/constraints" -) - -func max[T constraints.Ordered](i, j T) T { - if i > j { - return i - } - return j -} - -func min[T constraints.Ordered](a, b T) T { - if a < b { - return a - } - return b -} - -func trunc(str string, size int) string { - if len(str) <= size { - return str - } - - return str[:size-1] + "…" -} - -func sanitizeFilename(name string) string { - sanitized := strings.Map(func(r rune) rune { - switch r { - case '<', '>', ':', '"', '/', '\\', '|', '?', '*': - return ' ' - default: - return r - } - }, name) - return strings.Join(strings.Fields(sanitized), " ") -} diff --git a/sdk/go/dagger.gen.go b/sdk/go/dagger.gen.go index 8b1de199386..fa3e8af2929 100644 --- a/sdk/go/dagger.gen.go +++ b/sdk/go/dagger.gen.go @@ -11,10 +11,19 @@ import ( "strings" "github.com/vektah/gqlparser/v2/gqlerror" + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/trace" "dagger.io/dagger/querybuilder" ) +func Tracer() trace.Tracer { + return otel.Tracer("dagger.io/sdk.go") +} + +// reassigned at runtime after the span is initialized +var marshalCtx = context.Background() + // assertNotNil panic if the given value is nil. // This function is used to validate that input with pointer type are not nil. // See https://github.com/dagger/dagger/issues/5696 for more context. 
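The two additions above are the heart of the SDK-side change: Tracer() hands out a named tracer from the globally registered provider, and marshalCtx replaces context.Background() in every MarshalJSON hunk below, so lazy ID resolution during marshalling stays parented to the client's span once the runtime reassigns it. A minimal, hedged sketch of how calling code can use the helper; doBuild and the span name are illustrative and not part of this patch:

package main

import (
	"context"
	"fmt"
	"log"

	"dagger.io/dagger"
)

// doBuild is a hypothetical caller. Starting a span from the SDK's named
// tracer means the queries issued below (including MarshalJSON calls, which
// now use marshalCtx instead of a bare context.Background()) are attributed
// to this trace once telemetry is initialized globally.
func doBuild(ctx context.Context) error {
	ctx, span := dagger.Tracer().Start(ctx, "build")
	defer span.End()

	client, err := dagger.Connect(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	out, err := client.Container().From("alpine").WithExec([]string{"echo", "hi"}).Stdout(ctx)
	if err != nil {
		return err
	}
	fmt.Println(out)
	return nil
}

func main() {
	if err := doBuild(context.Background()); err != nil {
		log.Fatal(err)
	}
}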
@@ -285,7 +294,7 @@ func (r *CacheVolume) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *CacheVolume) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -663,7 +672,7 @@ func (r *Container) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Container) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -1733,7 +1742,7 @@ func (r *CurrentModule) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *CurrentModule) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -2018,7 +2027,7 @@ func (r *Directory) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Directory) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -2260,7 +2269,7 @@ func (r *EnvVariable) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *EnvVariable) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -2356,7 +2365,7 @@ func (r *FieldTypeDef) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *FieldTypeDef) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -2483,7 +2492,7 @@ func (r *File) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *File) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -2637,7 +2646,7 @@ func (r *Function) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Function) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -2783,7 +2792,7 @@ func (r *FunctionArg) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *FunctionArg) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -2862,7 +2871,7 @@ func (r *FunctionCall) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *FunctionCall) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3003,7 +3012,7 @@ func (r *FunctionCallArgValue) XXX_GraphQLID(ctx context.Context) (string, error } func (r *FunctionCallArgValue) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3099,7 +3108,7 @@ func (r *GeneratedCode) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *GeneratedCode) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3245,7 +3254,7 @@ func (r *GitModuleSource) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *GitModuleSource) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3338,7 +3347,7 @@ func (r *GitRef) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *GitRef) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := 
r.ID(marshalCtx) if err != nil { return nil, err } @@ -3438,7 +3447,7 @@ func (r *GitRepository) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *GitRepository) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3549,7 +3558,7 @@ func (r *Host) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Host) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3721,7 +3730,7 @@ func (r *InputTypeDef) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *InputTypeDef) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3836,7 +3845,7 @@ func (r *InterfaceTypeDef) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *InterfaceTypeDef) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -3917,7 +3926,7 @@ func (r *Label) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Label) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -4005,7 +4014,7 @@ func (r *ListTypeDef) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *ListTypeDef) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -4068,7 +4077,7 @@ func (r *LocalModuleSource) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *LocalModuleSource) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -4243,7 +4252,7 @@ func (r *Module) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Module) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -4474,7 +4483,7 @@ func (r *ModuleDependency) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *ModuleDependency) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -4670,7 +4679,7 @@ func (r *ModuleSource) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *ModuleSource) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -4949,7 +4958,7 @@ func (r *ModuleSourceView) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *ModuleSourceView) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -5116,7 +5125,7 @@ func (r *ObjectTypeDef) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *ObjectTypeDef) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -5225,7 +5234,7 @@ func (r *Port) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Port) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -6041,7 +6050,7 @@ func (r *Secret) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Secret) MarshalJSON() ([]byte, error) { - id, err := 
r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -6173,7 +6182,7 @@ func (r *Service) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Service) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -6320,7 +6329,7 @@ func (r *Socket) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Socket) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -6374,7 +6383,7 @@ func (r *Terminal) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *Terminal) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } @@ -6486,7 +6495,7 @@ func (r *TypeDef) XXX_GraphQLID(ctx context.Context) (string, error) { } func (r *TypeDef) MarshalJSON() ([]byte, error) { - id, err := r.ID(context.Background()) + id, err := r.ID(marshalCtx) if err != nil { return nil, err } diff --git a/sdk/go/fs.go b/sdk/go/fs.go index 9b390476bf3..c8f66bf6ea2 100644 --- a/sdk/go/fs.go +++ b/sdk/go/fs.go @@ -9,6 +9,9 @@ import ( //go:embed querybuilder/marshal.go querybuilder/querybuilder.go var QueryBuilder embed.FS +//go:embed telemetry/**.go +var Telemetry embed.FS + //go:embed go.mod var GoMod []byte diff --git a/sdk/go/go.mod b/sdk/go/go.mod index 3ab83ed7122..45d5106015c 100644 --- a/sdk/go/go.mod +++ b/sdk/go/go.mod @@ -11,14 +11,31 @@ require ( github.com/adrg/xdg v0.4.0 github.com/stretchr/testify v1.9.0 github.com/vektah/gqlparser/v2 v2.5.6 + go.opentelemetry.io/otel v1.24.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0 + go.opentelemetry.io/otel/sdk v1.24.0 + go.opentelemetry.io/otel/trace v1.24.0 golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa golang.org/x/sync v0.6.0 + google.golang.org/grpc v1.62.1 ) require ( - github.com/kr/pretty v0.3.1 // indirect + github.com/cenkalti/backoff/v4 v4.3.0 // indirect + github.com/go-logr/logr v1.4.1 // indirect + github.com/go-logr/stdr v1.2.2 // indirect + github.com/golang/protobuf v1.5.3 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0 // indirect github.com/rogpeppe/go-internal v1.11.0 // indirect - gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0 // indirect + go.opentelemetry.io/otel/metric v1.24.0 // indirect + go.opentelemetry.io/proto/otlp v1.1.0 // indirect + golang.org/x/net v0.20.0 // indirect + golang.org/x/text v0.14.0 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20240325203815-454cdb8f5daa // indirect + google.golang.org/protobuf v1.33.0 // indirect ) require ( diff --git a/sdk/go/go.sum b/sdk/go/go.sum index 1416d1b8048..b22e6a0d9f1 100644 --- a/sdk/go/go.sum +++ b/sdk/go/go.sum @@ -8,13 +8,26 @@ github.com/agnivade/levenshtein v1.1.1/go.mod h1:veldBMzWxcCG2ZvUTKD2kJNRdCk5hVb github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883 h1:bvNMNQO63//z+xNgfBlViaCIJKLlCJ6/fmUseuG0wVQ= github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8= github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0/go.mod h1:t2tdKJDJF9BV14lnkjHmOQgcvEKgtqs5a1N3LNdJhGE= -github.com/creack/pty 
v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dgryski/trifles v0.0.0-20200323201526-dd97f9abfb48/go.mod h1:if7Fbed8SFyPtHLHbg49SI7NAdJiC5WIA09pe59rfAA= +github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ= +github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= +github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= +github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= +github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= +github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= +github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0 h1:Wqo399gCIufwto+VfwCSvsnfGpF/w5E9CNxSwbpD6No= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0/go.mod h1:qmOFXW2epJhM0qSnUUYpldc7gVz2KMQwJ/QYCDIa7XU= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= @@ -23,10 +36,8 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= -github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= github.com/sergi/go-diff v1.3.1 h1:xkr+Oxo4BOQKmkn/B9eMK0g5Kg/983T9DqqPHwYqD+8= @@ -38,13 +49,48 @@ github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsT github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/vektah/gqlparser/v2 v2.5.6 h1:Ou14T0N1s191eRMZ1gARVqohcbe1e8FrcONScsq8cRU= github.com/vektah/gqlparser/v2 v2.5.6/go.mod h1:z8xXUff237NntSuH8mLFijZ+1tjV1swDbpDqjJmk6ME= 
+go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo= +go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0 h1:t6wl9SPayj+c7lEIFgm4ooDBZVb01IhLB4InpomhRw8= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.24.0/go.mod h1:iSDOcsnSA5INXzZtwaBPrKp/lWu/V14Dd+llD0oI2EA= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0 h1:Mw5xcxMwlqoJd97vwPxA8isEaIoxsta9/Q51+TTJLGE= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.24.0/go.mod h1:CQNu9bj7o7mC6U7+CA/schKEYakYXWr79ucDHTMGhCM= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0 h1:Xw8U6u2f8DK2XAkGRFV7BBLENgnTGX9i4rQRxJf+/vs= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.24.0/go.mod h1:6KW1Fm6R/s6Z3PGXwSJN2K4eT6wQB3vXX6CVnYX9NmM= +go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI= +go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco= +go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw= +go.opentelemetry.io/otel/sdk v1.24.0/go.mod h1:KVrIYw6tEubO9E96HQpcmpTKDVn9gdv35HoYiQWGDFg= +go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI= +go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU= +go.opentelemetry.io/proto/otlp v1.1.0 h1:2Di21piLrCqJ3U3eXGCTPHE9R8Nh+0uglSnOyxikMeI= +go.opentelemetry.io/proto/otlp v1.1.0/go.mod h1:GpBHCBWiqvVLDqmHZsoMM3C5ySeKTC7ej/RNTae6MdY= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa h1:FRnLl4eNAQl8hwxVVC17teOw8kdjVDVAiFMtgUdTSRQ= golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa/go.mod h1:zk2irFbV9DP96SEBUUAy67IdHUaZuSnrz1n472HUCLE= +golang.org/x/net v0.20.0 h1:aCL9BSgETF1k+blQaYUBx9hJ9LOGP3gAVemcZlf1Kpo= +golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY= golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ= golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= +golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ= +google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80/go.mod h1:cc8bqMqtv9gMOr0zHg2Vzff5ULhhL2IXP4sbcn32Dro= +google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 h1:Lj5rbfG876hIAYFjqiJnPHfhXbv+nzTWfm04Fg/XSVU= +google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80/go.mod h1:4jWUdICTdgc3Ibxmr8nAJiiLHwQBY0UI0XZcEMaFKaA= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240325203815-454cdb8f5daa h1:RBgMaUMP+6soRkik4VoN8ojR2nex2TqZwjSSogic+eo= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240325203815-454cdb8f5daa/go.mod 
h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY= +google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk= +google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE= +google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= +google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= +google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= +google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= diff --git a/sdk/go/internal/engineconn/engineconn.go b/sdk/go/internal/engineconn/engineconn.go index 97c25146e25..0cc2bc21155 100644 --- a/sdk/go/internal/engineconn/engineconn.go +++ b/sdk/go/internal/engineconn/engineconn.go @@ -4,10 +4,14 @@ import ( "context" "fmt" "io" + "log/slog" "net" "net/http" + "os" "github.com/Khan/genqlient/graphql" + "go.opentelemetry.io/otel/propagation" + "go.opentelemetry.io/otel/trace" ) type EngineConn interface { @@ -64,6 +68,17 @@ func Get(ctx context.Context, cfg *Config) (EngineConn, error) { return conn, nil } +func fallbackSpanContext(ctx context.Context) context.Context { + if trace.SpanContextFromContext(ctx).IsValid() { + return ctx + } + if p, ok := os.LookupEnv("TRACEPARENT"); ok { + slog.Debug("falling back to $TRACEPARENT", "value", p) + return propagation.TraceContext{}.Extract(ctx, propagation.MapCarrier{"traceparent": p}) + } + return ctx +} + func defaultHTTPClient(p *ConnectParams) *http.Client { dialTransport := &http.Transport{ DialContext: func(_ context.Context, _, _ string) (net.Conn, error) { @@ -73,6 +88,13 @@ func defaultHTTPClient(p *ConnectParams) *http.Client { return &http.Client{ Transport: RoundTripperFunc(func(r *http.Request) (*http.Response, error) { r.SetBasicAuth(p.SessionToken, "") + + // detect $TRACEPARENT set by 'dagger run' + r = r.WithContext(fallbackSpanContext(r.Context())) + + // propagate span context via headers (i.e. for Dagger-in-Dagger) + propagation.TraceContext{}.Inject(r.Context(), propagation.HeaderCarrier(r.Header)) + return dialTransport.RoundTrip(r) }), } diff --git a/sdk/go/internal/engineconn/session.go b/sdk/go/internal/engineconn/session.go index ace2e4025b5..70e635fd86d 100644 --- a/sdk/go/internal/engineconn/session.go +++ b/sdk/go/internal/engineconn/session.go @@ -15,6 +15,8 @@ import ( "strings" "sync" "time" + + "go.opentelemetry.io/otel/propagation" ) type cliSessionConn struct { @@ -84,6 +86,16 @@ func startCLISession(ctx context.Context, binPath string, cfg *Config) (_ Engine env := os.Environ() + // detect $TRACEPARENT set by 'dagger run' + ctx = fallbackSpanContext(ctx) + + // propagate trace context to the child process (i.e. 
for Dagger-in-Dagger) + carrier := propagation.MapCarrier{} + propagation.TraceContext{}.Inject(ctx, carrier) + for key, value := range carrier { + env = append(env, strings.ToUpper(key)+"="+value) + } + cmdCtx, cmdCancel := context.WithCancel(ctx) // Workaround https://github.com/golang/go/issues/22315 diff --git a/sdk/go/telemetry/attrs.go b/sdk/go/telemetry/attrs.go new file mode 100644 index 00000000000..edca37708d3 --- /dev/null +++ b/sdk/go/telemetry/attrs.go @@ -0,0 +1,31 @@ +package telemetry + +const ( + DagDigestAttr = "dagger.io/dag.digest" + + DagInputsAttr = "dagger.io/dag.inputs" + DagOutputAttr = "dagger.io/dag.output" + + CachedAttr = "dagger.io/dag.cached" + CanceledAttr = "dagger.io/dag.canceled" + InternalAttr = "dagger.io/dag.internal" + + DagCallAttr = "dagger.io/dag.call" + + LLBOpAttr = "dagger.io/llb.op" + + // Hide child spans by default. + UIEncapsulateAttr = "dagger.io/ui.encapsulate" + + // The following are theoretical, if/when we want to express the same + // concepts from Progrock. + + // The parent span of this task. Might not need this at all, if we want to + // just rely on span parent, but the thinking is the span parent could be + // pretty brittle. + TaskParentAttr = "dagger.io/task.parent" + + // Progress bars. + ProgressCurrentAttr = "dagger.io/progress.current" + ProgressTotalAttr = "dagger.io/progress.total" +) diff --git a/sdk/go/telemetry/batch_processor.go b/sdk/go/telemetry/batch_processor.go new file mode 100644 index 00000000000..1ed04abb8b5 --- /dev/null +++ b/sdk/go/telemetry/batch_processor.go @@ -0,0 +1,455 @@ +package telemetry + +import ( + "context" + "sync" + "sync/atomic" + "time" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/sdk/trace" + otrace "go.opentelemetry.io/otel/trace" +) + +// Defaults for BatchSpanProcessorOptions. +const ( + DefaultMaxQueueSize = 2048 + DefaultScheduleDelay = 5000 + DefaultExportTimeout = 30000 + DefaultMaxExportBatchSize = 512 + + defaultSpanKeepAlive = 30 * time.Second +) + +// BatchSpanProcessorOption configures a BatchSpanProcessor. +type BatchSpanProcessorOption func(o *BatchSpanProcessorOptions) + +// BatchSpanProcessorOptions is configuration settings for a +// BatchSpanProcessor. +type BatchSpanProcessorOptions struct { + // MaxQueueSize is the maximum queue size to buffer spans for delayed processing. If the + // queue gets full it drops the spans. Use BlockOnQueueFull to change this behavior. + // The default value of MaxQueueSize is 2048. + MaxQueueSize int + + // BatchTimeout is the maximum duration for constructing a batch. Processor + // forcefully sends available spans when timeout is reached. + // The default value of BatchTimeout is 5000 msec. + BatchTimeout time.Duration + + // ExportTimeout specifies the maximum duration for exporting spans. If the timeout + // is reached, the export will be cancelled. + // The default value of ExportTimeout is 30000 msec. + ExportTimeout time.Duration + + // MaxExportBatchSize is the maximum number of spans to process in a single batch. + // If there are more than one batch worth of spans then it processes multiple batches + // of spans one batch after the other without any delay. + // The default value of MaxExportBatchSize is 512. + MaxExportBatchSize int + + // BlockOnQueueFull blocks onEnd() and onStart() method if the queue is full + // AND if BlockOnQueueFull is set to true. + // Blocking option should be used carefully as it can severely affect the performance of an + // application. 
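Both engineconn hunks above rely on the W3C trace-context propagator: the HTTP transport injects the span context into request headers, startCLISession injects it into the child process environment, and fallbackSpanContext re-extracts $TRACEPARENT on the receiving side. A standalone, hedged sketch of that inject/extract round trip; the variable names are illustrative:

package main

import (
	"context"
	"fmt"
	"os"
	"strings"

	"go.opentelemetry.io/otel/propagation"
)

func main() {
	ctx := context.Background()

	// Parent side: serialize the current span context (if any) into a carrier
	// and copy it into a child environment, mirroring startCLISession. With no
	// active span the carrier simply stays empty.
	carrier := propagation.MapCarrier{}
	propagation.TraceContext{}.Inject(ctx, carrier)
	env := os.Environ()
	for key, value := range carrier {
		env = append(env, strings.ToUpper(key)+"="+value)
	}

	// Child side: rebuild the span context from $TRACEPARENT, mirroring
	// fallbackSpanContext when the incoming context has no valid span.
	if p, ok := os.LookupEnv("TRACEPARENT"); ok {
		ctx = propagation.TraceContext{}.Extract(ctx,
			propagation.MapCarrier{"traceparent": p})
	}

	fmt.Printf("prepared %d environment variables\n", len(env))
	_ = ctx
}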
+ BlockOnQueueFull bool +} + +// batchSpanProcessor is a SpanProcessor that batches asynchronously-received +// spans and sends them to a trace.Exporter when complete. +type batchSpanProcessor struct { + e trace.SpanExporter + o BatchSpanProcessorOptions + + queue chan trace.ReadOnlySpan + dropped uint32 + + batch []trace.ReadOnlySpan + batchMutex sync.Mutex + batchSpans map[otrace.SpanID]int + inProgressSpans map[otrace.SpanID]*inProgressSpan + timer *time.Timer + stopWait sync.WaitGroup + stopOnce sync.Once + stopCh chan struct{} + stopped atomic.Bool +} + +type inProgressSpan struct { + trace.ReadOnlySpan + UpdatedAt time.Time +} + +var _ trace.SpanProcessor = (*batchSpanProcessor)(nil) + +// NewBatchSpanProcessor creates a new SpanProcessor that will send completed +// span batches to the exporter with the supplied options. +// +// If the exporter is nil, the span processor will perform no action. +func NewBatchSpanProcessor(exporter trace.SpanExporter, options ...BatchSpanProcessorOption) *batchSpanProcessor { + maxQueueSize := DefaultMaxQueueSize + maxExportBatchSize := DefaultMaxExportBatchSize + + if maxExportBatchSize > maxQueueSize { + if DefaultMaxExportBatchSize > maxQueueSize { + maxExportBatchSize = maxQueueSize + } else { + maxExportBatchSize = DefaultMaxExportBatchSize + } + } + + o := BatchSpanProcessorOptions{ + BatchTimeout: time.Duration(DefaultScheduleDelay) * time.Millisecond, + ExportTimeout: time.Duration(DefaultExportTimeout) * time.Millisecond, + MaxQueueSize: maxQueueSize, + MaxExportBatchSize: maxExportBatchSize, + } + for _, opt := range options { + opt(&o) + } + bsp := &batchSpanProcessor{ + e: exporter, + o: o, + batch: make([]trace.ReadOnlySpan, 0, o.MaxExportBatchSize), + batchSpans: make(map[otrace.SpanID]int), + inProgressSpans: make(map[otrace.SpanID]*inProgressSpan), + timer: time.NewTimer(o.BatchTimeout), + queue: make(chan trace.ReadOnlySpan, o.MaxQueueSize), + stopCh: make(chan struct{}), + } + + bsp.stopWait.Add(1) + go func() { + defer bsp.stopWait.Done() + bsp.processQueue() + bsp.drainQueue() + }() + + return bsp +} + +// OnStart method enqueues a trace.ReadOnlySpan for later processing. +func (bsp *batchSpanProcessor) OnStart(parent context.Context, s trace.ReadWriteSpan) { + bsp.enqueue(s) +} + +// OnUpdate method enqueues a trace.ReadOnlySpan for later processing. +func (bsp *batchSpanProcessor) OnUpdate(s trace.ReadOnlySpan) { + bsp.enqueue(s) +} + +// OnEnd method enqueues a trace.ReadOnlySpan for later processing. +func (bsp *batchSpanProcessor) OnEnd(s trace.ReadOnlySpan) { + bsp.enqueue(s) +} + +// Shutdown flushes the queue and waits until all spans are processed. +// It only executes once. Subsequent call does nothing. +func (bsp *batchSpanProcessor) Shutdown(ctx context.Context) error { + var err error + bsp.stopOnce.Do(func() { + bsp.stopped.Store(true) + wait := make(chan struct{}) + go func() { + close(bsp.stopCh) + bsp.stopWait.Wait() + if bsp.e != nil { + if err := bsp.e.Shutdown(ctx); err != nil { + otel.Handle(err) + } + } + close(wait) + }() + // Wait until the wait group is done or the context is cancelled + select { + case <-wait: + case <-ctx.Done(): + err = ctx.Err() + } + }) + return err +} + +type forceFlushSpan struct { + trace.ReadOnlySpan + flushed chan struct{} +} + +func (f forceFlushSpan) SpanContext() otrace.SpanContext { + return otrace.NewSpanContext(otrace.SpanContextConfig{TraceFlags: otrace.FlagsSampled}) +} + +// ForceFlush exports all ended spans that have not yet been exported. 
+func (bsp *batchSpanProcessor) ForceFlush(ctx context.Context) error { + // Interrupt if context is already canceled. + if err := ctx.Err(); err != nil { + return err + } + + // Do nothing after Shutdown. + if bsp.stopped.Load() { + return nil + } + + var err error + if bsp.e != nil { + flushCh := make(chan struct{}) + if bsp.enqueueBlockOnQueueFull(ctx, forceFlushSpan{flushed: flushCh}) { + select { + case <-bsp.stopCh: + // The batchSpanProcessor is Shutdown. + return nil + case <-flushCh: + // Processed any items in queue prior to ForceFlush being called + case <-ctx.Done(): + return ctx.Err() + } + } + + wait := make(chan error) + go func() { + wait <- bsp.exportSpans(ctx) + close(wait) + }() + // Wait until the export is finished or the context is cancelled/timed out + select { + case err = <-wait: + case <-ctx.Done(): + err = ctx.Err() + } + } + return err +} + +// WithMaxQueueSize returns a BatchSpanProcessorOption that configures the +// maximum queue size allowed for a BatchSpanProcessor. +func WithMaxQueueSize(size int) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.MaxQueueSize = size + } +} + +// WithMaxExportBatchSize returns a BatchSpanProcessorOption that configures +// the maximum export batch size allowed for a BatchSpanProcessor. +func WithMaxExportBatchSize(size int) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.MaxExportBatchSize = size + } +} + +// WithBatchTimeout returns a BatchSpanProcessorOption that configures the +// maximum delay allowed for a BatchSpanProcessor before it will export any +// held span (whether the queue is full or not). +func WithBatchTimeout(delay time.Duration) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.BatchTimeout = delay + } +} + +// WithExportTimeout returns a BatchSpanProcessorOption that configures the +// amount of time a BatchSpanProcessor waits for an exporter to export before +// abandoning the export. +func WithExportTimeout(timeout time.Duration) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.ExportTimeout = timeout + } +} + +// WithBlocking returns a BatchSpanProcessorOption that configures a +// BatchSpanProcessor to wait for enqueue operations to succeed instead of +// dropping data when the queue is full. +func WithBlocking() BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.BlockOnQueueFull = true + } +} + +// exportSpans is a subroutine of processing and draining the queue. 
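These options mirror the upstream SDK's batch processor; what differs is the OnUpdate hook plus the in-progress keepalive in exportSpans below. A hedged wiring sketch, assuming the package is importable as dagger.io/dagger/telemetry and using a stdout exporter purely as a stand-in for a live consumer such as the TUI:

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"

	"dagger.io/dagger/telemetry"
)

func main() {
	ctx := context.Background()

	// Any SpanExporter works; stdouttrace is only used so the sketch runs
	// without external infrastructure.
	exp, err := stdouttrace.New()
	if err != nil {
		log.Fatal(err)
	}

	// A short BatchTimeout is what makes the processor "live": batches are
	// flushed roughly every 100ms instead of waiting for spans to end.
	bsp := telemetry.NewBatchSpanProcessor(exp,
		telemetry.WithBatchTimeout(telemetry.NearlyImmediate))

	tp := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(bsp))
	defer func() {
		if err := tp.Shutdown(ctx); err != nil {
			log.Println("shutdown:", err)
		}
	}()

	_, span := tp.Tracer("example").Start(ctx, "demo")
	span.End()
}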
+func (bsp *batchSpanProcessor) exportSpans(ctx context.Context) error { + bsp.timer.Reset(bsp.o.BatchTimeout) + + bsp.batchMutex.Lock() + defer bsp.batchMutex.Unlock() + + if bsp.o.ExportTimeout > 0 { + var cancel context.CancelFunc + ctx, cancel = context.WithTimeout(ctx, bsp.o.ExportTimeout) + defer cancel() + } + + // Update in progress spans + for _, span := range bsp.batch { + if span.EndTime().Before(span.StartTime()) { + bsp.inProgressSpans[span.SpanContext().SpanID()] = &inProgressSpan{ + ReadOnlySpan: span, + UpdatedAt: time.Now(), + } + } else { + delete(bsp.inProgressSpans, span.SpanContext().SpanID()) + } + } + + // add in progress spans that are not part of the batch + for _, span := range bsp.inProgressSpans { + // ignore spans that were recently updated + if span.UpdatedAt.IsZero() || span.UpdatedAt.Before(time.Now().Add(-defaultSpanKeepAlive)) { + bsp.addToBatch(span.ReadOnlySpan) + span.UpdatedAt = time.Now() + } + } + + if l := len(bsp.batch); l > 0 { + err := bsp.e.ExportSpans(ctx, bsp.batch) + + // A new batch is always created after exporting, even if the batch failed to be exported. + // + // It is up to the exporter to implement any type of retry logic if a batch is failing + // to be exported, since it is specific to the protocol and backend being sent to. + bsp.batch = bsp.batch[:0] + bsp.batchSpans = make(map[otrace.SpanID]int) + + if err != nil { + return err + } + } + return nil +} + +// processQueue removes spans from the `queue` channel until processor +// is shut down. It calls the exporter in batches of up to MaxExportBatchSize +// waiting up to BatchTimeout to form a batch. +func (bsp *batchSpanProcessor) processQueue() { + defer bsp.timer.Stop() + + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + for { + select { + case <-bsp.stopCh: + return + case <-bsp.timer.C: + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + case sd := <-bsp.queue: + if ffs, ok := sd.(forceFlushSpan); ok { + close(ffs.flushed) + continue + } + bsp.batchMutex.Lock() + bsp.addToBatch(sd) + shouldExport := len(bsp.batch) >= bsp.o.MaxExportBatchSize + bsp.batchMutex.Unlock() + if shouldExport { + if !bsp.timer.Stop() { + <-bsp.timer.C + } + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + } + } + } +} + +func (bsp *batchSpanProcessor) addToBatch(sd trace.ReadOnlySpan) { + if i, ok := bsp.batchSpans[sd.SpanContext().SpanID()]; ok { + bsp.batch[i] = sd + return + } + bsp.batchSpans[sd.SpanContext().SpanID()] = len(bsp.batch) + bsp.batch = append(bsp.batch, sd) +} + +// drainQueue awaits the any caller that had added to bsp.stopWait +// to finish the enqueue, then exports the final batch. +func (bsp *batchSpanProcessor) drainQueue() { + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + for { + select { + case sd := <-bsp.queue: + if _, ok := sd.(forceFlushSpan); ok { + // Ignore flush requests as they are not valid spans. + continue + } + + bsp.batchMutex.Lock() + bsp.addToBatch(sd) + shouldExport := len(bsp.batch) == bsp.o.MaxExportBatchSize + bsp.batchMutex.Unlock() + + if shouldExport { + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + } + default: + // There are no more enqueued spans. Make final export. + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + return + } + } +} + +func (bsp *batchSpanProcessor) enqueue(sd trace.ReadOnlySpan) { + ctx := context.TODO() + + // Do not enqueue spans after Shutdown. 
+ if bsp.stopped.Load() { + return + } + + // Do not enqueue spans if we are just going to drop them. + if bsp.e == nil { + return + } + + if bsp.o.BlockOnQueueFull { + bsp.enqueueBlockOnQueueFull(ctx, sd) + } else { + bsp.enqueueDrop(ctx, sd) + } +} + +func (bsp *batchSpanProcessor) enqueueBlockOnQueueFull(ctx context.Context, sd trace.ReadOnlySpan) bool { + if !sd.SpanContext().IsSampled() { + return false + } + + select { + case bsp.queue <- sd: + return true + case <-ctx.Done(): + return false + } +} + +func (bsp *batchSpanProcessor) enqueueDrop(ctx context.Context, sd trace.ReadOnlySpan) bool { + if !sd.SpanContext().IsSampled() { + return false + } + + select { + case bsp.queue <- sd: + return true + default: + atomic.AddUint32(&bsp.dropped, 1) + } + return false +} + +// MarshalLog is the marshaling function used by the logging system to represent this Span Processor. +func (bsp *batchSpanProcessor) MarshalLog() interface{} { + return struct { + Type string + SpanExporter trace.SpanExporter + Config BatchSpanProcessorOptions + }{ + Type: "BatchSpanProcessor", + SpanExporter: bsp.e, + Config: bsp.o, + } +} diff --git a/sdk/go/telemetry/init.go b/sdk/go/telemetry/init.go new file mode 100644 index 00000000000..18607e0e1e0 --- /dev/null +++ b/sdk/go/telemetry/init.go @@ -0,0 +1,251 @@ +package telemetry + +import ( + "context" + "fmt" + "log/slog" + "net" + "net/url" + "os" + "strings" + "sync" + "time" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp" + "go.opentelemetry.io/otel/propagation" + "go.opentelemetry.io/otel/sdk/resource" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" + "google.golang.org/grpc" +) + +func OtelConfigured() bool { + for _, env := range os.Environ() { + if strings.HasPrefix(env, "OTEL_") { + return true + } + } + return false +} + +var configuredSpanExporter sdktrace.SpanExporter +var configuredSpanExporterOnce sync.Once + +func ConfiguredSpanExporter(ctx context.Context) (sdktrace.SpanExporter, bool) { + ctx = context.WithoutCancel(ctx) + + configuredSpanExporterOnce.Do(func() { + if !OtelConfigured() { + return + } + + var err error + + var proto string + if v := os.Getenv("OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"); v != "" { + proto = v + } else if v := os.Getenv("OTEL_EXPORTER_OTLP_PROTOCOL"); v != "" { + proto = v + } else { + // https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/protocol/exporter.md#specify-protocol + proto = "http/protobuf" + } + + var endpoint string + if v := os.Getenv("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"); v != "" { + endpoint = v + } else if v := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT"); v != "" { + if proto == "http/protobuf" { + endpoint, err = url.JoinPath(v, "v1", "traces") + if err != nil { + slog.Warn("failed to join path", "error", err) + return + } + } else { + endpoint = v + } + } + + slog.Debug("configuring tracing via env", "protocol", proto) + + switch proto { + case "http/protobuf", "http": + configuredSpanExporter, err = otlptracehttp.New(ctx, + otlptracehttp.WithEndpointURL(endpoint)) + case "grpc": + var u *url.URL + u, err = url.Parse(endpoint) + if err != nil { + slog.Warn("bad OTLP logs endpoint %q: %w", endpoint, err) + return + } + opts := []otlptracegrpc.Option{ + otlptracegrpc.WithEndpointURL(endpoint), + } + if u.Scheme == "unix" { + dialer := func(ctx context.Context, addr string) (net.Conn, error) { + return 
net.Dial(u.Scheme, u.Path) + } + opts = append(opts, + otlptracegrpc.WithDialOption(grpc.WithContextDialer(dialer)), + otlptracegrpc.WithInsecure()) + } + configuredSpanExporter, err = otlptracegrpc.New(ctx, opts...) + default: + err = fmt.Errorf("unknown OTLP protocol: %s", proto) + } + if err != nil { + slog.Warn("failed to configure tracing", "error", err) + } + }) + return configuredSpanExporter, configuredSpanExporter != nil +} + +func InitEmbedded(ctx context.Context, res *resource.Resource) context.Context { + traceCfg := Config{ + Detect: false, // false, since we want "live" exporting + Resource: res, + } + if exp, ok := ConfiguredSpanExporter(ctx); ok { + traceCfg.LiveTraceExporters = append(traceCfg.LiveTraceExporters, exp) + } + return Init(ctx, traceCfg) +} + +type Config struct { + // Auto-detect exporters from OTEL_* env variables. + Detect bool + + // LiveTraceExporters are exporters that can receive updates for spans at runtime, + // rather than waiting until the span ends. + // + // Example: TUI, Cloud + LiveTraceExporters []sdktrace.SpanExporter + + // BatchedTraceExporters are exporters that receive spans in batches, after the + // spans have ended. + // + // Example: Honeycomb, Jaeger, etc. + BatchedTraceExporters []sdktrace.SpanExporter + + // Resource is the resource describing this component and runtime + // environment. + Resource *resource.Resource +} + +// NearlyImmediate is 100ms, below which has diminishing returns in terms of +// visual perception vs. performance cost. +const NearlyImmediate = 100 * time.Millisecond + +// LiveSpanProcessor is a SpanProcessor that can additionally receive updates +// for a span at runtime, rather than waiting until the span ends. +type LiveSpanProcessor interface { + sdktrace.SpanProcessor + + // OnUpdate method enqueues a trace.ReadOnlySpan for later processing. + OnUpdate(s sdktrace.ReadOnlySpan) +} + +var SpanProcessors = []sdktrace.SpanProcessor{} +var tracerProvider *ProxyTraceProvider + +// Init sets up the global OpenTelemetry providers tracing, logging, and +// someday metrics providers. It is called by the CLI, the engine, and the +// container shim, so it needs to be versatile. +func Init(ctx context.Context, cfg Config) context.Context { + slog.Debug("initializing telemetry") + + if p, ok := os.LookupEnv("TRACEPARENT"); ok { + slog.Debug("found TRACEPARENT", "value", p) + ctx = propagation.TraceContext{}.Extract(ctx, propagation.MapCarrier{"traceparent": p}) + } + + // Set up a text map propagator so that things, well, propagate. The default + // is a noop. + otel.SetTextMapPropagator(propagation.TraceContext{}) + + // Log to slog. 
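Init, whose body continues below, is the single entry point the CLI, engine, and shim are expected to call. A hedged sketch of typical usage, again assuming the dagger.io/dagger/telemetry import path; the stdout exporter stands in for a real live exporter such as the TUI frontend:

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"

	"dagger.io/dagger/telemetry"
)

func main() {
	ctx := context.Background()

	live, err := stdouttrace.New()
	if err != nil {
		log.Fatal(err)
	}

	// Live exporters receive span updates while spans are still running;
	// Detect additionally wires up batched exporters from OTEL_* env vars.
	ctx = telemetry.Init(ctx, telemetry.Config{
		Detect:             true,
		LiveTraceExporters: []sdktrace.SpanExporter{live},
		Resource:           resource.Default(),
	})
	defer telemetry.Close()

	// Init registers the provider globally, so instrumentation can use the
	// plain otel API from here on.
	_, span := otel.Tracer("example").Start(ctx, "hello")
	span.End()
}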
+ otel.SetErrorHandler(otel.ErrorHandlerFunc(func(err error) { + slog.Error("OpenTelemetry error", "error", err) + })) + + if cfg.Detect { + if exp, ok := ConfiguredSpanExporter(ctx); ok { + cfg.BatchedTraceExporters = append(cfg.BatchedTraceExporters, exp) + } + } + + traceOpts := []sdktrace.TracerProviderOption{ + sdktrace.WithResource(cfg.Resource), + } + + for _, exporter := range cfg.BatchedTraceExporters { + traceOpts = append(traceOpts, sdktrace.WithBatcher(exporter)) + } + + liveProcessors := make([]LiveSpanProcessor, 0, len(cfg.LiveTraceExporters)) + for _, exporter := range cfg.LiveTraceExporters { + processor := NewBatchSpanProcessor(exporter, + WithBatchTimeout(NearlyImmediate)) + liveProcessors = append(liveProcessors, processor) + SpanProcessors = append(SpanProcessors, processor) + } + for _, exporter := range cfg.BatchedTraceExporters { + processor := sdktrace.NewBatchSpanProcessor(exporter) + SpanProcessors = append(SpanProcessors, processor) + } + for _, proc := range SpanProcessors { + traceOpts = append(traceOpts, sdktrace.WithSpanProcessor(proc)) + } + + tracerProvider = NewProxyTraceProvider( + sdktrace.NewTracerProvider(traceOpts...), + func(s trace.Span) { // OnUpdate + if ro, ok := s.(sdktrace.ReadOnlySpan); ok && s.IsRecording() { + for _, processor := range liveProcessors { + processor.OnUpdate(ro) + } + } + }, + ) + + // Register our TracerProvider as the global so any imported instrumentation + // in the future will default to using it. + // + // NB: this is also necessary so that we can establish a root span, otherwise + // telemetry doesn't work. + otel.SetTracerProvider(tracerProvider) + + return ctx +} + +// Flush drains telemetry data, and is typically called just before a client +// goes away. +// +// NB: now that we wait for all spans to complete, this is less necessary, but +// it seems wise to keep it anyway, as the spots where it are needed are hard +// to find. +func Flush(ctx context.Context) { + slog.Debug("flushing processors") + if tracerProvider != nil { + if err := tracerProvider.ForceFlush(ctx); err != nil { + slog.Error("failed to flush spans", "error", err) + } + } + slog.Debug("done flushing processors") +} + +// Close shuts down the global OpenTelemetry providers, flushing any remaining +// data to the configured exporters. +func Close() { + flushCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second) + defer cancel() + Flush(flushCtx) + if tracerProvider != nil { + if err := tracerProvider.Shutdown(flushCtx); err != nil { + slog.Error("failed to shut down tracer provider", "error", err) + } + } +} diff --git a/sdk/go/telemetry/processor.go b/sdk/go/telemetry/processor.go new file mode 100644 index 00000000000..c30b49cc4d4 --- /dev/null +++ b/sdk/go/telemetry/processor.go @@ -0,0 +1,139 @@ +package telemetry + +import ( + "context" + "sync" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/sdk/trace" +) + +type TracerUpdater interface{ tracer() } + +// simpleSpanProcessor is a SpanProcessor that synchronously sends all +// completed Spans to a trace.Exporter immediately. +type simpleSpanProcessor struct { + exporterMu sync.Mutex + exporter trace.SpanExporter + stopOnce sync.Once +} + +var _ trace.SpanProcessor = (*simpleSpanProcessor)(nil) + +// NewSimpleSpanProcessor returns a new SpanProcessor that will synchronously +// send completed spans to the exporter immediately. +// +// This SpanProcessor is not recommended for production use. 
The synchronous +// nature of this SpanProcessor makes it good for testing, debugging, or +// showing examples of other features, but it will be slow and have a high +// computation resource usage overhead. The BatchSpanProcessor is recommended +// for production use instead. +func NewSimpleSpanProcessor(exporter trace.SpanExporter) *simpleSpanProcessor { + ssp := &simpleSpanProcessor{ + exporter: exporter, + } + + return ssp +} + +// OnStart immediately exports the newly started span, so that live exporters see it before it ends. +func (ssp *simpleSpanProcessor) OnStart(ctx context.Context, s trace.ReadWriteSpan) { + ssp.exporterMu.Lock() + defer ssp.exporterMu.Unlock() + + if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() { + if err := ssp.exporter.ExportSpans(context.Background(), []trace.ReadOnlySpan{s}); err != nil { + otel.Handle(err) + } + } +} + +// OnEnd immediately exports a ReadOnlySpan. +func (ssp *simpleSpanProcessor) OnEnd(s trace.ReadOnlySpan) { + ssp.exporterMu.Lock() + defer ssp.exporterMu.Unlock() + + if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() { + if err := ssp.exporter.ExportSpans(context.Background(), []trace.ReadOnlySpan{s}); err != nil { + otel.Handle(err) + } + } +} + +// OnUpdate immediately exports a ReadOnlySpan whenever it is updated. +func (ssp *simpleSpanProcessor) OnUpdate(s trace.ReadOnlySpan) { + ssp.exporterMu.Lock() + defer ssp.exporterMu.Unlock() + + if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() { + if err := ssp.exporter.ExportSpans(context.Background(), []trace.ReadOnlySpan{s}); err != nil { + otel.Handle(err) + } + } +} + +// Shutdown shuts down the exporter this SimpleSpanProcessor exports to. +func (ssp *simpleSpanProcessor) Shutdown(ctx context.Context) error { + var err error + ssp.stopOnce.Do(func() { + stopFunc := func(exp trace.SpanExporter) (<-chan error, func()) { + done := make(chan error) + return done, func() { done <- exp.Shutdown(ctx) } + } + + // The exporter field of the simpleSpanProcessor needs to be zeroed to + // signal it is shut down, meaning all subsequent calls to OnEnd will + // be gracefully ignored. This needs to be done synchronously to avoid + // any race condition. + // + // A closure is used to keep reference to the exporter and then the + // field is zeroed. This ensures the simpleSpanProcessor is shut down + // before the exporter. This order is important as it avoids a + // potential deadlock. If the exporter shut down operation generates a + // span, that span would need to be exported. Meaning, OnEnd would be + // called and try acquiring the lock that is held here. + ssp.exporterMu.Lock() + done, shutdown := stopFunc(ssp.exporter) + ssp.exporter = nil + ssp.exporterMu.Unlock() + + go shutdown() + + // Wait for the exporter to shut down or the deadline to expire. + select { + case err = <-done: + case <-ctx.Done(): + // It is possible for the exporter to have immediately shut down + // and the context to be done simultaneously. In that case this + // outer select statement will randomly choose a case. This will + // result in a different returned error for similar scenarios. + // Instead, double check if the exporter shut down at the same + // time and return that error if so. This will ensure consistency + // as well as ensure the caller knows the exporter shut down + // successfully (they can already determine if the deadline is + // expired given they passed the context). + select { + case err = <-done: + default: + err = ctx.Err() + } + } + }) + return err +} + +// ForceFlush does nothing as there is no data to flush.
+func (ssp *simpleSpanProcessor) ForceFlush(context.Context) error { + return nil +} + +// MarshalLog is the marshaling function used by the logging system to represent this Span Processor. +func (ssp *simpleSpanProcessor) MarshalLog() interface{} { + return struct { + Type string + Exporter trace.SpanExporter + }{ + Type: "SimpleSpanProcessor", + Exporter: ssp.exporter, + } +} diff --git a/sdk/go/telemetry/proxy.go b/sdk/go/telemetry/proxy.go new file mode 100644 index 00000000000..d12bef5d9b6 --- /dev/null +++ b/sdk/go/telemetry/proxy.go @@ -0,0 +1,94 @@ +package telemetry + +import ( + "context" + + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/codes" + tracesdk "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/embedded" +) + +type ProxyTraceProvider struct { + embedded.TracerProvider + + tp *tracesdk.TracerProvider + onUpdate func(trace.Span) +} + +func NewProxyTraceProvider(tp *tracesdk.TracerProvider, onUpdate func(trace.Span)) *ProxyTraceProvider { + return &ProxyTraceProvider{ + tp: tp, + onUpdate: onUpdate, + } +} + +func (tp *ProxyTraceProvider) Tracer(name string, options ...trace.TracerOption) trace.Tracer { + return &ProxyTracer{ + tracer: tp.tp.Tracer(name, options...), + onUpdate: tp.onUpdate, + } +} + +func (tp *ProxyTraceProvider) ForceFlush(ctx context.Context) error { + return tp.tp.ForceFlush(ctx) +} + +func (tp *ProxyTraceProvider) Shutdown(ctx context.Context) error { + return tp.tp.Shutdown(ctx) +} + +type ProxyTracer struct { + embedded.Tracer + tracer trace.Tracer + onUpdate func(trace.Span) +} + +func (t ProxyTracer) Start(ctx context.Context, spanName string, opts ...trace.SpanStartOption) (context.Context, trace.Span) { + ctx, span := t.tracer.Start(ctx, spanName, opts...) + return ctx, proxySpan{sp: span, onUpdate: t.onUpdate} +} + +type proxySpan struct { + embedded.Span + sp trace.Span + onUpdate func(trace.Span) +} + +var _ trace.Span = proxySpan{} + +func (s proxySpan) SpanContext() trace.SpanContext { return s.sp.SpanContext() } + +func (s proxySpan) IsRecording() bool { return s.sp.IsRecording() } + +func (s proxySpan) SetStatus(code codes.Code, message string) { + s.sp.SetStatus(code, message) + s.onUpdate(s.sp) +} + +// func (s proxySpan) SetError(v bool) { s.sp.SetError(v) } + +func (s proxySpan) SetAttributes(attributes ...attribute.KeyValue) { + s.sp.SetAttributes(attributes...) + s.onUpdate(s.sp) +} + +func (s proxySpan) End(opts ...trace.SpanEndOption) { s.sp.End(opts...) } + +func (s proxySpan) RecordError(err error, opts ...trace.EventOption) { + s.sp.RecordError(err, opts...) + s.onUpdate(s.sp) +} + +func (s proxySpan) AddEvent(event string, opts ...trace.EventOption) { + s.sp.AddEvent(event, opts...) + s.onUpdate(s.sp) +} + +func (s proxySpan) SetName(name string) { + s.sp.SetName(name) + s.onUpdate(s.sp) +} + +func (s proxySpan) TracerProvider() trace.TracerProvider { return s.sp.TracerProvider() } diff --git a/sdk/go/telemetry/span.go b/sdk/go/telemetry/span.go new file mode 100644 index 00000000000..533a023c8ef --- /dev/null +++ b/sdk/go/telemetry/span.go @@ -0,0 +1,33 @@ +package telemetry + +import ( + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/codes" + "go.opentelemetry.io/otel/trace" +) + +// Encapsulate can be applied to a span to indicate that this span should +// collapse its children by default. 
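The ProxyTraceProvider above is what makes "live" telemetry possible: every mutation on a proxied span (SetAttributes, SetStatus, SetName, AddEvent, RecordError) fires the onUpdate callback, so processors can re-export a span before it ends. A minimal sketch of wiring it up directly, assuming the SDK package is importable as shown (the import path and the printed output are illustrative only):

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/trace"

	"dagger.io/dagger/telemetry" // assumed import path for the SDK package above
)

func main() {
	// onUpdate fires on every span mutation, not just when the span ends.
	tp := telemetry.NewProxyTraceProvider(
		sdktrace.NewTracerProvider(),
		func(s trace.Span) {
			fmt.Println("span updated:", s.SpanContext().SpanID())
		},
	)

	_, span := tp.Tracer("example").Start(context.Background(), "demo")
	span.SetAttributes(attribute.String("phase", "running")) // triggers onUpdate
	span.End()
}

Note that End deliberately does not fire onUpdate; the SDK's own OnEnd hook already covers span completion.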
+func Encapsulate() trace.SpanStartOption { + return trace.WithAttributes(attribute.Bool(UIEncapsulateAttr, true)) +} + +// Internal can be applied to a span to indicate that this span should not be +// shown to the user by default. +func Internal() trace.SpanStartOption { + return trace.WithAttributes(attribute.Bool(InternalAttr, true)) +} + +// End is a helper to end a span with an error if the function returns an error. +// +// It is optimized for use as a defer one-liner with a function that has a +// named error return value, conventionally `rerr`. +// +// defer telemetry.End(span, func() error { return rerr }) +func End(span trace.Span, fn func() error) { + if err := fn(); err != nil { + span.RecordError(err) + span.SetStatus(codes.Error, err.Error()) + } + span.End() +} diff --git a/sdk/java/dagger-codegen-maven-plugin/src/main/java/io/dagger/codegen/DaggerCLIUtils.java b/sdk/java/dagger-codegen-maven-plugin/src/main/java/io/dagger/codegen/DaggerCLIUtils.java index 693b47330d5..a3d84ed8e56 100644 --- a/sdk/java/dagger-codegen-maven-plugin/src/main/java/io/dagger/codegen/DaggerCLIUtils.java +++ b/sdk/java/dagger-codegen-maven-plugin/src/main/java/io/dagger/codegen/DaggerCLIUtils.java @@ -26,7 +26,9 @@ public static String getBinary(String defaultCLIPath) { public static InputStream query(InputStream query, String binPath) { ByteArrayOutputStream out = new ByteArrayOutputStream(); - FluentProcess.start(binPath, "query", "--silent") + // HACK: for some reason writing to stderr just causes it to hang since + // we're not reading from stderr, so we redirect it to /dev/null. + FluentProcess.start("sh", "-c", "$0 query 2>/dev/null", binPath) .withTimeout(Duration.of(60, ChronoUnit.SECONDS)) .inputStream(query) .writeToOutputStream(out); diff --git a/sdk/python/runtime/discovery.go b/sdk/python/runtime/discovery.go index 14878fd2bf1..0b5e8f736fd 100644 --- a/sdk/python/runtime/discovery.go +++ b/sdk/python/runtime/discovery.go @@ -5,6 +5,7 @@ import ( "fmt" "path" "strings" + "sync" "github.com/pelletier/go-toml/v2" "golang.org/x/sync/errgroup" @@ -66,6 +67,9 @@ type Discovery struct { // configuration, either from loading pyproject.toml or reacting to the // the presence of certain files like .python-version. EnableCustomConfig bool + + // Used to synchronize updates. 
+ mu sync.Mutex } func NewDiscovery(cfg UserConfig) *Discovery { @@ -168,7 +172,9 @@ func (d *Discovery) loadModInfo(ctx context.Context) error { if err != nil { return fmt.Errorf("get module source subpath: %w", err) } + d.mu.Lock() d.SubPath = p + d.mu.Unlock() return nil }) @@ -176,9 +182,11 @@ func (d *Discovery) loadModInfo(ctx context.Context) error { // d.Source() depends on SubPath <-doneSubPath entries, _ := d.Source().Entries(gctx) + d.mu.Lock() for _, entry := range entries { d.FileSet[entry] = struct{}{} } + d.mu.Unlock() return nil }) @@ -187,7 +195,9 @@ func (d *Discovery) loadModInfo(ctx context.Context) error { if err != nil { return fmt.Errorf("get module name: %w", err) } + d.mu.Lock() d.ModName = modName + d.mu.Unlock() return nil }) @@ -210,7 +220,9 @@ func (d *Discovery) loadModInfo(ctx context.Context) error { return fmt.Errorf("check if config exists: %w", err) } if !exists { + d.mu.Lock() d.IsInit = true + d.mu.Unlock() } return nil }) @@ -247,7 +259,9 @@ func (d *Discovery) loadFiles(ctx context.Context) error { if err != nil { return fmt.Errorf("get file contents of %q: %w", name, err) } + d.mu.Lock() d.Files[name] = strings.TrimSpace(contents) + d.mu.Unlock() return nil }) } @@ -264,7 +278,9 @@ func (d *Discovery) loadFiles(ctx context.Context) error { // effort). entries, err := d.Source().Glob(gctx, "**/*.py") if len(entries) > 0 { + d.mu.Lock() d.FileSet["*.py"] = struct{}{} + d.mu.Unlock() } else if err == nil && !d.IsInit { // This can also happen on `dagger develop --sdk` if there's also // a pyproject.toml present to customize the base container. diff --git a/telemetry/attrs.go b/telemetry/attrs.go new file mode 100644 index 00000000000..cf4602ffbb2 --- /dev/null +++ b/telemetry/attrs.go @@ -0,0 +1,64 @@ +package telemetry + +// The following attributes are used by the UI to interpret spans and control +// their behavior in the UI. +const ( + // The base64-encoded, protobuf-marshalled callpbv1.Call that this span + // represents. + DagCallAttr = "dagger.io/dag.call" + + // The digest of the protobuf-marshalled Call that this span represents. + // + // This value acts as a node ID in the conceptual DAG. + DagDigestAttr = "dagger.io/dag.digest" + + // The list of DAG digests that the span depends on. + // + // This is not currently used by the UI, but it could be used to drive higher + // level DAG walking processes without having to unmarshal the full call. + DagInputsAttr = "dagger.io/dag.inputs" + + // The DAG call digest that the call returned, if the call returned an + // Object. + // + // This information is used to simplify values in the UI by showing their + // highest-level creator. For example, if foo().bar() returns a().b().c(), we + // will show foo().bar() instead of a().b().c() as it will be a more + // recognizable value to the user. + DagOutputAttr = "dagger.io/dag.output" + + // Indicates that this span is "internal" and can be hidden by default. + // + // Internal spans may typically be revealed with a toggle. + UIInternalAttr = "dagger.io/ui.internal" + + // Hide child spans by default. + UIEncapsulateAttr = "dagger.io/ui.encapsulate" + + // Substitute the span for its children and move its logs to its parent. + UIPassthroughAttr = "dagger.io/ui.passthrough" //nolint: gosec // lol + + // Causes the parent span to act as if Passthrough was set. + UIMaskAttr = "dagger.io/ui.mask" + + // NB: the following attributes are not currently used. + + // Indicates that this span was a cache hit and did nothing. 
+ CachedAttr = "dagger.io/dag.cached" + + // Indicates that this span was interrupted. + CanceledAttr = "dagger.io/dag.canceled" + + // The base64-encoded, protobuf-marshalled Buildkit LLB op payload that this + // span represents. + LLBOpAttr = "dagger.io/llb.op" + + // The amount of progress that needs to be reached. + ProgressTotalAttr = "dagger.io/progress.total" + + // Current value for the progress. + ProgressCurrentAttr = "dagger.io/progress.current" + + // Indicates the units for the progress numbers. + ProgressUnitsAttr = "dagger.io/progress.units" +) diff --git a/telemetry/env.go b/telemetry/env.go new file mode 100644 index 00000000000..0ed40a9b1f2 --- /dev/null +++ b/telemetry/env.go @@ -0,0 +1,22 @@ +package telemetry + +import ( + "context" + "strings" + + "go.opentelemetry.io/otel/propagation" +) + +func PropagationEnv(ctx context.Context) []string { + tc := propagation.TraceContext{} + carrier := propagation.MapCarrier{} + tc.Inject(ctx, carrier) + env := []string{} + for _, f := range tc.Fields() { + if val, ok := carrier[f]; ok { + // traceparent vs. TRACEPARENT matters + env = append(env, strings.ToUpper(f)+"="+val) + } + } + return env +} diff --git a/telemetry/env/env.go b/telemetry/env/env.go new file mode 100644 index 00000000000..90b3d0afc13 --- /dev/null +++ b/telemetry/env/env.go @@ -0,0 +1,173 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package env // import "go.opentelemetry.io/otel/sdk/internal/env" + +import ( + "os" + "strconv" +) + +// Environment variable names. +const ( + // BatchSpanProcessorScheduleDelayKey is the delay interval between two + // consecutive exports (i.e. 5000). + BatchSpanProcessorScheduleDelayKey = "OTEL_BSP_SCHEDULE_DELAY" + // BatchSpanProcessorExportTimeoutKey is the maximum allowed time to + // export data (i.e. 3000). + BatchSpanProcessorExportTimeoutKey = "OTEL_BSP_EXPORT_TIMEOUT" + // BatchSpanProcessorMaxQueueSizeKey is the maximum queue size (i.e. 2048). + BatchSpanProcessorMaxQueueSizeKey = "OTEL_BSP_MAX_QUEUE_SIZE" + // BatchSpanProcessorMaxExportBatchSizeKey is the maximum batch size (i.e. + // 512). Note: it must be less than or equal to + // EnvBatchSpanProcessorMaxQueueSize. + BatchSpanProcessorMaxExportBatchSizeKey = "OTEL_BSP_MAX_EXPORT_BATCH_SIZE" + + // AttributeValueLengthKey is the maximum allowed attribute value size. + AttributeValueLengthKey = "OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT" + + // AttributeCountKey is the maximum allowed span attribute count. + AttributeCountKey = "OTEL_ATTRIBUTE_COUNT_LIMIT" + + // SpanAttributeValueLengthKey is the maximum allowed attribute value size + // for a span. + SpanAttributeValueLengthKey = "OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT" + + // SpanAttributeCountKey is the maximum allowed span attribute count for a + // span. + SpanAttributeCountKey = "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT" + + // SpanEventCountKey is the maximum allowed span event count. 
+ SpanEventCountKey = "OTEL_SPAN_EVENT_COUNT_LIMIT" + + // SpanEventAttributeCountKey is the maximum allowed attribute per span + // event count. + SpanEventAttributeCountKey = "OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT" + + // SpanLinkCountKey is the maximum allowed span link count. + SpanLinkCountKey = "OTEL_SPAN_LINK_COUNT_LIMIT" + + // SpanLinkAttributeCountKey is the maximum allowed attribute per span + // link count. + SpanLinkAttributeCountKey = "OTEL_LINK_ATTRIBUTE_COUNT_LIMIT" +) + +// firstInt returns the value of the first matching environment variable from +// keys. If the value is not an integer or no match is found, defaultValue is +// returned. +func firstInt(defaultValue int, keys ...string) int { + for _, key := range keys { + value := os.Getenv(key) + if value == "" { + continue + } + + intValue, err := strconv.Atoi(value) + if err != nil { + return defaultValue + } + + return intValue + } + + return defaultValue +} + +// IntEnvOr returns the int value of the environment variable with name key if +// it exists, it is not empty, and the value is an int. Otherwise, defaultValue is returned. +func IntEnvOr(key string, defaultValue int) int { + value := os.Getenv(key) + if value == "" { + return defaultValue + } + + intValue, err := strconv.Atoi(value) + if err != nil { + return defaultValue + } + + return intValue +} + +// BatchSpanProcessorScheduleDelay returns the environment variable value for +// the OTEL_BSP_SCHEDULE_DELAY key if it exists, otherwise defaultValue is +// returned. +func BatchSpanProcessorScheduleDelay(defaultValue int) int { + return IntEnvOr(BatchSpanProcessorScheduleDelayKey, defaultValue) +} + +// BatchSpanProcessorExportTimeout returns the environment variable value for +// the OTEL_BSP_EXPORT_TIMEOUT key if it exists, otherwise defaultValue is +// returned. +func BatchSpanProcessorExportTimeout(defaultValue int) int { + return IntEnvOr(BatchSpanProcessorExportTimeoutKey, defaultValue) +} + +// BatchSpanProcessorMaxQueueSize returns the environment variable value for +// the OTEL_BSP_MAX_QUEUE_SIZE key if it exists, otherwise defaultValue is +// returned. +func BatchSpanProcessorMaxQueueSize(defaultValue int) int { + return IntEnvOr(BatchSpanProcessorMaxQueueSizeKey, defaultValue) +} + +// BatchSpanProcessorMaxExportBatchSize returns the environment variable value for +// the OTEL_BSP_MAX_EXPORT_BATCH_SIZE key if it exists, otherwise defaultValue +// is returned. +func BatchSpanProcessorMaxExportBatchSize(defaultValue int) int { + return IntEnvOr(BatchSpanProcessorMaxExportBatchSizeKey, defaultValue) +} + +// SpanAttributeValueLength returns the environment variable value for the +// OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT key if it exists. Otherwise, the +// environment variable value for OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT is +// returned or defaultValue if that is not set. +func SpanAttributeValueLength(defaultValue int) int { + return firstInt(defaultValue, SpanAttributeValueLengthKey, AttributeValueLengthKey) +} + +// SpanAttributeCount returns the environment variable value for the +// OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT key if it exists. Otherwise, the +// environment variable value for OTEL_ATTRIBUTE_COUNT_LIMIT is returned or +// defaultValue if that is not set. +func SpanAttributeCount(defaultValue int) int { + return firstInt(defaultValue, SpanAttributeCountKey, AttributeCountKey) +} + +// SpanEventCount returns the environment variable value for the +// OTEL_SPAN_EVENT_COUNT_LIMIT key if it exists, otherwise defaultValue is +// returned. 
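PropagationEnv, defined in telemetry/env.go above, flattens the current trace context into TRACEPARENT-style environment variables so that a child process can continue the same trace. A rough sketch of how a caller might use it when shelling out; the command being run is a placeholder, and in real use ctx would already carry an active span:

package main

import (
	"context"
	"os"
	"os/exec"

	"github.com/dagger/dagger/telemetry"
)

func main() {
	// Background is only used here to keep the sketch self-contained; in
	// practice ctx comes from an instrumented caller with a live span.
	ctx := context.Background()

	cmd := exec.CommandContext(ctx, "dagger", "version") // placeholder command
	// The child sees TRACEPARENT (and any other propagation fields) in its
	// environment and can join the same trace.
	cmd.Env = append(os.Environ(), telemetry.PropagationEnv(ctx)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}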
+func SpanEventCount(defaultValue int) int { + return IntEnvOr(SpanEventCountKey, defaultValue) +} + +// SpanEventAttributeCount returns the environment variable value for the +// OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT key if it exists, otherwise defaultValue +// is returned. +func SpanEventAttributeCount(defaultValue int) int { + return IntEnvOr(SpanEventAttributeCountKey, defaultValue) +} + +// SpanLinkCount returns the environment variable value for the +// OTEL_SPAN_LINK_COUNT_LIMIT key if it exists, otherwise defaultValue is +// returned. +func SpanLinkCount(defaultValue int) int { + return IntEnvOr(SpanLinkCountKey, defaultValue) +} + +// SpanLinkAttributeCount returns the environment variable value for the +// OTEL_LINK_ATTRIBUTE_COUNT_LIMIT key if it exists, otherwise defaultValue is +// returned. +func SpanLinkAttributeCount(defaultValue int) int { + return IntEnvOr(SpanLinkAttributeCountKey, defaultValue) +} diff --git a/telemetry/event.go b/telemetry/event.go deleted file mode 100644 index c72a95949e0..00000000000 --- a/telemetry/event.go +++ /dev/null @@ -1,68 +0,0 @@ -package telemetry - -import ( - "time" - - "github.com/dagger/dagger/core/pipeline" -) - -const eventVersion = "2023-02-28.01" - -type Event struct { - Version string `json:"v"` - Timestamp time.Time `json:"ts"` - - RunID string `json:"run_id,omitempty"` - - Type EventType `json:"type"` - Payload Payload `json:"payload"` -} - -type EventType string - -type EventScope string - -const ( - EventScopeSystem = EventScope("system") - EventScopeRun = EventScope("run") -) - -const ( - EventTypeOp = EventType("op") - EventTypeLog = EventType("log") - EventTypeAnalytics = EventType("analytics") -) - -type Payload interface { - Type() EventType - Scope() EventScope -} - -var _ Payload = OpPayload{} - -type OpPayload struct { - OpID string `json:"op_id"` - OpName string `json:"op_name"` - Pipeline pipeline.Path `json:"pipeline"` - Internal bool `json:"internal"` - Inputs []string `json:"inputs"` - - Started *time.Time `json:"started"` - Completed *time.Time `json:"completed"` - Cached bool `json:"cached"` - Error string `json:"error"` -} - -func (OpPayload) Type() EventType { return EventTypeOp } -func (OpPayload) Scope() EventScope { return EventScopeRun } - -var _ Payload = LogPayload{} - -type LogPayload struct { - OpID string `json:"op_id"` - Data string `json:"data"` - Stream int `json:"stream"` -} - -func (LogPayload) Type() EventType { return EventTypeLog } -func (LogPayload) Scope() EventScope { return EventScopeRun } diff --git a/telemetry/exporters.go b/telemetry/exporters.go new file mode 100644 index 00000000000..2f9c8bef8a1 --- /dev/null +++ b/telemetry/exporters.go @@ -0,0 +1,140 @@ +package telemetry + +import ( + "context" + "log/slog" + + "github.com/dagger/dagger/telemetry/sdklog" + "github.com/moby/buildkit/identity" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/noop" + "golang.org/x/sync/errgroup" +) + +type MultiSpanExporter []sdktrace.SpanExporter + +var _ sdktrace.SpanExporter = MultiSpanExporter{} + +func (m MultiSpanExporter) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error { + eg := new(errgroup.Group) + for _, e := range m { + e := e + eg.Go(func() error { + return e.ExportSpans(ctx, spans) + }) + } + return eg.Wait() +} + +func (m MultiSpanExporter) Shutdown(ctx context.Context) error { + eg := new(errgroup.Group) + for _, e := range m { + e := e + eg.Go(func() error { + return e.Shutdown(ctx) + }) + } + return 
eg.Wait() +} + +type SpanForwarder struct { + Processors []sdktrace.SpanProcessor +} + +var _ sdktrace.SpanExporter = SpanForwarder{} + +type discardWritesSpan struct { + noop.Span + sdktrace.ReadOnlySpan +} + +func (s discardWritesSpan) SpanContext() trace.SpanContext { + return s.ReadOnlySpan.SpanContext() +} + +func (m SpanForwarder) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error { + eg := new(errgroup.Group) + for _, p := range m.Processors { + p := p + eg.Go(func() error { + for _, span := range spans { + if span.EndTime().Before(span.StartTime()) { + p.OnStart(ctx, discardWritesSpan{noop.Span{}, span}) + } else { + p.OnEnd(span) + } + } + return nil + }) + } + return eg.Wait() +} + +func (m SpanForwarder) Shutdown(ctx context.Context) error { + eg := new(errgroup.Group) + for _, p := range m.Processors { + p := p + eg.Go(func() error { + return p.Shutdown(ctx) + }) + } + return eg.Wait() +} + +// FilterLiveSpansExporter is a SpanExporter that filters out spans that are +// currently running, as indicated by an end time older than its start time +// (typically year 1753). +type FilterLiveSpansExporter struct { + sdktrace.SpanExporter +} + +// ExportSpans passes each span to the span processor's OnEnd hook so that it +// can be batched and emitted more efficiently. +func (exp FilterLiveSpansExporter) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error { + batch := identity.NewID() + filtered := make([]sdktrace.ReadOnlySpan, 0, len(spans)) + for _, span := range spans { + if span.StartTime().After(span.EndTime()) { + slog.Debug("skipping unfinished span", "batch", batch, "span", span.Name(), "id", span.SpanContext().SpanID()) + } else { + slog.Debug("keeping finished span", "batch", batch, "span", span.Name(), "id", span.SpanContext().SpanID()) + filtered = append(filtered, span) + } + } + if len(filtered) == 0 { + return nil + } + return exp.SpanExporter.ExportSpans(ctx, filtered) +} + +type LogForwarder struct { + Processors []sdklog.LogProcessor +} + +var _ sdklog.LogExporter = LogForwarder{} + +func (m LogForwarder) ExportLogs(ctx context.Context, logs []*sdklog.LogData) error { + eg := new(errgroup.Group) + for _, e := range m.Processors { + e := e + eg.Go(func() error { + for _, log := range logs { + e.OnEmit(ctx, log) + } + return nil + }) + } + return eg.Wait() +} + +func (m LogForwarder) Shutdown(ctx context.Context) error { + eg := new(errgroup.Group) + for _, e := range m.Processors { + e := e + eg.Go(func() error { + return e.Shutdown(ctx) + }) + } + return eg.Wait() +} diff --git a/telemetry/generate.go b/telemetry/generate.go new file mode 100644 index 00000000000..e559f3c88c6 --- /dev/null +++ b/telemetry/generate.go @@ -0,0 +1,3 @@ +package telemetry + +//go:generate protoc -I=./ -I=./opentelemetry-proto --go_out=. --go_opt=paths=source_relative --go-grpc_out=. 
--go-grpc_opt=paths=source_relative servers.proto diff --git a/telemetry/graphql.go b/telemetry/graphql.go new file mode 100644 index 00000000000..76f1b8e9269 --- /dev/null +++ b/telemetry/graphql.go @@ -0,0 +1,106 @@ +package telemetry + +import ( + "context" + "fmt" + "log/slog" + "strings" + + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/trace" + + "github.com/dagger/dagger/dagql" + "github.com/dagger/dagger/dagql/call" +) + +func AroundFunc(ctx context.Context, self dagql.Object, id *call.ID) (context.Context, func(res dagql.Typed, cached bool, rerr error)) { + if isIntrospection(id) { + return ctx, dagql.NoopDone + } + + var base string + if id.Base() == nil { + base = "Query" + } else { + base = id.Base().Type().ToAST().Name() + } + spanName := fmt.Sprintf("%s.%s", base, id.Field()) + + callAttr, err := id.Call().Encode() + if err != nil { + slog.Warn("failed to encode call", "id", id.Display(), "err", err) + return ctx, dagql.NoopDone + } + attrs := []attribute.KeyValue{ + attribute.String(DagDigestAttr, id.Digest().String()), + attribute.String(DagCallAttr, callAttr), + } + if idInputs, err := id.Inputs(); err != nil { + slog.Warn("failed to compute inputs(id)", "id", id.Display(), "err", err) + } else { + inputs := make([]string, len(idInputs)) + for i, input := range idInputs { + inputs[i] = input.String() + } + attrs = append(attrs, attribute.StringSlice(DagInputsAttr, inputs)) + } + if dagql.IsInternal(ctx) { + attrs = append(attrs, attribute.Bool(UIInternalAttr, true)) + } + + ctx, span := dagql.Tracer().Start(ctx, spanName, trace.WithAttributes(attrs...)) + ctx, _, _ = WithStdioToOtel(ctx, dagql.InstrumentationLibrary) + + return ctx, func(res dagql.Typed, cached bool, err error) { + defer End(span, func() error { return err }) + + if cached { + // TODO maybe this should be an event? + span.SetAttributes(attribute.Bool(CachedAttr, true)) + } + + if err != nil { + // NB: we do +id.Display() instead of setting it as a field to avoid + // dobule quoting + slog.Warn("error resolving "+id.Display(), "error", err) + } + + // don't consider loadFooFromID to be a 'creator' as that would only + // obfuscate the real ID. + // + // NB: so long as the simplifying process rejects larger IDs, this + // shouldn't be necessary, but it seems like a good idea to just never even + // consider it. + isLoader := strings.HasPrefix(id.Field(), "load") && strings.HasSuffix(id.Field(), "FromID") + + // record an object result as an output of this vertex + // + // this allows the UI to "simplify" this ID back to its creator ID when it + // sees it in the future if it wants to, e.g. showing mymod.unit().stdout() + // instead of the full container().from().[...].stdout() ID + if obj, ok := res.(dagql.Object); ok && !isLoader { + objDigest := obj.ID().Digest() + span.SetAttributes(attribute.String(DagOutputAttr, objDigest.String())) + } + } +} + +// isIntrospection detects whether an ID is an introspection query. +// +// These queries tend to be very large and are not interesting for users to +// see. 
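The measuring interceptors above only record protobuf payload sizes via slog; they attach to a client connection like any other gRPC interceptor. A sketch, assuming a plaintext connection to a placeholder address:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"github.com/dagger/dagger/telemetry"
)

func main() {
	conn, err := grpc.Dial("127.0.0.1:8080", // placeholder address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithUnaryInterceptor(telemetry.MeasuringUnaryClientInterceptor()),
		grpc.WithStreamInterceptor(telemetry.MeasuringStreamClientInterceptor()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Every unary call and stream message on conn now produces a slog.Debug
	// line with its protobuf-encoded request/response size.
}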
+func isIntrospection(id *call.ID) bool { + if id.Base() == nil { + switch id.Field() { + case "__schema", + "currentTypeDefs", + "currentFunctionCall", + "currentModule": + return true + default: + return false + } + } else { + return isIntrospection(id.Base()) + } +} diff --git a/telemetry/grpc.go b/telemetry/grpc.go new file mode 100644 index 00000000000..17e393c604e --- /dev/null +++ b/telemetry/grpc.go @@ -0,0 +1,63 @@ +package telemetry + +import ( + "context" + "log/slog" + + grpc "google.golang.org/grpc" + "google.golang.org/protobuf/proto" +) + +func MeasuringUnaryClientInterceptor() grpc.UnaryClientInterceptor { + return func(ctx context.Context, method string, req, reply any, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error { + reqSize := proto.Size(req.(proto.Message)) + err := invoker(ctx, method, req, reply, cc, opts...) + respSize := proto.Size(reply.(proto.Message)) + slog.Debug("measuring gRPC client request", + "reqSize", reqSize, + "respSize", respSize) + return err + } +} + +func MeasuringUnaryServerInterceptor() grpc.UnaryServerInterceptor { + return func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp any, err error) { + reqSize := proto.Size(req.(proto.Message)) + resp, err = handler(ctx, req) + respSize := proto.Size(resp.(proto.Message)) + slog.Debug("measuring gRPC server method", + "method", info.FullMethod, + "reqSize", reqSize, + "respSize", respSize) + return resp, err + } +} + +func MeasuringStreamClientInterceptor() grpc.StreamClientInterceptor { + return func(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, streamer grpc.Streamer, opts ...grpc.CallOption) (grpc.ClientStream, error) { + clientStream, err := streamer(ctx, desc, cc, method, opts...) + if err != nil { + return nil, err + } + return &measuringClientStream{ClientStream: clientStream}, nil + } +} + +type measuringClientStream struct { + grpc.ClientStream +} + +func (s *measuringClientStream) SendMsg(m any) error { + msgSize := proto.Size(m.(proto.Message)) + slog.Debug("measuring client stream SendMsg", "msgSize", msgSize) + return s.ClientStream.SendMsg(m) +} + +func (s *measuringClientStream) RecvMsg(m any) error { + err := s.ClientStream.RecvMsg(m) + if err == nil { + msgSize := proto.Size(m.(proto.Message)) + slog.Debug("measuring client stream RecvMsg", "msgSize", msgSize) + } + return err +} diff --git a/telemetry/inflight/batch_processor.go b/telemetry/inflight/batch_processor.go new file mode 100644 index 00000000000..116bff64c91 --- /dev/null +++ b/telemetry/inflight/batch_processor.go @@ -0,0 +1,457 @@ +package inflight + +import ( + "context" + "sync" + "sync/atomic" + "time" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/sdk/trace" + otrace "go.opentelemetry.io/otel/trace" + + "github.com/dagger/dagger/telemetry/env" +) + +// Defaults for BatchSpanProcessorOptions. +const ( + DefaultMaxQueueSize = 2048 + DefaultScheduleDelay = 5000 + DefaultExportTimeout = 30000 + DefaultMaxExportBatchSize = 512 + + defaultSpanKeepAlive = 30 * time.Second +) + +// BatchSpanProcessorOption configures a BatchSpanProcessor. +type BatchSpanProcessorOption func(o *BatchSpanProcessorOptions) + +// BatchSpanProcessorOptions is configuration settings for a +// BatchSpanProcessor. +type BatchSpanProcessorOptions struct { + // MaxQueueSize is the maximum queue size to buffer spans for delayed processing. If the + // queue gets full it drops the spans. Use BlockOnQueueFull to change this behavior. 
+ // The default value of MaxQueueSize is 2048. + MaxQueueSize int + + // BatchTimeout is the maximum duration for constructing a batch. Processor + // forcefully sends available spans when timeout is reached. + // The default value of BatchTimeout is 5000 msec. + BatchTimeout time.Duration + + // ExportTimeout specifies the maximum duration for exporting spans. If the timeout + // is reached, the export will be cancelled. + // The default value of ExportTimeout is 30000 msec. + ExportTimeout time.Duration + + // MaxExportBatchSize is the maximum number of spans to process in a single batch. + // If there are more than one batch worth of spans then it processes multiple batches + // of spans one batch after the other without any delay. + // The default value of MaxExportBatchSize is 512. + MaxExportBatchSize int + + // BlockOnQueueFull blocks onEnd() and onStart() method if the queue is full + // AND if BlockOnQueueFull is set to true. + // Blocking option should be used carefully as it can severely affect the performance of an + // application. + BlockOnQueueFull bool +} + +// batchSpanProcessor is a SpanProcessor that batches asynchronously-received +// spans and sends them to a trace.Exporter when complete. +type batchSpanProcessor struct { + e trace.SpanExporter + o BatchSpanProcessorOptions + + queue chan trace.ReadOnlySpan + dropped uint32 + + batch []trace.ReadOnlySpan + batchMutex sync.Mutex + batchSpans map[otrace.SpanID]int + inProgressSpans map[otrace.SpanID]*inProgressSpan + timer *time.Timer + stopWait sync.WaitGroup + stopOnce sync.Once + stopCh chan struct{} + stopped atomic.Bool +} + +type inProgressSpan struct { + trace.ReadOnlySpan + UpdatedAt time.Time +} + +var _ trace.SpanProcessor = (*batchSpanProcessor)(nil) + +// NewBatchSpanProcessor creates a new SpanProcessor that will send completed +// span batches to the exporter with the supplied options. +// +// If the exporter is nil, the span processor will perform no action. +func NewBatchSpanProcessor(exporter trace.SpanExporter, options ...BatchSpanProcessorOption) *batchSpanProcessor { + maxQueueSize := env.BatchSpanProcessorMaxQueueSize(DefaultMaxQueueSize) + maxExportBatchSize := env.BatchSpanProcessorMaxExportBatchSize(DefaultMaxExportBatchSize) + + if maxExportBatchSize > maxQueueSize { + if DefaultMaxExportBatchSize > maxQueueSize { + maxExportBatchSize = maxQueueSize + } else { + maxExportBatchSize = DefaultMaxExportBatchSize + } + } + + o := BatchSpanProcessorOptions{ + BatchTimeout: time.Duration(env.BatchSpanProcessorScheduleDelay(DefaultScheduleDelay)) * time.Millisecond, + ExportTimeout: time.Duration(env.BatchSpanProcessorExportTimeout(DefaultExportTimeout)) * time.Millisecond, + MaxQueueSize: maxQueueSize, + MaxExportBatchSize: maxExportBatchSize, + } + for _, opt := range options { + opt(&o) + } + bsp := &batchSpanProcessor{ + e: exporter, + o: o, + batch: make([]trace.ReadOnlySpan, 0, o.MaxExportBatchSize), + batchSpans: make(map[otrace.SpanID]int), + inProgressSpans: make(map[otrace.SpanID]*inProgressSpan), + timer: time.NewTimer(o.BatchTimeout), + queue: make(chan trace.ReadOnlySpan, o.MaxQueueSize), + stopCh: make(chan struct{}), + } + + bsp.stopWait.Add(1) + go func() { + defer bsp.stopWait.Done() + bsp.processQueue() + bsp.drainQueue() + }() + + return bsp +} + +// OnStart method enqueues a trace.ReadOnlySpan for later processing. 
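This constructor mirrors what telemetry/init.go later in this patch does for each live exporter: build a batch processor with a very short batch timeout so that in-progress spans are flushed almost immediately. A sketch of the same wiring in isolation, using the stdout exporter purely as a stand-in for a real live exporter:

package main

import (
	"log"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"

	"github.com/dagger/dagger/telemetry/inflight"
)

func main() {
	// Stand-in exporter; the real code points this at the TUI or Cloud.
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}

	// Flush roughly every 100ms so spans surface while still running,
	// matching the NearlyImmediate timeout used by Init in this patch.
	proc := inflight.NewBatchSpanProcessor(exp,
		inflight.WithBatchTimeout(100*time.Millisecond))

	tp := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(proc))
	otel.SetTracerProvider(tp)
}

Note that OnUpdate is only driven when the provider is wrapped in the ProxyTraceProvider from earlier in the patch; a plain SDK tracer provider only calls OnStart and OnEnd.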
+func (bsp *batchSpanProcessor) OnStart(parent context.Context, s trace.ReadWriteSpan) { + bsp.enqueue(s) +} + +// OnUpdate method enqueues a trace.ReadOnlySpan for later processing. +func (bsp *batchSpanProcessor) OnUpdate(s trace.ReadOnlySpan) { + bsp.enqueue(s) +} + +// OnEnd method enqueues a trace.ReadOnlySpan for later processing. +func (bsp *batchSpanProcessor) OnEnd(s trace.ReadOnlySpan) { + bsp.enqueue(s) +} + +// Shutdown flushes the queue and waits until all spans are processed. +// It only executes once. Subsequent call does nothing. +func (bsp *batchSpanProcessor) Shutdown(ctx context.Context) error { + var err error + bsp.stopOnce.Do(func() { + bsp.stopped.Store(true) + wait := make(chan struct{}) + go func() { + close(bsp.stopCh) + bsp.stopWait.Wait() + if bsp.e != nil { + if err := bsp.e.Shutdown(ctx); err != nil { + otel.Handle(err) + } + } + close(wait) + }() + // Wait until the wait group is done or the context is cancelled + select { + case <-wait: + case <-ctx.Done(): + err = ctx.Err() + } + }) + return err +} + +type forceFlushSpan struct { + trace.ReadOnlySpan + flushed chan struct{} +} + +func (f forceFlushSpan) SpanContext() otrace.SpanContext { + return otrace.NewSpanContext(otrace.SpanContextConfig{TraceFlags: otrace.FlagsSampled}) +} + +// ForceFlush exports all ended spans that have not yet been exported. +func (bsp *batchSpanProcessor) ForceFlush(ctx context.Context) error { + // Interrupt if context is already canceled. + if err := ctx.Err(); err != nil { + return err + } + + // Do nothing after Shutdown. + if bsp.stopped.Load() { + return nil + } + + var err error + if bsp.e != nil { + flushCh := make(chan struct{}) + if bsp.enqueueBlockOnQueueFull(ctx, forceFlushSpan{flushed: flushCh}) { + select { + case <-bsp.stopCh: + // The batchSpanProcessor is Shutdown. + return nil + case <-flushCh: + // Processed any items in queue prior to ForceFlush being called + case <-ctx.Done(): + return ctx.Err() + } + } + + wait := make(chan error) + go func() { + wait <- bsp.exportSpans(ctx) + close(wait) + }() + // Wait until the export is finished or the context is cancelled/timed out + select { + case err = <-wait: + case <-ctx.Done(): + err = ctx.Err() + } + } + return err +} + +// WithMaxQueueSize returns a BatchSpanProcessorOption that configures the +// maximum queue size allowed for a BatchSpanProcessor. +func WithMaxQueueSize(size int) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.MaxQueueSize = size + } +} + +// WithMaxExportBatchSize returns a BatchSpanProcessorOption that configures +// the maximum export batch size allowed for a BatchSpanProcessor. +func WithMaxExportBatchSize(size int) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.MaxExportBatchSize = size + } +} + +// WithBatchTimeout returns a BatchSpanProcessorOption that configures the +// maximum delay allowed for a BatchSpanProcessor before it will export any +// held span (whether the queue is full or not). +func WithBatchTimeout(delay time.Duration) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.BatchTimeout = delay + } +} + +// WithExportTimeout returns a BatchSpanProcessorOption that configures the +// amount of time a BatchSpanProcessor waits for an exporter to export before +// abandoning the export. 
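Since both ForceFlush and Shutdown take a context, a caller that wants bounded draining (for example, when a client disconnects) can wrap them in a timeout. A small sketch; the package name, function name, and the 5-second budget are arbitrary:

package telemetryutil // illustrative package name

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// DrainProcessor flushes whatever is queued and then shuts the processor
// down, bounding the whole operation so a stuck exporter cannot hang the
// caller.
func DrainProcessor(proc sdktrace.SpanProcessor) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := proc.ForceFlush(ctx); err != nil {
		otel.Handle(err)
	}
	if err := proc.Shutdown(ctx); err != nil {
		otel.Handle(err)
	}
}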
+func WithExportTimeout(timeout time.Duration) BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.ExportTimeout = timeout + } +} + +// WithBlocking returns a BatchSpanProcessorOption that configures a +// BatchSpanProcessor to wait for enqueue operations to succeed instead of +// dropping data when the queue is full. +func WithBlocking() BatchSpanProcessorOption { + return func(o *BatchSpanProcessorOptions) { + o.BlockOnQueueFull = true + } +} + +// exportSpans is a subroutine of processing and draining the queue. +func (bsp *batchSpanProcessor) exportSpans(ctx context.Context) error { + bsp.timer.Reset(bsp.o.BatchTimeout) + + bsp.batchMutex.Lock() + defer bsp.batchMutex.Unlock() + + if bsp.o.ExportTimeout > 0 { + var cancel context.CancelFunc + ctx, cancel = context.WithTimeout(ctx, bsp.o.ExportTimeout) + defer cancel() + } + + // Update in progress spans + for _, span := range bsp.batch { + if span.EndTime().Before(span.StartTime()) { + bsp.inProgressSpans[span.SpanContext().SpanID()] = &inProgressSpan{ + ReadOnlySpan: span, + UpdatedAt: time.Now(), + } + } else { + delete(bsp.inProgressSpans, span.SpanContext().SpanID()) + } + } + + // add in progress spans that are not part of the batch + for _, span := range bsp.inProgressSpans { + // ignore spans that were recently updated + if span.UpdatedAt.IsZero() || span.UpdatedAt.Before(time.Now().Add(-defaultSpanKeepAlive)) { + bsp.addToBatch(span.ReadOnlySpan) + span.UpdatedAt = time.Now() + } + } + + if l := len(bsp.batch); l > 0 { + err := bsp.e.ExportSpans(ctx, bsp.batch) + + // A new batch is always created after exporting, even if the batch failed to be exported. + // + // It is up to the exporter to implement any type of retry logic if a batch is failing + // to be exported, since it is specific to the protocol and backend being sent to. + bsp.batch = bsp.batch[:0] + bsp.batchSpans = make(map[otrace.SpanID]int) + + if err != nil { + return err + } + } + return nil +} + +// processQueue removes spans from the `queue` channel until processor +// is shut down. It calls the exporter in batches of up to MaxExportBatchSize +// waiting up to BatchTimeout to form a batch. +func (bsp *batchSpanProcessor) processQueue() { + defer bsp.timer.Stop() + + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + for { + select { + case <-bsp.stopCh: + return + case <-bsp.timer.C: + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + case sd := <-bsp.queue: + if ffs, ok := sd.(forceFlushSpan); ok { + close(ffs.flushed) + continue + } + bsp.batchMutex.Lock() + bsp.addToBatch(sd) + shouldExport := len(bsp.batch) >= bsp.o.MaxExportBatchSize + bsp.batchMutex.Unlock() + if shouldExport { + if !bsp.timer.Stop() { + <-bsp.timer.C + } + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + } + } + } +} + +func (bsp *batchSpanProcessor) addToBatch(sd trace.ReadOnlySpan) { + if i, ok := bsp.batchSpans[sd.SpanContext().SpanID()]; ok { + bsp.batch[i] = sd + return + } + bsp.batchSpans[sd.SpanContext().SpanID()] = len(bsp.batch) + bsp.batch = append(bsp.batch, sd) +} + +// drainQueue awaits the any caller that had added to bsp.stopWait +// to finish the enqueue, then exports the final batch. +func (bsp *batchSpanProcessor) drainQueue() { + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + for { + select { + case sd := <-bsp.queue: + if _, ok := sd.(forceFlushSpan); ok { + // Ignore flush requests as they are not valid spans. 
+ continue + } + + bsp.batchMutex.Lock() + bsp.addToBatch(sd) + shouldExport := len(bsp.batch) == bsp.o.MaxExportBatchSize + bsp.batchMutex.Unlock() + + if shouldExport { + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + } + default: + // There are no more enqueued spans. Make final export. + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + return + } + } +} + +func (bsp *batchSpanProcessor) enqueue(sd trace.ReadOnlySpan) { + ctx := context.TODO() + + // Do not enqueue spans after Shutdown. + if bsp.stopped.Load() { + return + } + + // Do not enqueue spans if we are just going to drop them. + if bsp.e == nil { + return + } + + if bsp.o.BlockOnQueueFull { + bsp.enqueueBlockOnQueueFull(ctx, sd) + } else { + bsp.enqueueDrop(ctx, sd) + } +} + +func (bsp *batchSpanProcessor) enqueueBlockOnQueueFull(ctx context.Context, sd trace.ReadOnlySpan) bool { + if !sd.SpanContext().IsSampled() { + return false + } + + select { + case bsp.queue <- sd: + return true + case <-ctx.Done(): + return false + } +} + +func (bsp *batchSpanProcessor) enqueueDrop(ctx context.Context, sd trace.ReadOnlySpan) bool { + if !sd.SpanContext().IsSampled() { + return false + } + + select { + case bsp.queue <- sd: + return true + default: + atomic.AddUint32(&bsp.dropped, 1) + } + return false +} + +// MarshalLog is the marshaling function used by the logging system to represent this Span Processor. +func (bsp *batchSpanProcessor) MarshalLog() interface{} { + return struct { + Type string + SpanExporter trace.SpanExporter + Config BatchSpanProcessorOptions + }{ + Type: "BatchSpanProcessor", + SpanExporter: bsp.e, + Config: bsp.o, + } +} diff --git a/telemetry/inflight/processor.go b/telemetry/inflight/processor.go new file mode 100644 index 00000000000..2ecdac5a136 --- /dev/null +++ b/telemetry/inflight/processor.go @@ -0,0 +1,139 @@ +package inflight + +import ( + "context" + "sync" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/sdk/trace" +) + +type TracerUpdater interface{ tracer() } + +// simpleSpanProcessor is a SpanProcessor that synchronously sends all +// completed Spans to a trace.Exporter immediately. +type simpleSpanProcessor struct { + exporterMu sync.Mutex + exporter trace.SpanExporter + stopOnce sync.Once +} + +var _ trace.SpanProcessor = (*simpleSpanProcessor)(nil) + +// NewSimpleSpanProcessor returns a new SpanProcessor that will synchronously +// send completed spans to the exporter immediately. +// +// This SpanProcessor is not recommended for production use. The synchronous +// nature of this SpanProcessor make it good for testing, debugging, or +// showing examples of other feature, but it will be slow and have a high +// computation resource usage overhead. The BatchSpanProcessor is recommended +// for production use instead. +func NewSimpleSpanProcessor(exporter trace.SpanExporter) *simpleSpanProcessor { + ssp := &simpleSpanProcessor{ + exporter: exporter, + } + + return ssp +} + +// OnStart does nothing. +func (ssp *simpleSpanProcessor) OnStart(ctx context.Context, s trace.ReadWriteSpan) { + ssp.exporterMu.Lock() + defer ssp.exporterMu.Unlock() + + if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() { + if err := ssp.exporter.ExportSpans(context.Background(), []trace.ReadOnlySpan{s}); err != nil { + otel.Handle(err) + } + } +} + +// OnEnd immediately exports a ReadOnlySpan. 
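The keep-alive logic in exportSpans above and the FilterLiveSpansExporter earlier in this patch both lean on the same convention: a span that has not ended yet reports an EndTime earlier than its StartTime (the zero time). A small helper makes the check explicit; the package and function names here are illustrative, not part of the patch:

package livespans // illustrative package name

import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// IsLive reports whether a span is still in progress, using the same
// "end time before start time" convention as the processors in this patch.
func IsLive(s sdktrace.ReadOnlySpan) bool {
	return s.EndTime().Before(s.StartTime())
}

// Split separates a batch into completed and still-running spans, which is
// essentially what FilterLiveSpansExporter does before forwarding.
func Split(spans []sdktrace.ReadOnlySpan) (completed, live []sdktrace.ReadOnlySpan) {
	for _, s := range spans {
		if IsLive(s) {
			live = append(live, s)
		} else {
			completed = append(completed, s)
		}
	}
	return completed, live
}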
+func (ssp *simpleSpanProcessor) OnEnd(s trace.ReadOnlySpan) { + ssp.exporterMu.Lock() + defer ssp.exporterMu.Unlock() + + if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() { + if err := ssp.exporter.ExportSpans(context.Background(), []trace.ReadOnlySpan{s}); err != nil { + otel.Handle(err) + } + } +} + +// OnStart does nothing. +func (ssp *simpleSpanProcessor) OnUpdate(s trace.ReadOnlySpan) { + ssp.exporterMu.Lock() + defer ssp.exporterMu.Unlock() + + if ssp.exporter != nil && s.SpanContext().TraceFlags().IsSampled() { + if err := ssp.exporter.ExportSpans(context.Background(), []trace.ReadOnlySpan{s}); err != nil { + otel.Handle(err) + } + } +} + +// Shutdown shuts down the exporter this SimpleSpanProcessor exports to. +func (ssp *simpleSpanProcessor) Shutdown(ctx context.Context) error { + var err error + ssp.stopOnce.Do(func() { + stopFunc := func(exp trace.SpanExporter) (<-chan error, func()) { + done := make(chan error) + return done, func() { done <- exp.Shutdown(ctx) } + } + + // The exporter field of the simpleSpanProcessor needs to be zeroed to + // signal it is shut down, meaning all subsequent calls to OnEnd will + // be gracefully ignored. This needs to be done synchronously to avoid + // any race condition. + // + // A closure is used to keep reference to the exporter and then the + // field is zeroed. This ensures the simpleSpanProcessor is shut down + // before the exporter. This order is important as it avoids a + // potential deadlock. If the exporter shut down operation generates a + // span, that span would need to be exported. Meaning, OnEnd would be + // called and try acquiring the lock that is held here. + ssp.exporterMu.Lock() + done, shutdown := stopFunc(ssp.exporter) + ssp.exporter = nil + ssp.exporterMu.Unlock() + + go shutdown() + + // Wait for the exporter to shut down or the deadline to expire. + select { + case err = <-done: + case <-ctx.Done(): + // It is possible for the exporter to have immediately shut down + // and the context to be done simultaneously. In that case this + // outer select statement will randomly choose a case. This will + // result in a different returned error for similar scenarios. + // Instead, double check if the exporter shut down at the same + // time and return that error if so. This will ensure consistency + // as well as ensure the caller knows the exporter shut down + // successfully (they can already determine if the deadline is + // expired given they passed the context). + select { + case err = <-done: + default: + err = ctx.Err() + } + } + }) + return err +} + +// ForceFlush does nothing as there is no data to flush. +func (ssp *simpleSpanProcessor) ForceFlush(context.Context) error { + return nil +} + +// MarshalLog is the marshaling function used by the logging system to represent this Span Processor. 
+func (ssp *simpleSpanProcessor) MarshalLog() interface{} { + return struct { + Type string + Exporter trace.SpanExporter + }{ + Type: "SimpleSpanProcessor", + Exporter: ssp.exporter, + } +} diff --git a/telemetry/inflight/proxy.go b/telemetry/inflight/proxy.go new file mode 100644 index 00000000000..df5d93a9953 --- /dev/null +++ b/telemetry/inflight/proxy.go @@ -0,0 +1,94 @@ +package inflight + +import ( + "context" + + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/codes" + tracesdk "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/embedded" +) + +type ProxyTraceProvider struct { + embedded.TracerProvider + + tp *tracesdk.TracerProvider + onUpdate func(trace.Span) +} + +func NewProxyTraceProvider(tp *tracesdk.TracerProvider, onUpdate func(trace.Span)) *ProxyTraceProvider { + return &ProxyTraceProvider{ + tp: tp, + onUpdate: onUpdate, + } +} + +func (tp *ProxyTraceProvider) Tracer(name string, options ...trace.TracerOption) trace.Tracer { + return &ProxyTracer{ + tracer: tp.tp.Tracer(name, options...), + onUpdate: tp.onUpdate, + } +} + +func (tp *ProxyTraceProvider) ForceFlush(ctx context.Context) error { + return tp.tp.ForceFlush(ctx) +} + +func (tp *ProxyTraceProvider) Shutdown(ctx context.Context) error { + return tp.tp.Shutdown(ctx) +} + +type ProxyTracer struct { + embedded.Tracer + tracer trace.Tracer + onUpdate func(trace.Span) +} + +func (t ProxyTracer) Start(ctx context.Context, spanName string, opts ...trace.SpanStartOption) (context.Context, trace.Span) { + ctx, span := t.tracer.Start(ctx, spanName, opts...) + return ctx, proxySpan{sp: span, onUpdate: t.onUpdate} +} + +type proxySpan struct { + embedded.Span + sp trace.Span + onUpdate func(trace.Span) +} + +var _ trace.Span = proxySpan{} + +func (s proxySpan) SpanContext() trace.SpanContext { return s.sp.SpanContext() } + +func (s proxySpan) IsRecording() bool { return s.sp.IsRecording() } + +func (s proxySpan) SetStatus(code codes.Code, message string) { + s.sp.SetStatus(code, message) + s.onUpdate(s.sp) +} + +// func (s proxySpan) SetError(v bool) { s.sp.SetError(v) } + +func (s proxySpan) SetAttributes(attributes ...attribute.KeyValue) { + s.sp.SetAttributes(attributes...) + s.onUpdate(s.sp) +} + +func (s proxySpan) End(opts ...trace.SpanEndOption) { s.sp.End(opts...) } + +func (s proxySpan) RecordError(err error, opts ...trace.EventOption) { + s.sp.RecordError(err, opts...) + s.onUpdate(s.sp) +} + +func (s proxySpan) AddEvent(event string, opts ...trace.EventOption) { + s.sp.AddEvent(event, opts...) 
+ s.onUpdate(s.sp) +} + +func (s proxySpan) SetName(name string) { + s.sp.SetName(name) + s.onUpdate(s.sp) +} + +func (s proxySpan) TracerProvider() trace.TracerProvider { return s.sp.TracerProvider() } diff --git a/telemetry/init.go b/telemetry/init.go new file mode 100644 index 00000000000..588c06175e2 --- /dev/null +++ b/telemetry/init.go @@ -0,0 +1,443 @@ +package telemetry + +import ( + "context" + "fmt" + "log/slog" + "net" + "net/url" + "os" + "strings" + "sync" + "time" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp" + "go.opentelemetry.io/otel/log" + "go.opentelemetry.io/otel/propagation" + "go.opentelemetry.io/otel/sdk/resource" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + semconv "go.opentelemetry.io/otel/semconv/v1.24.0" + "go.opentelemetry.io/otel/trace" + "google.golang.org/grpc" + + "github.com/dagger/dagger/telemetry/inflight" + "github.com/dagger/dagger/telemetry/sdklog" + "github.com/dagger/dagger/telemetry/sdklog/otlploggrpc" + "github.com/dagger/dagger/telemetry/sdklog/otlploghttp" +) + +var configuredCloudSpanExporter sdktrace.SpanExporter +var configuredCloudLogsExporter sdklog.LogExporter +var ocnfiguredCloudExportersOnce sync.Once + +func ConfiguredCloudExporters(ctx context.Context) (sdktrace.SpanExporter, sdklog.LogExporter, bool) { + ocnfiguredCloudExportersOnce.Do(func() { + cloudToken := os.Getenv("DAGGER_CLOUD_TOKEN") + if cloudToken == "" { + return + } + + cloudURL := os.Getenv("DAGGER_CLOUD_URL") + if cloudURL == "" { + cloudURL = "https://api.dagger.cloud" + } + + cloudEndpoint, err := url.Parse(cloudURL) + if err != nil { + slog.Warn("bad cloud URL", "error", err) + return + } + + tracesURL := cloudEndpoint.JoinPath("v1", "traces") + logsURL := cloudEndpoint.JoinPath("v1", "logs") + + headers := map[string]string{ + "Authorization": "Bearer " + cloudToken, + } + + configuredCloudSpanExporter, err = otlptracehttp.New(ctx, + otlptracehttp.WithEndpointURL(tracesURL.String()), + otlptracehttp.WithHeaders(headers)) + if err != nil { + slog.Warn("failed to configure cloud tracing", "error", err) + return + } + + cfg := otlploghttp.Config{ + Endpoint: logsURL.Host, + URLPath: logsURL.Path, + Insecure: logsURL.Scheme != "https", + Headers: headers, + } + configuredCloudLogsExporter = otlploghttp.NewClient(cfg) + }) + + return configuredCloudSpanExporter, configuredCloudLogsExporter, + configuredCloudSpanExporter != nil +} + +func OtelConfigured() bool { + for _, env := range os.Environ() { + if strings.HasPrefix(env, "OTEL_") { + return true + } + } + return false +} + +var configuredSpanExporter sdktrace.SpanExporter +var configuredSpanExporterOnce sync.Once + +func ConfiguredSpanExporter(ctx context.Context) (sdktrace.SpanExporter, bool) { + ctx = context.WithoutCancel(ctx) + + configuredSpanExporterOnce.Do(func() { + if !OtelConfigured() { + return + } + + var err error + + var proto string + if v := os.Getenv("OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"); v != "" { + proto = v + } else if v := os.Getenv("OTEL_EXPORTER_OTLP_PROTOCOL"); v != "" { + proto = v + } else { + // https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/protocol/exporter.md#specify-protocol + proto = "http/protobuf" + } + + var endpoint string + if v := os.Getenv("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"); v != "" { + endpoint = v + } else if v := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT"); v != "" { + if proto == "http/protobuf" { + endpoint, err 
= url.JoinPath(v, "v1", "traces") + if err != nil { + slog.Warn("failed to join path", "error", err) + return + } + } else { + endpoint = v + } + } + + slog.Debug("configuring tracing via env", "protocol", proto) + + switch proto { + case "http/protobuf", "http": + configuredSpanExporter, err = otlptracehttp.New(ctx, + otlptracehttp.WithEndpointURL(endpoint)) + case "grpc": + var u *url.URL + u, err = url.Parse(endpoint) + if err != nil { + slog.Warn("bad OTLP logs endpoint %q: %w", endpoint, err) + return + } + opts := []otlptracegrpc.Option{ + otlptracegrpc.WithEndpointURL(endpoint), + } + if u.Scheme == "unix" { + dialer := func(ctx context.Context, addr string) (net.Conn, error) { + return net.Dial(u.Scheme, u.Path) + } + opts = append(opts, + otlptracegrpc.WithDialOption(grpc.WithContextDialer(dialer)), + otlptracegrpc.WithInsecure()) + } + configuredSpanExporter, err = otlptracegrpc.New(ctx, opts...) + default: + err = fmt.Errorf("unknown OTLP protocol: %s", proto) + } + if err != nil { + slog.Warn("failed to configure tracing", "error", err) + } + }) + return configuredSpanExporter, configuredSpanExporter != nil +} + +var configuredLogExporter sdklog.LogExporter +var configuredLogExporterOnce sync.Once + +func ConfiguredLogExporter(ctx context.Context) (sdklog.LogExporter, bool) { + ctx = context.WithoutCancel(ctx) + + configuredLogExporterOnce.Do(func() { + var err error + + var endpoint string + if v := os.Getenv("OTEL_EXPORTER_OTLP_LOGS_ENDPOINT"); v != "" { + endpoint = v + } else if v := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT"); v != "" { + // we can't assume all OTLP endpoints supprot logs. better to be + // explicit than have noisy otel errors. + slog.Debug("note: intentionally not sending logs to OTEL_EXPORTER_OTLP_ENDPOINT; set OTEL_EXPORTER_OTLP_LOGS_ENDPOINT if needed") + return + } + if endpoint == "" { + return + } + + var proto string + if v := os.Getenv("OTEL_EXPORTER_OTLP_LOGS_PROTOCOL"); v != "" { + proto = v + } else if v := os.Getenv("OTEL_EXPORTER_OTLP_PROTOCOL"); v != "" { + proto = v + } else { + // https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/protocol/exporter.md#specify-protocol + proto = "http/protobuf" + } + + slog.Debug("configuring logging via env", "protocol", proto, "endpoint", endpoint) + + u, err := url.Parse(endpoint) + if err != nil { + slog.Warn("bad OTLP logs endpoint %q: %w", endpoint, err) + return + } + + switch proto { + case "http/protobuf", "http": + cfg := otlploghttp.Config{ + Endpoint: u.Host, + URLPath: u.Path, + Insecure: u.Scheme != "https", + Headers: map[string]string{}, + } + if headers := os.Getenv("OTEL_EXPORTER_OTLP_HEADERS"); headers != "" { + for _, header := range strings.Split(headers, ",") { + name, value, _ := strings.Cut(header, "=") + cfg.Headers[name] = value + } + } + configuredLogExporter = otlploghttp.NewClient(cfg) + + case "grpc": + opts := []otlploggrpc.Option{ + otlploggrpc.WithEndpointURL(endpoint), + } + if u.Scheme == "unix" { + dialer := func(ctx context.Context, addr string) (net.Conn, error) { + return net.Dial(u.Scheme, u.Path) + } + opts = append(opts, + otlploggrpc.WithDialOption(grpc.WithContextDialer(dialer)), + otlploggrpc.WithInsecure()) + } + client := otlploggrpc.NewClient(opts...) 
+ err = client.Start(ctx) + configuredLogExporter = client + default: + err = fmt.Errorf("unknown OTLP protocol: %s", proto) + } + if err != nil { + slog.Warn("failed to configure logging", "error", err) + } + }) + return configuredLogExporter, configuredLogExporter != nil +} + +// FallbackResource is the fallback resource definition. A more specific +// resource should be set in Init. +func FallbackResource() *resource.Resource { + return resource.NewWithAttributes( + semconv.SchemaURL, + semconv.ServiceNameKey.String("dagger"), + ) +} + +var ( + // set by Init, closed by Close + tracerProvider *inflight.ProxyTraceProvider + loggerProvider *sdklog.LoggerProvider +) + +// LiveSpanProcessor is a SpanProcessor that can additionally receive updates +// for a span at runtime, rather than waiting until the span ends. +type LiveSpanProcessor interface { + sdktrace.SpanProcessor + + // OnUpdate method enqueues a trace.ReadOnlySpan for later processing. + OnUpdate(s sdktrace.ReadOnlySpan) +} + +type Config struct { + // Auto-detect exporters from OTEL_* env variables. + Detect bool + + // LiveTraceExporters are exporters that can receive updates for spans at runtime, + // rather than waiting until the span ends. + // + // Example: TUI, Cloud + LiveTraceExporters []sdktrace.SpanExporter + + // BatchedTraceExporters are exporters that receive spans in batches, after the + // spans have ended. + // + // Example: Honeycomb, Jaeger, etc. + BatchedTraceExporters []sdktrace.SpanExporter + + // LiveLogExporters are exporters that receive logs in batches of ~100ms. + LiveLogExporters []sdklog.LogExporter + + // Resource is the resource describing this component and runtime + // environment. + Resource *resource.Resource +} + +// NearlyImmediate is 100ms, below which has diminishing returns in terms of +// visual perception vs. performance cost. +const NearlyImmediate = 100 * time.Millisecond + +var ForceLiveTrace = os.Getenv("FORCE_LIVE_TRACE") != "" + +// Logger returns a logger with the given name. +func Logger(name string) log.Logger { + return loggerProvider.Logger(name) // TODO more instrumentation attrs +} + +var SpanProcessors = []sdktrace.SpanProcessor{} +var LogProcessors = []sdklog.LogProcessor{} + +// Init sets up the global OpenTelemetry providers tracing, logging, and +// someday metrics providers. It is called by the CLI, the engine, and the +// container shim, so it needs to be versatile. +func Init(ctx context.Context, cfg Config) context.Context { + slog.Debug("initializing telemetry") + + if p, ok := os.LookupEnv("TRACEPARENT"); ok { + slog.Debug("found TRACEPARENT", "value", p) + ctx = propagation.TraceContext{}.Extract(ctx, propagation.MapCarrier{"traceparent": p}) + } + + // Set up a text map propagator so that things, well, propagate. The default + // is a noop. + otel.SetTextMapPropagator(propagation.TraceContext{}) + + // Log to slog. 
+ otel.SetErrorHandler(otel.ErrorHandlerFunc(func(err error) { + slog.Error("OpenTelemetry error", "error", err) + })) + + if cfg.Resource == nil { + cfg.Resource = FallbackResource() + } + + if cfg.Detect { + if spans, logs, ok := ConfiguredCloudExporters(ctx); ok { + cfg.LiveTraceExporters = append(cfg.LiveTraceExporters, spans) + cfg.LiveLogExporters = append(cfg.LiveLogExporters, logs) + } + if exp, ok := ConfiguredSpanExporter(ctx); ok { + if ForceLiveTrace { + cfg.LiveTraceExporters = append(cfg.LiveTraceExporters, exp) + } else { + cfg.BatchedTraceExporters = append(cfg.BatchedTraceExporters, + // Filter out unfinished spans to avoid confusing external systems. + // + // Normally we avoid sending them here by virtue of putting this into + // BatchedTraceExporters, but that only applies to the local process. + // Unfinished spans may end up here if they're proxied out of the + // engine via Params.EngineTrace. + FilterLiveSpansExporter{exp}) + } + } + if exp, ok := ConfiguredLogExporter(ctx); ok { + cfg.LiveLogExporters = append(cfg.LiveLogExporters, exp) + } + } + + traceOpts := []sdktrace.TracerProviderOption{ + sdktrace.WithResource(cfg.Resource), + } + + liveProcessors := make([]LiveSpanProcessor, 0, len(cfg.LiveTraceExporters)) + for _, exporter := range cfg.LiveTraceExporters { + processor := inflight.NewBatchSpanProcessor(exporter, + inflight.WithBatchTimeout(NearlyImmediate)) + liveProcessors = append(liveProcessors, processor) + SpanProcessors = append(SpanProcessors, processor) + } + for _, exporter := range cfg.BatchedTraceExporters { + processor := sdktrace.NewBatchSpanProcessor(exporter) + SpanProcessors = append(SpanProcessors, processor) + } + for _, proc := range SpanProcessors { + traceOpts = append(traceOpts, sdktrace.WithSpanProcessor(proc)) + } + + tracerProvider = inflight.NewProxyTraceProvider( + sdktrace.NewTracerProvider(traceOpts...), + func(s trace.Span) { // OnUpdate + if ro, ok := s.(sdktrace.ReadOnlySpan); ok && s.IsRecording() { + for _, processor := range liveProcessors { + processor.OnUpdate(ro) + } + } + }, + ) + + // Register our TracerProvider as the global so any imported instrumentation + // in the future will default to using it. + // + // NB: this is also necessary so that we can establish a root span, otherwise + // telemetry doesn't work. + otel.SetTracerProvider(tracerProvider) + + // Set up a log provider if configured. + if len(cfg.LiveLogExporters) > 0 { + lp := sdklog.NewLoggerProvider(cfg.Resource) + for _, exp := range cfg.LiveLogExporters { + processor := sdklog.NewBatchLogProcessor(exp, + sdklog.WithBatchTimeout(NearlyImmediate)) + LogProcessors = append(LogProcessors, processor) + lp.RegisterLogProcessor(processor) + } + loggerProvider = lp + + // TODO: someday do the following (once it exists) + // Register our TracerProvider as the global so any imported + // instrumentation in the future will default to using it. + // otel.SetLoggerProvider(loggerProvider) + } + + return ctx +} + +// Flush drains telemetry data, and is typically called just before a client +// goes away. +// +// NB: now that we wait for all spans to complete, this is less necessary, but +// it seems wise to keep it anyway, as the spots where it are needed are hard +// to find. 
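For orientation, a typical client-side lifecycle with this package looks roughly like the following (a minimal sketch; it assumes the package is imported as telemetry, and the Config values are illustrative):

    ctx := telemetry.Init(context.Background(), telemetry.Config{
        Detect:   true, // pick up OTEL_* exporter configuration from the environment
        Resource: telemetry.FallbackResource(),
    })
    defer telemetry.Close() // flush and shut down providers on the way out

    ctx, span := otel.Tracer("example").Start(ctx, "do-something")
    defer span.End()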
+func Flush(ctx context.Context) { + slog.Debug("flushing processors") + if tracerProvider != nil { + if err := tracerProvider.ForceFlush(ctx); err != nil { + slog.Error("failed to flush spans", "error", err) + } + } + slog.Debug("done flushing processors") +} + +// Close shuts down the global OpenTelemetry providers, flushing any remaining +// data to the configured exporters. +func Close() { + flushCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second) + defer cancel() + Flush(flushCtx) + if tracerProvider != nil { + if err := tracerProvider.Shutdown(flushCtx); err != nil { + slog.Error("failed to shut down tracer provider", "error", err) + } + } + if loggerProvider != nil { + if err := loggerProvider.Shutdown(flushCtx); err != nil { + slog.Error("failed to shut down logger provider", "error", err) + } + } +} diff --git a/telemetry/labels.go b/telemetry/labels.go new file mode 100644 index 00000000000..209e92e6956 --- /dev/null +++ b/telemetry/labels.go @@ -0,0 +1,498 @@ +package telemetry + +import ( + "crypto/sha256" + "encoding/base64" + "errors" + "fmt" + "log/slog" + "os" + "os/exec" + "regexp" + "runtime" + "strconv" + "strings" + "sync" + + "github.com/denisbrodbeck/machineid" + "github.com/go-git/go-git/v5" + "github.com/go-git/go-git/v5/plumbing" + "github.com/go-git/go-git/v5/plumbing/object" + "github.com/google/go-github/v59/github" +) + +type Labels map[string]string + +var defaultLabels Labels +var labelsOnce sync.Once + +func LoadDefaultLabels(workdir, clientEngineVersion string) Labels { + labelsOnce.Do(func() { + defaultLabels = Labels{}. + WithCILabels(). + WithClientLabels(clientEngineVersion). + WithVCSLabels(workdir) + }) + return defaultLabels +} + +func (labels Labels) UserAgent() string { + out := []string{} + for k, v := range labels { + out = append(out, fmt.Sprintf("%s:%s", k, v)) + } + return strings.Join(out, ",") +} + +func (labels Labels) WithEngineLabel(engineName string) Labels { + labels["dagger.io/engine"] = engineName + return labels +} + +func (labels Labels) WithServerLabels(engineVersion, os, arch string, cacheEnabled bool) Labels { + labels["dagger.io/server.os"] = os + labels["dagger.io/server.arch"] = arch + labels["dagger.io/server.version"] = engineVersion + labels["dagger.io/server.cache.enabled"] = strconv.FormatBool(cacheEnabled) + return labels +} + +func (labels Labels) WithClientLabels(engineVersion string) Labels { + labels["dagger.io/client.os"] = runtime.GOOS + labels["dagger.io/client.arch"] = runtime.GOARCH + labels["dagger.io/client.version"] = engineVersion + + machineID, err := machineid.ProtectedID("dagger") + if err == nil { + labels["dagger.io/client.machine_id"] = machineID + } + + return labels +} + +func (labels Labels) WithVCSLabels(workdir string) Labels { + return labels. + WithGitLabels(workdir). + WithGitHubLabels(). + WithGitLabLabels(). 
+ WithCircleCILabels() +} + +func (labels Labels) WithGitLabels(workdir string) Labels { + repo, err := git.PlainOpenWithOptions(workdir, &git.PlainOpenOptions{ + DetectDotGit: true, + }) + if err != nil { + if !errors.Is(err, git.ErrRepositoryNotExists) { + slog.Warn("failed to open git repository", "err", err) + } + return labels + } + + origin, err := repo.Remote("origin") + if err == nil { + urls := origin.Config().URLs + if len(urls) == 0 { + return labels + } + + endpoint, err := parseGitURL(urls[0]) + if err != nil { + slog.Warn("failed to parse git remote URL", "err", err) + return labels + } + + labels["dagger.io/git.remote"] = endpoint + } + + head, err := repo.Head() + if err != nil { + slog.Warn("failed to get repo HEAD", "err", err) + return labels + } + + commit, err := repo.CommitObject(head.Hash()) + if err != nil { + slog.Warn("failed to get commit object", "err", err) + return labels + } + + // Checks if the commit is a merge commit in the context of pull request + // Only GitHub needs to be handled, as GitLab doesn't detach the head in MR context + if os.Getenv("GITHUB_EVENT_NAME") == "pull_request" && commit.NumParents() > 1 { + // Get the pull request's origin branch name + branch := os.Getenv("GITHUB_HEAD_REF") + + // List of remotes function to try fetching from: origin and fork + fetchFuncs := []fetchFunc{fetchFromOrigin, fetchFromFork} + + var branchCommit *object.Commit + var err error + + for _, fetch := range fetchFuncs { + branchCommit, err = fetch(repo, branch) + if err == nil { + commit = branchCommit + break + } else { + slog.Warn("failed to fetch branch", "err", err) + } + } + } + + title, _, _ := strings.Cut(commit.Message, "\n") + + labels["dagger.io/git.ref"] = commit.Hash.String() + labels["dagger.io/git.author.name"] = commit.Author.Name + labels["dagger.io/git.author.email"] = commit.Author.Email + labels["dagger.io/git.committer.name"] = commit.Committer.Name + labels["dagger.io/git.committer.email"] = commit.Committer.Email + labels["dagger.io/git.title"] = title // first line from commit message + + // check if ref is a tag or branch + refs, err := repo.References() + if err != nil { + slog.Warn("failed to get refs", "err", err) + return labels + } + + err = refs.ForEach(func(ref *plumbing.Reference) error { + if ref.Hash() == commit.Hash { + if ref.Name().IsTag() { + labels["dagger.io/git.tag"] = ref.Name().Short() + } + if ref.Name().IsBranch() { + labels["dagger.io/git.branch"] = ref.Name().Short() + } + } + return nil + }) + if err != nil { + slog.Warn("failed to get refs", "err", err) + return labels + } + + return labels +} + +func (labels Labels) WithGitHubLabels() Labels { + if os.Getenv("GITHUB_ACTIONS") != "true" { //nolint:goconst + return labels + } + + eventType := os.Getenv("GITHUB_EVENT_NAME") + + labels["dagger.io/vcs.event.type"] = eventType + labels["dagger.io/vcs.job.name"] = os.Getenv("GITHUB_JOB") + labels["dagger.io/vcs.triggerer.login"] = os.Getenv("GITHUB_ACTOR") + labels["dagger.io/vcs.workflow.name"] = os.Getenv("GITHUB_WORKFLOW") + + eventPath := os.Getenv("GITHUB_EVENT_PATH") + if eventPath == "" { + return labels + } + + payload, err := os.ReadFile(eventPath) + if err != nil { + slog.Warn("failed to read $GITHUB_EVENT_PATH", "err", err) + return labels + } + + event, err := github.ParseWebHook(eventType, payload) + if err != nil { + slog.Warn("failed to parse $GITHUB_EVENT_PATH", "err", err) + return labels + } + + if event, ok := event.(interface { + GetAction() string + }); ok && event.GetAction() != "" { + 
labels["github.com/event.action"] = event.GetAction() + } + + if repo, ok := getRepoIsh(event); ok { + labels["dagger.io/vcs.repo.full_name"] = repo.GetFullName() + labels["dagger.io/vcs.repo.url"] = repo.GetHTMLURL() + } + + if event, ok := event.(interface { + GetPullRequest() *github.PullRequest + }); ok && event.GetPullRequest() != nil { + pr := event.GetPullRequest() + + labels["dagger.io/vcs.change.number"] = fmt.Sprintf("%d", pr.GetNumber()) + labels["dagger.io/vcs.change.title"] = pr.GetTitle() + labels["dagger.io/vcs.change.url"] = pr.GetHTMLURL() + labels["dagger.io/vcs.change.branch"] = pr.GetHead().GetRef() + labels["dagger.io/vcs.change.head_sha"] = pr.GetHead().GetSHA() + labels["dagger.io/vcs.change.label"] = pr.GetHead().GetLabel() + } + + return labels +} + +func (labels Labels) WithGitLabLabels() Labels { + if os.Getenv("GITLAB_CI") != "true" { //nolint:goconst + return labels + } + + branchName := os.Getenv("CI_MERGE_REQUEST_SOURCE_BRANCH_NAME") + if branchName == "" { + // for a branch job, CI_MERGE_REQUEST_SOURCE_BRANCH_NAME is empty + branchName = os.Getenv("CI_COMMIT_BRANCH") + } + + changeTitle := os.Getenv("CI_MERGE_REQUEST_TITLE") + if changeTitle == "" { + changeTitle = os.Getenv("CI_COMMIT_TITLE") + } + + labels["dagger.io/vcs.repo.url"] = os.Getenv("CI_PROJECT_URL") + labels["dagger.io/vcs.repo.full_name"] = os.Getenv("CI_PROJECT_PATH") + labels["dagger.io/vcs.change.branch"] = branchName + labels["dagger.io/vcs.change.title"] = changeTitle + labels["dagger.io/vcs.change.head_sha"] = os.Getenv("CI_COMMIT_SHA") + labels["dagger.io/vcs.triggerer.login"] = os.Getenv("GITLAB_USER_LOGIN") + labels["dagger.io/vcs.event.type"] = os.Getenv("CI_PIPELINE_SOURCE") + labels["dagger.io/vcs.job.name"] = os.Getenv("CI_JOB_NAME") + labels["dagger.io/vcs.workflow.name"] = os.Getenv("CI_PIPELINE_NAME") + labels["dagger.io/vcs.change.label"] = os.Getenv("CI_MERGE_REQUEST_LABELS") + labels["gitlab.com/job.id"] = os.Getenv("CI_JOB_ID") + labels["gitlab.com/triggerer.id"] = os.Getenv("GITLAB_USER_ID") + labels["gitlab.com/triggerer.email"] = os.Getenv("GITLAB_USER_EMAIL") + labels["gitlab.com/triggerer.name"] = os.Getenv("GITLAB_USER_NAME") + + projectURL := os.Getenv("CI_MERGE_REQUEST_PROJECT_URL") + mrIID := os.Getenv("CI_MERGE_REQUEST_IID") + if projectURL != "" && mrIID != "" { + labels["dagger.io/vcs.change.url"] = fmt.Sprintf("%s/-/merge_requests/%s", projectURL, mrIID) + labels["dagger.io/vcs.change.number"] = mrIID + } + + return labels +} + +func (labels Labels) WithCircleCILabels() Labels { + if os.Getenv("CIRCLECI") != "true" { //nolint:goconst + return labels + } + + labels["dagger.io/vcs.change.branch"] = os.Getenv("CIRCLE_BRANCH") + labels["dagger.io/vcs.change.head_sha"] = os.Getenv("CIRCLE_SHA1") + labels["dagger.io/vcs.job.name"] = os.Getenv("CIRCLE_JOB") + + firstEnvLabel := func(label string, envVar []string) { + for _, envVar := range envVar { + triggererLogin := os.Getenv(envVar) + if triggererLogin != "" { + labels[label] = triggererLogin + return + } + } + } + + // environment variables beginning with "CIRCLE_PIPELINE_" are set in `.circle-ci` pipeline config + pipelineNumber := []string{ + "CIRCLE_PIPELINE_NUMBER", + } + firstEnvLabel("dagger.io/vcs.change.number", pipelineNumber) + + triggererLabels := []string{ + "CIRCLE_USERNAME", // all, but account needs to exist on circleCI + "CIRCLE_PROJECT_USERNAME", // github / bitbucket + "CIRCLE_PIPELINE_TRIGGER_LOGIN", // gitlab + } + firstEnvLabel("dagger.io/vcs.triggerer.login", triggererLabels) + + 
repoNameLabels := []string{ + "CIRCLE_PROJECT_REPONAME", // github / bitbucket + "CIRCLE_PIPELINE_REPO_FULL_NAME", // gitlab + } + firstEnvLabel("dagger.io/vcs.repo.full_name", repoNameLabels) + + vcsChangeURL := []string{ + "CIRCLE_PULL_REQUEST", // github / bitbucket, only from forks + } + firstEnvLabel("dagger.io/vcs.change.url", vcsChangeURL) + + pipelineRepoURL := os.Getenv("CIRCLE_PIPELINE_REPO_URL") + repositoryURL := os.Getenv("CIRCLE_REPOSITORY_URL") + if pipelineRepoURL != "" { // gitlab + labels["dagger.io/vcs.repo.url"] = pipelineRepoURL + } else if repositoryURL != "" { // github / bitbucket (returns the remote) + transformedURL := repositoryURL + if strings.Contains(repositoryURL, "@") { // from ssh to https + re := regexp.MustCompile(`git@(.*?):(.*?)/(.*)\.git`) + transformedURL = re.ReplaceAllString(repositoryURL, "https://$1/$2/$3") + } + labels["dagger.io/vcs.repo.url"] = transformedURL + } + + return labels +} + +type repoIsh interface { + GetFullName() string + GetHTMLURL() string +} + +func getRepoIsh(event any) (repoIsh, bool) { + switch x := event.(type) { + case *github.PushEvent: + // push event repositories aren't quite a *github.Repository for silly + // legacy reasons + return x.GetRepo(), true + case interface{ GetRepo() *github.Repository }: + return x.GetRepo(), true + default: + return nil, false + } +} + +func (labels Labels) WithCILabels() Labels { + isCIValue := "false" + if isCI() { + isCIValue = "true" + } + labels["dagger.io/ci"] = isCIValue + + vendor := "" + switch { + case os.Getenv("GITHUB_ACTIONS") == "true": //nolint:goconst + vendor = "GitHub" + case os.Getenv("CIRCLECI") == "true": //nolint:goconst + vendor = "CircleCI" + case os.Getenv("GITLAB_CI") == "true": //nolint:goconst + vendor = "GitLab" + } + if vendor != "" { + labels["dagger.io/ci.vendor"] = vendor + } + + return labels +} + +func isCI() bool { + return os.Getenv("CI") != "" || // GitHub Actions, Travis CI, CircleCI, Cirrus CI, GitLab CI, AppVeyor, CodeShip, dsari + os.Getenv("BUILD_NUMBER") != "" || // Jenkins, TeamCity + os.Getenv("RUN_ID") != "" // TaskCluster, dsari +} + +func (labels Labels) WithAnonymousGitLabels(workdir string) Labels { + labels = labels.WithGitLabels(workdir) + + for name, value := range labels { + if name == "dagger.io/git.author.email" { + labels[name] = fmt.Sprintf("%x", sha256.Sum256([]byte(value))) + } + if name == "dagger.io/git.remote" { + labels[name] = base64.StdEncoding.EncodeToString([]byte(value)) + } + } + + return labels +} + +// Define a type for functions that fetch a branch commit +type fetchFunc func(repo *git.Repository, branch string) (*object.Commit, error) + +// Function to fetch from the origin remote +func fetchFromOrigin(repo *git.Repository, branch string) (*object.Commit, error) { + // Fetch from the origin remote + cmd := exec.Command("git", "fetch", "--depth", "1", "origin", branch) + err := cmd.Run() + if err != nil { + return nil, fmt.Errorf("error fetching branch from origin: %w", err) + } + + // Get the reference of the fetched branch + refName := plumbing.ReferenceName(fmt.Sprintf("refs/remotes/origin/%s", branch)) + ref, err := repo.Reference(refName, true) + if err != nil { + return nil, fmt.Errorf("error getting reference: %w", err) + } + + // Get the commit object of the fetched branch + branchCommit, err := repo.CommitObject(ref.Hash()) + if err != nil { + return nil, fmt.Errorf("error getting commit: %w", err) + } + + return branchCommit, nil +} + +// Function to fetch from the fork remote +// GitHub forks are not 
added as remotes by default, so we need to guess the fork URL +// This is a heuristic approach, as the fork might not exist from the information we have +func fetchFromFork(repo *git.Repository, branch string) (*object.Commit, error) { + // Get the username of the person who initiated the workflow run + username := os.Getenv("GITHUB_ACTOR") + + // Get the repository name (owner/repo) + repository := os.Getenv("GITHUB_REPOSITORY") + parts := strings.Split(repository, "/") + if len(parts) < 2 { + return nil, fmt.Errorf("invalid repository format: %s", repository) + } + + // Get the server URL: "https://github.com/" in general, + // but can be different for GitHub Enterprise + serverURL := os.Getenv("GITHUB_SERVER_URL") + + forkURL := fmt.Sprintf("%s/%s/%s", serverURL, username, parts[1]) + + cmd := exec.Command("git", "remote", "add", "fork", forkURL) + err := cmd.Run() + if err != nil { + return nil, fmt.Errorf("error adding fork as remote: %w", err) + } + + cmd = exec.Command("git", "fetch", "--depth", "1", "fork", branch) + err = cmd.Run() + if err != nil { + return nil, fmt.Errorf("error fetching branch from fork: %w", err) + } + + // Get the reference of the fetched branch + refName := plumbing.ReferenceName(fmt.Sprintf("refs/remotes/fork/%s", branch)) + ref, err := repo.Reference(refName, true) + if err != nil { + return nil, fmt.Errorf("error getting reference: %w", err) + } + + // Get the commit object of the fetched branch + branchCommit, err := repo.CommitObject(ref.Hash()) + if err != nil { + return nil, fmt.Errorf("error getting commit: %w", err) + } + + return branchCommit, nil +} + +type LabelFlag struct { + Labels +} + +func NewLabelFlag() LabelFlag { + return LabelFlag{Labels: Labels{}} +} + +func (flag LabelFlag) Set(s string) error { + name, val, ok := strings.Cut(s, ":") + if !ok { + return errors.New("invalid label format (must be name:value)") + } + if flag.Labels == nil { + flag.Labels = Labels{} + } + flag.Labels[name] = val + return nil +} + +func (flag LabelFlag) Type() string { + return "labels" +} + +func (flag LabelFlag) String() string { + return flag.Labels.UserAgent() // it's fine +} diff --git a/telemetry/labels_test.go b/telemetry/labels_test.go new file mode 100644 index 00000000000..ecf2cb3f4ac --- /dev/null +++ b/telemetry/labels_test.go @@ -0,0 +1,354 @@ +package telemetry_test + +import ( + "os" + "os/exec" + "runtime" + "strings" + "testing" + + "github.com/dagger/dagger/engine" + "github.com/dagger/dagger/telemetry" + "github.com/stretchr/testify/require" +) + +func TestLoadClientLabels(t *testing.T) { + labels := telemetry.Labels{}.WithClientLabels(engine.Version) + + expected := telemetry.Labels{ + "dagger.io/client.os": runtime.GOOS, + "dagger.io/client.arch": runtime.GOARCH, + "dagger.io/client.version": engine.Version, + } + + require.Subset(t, labels, expected) +} + +func TestLoadServerLabels(t *testing.T) { + labels := telemetry.Labels{}.WithServerLabels("0.8.4", "linux", "amd64", false) + + expected := telemetry.Labels{ + "dagger.io/server.os": "linux", + "dagger.io/server.arch": "amd64", + "dagger.io/server.version": "0.8.4", + "dagger.io/server.cache.enabled": "false", + } + + require.Subset(t, labels, expected) +} + +func TestLoadGitLabels(t *testing.T) { + normalRepo := setupRepo(t) + repoHead := run(t, "git", "-C", normalRepo, "rev-parse", "HEAD") + + detachedRepo := setupRepo(t) + run(t, "git", "-C", detachedRepo, "commit", "--allow-empty", "-m", "second") + run(t, "git", "-C", detachedRepo, "commit", "--allow-empty", "-m", "third") + 
run(t, "git", "-C", detachedRepo, "checkout", "HEAD~2") + run(t, "git", "-C", detachedRepo, "merge", "main") + detachedHead := run(t, "git", "-C", detachedRepo, "rev-parse", "HEAD") + + type Example struct { + Name string + Repo string + Labels telemetry.Labels + } + + for _, example := range []Example{ + { + Name: "normal branch state", + Repo: normalRepo, + Labels: telemetry.Labels{ + "dagger.io/git.remote": "example.com", + "dagger.io/git.branch": "main", + "dagger.io/git.ref": repoHead, + "dagger.io/git.author.name": "Test User", + "dagger.io/git.author.email": "test@example.com", + "dagger.io/git.committer.name": "Test User", + "dagger.io/git.committer.email": "test@example.com", + "dagger.io/git.title": "init", + }, + }, + { + Name: "detached HEAD state", + Repo: detachedRepo, + Labels: telemetry.Labels{ + "dagger.io/git.remote": "example.com", + "dagger.io/git.branch": "main", + "dagger.io/git.ref": detachedHead, + "dagger.io/git.author.name": "Test User", + "dagger.io/git.author.email": "test@example.com", + "dagger.io/git.committer.name": "Test User", + "dagger.io/git.committer.email": "test@example.com", + "dagger.io/git.title": "third", + }, + }, + } { + example := example + t.Run(example.Name, func(t *testing.T) { + labels := telemetry.Labels{}.WithGitLabels(example.Repo) + require.Subset(t, labels, example.Labels) + }) + } +} + +func TestLoadGitHubLabels(t *testing.T) { + type Example struct { + Name string + Env []string + Labels telemetry.Labels + } + + for _, example := range []Example{ + { + Name: "workflow_dispatch", + Env: []string{ + "GITHUB_ACTIONS=true", + "GITHUB_ACTOR=vito", + "GITHUB_WORKFLOW=some-workflow", + "GITHUB_JOB=some-job", + "GITHUB_EVENT_NAME=workflow_dispatch", + "GITHUB_EVENT_PATH=testdata/workflow_dispatch.json", + }, + Labels: telemetry.Labels{ + "dagger.io/vcs.triggerer.login": "vito", + "dagger.io/vcs.event.type": "workflow_dispatch", + "dagger.io/vcs.workflow.name": "some-workflow", + "dagger.io/vcs.job.name": "some-job", + "dagger.io/vcs.repo.full_name": "dagger/testdata", + "dagger.io/vcs.repo.url": "https://github.com/dagger/testdata", + }, + }, + { + Name: "pull_request.synchronize", + Env: []string{ + "GITHUB_ACTIONS=true", + "GITHUB_ACTOR=vito", + "GITHUB_WORKFLOW=some-workflow", + "GITHUB_JOB=some-job", + "GITHUB_EVENT_NAME=pull_request", + "GITHUB_EVENT_PATH=testdata/pull_request.synchronize.json", + }, + Labels: telemetry.Labels{ + "dagger.io/vcs.triggerer.login": "vito", + "dagger.io/vcs.event.type": "pull_request", + "dagger.io/vcs.workflow.name": "some-workflow", + "dagger.io/vcs.job.name": "some-job", + "github.com/event.action": "synchronize", + "dagger.io/vcs.repo.full_name": "dagger/testdata", + "dagger.io/vcs.repo.url": "https://github.com/dagger/testdata", + "dagger.io/vcs.change.number": "2018", + "dagger.io/vcs.change.title": "dump env, use session binary from submodule", + "dagger.io/vcs.change.url": "https://github.com/dagger/testdata/pull/2018", + "dagger.io/vcs.change.head_sha": "81be07d3103b512159628bfa3aae2fbb5d255964", + "dagger.io/vcs.change.branch": "dump-env", + "dagger.io/vcs.change.label": "vito:dump-env", + }, + }, + { + Name: "push", + Env: []string{ + "GITHUB_ACTIONS=true", + "GITHUB_ACTOR=vito", + "GITHUB_WORKFLOW=some-workflow", + "GITHUB_JOB=some-job", + "GITHUB_EVENT_NAME=push", + "GITHUB_EVENT_PATH=testdata/push.json", + }, + Labels: telemetry.Labels{ + "dagger.io/vcs.triggerer.login": "vito", + "dagger.io/vcs.event.type": "push", + "dagger.io/vcs.workflow.name": "some-workflow", + "dagger.io/vcs.job.name": 
"some-job", + "dagger.io/vcs.repo.full_name": "vito/bass", + "dagger.io/vcs.repo.url": "https://github.com/vito/bass", + }, + }, + } { + example := example + t.Run(example.Name, func(t *testing.T) { + for _, e := range example.Env { + k, v, _ := strings.Cut(e, "=") + os.Setenv(k, v) + } + + labels := telemetry.Labels{}.WithGitHubLabels() + require.Subset(t, labels, example.Labels) + }) + } +} + +func TestLoadGitLabLabels(t *testing.T) { + type Example struct { + Name string + Env map[string]string + Labels telemetry.Labels + } + + for _, example := range []Example{ + { + Name: "GitLab CI merge request job", + Env: map[string]string{ + "GITLAB_CI": "true", + "CI_PROJECT_URL": "https://gitlab.com/dagger/testdata", + "CI_PROJECT_PATH": "dagger/testdata", + "CI_MERGE_REQUEST_SOURCE_BRANCH_NAME": "feature-branch", + "CI_MERGE_REQUEST_TITLE": "Some title", + "CI_MERGE_REQUEST_LABELS": "label1,label2", + "CI_COMMIT_SHA": "123abc", + "CI_PIPELINE_SOURCE": "push", + "CI_PIPELINE_NAME": "pipeline-name", + "CI_JOB_ID": "123", + "CI_JOB_NAME": "test-job", + "GITLAB_USER_ID": "789", + "GITLAB_USER_EMAIL": "user@gitlab.com", + "GITLAB_USER_NAME": "Gitlab User", + "GITLAB_USER_LOGIN": "gitlab-user", + }, + Labels: telemetry.Labels{ + "dagger.io/vcs.repo.url": "https://gitlab.com/dagger/testdata", + "dagger.io/vcs.repo.full_name": "dagger/testdata", + "dagger.io/vcs.change.branch": "feature-branch", + "dagger.io/vcs.change.title": "Some title", + "dagger.io/vcs.change.head_sha": "123abc", + "dagger.io/vcs.triggerer.login": "gitlab-user", + "dagger.io/vcs.event.type": "push", + "dagger.io/vcs.job.name": "test-job", + "dagger.io/vcs.workflow.name": "pipeline-name", + "dagger.io/vcs.change.label": "label1,label2", + "gitlab.com/job.id": "123", + "gitlab.com/triggerer.id": "789", + "gitlab.com/triggerer.email": "user@gitlab.com", + "gitlab.com/triggerer.name": "Gitlab User", + }, + }, + { + Name: "GitLab CI branch job", + Env: map[string]string{ + "GITLAB_CI": "true", + "CI_PROJECT_URL": "https://gitlab.com/dagger/testdata", + "CI_PROJECT_PATH": "dagger/testdata", + "CI_COMMIT_BRANCH": "feature-branch", + "CI_COMMIT_TITLE": "Some title", + "CI_COMMIT_SHA": "123abc", + "CI_PIPELINE_SOURCE": "push", + "CI_PIPELINE_NAME": "pipeline-name", + "CI_JOB_ID": "123", + "CI_JOB_NAME": "test-job", + "GITLAB_USER_ID": "789", + "GITLAB_USER_EMAIL": "user@gitlab.com", + "GITLAB_USER_NAME": "Gitlab User", + "GITLAB_USER_LOGIN": "gitlab-user", + }, + Labels: telemetry.Labels{ + "dagger.io/vcs.repo.url": "https://gitlab.com/dagger/testdata", + "dagger.io/vcs.repo.full_name": "dagger/testdata", + "dagger.io/vcs.change.branch": "feature-branch", + "dagger.io/vcs.change.title": "Some title", + "dagger.io/vcs.change.head_sha": "123abc", + "dagger.io/vcs.triggerer.login": "gitlab-user", + "dagger.io/vcs.event.type": "push", + "dagger.io/vcs.job.name": "test-job", + "dagger.io/vcs.workflow.name": "pipeline-name", + "dagger.io/vcs.change.label": "", + "gitlab.com/job.id": "123", + "gitlab.com/triggerer.id": "789", + "gitlab.com/triggerer.email": "user@gitlab.com", + "gitlab.com/triggerer.name": "Gitlab User", + }, + }, + } { + example := example + t.Run(example.Name, func(t *testing.T) { + // Set environment variables + for k, v := range example.Env { + os.Setenv(k, v) + } + + // Run the function and collect the result + labels := telemetry.Labels{}.WithGitLabLabels() + + // Clean up environment variables + for k := range example.Env { + os.Unsetenv(k) + } + + // Make assertions + require.Subset(t, labels, example.Labels) + }) + } +} 
+ +func TestLoadCircleCILabels(t *testing.T) { + type Example struct { + Name string + Env map[string]string + Labels telemetry.Labels + } + + for _, example := range []Example{ + { + Name: "CircleCI", + Env: map[string]string{ + "CIRCLECI": "true", + "CIRCLE_BRANCH": "main", + "CIRCLE_SHA1": "abc123", + "CIRCLE_JOB": "build", + "CIRCLE_PIPELINE_NUMBER": "42", + "CIRCLE_PIPELINE_TRIGGER_LOGIN": "circle-user", + "CIRCLE_REPOSITORY_URL": "git@github.com:user/repo.git", + "CIRCLE_PROJECT_REPONAME": "repo", + "CIRCLE_PULL_REQUEST": "https://github.com/circle/repo/pull/1", + }, + Labels: telemetry.Labels{ + "dagger.io/vcs.change.branch": "main", + "dagger.io/vcs.change.head_sha": "abc123", + "dagger.io/vcs.job.name": "build", + "dagger.io/vcs.change.number": "42", + "dagger.io/vcs.triggerer.login": "circle-user", + "dagger.io/vcs.repo.url": "https://github.com/user/repo", + "dagger.io/vcs.repo.full_name": "repo", + "dagger.io/vcs.change.url": "https://github.com/circle/repo/pull/1", + }, + }, + } { + example := example + t.Run(example.Name, func(t *testing.T) { + // Set environment variables + for k, v := range example.Env { + os.Setenv(k, v) + } + + // Run the function and collect the result + labels := telemetry.Labels{}.WithCircleCILabels() + + // Clean up environment variables + for k := range example.Env { + os.Unsetenv(k) + } + + // Make assertions + require.Subset(t, labels, example.Labels) + }) + } +} + +func run(t *testing.T, exe string, args ...string) string { //nolint: unparam + t.Helper() + cmd := exec.Command(exe, args...) + cmd.Stderr = os.Stderr + out, err := cmd.Output() + require.NoError(t, err) + return strings.TrimSpace(string(out)) +} + +func setupRepo(t *testing.T) string { + repo := t.TempDir() + run(t, "git", "-C", repo, "init") + run(t, "git", "-C", repo, "config", "--local", "--add", "user.name", "Test User") + run(t, "git", "-C", repo, "config", "--local", "--add", "user.email", "test@example.com") + run(t, "git", "-C", repo, "remote", "add", "origin", "https://example.com") + run(t, "git", "-C", repo, "checkout", "-b", "main") + run(t, "git", "-C", repo, "commit", "--allow-empty", "-m", "init") + return repo +} diff --git a/telemetry/legacy.go b/telemetry/legacy.go deleted file mode 100644 index c271c017678..00000000000 --- a/telemetry/legacy.go +++ /dev/null @@ -1,41 +0,0 @@ -package telemetry - -import ( - "github.com/dagger/dagger/tracing" - "github.com/vito/progrock" - "google.golang.org/protobuf/proto" -) - -type LegacyIDInternalizer struct { - w progrock.Writer -} - -func NewLegacyIDInternalizer(w progrock.Writer) LegacyIDInternalizer { - return LegacyIDInternalizer{w: w} -} - -var _ progrock.Writer = LegacyIDInternalizer{} - -// WriteStatus marks any vertexes with a label "id" as internal so that they -// are hidden from interfaces that predate Zenith. 
-func (f LegacyIDInternalizer) WriteStatus(status *progrock.StatusUpdate) error { - var foundIds []int - for i, v := range status.Vertexes { - if v.Label(tracing.IDLabel) == "true" { - foundIds = append(foundIds, i) - } - } - if len(foundIds) == 0 { - // avoid a full copy in the common case - return f.w.WriteStatus(status) - } - downstream := proto.Clone(status).(*progrock.StatusUpdate) - for _, i := range foundIds { - downstream.Vertexes[i].Internal = true - } - return f.w.WriteStatus(downstream) -} - -func (f LegacyIDInternalizer) Close() error { - return f.w.Close() -} diff --git a/telemetry/logging.go b/telemetry/logging.go new file mode 100644 index 00000000000..97ba43bfee6 --- /dev/null +++ b/telemetry/logging.go @@ -0,0 +1,77 @@ +package telemetry + +import ( + "context" + "io" + "log/slog" + "time" + + "github.com/lmittmann/tint" + "go.opentelemetry.io/otel/log" + + "github.com/dagger/dagger/dagql/ioctx" +) + +type OtelWriter struct { + Ctx context.Context + Logger log.Logger + Stream int +} + +func ContextLogger(ctx context.Context, level slog.Level) *slog.Logger { + return PrettyLogger(ioctx.Stderr(ctx), level) +} + +func PrettyLogger(dest io.Writer, level slog.Level) *slog.Logger { + slogOpts := &tint.Options{ + TimeFormat: time.TimeOnly, + NoColor: false, + Level: level, + } + return slog.New(tint.NewHandler(dest, slogOpts)) +} + +const ( + // We use this to identify which logs should be bubbled up to the user + // "globally" regardless of which span they came from. + GlobalLogs = "dagger.io/global" +) + +func GlobalLogger(ctx context.Context) *slog.Logger { + logW := &OtelWriter{ + Ctx: ctx, + Logger: Logger(GlobalLogs), + Stream: 2, + } + return PrettyLogger(logW, slog.LevelDebug) +} + +func WithStdioToOtel(ctx context.Context, name string) (context.Context, io.Writer, io.Writer) { + logger := Logger(name) + stdout, stderr := &OtelWriter{ + Ctx: ctx, + Logger: logger, + Stream: 1, + }, &OtelWriter{ + Ctx: ctx, + Logger: logger, + Stream: 2, + } + ctx = ioctx.WithStdout(ctx, stdout) + ctx = ioctx.WithStderr(ctx, stderr) + return ctx, stdout, stderr +} + +const ( + LogStreamAttr = "log.stream" + LogDataAttr = "log.data" +) + +func (w *OtelWriter) Write(p []byte) (int, error) { + rec := log.Record{} + rec.SetTimestamp(time.Now()) + rec.SetBody(log.StringValue(string(p))) + rec.AddAttributes(log.Int(LogStreamAttr, w.Stream)) + w.Logger.Emit(w.Ctx, rec) + return len(p), nil +} diff --git a/telemetry/opentelemetry-proto b/telemetry/opentelemetry-proto new file mode 160000 index 00000000000..9d139c87b52 --- /dev/null +++ b/telemetry/opentelemetry-proto @@ -0,0 +1 @@ +Subproject commit 9d139c87b52669a3e2825b835dd828b57a455a55 diff --git a/telemetry/pipeliner.go b/telemetry/pipeliner.go deleted file mode 100644 index c0795ade035..00000000000 --- a/telemetry/pipeliner.go +++ /dev/null @@ -1,143 +0,0 @@ -package telemetry - -import ( - "sync" - - "github.com/dagger/dagger/core/pipeline" - "github.com/vito/progrock" -) - -// Pipeliner listens to events and collects pipeline paths for vertices based -// on groups and group memberships. -type Pipeliner struct { - mu sync.Mutex - - closed bool - - // pipelinePaths stores the mapping from group IDs to pipeline paths - pipelinePaths map[string]pipeline.Path - - // memberships stores the groups IDs that a vertex is a member of - memberships map[string][]string - - // vertices stores and updates vertexes as they received, so that they can - // be associated to pipeline paths once their membership is known. 
- vertices map[string]*progrock.Vertex -} - -func NewPipeliner() *Pipeliner { - return &Pipeliner{ - pipelinePaths: map[string]pipeline.Path{}, - memberships: map[string][]string{}, - vertices: map[string]*progrock.Vertex{}, - } -} - -// PipelinedVertex is a Progrock vertex paired with all of its pipeline paths. -type PipelinedVertex struct { - *progrock.Vertex - - // Groups stores the group IDs that this vertex is a member of. Each entry - // has a corresponding entry in Pipelines. - Groups []string - - // Pipelines stores the pipeline paths computed from Progrock groups. - Pipelines []pipeline.Path -} - -func (t *Pipeliner) TrackUpdate(ev *progrock.StatusUpdate) { - t.mu.Lock() - defer t.mu.Unlock() - - if t.closed { - return - } - - for _, g := range ev.Groups { - t.pipelinePaths[g.Id] = t.groupPath(g) - } - - for _, m := range ev.Memberships { - for _, vid := range m.Vertexes { - t.memberships[vid] = append(t.memberships[vid], m.Group) - } - } - - for _, v := range ev.Vertexes { - t.vertices[v.Id] = v - } -} - -func (t *Pipeliner) Close() error { - t.mu.Lock() - t.closed = true - t.mu.Unlock() - return nil -} - -func (t *Pipeliner) Vertex(id string) (*PipelinedVertex, bool) { - t.mu.Lock() - defer t.mu.Unlock() - - v, found := t.vertices[id] - if !found { - return nil, false - } - - return t.vertex(v), true -} - -func (t *Pipeliner) Vertices() []*PipelinedVertex { - t.mu.Lock() - defer t.mu.Unlock() - - vertices := make([]*PipelinedVertex, 0, len(t.vertices)) - for _, v := range t.vertices { - vertices = append(vertices, t.vertex(v)) - } - return vertices -} - -func (t *Pipeliner) vertex(v *progrock.Vertex) *PipelinedVertex { - return &PipelinedVertex{ - Vertex: v, - Groups: t.memberships[v.Id], - Pipelines: t.pipelines(t.memberships[v.Id]), - } -} - -func (t *Pipeliner) pipelines(groups []string) []pipeline.Path { - paths := make([]pipeline.Path, 0, len(groups)) - for _, gid := range groups { - paths = append(paths, t.pipelinePaths[gid]) - } - return paths -} - -func (t *Pipeliner) groupPath(group *progrock.Group) pipeline.Path { - self := pipeline.Pipeline{ - Name: group.Name, - Weak: group.Weak, - } - for _, l := range group.Labels { - if l.Name == pipeline.ProgrockDescriptionLabel { - // Progrock doesn't have a separate 'description' field, so we escort it - // through labels instead - self.Description = l.Value - } else { - self.Labels = append(self.Labels, pipeline.Label{ - Name: l.Name, - Value: l.Value, - }) - } - } - path := pipeline.Path{} - if group.Parent != nil { - parentPath, found := t.pipelinePaths[group.GetParent()] - if found { - path = parentPath.Copy() - } - } - path = path.Add(self) - return path -} diff --git a/telemetry/pubsub.go b/telemetry/pubsub.go new file mode 100644 index 00000000000..66c3def4d45 --- /dev/null +++ b/telemetry/pubsub.go @@ -0,0 +1,351 @@ +package telemetry + +import ( + "context" + "log/slog" + "sync" + + "github.com/dagger/dagger/telemetry/sdklog" + "github.com/moby/buildkit/identity" + "github.com/sourcegraph/conc/pool" + sdkmetric "go.opentelemetry.io/otel/sdk/metric" + "go.opentelemetry.io/otel/sdk/metric/metricdata" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" +) + +type PubSub struct { + spanSubs map[trace.TraceID][]sdktrace.SpanExporter + spanSubsL sync.Mutex + logSubs map[trace.TraceID][]sdklog.LogExporter + logSubsL sync.Mutex + metricSubs map[trace.TraceID][]sdkmetric.Exporter + metricSubsL sync.Mutex + traces map[trace.TraceID]*activeTrace + tracesL sync.Mutex +} + +func NewPubSub() *PubSub { 
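+	// Subscriber maps are keyed by trace ID; the zero trace.TraceID is
+	// reserved for global subscribers that receive data for every trace.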
+ return &PubSub{ + spanSubs: map[trace.TraceID][]sdktrace.SpanExporter{}, + logSubs: map[trace.TraceID][]sdklog.LogExporter{}, + metricSubs: map[trace.TraceID][]sdkmetric.Exporter{}, + traces: map[trace.TraceID]*activeTrace{}, + } +} + +func (ps *PubSub) Drain(id trace.TraceID, immediate bool) { + slog.Debug("draining", "trace", id.String(), "immediate", immediate) + ps.tracesL.Lock() + trace, ok := ps.traces[id] + if ok { + trace.cond.L.Lock() + trace.draining = true + trace.drainImmediately = immediate + trace.cond.Broadcast() + trace.cond.L.Unlock() + } else { + slog.Warn("draining nonexistant trace", "trace", id.String(), "immediate", immediate) + } + ps.tracesL.Unlock() +} + +func (ps *PubSub) initTrace(id trace.TraceID) *activeTrace { + if t, ok := ps.traces[id]; ok { + return t + } + t := &activeTrace{ + id: id, + cond: sync.NewCond(&sync.Mutex{}), + activeSpans: map[trace.SpanID]sdktrace.ReadOnlySpan{}, + } + ps.traces[id] = t + return t +} + +func (ps *PubSub) SubscribeToSpans(ctx context.Context, traceID trace.TraceID, exp sdktrace.SpanExporter) error { + slog.Debug("subscribing to spans", "trace", traceID.String()) + ps.tracesL.Lock() + trace := ps.initTrace(traceID) + ps.tracesL.Unlock() + ps.spanSubsL.Lock() + ps.spanSubs[traceID] = append(ps.spanSubs[traceID], exp) + ps.spanSubsL.Unlock() + defer ps.unsubSpans(traceID, exp) + trace.wait(ctx) + return nil +} + +var _ sdktrace.SpanExporter = (*PubSub)(nil) + +func (ps *PubSub) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error { + export := identity.NewID() + + slog.Debug("exporting spans to pubsub", "call", export, "spans", len(spans)) + + byTrace := map[trace.TraceID][]sdktrace.ReadOnlySpan{} + conds := map[trace.TraceID]*sync.Cond{} + + ps.tracesL.Lock() + for _, s := range spans { + traceID := s.SpanContext().TraceID() + spanID := s.SpanContext().SpanID() + + slog.Debug("pubsub exporting span", + "call", export, + "trace", traceID.String(), + "id", spanID, + "span", s.Name(), + "status", s.Status().Code, + "endTime", s.EndTime()) + + byTrace[traceID] = append(byTrace[traceID], s) + + activeTrace := ps.initTrace(traceID) + + if s.EndTime().Before(s.StartTime()) { + activeTrace.startSpan(spanID, s) + } else { + activeTrace.finishSpan(spanID) + } + + conds[traceID] = activeTrace.cond + } + ps.tracesL.Unlock() + + eg := pool.New().WithErrors() + + // export to local subscribers + for traceID, spans := range byTrace { + traceID := traceID + spans := spans + for _, sub := range ps.SpanSubscribers(traceID) { + sub := sub + eg.Go(func() error { + slog.Debug("exporting spans to subscriber", "trace", traceID.String(), "spans", len(spans)) + return sub.ExportSpans(ctx, spans) + }) + } + } + + // export to global subscribers + for _, sub := range ps.SpanSubscribers(trace.TraceID{}) { + sub := sub + eg.Go(func() error { + slog.Debug("exporting spans to global subscriber", "spans", len(spans)) + return sub.ExportSpans(ctx, spans) + }) + } + + // notify anyone waiting to drain + for _, cond := range conds { + cond.Broadcast() + } + + return eg.Wait() +} + +func (ps *PubSub) SpanSubscribers(session trace.TraceID) []sdktrace.SpanExporter { + ps.spanSubsL.Lock() + defer ps.spanSubsL.Unlock() + subs := ps.spanSubs[session] + cp := make([]sdktrace.SpanExporter, len(subs)) + copy(cp, subs) + return cp +} + +func (ps *PubSub) SubscribeToLogs(ctx context.Context, traceID trace.TraceID, exp sdklog.LogExporter) error { + slog.Debug("subscribing to logs", "trace", traceID.String()) + ps.tracesL.Lock() + trace := 
ps.initTrace(traceID) + ps.tracesL.Unlock() + ps.logSubsL.Lock() + ps.logSubs[traceID] = append(ps.logSubs[traceID], exp) + ps.logSubsL.Unlock() + defer ps.unsubLogs(traceID, exp) + trace.wait(ctx) + return nil +} + +var _ sdklog.LogExporter = (*PubSub)(nil) + +func (ps *PubSub) ExportLogs(ctx context.Context, logs []*sdklog.LogData) error { + slog.Debug("exporting logs to pub/sub", "logs", len(logs)) + + byTrace := map[trace.TraceID][]*sdklog.LogData{} + for _, log := range logs { + // NB: break glass if stuck troubleshooting otel stuff + // slog.Debug("exporting logs", "trace", log.Body().AsString()) + traceID := log.TraceID + byTrace[traceID] = append(byTrace[traceID], log) + } + + eg := pool.New().WithErrors() + + // export to local subscribers + for traceID, logs := range byTrace { + traceID := traceID + logs := logs + for _, sub := range ps.LogSubscribers(traceID) { + sub := sub + eg.Go(func() error { + slog.Debug("exporting logs to subscriber", "trace", traceID.String(), "logs", len(logs)) + return sub.ExportLogs(ctx, logs) + }) + } + } + + // export to global subscribers + for _, sub := range ps.LogSubscribers(trace.TraceID{}) { + sub := sub + eg.Go(func() error { + slog.Debug("exporting logs to global subscriber", "logs", len(logs)) + return sub.ExportLogs(ctx, logs) + }) + } + + return eg.Wait() +} + +func (ps *PubSub) LogSubscribers(session trace.TraceID) []sdklog.LogExporter { + ps.logSubsL.Lock() + defer ps.logSubsL.Unlock() + subs := ps.logSubs[session] + cp := make([]sdklog.LogExporter, len(subs)) + copy(cp, subs) + return cp +} + +// Metric exporter implementation below. Fortunately there are no overlaps with +// the other exporter signatures. +var _ sdkmetric.Exporter = (*PubSub)(nil) + +func (ps *PubSub) Temporality(kind sdkmetric.InstrumentKind) metricdata.Temporality { + return sdkmetric.DefaultTemporalitySelector(kind) +} + +func (ps *PubSub) Aggregation(kind sdkmetric.InstrumentKind) sdkmetric.Aggregation { + return sdkmetric.DefaultAggregationSelector(kind) +} + +func (ps *PubSub) Export(ctx context.Context, metrics *metricdata.ResourceMetrics) error { + slog.Warn("TODO: support exporting metrics to pub/sub", "metrics", len(metrics.ScopeMetrics)) + return nil +} + +func (ps *PubSub) MetricSubscribers(session trace.TraceID) []sdkmetric.Exporter { + ps.metricSubsL.Lock() + defer ps.metricSubsL.Unlock() + subs := ps.metricSubs[session] + cp := make([]sdkmetric.Exporter, len(subs)) + copy(cp, subs) + return cp +} + +// NB: this is part of the Metrics exporter interface only for some reason, but +// it would be the same signature across the others too anyway. 
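Putting the subscription side together: a consumer (for example the CLI following an engine-side trace) attaches an exporter and blocks until the trace is drained, while the owner of the trace signals completion via Drain. A rough sketch, where pubsub, traceID, myExporter, and myLogExporter are placeholders:

    go func() {
        // Blocks until Drain is called for traceID or ctx is canceled.
        _ = pubsub.SubscribeToSpans(ctx, traceID, myExporter)
    }()
    go func() {
        _ = pubsub.SubscribeToLogs(ctx, traceID, myLogExporter)
    }()

    // ... later, once the client is finished with this trace:
    pubsub.Drain(traceID, false) // false: wait for in-flight spans to complete first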
+func (ps *PubSub) ForceFlush(ctx context.Context) error { + slog.Warn("TODO: forcing flush of metrics") + return nil +} + +func (ps *PubSub) Shutdown(ctx context.Context) error { + slog.Debug("shutting down otel pub/sub") + ps.spanSubsL.Lock() + defer ps.spanSubsL.Unlock() + eg := pool.New().WithErrors() + for _, ses := range ps.spanSubs { + for _, se := range ses { + se := se + eg.Go(func() error { + return se.Shutdown(ctx) + }) + } + } + return eg.Wait() +} + +func (ps *PubSub) unsubSpans(traceID trace.TraceID, exp sdktrace.SpanExporter) { + slog.Debug("unsubscribing from trace", "trace", traceID.String()) + ps.spanSubsL.Lock() + removed := make([]sdktrace.SpanExporter, 0, len(ps.spanSubs[traceID])-1) + for _, s := range ps.spanSubs[traceID] { + if s != exp { + removed = append(removed, s) + } + } + ps.spanSubs[traceID] = removed + ps.spanSubsL.Unlock() +} + +func (ps *PubSub) unsubLogs(traceID trace.TraceID, exp sdklog.LogExporter) { + slog.Debug("unsubscribing from logs", "trace", traceID.String()) + ps.logSubsL.Lock() + removed := make([]sdklog.LogExporter, 0, len(ps.logSubs[traceID])-1) + for _, s := range ps.logSubs[traceID] { + if s != exp { + removed = append(removed, s) + } + } + ps.logSubs[traceID] = removed + ps.logSubsL.Unlock() +} + +// activeTrace keeps track of in-flight spans so that we can wait for them all +// to complete, ensuring we don't drop the last few spans, which ruins an +// entire trace. +type activeTrace struct { + id trace.TraceID + activeSpans map[trace.SpanID]sdktrace.ReadOnlySpan + draining bool + drainImmediately bool + cond *sync.Cond +} + +func (trace *activeTrace) startSpan(id trace.SpanID, span sdktrace.ReadOnlySpan) { + trace.cond.L.Lock() + trace.activeSpans[id] = span + trace.cond.L.Unlock() +} + +func (trace *activeTrace) finishSpan(id trace.SpanID) { + trace.cond.L.Lock() + delete(trace.activeSpans, id) + trace.cond.L.Unlock() +} + +func (trace *activeTrace) wait(ctx context.Context) { + slog := slog.With("trace", trace.id.String()) + + go func() { + // wake up the loop below if ctx context is interrupted + <-ctx.Done() + trace.cond.Broadcast() + }() + + trace.cond.L.Lock() + for !trace.draining || len(trace.activeSpans) > 0 { + slog = slog.With( + "draining", trace.draining, + "immediate", trace.drainImmediately, + "activeSpans", len(trace.activeSpans), + ) + if ctx.Err() != nil { + slog.Debug("wait interrupted") + break + } + if trace.drainImmediately { + slog.Debug("draining immediately") + break + } + if trace.draining { + slog.Debug("waiting for spans", "activeSpans", len(trace.activeSpans)) + for id, span := range trace.activeSpans { + slog.Debug("waiting for span", "id", id, "span", span.Name()) + } + } + trace.cond.Wait() + } + slog.Debug("done waiting", "ctxErr", ctx.Err()) + trace.cond.L.Unlock() +} diff --git a/telemetry/sdklog/batch_processor.go b/telemetry/sdklog/batch_processor.go new file mode 100644 index 00000000000..29d5b80ed62 --- /dev/null +++ b/telemetry/sdklog/batch_processor.go @@ -0,0 +1,334 @@ +package sdklog + +import ( + "context" + "sync" + "sync/atomic" + "time" + + "go.opentelemetry.io/otel" + + "github.com/dagger/dagger/telemetry/env" +) + +// Defaults for BatchSpanProcessorOptions. +const ( + DefaultMaxQueueSize = 2048 + DefaultScheduleDelay = 5000 + DefaultExportTimeout = 30000 + DefaultMaxExportBatchSize = 512 +) + +// BatchLogProcessorOption configures a BatchSpanProcessor. 
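The processor and its options mirror the trace SDK's batch span processor, adapted to log records. Init (earlier in this patch) constructs one per live log exporter roughly as follows, using the ~100ms NearlyImmediate batch timeout:

    processor := sdklog.NewBatchLogProcessor(exporter,
        sdklog.WithBatchTimeout(100*time.Millisecond)) // NearlyImmediate
    loggerProvider.RegisterLogProcessor(processor)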
+type BatchLogProcessorOption func(o *BatchLogProcessorOptions) + +// BatchLogProcessorOptions is configuration settings for a +// BatchSpanProcessor. +type BatchLogProcessorOptions struct { + // MaxQueueSize is the maximum queue size to buffer spans for delayed processing. If the + // queue gets full it drops the spans. Use BlockOnQueueFull to change this behavior. + // The default value of MaxQueueSize is 2048. + MaxQueueSize int + + // BatchTimeout is the maximum duration for constructing a batch. Processor + // forcefully sends available spans when timeout is reached. + // The default value of BatchTimeout is 5000 msec. + BatchTimeout time.Duration + + // ExportTimeout specifies the maximum duration for exporting spans. If the timeout + // is reached, the export will be cancelled. + // The default value of ExportTimeout is 30000 msec. + ExportTimeout time.Duration + + // MaxExportBatchSize is the maximum number of spans to process in a single batch. + // If there are more than one batch worth of spans then it processes multiple batches + // of spans one batch after the other without any delay. + // The default value of MaxExportBatchSize is 512. + MaxExportBatchSize int + + // BlockOnQueueFull blocks onEnd() and onStart() method if the queue is full + // AND if BlockOnQueueFull is set to true. + // Blocking option should be used carefully as it can severely affect the performance of an + // application. + BlockOnQueueFull bool +} + +// batchLogProcessor is a SpanProcessor that batches asynchronously-received +// spans and sends them to a trace.Exporter when complete. +type batchLogProcessor struct { + e LogExporter + o BatchLogProcessorOptions + + queue chan *LogData + dropped uint32 + + batch []*LogData + batchMutex sync.Mutex + timer *time.Timer + stopWait sync.WaitGroup + stopOnce sync.Once + stopCh chan struct{} + stopped atomic.Bool +} + +var _ LogProcessor = (*batchLogProcessor)(nil) + +// NewBatchLogProcessor creates a new SpanProcessor that will send completed +// span batches to the exporter with the supplied options. +// +// If the exporter is nil, the span processor will perform no action. +func NewBatchLogProcessor(exporter LogExporter, options ...BatchLogProcessorOption) *batchLogProcessor { + maxQueueSize := env.BatchSpanProcessorMaxQueueSize(DefaultMaxQueueSize) + maxExportBatchSize := env.BatchSpanProcessorMaxExportBatchSize(DefaultMaxExportBatchSize) + + if maxExportBatchSize > maxQueueSize { + if DefaultMaxExportBatchSize > maxQueueSize { + maxExportBatchSize = maxQueueSize + } else { + maxExportBatchSize = DefaultMaxExportBatchSize + } + } + + o := BatchLogProcessorOptions{ + BatchTimeout: time.Duration(env.BatchSpanProcessorScheduleDelay(DefaultScheduleDelay)) * time.Millisecond, + ExportTimeout: time.Duration(env.BatchSpanProcessorExportTimeout(DefaultExportTimeout)) * time.Millisecond, + MaxQueueSize: maxQueueSize, + MaxExportBatchSize: maxExportBatchSize, + } + for _, opt := range options { + opt(&o) + } + bsp := &batchLogProcessor{ + e: exporter, + o: o, + batch: make([]*LogData, 0, o.MaxExportBatchSize), + timer: time.NewTimer(o.BatchTimeout), + queue: make(chan *LogData, o.MaxQueueSize), + stopCh: make(chan struct{}), + } + + bsp.stopWait.Add(1) + go func() { + defer bsp.stopWait.Done() + bsp.processQueue() + bsp.drainQueue() + }() + + return bsp +} + +func (bsp *batchLogProcessor) OnEmit(ctx context.Context, log *LogData) { + bsp.enqueue(log) +} + +// Shutdown flushes the queue and waits until all spans are processed. +// It only executes once. 
Subsequent call does nothing. +func (bsp *batchLogProcessor) Shutdown(ctx context.Context) error { + var err error + bsp.stopOnce.Do(func() { + bsp.stopped.Store(true) + wait := make(chan struct{}) + go func() { + close(bsp.stopCh) + bsp.stopWait.Wait() + if bsp.e != nil { + if err := bsp.e.Shutdown(ctx); err != nil { + otel.Handle(err) + } + } + close(wait) + }() + // Wait until the wait group is done or the context is cancelled + select { + case <-wait: + case <-ctx.Done(): + err = ctx.Err() + } + }) + return err +} + +// WithMaxQueueSize returns a BatchSpanProcessorOption that configures the +// maximum queue size allowed for a BatchSpanProcessor. +func WithMaxQueueSize(size int) BatchLogProcessorOption { + return func(o *BatchLogProcessorOptions) { + o.MaxQueueSize = size + } +} + +// WithMaxExportBatchSize returns a BatchSpanProcessorOption that configures +// the maximum export batch size allowed for a BatchSpanProcessor. +func WithMaxExportBatchSize(size int) BatchLogProcessorOption { + return func(o *BatchLogProcessorOptions) { + o.MaxExportBatchSize = size + } +} + +// WithBatchTimeout returns a BatchSpanProcessorOption that configures the +// maximum delay allowed for a BatchSpanProcessor before it will export any +// held span (whether the queue is full or not). +func WithBatchTimeout(delay time.Duration) BatchLogProcessorOption { + return func(o *BatchLogProcessorOptions) { + o.BatchTimeout = delay + } +} + +// WithExportTimeout returns a BatchSpanProcessorOption that configures the +// amount of time a BatchSpanProcessor waits for an exporter to export before +// abandoning the export. +func WithExportTimeout(timeout time.Duration) BatchLogProcessorOption { + return func(o *BatchLogProcessorOptions) { + o.ExportTimeout = timeout + } +} + +// WithBlocking returns a BatchSpanProcessorOption that configures a +// BatchSpanProcessor to wait for enqueue operations to succeed instead of +// dropping data when the queue is full. +func WithBlocking() BatchLogProcessorOption { + return func(o *BatchLogProcessorOptions) { + o.BlockOnQueueFull = true + } +} + +// exportSpans is a subroutine of processing and draining the queue. +func (bsp *batchLogProcessor) exportSpans(ctx context.Context) error { + bsp.timer.Reset(bsp.o.BatchTimeout) + + bsp.batchMutex.Lock() + defer bsp.batchMutex.Unlock() + + if bsp.o.ExportTimeout > 0 { + var cancel context.CancelFunc + ctx, cancel = context.WithTimeout(ctx, bsp.o.ExportTimeout) + defer cancel() + } + + if l := len(bsp.batch); l > 0 { + err := bsp.e.ExportLogs(ctx, bsp.batch) + + // A new batch is always created after exporting, even if the batch failed to be exported. + // + // It is up to the exporter to implement any type of retry logic if a batch is failing + // to be exported, since it is specific to the protocol and backend being sent to. + bsp.batch = bsp.batch[:0] + + if err != nil { + return err + } + } + return nil +} + +// processQueue removes spans from the `queue` channel until processor +// is shut down. It calls the exporter in batches of up to MaxExportBatchSize +// waiting up to BatchTimeout to form a batch. 
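When the queue fills up, the default behavior is to drop new records (counted in bsp.dropped) rather than block the caller. Callers that prefer back-pressure over loss can opt in at construction time; for example:

    proc := sdklog.NewBatchLogProcessor(exporter,
        sdklog.WithBlocking(),
        sdklog.WithMaxQueueSize(4096), // illustrative; the default is 2048
    )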
+func (bsp *batchLogProcessor) processQueue() { + defer bsp.timer.Stop() + + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + for { + select { + case <-bsp.stopCh: + return + case <-bsp.timer.C: + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + case sd := <-bsp.queue: + bsp.batchMutex.Lock() + bsp.batch = append(bsp.batch, sd) + shouldExport := len(bsp.batch) >= bsp.o.MaxExportBatchSize + bsp.batchMutex.Unlock() + if shouldExport { + if !bsp.timer.Stop() { + <-bsp.timer.C + } + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + } + } + } +} + +// drainQueue awaits the any caller that had added to bsp.stopWait +// to finish the enqueue, then exports the final batch. +func (bsp *batchLogProcessor) drainQueue() { + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + for { + select { + case sd := <-bsp.queue: + bsp.batchMutex.Lock() + bsp.batch = append(bsp.batch, sd) + shouldExport := len(bsp.batch) == bsp.o.MaxExportBatchSize + bsp.batchMutex.Unlock() + + if shouldExport { + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + } + default: + // There are no more enqueued spans. Make final export. + if err := bsp.exportSpans(ctx); err != nil { + otel.Handle(err) + } + return + } + } +} + +func (bsp *batchLogProcessor) enqueue(log *LogData) { + ctx := context.TODO() + + // Do not enqueue spans after Shutdown. + if bsp.stopped.Load() { + return + } + + // Do not enqueue spans if we are just going to drop them. + if bsp.e == nil { + return + } + + if bsp.o.BlockOnQueueFull { + bsp.enqueueBlockOnQueueFull(ctx, log) + } else { + bsp.enqueueDrop(ctx, log) + } +} + +func (bsp *batchLogProcessor) enqueueBlockOnQueueFull(ctx context.Context, log *LogData) bool { + select { + case bsp.queue <- log: + return true + case <-ctx.Done(): + return false + } +} + +func (bsp *batchLogProcessor) enqueueDrop(ctx context.Context, log *LogData) bool { + select { + case bsp.queue <- log: + return true + default: + atomic.AddUint32(&bsp.dropped, 1) + } + return false +} + +// MarshalLog is the marshaling function used by the logging system to represent this Span Processor. 
+func (bsp *batchLogProcessor) MarshalLog() interface{} { + return struct { + Type string + LogExporter LogExporter + Config BatchLogProcessorOptions + }{ + Type: "BatchLogProcessor", + LogExporter: bsp.e, + Config: bsp.o, + } +} diff --git a/telemetry/sdklog/exporter.go b/telemetry/sdklog/exporter.go new file mode 100644 index 00000000000..9a3c5055a91 --- /dev/null +++ b/telemetry/sdklog/exporter.go @@ -0,0 +1,8 @@ +package sdklog + +import "context" + +type LogExporter interface { + ExportLogs(ctx context.Context, logs []*LogData) error + Shutdown(ctx context.Context) error +} diff --git a/telemetry/sdklog/logger.go b/telemetry/sdklog/logger.go new file mode 100644 index 00000000000..5d05db31aba --- /dev/null +++ b/telemetry/sdklog/logger.go @@ -0,0 +1,36 @@ +package sdklog + +import ( + "context" + + "go.opentelemetry.io/otel/log" + "go.opentelemetry.io/otel/log/embedded" + "go.opentelemetry.io/otel/sdk/instrumentation" + "go.opentelemetry.io/otel/sdk/resource" + otrace "go.opentelemetry.io/otel/trace" +) + +var _ log.Logger = &logger{} + +type logger struct { + embedded.Logger + + provider *LoggerProvider + resource *resource.Resource + instrumentationScope instrumentation.Scope +} + +func (l logger) Emit(ctx context.Context, r log.Record) { + span := otrace.SpanFromContext(ctx) + + log := &LogData{ + Record: r, + TraceID: span.SpanContext().TraceID(), + SpanID: span.SpanContext().SpanID(), + Resource: l.resource, + } + + for _, proc := range l.provider.getLogProcessors() { + proc.OnEmit(ctx, log) + } +} diff --git a/telemetry/sdklog/otlploggrpc/client.go b/telemetry/sdklog/otlploggrpc/client.go new file mode 100644 index 00000000000..1aede1ca50e --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/client.go @@ -0,0 +1,298 @@ +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package otlploggrpc + +import ( + "context" + "errors" + "log/slog" + "sync" + "time" + + "github.com/dagger/dagger/telemetry/sdklog" + "github.com/dagger/dagger/telemetry/sdklog/otlploggrpc/internal" + "github.com/dagger/dagger/telemetry/sdklog/otlploggrpc/internal/otlpconfig" + "github.com/dagger/dagger/telemetry/sdklog/otlploggrpc/internal/retry" + "github.com/dagger/dagger/telemetry/sdklog/otlploghttp/transform" + "google.golang.org/genproto/googleapis/rpc/errdetails" + "google.golang.org/grpc" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/metadata" + "google.golang.org/grpc/status" + + "go.opentelemetry.io/otel" + collogpb "go.opentelemetry.io/proto/otlp/collector/logs/v1" +) + +type Client struct { + endpoint string + dialOpts []grpc.DialOption + metadata metadata.MD + exportTimeout time.Duration + requestFunc retry.RequestFunc + + // stopCtx is used as a parent context for all exports. Therefore, when it + // is canceled with the stopFunc all exports are canceled. + stopCtx context.Context + // stopFunc cancels stopCtx, stopping any active exports. + stopFunc context.CancelFunc + + // ourConn keeps track of where conn was created: true if created here on + // Start, or false if passed with an option. This is important on Shutdown + // as the conn should only be closed if created here on start. Otherwise, + // it is up to the processes that passed the conn to close it. + ourConn bool + conn *grpc.ClientConn + lscMu sync.RWMutex + lsc collogpb.LogsServiceClient +} + +// Compile time check *client implements sdklog.LogExporter. +var _ sdklog.LogExporter = (*Client)(nil) + +// NewClient creates a new gRPC log export client. 
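ConfiguredLogExporter (earlier in this patch) is the main consumer: it builds the client from options and starts it before handing it to a batch processor, roughly:

    client := otlploggrpc.NewClient(
        otlploggrpc.WithEndpointURL("http://127.0.0.1:4317"), // illustrative endpoint
    )
    if err := client.Start(ctx); err != nil {
        slog.Warn("failed to start OTLP log client", "error", err)
    }
    var exporter sdklog.LogExporter = client // *Client satisfies the exporter interface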
+func NewClient(opts ...Option) *Client {
+	cfg := otlpconfig.NewGRPCConfig(asGRPCOptions(opts)...)
+
+	ctx, cancel := context.WithCancel(context.Background())
+
+	c := &Client{
+		endpoint:      cfg.Traces.Endpoint,
+		exportTimeout: cfg.Traces.Timeout,
+		requestFunc:   cfg.RetryConfig.RequestFunc(retryable),
+		dialOpts:      cfg.DialOptions,
+		stopCtx:       ctx,
+		stopFunc:      cancel,
+		conn:          cfg.GRPCConn,
+	}
+
+	if len(cfg.Traces.Headers) > 0 {
+		c.metadata = metadata.New(cfg.Traces.Headers)
+	}
+
+	return c
+}
+
+// Start establishes a gRPC connection to the collector.
+func (c *Client) Start(ctx context.Context) error {
+	if c.conn == nil {
+		// If the caller did not provide a ClientConn when the client was
+		// created, create one using the configuration they did provide.
+		conn, err := grpc.DialContext(ctx, c.endpoint, c.dialOpts...)
+		if err != nil {
+			return err
+		}
+		// Keep track that we own the lifecycle of this conn and need to close
+		// it on Shutdown.
+		c.ourConn = true
+		c.conn = conn
+	}
+
+	// This method is only expected to be called once, so there is no need
+	// to check if the client has already started.
+	c.lscMu.Lock()
+	c.lsc = collogpb.NewLogsServiceClient(c.conn)
+	c.lscMu.Unlock()
+
+	return nil
+}
+
+var errAlreadyStopped = errors.New("the client is already stopped")
+
+// Stop shuts down the client.
+//
+// Any active connections to a remote endpoint are closed if they were created
+// by the client. Any gRPC connection passed during creation using
+// WithGRPCConn will not be closed. It is the caller's responsibility to
+// handle cleanup of that resource.
+//
+// This method synchronizes with the ExportLogs method of the client. It
+// will wait for any active calls to that method to complete unimpeded, or it
+// will cancel any active calls if ctx expires. If ctx expires, the context
+// error will be forwarded as the returned error. All client held resources
+// will still be released in this situation.
+//
+// If the client has already stopped, an error will be returned describing
+// this.
+func (c *Client) Stop(ctx context.Context) error {
+	// Make sure to return context error if the context is done when calling this method.
+	err := ctx.Err()
+
+	// Acquire the c.lscMu lock within the ctx lifetime.
+	acquired := make(chan struct{})
+	go func() {
+		c.lscMu.Lock()
+		close(acquired)
+	}()
+
+	select {
+	case <-ctx.Done():
+		// The Stop timeout is reached. Kill any remaining exports to force
+		// the clear of the lock and save the timeout error to return and
+		// signal the shutdown timed out before cleanly stopping.
+		c.stopFunc()
+		err = ctx.Err()
+
+		// To ensure the client is not left in a dirty state c.lsc needs to be
+		// set to nil. To avoid the race condition when doing this, ensure
+		// that all the exports are killed (initiated by c.stopFunc).
+		<-acquired
+	case <-acquired:
+	}
+	// Hold the lscMu lock for the rest of the function to ensure no new
+	// exports are started.
+	defer c.lscMu.Unlock()
+
+	// This method is expected to be called only once, but there is no
+	// guarantee it is called after Start. Ensure the client is started
+	// before doing anything and let the caller know if they made a
+	// mistake.
+	if c.lsc == nil {
+		return errAlreadyStopped
+	}
+
+	// Clear c.lsc to signal the client is stopped.
+	c.lsc = nil
+
+	if c.ourConn {
+		closeErr := c.conn.Close()
+		// A context timeout error takes precedence over this error.
+		if err == nil && closeErr != nil {
+			err = closeErr
+		}
+	}
+	return err
+}
+
+var errShutdown = errors.New("the client is shutdown")
+
+// ExportLogs sends a batch of log records.
+//
+// Retryable errors from the server will be handled according to any
+// RetryConfig the client was created with.
+func (c *Client) ExportLogs(ctx context.Context, logDatas []*sdklog.LogData) error {
+	// Hold a read lock to ensure a shut down initiated after this starts does
+	// not abandon the export. This read lock acquire has less priority than a
+	// write lock acquire (i.e. Stop), meaning if the client is shutting down
+	// this will come after the shut down.
+	c.lscMu.RLock()
+	defer c.lscMu.RUnlock()
+
+	if c.lsc == nil {
+		slog.Warn("gRPC log client is shut down", "endpoint", c.endpoint)
+		return errShutdown
+	}
+
+	ctx, cancel := c.exportContext(ctx)
+	defer cancel()
+
+	return c.requestFunc(ctx, func(iCtx context.Context) error {
+		resp, err := c.lsc.Export(iCtx, &collogpb.ExportLogsServiceRequest{
+			ResourceLogs: transform.Logs(logDatas),
+		})
+		if resp != nil && resp.PartialSuccess != nil {
+			msg := resp.PartialSuccess.GetErrorMessage()
+			n := resp.PartialSuccess.GetRejectedLogRecords()
+			if n != 0 || msg != "" {
+				err := internal.LogsPartialSuccessError(n, msg)
+				otel.Handle(err)
+			}
+		}
+		// nil is converted to OK.
+		if status.Code(err) == codes.OK {
+			// Success.
+			return nil
+		}
+		return err
+	})
+}
+
+// Shutdown is an alias for Stop.
+func (c *Client) Shutdown(ctx context.Context) error {
+	return c.Stop(ctx)
+}
+
+// exportContext returns a copy of parent with an appropriate deadline and
+// cancellation function.
+//
+// It is the caller's responsibility to cancel the returned context once its
+// use is complete, via the parent or directly with the returned CancelFunc, to
+// ensure all resources are correctly released.
+func (c *Client) exportContext(parent context.Context) (context.Context, context.CancelFunc) {
+	var (
+		ctx    context.Context
+		cancel context.CancelFunc
+	)
+
+	if c.exportTimeout > 0 {
+		ctx, cancel = context.WithTimeout(parent, c.exportTimeout)
+	} else {
+		ctx, cancel = context.WithCancel(parent)
+	}
+
+	if c.metadata.Len() > 0 {
+		ctx = metadata.NewOutgoingContext(ctx, c.metadata)
+	}
+
+	// Unify the client stopCtx with the parent.
+	go func() {
+		select {
+		case <-ctx.Done():
+		case <-c.stopCtx.Done():
+			// Cancel the export as the shutdown has timed out.
+			cancel()
+		}
+	}()
+
+	return ctx, cancel
+}
+
+// retryable returns if err identifies a request that can be retried and a
+// duration to wait for if an explicit throttle time is included in err.
+func retryable(err error) (bool, time.Duration) {
+	s := status.Convert(err)
+	return retryableGRPCStatus(s)
+}
+
+func retryableGRPCStatus(s *status.Status) (bool, time.Duration) {
+	switch s.Code() {
+	case codes.Canceled,
+		codes.DeadlineExceeded,
+		codes.Aborted,
+		codes.OutOfRange,
+		codes.Unavailable,
+		codes.DataLoss:
+		// Additionally handle RetryInfo.
+		_, d := throttleDelay(s)
+		return true, d
+	case codes.ResourceExhausted:
+		// Retry only if the server signals that the recovery from resource exhaustion is possible.
+		return throttleDelay(s)
+	}
+
+	// Not a retry-able error.
+	return false, 0
+}
+
+// throttleDelay reports whether the status carries a RetryInfo detail and, if
+// so, the explicit throttle duration to wait.
+func throttleDelay(s *status.Status) (bool, time.Duration) { + for _, detail := range s.Details() { + if t, ok := detail.(*errdetails.RetryInfo); ok { + return true, t.RetryDelay.AsDuration() + } + } + return false, 0 +} + +// MarshalLog is the marshaling function used by the logging system to represent this Client. +func (c *Client) MarshalLog() interface{} { + return struct { + Type string + Endpoint string + }{ + Type: "otlphttpgrpc", + Endpoint: c.endpoint, + } +} diff --git a/telemetry/sdklog/otlploggrpc/internal/envconfig/envconfig.go b/telemetry/sdklog/otlploggrpc/internal/envconfig/envconfig.go new file mode 100644 index 00000000000..f040d78f60d --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/internal/envconfig/envconfig.go @@ -0,0 +1,190 @@ +// Code created by gotmpl. DO NOT MODIFY. +// source: internal/shared/otlp/envconfig/envconfig.go.tmpl + +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package envconfig + +import ( + "crypto/tls" + "crypto/x509" + "errors" + "fmt" + "log/slog" + "net/url" + "strconv" + "strings" + "time" +) + +// ConfigFn is the generic function used to set a config. +type ConfigFn func(*EnvOptionsReader) + +// EnvOptionsReader reads the required environment variables. +type EnvOptionsReader struct { + GetEnv func(string) string + ReadFile func(string) ([]byte, error) + Namespace string +} + +// Apply runs every ConfigFn. +func (e *EnvOptionsReader) Apply(opts ...ConfigFn) { + for _, o := range opts { + o(e) + } +} + +// GetEnvValue gets an OTLP environment variable value of the specified key +// using the GetEnv function. +// This function prepends the OTLP specified namespace to all key lookups. +func (e *EnvOptionsReader) GetEnvValue(key string) (string, bool) { + v := strings.TrimSpace(e.GetEnv(keyWithNamespace(e.Namespace, key))) + return v, v != "" +} + +// WithString retrieves the specified config and passes it to ConfigFn as a string. +func WithString(n string, fn func(string)) func(e *EnvOptionsReader) { + return func(e *EnvOptionsReader) { + if v, ok := e.GetEnvValue(n); ok { + fn(v) + } + } +} + +// WithBool returns a ConfigFn that reads the environment variable n and if it exists passes its parsed bool value to fn. +func WithBool(n string, fn func(bool)) ConfigFn { + return func(e *EnvOptionsReader) { + if v, ok := e.GetEnvValue(n); ok { + b := strings.ToLower(v) == "true" + fn(b) + } + } +} + +// WithDuration retrieves the specified config and passes it to ConfigFn as a duration. +func WithDuration(n string, fn func(time.Duration)) func(e *EnvOptionsReader) { + return func(e *EnvOptionsReader) { + if v, ok := e.GetEnvValue(n); ok { + d, err := strconv.Atoi(v) + if err != nil { + slog.Error("parse duration", "error", err, "input", v) + return + } + fn(time.Duration(d) * time.Millisecond) + } + } +} + +// WithHeaders retrieves the specified config and passes it to ConfigFn as a map of HTTP headers. +func WithHeaders(n string, fn func(map[string]string)) func(e *EnvOptionsReader) { + return func(e *EnvOptionsReader) { + if v, ok := e.GetEnvValue(n); ok { + fn(stringToHeader(v)) + } + } +} + +// WithURL retrieves the specified config and passes it to ConfigFn as a net/url.URL. 
+func WithURL(n string, fn func(*url.URL)) func(e *EnvOptionsReader) {
+	return func(e *EnvOptionsReader) {
+		if v, ok := e.GetEnvValue(n); ok {
+			u, err := url.Parse(v)
+			if err != nil {
+				slog.Error("parse url", "error", err, "input", v)
+				return
+			}
+			fn(u)
+		}
+	}
+}
+
+// WithCertPool returns a ConfigFn that reads the environment variable n as a filepath to a TLS certificate pool. If it exists, it is parsed as a crypto/x509.CertPool and it is passed to fn.
+func WithCertPool(n string, fn func(*x509.CertPool)) ConfigFn {
+	return func(e *EnvOptionsReader) {
+		if v, ok := e.GetEnvValue(n); ok {
+			b, err := e.ReadFile(v)
+			if err != nil {
+				slog.Error("read tls ca cert file", "error", err, "file", v)
+				return
+			}
+			c, err := createCertPool(b)
+			if err != nil {
+				slog.Error("create tls cert pool", "error", err)
+				return
+			}
+			fn(c)
+		}
+	}
+}
+
+// WithClientCert returns a ConfigFn that reads the environment variables nc and nk as filepaths to a client certificate and key pair. If they exist, they are parsed as a crypto/tls.Certificate and passed to fn.
+func WithClientCert(nc, nk string, fn func(tls.Certificate)) ConfigFn {
+	return func(e *EnvOptionsReader) {
+		vc, okc := e.GetEnvValue(nc)
+		vk, okk := e.GetEnvValue(nk)
+		if !okc || !okk {
+			return
+		}
+		cert, err := e.ReadFile(vc)
+		if err != nil {
+			slog.Error("read tls client cert", "error", err, "file", vc)
+			return
+		}
+		key, err := e.ReadFile(vk)
+		if err != nil {
+			slog.Error("read tls client key", "error", err, "file", vk)
+			return
+		}
+		crt, err := tls.X509KeyPair(cert, key)
+		if err != nil {
+			slog.Error("create tls client key pair", "error", err)
+			return
+		}
+		fn(crt)
+	}
+}
+
+func keyWithNamespace(ns, key string) string {
+	if ns == "" {
+		return key
+	}
+	return fmt.Sprintf("%s_%s", ns, key)
+}
+
+func stringToHeader(value string) map[string]string {
+	headersPairs := strings.Split(value, ",")
+	headers := make(map[string]string)
+
+	for _, header := range headersPairs {
+		n, v, found := strings.Cut(header, "=")
+		if !found {
+			slog.Error("parse headers", "error", errors.New("missing '='"), "input", header)
+			continue
+		}
+		name, err := url.PathUnescape(n)
+		if err != nil {
+			slog.Error("escape header key", "error", err, "key", n)
+			continue
+		}
+		trimmedName := strings.TrimSpace(name)
+		value, err := url.PathUnescape(v)
+		if err != nil {
+			slog.Error("escape header value", "error", err, "value", v)
+			continue
+		}
+		trimmedValue := strings.TrimSpace(value)
+
+		headers[trimmedName] = trimmedValue
+	}
+
+	return headers
+}
+
+func createCertPool(certBytes []byte) (*x509.CertPool, error) {
+	cp := x509.NewCertPool()
+	if ok := cp.AppendCertsFromPEM(certBytes); !ok {
+		return nil, errors.New("failed to append certificate to the cert pool")
+	}
+	return cp, nil
+}
diff --git a/telemetry/sdklog/otlploggrpc/internal/otlpconfig/envconfig.go b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/envconfig.go
new file mode 100644
index 00000000000..46d28498b14
--- /dev/null
+++ b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/envconfig.go
@@ -0,0 +1,141 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+package otlpconfig
+
+import (
+	"crypto/tls"
+	"crypto/x509"
+	"net/url"
+	"os"
+	"path"
+	"strings"
+	"time"
+
+	"github.com/dagger/dagger/telemetry/sdklog/otlploggrpc/internal/envconfig"
+)
+
+// DefaultEnvOptionsReader is the default environment options reader.
+var DefaultEnvOptionsReader = envconfig.EnvOptionsReader{ + GetEnv: os.Getenv, + ReadFile: os.ReadFile, + Namespace: "OTEL_EXPORTER_OTLP", +} + +// ApplyGRPCEnvConfigs applies the env configurations for gRPC. +func ApplyGRPCEnvConfigs(cfg Config) Config { + opts := getOptionsFromEnv() + for _, opt := range opts { + cfg = opt.ApplyGRPCOption(cfg) + } + return cfg +} + +// ApplyHTTPEnvConfigs applies the env configurations for HTTP. +func ApplyHTTPEnvConfigs(cfg Config) Config { + opts := getOptionsFromEnv() + for _, opt := range opts { + cfg = opt.ApplyHTTPOption(cfg) + } + return cfg +} + +func getOptionsFromEnv() []GenericOption { + opts := []GenericOption{} + + tlsConf := &tls.Config{ + MinVersion: tls.VersionTLS12, + } + DefaultEnvOptionsReader.Apply( + envconfig.WithURL("ENDPOINT", func(u *url.URL) { + opts = append(opts, withEndpointScheme(u)) + opts = append(opts, newSplitOption(func(cfg Config) Config { + cfg.Traces.Endpoint = u.Host + // For OTLP/HTTP endpoint URLs without a per-signal + // configuration, the passed endpoint is used as a base URL + // and the signals are sent to these paths relative to that. + cfg.Traces.URLPath = path.Join(u.Path, DefaultLogsPath) + return cfg + }, withEndpointForGRPC(u))) + }), + envconfig.WithURL("TRACES_ENDPOINT", func(u *url.URL) { + opts = append(opts, withEndpointScheme(u)) + opts = append(opts, newSplitOption(func(cfg Config) Config { + cfg.Traces.Endpoint = u.Host + // For endpoint URLs for OTLP/HTTP per-signal variables, the + // URL MUST be used as-is without any modification. The only + // exception is that if an URL contains no path part, the root + // path / MUST be used. + path := u.Path + if path == "" { + path = "/" + } + cfg.Traces.URLPath = path + return cfg + }, withEndpointForGRPC(u))) + }), + envconfig.WithCertPool("CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }), + envconfig.WithCertPool("TRACES_CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }), + envconfig.WithClientCert("CLIENT_CERTIFICATE", "CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }), + envconfig.WithClientCert("TRACES_CLIENT_CERTIFICATE", "TRACES_CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }), + withTLSConfig(tlsConf, func(c *tls.Config) { opts = append(opts, WithTLSClientConfig(c)) }), + envconfig.WithBool("INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }), + envconfig.WithBool("TRACES_INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }), + envconfig.WithHeaders("HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }), + envconfig.WithHeaders("TRACES_HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }), + WithEnvCompression("COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }), + WithEnvCompression("TRACES_COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }), + envconfig.WithDuration("TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }), + envconfig.WithDuration("TRACES_TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }), + ) + + return opts +} + +func withEndpointScheme(u *url.URL) GenericOption { + switch strings.ToLower(u.Scheme) { + case "http", "unix": + return WithInsecure() + default: + return WithSecure() + } +} + +func withEndpointForGRPC(u *url.URL) func(cfg Config) Config { + return func(cfg Config) Config { + // For OTLP/gRPC endpoints, this is the target to which the + // exporter is 
going to send telemetry. + cfg.Traces.Endpoint = path.Join(u.Host, u.Path) + return cfg + } +} + +// WithEnvCompression retrieves the specified config and passes it to ConfigFn as a Compression. +func WithEnvCompression(n string, fn func(Compression)) func(e *envconfig.EnvOptionsReader) { + return func(e *envconfig.EnvOptionsReader) { + if v, ok := e.GetEnvValue(n); ok { + cp := NoCompression + if v == "gzip" { + cp = GzipCompression + } + + fn(cp) + } + } +} + +// revive:disable-next-line:flag-parameter +func withInsecure(b bool) GenericOption { + if b { + return WithInsecure() + } + return WithSecure() +} + +func withTLSConfig(c *tls.Config, fn func(*tls.Config)) func(e *envconfig.EnvOptionsReader) { + return func(e *envconfig.EnvOptionsReader) { + if c.RootCAs != nil || len(c.Certificates) > 0 { + fn(c) + } + } +} diff --git a/telemetry/sdklog/otlploggrpc/internal/otlpconfig/options.go b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/options.go new file mode 100644 index 00000000000..1f814828833 --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/options.go @@ -0,0 +1,332 @@ +// Code created by gotmpl. DO NOT MODIFY. +// source: internal/shared/otlp/otlptrace/otlpconfig/options.go.tmpl + +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package otlpconfig + +import ( + "crypto/tls" + "fmt" + "log/slog" + "net/url" + "path" + "strings" + "time" + + "github.com/dagger/dagger/telemetry/sdklog/otlploggrpc/internal/retry" + "google.golang.org/grpc" + "google.golang.org/grpc/backoff" + "google.golang.org/grpc/credentials" + "google.golang.org/grpc/credentials/insecure" + "google.golang.org/grpc/encoding/gzip" +) + +const ( + // DefaultLogsPath is a default URL path for endpoint that + // receives logs. + DefaultLogsPath string = "/v1/logs" + // DefaultTimeout is a default max waiting time for the backend to process + // each logs batch. + DefaultTimeout time.Duration = 10 * time.Second +) + +type ( + SignalConfig struct { + Endpoint string + Insecure bool + TLSCfg *tls.Config + Headers map[string]string + Compression Compression + Timeout time.Duration + URLPath string + + // gRPC configurations + GRPCCredentials credentials.TransportCredentials + } + + Config struct { + // Signal specific configurations + Traces SignalConfig + + RetryConfig retry.Config + + // gRPC configurations + ReconnectionPeriod time.Duration + ServiceConfig string + DialOptions []grpc.DialOption + GRPCConn *grpc.ClientConn + } +) + +// NewHTTPConfig returns a new Config with all settings applied from opts and +// any unset setting using the default HTTP config values. +func NewHTTPConfig(opts ...HTTPOption) Config { + cfg := Config{ + Traces: SignalConfig{ + Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorHTTPPort), + URLPath: DefaultLogsPath, + Compression: NoCompression, + Timeout: DefaultTimeout, + }, + RetryConfig: retry.DefaultConfig, + } + cfg = ApplyHTTPEnvConfigs(cfg) + for _, opt := range opts { + cfg = opt.ApplyHTTPOption(cfg) + } + cfg.Traces.URLPath = cleanPath(cfg.Traces.URLPath, DefaultLogsPath) + return cfg +} + +// cleanPath returns a path with all spaces trimmed and all redundancies +// removed. If urlPath is empty or cleaning it results in an empty string, +// defaultPath is returned instead. +func cleanPath(urlPath string, defaultPath string) string { + tmp := path.Clean(strings.TrimSpace(urlPath)) + if tmp == "." 
{ + return defaultPath + } + if !path.IsAbs(tmp) { + tmp = fmt.Sprintf("/%s", tmp) + } + return tmp +} + +// NewGRPCConfig returns a new Config with all settings applied from opts and +// any unset setting using the default gRPC config values. +func NewGRPCConfig(opts ...GRPCOption) Config { + userAgent := "OTel OTLP Exporter Go/dagger" + cfg := Config{ + Traces: SignalConfig{ + Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorGRPCPort), + URLPath: DefaultLogsPath, + Compression: NoCompression, + Timeout: DefaultTimeout, + }, + RetryConfig: retry.DefaultConfig, + DialOptions: []grpc.DialOption{grpc.WithUserAgent(userAgent)}, + } + cfg = ApplyGRPCEnvConfigs(cfg) + for _, opt := range opts { + cfg = opt.ApplyGRPCOption(cfg) + } + + if cfg.ServiceConfig != "" { + cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultServiceConfig(cfg.ServiceConfig)) + } + // Priroritize GRPCCredentials over Insecure (passing both is an error). + if cfg.Traces.GRPCCredentials != nil { //nolint: gocritic + cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(cfg.Traces.GRPCCredentials)) + } else if cfg.Traces.Insecure { + cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(insecure.NewCredentials())) + } else { + // Default to using the host's root CA. + creds := credentials.NewTLS(nil) + cfg.Traces.GRPCCredentials = creds + cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(creds)) + } + if cfg.Traces.Compression == GzipCompression { + cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name))) + } + if cfg.ReconnectionPeriod != 0 { + p := grpc.ConnectParams{ + Backoff: backoff.DefaultConfig, + MinConnectTimeout: cfg.ReconnectionPeriod, + } + cfg.DialOptions = append(cfg.DialOptions, grpc.WithConnectParams(p)) + } + + return cfg +} + +type ( + // GenericOption applies an option to the HTTP or gRPC driver. + GenericOption interface { + ApplyHTTPOption(Config) Config + ApplyGRPCOption(Config) Config + + // A private method to prevent users implementing the + // interface and so future additions to it will not + // violate compatibility. + private() + } + + // HTTPOption applies an option to the HTTP driver. + HTTPOption interface { + ApplyHTTPOption(Config) Config + + // A private method to prevent users implementing the + // interface and so future additions to it will not + // violate compatibility. + private() + } + + // GRPCOption applies an option to the gRPC driver. + GRPCOption interface { + ApplyGRPCOption(Config) Config + + // A private method to prevent users implementing the + // interface and so future additions to it will not + // violate compatibility. + private() + } +) + +// genericOption is an option that applies the same logic +// for both gRPC and HTTP. +type genericOption struct { + fn func(Config) Config +} + +func (g *genericOption) ApplyGRPCOption(cfg Config) Config { + return g.fn(cfg) +} + +func (g *genericOption) ApplyHTTPOption(cfg Config) Config { + return g.fn(cfg) +} + +func (genericOption) private() {} + +func newGenericOption(fn func(cfg Config) Config) GenericOption { + return &genericOption{fn: fn} +} + +// splitOption is an option that applies different logics +// for gRPC and HTTP. 
+type splitOption struct { + httpFn func(Config) Config + grpcFn func(Config) Config +} + +func (g *splitOption) ApplyGRPCOption(cfg Config) Config { + return g.grpcFn(cfg) +} + +func (g *splitOption) ApplyHTTPOption(cfg Config) Config { + return g.httpFn(cfg) +} + +func (splitOption) private() {} + +func newSplitOption(httpFn func(cfg Config) Config, grpcFn func(cfg Config) Config) GenericOption { + return &splitOption{httpFn: httpFn, grpcFn: grpcFn} +} + +// httpOption is an option that is only applied to the HTTP driver. +type httpOption struct { + fn func(Config) Config +} + +func (h *httpOption) ApplyHTTPOption(cfg Config) Config { + return h.fn(cfg) +} + +func (httpOption) private() {} + +func NewHTTPOption(fn func(cfg Config) Config) HTTPOption { + return &httpOption{fn: fn} +} + +// grpcOption is an option that is only applied to the gRPC driver. +type grpcOption struct { + fn func(Config) Config +} + +func (h *grpcOption) ApplyGRPCOption(cfg Config) Config { + return h.fn(cfg) +} + +func (grpcOption) private() {} + +func NewGRPCOption(fn func(cfg Config) Config) GRPCOption { + return &grpcOption{fn: fn} +} + +// Generic Options + +func WithEndpoint(endpoint string) GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.Traces.Endpoint = endpoint + return cfg + }) +} + +func WithEndpointURL(v string) GenericOption { + return newGenericOption(func(cfg Config) Config { + u, err := url.Parse(v) + if err != nil { + slog.Error("otlplog: parse endpoint url", "error", err, "url", v) + return cfg + } + + cfg.Traces.Endpoint = u.Host + cfg.Traces.URLPath = u.Path + if u.Scheme != "https" { + cfg.Traces.Insecure = true + } + + return cfg + }) +} + +func WithCompression(compression Compression) GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.Traces.Compression = compression + return cfg + }) +} + +func WithURLPath(urlPath string) GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.Traces.URLPath = urlPath + return cfg + }) +} + +func WithRetry(rc retry.Config) GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.RetryConfig = rc + return cfg + }) +} + +func WithTLSClientConfig(tlsCfg *tls.Config) GenericOption { + return newSplitOption(func(cfg Config) Config { + cfg.Traces.TLSCfg = tlsCfg.Clone() + return cfg + }, func(cfg Config) Config { + cfg.Traces.GRPCCredentials = credentials.NewTLS(tlsCfg) + return cfg + }) +} + +func WithInsecure() GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.Traces.Insecure = true + return cfg + }) +} + +func WithSecure() GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.Traces.Insecure = false + return cfg + }) +} + +func WithHeaders(headers map[string]string) GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.Traces.Headers = headers + return cfg + }) +} + +func WithTimeout(duration time.Duration) GenericOption { + return newGenericOption(func(cfg Config) Config { + cfg.Traces.Timeout = duration + return cfg + }) +} diff --git a/telemetry/sdklog/otlploggrpc/internal/otlpconfig/optiontypes.go b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/optiontypes.go new file mode 100644 index 00000000000..ef64ea4c682 --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/optiontypes.go @@ -0,0 +1,40 @@ +// Code created by gotmpl. DO NOT MODIFY. 
+// source: internal/shared/otlp/otlptrace/otlpconfig/optiontypes.go.tmpl + +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package otlpconfig + +const ( + // DefaultCollectorGRPCPort is the default gRPC port of the collector. + DefaultCollectorGRPCPort uint16 = 4317 + // DefaultCollectorHTTPPort is the default HTTP port of the collector. + DefaultCollectorHTTPPort uint16 = 4318 + // DefaultCollectorHost is the host address the Exporter will attempt + // connect to if no collector address is provided. + DefaultCollectorHost string = "localhost" +) + +// Compression describes the compression used for payloads sent to the +// collector. +type Compression int + +const ( + // NoCompression tells the driver to send payloads without + // compression. + NoCompression Compression = iota + // GzipCompression tells the driver to send payloads after + // compressing them with gzip. + GzipCompression +) + +// Marshaler describes the kind of message format sent to the collector. +type Marshaler int + +const ( + // MarshalProto tells the driver to send using the protobuf binary format. + MarshalProto Marshaler = iota + // MarshalJSON tells the driver to send using json format. + MarshalJSON +) diff --git a/telemetry/sdklog/otlploggrpc/internal/otlpconfig/tls.go b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/tls.go new file mode 100644 index 00000000000..ad85b2aa7f9 --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/internal/otlpconfig/tls.go @@ -0,0 +1,27 @@ +// Code created by gotmpl. DO NOT MODIFY. +// source: internal/shared/otlp/otlptrace/otlpconfig/tls.go.tmpl + +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package otlpconfig + +import ( + "crypto/tls" + "crypto/x509" + "errors" +) + +// CreateTLSConfig creates a tls.Config from a raw certificate bytes +// to verify a server certificate. +func CreateTLSConfig(certBytes []byte) (*tls.Config, error) { + cp := x509.NewCertPool() + if ok := cp.AppendCertsFromPEM(certBytes); !ok { + return nil, errors.New("failed to append certificate to the cert pool") + } + + return &tls.Config{ + MinVersion: tls.VersionTLS12, + RootCAs: cp, + }, nil +} diff --git a/telemetry/sdklog/otlploggrpc/internal/partialsuccess.go b/telemetry/sdklog/otlploggrpc/internal/partialsuccess.go new file mode 100644 index 00000000000..ccb5a717209 --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/internal/partialsuccess.go @@ -0,0 +1,46 @@ +// Code created by gotmpl. DO NOT MODIFY. +// source: internal/shared/otlp/partialsuccess.go + +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package internal + +import "fmt" + +// PartialSuccess represents the underlying error for all handling +// OTLP partial success messages. Use `errors.Is(err, +// PartialSuccess{})` to test whether an error passed to the OTel +// error handler belongs to this category. +type PartialSuccess struct { + ErrorMessage string + RejectedItems int64 + RejectedKind string +} + +var _ error = PartialSuccess{} + +// Error implements the error interface. +func (ps PartialSuccess) Error() string { + msg := ps.ErrorMessage + if msg == "" { + msg = "empty message" + } + return fmt.Sprintf("OTLP partial success: %s (%d %s rejected)", msg, ps.RejectedItems, ps.RejectedKind) +} + +// Is supports the errors.Is() interface. +func (ps PartialSuccess) Is(err error) bool { + _, ok := err.(PartialSuccess) + return ok +} + +// LogsPartialSuccessError returns an error describing a partial success +// response for the trace signal. 
+func LogsPartialSuccessError(itemsRejected int64, errorMessage string) error { + return PartialSuccess{ + ErrorMessage: errorMessage, + RejectedItems: itemsRejected, + RejectedKind: "logs", + } +} diff --git a/telemetry/sdklog/otlploggrpc/internal/retry/retry.go b/telemetry/sdklog/otlploggrpc/internal/retry/retry.go new file mode 100644 index 00000000000..02d3c2147c1 --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/internal/retry/retry.go @@ -0,0 +1,145 @@ +// Code created by gotmpl. DO NOT MODIFY. +// source: internal/shared/otlp/retry/retry.go.tmpl + +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +// Package retry provides request retry functionality that can perform +// configurable exponential backoff for transient errors and honor any +// explicit throttle responses received. +package retry + +import ( + "context" + "fmt" + "time" + + "github.com/cenkalti/backoff/v4" +) + +// DefaultConfig are the recommended defaults to use. +var DefaultConfig = Config{ + Enabled: true, + InitialInterval: 5 * time.Second, + MaxInterval: 30 * time.Second, + MaxElapsedTime: time.Minute, +} + +// Config defines configuration for retrying batches in case of export failure +// using an exponential backoff. +type Config struct { + // Enabled indicates whether to not retry sending batches in case of + // export failure. + Enabled bool + // InitialInterval the time to wait after the first failure before + // retrying. + InitialInterval time.Duration + // MaxInterval is the upper bound on backoff interval. Once this value is + // reached the delay between consecutive retries will always be + // `MaxInterval`. + MaxInterval time.Duration + // MaxElapsedTime is the maximum amount of time (including retries) spent + // trying to send a request/batch. Once this value is reached, the data + // is discarded. + MaxElapsedTime time.Duration +} + +// RequestFunc wraps a request with retry logic. +type RequestFunc func(context.Context, func(context.Context) error) error + +// EvaluateFunc returns if an error is retry-able and if an explicit throttle +// duration should be honored that was included in the error. +// +// The function must return true if the error argument is retry-able, +// otherwise it must return false for the first return parameter. +// +// The function must return a non-zero time.Duration if the error contains +// explicit throttle duration that should be honored, otherwise it must return +// a zero valued time.Duration. +type EvaluateFunc func(error) (bool, time.Duration) + +// RequestFunc returns a RequestFunc using the evaluate function to determine +// if requests can be retried and based on the exponential backoff +// configuration of c. +func (c Config) RequestFunc(evaluate EvaluateFunc) RequestFunc { + if !c.Enabled { + return func(ctx context.Context, fn func(context.Context) error) error { + return fn(ctx) + } + } + + return func(ctx context.Context, fn func(context.Context) error) error { + // Do not use NewExponentialBackOff since it calls Reset and the code here + // must call Reset after changing the InitialInterval (this saves an + // unnecessary call to Now). 
+ b := &backoff.ExponentialBackOff{ + InitialInterval: c.InitialInterval, + RandomizationFactor: backoff.DefaultRandomizationFactor, + Multiplier: backoff.DefaultMultiplier, + MaxInterval: c.MaxInterval, + MaxElapsedTime: c.MaxElapsedTime, + Stop: backoff.Stop, + Clock: backoff.SystemClock, + } + b.Reset() + + for { + err := fn(ctx) + if err == nil { + return nil + } + + retryable, throttle := evaluate(err) + if !retryable { + return err + } + + bOff := b.NextBackOff() + if bOff == backoff.Stop { + return fmt.Errorf("max retry time elapsed: %w", err) + } + + // Wait for the greater of the backoff or throttle delay. + var delay time.Duration + if bOff > throttle { + delay = bOff + } else { + elapsed := b.GetElapsedTime() + if b.MaxElapsedTime != 0 && elapsed+throttle > b.MaxElapsedTime { + return fmt.Errorf("max retry time would elapse: %w", err) + } + delay = throttle + } + + if ctxErr := waitFunc(ctx, delay); ctxErr != nil { + return fmt.Errorf("%w: %s", ctxErr, err) + } + } + } +} + +// Allow override for testing. +var waitFunc = wait + +// wait takes the caller's context, and the amount of time to wait. It will +// return nil if the timer fires before or at the same time as the context's +// deadline. This indicates that the call can be retried. +func wait(ctx context.Context, delay time.Duration) error { + timer := time.NewTimer(delay) + defer timer.Stop() + + select { + case <-ctx.Done(): + // Handle the case where the timer and context deadline end + // simultaneously by prioritizing the timer expiration nil value + // response. + select { + case <-timer.C: + default: + return ctx.Err() + } + case <-timer.C: + } + + return nil +} diff --git a/telemetry/sdklog/otlploggrpc/options.go b/telemetry/sdklog/otlploggrpc/options.go new file mode 100644 index 00000000000..bb75f9d52ee --- /dev/null +++ b/telemetry/sdklog/otlploggrpc/options.go @@ -0,0 +1,202 @@ +// Copyright The OpenTelemetry Authors +// SPDX-License-Identifier: Apache-2.0 + +package otlploggrpc + +import ( + "fmt" + "time" + + "go.opentelemetry.io/otel" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials" + + "github.com/dagger/dagger/telemetry/sdklog/otlploggrpc/internal/otlpconfig" + "github.com/dagger/dagger/telemetry/sdklog/otlploggrpc/internal/retry" +) + +// Option applies an option to the gRPC driver. +type Option interface { + applyGRPCOption(otlpconfig.Config) otlpconfig.Config +} + +func asGRPCOptions(opts []Option) []otlpconfig.GRPCOption { + converted := make([]otlpconfig.GRPCOption, len(opts)) + for i, o := range opts { + converted[i] = otlpconfig.NewGRPCOption(o.applyGRPCOption) + } + return converted +} + +// RetryConfig defines configuration for retrying export of span batches that +// failed to be received by the target endpoint. +// +// This configuration does not define any network retry strategy. That is +// entirely handled by the gRPC ClientConn. +type RetryConfig retry.Config + +type wrappedOption struct { + otlpconfig.GRPCOption +} + +func (w wrappedOption) applyGRPCOption(cfg otlpconfig.Config) otlpconfig.Config { + return w.ApplyGRPCOption(cfg) +} + +// WithInsecure disables client transport security for the exporter's gRPC +// connection just like grpc.WithInsecure() +// (https://pkg.go.dev/google.golang.org/grpc#WithInsecure) does. Note, by +// default, client security is required unless WithInsecure is used. +// +// This option has no effect if WithGRPCConn is used. 
+func WithInsecure() Option { + return wrappedOption{otlpconfig.WithInsecure()} +} + +// WithEndpoint sets the target endpoint the Exporter will connect to. +// +// If the OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_METRICS_ENDPOINT +// environment variable is set, and this option is not passed, that variable +// value will be used. If both are set, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT +// will take precedence. +// +// If both this option and WithEndpointURL are used, the last used option will +// take precedence. +// +// By default, if an environment variable is not set, and this option is not +// passed, "localhost:4317" will be used. +// +// This option has no effect if WithGRPCConn is used. +func WithEndpoint(endpoint string) Option { + return wrappedOption{otlpconfig.WithEndpoint(endpoint)} +} + +// WithEndpointURL sets the target endpoint URL the Exporter will connect to. +// +// If the OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_METRICS_ENDPOINT +// environment variable is set, and this option is not passed, that variable +// value will be used. If both are set, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT +// will take precedence. +// +// If both this option and WithEndpoint are used, the last used option will +// take precedence. +// +// If an invalid URL is provided, the default value will be kept. +// +// By default, if an environment variable is not set, and this option is not +// passed, "localhost:4317" will be used. +// +// This option has no effect if WithGRPCConn is used. +func WithEndpointURL(u string) Option { + return wrappedOption{otlpconfig.WithEndpointURL(u)} +} + +// WithReconnectionPeriod set the minimum amount of time between connection +// attempts to the target endpoint. +// +// This option has no effect if WithGRPCConn is used. +func WithReconnectionPeriod(rp time.Duration) Option { + return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config { + cfg.ReconnectionPeriod = rp + return cfg + })} +} + +func compressorToCompression(compressor string) otlpconfig.Compression { + if compressor == "gzip" { + return otlpconfig.GzipCompression + } + + otel.Handle(fmt.Errorf("invalid compression type: '%s', using no compression as default", compressor)) + return otlpconfig.NoCompression +} + +// WithCompressor sets the compressor for the gRPC client to use when sending +// requests. Supported compressor values: "gzip". +func WithCompressor(compressor string) Option { + return wrappedOption{otlpconfig.WithCompression(compressorToCompression(compressor))} +} + +// WithHeaders will send the provided headers with each gRPC requests. +func WithHeaders(headers map[string]string) Option { + return wrappedOption{otlpconfig.WithHeaders(headers)} +} + +// WithTLSCredentials allows the connection to use TLS credentials when +// talking to the server. It takes in grpc.TransportCredentials instead of say +// a Certificate file or a tls.Certificate, because the retrieving of these +// credentials can be done in many ways e.g. plain file, in code tls.Config or +// by certificate rotation, so it is up to the caller to decide what to use. +// +// This option has no effect if WithGRPCConn is used. +func WithTLSCredentials(creds credentials.TransportCredentials) Option { + return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config { + cfg.Traces.GRPCCredentials = creds + return cfg + })} +} + +// WithServiceConfig defines the default gRPC service config used. +// +// This option has no effect if WithGRPCConn is used. 
+func WithServiceConfig(serviceConfig string) Option { + return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config { + cfg.ServiceConfig = serviceConfig + return cfg + })} +} + +// WithDialOption sets explicit grpc.DialOptions to use when making a +// connection. The options here are appended to the internal grpc.DialOptions +// used so they will take precedence over any other internal grpc.DialOptions +// they might conflict with. +// +// This option has no effect if WithGRPCConn is used. +func WithDialOption(opts ...grpc.DialOption) Option { + return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config { + cfg.DialOptions = opts + return cfg + })} +} + +// WithGRPCConn sets conn as the gRPC ClientConn used for all communication. +// +// This option takes precedence over any other option that relates to +// establishing or persisting a gRPC connection to a target endpoint. Any +// other option of those types passed will be ignored. +// +// It is the callers responsibility to close the passed conn. The client +// Shutdown method will not close this connection. +func WithGRPCConn(conn *grpc.ClientConn) Option { + return wrappedOption{otlpconfig.NewGRPCOption(func(cfg otlpconfig.Config) otlpconfig.Config { + cfg.GRPCConn = conn + return cfg + })} +} + +// WithTimeout sets the max amount of time a client will attempt to export a +// batch of spans. This takes precedence over any retry settings defined with +// WithRetry, once this time limit has been reached the export is abandoned +// and the batch of spans is dropped. +// +// If unset, the default timeout will be set to 10 seconds. +func WithTimeout(duration time.Duration) Option { + return wrappedOption{otlpconfig.WithTimeout(duration)} +} + +// WithRetry sets the retry policy for transient retryable errors that may be +// returned by the target endpoint when exporting a batch of spans. +// +// If the target endpoint responds with not only a retryable error, but +// explicitly returns a backoff time in the response. That time will take +// precedence over these settings. +// +// These settings do not define any network retry strategy. That is entirely +// handled by the gRPC ClientConn. +// +// If unset, the default retry policy will be used. It will retry the export +// 5 seconds after receiving a retryable error and increase exponentially +// after each error for no more than a total time of 1 minute. +func WithRetry(settings RetryConfig) Option { + return wrappedOption{otlpconfig.WithRetry(retry.Config(settings))} +} diff --git a/telemetry/sdklog/otlploghttp/client.go b/telemetry/sdklog/otlploghttp/client.go new file mode 100644 index 00000000000..87c92c640cf --- /dev/null +++ b/telemetry/sdklog/otlploghttp/client.go @@ -0,0 +1,250 @@ +package otlploghttp + +import ( + "bytes" + "context" + "errors" + "fmt" + + "io" + "net" + "net/http" + "net/url" + "strconv" + "sync" + "time" + + "github.com/dagger/dagger/telemetry/sdklog" + "github.com/dagger/dagger/telemetry/sdklog/otlploghttp/transform" + "google.golang.org/protobuf/proto" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace" + + collogspb "go.opentelemetry.io/proto/otlp/collector/logs/v1" +) + +const contentTypeProto = "application/x-protobuf" + +// Keep it in sync with golang's DefaultTransport from net/http! 
We
+// have our own copy to avoid handling a situation where the
+// DefaultTransport is overwritten with some different implementation
+// of http.RoundTripper or it is modified by another package.
+var ourTransport = &http.Transport{
+	Proxy: http.ProxyFromEnvironment,
+	DialContext: (&net.Dialer{
+		Timeout:   30 * time.Second,
+		KeepAlive: 30 * time.Second,
+	}).DialContext,
+	ForceAttemptHTTP2:     true,
+	MaxIdleConns:          100,
+	IdleConnTimeout:       90 * time.Second,
+	TLSHandshakeTimeout:   10 * time.Second,
+	ExpectContinueTimeout: 1 * time.Second,
+}
+
+type Config struct {
+	Endpoint string
+	URLPath  string
+	Headers  map[string]string
+	Insecure bool
+	Timeout  time.Duration
+}
+
+type client struct {
+	name     string
+	cfg      Config
+	client   *http.Client
+	stopCh   chan struct{}
+	stopOnce sync.Once
+}
+
+var _ sdklog.LogExporter = (*client)(nil)
+
+func NewClient(cfg Config) sdklog.LogExporter {
+	httpClient := &http.Client{
+		Transport: ourTransport,
+		Timeout:   cfg.Timeout,
+	}
+
+	stopCh := make(chan struct{})
+	return &client{
+		name:   "logs",
+		cfg:    cfg,
+		stopCh: stopCh,
+		client: httpClient,
+	}
+}
+
+// Shutdown stops the client and interrupts any in-flight request.
+func (d *client) Shutdown(ctx context.Context) error {
+	d.stopOnce.Do(func() {
+		close(d.stopCh)
+	})
+	select {
+	case <-ctx.Done():
+		return ctx.Err()
+	default:
+	}
+	return nil
+}
+
+// ExportLogs sends a batch of log records to the collector.
+func (d *client) ExportLogs(ctx context.Context, logs []*sdklog.LogData) error {
+	pbRequest := &collogspb.ExportLogsServiceRequest{
+		ResourceLogs: transform.Logs(logs),
+	}
+
+	rawRequest, err := proto.Marshal(pbRequest)
+	if err != nil {
+		return err
+	}
+
+	ctx, cancel := d.contextWithStop(ctx)
+	defer cancel()
+
+	request, err := d.newRequest(rawRequest)
+	if err != nil {
+		return err
+	}
+
+	select {
+	case <-ctx.Done():
+		return ctx.Err()
+	default:
+	}
+
+	request.reset(ctx)
+	resp, err := d.client.Do(request.Request)
+	var urlErr *url.Error
+	if errors.As(err, &urlErr) && urlErr.Temporary() {
+		return newResponseError(http.Header{})
+	}
+	if err != nil {
+		return err
+	}
+
+	if resp != nil && resp.Body != nil {
+		defer func() {
+			if err := resp.Body.Close(); err != nil {
+				otel.Handle(err)
+			}
+		}()
+	}
+
+	switch sc := resp.StatusCode; {
+	case sc >= 200 && sc <= 299:
+		return nil
+	case sc == http.StatusTooManyRequests,
+		sc == http.StatusBadGateway,
+		sc == http.StatusServiceUnavailable,
+		sc == http.StatusGatewayTimeout:
+		// Retry-able failures. Drain the body to reuse the connection.
+		if _, err := io.Copy(io.Discard, resp.Body); err != nil {
+			otel.Handle(err)
+		}
+		return newResponseError(resp.Header)
+	default:
+		return fmt.Errorf("failed to send to %s: %s", request.URL, resp.Status)
+	}
+}
+
+func (d *client) newRequest(body []byte) (request, error) {
+	u := url.URL{Scheme: d.getScheme(), Host: d.cfg.Endpoint, Path: d.cfg.URLPath}
+	r, err := http.NewRequest(http.MethodPost, u.String(), nil)
+	if err != nil {
+		return request{Request: r}, err
+	}
+
+	userAgent := "OTel OTLP Exporter Go/" + otlptrace.Version()
+	r.Header.Set("User-Agent", userAgent)
+
+	for k, v := range d.cfg.Headers {
+		r.Header.Set(k, v)
+	}
+	r.Header.Set("Content-Type", contentTypeProto)
+
+	req := request{Request: r}
+	r.ContentLength = (int64)(len(body))
+	req.bodyReader = bodyReader(body)
+
+	return req, nil
+}
+
+// MarshalLog is the marshaling function used by the logging system to represent this Client.
+func (d *client) MarshalLog() interface{} { + return struct { + Type string + Endpoint string + Insecure bool + }{ + Type: "otlploghttp", + Endpoint: d.cfg.Endpoint, + Insecure: d.cfg.Insecure, + } +} + +// bodyReader returns a closure returning a new reader for buf. +func bodyReader(buf []byte) func() io.ReadCloser { + return func() io.ReadCloser { + return io.NopCloser(bytes.NewReader(buf)) + } +} + +// request wraps an http.Request with a resettable body reader. +type request struct { + *http.Request + + // bodyReader allows the same body to be used for multiple requests. + bodyReader func() io.ReadCloser +} + +// reset reinitializes the request Body and uses ctx for the request. +func (r *request) reset(ctx context.Context) { + r.Body = r.bodyReader() + r.Request = r.Request.WithContext(ctx) +} + +// retryableError represents a request failure that can be retried. +type retryableError struct { + throttle int64 +} + +// newResponseError returns a retryableError and will extract any explicit +// throttle delay contained in headers. +func newResponseError(header http.Header) error { + var rErr retryableError + if s, ok := header["Retry-After"]; ok { + if t, err := strconv.ParseInt(s[0], 10, 64); err == nil { + rErr.throttle = t + } + } + return rErr +} + +func (e retryableError) Error() string { + return "retry-able request failure" +} + +func (d *client) getScheme() string { + if d.cfg.Insecure { + return "http" + } + return "https" +} + +func (d *client) contextWithStop(ctx context.Context) (context.Context, context.CancelFunc) { + // Unify the parent context Done signal with the client's stop + // channel. + ctx, cancel := context.WithCancel(ctx) + go func(ctx context.Context, cancel context.CancelFunc) { + select { + case <-ctx.Done(): + // Nothing to do, either cancelled or deadline + // happened. + case <-d.stopCh: + cancel() + } + }(ctx, cancel) + return ctx, cancel +} diff --git a/telemetry/sdklog/otlploghttp/transform/resource.go b/telemetry/sdklog/otlploghttp/transform/resource.go new file mode 100644 index 00000000000..9c00a41b6b7 --- /dev/null +++ b/telemetry/sdklog/otlploghttp/transform/resource.go @@ -0,0 +1,137 @@ +package transform + +import ( + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/sdk/resource" + commonpb "go.opentelemetry.io/proto/otlp/common/v1" + resourcepb "go.opentelemetry.io/proto/otlp/resource/v1" +) + +// Resource transforms a Resource into an OTLP Resource. +func Resource(r *resource.Resource) *resourcepb.Resource { + if r == nil { + return nil + } + return &resourcepb.Resource{Attributes: resourceAttributes(r)} +} + +func resourceAttributes(res *resource.Resource) []*commonpb.KeyValue { + return iterator(res.Iter()) +} + +func iterator(iter attribute.Iterator) []*commonpb.KeyValue { + l := iter.Len() + if l == 0 { + return nil + } + + out := make([]*commonpb.KeyValue, 0, l) + for iter.Next() { + out = append(out, keyValue(iter.Attribute())) + } + return out +} + +func keyValue(kv attribute.KeyValue) *commonpb.KeyValue { + return &commonpb.KeyValue{Key: string(kv.Key), Value: value(kv.Value)} +} + +// value transforms an attribute value into an OTLP AnyValue. 
+func value(v attribute.Value) *commonpb.AnyValue { + av := new(commonpb.AnyValue) + switch v.Type() { + case attribute.BOOL: + av.Value = &commonpb.AnyValue_BoolValue{ + BoolValue: v.AsBool(), + } + case attribute.BOOLSLICE: + av.Value = &commonpb.AnyValue_ArrayValue{ + ArrayValue: &commonpb.ArrayValue{ + Values: boolSliceValues(v.AsBoolSlice()), + }, + } + case attribute.INT64: + av.Value = &commonpb.AnyValue_IntValue{ + IntValue: v.AsInt64(), + } + case attribute.INT64SLICE: + av.Value = &commonpb.AnyValue_ArrayValue{ + ArrayValue: &commonpb.ArrayValue{ + Values: int64SliceValues(v.AsInt64Slice()), + }, + } + case attribute.FLOAT64: + av.Value = &commonpb.AnyValue_DoubleValue{ + DoubleValue: v.AsFloat64(), + } + case attribute.FLOAT64SLICE: + av.Value = &commonpb.AnyValue_ArrayValue{ + ArrayValue: &commonpb.ArrayValue{ + Values: float64SliceValues(v.AsFloat64Slice()), + }, + } + case attribute.STRING: + av.Value = &commonpb.AnyValue_StringValue{ + StringValue: v.AsString(), + } + case attribute.STRINGSLICE: + av.Value = &commonpb.AnyValue_ArrayValue{ + ArrayValue: &commonpb.ArrayValue{ + Values: stringSliceValues(v.AsStringSlice()), + }, + } + default: + av.Value = &commonpb.AnyValue_StringValue{ + StringValue: "INVALID", + } + } + return av +} + +func boolSliceValues(vals []bool) []*commonpb.AnyValue { + converted := make([]*commonpb.AnyValue, len(vals)) + for i, v := range vals { + converted[i] = &commonpb.AnyValue{ + Value: &commonpb.AnyValue_BoolValue{ + BoolValue: v, + }, + } + } + return converted +} + +func int64SliceValues(vals []int64) []*commonpb.AnyValue { + converted := make([]*commonpb.AnyValue, len(vals)) + for i, v := range vals { + converted[i] = &commonpb.AnyValue{ + Value: &commonpb.AnyValue_IntValue{ + IntValue: v, + }, + } + } + return converted +} + +func float64SliceValues(vals []float64) []*commonpb.AnyValue { + converted := make([]*commonpb.AnyValue, len(vals)) + for i, v := range vals { + converted[i] = &commonpb.AnyValue{ + Value: &commonpb.AnyValue_DoubleValue{ + DoubleValue: v, + }, + } + } + return converted +} + +func stringSliceValues(vals []string) []*commonpb.AnyValue { + converted := make([]*commonpb.AnyValue, len(vals)) + for i, v := range vals { + converted[i] = &commonpb.AnyValue{ + Value: &commonpb.AnyValue_StringValue{ + StringValue: v, + }, + } + } + return converted +} diff --git a/telemetry/sdklog/otlploghttp/transform/tranform.go b/telemetry/sdklog/otlploghttp/transform/tranform.go new file mode 100644 index 00000000000..d32c47bd3ad --- /dev/null +++ b/telemetry/sdklog/otlploghttp/transform/tranform.go @@ -0,0 +1,159 @@ +package transform + +import ( + "github.com/dagger/dagger/telemetry/sdklog" + "go.opentelemetry.io/otel/attribute" + olog "go.opentelemetry.io/otel/log" + "go.opentelemetry.io/otel/sdk/instrumentation" + commonpb "go.opentelemetry.io/proto/otlp/common/v1" + logspb "go.opentelemetry.io/proto/otlp/logs/v1" +) + +// Spans transforms a slice of OpenTelemetry spans into a slice of OTLP +// ResourceSpans. 
+func Logs(sdl []*sdklog.LogData) []*logspb.ResourceLogs {
+	if len(sdl) == 0 {
+		return nil
+	}
+
+	rsm := make(map[attribute.Distinct]*logspb.ResourceLogs)
+
+	type key struct {
+		r  attribute.Distinct
+		is instrumentation.Scope
+	}
+	ssm := make(map[key]*logspb.ScopeLogs)
+
+	var resources int
+	for _, sd := range sdl {
+		if sd == nil {
+			continue
+		}
+
+		rKey := sd.Resource.Equivalent()
+		k := key{
+			r:  rKey,
+			is: sd.InstrumentationScope,
+		}
+		scopeLog, iOk := ssm[k]
+		if !iOk {
+			// Either the resource or instrumentation scope were unknown.
+			scopeLog = &logspb.ScopeLogs{
+				Scope:      InstrumentationScope(sd.InstrumentationScope),
+				LogRecords: []*logspb.LogRecord{},
+				SchemaUrl:  sd.InstrumentationScope.SchemaURL,
+			}
+		}
+		scopeLog.LogRecords = append(scopeLog.LogRecords, logRecord(sd))
+		ssm[k] = scopeLog
+
+		rs, rOk := rsm[rKey]
+		if !rOk {
+			resources++
+			// The resource was unknown.
+			rs = &logspb.ResourceLogs{
+				Resource:  Resource(sd.Resource),
+				ScopeLogs: []*logspb.ScopeLogs{scopeLog},
+			}
+			if sd.Resource != nil {
+				rs.SchemaUrl = sd.Resource.SchemaURL()
+			}
+			rsm[rKey] = rs
+			continue
+		}
+
+		// The resource has been seen before. Check if the instrumentation
+		// scope lookup was unknown because if so we need to add it to the
+		// ResourceLogs. Otherwise, the instrumentation scope has already
+		// been seen and the append we did above is already included in the
+		// ScopeLogs reference.
+		if !iOk {
+			rs.ScopeLogs = append(rs.ScopeLogs, scopeLog)
+		}
+	}
+
+	// Transform the categorized map into a slice
+	rss := make([]*logspb.ResourceLogs, 0, resources)
+	for _, rs := range rsm {
+		rss = append(rss, rs)
+	}
+	return rss
+}
+
+func InstrumentationScope(il instrumentation.Scope) *commonpb.InstrumentationScope {
+	if il == (instrumentation.Scope{}) {
+		return nil
+	}
+	return &commonpb.InstrumentationScope{
+		Name:    il.Name,
+		Version: il.Version,
+	}
+}
+
+// logValue transforms a log Value into an OTLP AnyValue.
+func logValue(v olog.Value) *commonpb.AnyValue {
+	av := new(commonpb.AnyValue)
+	switch v.Kind() {
+	case olog.KindBool:
+		av.Value = &commonpb.AnyValue_BoolValue{
+			BoolValue: v.AsBool(),
+		}
+	case olog.KindInt64:
+		av.Value = &commonpb.AnyValue_IntValue{
+			IntValue: v.AsInt64(),
+		}
+	case olog.KindFloat64:
+		av.Value = &commonpb.AnyValue_DoubleValue{
+			DoubleValue: v.AsFloat64(),
+		}
+	case olog.KindString:
+		av.Value = &commonpb.AnyValue_StringValue{
+			StringValue: v.AsString(),
+		}
+	case olog.KindSlice:
+		array := &commonpb.ArrayValue{}
+		for _, e := range v.AsSlice() {
+			array.Values = append(array.Values, logValue(e))
+		}
+		av.Value = &commonpb.AnyValue_ArrayValue{
+			ArrayValue: array,
+		}
+	case olog.KindMap:
+		panic("not supported")
+	default:
+		av.Value = &commonpb.AnyValue_StringValue{
+			StringValue: "INVALID",
+		}
+	}
+	return av
+}
+
+// logRecord transforms LogData into an OTLP LogRecord.
+func logRecord(l *sdklog.LogData) *logspb.LogRecord { + if l == nil { + return nil + } + + attrs := []*commonpb.KeyValue{} + l.WalkAttributes(func(kv olog.KeyValue) bool { + attrs = append(attrs, &commonpb.KeyValue{ + Key: kv.Key, + Value: logValue(kv.Value), + }) + return true + }) + + s := &logspb.LogRecord{ + TimeUnixNano: uint64(l.Timestamp().UnixNano()), + SeverityNumber: logspb.SeverityNumber(l.Severity()), + SeverityText: l.SeverityText(), + Body: logValue(l.Body()), + Attributes: attrs, + // DroppedAttributesCount: 0, + // Flags: 0, + TraceId: l.TraceID[:], + SpanId: l.SpanID[:], + } + + return s +} diff --git a/telemetry/sdklog/processor.go b/telemetry/sdklog/processor.go new file mode 100644 index 00000000000..90f6cec1b8d --- /dev/null +++ b/telemetry/sdklog/processor.go @@ -0,0 +1,48 @@ +package sdklog + +import ( + "context" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/log" + "go.opentelemetry.io/otel/sdk/instrumentation" + "go.opentelemetry.io/otel/sdk/resource" + "go.opentelemetry.io/otel/trace" +) + +type LogData struct { + log.Record + + Resource *resource.Resource + InstrumentationScope instrumentation.Scope + + TraceID trace.TraceID + SpanID trace.SpanID +} + +type LogProcessor interface { + OnEmit(context.Context, *LogData) + Shutdown(context.Context) error +} + +var _ LogProcessor = &simpleLogProcessor{} + +type simpleLogProcessor struct { + exporter LogExporter +} + +func NewSimpleLogProcessor(exporter LogExporter) LogProcessor { + return &simpleLogProcessor{ + exporter: exporter, + } +} + +func (p *simpleLogProcessor) OnEmit(ctx context.Context, log *LogData) { + if err := p.exporter.ExportLogs(ctx, []*LogData{log}); err != nil { + otel.Handle(err) + } +} + +func (p *simpleLogProcessor) Shutdown(ctx context.Context) error { + return nil +} diff --git a/telemetry/sdklog/provider.go b/telemetry/sdklog/provider.go new file mode 100644 index 00000000000..f194ba37ee6 --- /dev/null +++ b/telemetry/sdklog/provider.go @@ -0,0 +1,80 @@ +package sdklog + +import ( + "context" + "sync" + "sync/atomic" + + "go.opentelemetry.io/otel/log" + "go.opentelemetry.io/otel/log/embedded" + "go.opentelemetry.io/otel/log/noop" + "go.opentelemetry.io/otel/sdk/instrumentation" + "go.opentelemetry.io/otel/sdk/resource" + "golang.org/x/sync/errgroup" +) + +var _ log.LoggerProvider = &LoggerProvider{} + +type LoggerProvider struct { + embedded.LoggerProvider + + mu sync.RWMutex + resource *resource.Resource + isShutdown atomic.Bool + processors []LogProcessor +} + +func NewLoggerProvider(resource *resource.Resource) *LoggerProvider { + return &LoggerProvider{ + resource: resource, + } +} + +func (p *LoggerProvider) Logger(name string, opts ...log.LoggerOption) log.Logger { + if p.isShutdown.Load() { + return noop.NewLoggerProvider().Logger(name, opts...) + } + + c := log.NewLoggerConfig(opts...) 
+	is := instrumentation.Scope{
+		Name:      name,
+		Version:   c.InstrumentationVersion(),
+		SchemaURL: c.SchemaURL(),
+	}
+
+	return &logger{
+		provider:             p,
+		instrumentationScope: is,
+		resource:             p.resource,
+	}
+}
+
+func (p *LoggerProvider) RegisterLogProcessor(lp LogProcessor) {
+	p.mu.Lock()
+	defer p.mu.Unlock()
+	if p.isShutdown.Load() {
+		return
+	}
+
+	p.processors = append(p.processors, lp)
+}
+
+func (p *LoggerProvider) Shutdown(ctx context.Context) error {
+	// Mark the provider as shut down so newly requested loggers and
+	// processors become no-ops from this point on.
+	p.isShutdown.Store(true)
+
+	p.mu.RLock()
+	defer p.mu.RUnlock()
+	eg := new(errgroup.Group)
+	for _, lp := range p.processors {
+		lp := lp
+		eg.Go(func() error {
+			return lp.Shutdown(ctx)
+		})
+	}
+	return eg.Wait()
+}
+
+func (p *LoggerProvider) getLogProcessors() []LogProcessor {
+	p.mu.RLock()
+	defer p.mu.RUnlock()
+
+	return p.processors
+}
diff --git a/telemetry/servers.go b/telemetry/servers.go
new file mode 100644
index 00000000000..e187b9c6c20
--- /dev/null
+++ b/telemetry/servers.go
@@ -0,0 +1,231 @@
+package telemetry
+
+import (
+	"context"
+	"fmt"
+	"log/slog"
+	"time"
+
+	"go.opentelemetry.io/otel/attribute"
+	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
+	"go.opentelemetry.io/otel/log"
+	"go.opentelemetry.io/otel/sdk/instrumentation"
+	"go.opentelemetry.io/otel/sdk/resource"
+	"go.opentelemetry.io/otel/trace"
+	colllogsv1 "go.opentelemetry.io/proto/otlp/collector/logs/v1"
+	collmetricsv1 "go.opentelemetry.io/proto/otlp/collector/metrics/v1"
+	colltracev1 "go.opentelemetry.io/proto/otlp/collector/trace/v1"
+	otlpcommonv1 "go.opentelemetry.io/proto/otlp/common/v1"
+	otlplogsv1 "go.opentelemetry.io/proto/otlp/logs/v1"
+	otlptracev1 "go.opentelemetry.io/proto/otlp/trace/v1"
+	codes "google.golang.org/grpc/codes"
+	status "google.golang.org/grpc/status"
+
+	"github.com/dagger/dagger/telemetry/sdklog"
+	logtransform "github.com/dagger/dagger/telemetry/sdklog/otlploghttp/transform"
+)
+
+type TraceServer struct {
+	PubSub *PubSub
+
+	*colltracev1.UnimplementedTraceServiceServer
+	*UnimplementedTracesSourceServer
+}
+
+func (e *TraceServer) Export(ctx context.Context, req *colltracev1.ExportTraceServiceRequest) (*colltracev1.ExportTraceServiceResponse, error) {
+	err := e.PubSub.ExportSpans(ctx, SpansFromProto(req.GetResourceSpans()))
+	if err != nil {
+		return nil, err
+	}
+	return &colltracev1.ExportTraceServiceResponse{}, nil
+}
+
+func (e *TraceServer) Subscribe(req *TelemetryRequest, srv TracesSource_SubscribeServer) error {
+	exp, err := otlptrace.New(srv.Context(), &traceStreamExporter{stream: srv})
+	if err != nil {
+		return err
+	}
+	return e.PubSub.SubscribeToSpans(srv.Context(), trace.TraceID(req.TraceId), exp)
+}
+
+type traceStreamExporter struct {
+	stream TracesSource_SubscribeServer
+}
+
+var _ otlptrace.Client = (*traceStreamExporter)(nil)
+
+func (s *traceStreamExporter) Start(ctx context.Context) error {
+	return nil
+}
+
+func (s *traceStreamExporter) Stop(ctx context.Context) error {
+	return nil
+}
+
+func (s *traceStreamExporter) UploadTraces(ctx context.Context, spans []*otlptracev1.ResourceSpans) error {
+	return s.stream.Send(&otlptracev1.TracesData{
+		ResourceSpans: spans,
+	})
+}
+
+type LogsServer struct {
+	PubSub *PubSub
+
+	*colllogsv1.UnimplementedLogsServiceServer
+	*UnimplementedLogsSourceServer
+}
+
+func (e *LogsServer) Export(ctx context.Context, req *colllogsv1.ExportLogsServiceRequest) (*colllogsv1.ExportLogsServiceResponse, error) {
+	err := e.PubSub.ExportLogs(ctx, TransformPBLogs(req.GetResourceLogs()))
+	if err != nil {
+		return nil, err
+	}
+	return &colllogsv1.ExportLogsServiceResponse{}, nil
+}
+
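For orientation, here is a minimal sketch of how the sdklog pieces introduced above (LoggerProvider, NewSimpleLogProcessor, and the LogExporter contract) might be wired together in a standalone program. The stderrExporter type and the Logger.Emit call are illustrative assumptions based on the otel log API; only the provider, the simple processor, and the log.Record setters come from this patch.

    // Sketch only: wire the sdklog LoggerProvider to a trivial exporter.
    // stderrExporter is hypothetical; its method set mirrors sdklog.LogExporter
    // as implemented by the stream exporters in this package.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        "go.opentelemetry.io/otel/log"
        "go.opentelemetry.io/otel/sdk/resource"

        "github.com/dagger/dagger/telemetry/sdklog"
    )

    type stderrExporter struct{}

    func (stderrExporter) ExportLogs(ctx context.Context, logs []*sdklog.LogData) error {
        for _, l := range logs {
            fmt.Fprintf(os.Stderr, "%s %s\n", l.Timestamp().Format(time.RFC3339), l.Body().AsString())
        }
        return nil
    }

    func (stderrExporter) Shutdown(ctx context.Context) error { return nil }

    func main() {
        ctx := context.Background()

        lp := sdklog.NewLoggerProvider(resource.Default())
        lp.RegisterLogProcessor(sdklog.NewSimpleLogProcessor(stderrExporter{}))
        defer lp.Shutdown(ctx)

        // Records are expected to flow Logger -> simpleLogProcessor.OnEmit ->
        // LogExporter.ExportLogs (the Emit path itself lives outside this hunk).
        logger := lp.Logger("example")
        var rec log.Record
        rec.SetTimestamp(time.Now())
        rec.SetBody(log.StringValue("hello from sdklog"))
        logger.Emit(ctx, rec)
    }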
+func (e *LogsServer) Subscribe(req *TelemetryRequest, stream LogsSource_SubscribeServer) error { + return e.PubSub.SubscribeToLogs(stream.Context(), trace.TraceID(req.TraceId), &logStreamExporter{ + stream: stream, + }) +} + +type logStreamExporter struct { + stream LogsSource_SubscribeServer +} + +var _ sdklog.LogExporter = (*logStreamExporter)(nil) + +func (s *logStreamExporter) ExportLogs(ctx context.Context, logs []*sdklog.LogData) error { + return s.stream.Send(&otlplogsv1.LogsData{ + ResourceLogs: logtransform.Logs(logs), + }) +} + +func (s *logStreamExporter) Shutdown(ctx context.Context) error { + return nil +} + +type MetricsServer struct { + PubSub *PubSub + + *collmetricsv1.UnimplementedMetricsServiceServer + *UnimplementedMetricsSourceServer +} + +func (e *MetricsServer) Export(ctx context.Context, req *collmetricsv1.ExportMetricsServiceRequest) (*collmetricsv1.ExportMetricsServiceResponse, error) { + // TODO + slog.Warn("MetricsServer.Export ignoring export (TODO)") + return &collmetricsv1.ExportMetricsServiceResponse{}, nil +} + +func (e *MetricsServer) Subscribe(req *TelemetryRequest, srv MetricsSource_SubscribeServer) error { + return status.Errorf(codes.Unimplemented, "Subscribe not implemented") +} + +func TransformPBLogs(resLogs []*otlplogsv1.ResourceLogs) []*sdklog.LogData { + logs := []*sdklog.LogData{} + for _, rl := range resLogs { + res := resource.NewWithAttributes(rl.GetSchemaUrl(), attrKVs(rl.GetResource().GetAttributes())...) + for _, scopeLog := range rl.GetScopeLogs() { + scope := scopeLog.GetScope() + for _, rec := range scopeLog.GetLogRecords() { + var logRec log.Record + // spare me my life! + // spare me my life! + logRec.SetTimestamp(time.Unix(0, int64(rec.GetTimeUnixNano()))) + logRec.SetBody(logValue(rec.GetBody())) + logRec.AddAttributes(logKVs(rec.GetAttributes())...) 
+				logRec.SetSeverity(log.Severity(rec.GetSeverityNumber()))
+				logRec.SetSeverityText(rec.GetSeverityText())
+				logRec.SetObservedTimestamp(time.Unix(0, int64(rec.GetObservedTimeUnixNano())))
+				logs = append(logs, &sdklog.LogData{
+					Record:   logRec,
+					Resource: res,
+					InstrumentationScope: instrumentation.Scope{
+						Name:      scope.GetName(),
+						Version:   scope.GetVersion(),
+						SchemaURL: scopeLog.GetSchemaUrl(),
+					},
+					TraceID: trace.TraceID(rec.GetTraceId()),
+					SpanID:  trace.SpanID(rec.GetSpanId()),
+				})
+			}
+		}
+	}
+	return logs
+}
+
+func logKVs(kvs []*otlpcommonv1.KeyValue) []log.KeyValue {
+	res := make([]log.KeyValue, len(kvs))
+	for i, kv := range kvs {
+		res[i] = logKeyValue(kv)
+	}
+	return res
+}
+
+func logKeyValue(v *otlpcommonv1.KeyValue) log.KeyValue {
+	return log.KeyValue{
+		Key:   v.GetKey(),
+		Value: logValue(v.GetValue()),
+	}
+}
+
+func attrKVs(kvs []*otlpcommonv1.KeyValue) []attribute.KeyValue {
+	res := make([]attribute.KeyValue, len(kvs))
+	for i, kv := range kvs {
+		res[i] = attrKeyValue(kv)
+	}
+	return res
+}
+
+func attrKeyValue(v *otlpcommonv1.KeyValue) attribute.KeyValue {
+	return attribute.KeyValue{
+		Key:   attribute.Key(v.GetKey()),
+		Value: attrValue(v.GetValue()),
+	}
+}
+
+func attrValue(v *otlpcommonv1.AnyValue) attribute.Value {
+	switch x := v.Value.(type) {
+	case *otlpcommonv1.AnyValue_StringValue:
+		return attribute.StringValue(v.GetStringValue())
+	case *otlpcommonv1.AnyValue_DoubleValue:
+		return attribute.Float64Value(v.GetDoubleValue())
+	case *otlpcommonv1.AnyValue_IntValue:
+		return attribute.Int64Value(v.GetIntValue())
+	case *otlpcommonv1.AnyValue_BoolValue:
+		return attribute.BoolValue(v.GetBoolValue())
+	default:
+		// TODO slices, bleh
+		return attribute.StringValue(fmt.Sprintf("UNHANDLED ATTR TYPE: %v", x))
+	}
+}
+
+func logValue(v *otlpcommonv1.AnyValue) log.Value {
+	switch x := v.Value.(type) {
+	case *otlpcommonv1.AnyValue_StringValue:
+		return log.StringValue(v.GetStringValue())
+	case *otlpcommonv1.AnyValue_DoubleValue:
+		return log.Float64Value(v.GetDoubleValue())
+	case *otlpcommonv1.AnyValue_IntValue:
+		return log.Int64Value(v.GetIntValue())
+	case *otlpcommonv1.AnyValue_BoolValue:
+		return log.BoolValue(v.GetBoolValue())
+	case *otlpcommonv1.AnyValue_KvlistValue:
+		// Allocate capacity only; appending to a len-sized slice would leave
+		// zero-valued entries at the front.
+		kvs := make([]log.KeyValue, 0, len(x.KvlistValue.GetValues()))
+		for _, kv := range x.KvlistValue.GetValues() {
+			kvs = append(kvs, logKeyValue(kv))
+		}
+		return log.MapValue(kvs...)
+	case *otlpcommonv1.AnyValue_ArrayValue:
+		vals := make([]log.Value, 0, len(x.ArrayValue.GetValues()))
+		for _, v := range x.ArrayValue.GetValues() {
+			vals = append(vals, logValue(v))
+		}
+		return log.SliceValue(vals...)
+	case *otlpcommonv1.AnyValue_BytesValue:
+		return log.BytesValue(x.BytesValue)
+	default:
+		panic(fmt.Sprintf("unknown value type: %T", x))
+	}
+}
diff --git a/telemetry/servers.pb.go b/telemetry/servers.pb.go
new file mode 100644
index 00000000000..5401290e0ee
--- /dev/null
+++ b/telemetry/servers.pb.go
@@ -0,0 +1,182 @@
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// 	protoc-gen-go v1.33.0
+// 	protoc        v3.19.6
+// source: servers.proto
+
+package telemetry
+
+import (
+	v11 "go.opentelemetry.io/proto/otlp/logs/v1"
+	v12 "go.opentelemetry.io/proto/otlp/metrics/v1"
+	v1 "go.opentelemetry.io/proto/otlp/trace/v1"
+	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+	reflect "reflect"
+	sync "sync"
+)
+
+const (
+	// Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +type TelemetryRequest struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + TraceId []byte `protobuf:"bytes,1,opt,name=trace_id,json=traceId,proto3" json:"trace_id,omitempty"` +} + +func (x *TelemetryRequest) Reset() { + *x = TelemetryRequest{} + if protoimpl.UnsafeEnabled { + mi := &file_servers_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } +} + +func (x *TelemetryRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*TelemetryRequest) ProtoMessage() {} + +func (x *TelemetryRequest) ProtoReflect() protoreflect.Message { + mi := &file_servers_proto_msgTypes[0] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use TelemetryRequest.ProtoReflect.Descriptor instead. +func (*TelemetryRequest) Descriptor() ([]byte, []int) { + return file_servers_proto_rawDescGZIP(), []int{0} +} + +func (x *TelemetryRequest) GetTraceId() []byte { + if x != nil { + return x.TraceId + } + return nil +} + +var File_servers_proto protoreflect.FileDescriptor + +var file_servers_proto_rawDesc = []byte{ + 0x0a, 0x0d, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, + 0x09, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x1a, 0x28, 0x6f, 0x70, 0x65, 0x6e, + 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, + 0x74, 0x72, 0x61, 0x63, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x74, 0x72, 0x61, 0x63, 0x65, 0x2e, 0x70, + 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x26, 0x6f, 0x70, 0x65, 0x6e, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, + 0x74, 0x72, 0x79, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x6c, 0x6f, 0x67, 0x73, 0x2f, 0x76, + 0x31, 0x2f, 0x6c, 0x6f, 0x67, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x2c, 0x6f, 0x70, + 0x65, 0x6e, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x2f, 0x70, 0x72, 0x6f, 0x74, + 0x6f, 0x2f, 0x6d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x73, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x65, 0x74, + 0x72, 0x69, 0x63, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x2d, 0x0a, 0x10, 0x54, 0x65, + 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x19, + 0x0a, 0x08, 0x74, 0x72, 0x61, 0x63, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, + 0x52, 0x07, 0x74, 0x72, 0x61, 0x63, 0x65, 0x49, 0x64, 0x32, 0x66, 0x0a, 0x0c, 0x54, 0x72, 0x61, + 0x63, 0x65, 0x73, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12, 0x56, 0x0a, 0x09, 0x53, 0x75, 0x62, + 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x12, 0x1b, 0x2e, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, + 0x72, 0x79, 0x2e, 0x54, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x52, 0x65, 0x71, 0x75, + 0x65, 0x73, 0x74, 0x1a, 0x28, 0x2e, 0x6f, 0x70, 0x65, 0x6e, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, + 0x74, 0x72, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x74, 0x72, 0x61, 0x63, 0x65, 0x2e, + 0x76, 0x31, 0x2e, 0x54, 0x72, 0x61, 0x63, 0x65, 0x73, 0x44, 0x61, 0x74, 0x61, 0x22, 0x00, 0x30, + 0x01, 0x32, 0x61, 0x0a, 0x0a, 0x4c, 0x6f, 0x67, 0x73, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12, + 0x53, 0x0a, 0x09, 0x53, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x12, 0x1b, 0x2e, 0x74, 
+ 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x2e, 0x54, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, + 0x72, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x6f, 0x70, 0x65, 0x6e, + 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, + 0x6c, 0x6f, 0x67, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x4c, 0x6f, 0x67, 0x73, 0x44, 0x61, 0x74, 0x61, + 0x22, 0x00, 0x30, 0x01, 0x32, 0x6a, 0x0a, 0x0d, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x73, 0x53, + 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12, 0x59, 0x0a, 0x09, 0x53, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, + 0x62, 0x65, 0x12, 0x1b, 0x2e, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x2e, 0x54, + 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, + 0x2b, 0x2e, 0x6f, 0x70, 0x65, 0x6e, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x6d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x73, 0x2e, 0x76, 0x31, + 0x2e, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x73, 0x44, 0x61, 0x74, 0x61, 0x22, 0x00, 0x30, 0x01, + 0x42, 0x0d, 0x5a, 0x0b, 0x2e, 0x2f, 0x74, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x74, 0x72, 0x79, 0x62, + 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, +} + +var ( + file_servers_proto_rawDescOnce sync.Once + file_servers_proto_rawDescData = file_servers_proto_rawDesc +) + +func file_servers_proto_rawDescGZIP() []byte { + file_servers_proto_rawDescOnce.Do(func() { + file_servers_proto_rawDescData = protoimpl.X.CompressGZIP(file_servers_proto_rawDescData) + }) + return file_servers_proto_rawDescData +} + +var file_servers_proto_msgTypes = make([]protoimpl.MessageInfo, 1) +var file_servers_proto_goTypes = []interface{}{ + (*TelemetryRequest)(nil), // 0: telemetry.TelemetryRequest + (*v1.TracesData)(nil), // 1: opentelemetry.proto.trace.v1.TracesData + (*v11.LogsData)(nil), // 2: opentelemetry.proto.logs.v1.LogsData + (*v12.MetricsData)(nil), // 3: opentelemetry.proto.metrics.v1.MetricsData +} +var file_servers_proto_depIdxs = []int32{ + 0, // 0: telemetry.TracesSource.Subscribe:input_type -> telemetry.TelemetryRequest + 0, // 1: telemetry.LogsSource.Subscribe:input_type -> telemetry.TelemetryRequest + 0, // 2: telemetry.MetricsSource.Subscribe:input_type -> telemetry.TelemetryRequest + 1, // 3: telemetry.TracesSource.Subscribe:output_type -> opentelemetry.proto.trace.v1.TracesData + 2, // 4: telemetry.LogsSource.Subscribe:output_type -> opentelemetry.proto.logs.v1.LogsData + 3, // 5: telemetry.MetricsSource.Subscribe:output_type -> opentelemetry.proto.metrics.v1.MetricsData + 3, // [3:6] is the sub-list for method output_type + 0, // [0:3] is the sub-list for method input_type + 0, // [0:0] is the sub-list for extension type_name + 0, // [0:0] is the sub-list for extension extendee + 0, // [0:0] is the sub-list for field type_name +} + +func init() { file_servers_proto_init() } +func file_servers_proto_init() { + if File_servers_proto != nil { + return + } + if !protoimpl.UnsafeEnabled { + file_servers_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*TelemetryRequest); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: file_servers_proto_rawDesc, + NumEnums: 0, + NumMessages: 1, + NumExtensions: 0, + NumServices: 3, + }, + GoTypes: file_servers_proto_goTypes, + DependencyIndexes: 
file_servers_proto_depIdxs, + MessageInfos: file_servers_proto_msgTypes, + }.Build() + File_servers_proto = out.File + file_servers_proto_rawDesc = nil + file_servers_proto_goTypes = nil + file_servers_proto_depIdxs = nil +} diff --git a/telemetry/servers.proto b/telemetry/servers.proto new file mode 100644 index 00000000000..ee3a16c3e36 --- /dev/null +++ b/telemetry/servers.proto @@ -0,0 +1,24 @@ +syntax = "proto3"; +package telemetry; + +import "opentelemetry/proto/trace/v1/trace.proto"; +import "opentelemetry/proto/logs/v1/logs.proto"; +import "opentelemetry/proto/metrics/v1/metrics.proto"; + +option go_package = "./telemetry"; + +service TracesSource { + rpc Subscribe (TelemetryRequest) returns (stream opentelemetry.proto.trace.v1.TracesData) {} +} + +service LogsSource { + rpc Subscribe (TelemetryRequest) returns (stream opentelemetry.proto.logs.v1.LogsData) {} +} + +service MetricsSource { + rpc Subscribe (TelemetryRequest) returns (stream opentelemetry.proto.metrics.v1.MetricsData) {} +} + +message TelemetryRequest { + bytes trace_id = 1; +}; diff --git a/telemetry/servers_grpc.pb.go b/telemetry/servers_grpc.pb.go new file mode 100644 index 00000000000..960c357175d --- /dev/null +++ b/telemetry/servers_grpc.pb.go @@ -0,0 +1,373 @@ +// Code generated by protoc-gen-go-grpc. DO NOT EDIT. +// versions: +// - protoc-gen-go-grpc v1.3.0 +// - protoc v3.19.6 +// source: servers.proto + +package telemetry + +import ( + context "context" + v11 "go.opentelemetry.io/proto/otlp/logs/v1" + v12 "go.opentelemetry.io/proto/otlp/metrics/v1" + v1 "go.opentelemetry.io/proto/otlp/trace/v1" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" +) + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +// Requires gRPC-Go v1.32.0 or later. +const _ = grpc.SupportPackageIsVersion7 + +const ( + TracesSource_Subscribe_FullMethodName = "/telemetry.TracesSource/Subscribe" +) + +// TracesSourceClient is the client API for TracesSource service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +type TracesSourceClient interface { + Subscribe(ctx context.Context, in *TelemetryRequest, opts ...grpc.CallOption) (TracesSource_SubscribeClient, error) +} + +type tracesSourceClient struct { + cc grpc.ClientConnInterface +} + +func NewTracesSourceClient(cc grpc.ClientConnInterface) TracesSourceClient { + return &tracesSourceClient{cc} +} + +func (c *tracesSourceClient) Subscribe(ctx context.Context, in *TelemetryRequest, opts ...grpc.CallOption) (TracesSource_SubscribeClient, error) { + stream, err := c.cc.NewStream(ctx, &TracesSource_ServiceDesc.Streams[0], TracesSource_Subscribe_FullMethodName, opts...) 
+ if err != nil { + return nil, err + } + x := &tracesSourceSubscribeClient{stream} + if err := x.ClientStream.SendMsg(in); err != nil { + return nil, err + } + if err := x.ClientStream.CloseSend(); err != nil { + return nil, err + } + return x, nil +} + +type TracesSource_SubscribeClient interface { + Recv() (*v1.TracesData, error) + grpc.ClientStream +} + +type tracesSourceSubscribeClient struct { + grpc.ClientStream +} + +func (x *tracesSourceSubscribeClient) Recv() (*v1.TracesData, error) { + m := new(v1.TracesData) + if err := x.ClientStream.RecvMsg(m); err != nil { + return nil, err + } + return m, nil +} + +// TracesSourceServer is the server API for TracesSource service. +// All implementations must embed UnimplementedTracesSourceServer +// for forward compatibility +type TracesSourceServer interface { + Subscribe(*TelemetryRequest, TracesSource_SubscribeServer) error + mustEmbedUnimplementedTracesSourceServer() +} + +// UnimplementedTracesSourceServer must be embedded to have forward compatible implementations. +type UnimplementedTracesSourceServer struct { +} + +func (UnimplementedTracesSourceServer) Subscribe(*TelemetryRequest, TracesSource_SubscribeServer) error { + return status.Errorf(codes.Unimplemented, "method Subscribe not implemented") +} +func (UnimplementedTracesSourceServer) mustEmbedUnimplementedTracesSourceServer() {} + +// UnsafeTracesSourceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to TracesSourceServer will +// result in compilation errors. +type UnsafeTracesSourceServer interface { + mustEmbedUnimplementedTracesSourceServer() +} + +func RegisterTracesSourceServer(s grpc.ServiceRegistrar, srv TracesSourceServer) { + s.RegisterService(&TracesSource_ServiceDesc, srv) +} + +func _TracesSource_Subscribe_Handler(srv interface{}, stream grpc.ServerStream) error { + m := new(TelemetryRequest) + if err := stream.RecvMsg(m); err != nil { + return err + } + return srv.(TracesSourceServer).Subscribe(m, &tracesSourceSubscribeServer{stream}) +} + +type TracesSource_SubscribeServer interface { + Send(*v1.TracesData) error + grpc.ServerStream +} + +type tracesSourceSubscribeServer struct { + grpc.ServerStream +} + +func (x *tracesSourceSubscribeServer) Send(m *v1.TracesData) error { + return x.ServerStream.SendMsg(m) +} + +// TracesSource_ServiceDesc is the grpc.ServiceDesc for TracesSource service. +// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var TracesSource_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "telemetry.TracesSource", + HandlerType: (*TracesSourceServer)(nil), + Methods: []grpc.MethodDesc{}, + Streams: []grpc.StreamDesc{ + { + StreamName: "Subscribe", + Handler: _TracesSource_Subscribe_Handler, + ServerStreams: true, + }, + }, + Metadata: "servers.proto", +} + +const ( + LogsSource_Subscribe_FullMethodName = "/telemetry.LogsSource/Subscribe" +) + +// LogsSourceClient is the client API for LogsSource service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. 
+type LogsSourceClient interface { + Subscribe(ctx context.Context, in *TelemetryRequest, opts ...grpc.CallOption) (LogsSource_SubscribeClient, error) +} + +type logsSourceClient struct { + cc grpc.ClientConnInterface +} + +func NewLogsSourceClient(cc grpc.ClientConnInterface) LogsSourceClient { + return &logsSourceClient{cc} +} + +func (c *logsSourceClient) Subscribe(ctx context.Context, in *TelemetryRequest, opts ...grpc.CallOption) (LogsSource_SubscribeClient, error) { + stream, err := c.cc.NewStream(ctx, &LogsSource_ServiceDesc.Streams[0], LogsSource_Subscribe_FullMethodName, opts...) + if err != nil { + return nil, err + } + x := &logsSourceSubscribeClient{stream} + if err := x.ClientStream.SendMsg(in); err != nil { + return nil, err + } + if err := x.ClientStream.CloseSend(); err != nil { + return nil, err + } + return x, nil +} + +type LogsSource_SubscribeClient interface { + Recv() (*v11.LogsData, error) + grpc.ClientStream +} + +type logsSourceSubscribeClient struct { + grpc.ClientStream +} + +func (x *logsSourceSubscribeClient) Recv() (*v11.LogsData, error) { + m := new(v11.LogsData) + if err := x.ClientStream.RecvMsg(m); err != nil { + return nil, err + } + return m, nil +} + +// LogsSourceServer is the server API for LogsSource service. +// All implementations must embed UnimplementedLogsSourceServer +// for forward compatibility +type LogsSourceServer interface { + Subscribe(*TelemetryRequest, LogsSource_SubscribeServer) error + mustEmbedUnimplementedLogsSourceServer() +} + +// UnimplementedLogsSourceServer must be embedded to have forward compatible implementations. +type UnimplementedLogsSourceServer struct { +} + +func (UnimplementedLogsSourceServer) Subscribe(*TelemetryRequest, LogsSource_SubscribeServer) error { + return status.Errorf(codes.Unimplemented, "method Subscribe not implemented") +} +func (UnimplementedLogsSourceServer) mustEmbedUnimplementedLogsSourceServer() {} + +// UnsafeLogsSourceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to LogsSourceServer will +// result in compilation errors. +type UnsafeLogsSourceServer interface { + mustEmbedUnimplementedLogsSourceServer() +} + +func RegisterLogsSourceServer(s grpc.ServiceRegistrar, srv LogsSourceServer) { + s.RegisterService(&LogsSource_ServiceDesc, srv) +} + +func _LogsSource_Subscribe_Handler(srv interface{}, stream grpc.ServerStream) error { + m := new(TelemetryRequest) + if err := stream.RecvMsg(m); err != nil { + return err + } + return srv.(LogsSourceServer).Subscribe(m, &logsSourceSubscribeServer{stream}) +} + +type LogsSource_SubscribeServer interface { + Send(*v11.LogsData) error + grpc.ServerStream +} + +type logsSourceSubscribeServer struct { + grpc.ServerStream +} + +func (x *logsSourceSubscribeServer) Send(m *v11.LogsData) error { + return x.ServerStream.SendMsg(m) +} + +// LogsSource_ServiceDesc is the grpc.ServiceDesc for LogsSource service. 
+// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var LogsSource_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "telemetry.LogsSource", + HandlerType: (*LogsSourceServer)(nil), + Methods: []grpc.MethodDesc{}, + Streams: []grpc.StreamDesc{ + { + StreamName: "Subscribe", + Handler: _LogsSource_Subscribe_Handler, + ServerStreams: true, + }, + }, + Metadata: "servers.proto", +} + +const ( + MetricsSource_Subscribe_FullMethodName = "/telemetry.MetricsSource/Subscribe" +) + +// MetricsSourceClient is the client API for MetricsSource service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. +type MetricsSourceClient interface { + Subscribe(ctx context.Context, in *TelemetryRequest, opts ...grpc.CallOption) (MetricsSource_SubscribeClient, error) +} + +type metricsSourceClient struct { + cc grpc.ClientConnInterface +} + +func NewMetricsSourceClient(cc grpc.ClientConnInterface) MetricsSourceClient { + return &metricsSourceClient{cc} +} + +func (c *metricsSourceClient) Subscribe(ctx context.Context, in *TelemetryRequest, opts ...grpc.CallOption) (MetricsSource_SubscribeClient, error) { + stream, err := c.cc.NewStream(ctx, &MetricsSource_ServiceDesc.Streams[0], MetricsSource_Subscribe_FullMethodName, opts...) + if err != nil { + return nil, err + } + x := &metricsSourceSubscribeClient{stream} + if err := x.ClientStream.SendMsg(in); err != nil { + return nil, err + } + if err := x.ClientStream.CloseSend(); err != nil { + return nil, err + } + return x, nil +} + +type MetricsSource_SubscribeClient interface { + Recv() (*v12.MetricsData, error) + grpc.ClientStream +} + +type metricsSourceSubscribeClient struct { + grpc.ClientStream +} + +func (x *metricsSourceSubscribeClient) Recv() (*v12.MetricsData, error) { + m := new(v12.MetricsData) + if err := x.ClientStream.RecvMsg(m); err != nil { + return nil, err + } + return m, nil +} + +// MetricsSourceServer is the server API for MetricsSource service. +// All implementations must embed UnimplementedMetricsSourceServer +// for forward compatibility +type MetricsSourceServer interface { + Subscribe(*TelemetryRequest, MetricsSource_SubscribeServer) error + mustEmbedUnimplementedMetricsSourceServer() +} + +// UnimplementedMetricsSourceServer must be embedded to have forward compatible implementations. +type UnimplementedMetricsSourceServer struct { +} + +func (UnimplementedMetricsSourceServer) Subscribe(*TelemetryRequest, MetricsSource_SubscribeServer) error { + return status.Errorf(codes.Unimplemented, "method Subscribe not implemented") +} +func (UnimplementedMetricsSourceServer) mustEmbedUnimplementedMetricsSourceServer() {} + +// UnsafeMetricsSourceServer may be embedded to opt out of forward compatibility for this service. +// Use of this interface is not recommended, as added methods to MetricsSourceServer will +// result in compilation errors. 
+type UnsafeMetricsSourceServer interface { + mustEmbedUnimplementedMetricsSourceServer() +} + +func RegisterMetricsSourceServer(s grpc.ServiceRegistrar, srv MetricsSourceServer) { + s.RegisterService(&MetricsSource_ServiceDesc, srv) +} + +func _MetricsSource_Subscribe_Handler(srv interface{}, stream grpc.ServerStream) error { + m := new(TelemetryRequest) + if err := stream.RecvMsg(m); err != nil { + return err + } + return srv.(MetricsSourceServer).Subscribe(m, &metricsSourceSubscribeServer{stream}) +} + +type MetricsSource_SubscribeServer interface { + Send(*v12.MetricsData) error + grpc.ServerStream +} + +type metricsSourceSubscribeServer struct { + grpc.ServerStream +} + +func (x *metricsSourceSubscribeServer) Send(m *v12.MetricsData) error { + return x.ServerStream.SendMsg(m) +} + +// MetricsSource_ServiceDesc is the grpc.ServiceDesc for MetricsSource service. +// It's only intended for direct use with grpc.RegisterService, +// and not to be introspected or modified (even as a copy) +var MetricsSource_ServiceDesc = grpc.ServiceDesc{ + ServiceName: "telemetry.MetricsSource", + HandlerType: (*MetricsSourceServer)(nil), + Methods: []grpc.MethodDesc{}, + Streams: []grpc.StreamDesc{ + { + StreamName: "Subscribe", + Handler: _MetricsSource_Subscribe_Handler, + ServerStreams: true, + }, + }, + Metadata: "servers.proto", +} diff --git a/telemetry/span.go b/telemetry/span.go new file mode 100644 index 00000000000..d64a1809520 --- /dev/null +++ b/telemetry/span.go @@ -0,0 +1,398 @@ +package telemetry + +import ( + "time" + + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/codes" + "go.opentelemetry.io/otel/sdk/instrumentation" + "go.opentelemetry.io/otel/sdk/resource" + sdktrace "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/trace" + otlpcommonv1 "go.opentelemetry.io/proto/otlp/common/v1" + otlpresourcev1 "go.opentelemetry.io/proto/otlp/resource/v1" + otlptracev1 "go.opentelemetry.io/proto/otlp/trace/v1" +) + +// Encapsulate can be applied to a span to indicate that this span should +// collapse its children by default. +func Encapsulate() trace.SpanStartOption { + return trace.WithAttributes(attribute.Bool(UIEncapsulateAttr, true)) +} + +// Internal can be applied to a span to indicate that this span should not be +// shown to the user by default. +func Internal() trace.SpanStartOption { + return trace.WithAttributes(attribute.Bool(UIInternalAttr, true)) +} + +// End is a helper to end a span with an error if the function returns an error. +// +// It is optimized for use as a defer one-liner with a function that has a +// named error return value, conventionally `rerr`. +// +// defer telemetry.End(span, func() error { return rerr }) +func End(span trace.Span, fn func() error) { + if err := fn(); err != nil { + span.RecordError(err) + span.SetStatus(codes.Error, err.Error()) + } + span.End() +} + +// SpansFromProto transforms a slice of OTLP ResourceSpans into a slice of +// ReadOnlySpans. +func SpansFromProto(sdl []*otlptracev1.ResourceSpans) []sdktrace.ReadOnlySpan { + if len(sdl) == 0 { + return nil + } + + var out []sdktrace.ReadOnlySpan + + for _, sd := range sdl { + if sd == nil { + continue + } + + for _, sdi := range sd.ScopeSpans { + if sdi == nil { + continue + } + sda := make([]sdktrace.ReadOnlySpan, 0, len(sdi.Spans)) + for _, s := range sdi.Spans { + if s == nil { + continue + } + sda = append(sda, &readOnlySpan{ + pb: s, + is: sdi.Scope, + resource: sd.Resource, + schemaURL: sd.SchemaUrl, + }) + } + out = append(out, sda...) 
+ } + } + + return out +} + +type readOnlySpan struct { + // Embed the interface to implement the private method. + sdktrace.ReadOnlySpan + + pb *otlptracev1.Span + is *otlpcommonv1.InstrumentationScope + resource *otlpresourcev1.Resource + schemaURL string +} + +func (s *readOnlySpan) Name() string { + return s.pb.Name +} + +func (s *readOnlySpan) SpanContext() trace.SpanContext { + var tid trace.TraceID + copy(tid[:], s.pb.TraceId) + var sid trace.SpanID + copy(sid[:], s.pb.SpanId) + + st, _ := trace.ParseTraceState(s.pb.TraceState) + + return trace.NewSpanContext(trace.SpanContextConfig{ + TraceID: tid, + SpanID: sid, + TraceState: st, + TraceFlags: trace.FlagsSampled, + }) +} + +func (s *readOnlySpan) Parent() trace.SpanContext { + if len(s.pb.ParentSpanId) == 0 { + return trace.SpanContext{} + } + var tid trace.TraceID + copy(tid[:], s.pb.TraceId) + var psid trace.SpanID + copy(psid[:], s.pb.ParentSpanId) + return trace.NewSpanContext(trace.SpanContextConfig{ + TraceID: tid, + SpanID: psid, + }) +} + +func (s *readOnlySpan) SpanKind() trace.SpanKind { + return spanKind(s.pb.Kind) +} + +func (s *readOnlySpan) StartTime() time.Time { + return time.Unix(0, int64(s.pb.StartTimeUnixNano)) +} + +func (s *readOnlySpan) EndTime() time.Time { + return time.Unix(0, int64(s.pb.EndTimeUnixNano)) +} + +func (s *readOnlySpan) Attributes() []attribute.KeyValue { + return AttributesFromProto(s.pb.Attributes) +} + +func (s *readOnlySpan) Links() []sdktrace.Link { + return links(s.pb.Links) +} + +func (s *readOnlySpan) Events() []sdktrace.Event { + return spanEvents(s.pb.Events) +} + +func (s *readOnlySpan) Status() sdktrace.Status { + return sdktrace.Status{ + Code: statusCode(s.pb.Status), + Description: s.pb.Status.GetMessage(), + } +} + +func (s *readOnlySpan) InstrumentationScope() instrumentation.Scope { + return instrumentationScope(s.is) +} + +// Deprecated: use InstrumentationScope. +func (s *readOnlySpan) InstrumentationLibrary() instrumentation.Library { + return s.InstrumentationScope() +} + +// Resource returns information about the entity that produced the span. +func (s *readOnlySpan) Resource() *resource.Resource { + if s.resource == nil { + return nil + } + if s.schemaURL != "" { + return resource.NewWithAttributes(s.schemaURL, AttributesFromProto(s.resource.Attributes)...) + } + return resource.NewSchemaless(AttributesFromProto(s.resource.Attributes)...) +} + +// DroppedAttributes returns the number of attributes dropped by the span +// due to limits being reached. +func (s *readOnlySpan) DroppedAttributes() int { + return int(s.pb.DroppedAttributesCount) +} + +// DroppedLinks returns the number of links dropped by the span due to +// limits being reached. +func (s *readOnlySpan) DroppedLinks() int { + return int(s.pb.DroppedLinksCount) +} + +// DroppedEvents returns the number of events dropped by the span due to +// limits being reached. +func (s *readOnlySpan) DroppedEvents() int { + return int(s.pb.DroppedEventsCount) +} + +// ChildSpanCount returns the count of spans that consider the span a +// direct parent. +func (s *readOnlySpan) ChildSpanCount() int { + return 0 +} + +var _ sdktrace.ReadOnlySpan = &readOnlySpan{} + +// status transform a OTLP span status into span code. +func statusCode(st *otlptracev1.Status) codes.Code { + if st == nil { + return codes.Unset + } + switch st.Code { + case otlptracev1.Status_STATUS_CODE_ERROR: + return codes.Error + default: + return codes.Ok + } +} + +// links transforms OTLP span links to span Links. 
+func links(links []*otlptracev1.Span_Link) []sdktrace.Link { + if len(links) == 0 { + return nil + } + + sl := make([]sdktrace.Link, 0, len(links)) + for _, otLink := range links { + if otLink == nil { + continue + } + // This redefinition is necessary to prevent otLink.*ID[:] copies + // being reused -- in short we need a new otLink per iteration. + otLink := otLink + + var tid trace.TraceID + copy(tid[:], otLink.TraceId) + var sid trace.SpanID + copy(sid[:], otLink.SpanId) + + sctx := trace.NewSpanContext(trace.SpanContextConfig{ + TraceID: tid, + SpanID: sid, + }) + + sl = append(sl, sdktrace.Link{ + SpanContext: sctx, + Attributes: AttributesFromProto(otLink.Attributes), + }) + } + return sl +} + +// spanEvents transforms OTLP span events to span Events. +func spanEvents(es []*otlptracev1.Span_Event) []sdktrace.Event { + if len(es) == 0 { + return nil + } + + evCount := len(es) + events := make([]sdktrace.Event, 0, evCount) + messageEvents := 0 + + // Transform message events + for _, e := range es { + if e == nil { + continue + } + messageEvents++ + events = append(events, + sdktrace.Event{ + Name: e.Name, + Time: time.Unix(0, int64(e.TimeUnixNano)), + Attributes: AttributesFromProto(e.Attributes), + DroppedAttributeCount: int(e.DroppedAttributesCount), + }, + ) + } + + return events +} + +// spanKind transforms a an OTLP span kind to SpanKind. +func spanKind(kind otlptracev1.Span_SpanKind) trace.SpanKind { + switch kind { + case otlptracev1.Span_SPAN_KIND_INTERNAL: + return trace.SpanKindInternal + case otlptracev1.Span_SPAN_KIND_CLIENT: + return trace.SpanKindClient + case otlptracev1.Span_SPAN_KIND_SERVER: + return trace.SpanKindServer + case otlptracev1.Span_SPAN_KIND_PRODUCER: + return trace.SpanKindProducer + case otlptracev1.Span_SPAN_KIND_CONSUMER: + return trace.SpanKindConsumer + default: + return trace.SpanKindUnspecified + } +} + +// AttributesFromProto transforms a slice of OTLP attribute key-values into a slice of KeyValues +func AttributesFromProto(attrs []*otlpcommonv1.KeyValue) []attribute.KeyValue { + if len(attrs) == 0 { + return nil + } + + out := make([]attribute.KeyValue, 0, len(attrs)) + for _, a := range attrs { + if a == nil { + continue + } + kv := attribute.KeyValue{ + Key: attribute.Key(a.Key), + Value: toValue(a.Value), + } + out = append(out, kv) + } + return out +} + +func toValue(v *otlpcommonv1.AnyValue) attribute.Value { + switch vv := v.Value.(type) { + case *otlpcommonv1.AnyValue_BoolValue: + return attribute.BoolValue(vv.BoolValue) + case *otlpcommonv1.AnyValue_IntValue: + return attribute.Int64Value(vv.IntValue) + case *otlpcommonv1.AnyValue_DoubleValue: + return attribute.Float64Value(vv.DoubleValue) + case *otlpcommonv1.AnyValue_StringValue: + return attribute.StringValue(vv.StringValue) + case *otlpcommonv1.AnyValue_ArrayValue: + return arrayValues(vv.ArrayValue.Values) + default: + return attribute.StringValue("INVALID") + } +} + +func boolArray(kv []*otlpcommonv1.AnyValue) attribute.Value { + arr := make([]bool, len(kv)) + for i, v := range kv { + if v != nil { + arr[i] = v.GetBoolValue() + } + } + return attribute.BoolSliceValue(arr) +} + +func intArray(kv []*otlpcommonv1.AnyValue) attribute.Value { + arr := make([]int64, len(kv)) + for i, v := range kv { + if v != nil { + arr[i] = v.GetIntValue() + } + } + return attribute.Int64SliceValue(arr) +} + +func doubleArray(kv []*otlpcommonv1.AnyValue) attribute.Value { + arr := make([]float64, len(kv)) + for i, v := range kv { + if v != nil { + arr[i] = v.GetDoubleValue() + } + } + return 
attribute.Float64SliceValue(arr) +} + +func stringArray(kv []*otlpcommonv1.AnyValue) attribute.Value { + arr := make([]string, len(kv)) + for i, v := range kv { + if v != nil { + arr[i] = v.GetStringValue() + } + } + return attribute.StringSliceValue(arr) +} + +func arrayValues(kv []*otlpcommonv1.AnyValue) attribute.Value { + if len(kv) == 0 || kv[0] == nil { + return attribute.StringSliceValue([]string{}) + } + + switch kv[0].Value.(type) { + case *otlpcommonv1.AnyValue_BoolValue: + return boolArray(kv) + case *otlpcommonv1.AnyValue_IntValue: + return intArray(kv) + case *otlpcommonv1.AnyValue_DoubleValue: + return doubleArray(kv) + case *otlpcommonv1.AnyValue_StringValue: + return stringArray(kv) + default: + return attribute.StringSliceValue([]string{}) + } +} + +func instrumentationScope(is *otlpcommonv1.InstrumentationScope) instrumentation.Scope { + if is == nil { + return instrumentation.Scope{} + } + return instrumentation.Scope{ + Name: is.Name, + Version: is.Version, + } +} diff --git a/telemetry/telemetry.go b/telemetry/telemetry.go deleted file mode 100644 index 066088593be..00000000000 --- a/telemetry/telemetry.go +++ /dev/null @@ -1,173 +0,0 @@ -package telemetry - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - "os" - "sync" - "time" - - "github.com/google/uuid" -) - -const ( - flushInterval = 100 * time.Millisecond - pushURL = "https://api.dagger.cloud/events" -) - -type Telemetry struct { - enabled bool - closed bool - - runID string - - pushURL string - token string - - mu sync.Mutex - queue []*Event - stopCh chan struct{} - doneCh chan struct{} -} - -func New() *Telemetry { - cloudToken := os.Getenv("_EXPERIMENTAL_DAGGER_CLOUD_TOKEN") - // add DAGGER_CLOUD_TOKEN in backwards compat way. - // TODO: deprecate in a future release - if v, ok := os.LookupEnv("DAGGER_CLOUD_TOKEN"); ok { - cloudToken = v - } - - t := &Telemetry{ - runID: uuid.NewString(), - pushURL: os.Getenv("_EXPERIMENTAL_DAGGER_CLOUD_URL"), - token: cloudToken, - stopCh: make(chan struct{}), - doneCh: make(chan struct{}), - } - - if t.pushURL == "" { - t.pushURL = pushURL - } - - if t.token != "" { - // only send telemetry if a token was configured - t.enabled = true - go t.start() - } - - return t -} - -func (t *Telemetry) Enabled() bool { - return t.enabled -} - -func (t *Telemetry) URL() string { - return "https://dagger.cloud/runs/" + t.runID -} - -func (t *Telemetry) Push(p Payload, ts time.Time) { - if !t.enabled { - return - } - - t.mu.Lock() - defer t.mu.Unlock() - - if t.closed { - return - } - - ev := &Event{ - Version: eventVersion, - Timestamp: ts, - Type: p.Type(), - Payload: p, - } - - if p.Scope() == EventScopeRun { - ev.RunID = t.runID - } - - t.queue = append(t.queue, ev) -} - -func (t *Telemetry) start() { - defer close(t.doneCh) - - for { - select { - case <-time.After(flushInterval): - t.send() - case <-t.stopCh: - // On stop, send the current queue and exit - t.send() - return - } - } -} - -func (t *Telemetry) send() { - t.mu.Lock() - queue := append([]*Event{}, t.queue...) 
- t.queue = []*Event{} - t.mu.Unlock() - - if len(queue) == 0 { - return - } - - payload := bytes.NewBuffer([]byte{}) - enc := json.NewEncoder(payload) - for _, ev := range queue { - err := enc.Encode(ev) - if err != nil { - fmt.Fprintln(os.Stderr, "telemetry: encode:", err) - continue - } - } - - req, err := http.NewRequest(http.MethodPost, t.pushURL, bytes.NewReader(payload.Bytes())) - if err != nil { - fmt.Fprintln(os.Stderr, "telemetry: new request:", err) - return - } - if t.token != "" { - req.SetBasicAuth(t.token, "") - } - resp, err := http.DefaultClient.Do(req) - if err != nil { - fmt.Fprintln(os.Stderr, "telemetry: do request:", err) - return - } - if resp.StatusCode != http.StatusCreated { - fmt.Fprintln(os.Stderr, "telemetry: unexpected response:", resp.Status) - } - defer resp.Body.Close() -} - -func (t *Telemetry) Close() { - if !t.enabled { - return - } - - // Stop accepting new events - t.mu.Lock() - if t.closed { - // prevent errors when trying to close multiple times on the same - // telemetry instance - t.mu.Unlock() - return - } - t.closed = true - t.mu.Unlock() - - // Flush events in queue - close(t.stopCh) - - // Wait for completion - <-t.doneCh -} diff --git a/core/pipeline/testdata/.gitattributes b/telemetry/testdata/.gitattributes similarity index 100% rename from core/pipeline/testdata/.gitattributes rename to telemetry/testdata/.gitattributes diff --git a/core/pipeline/testdata/pull_request.synchronize.json b/telemetry/testdata/pull_request.synchronize.json similarity index 100% rename from core/pipeline/testdata/pull_request.synchronize.json rename to telemetry/testdata/pull_request.synchronize.json diff --git a/core/pipeline/testdata/push.json b/telemetry/testdata/push.json similarity index 100% rename from core/pipeline/testdata/push.json rename to telemetry/testdata/push.json diff --git a/core/pipeline/testdata/workflow_dispatch.json b/telemetry/testdata/workflow_dispatch.json similarity index 100% rename from core/pipeline/testdata/workflow_dispatch.json rename to telemetry/testdata/workflow_dispatch.json diff --git a/core/pipeline/util.go b/telemetry/util.go similarity index 98% rename from core/pipeline/util.go rename to telemetry/util.go index 296b538bf7b..b713c1ff383 100644 --- a/core/pipeline/util.go +++ b/telemetry/util.go @@ -1,4 +1,4 @@ -package pipeline +package telemetry import ( "fmt" diff --git a/telemetry/writer.go b/telemetry/writer.go deleted file mode 100644 index 68d1fc5b514..00000000000 --- a/telemetry/writer.go +++ /dev/null @@ -1,132 +0,0 @@ -package telemetry - -import ( - "sync" - "time" - - "github.com/dagger/dagger/core/pipeline" - "github.com/vito/progrock" -) - -type writer struct { - telemetry *Telemetry - pipeliner *Pipeliner - - // emittedMemberships keeps track of whether we've emitted an OpPayload for a - // vertex yet. 
- emittedMemberships map[vertexMembership]bool - - mu sync.Mutex -} - -type vertexMembership struct { - vertexID string - groupID string -} - -func NewWriter(t *Telemetry) progrock.Writer { - return &writer{ - telemetry: t, - pipeliner: NewPipeliner(), - emittedMemberships: map[vertexMembership]bool{}, - } -} - -func (t *writer) WriteStatus(ev *progrock.StatusUpdate) error { - t.pipeliner.TrackUpdate(ev) - - t.mu.Lock() - defer t.mu.Unlock() - - ts := time.Now().UTC() - - for _, m := range ev.Memberships { - for _, vid := range m.Vertexes { - if v, found := t.pipeliner.Vertex(vid); found { - t.maybeEmitOp(ts, v, false) - } - } - } - - for _, eventVertex := range ev.Vertexes { - if v, found := t.pipeliner.Vertex(eventVertex.Id); found { - // override the vertex with the current event vertex since a - // single PipelineEvent could contain duplicated vertices with - // different data like started and completed - v.Vertex = eventVertex - t.maybeEmitOp(ts, v, true) - } - } - - for _, l := range ev.Logs { - t.telemetry.Push(LogPayload{ - OpID: l.Vertex, - Data: string(l.Data), - Stream: int(l.Stream.Number()), - }, l.Timestamp.AsTime()) - } - - return nil -} - -func (t *writer) Close() error { - t.telemetry.Close() - return nil -} - -// maybeEmitOp emits a OpPayload for a vertex if either A) an OpPayload hasn't -// been emitted yet because we saw the vertex before its membership, or B) the -// vertex has been updated. -func (t *writer) maybeEmitOp(ts time.Time, v *PipelinedVertex, isUpdated bool) { - if len(v.Groups) == 0 { - // should be impossible, since the vertex is found and we've processed - // a membership for it - return - } - - // TODO(vito): for now, we only let a vertex be a member of a single - // group. I spent a long time supporting many-to-many memberships, and - // intelligently tree-shaking in the frontend to only show vertices in - // their most relevant groups, but still haven't found a great heuristic. - // Limiting vertices to a single group allows us to fully switch to - // Progrock without having to figure all that out yet. 
- group := v.Groups[0] - pipeline := v.Pipelines[0] - - key := vertexMembership{ - vertexID: v.Id, - groupID: group, - } - - if !t.emittedMemberships[key] || isUpdated { - t.telemetry.Push(t.vertexOp(v.Vertex, pipeline), ts) - t.emittedMemberships[key] = true - } -} - -func (t *writer) vertexOp(v *progrock.Vertex, pl pipeline.Path) OpPayload { - op := OpPayload{ - OpID: v.Id, - OpName: v.Name, - Internal: v.Internal, - - Pipeline: pl, - - Cached: v.Cached, - Error: v.GetError(), - - Inputs: v.Inputs, - } - - if v.Started != nil { - t := v.Started.AsTime() - op.Started = &t - } - - if v.Completed != nil { - t := v.Completed.AsTime() - op.Completed = &t - } - - return op -} diff --git a/tracing/graphql.go b/tracing/graphql.go deleted file mode 100644 index 94191c5f915..00000000000 --- a/tracing/graphql.go +++ /dev/null @@ -1,143 +0,0 @@ -package tracing - -import ( - "context" - "encoding/json" - "log/slog" - - "github.com/dagger/dagger/core/pipeline" - "github.com/dagger/dagger/dagql" - "github.com/dagger/dagger/dagql/call" - "github.com/dagger/dagger/dagql/ioctx" - "github.com/vito/progrock" - "go.opentelemetry.io/otel/attribute" - "go.opentelemetry.io/otel/codes" - "google.golang.org/protobuf/types/known/anypb" -) - -func AroundFunc(ctx context.Context, self dagql.Object, id *call.ID, next func(context.Context) (dagql.Typed, error)) func(context.Context) (dagql.Typed, error) { - // install tracing at the outermost layer so we don't ignore perf impact of - // other telemetry - return SpanAroundFunc(ctx, self, id, - ProgrockAroundFunc(ctx, self, id, next)) -} - -func SpanAroundFunc(ctx context.Context, self dagql.Object, id *call.ID, next func(context.Context) (dagql.Typed, error)) func(context.Context) (dagql.Typed, error) { - return func(ctx context.Context) (dagql.Typed, error) { - if isIntrospection(id) { - return next(ctx) - } - - ctx, span := Tracer.Start(ctx, id.Display()) - - v, err := next(ctx) - - if err != nil { - span.RecordError(err) - span.SetStatus(codes.Error, err.Error()) - } - if v != nil { - serialized, err := json.MarshalIndent(v, "", " ") - if err == nil { - // span.AddEvent("event", trace.WithAttributes(attribute.String("value", string(serialized)))) - span.SetAttributes(attribute.String("value", string(serialized))) - } - } - span.End() - - return v, err - } -} - -// IDLabel is a label set to "true" for a vertex corresponding to a DagQL ID. -const IDLabel = "dagger.io/id" - -func ProgrockAroundFunc(ctx context.Context, self dagql.Object, id *call.ID, next func(context.Context) (dagql.Typed, error)) func(context.Context) (dagql.Typed, error) { - return func(ctx context.Context) (dagql.Typed, error) { - if isIntrospection(id) { - return next(ctx) - } - - dig := id.Digest() - // TODO: we don't need this for anything yet - // inputs, err := id.Inputs() - // if err != nil { - // slog.Warn("failed to digest inputs", "id", id.Display(), "err", err) - // return next(ctx) - // } - opts := []progrock.VertexOpt{ - // see telemetry/legacy.go LegacyIDInternalizer - progrock.WithLabels(progrock.Labelf(IDLabel, "true")), - } - if dagql.IsInternal(ctx) { - opts = append(opts, progrock.Internal()) - } - - // group any self-calls or Buildkit vertices beneath this vertex - ctx, vtx := progrock.Span(ctx, dig.String(), id.Field(), opts...) 
- ctx = ioctx.WithStdout(ctx, vtx.Stdout()) - ctx = ioctx.WithStderr(ctx, vtx.Stderr()) - - // send ID payload to the frontend - idProto, err := id.ToProto() - if err != nil { - slog.Warn("failed to convert id to proto", "id", id.Display(), "err", err) - return next(ctx) - } - payload, err := anypb.New(idProto) - if err != nil { - slog.Warn("failed to anypb.New(id)", "id", id.Display(), "err", err) - return next(ctx) - } - vtx.Meta("id", payload) - - // respect user-configured pipelines - // TODO: remove if we have truly retired these - if w, ok := self.(dagql.Wrapper); ok { // unwrap dagql.Instance - if pl, ok := w.Unwrap().(pipeline.Pipelineable); ok { - ctx = pl.PipelinePath().WithGroups(ctx) - } - } - - // call the actual resolver - res, resolveErr := next(ctx) - if resolveErr != nil { - // NB: we do +id.Display() instead of setting it as a field to avoid - // dobule quoting - slog.Warn("error resolving "+id.Display(), "error", resolveErr) - } - - // record an object result as an output of this vertex - // - // this allows the UI to "simplify" this ID back to its creator ID when it - // sees it in the future if it wants to, e.g. showing mymod.unit().stdout() - // instead of the full container().from().[...].stdout() ID - if obj, ok := res.(dagql.Object); ok { - vtx.Output(obj.ID().Digest()) - } - - vtx.Done(resolveErr) - - return res, resolveErr - } -} - -// isIntrospection detects whether an ID is an introspection query. -// -// These queries tend to be very large and are not interesting for users to -// see. -func isIntrospection(id *call.ID) bool { - if id.Base() == nil { - switch id.Field() { - case "__schema", - "currentTypeDefs", - "currentFunctionCall", - "currentModule": - return true - default: - return false - } - } else { - return isIntrospection(id.Base()) - } -} diff --git a/tracing/tracing.go b/tracing/tracing.go deleted file mode 100644 index f925b23825e..00000000000 --- a/tracing/tracing.go +++ /dev/null @@ -1,78 +0,0 @@ -package tracing - -import ( - "context" - "io" - "os" - "time" - - "go.opentelemetry.io/otel" - //nolint:staticcheck - "go.opentelemetry.io/otel/exporters/jaeger" - "go.opentelemetry.io/otel/sdk/resource" - tracesdk "go.opentelemetry.io/otel/sdk/trace" - semconv "go.opentelemetry.io/otel/semconv/v1.4.0" -) - -var Tracer = otel.Tracer("dagger") - -func Init() io.Closer { - traceEndpoint := os.Getenv("OTEL_EXPORTER_JAEGER_ENDPOINT") - if traceEndpoint == "" { - return &nopCloser{} - } - - tp, err := tracerProvider(traceEndpoint) - if err != nil { - panic(err) - } - - // Register our TracerProvider as the global so any imported - // instrumentation in the future will default to using it. - otel.SetTracerProvider(tp) - - closer := providerCloser{ - TracerProvider: tp, - } - - return closer -} - -// tracerProvider returns an OpenTelemetry TracerProvider configured to use -// the Jaeger exporter that will send spans to the provided url. The returned -// TracerProvider will also use a Resource configured with all the information -// about the application. -func tracerProvider(url string) (*tracesdk.TracerProvider, error) { - // Create the Jaeger exporter - exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) - if err != nil { - return nil, err - } - tp := tracesdk.NewTracerProvider( - // Always be sure to batch in production. - tracesdk.WithBatcher(exp, tracesdk.WithMaxExportBatchSize(1)), - // Record information about this application in an Resource. 
- tracesdk.WithResource(resource.NewWithAttributes( - semconv.SchemaURL, - semconv.ServiceNameKey.String("dagger"), - )), - ) - return tp, nil -} - -type providerCloser struct { - *tracesdk.TracerProvider -} - -func (t providerCloser) Close() error { - ctx, cancel := context.WithTimeout(context.Background(), time.Second*5) - defer cancel() - return t.Shutdown(ctx) -} - -type nopCloser struct { -} - -func (*nopCloser) Close() error { - return nil -}
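Finally, a rough sketch of how a consumer might follow the log stream for a single trace using the LogsSource subscription service and the TransformPBLogs helper introduced above. The dial target, credentials, and error handling are placeholder assumptions; how the CLI itself reaches these services is outside this sketch.

    // Sketch only: follow the log stream for one trace via the LogsSource
    // service defined in servers.proto. The address and insecure credentials
    // are placeholders, not how Dagger itself connects.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "io"
        stdlog "log"

        "go.opentelemetry.io/otel/trace"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"

        "github.com/dagger/dagger/telemetry"
    )

    func main() {
        ctx := context.Background()

        conn, err := grpc.Dial("127.0.0.1:1234", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            stdlog.Fatal(err)
        }
        defer conn.Close()

        var traceID trace.TraceID // the trace to follow, obtained out of band

        stream, err := telemetry.NewLogsSourceClient(conn).Subscribe(ctx, &telemetry.TelemetryRequest{
            TraceId: traceID[:],
        })
        if err != nil {
            stdlog.Fatal(err)
        }

        for {
            data, err := stream.Recv()
            if errors.Is(err, io.EOF) {
                return
            }
            if err != nil {
                stdlog.Fatal(err)
            }
            // Convert the OTLP payload back into sdklog records with the
            // helper from servers.go.
            for _, rec := range telemetry.TransformPBLogs(data.GetResourceLogs()) {
                fmt.Println(rec.Body().AsString())
            }
        }
    }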