Tempo uses GitHub to manage reviews of pull requests:
- If you have a trivial fix or improvement, go ahead and create a pull request.
- If you plan to do something more involved, discuss your ideas on the relevant GitHub issue first.
Before submitting a pull request, read this guide in full. The PR checklist also requires it.
We accept contributions where AI tools (GitHub Copilot, ChatGPT, Claude, etc.) were used during development. However, using AI doesn't lower the bar — it shifts responsibility entirely onto you, the contributor.
If AI tools generated a significant portion of the code or documentation you're submitting, say so in the PR description. State which tool(s) you used and what you used them for. Trivial uses (autocomplete, grammar checking) don't need disclosure; anything more substantial does.
By submitting a PR you certify the Developer Certificate of Origin (DCO) — that you have the right to submit the code under this project's license. AI tools cannot certify the DCO. You are the author of record, regardless of how the code was generated.
This means:
- Read and understand every line before submitting. If you can't explain it, don't submit it.
- PRs submitted by autonomous agents, without a human reviewing the output, are not acceptable.
- Don't use AI to write your PR description or issue comments — those need to accurately represent your understanding of the change.
AI-generated code tends to fail in specific ways. Before submitting, check for:
- Hallucinated APIs: functions or methods that don't exist in the libraries used
- Fake dependencies: package names that sound plausible but don't exist or aren't imported elsewhere in the codebase — verify every new dependency is real and actively maintained
- Incorrect edge case handling: AI often produces code that looks right but handles error paths or boundary conditions wrong
- Style drift: AI output often doesn't match the project's conventions — run `make fmt` and `make lint` and fix everything before submitting
All the normal requirements still apply: tests, documentation, changelog entry, passing CI. Unreviewed AI output that wastes maintainer time is not a contribution.
AI tools may reproduce fragments from their training data. If you're using a tool that can flag similarity to public repositories (such as GitHub Copilot's code referencing), use that feature. If a generated snippet looks like it may have come from an incompatibly-licensed source, rewrite it manually.
We use Go modules to manage dependencies on external packages. This requires a working Go environment with version 1.26 or greater and git installed.
To add or update a dependency, use the `go get` command:

```
# Pick the latest tagged release.
go get example.com/some/module/pkg

# Pick a specific version.
go get example.com/some/module/pkg@vX.Y.Z
```

Before submitting, please run the following to verify that all dependencies and proto definitions are consistent:

```
make vendor-check
```

The repository is laid out as follows:

```
cmd/
  tempo/               - main tempo binary
  tempo-cli/           - cli tool for directly inspecting blocks in the backend
  tempo-vulture/       - bird-themed consistency checker. optional.
  tempo-query/         - jaeger-query GRPC plugin (Apache2 licensed)
docs/
example/               - great place to get started running Tempo
  docker-compose/
  tk/
integration/           - e2e tests
modules/               - top level Tempo components
  backend-worker/
  backend-scheduler/
  distributor/
  overrides/
  querier/
  frontend/
  storage/
opentelemetry-proto/   - git submodule. necessary for proto vendoring
operations/            - Tempo deployment and monitoring resources (Apache2 licensed)
  jsonnet/
  tempo-mixin/
pkg/
  tempopb/             - proto for interacting with various Tempo services (Apache2 licensed)
tempodb/               - object storage key/value database
vendor/
```
Imports should follow the std libs, external libs, and local packages format:

```go
import (
	"context"
	"fmt"

	"github.com/gogo/protobuf/proto"
	"github.com/opentracing/opentracing-go"

	"github.com/grafana/tempo/modules/overrides"
	"github.com/grafana/tempo/pkg/validation"
)
```

Follow standard Go error handling patterns. Return errors up the call stack; don't swallow them. Avoid `panic` except in truly unrecoverable situations.
Handle errors immediately and return early:

Check for errors right after the call and return early. The happy path continues at the normal indentation level — don't nest it in an `else` block.

```go
// good
err := doSomething()
if err != nil {
	level.Error(logger).Log("msg", "failed to do something", "err", err)
	return err
}
// happy path continues here at normal indentation
```

```go
// avoid
err := doSomething()
if err == nil {
	// happy path buried inside else
} else {
	return err
}
```

Wrap errors with context using `%w`:

```go
return fmt.Errorf("failed to create tempodb: %w", err)
```

Always add a short context string that describes what the current function was trying to do. This builds a readable error chain without losing the original error for later inspection.
Define sentinel errors at the package level:

```go
var (
	ErrDoesNotExist  = errors.New("does not exist")
	ErrEmptyTenantID = errors.New("empty tenant id")
)
```

Use `errors.New` for sentinel values. Export them (`Err*`) when callers outside the package need to check for them.

Check errors with `errors.Is` and `errors.As`:

```go
// Checking sentinel errors
if errors.Is(err, backend.ErrDoesNotExist) { ... }

// Checking for context cancellation
if errors.Is(err, context.Canceled) { ... }

// Extracting a custom error type
var parseErr *ParseError
if errors.As(err, &parseErr) { ... }
```

Prefer `errors.Is` and `errors.As` over direct equality checks or string matching — they traverse the error chain created by `%w` wrapping.
Define custom error types for structured error data:

When callers need to inspect error fields (not just identity), define a struct that implements the error interface. Add an `Unwrap()` method if the type wraps another error.

```go
type ParseError struct {
	msg  string
	line int
	col  int
}

func (e *ParseError) Error() string {
	return fmt.Sprintf("parse error at line %d, col %d: %s", e.line, e.col, e.msg)
}
```

Log errors with structured key-value pairs:

```go
level.Error(logger).Log("msg", "failed to flush block", "tenant", tenantID, "err", err)
level.Warn(logger).Log("msg", "skipped span processing", "err", err)
```

Use `level.Error` for unexpected failures and `level.Warn` for expected or recoverable conditions. Always include `"err", err` as the last key-value pair. Use rate-limited loggers (`log.NewRateLimitedLogger`) for errors that can fire at high frequency.
Use `defer` to pair cleanup with acquisition — the cleanup statement should appear immediately after the call that requires it, not at the end of the function.

Close iterators and readers immediately after opening:

```go
iter, err := block.Iterator()
if err != nil {
	return err
}
defer iter.Close()
```

Unlock mutexes immediately after locking:

```go
mu.Lock()
defer mu.Unlock()
```

Cancel contexts immediately after creating them:

```go
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
```

End spans immediately after starting them:

```go
ctx, span := tracer.Start(ctx, "operationName")
defer span.End()
```

Stop tickers and timers immediately after creating them:

```go
ticker := time.NewTicker(interval)
defer ticker.Stop()
```

Use anonymous defer functions for conditional or error-sensitive cleanup:
When cleanup logic depends on the function's return value or needs to check errors, use an anonymous function. A common pattern is to record span errors only on failure:
```go
var err error
defer func() {
	if err != nil {
		span.RecordError(err)
	}
}()
```

Another common use is recovering from panics in long-running goroutines:

```go
defer func() {
	if r := recover(); r != nil {
		level.Error(logger).Log("msg", "recovered from panic", "err", r, "stack", string(debug.Stack()))
		err = errors.New("recovered from panic")
	}
}()
```

And for graceful shutdown — stopping subservices only when startup fails mid-way:

```go
defer func() {
	if err != nil && w.subservices != nil {
		if stopErr := services.StopManagerAndAwaitStopped(context.Background(), w.subservices); stopErr != nil {
			level.Error(logger).Log("msg", "failed to stop dependencies", "err", stopErr)
		}
	}
}()
```

Verify interface implementation at compile time at the top of files:

```go
var _ SomeInterface = (*ConcreteType)(nil)
```

Every non-trivial component should emit metrics, logs, and traces. When adding new features, include observability from the start — don't add it as an afterthought.
- Metrics: Tempo is instrumented with Prometheus metrics. It emits RED metrics (rate, errors, duration) for most services and backends. Use `promauto` for automatic registration. Name metrics as `tempo_component_metric_unit`, for example `tempo_distributor_ingester_appends_total`. The relevant dashboards are in the Tempo mixin.
- Logs: Tempo uses the go-kit log library and emits structured logs in `key=value` (logfmt) format. Use level functions from `github.com/go-kit/log/level`, for example `level.Info(logger).Log("msg", "started", "tenant", tenantID)`. Use rate-limited logging for high-frequency events.
- Traces: Tempo uses OpenTelemetry for tracing instrumentation.
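The logfmt format described in the logging bullet renders key-value pairs as `key=value` on a single line. This stdlib-only sketch mimics the shape of that output; the `logfmtLine` helper is purely illustrative (real Tempo code uses go-kit's logger, which also handles quoting and escaping):

```go
package main

import (
	"fmt"
	"strings"
)

// logfmtLine renders alternating key/value arguments as a logfmt-style
// line. Illustrative only: it does not quote values containing spaces.
func logfmtLine(kv ...string) string {
	parts := make([]string, 0, len(kv)/2)
	for i := 0; i+1 < len(kv); i += 2 {
		parts = append(parts, fmt.Sprintf("%s=%s", kv[i], kv[i+1]))
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(logfmtLine("level", "info", "msg", "started", "tenant", "single-tenant"))
	// prints: level=info msg=started tenant=single-tenant
}
```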
We try to ensure that most functionality of Tempo is well tested.
- Unit tests: Test the functionality of code in isolation. Place `*_test.go` files alongside the code they test. Prefer table-driven tests with `t.Run()` subtests for scenarios with multiple inputs.
- Local testing: Use the examples provided with `docker-compose`, `tanka`, or `helm` to set up a local deployment and test newly added functionality.
- Integration tests: Test the ingest and query path end-to-end. These live in the `integration/e2e` folder.

Use the testify library for assertions (`assert` for non-fatal checks, `require` for fatal ones).
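The table-driven shape can be sketched as follows. This stdlib-only version inlines the check so it runs standalone; in a real `*_test.go` file the loop body would be `t.Run(tc.name, func(t *testing.T) { require.Equal(t, tc.want, got) })`, and the `sum` function here is a hypothetical example, not Tempo code:

```go
package main

import "fmt"

// sum is a hypothetical function under test.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	// A table of named cases: each row is one scenario.
	cases := []struct {
		name string
		in   []int
		want int
	}{
		{"empty", nil, 0},
		{"single", []int{5}, 5},
		{"multiple", []int{1, 2, 3}, 6},
	}
	for _, tc := range cases {
		if got := sum(tc.in); got != tc.want {
			panic(fmt.Sprintf("%s: got %d, want %d", tc.name, got, tc.want))
		}
	}
	fmt.Println("all cases passed")
}
```

Adding a scenario is then a one-line change to the table, which is why this pattern is preferred for functions with many input shapes.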
Run tests before submitting:

```
make test
```

For coverage information:

```
make test-with-cover
```

A CI job runs these tests on every PR.
Run linting before submitting your PR:

```
make lint
```

To lint only the changes relative to a base branch (faster for large PRs):

```
make lint base=main
```

Fix formatting issues with:

```
make fmt
```

This requires `gofumpt` and `goimports` to be in your `$PATH`. This project uses gofumpt (a stricter superset of gofmt) for formatting. Configure your editor using the gofumpt documentation, or run `make fmt` before committing.

If you changed any jsonnet or libsonnet files, also run:

```
make jsonnetfmt
```

This requires the `jsonnetfmt` binary in your `$PATH`.

To compile jsonnet files, run:

```
make jsonnet
```

This requires the `jsonnet`, `jsonnet-bundler`, and `tanka` binaries in your `$PATH`.

If you change any `.proto` files, regenerate the Go code:

```
make gen-proto
```

If you change the TraceQL grammar (`.y` files), regenerate the parser:

```
make gen-traceql
```

Generated files (`*.pb.go`, `*.y.go`, `*.gen.go`) are excluded from formatting and linting — don't edit them by hand.
Every PR must have a clear description covering:
- What this PR does: Summarize the change and the motivation behind it.
- Which issue(s) this PR fixes: Use `Fixes #<issue number>` so the issue closes automatically on merge.
Before marking a PR ready for review, confirm:
- Tests updated or added for the changed behavior
- Documentation added or updated (refer to the Documentation section)
- `CHANGELOG.md` updated (refer to the Changelog entries section)
Use a semantic prefix in commit messages:

```
<type>: <short description>
```

Common types:

- `fix:` — bug fix
- `feat:` — new feature
- `enhancement:` — improvement to an existing feature
- `chore:` — maintenance, dependency updates, build changes
- `refactor:` — code restructuring with no behavior change
- `docs:` — documentation only

Keep the subject line concise and lowercase after the prefix. Examples:

```
fix: use counter instead of gauge for compactor deduped spans metric
enhancement: deduplicate spans within traces during block builder
chore(deps): update module google.golang.org/api to v0.267.0
```
All PRs that change behavior (features, enhancements, bug fixes, breaking changes) must include a CHANGELOG.md entry. Dependency-only updates and pure internal refactors do not require one.
Add your entry under `## main / unreleased` in this format:

```
* [TYPE] Short description of the change [#<PR>](https://github.com/grafana/tempo/pull/<PR>) (@your-github-handle)
```

Valid types in order of precedence: `[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`.

Mark breaking changes explicitly:

```
* [CHANGE] **BREAKING CHANGE** Description of what changed and what users must do [#<PR>](https://github.com/grafana/tempo/pull/<PR>) (@your-github-handle)
```
Keep entries ordered as [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX] within each release section.
Rebase your PR on main if it gets out of sync. Don't merge main into your branch.
Anyone can help with Tempo's documentation by writing new content, updating existing content, or creating an issue. Current documentation projects are tracked in GitHub issues. Browsing through issues is a good way to find something to work on.
Tempo documentation is located in the `docs` directory. The `docs` directory has three folders:

- `design-proposals`: Used for project and feature proposals. This content is not published with the product documentation.
- `internal`: Used for internal process-related content, including diagrams.
- `sources`: All of the product documentation resides here.
  - The `helm-charts` folder contains the documentation for the `tempo-distributed` Helm chart, https://grafana.com/docs/helm-charts/tempo-distributed/next/
  - The `tempo` folder contains the product documentation, https://grafana.com/docs/tempo/latest/
Once you know what you would like to write, use the Writer's Toolkit for information on creating good documentation. The toolkit also provides document templates to help get started.
When you create a PR for documentation, add the type/doc label to identify the PR as contributing documentation.
If your content needs to be added to a previous release, use the backport label for the version. When your PR is merged, the backport label triggers an automatic process to create an additional PR to merge the content into the version's branch. Check the PR for content that might not be appropriate for the version. For example, if you fix a broken link on a page and then backport to Tempo 1.5, you would not want any TraceQL information to appear.
To preview the documentation locally, run `make docs` from the root folder of the Tempo repository. This uses the `grafana/docs` image, which internally uses Hugo to generate the static site. The site is available on `localhost:3002/docs/`.

Note: The `make docs` command uses a lot of memory. If it is crashing, make sure to increase the memory allocated to Docker and try again.
Tempo uses a CI action to sync documentation to the Grafana website. The CI is
triggered on every merge to main in the docs subfolder.
The helm-charts folder is published from Tempo's next branch. The Tempo documentation is published from the latest branch.
Using a debugger can be useful to find errors in Tempo code. This example shows how to debug Tempo inside docker-compose.