We're currently evaluating pushing to Docker Hub only when releasing, as
everything else uses GCR. Until then, we lower the concurrency to 4
when pushing there and keep 8 for the others.
Follow-up to https://github.com/sourcegraph/devx-support/issues/1163
## Test plan
CI
Fixes CODY-3085
Fixes CODY-3086
Previously, there was no way for OpenAI clients to list the available
models on Sourcegraph or query metadata about a given model ID ("model
ref" using our internal terminology). This PR fixes that problem AND
additionally adds infrastructure to auto-generate Go models from a
TypeSpec specification.
[TypeSpec](https://typespec.io/) is an IDL to document REST APIs,
created by Microsoft. Historically, the Go code in this repository has
been the single source of truth about what exact JSON structures are
expected in HTTP request/response pairs in our REST endpoints. This new
TypeSpec infrastructure allows us to document these shapes at a higher
abstraction level, which has several benefits including automatic
OpenAPI generation, which we can use to generate docs on
sourcegraph.com/docs or automatically generate client bindings in
TypeScript (among many other use-cases).
I am planning to write an RFC to propose we start using TypeSpec for new
REST endpoints going forward. If the RFC is not approved then we can
just delete the new `tools/typespec_codegen` directory and keep the
generated code in the repo. It won't be a big difference in the end
compared to our current manual approach of writing Go structs for HTTP
APIs.
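For illustration, here is a hypothetical sketch of the kind of Go structs such codegen might emit for OpenAI-style model metadata (the package name and struct layout are my own guesses, not the actual generated output; the JSON field names follow the OpenAI wire format):

```go
package llmapi

// Model mirrors the OpenAI /models object. The JSON field names follow the
// OpenAI wire format; the Go layout is only a guess at the generated shape.
type Model struct {
	ID      string `json:"id"`       // model ref, e.g. "fireworks::unknown::starcoder"
	Object  string `json:"object"`   // always "model"
	Created int64  `json:"created"`  // Unix timestamp
	OwnedBy string `json:"owned_by"` // upstream provider
}

// ModelList mirrors the OpenAI GET /models response envelope.
type ModelList struct {
	Object string  `json:"object"` // always "list"
	Data   []Model `json:"data"`
}
```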
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
See test cases. I additionally wrote a basic Python script with the
official OpenAI client to test that it works with this endpoint. First,
I ran `sg start minimal`. Then I wrote this script:
```py
import os
from openai import OpenAI
from dotenv import load_dotenv
import httpx

load_dotenv()

openai = OpenAI(
    # base_url="https://api.openai.com/v1",
    # api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://sourcegraph.test:3443/api/v1",
    api_key=os.getenv("SRC_ACCESS_TOKEN"),
    http_client=httpx.Client(verify=False),
)


def main():
    response = openai.models.list()
    for model in response.data:
        print(model.id)


if __name__ == "__main__":
    main()
```
Finally, I ran
```
❯ python3 models.py
anthropic::unknown::claude-3-haiku-20240307
anthropic::unknown::claude-3-sonnet-20240229
fireworks::unknown::starcoder
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
* New `GET /.api/llm/models` and `GET /.api/llm/models/{modelId}` REST
API endpoints to list available LLM models on the instance and to get
information about a given model. These endpoints are compatible with the
`/models` and `/models/{modelId}` endpoints from OpenAI.
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
We dove into our Go Postgres driver: when executing a migration, it is
sent as a "simple query". In this case Postgres automatically wraps the
collection of statements in a transaction, unless the query itself
contains transaction statements. That is why our last attempt at
removing the transaction failed.
In this attempt we use COMMIT AND CHAIN after each table alter. This
commits the current transaction and then immediately starts a new one.
From the perspective of the Go driver, it is as if there was only one
transaction. We then switch the migration to using a transaction to
ensure the Go driver cleans up the Postgres connection in case of
failure.
I.e., if a query manually starts a transaction and does not clean it up,
the connection will be marked as broken for the next caller that gets
the connection from the pool. By wrapping in Go's transaction code, the
connection will be properly cleaned up.
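A minimal sketch of the resulting migration pattern (the table names here are illustrative; the real migration touches many more tables):

```go
package migrations

// Illustrative only. Sent as one "simple query", the Go driver still sees
// a single transaction, while COMMIT AND CHAIN commits after each ALTER
// (releasing its table locks) and immediately opens the next transaction.
const exampleMigration = `
ALTER TABLE repo ADD COLUMN tenant_id integer;
COMMIT AND CHAIN;
ALTER TABLE users ADD COLUMN tenant_id integer;
COMMIT AND CHAIN;
ALTER TABLE orgs ADD COLUMN tenant_id integer;
`
```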
Test Plan: All continuous environments have already succeeded or failed
on this migration number. So we will manually run this again against
them with the migrator code to exercise the same code paths. If they
succeed we will keep the code as is; otherwise we will roll back.
Additionally, we did lots of ad-hoc testing to understand the
characteristics of Go and its transaction handling.
Co-authored-by: Erik Seliger <erikseliger@me.com>
Currently, build-tracker keeps track of consecutive build failures
through an in-memory store of failed builds. As this gets deployed more
frequently on MSP, we lose state more frequently, which leads to
incorrect results. Instead, we can use Redis as our external store, as
well as for locking via redsync.
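A minimal sketch of the redsync locking pattern (the redis address and key name are illustrative, not the actual build-tracker configuration):

```go
package main

import (
	"fmt"

	"github.com/go-redsync/redsync/v4"
	"github.com/go-redsync/redsync/v4/redis/goredis/v9"
	goredislib "github.com/redis/go-redis/v9"
)

func main() {
	// Shared redis as the external store; redsync provides the distributed
	// lock so multiple build-tracker replicas don't race on failure state.
	client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
	rs := redsync.New(goredis.NewPool(client))

	// Key name is illustrative.
	mutex := rs.NewMutex("build-tracker:consecutive-failures")
	if err := mutex.Lock(); err != nil {
		panic(err)
	}
	defer mutex.Unlock()

	fmt.Println("lock held: safe to read/update failure state in redis")
}
```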
## Test plan
Unit tests have been updated, but proper testing will require live
traffic
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
We hit a deadlock when deploying this migration to s2. This is because
within the migration's transaction we likely didn't obtain table locks
in the same order as a transaction in our application code.
So this commit introduces a new migration metadata field,
"noTransaction". The documentation for migrator says you should create a
migration per needed transaction. However, this would require us to
create 100s of migrations. We believe the better approach is introducing
this field and barely advertising it.
Reading the code that actually runs migrations, there is no extra logic
outside of `BEGIN; run_migration; COMMIT;`, so this change is safe.
We update the migrations to avoid duplicating the function name we
introduce, in case something goes wrong (now that the transaction could
leak the function name).
Test Plan: The actual migrations are tested by go test. I added a test
assertion that we don't call Transact, but to be honest that is super
sketchy. However, we couldn't find any test fixtures that actually run
against the DB, so that would require a much deeper investment than this
simple code change warrants.
Co-authored-by: Erik Seliger <erikseliger@me.com>
I was tired of seeing this warning when running gazelle:
gazelle: finding module path for import github.com/go-critic/go-critic/framework/linter: go: downloading github.com/go-critic/go-critic v0.11.4
go: module github.com/go-critic/go-critic@upgrade found (v0.11.4), but does not contain package github.com/go-critic/go-critic/framework/linter
This updates go-critic to the latest version, which makes the warning go
away since it now uses the new package path.
Test Plan: gazelle is happy and CI is happy
The backcompat test suite works by checking out the code at the previous
minor release, injecting the database schema from `HEAD`, and running
all Go tests. This ensures that the old code can run against the new
schema and is thus backward compatible.
We forgot to update this last time, which is why I'm bumping by two
tags here.
## Test plan
CI
This reverts commit 43e06cef0c.
This migration deadlocked on S2; we didn't expect to have processes
taking table locks, but apparently we do. Reverting for now; we will
manually fix up S2 once the build goes through. We probably need to make
this one migration per table :rip:
## Test plan
Revert.
Previously, `sg` would give no notice that an auto-update of itself was
happening in the background, because background output was buffered.
This was confusing when an invocation hung or didn't terminate as
quickly as expected because the update process was still in progress.
Instead, we should print at least certain output from the process
immediately.
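A minimal sketch of the idea (not the actual `sg` code; the command is a hypothetical stand-in for the updater): attach the child process's output directly to the terminal instead of buffering it.

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Hypothetical stand-in for the background auto-update process.
	cmd := exec.Command("sh", "-c", `echo "sg is auto-updating in the background..."; sleep 2`)
	// Forward output immediately instead of buffering it until exit, so a
	// slow invocation is visibly explained by the in-progress update.
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```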
## Test plan
`go run ./dev/sg -- <some command>` with a `time.Sleep`
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Fixes CODY-3081
This is the first PR for the project [⏭️ Launch Cody API
Experimental](https://linear.app/sourcegraph/project/launch-cody-api-experimental-8fd5ec338bf4),
which falls under the umbrella of moving Cody's brains to the cloud.
Previously, there was no publicly available REST API for our customers
to interact with Cody. This is a frequently requested feature from
customers and prospects.
This PR adds a new `POST /api/v1/chat/completions` endpoint, which
should be compatible with existing OpenAI clients. The OpenAI API format
is increasingly becoming an industry standard so this seems like a good
first step towards exposing a stable publicly facing API for our
customers.
The goal is to add more Cody-specific APIs in the coming weeks to send
chat messages and reference context.
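For illustration, a minimal Go sketch of what a request against the new endpoint could look like (the instance URL is a placeholder, and the request body simply follows the OpenAI chat completions format with a model ref of the kind the models endpoint returns):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// OpenAI-style chat completion request; the model ref matches the
	// format returned by the models endpoint.
	body := []byte(`{
		"model": "anthropic::unknown::claude-3-sonnet-20240229",
		"messages": [{"role": "user", "content": "Hello!"}]
	}`)

	// Hypothetical instance URL.
	req, err := http.NewRequest(http.MethodPost,
		"https://sourcegraph.example.com/api/v1/chat/completions",
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "token "+os.Getenv("SRC_ACCESS_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```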
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
See added test cases.
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
* API: new publicly available `/api/v1/chat/completions` REST endpoint
that is compatible with OpenAI clients with some restrictions. The detailed list of restrictions will eventually be documented on sourcegraph.com/docs
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
We introduce the tenant_id column to every table without any magic
enforcement or magic default value. Instead, we default to NULL and plan
to have an out-of-band migration. From our testing this change is
instant, since it is only a table metadata change. According to the
documentation, this sort of ALTER TABLE is instant since Postgres 11.
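The per-table statement, sketched with a hypothetical table name:

```go
package migrations

// Hypothetical table name; the real migration repeats this for every table.
// With a NULL default and no backfill, the ALTER is a pure metadata change
// and therefore effectively instant.
const addTenantIDColumn = `ALTER TABLE repo ADD COLUMN tenant_id integer DEFAULT NULL;`
```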
Test Plan: ran this migration against a clone of the s2 database.
---------
Co-authored-by: Keegan Carruthers-Smith <keegan.csmith@gmail.com>
Give a more descriptive error when we are unable to find a merge base
and can't find the files that have changed.
Closes DINF-162
## Test plan
1. `git fetch -v --prune --depth=100 -- origin
ccbaba24b72f3c6f4524b3f560ca839143ea463b`
2. `git merge-base HEAD origin/main` --- you'll get nothing since there
is no merge base in the current history
Increase the repo history
1. `git fetch --unshallow`
2. `git merge-base HEAD origin/main`
```
0a6e509af3
```
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Co-authored-by: Bolaji Olajide <bolaji.olajide@sourcegraph.com>
Now that https://github.com/sourcegraph/sourcegraph/pull/64339 is
merged, we can tell users more about what to expect with `sg start
single-program-experimental-blame-sqs`. And as it's been in flight for a
while now, it's safe to say that it's time to give it a shorter name
😊.
So it's been renamed from `single-program-experimental-blame-sqs` to
`minimal`. And to ensure nobody gets confused, a `deprecated`
attribute has been added to the command sets, which is used here to
indicate that the new alternative is `sg start`.
❓ Thoughts about `sg start minimal`? `sg start single` perhaps?
Running the old commandset: (ignore the yellow message, that's just a
local warning from `sg`)

Running the new commandset, with the preamble explaining what to expect:

## Test plan
Locally tested.
There currently isn't a 'one step' way to start a local Sourcegraph
instance with the SvelteKit app.
This commit adds `sg start enterprise-sveltekit` to fix that.
The changes in https://github.com/sourcegraph/sourcegraph/pull/64272
allow us to run the vite dev server in front of the Sourcegraph
instance, instead of building the assets and having them served by
frontend.
This gives us the benefit of hot module reloading and in general seems
to be a less fragile approach.
It's basically the same as what we do with the React app in development
mode.
## Test plan
`sg start enterprise-sveltekit` starts the vite dev server as well as
the sourcegraph instance. Navigating to
`https://sourcegraph.test:3443/search` opens the Svelte app (when
enabled and logged in). Making a change in a source file updates the web
page immediately.
`sg start web-sveltekit-standalone` still works
To prevent cross-cmd imports in the future, moving the backend package into internal.
Test plan: Just moved a package around, Go compiler doesn't complain and CI still passes.
This should resolve the Workflows delivery failure seen in
https://buildkite.com/sourcegraph/sourcegraph/builds/286482#01912eb8-a1a9-48c6-81b0-4f5a09448bbb
after landing the rules_oci 2 upgrade:
```
· Error: failed to compute runfiles hash for manifest: error concurrently hashing file /mnt/ephemeral/output/sourcegraph/__main__/execroot/__main__/bazel-out/k8-opt-ST-d57f47055a04/bin/dev/build-tracker/image_underlying/blobs/sha256/104f4630c01017c52e968bfe039f6eb7622ef1ad9d44d94d784cc9c39992961b: failed to open file /mnt/ephemeral/output/sourcegraph/__main__/execroot/__main__/bazel-out/k8-opt-ST-d57f47055a04/bin/dev/build-tracker/image_underlying/blobs/sha256/104f4630c01017c52e968bfe039f6eb7622ef1ad9d44d94d784cc9c39992961b for hashing: open /mnt/ephemeral/output/sourcegraph/__main__/execroot/__main__/bazel-out/k8-opt-ST-d57f47055a04/bin/dev/build-tracker/image_underlying/blobs/sha256/104f4630c01017c52e968bfe039f6eb7622ef1ad9d44d94d784cc9c39992961b: no such file or directory
```
The issue was that the runfiles of the underlying oci_image
(`image_underlying` target) were not being forwarded through the custom
`oci_image_cross` rule, and therefore were not being built or fetched
from the remote cache and laid out on disk for the delivery step.
In this repo, the `oci_image` rule is a macro that creates an underlying
`oci_image`, with the main target being an `oci_image_cross`:
```
# Apply a transition on oci_image targets and their deps to apply a transition on platforms
# to build binaries for Linux when building on MacOS.
def oci_image(name, **kwargs):
    _oci_image(
        name = name + "_underlying",
        tars = kwargs.pop("tars", []) + ["//internal/version:stamps"],
        **kwargs
    )

    oci_image_cross(
        name = name,
        image = ":" + name + "_underlying",
        platform = select({
            "@platforms//os:macos": Label("@zig_sdk//platform:linux_amd64"),
            "//conditions:default": Label("@platforms//host"),
        }),
        visibility = kwargs.pop("visibility", ["//visibility:public"]),
    )
```
## Test plan
CI
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Second attempt at #63111, a follow-up to
https://github.com/sourcegraph/sourcegraph/pull/63085
rules_oci 2.0 brings a lot of performance improvements around oci_image
and oci_pull, which will benefit Sourcegraph. It will also make RBE
faster and put less load on the remote cache.
However, 2.0 makes some breaking changes:
- oci_tarball's default output is no longer a tarball
- oci_image no longer compresses uncompressed layers; somebody has to
make sure all `pkg_tar` targets have a `compression` attribute set to
compress them beforehand
- there is no curl fallback, but this is fine for Sourcegraph as it
already uses Bazel 7.1
I checked all targets that use oci_tarball as much as I could to make
sure nothing depends on its default tarball output. There was one target
which used the default output; I put a TODO for somebody else (somebody
who is more on top of the repo) to tackle **later**.
## Test plan
CI. Also run delivery on this PR (don't land those changes)
---------
Co-authored-by: Noah Santschi-Cooney <noah@santschi-cooney.ch>
Closes DINF-89
Gazelle sometimes has trouble processing glob expressions, and this
isn't reported as a failure, even though it ultimately results in the
`BUILD.bazel` not being correctly updated.
## Test plan
* Manual testing
In `client/web/BUILD.bazel`, add a new `src` to the `web_lib` ts_project
target that includes a glob pattern.
```
...
ts_project(
    name = "web_lib",
    srcs = glob(["!src/playwright/*.spec.ts"]) + [
        "src/Index.tsx",
        "src/LegacyLayout.tsx",
        "src/LegacyRouteContext.tsx",
        "src/LegacySourcegraphWebApp.tsx",
        "src/PageError.tsx",
        "src/SearchQueryStateObserver.tsx",
        "src/SourcegraphWebApp.tsx",
        ...
```
When you run `go run ./dev/sg bazel configure`, the command should fail
with an error message instead of returning exit 0.
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
---------
Co-authored-by: Jean-Hadrien Chabran <jean-hadrien.chabran@sourcegraph.com>
Instead of re-generating and re-compiling the mocks 3 times,
consolidate them into a single storemocks package.
This should also provide better code navigation in the editor.
This change removes the backend smart search logic. After this, searches
with smart search enabled (`sm=1`) will be executed in the default
'precise' mode (`sm=0`). For old searches that use `sm=1` and
`patterntype=standard`, it's possible that they will now return no
results.
Looking at telemetry, only 0.1% of searches on dot com trigger any smart
search rule. So this change should only affect a small percentage of
usage. To mitigate the impact on these rare cases, this PR adds an alert
whenever there are no results and smart search is enabled, suggesting
users switch to keyword search. (This will help in the majority of
cases, since the most frequent smart search rule rewrites literal
queries to use 'AND' between terms).
Closes SPLF-92
1. Make the `sg cloud eph` instance status Reason a simple string, as
CloudAPI will take over and return the Reason with a job URL; no need to
parse Reason anymore.
```sh
go run ./dev/sg cloud eph status --name ff-eph56
⚠️ Running sg with a dev build, following flags have different default value unless explictly set: skip-auto-update, disable-analytics
✅ Ephemeral instance "ff-eph56" status retrieved
Ephemeral instance details:
Name Expires In Instance status Details
ff-eph56 4h37m4.79031s unspecified creation task is already running: https://console.cloud.google.com/workflows/workflow/us-central1/create-instance-c31bd1a4ea84/executions?project=michael-test-03
```
2. Allow re-running create if CloudAPI returns a status with a Reason:
it means the instance is not fully created yet, so the user might retry
create. CloudAPI will ensure that more than one create is not running at
the same time.
3. Updated printers with a GOTO action for each instance's details:
```sh
go run ./dev/sg cloud eph list --all
⚠️ Running sg with a dev build, following flags have different default value unless explictly set: skip-auto-update, disable-analytics
☁️ Fetched 10 instances
Name Expires In Instance status Details
andre-eph-1 30h10m42.989163s unspecified invoke: sg cloud eph status --name andre-eph-1
ff-eph56 4h34m43.989154s unspecified invoke: sg cloud eph status --name ff-eph56
```
## Test plan
Unit tests simplified.
E2e against old and new CloudAPI.
---------
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
Some commands, like the
[`batcheshelper-builder`](https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/sg.config.yaml?L821),
aren't long-running commands.
This command is used to build and load an image into Docker. The `cmd`
section returns an `exit 0`. This behavior, combined with
`continueWatchOnExit`, results in an infinite loop where the process is
continually restarted, because `sg` doesn't know that the process has
finished executing and isn't a long-running process.
An example of the behavior is shown below: running `sg start batches`
results in the `batcheshelper-builder` command being continually
restarted.
https://github.com/user-attachments/assets/e7a027a1-6f93-403f-9240-6a791255fba9
The fix is quite simple: we return an empty receiver channel when the
process is done executing, so that `sg` knows it's done and doesn't
restart the command unless there's a change.
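A sketch of the pattern (names are made up; not the actual `sg` implementation):

```go
package main

import "fmt"

// watchDone is what a finished, non-long-running command could hand back to
// the watcher: a channel that never fires, so the restart loop blocks until
// a real file change occurs instead of spinning on restarts.
func watchDone() <-chan struct{} {
	return make(chan struct{}) // never written to, never closed
}

func main() {
	select {
	case <-watchDone():
		fmt.Println("unreachable: the command is done, nothing to restart")
	default:
		fmt.Println("watcher keeps waiting for file changes")
	}
}
```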
## Test plan
* Manual testing with `go run ./dev/sg start batches` doesn't result in
an infinite loop anymore.
* Add unit tests
## Changelog
Triggering cloud ephemeral builds on main-dry-run branches leads to
unexpected results, due to how the eventual pipeline is generated.
Closes DINF-165
### Generated pipeline?
The pipeline gets generated based on _what matches first_. We detect
cloud ephemeral builds if there is an environment variable
`CLOUD_EPHEMERAL=true`. We detect main-dry-runs if the branch prefix is
`main-dry-run`...
Now, because the `main-dry-run` match happens before the cloud ephemeral
match, a Cloud Ephemeral build on a main dry run gets detected as _just_
a `main-dry-run` build.
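Conceptually, the detection is a first-match loop like this simplified sketch (types and names are illustrative, not the actual pipeline generator):

```go
package main

import (
	"fmt"
	"strings"
)

// runType pairs a name with its match predicate; illustrative only.
type runType struct {
	name    string
	matches func(branch string, env map[string]string) bool
}

func main() {
	// Order matters: the first predicate that fires wins, so a cloud
	// ephemeral build on a main-dry-run branch is classified as a plain
	// main-dry-run build.
	runTypes := []runType{
		{"main-dry-run", func(branch string, _ map[string]string) bool {
			return strings.HasPrefix(branch, "main-dry-run")
		}},
		{"cloud-ephemeral", func(_ string, env map[string]string) bool {
			return env["CLOUD_EPHEMERAL"] == "true"
		}},
	}

	branch := "main-dry-run/lol"
	env := map[string]string{"CLOUD_EPHEMERAL": "true"}
	for _, rt := range runTypes {
		if rt.matches(branch, env) {
			fmt.Println("detected run type:", rt.name) // prints "main-dry-run"
			break
		}
	}
}
```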
#### Alternatives
Sure, we could just move the Cloud Ephemeral match earlier, but it's
difficult to say what else might break. We could also force the runtype
to always be `CloudEphemeral` if we're on the Cloud Ephemeral pipeline,
but I don't want to make the pipeline a special snowflake and detect the
current pipeline just for Cloud Ephemeral.
#### Why deny it the deployment?
Ultimately, `main-dry-run` builds are meant for ... `main`, not cloud
ephemeral. People can just switch to their original branch and run `sg
cloud eph deploy`, and the branch will be deployed with no issue.
## Test plan
Executed locally
```
./sg-test cloud eph build
⚠️ Running sg with a dev build, following flags have different default value unless explictly set: skip-auto-update, disable-analytics
⚠️ Triggering Cloud Ephemeral builds from "main-dry-run" branches are not supported. Try renaming the branch to not have the "main-dry-run" prefix as it complicates the eventual pipeline that gets generated
To rename a branch and launch a cloud ephemeral deployment do:
1. git branch -m "main-dry-run/lol" <my-new-name>
2. git push --set-upstream origin <my-new-name>
3. trigger the build by running sg cloud ephemeral build
FAQ https://www.notion.so/sourcegraph/How-to-deploy-my-branch-on-an-ephemeral-Cloud-instance-dac45846ca2a4e018c802aba37cf6465?pvs=4#20cb92ae27464891a9d03650b4d67cee❌ failed to trigger epehemeral build for branch: main-dry-run branch is not supported
```
## Changelog
* sg: deny deployment of `main-dry-run` branches on Cloud Ephemeral.
Typo when copying the flags from somewhere else: names mismatched and it
didn't upload 🤦
Also adding in proper BEP emitting; it appears the build_event_log.bin
that was being output was actually for the honeyvent Bazel invocation.
## Test plan
CI main dry-run
https://buildkite.com/sourcegraph/sourcegraph/builds/285375
## Changelog
For more insight into what's happening during those damn build commands.
Also removes emitting to Honeycomb; that was my experiment, which has
concluded.
## Test plan
CI
## Changelog
This patch wires up the newly changed APIs in #64118
to the GraphQL API, enabling precise support in the
usagesForSymbol API. It also handles pagination.
Fixes https://linear.app/sourcegraph/issue/GRAPH-573
## Test plan
Manually tested. Will add automated tests in follow-up PR.
## Changelog
- Adds support for precise code navigation to the experimental
`usagesForSymbol` API, which can be used to implement a
reference panel or similar functionality.
They should not be used outside of cmd/frontend, so we're making it a
frontend-internal package.
While doing that, I realized that there is a coupling dependency between
authz providers and auth (which is authN) providers: GitLab code host
connections can do authz mapping via the usernames of another OIDC or
SAML auth provider
(https://sourcegraph.com/docs/admin/code_hosts/gitlab#administrator-sudo-level-access-token).
It turns out this feature hasn't worked for at least several releases,
because we don't actually instantiate auth providers outside of
`cmd/frontend`, and thus the mapping will never find anything (auth
providers don't explode when queried before init, unlike authz).
This only became clear now as I moved this code and the dependency
graph was broken, so that's a nice property of these cleanups, I guess 😬
Since it hasn't worked for quite some time, I opted to remove it and
added a changelog entry about it. Not sure if that is sufficient; I
raised a thread here:
https://sourcegraph.slack.com/archives/C03K05FCRFH/p1721848436473209.
Keeping it would have prevented this change and needed more refactoring,
as unfortunately we cannot map an auth provider by the conf type to a
record in the `user_external_accounts` table and need to actually
instantiate it.
Test plan: Compiler doesn't complain, tests still pass.
## Changelog
GitLab code host connections were [able to sync permissions by mapping
Sourcegraph users to GitLab users via the username property of an
external OIDC or SAML
provider](https://sourcegraph.com/docs/admin/code_hosts/gitlab#administrator-sudo-level-access-token)
that is shared across Sourcegraph and GitLab. This integration stopped
working a long time ago, and it has been removed in this release.
Previously, we would store authz providers globally and refresh them
every now and then.
However, creating the providers is fairly cheap (1.3ms in a local
trace), so we don't need to keep them in memory or remember to start the
watcher routine.
This will help for multi-tenant Sourcegraph in that providers are now
computed for the context in question, and not held globally. Keeping
potentially 100k authz providers in memory will not scale.
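A simplified sketch of the shift (types and names here are illustrative, not the actual code):

```go
package authz

import "context"

// Provider stands in for an authz provider; illustrative only.
type Provider interface{ ServiceID() string }

// Before (sketch): a package-global refreshed by a watcher goroutine that
// every service had to remember to start.
var globalProviders []Provider

// After (sketch): construct providers on demand for the config relevant to
// the current context. At ~1.3ms per construction this is cheap enough to
// do per call, and nothing accumulates in memory across tenants.
func ProvidersFromConfig(ctx context.Context, cfg map[string]any) []Provider {
	// ...build one provider per configured code host connection...
	return nil
}
```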
Test plan: Still works, local Jaeger traces are quite acceptable.
This is a register call that is easy to forget. When forgotten, all queries against the repo store will block forever.
In addition, this adds a hard dependency on conf to every service's startup, plus a busy loop. With multi-tenant, this will not work great because authz providers would be a global, and we instead want most things to be ephemeral so they're per-provider. This is a step toward that, but doesn't yet remove the providers global variable.
Good news, it turns out that we don't actually need to register the providers in every service! The reason they were required was to check if zero providers are configured, or if authzbypass mode is enabled.
Authz bypass mode is usually ON, except when there are problems with the authz providers, meaning some authz providers might not be able to sync permissions. Bypassing of permissions is only ever happening if there are ALSO zero providers configured.
So this is basically an optimization for the case where an instance has zero authz configured so that the SQL queries are a bit simpler. This also helps in tests because with bypass mode on and no providers configured, authz enforcement is effectively off in the repo store.
This makes it so that in tests we need to do slightly more work, but also makes for a more realistic test vs at runtime setup. Also, it's highly recommended to use mocks for DB wherever possible in more high-level components to keep tests fast.
To never have a scenario where we accidentally mess up here and enable bypass mode erroneously, this PR drops that entirely. Authz is always enforced, but when a code host connection is unrestricted (i.e., will not spawn a provider) the repos are still visible, so this should be no change over before.
## Test plan
The stack starts and works, and all CI tests are still passing. Code review should help as well.
https://linear.app/sourcegraph/issue/DINF-111/rework-how-we-inject-version-in-our-artifacts
Pros:
- Saves having to rebuild the 523 Go packages (`bazel query
'kind("go_library", rdeps(//..., //internal/version))' | wc -l`) when
stamp variables cause a rebuild
- Cuts out GoLink action time when the stamp changes but code is cached
Cons:
- Binaries themselves are no longer stamped, only knowing their version
info within the context of the Docker image
- A tad of extra complexity in internal/version/version.go to handle
this new divergence (sketched below)
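A hypothetical sketch of how that divergence could be handled in internal/version (the env var name and fallback order are assumptions, not the actual code):

```go
package version

import "os"

// version would be injected via -ldflags "-X ..." when a binary is stamped
// directly; with this change most binaries are not, so it stays at "dev".
var version = "dev"

// Version resolves the build version: prefer a link-time stamp if present,
// otherwise fall back to data baked into the Docker image (hypothetical
// env var), otherwise report a dev build.
func Version() string {
	if version != "dev" {
		return version
	}
	if v := os.Getenv("SOURCEGRAPH_IMAGE_VERSION"); v != "" {
		return v
	}
	return "dev"
}
```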
---
Before:
```
$ bazel aquery --output=summary --include_commandline=false --include_artifacts=false --include_aspects=false --stamp 'inputs(".*volatile-status\.txt", //...)'
Action: 1
Genrule: 2
Rustc: 3
ConvertStatusToJson: 88
GoLink: 383
```
After:
```
$ bazel aquery --output=summary --include_commandline=false --include_artifacts=false --include_aspects=false --stamp 'inputs(".*volatile-status\.txt", //...)'
Mnemonics:
Genrule: 2
Action: 3
Rustc: 3
ConvertStatusToJson: 86
```
## Test plan
Lots of building & rebuilding with stamp flags, comparing execution logs
& times
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
This PR brings https://github.com/sourcegraph/sgtail back into `sg`,
plus a few adjustments to make it easier to use. I'll archive that repo
once this PR lands.
@camdencheek, mentioning you here as you've been the most recent beta
tester; this is more an FYI than a request for review, though one is
welcome if you want to spend a bit of time reading this.
Closes DINF-155
## Test plan
Locally tested + new unit test + CI
## Changelog
- Adds a new `sg tail` command that provides a better UI to tail and
filter log messages from `sg start --tail`.