Second attempt of #63111, a follow-up to
https://github.com/sourcegraph/sourcegraph/pull/63085
rules_oci 2.0 brings significant performance improvements to oci_image
and oci_pull, which will benefit Sourcegraph. It will also make RBE
faster and reduce load on the remote cache.
However, 2.0 introduces some breaking changes:
- oci_tarball's default output is no longer a tarball
- oci_image no longer compresses uncompressed layers; somebody has to
make sure all `pkg_tar` targets have a `compression` attribute set so
the layers are compressed beforehand.
- there is no curl fallback, but this is fine for Sourcegraph as it
already uses Bazel 7.1.
I checked all targets that use oci_tarball, as much as I could, to make
sure nothing depends on the default tarball output of oci_tarball. There
was one target that used the default output; I left a TODO for somebody
else (somebody who is more on top of the repo) to tackle **later**.
## Test plan
CI. Also run delivery on this PR (don't land those changes)
---------
Co-authored-by: Noah Santschi-Cooney <noah@santschi-cooney.ch>
* The frontend no longer embeds the assets; instead it reads them from
the local filesystem.
* Generally, the frontend and server cmd targets use the
`//client/web/dist:copy_bundle` target to create a tarball for the
oci_image. `copy_bundle` puts all the assets at `assets-dist`.
* For integration tests, frontend and server have `no_client_bundle`
target variants. For these oci_images, instead of the `tar_bundle`
(which is just a tar'd `copy_bundle`) we use `tar_dummy_manifest`, a tar
that contains only a dummy manifest.
* By default we expect assets to be at `/assets-dist`
* Renamed DevProvider to DirProvider
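For illustration, here is a minimal sketch of what a directory-backed provider can look like; the constructor, `Assets`, and `Handler` names are assumptions for this sketch, not the repo's actual interface:

```go
package assets

import (
	"io/fs"
	"net/http"
	"os"
)

// DirProvider serves client assets from a directory on the local
// filesystem (e.g. /assets-dist inside the container image) instead of
// from a go:embed bundle compiled into the binary.
type DirProvider struct {
	root string
}

func NewDirProvider(root string) *DirProvider {
	return &DirProvider{root: root}
}

// Assets exposes the asset tree as an fs.FS rooted at the directory.
func (p *DirProvider) Assets() fs.FS {
	return os.DirFS(p.root)
}

// Handler serves the assets over HTTP, mirroring what the embed-based
// provider used to do.
func (p *DirProvider) Handler() http.Handler {
	return http.FileServer(http.FS(p.Assets()))
}
```

Because the provider only touches the filesystem at request time, building the Go binary no longer requires the assets to exist at all.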
## Why
By 'breaking' the frontend's dependency on the assets being built, we
essentially stop a common cache invalidation scenario:
- someone makes a frontend change = assets need to be rebuilt, and
because they were embedded, the frontend binary had to be rebuilt too
By decoupling assets from the frontend binary and moving the packing of
assets to the building of the frontend and server images we will have a
better cache hit rate (theoretically).
Thus with this change:
* if client/web changes and nothing else, only the assets have to be
rebuilt and cached versions of the backend will be used
* if only backend code has changed, cached assets will be used
Closes DINF-115
## Test plan
✅ sg start - web app opens and can search. Local dev assets get loaded
✅ sg test bazel-integration-test - server image gets built with **only**
the dummy web manifest. Also verified by running `sg bazel run
//cmd/server:no_client_bundle.image` and then inspecting the container
✅ sg test bazel-e2e - server image gets built with bundle and all tests
pass
✅ [main dry
run](https://buildkite.com/sourcegraph/sourcegraph/builds/284042#0190e54c-14d9-419e-95ca-1198dc682048)
## Changelog
- frontend: assets are no longer bundled with the binary through
`go:embed`. Instead, assets are now added to the frontend container at
`assets-dist`.
https://linear.app/sourcegraph/issue/DINF-111/rework-how-we-inject-version-in-our-artifacts
Pros:
- Saves rebuilding 523 Go packages (`bazel query 'kind("go_library", rdeps(//...,
//internal/version))' | wc -l` == 523) when stamp variables cause a
rebuild
- Cuts out GoLink action time when the stamp changes but code is cached
Cons:
- Binaries themselves are no longer stamped; they only know their
version info within the context of the Docker image
- A tad of extra complexity in internal/version/version.go to handle
this new divergence
---
Before:
```
$ bazel aquery --output=summary --include_commandline=false --include_artifacts=false --include_aspects=false --stamp 'inputs(".*volatile-status\.txt", //...)'
Action: 1
Genrule: 2
Rustc: 3
ConvertStatusToJson: 88
GoLink: 383
```
After:
```
$ bazel aquery --output=summary --include_commandline=false --include_artifacts=false --include_aspects=false --stamp 'inputs(".*volatile-status\.txt", //...)'
Mnemonics:
Genrule: 2
Action: 3
Rustc: 3
ConvertStatusToJson: 86
```
## Test plan
Lots of building & rebuilding with stamp flags, comparing execution logs
& times
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Currently, all backend integration tests transitively depend on the client bundle. This results in rebuilds of the closure, and a (likely) cache miss on the test, when modifying any client-side files.
Given that the client bundle isn't needed for these tests, we can create targets that don't include the client bundle in their transitive closure, which should in theory improve the cache hit rate for backend integration tests by not having client-side changes invalidate them. This would also be beneficial locally, to keep frontend rebuilds down.
To do this, we still need to create a `web.manifest.json` file, due to an unfortunate requirement that this file exist as part of initializing the Sourcegraph instance. For this I create an empty JSON file, `select` it instead of the client bundle target in client/web/dist/BUILD.bazel based on a `//:integration_testing_enabled` config setting, and add a `go_binary`-wrapping Bazel rule + macro, `go_binary_nobundle`, that automatically applies a transition setting this config to true. That rule is used for the specific `//cmd/{server,frontend}:{server,frontend}_nobundle` binary rules (along with the relevant `oci_{image,tarball}` etc. rules to consume them).
## Test plan
- Integration tests in CI still work
- `bazel cquery 'kind("js_library", deps(//cmd/frontend:image_nobundle))'`, `bazel cquery 'kind("js_library", deps(//cmd/server:image_nobundle))'`, `bazel cquery 'kind("js_library", deps(//testing:backend_integration_test))'` etc. all return the empty set
- Release test with marker in web bundle to ensure released images contain the web bundle via `sg release run test --version 5.4.2` (commenting out other tests for brevity)
This wrapper existed to track performance issues, but we have since added better logging options for that: storing the last output and the ability to log to a file.
The wrapper adds another layer of indirection that could cause trouble, so we simplify here and call p4-fusion directly, like we do in the p4-fusion CI pipeline.
Test plan:
The p4 integration test still passes.
* wip
* gitserver (mostly) wolfi 4 bazel
* the big heck of all things
* Add rules_apko lock translation rules to WORKSPACE
* Call apko_repositories() more
* fix rules_apko to handle our shorter repo urls
* fix workspace from rebase, and missing locks
* visibility on wolfi_base_image
* hand-fix a lock coz apko lock is 🅱️roken
* remove chainguard repo+keyring from base
* update locks
* add chainguard repo+keychain to single server manifest
* unrelated fixes, server+grafana still h*cked
* fix postgres-exporter
* the big fix
* aws lib got bumped?
* downgrade sso-oidc? idk
* ignore wolfi locks from prettier
* dynamically do the locks with a reporule
* document and make nice :nails:
* bazel run @rules_apko//apko patch
* Fix .typo.typo
* Update tooling for end-to-end Bazel images (#61106)
* Update sg wolfi image to build using Bazel
* bazel run @rules_apko//apko patch
* Fix .typo.typo
* Add update-images and implement apko YAML change monitoring
* Use bazel apko and add support for additional repos
* Refactor sg wolfi
* Rework wolfi base image auto-update pipeline
* sg bazel configure
* [rough] Add --check flag to sg wolfi lock
* Refactor sg wolfi lock --check
* Simplify check and update apko lock hash operations
* Fix resolveImagePath when running in bazel
* Fixup logic error in CheckApkoLockHashes
* Tweak DoBaseImageBuild output
* Remove debug output
* Fix sg wolfi lock --check behaviour for all images
* Replace base image build step with apko lock --check
* Remove debug line
* Minor fixups for CI step
* Wrap with AnnotatedCmd
* Fixup annotation
* Update apko lockfiles
* Allow additional repos to be passed
* Update build-base-image.sh with bazel + add back to pipeline
* Ensure that modified base images are rebuilt
* Solve bazelception
* Remove timestamp for bit-level reproducibility
* Skip local keygen when running on buildkite
* Add workaround for lack of local repo support in rules_apko
* Run apkoOps first as it's quick and might fail
* Remove blocking allBaseImagesBuilt step
* Remove unused prometheus-gcp image
* Add special cases to resolveImagePath
* Cleanly handle case where no bazel build path exists
This could happen in cases where a base image is only used outside of sourcegraph/sourcegraph,
or if you've added a new base image config but haven't added the associated Bazel scaffolding
* Add debugging around failing docker builds
* More debugging
* Normalise apko_lockfile to match repo.bzl
* Fixup apko docker call
* Try passing imageconfigdir differently to docker
* Run ls in different container
* Soft-fail when using legacy build in Buildkite
* Add missing include
* Workaround for building sourcegraph and sourcegraph-dev
* Add postgresql-client package to server
This contains createdb, which was recently moved from postgresql
* Inflate postgres-12-codeinsights image to avoid rules_apko errors
* Remove update line from yaml files
* Fix issue caused by moving base sourcegraph image
* Remove apk-tools from server
* Update lockfiles
* Address review feedback
* Remove debug lines
* fix unbound var
---------
Co-authored-by: Noah Santschi-Cooney <noah@santschi-cooney.ch>
* go mod tidy + gazelle-update-repos after merging main
* Use aspect bazel cache
* Use Aspect bazel caching when calling bazel in bash and sg
* Append annotation
* Run apko lock on aspect agent
* Remove base image builds
Discussion in https://sourcegraph.slack.com/archives/C05EVRLQEUR/p1712307465660509
* Remove unused functionality
* Update BaseImageConfig comments
* Rewrite wolfi-images/README.md
* Add .apko/range.sh to .gitattributes
* Remove "wolfi" from :base_image and :base_tarball targets
* remove allowlist extras from debugging
* Tweak user instructions around package testing
* Add agent healthcheck to buildkite scripts
* prettier
* sg bazel configure
* bazel run //:gazelle-update-repos
---------
Co-authored-by: Noah Santschi-Cooney <noah@santschi-cooney.ch>
Co-authored-by: Noah S-C <noah@sourcegraph.com>
Now that we've updated to Go 1.22, we don't need to copy loop variables before
using them in goroutines.
I found these using the regex searches `go func\(\w+` and `\.Go(func\(\w+`. I
also simplified some non-loop vars when it made sense.
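To make the pattern concrete, here is a hedged before/after sketch (the `items` loop is invented for illustration): prior to Go 1.22 the loop variable was shared across iterations, so it had to be copied before a goroutine captured it; since Go 1.22 each iteration gets its own variable.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	items := []string{"a", "b", "c"}

	for _, item := range items {
		item := item // redundant as of Go 1.22; this PR deletes such copies
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(item)
		}()
	}
	wg.Wait()
}
```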
## Test plan
Straight refactor, covered by existing tests
I noticed a comment that we remap the binary name in Bazel for server.
We can just update server and make it consistent with everything else.
Test Plan: CI
This change mitigates excessive remote cache network traffic in the event that oci_tarball targets are cache-busted en masse.
Only //cmd/server:image_tarball and //docker-images/executor-vm:image_tarball are used as inputs to downstream targets, so only
these two will be built and remote-cached on CI.
We have a number of docs links in the product that point to the old doc site.
Method:
- Search the repo for `docs.sourcegraph.com`
- Exclude the `doc/` dir, all test fixtures, and `CHANGELOG.md`
- For each, replace `docs.sourcegraph.com` with `sourcegraph.com/docs`
- Navigate to the resulting URL to ensure it's not a dead link, updating the URL if necessary
Many of the URLs updated are just comments, but since I'm doing a manual audit of each URL anyway, I felt it was worth updating these while I was at it.
* First pass at moving logging linting into Bazel
* fixed negation operators
* Update dev/linters/logging/logging.go
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
* added more exceptions and refactored one or two impls
* added nogo lint pragmas to offending files
* ran configure
* reverted git-combine refactor
* ran configure
* reverted test as well
---------
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
Use [esbuild](https://esbuild.github.io/) instead of Webpack for builds of `client/web`, for faster builds (dev and prod) and greater dev-prod parity. This PR completely removes all use of Webpack in this repository.
`client/web` is the last build target that still uses Webpack; all others have been recently migrated to esbuild. Most devs here have been using esbuild for local dev of `client/web` for the last 6-12 months anyway. The change here is that now our production builds will be built by esbuild.
All sg commands, integration/e2e tests, etc., continue to work as-is. The bundlesize report will take a while to stabilize because the new build products use different filenames.
## Benchmarks
Running `pnpm run generate && time pnpm -C client/web run task:gulp webBuild` and taking the `time` output from the last command:
- Webpack: 62.5s
- esbuild: 6.7s
Note: This understates esbuild's victory for 2 reasons: (1) esbuild builds both the main and embed entrypoints, whereas Webpack only builds the main entrypoint in this benchmark, and (2) a lot of the time is the fixed startup cost of `gulp`; esbuild incremental rebuilds during local dev take only ~1s.
## Notes
We no longer use Babel to produce web builds (we use esbuild), so we don't need any Babel plugins that optimize the output or improve browser compatibility. Right now, Babel is only used by Jest (for tests) and by Bazel as an intermediate step.
* log: remove use of description parameter in Scoped
* temporarily point to sglog branch
* bazel configure + gazelle
* remove additional use of description param
* use latest versions of zoekt,log,mountinfo
* go.mod
This service is being replaced by a redsync.Mutex that lives directly in the GitHub client.
With this change we will:
- Simplify deployments by removing one service
- Centralize GitHub access control in the client instead of splitting it across services
- Remove the dependency on a non-HA service to talk to GitHub.com successfully
Other repos referencing this service will be updated once this has shipped to dotcom and proven to work over the course of a couple days.
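For context, a minimal sketch of what a redsync-based mutex in the client can look like (the Redis address and lock name are made up for illustration; the real client wires this into its GitHub.com request path):

```go
package main

import (
	"fmt"
	"log"

	"github.com/go-redsync/redsync/v4"
	"github.com/go-redsync/redsync/v4/redis/goredis/v9"
	goredislib "github.com/redis/go-redis/v9"
)

func main() {
	// Build a redsync instance on top of an ordinary Redis client.
	client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
	rs := redsync.New(goredis.NewPool(client))

	// A distributed mutex: every replica contending for GitHub.com
	// access acquires this lock directly instead of calling a
	// separate coordination service.
	mu := rs.NewMutex("github-client-lock")

	if err := mu.Lock(); err != nil {
		log.Fatal(err)
	}
	defer func() {
		if _, err := mu.Unlock(); err != nil {
			log.Print(err)
		}
	}()

	fmt.Println("holding the lock; safe to call GitHub.com")
}
```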
Apparently VS Code's Python support uses port 9000, so as long as we use
it, most VS Code Python devs can't run the app due to the port conflict
with our services.
This PR switches the app's blobstore to listen on `:49000` instead of
`:9000`. My thinking for now is that we can centralize all "default
endpoint" and "what host/port should I listen on?" logic into the
`deploy` package, and for now just change ports to have a `4` in front
of them to reduce the chance of conflicts.
Later, once we have them centralized in the `deploy` package we can
easily modify that code to pick an unavailable port.
This also binds blobstore to `127.0.0.1` (more secure); previously it
was bound to `localhost`.
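A small sketch of the centralization idea; the `deploy` package exists, but these particular names are hypothetical:

```go
// Package deploy centralizes default endpoints so a future change can
// pick free ports dynamically in one place. Names are hypothetical.
package deploy

const (
	// BlobstoreDefaultAddr binds to loopback explicitly (more secure
	// than "localhost") and uses a 4-prefixed port to reduce the
	// chance of clashing with common dev tools on :9000.
	BlobstoreDefaultAddr = "127.0.0.1:49000"
)

// BlobstoreEndpoint is the URL services should use to reach the
// blobstore by default.
func BlobstoreEndpoint() string {
	return "http://" + BlobstoreDefaultAddr
}
```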
## Test plan
To test this I did the following:
1. [x] Methodically searched our codebase for `9000` and vetted each
instance, to ensure I didn't miss anything.
2. [x] Tested with `sg start app` and confirmed `sudo lsof -i -P | grep
LISTEN | grep :9000` reported nothing while `sudo lsof -i -P | grep
LISTEN | grep :49000` did.
3. [x] Created an app build with `./enterprise/dev/app/build.sh` and
tested it:
* [x] Confirmed embeddings generation (which get stored in blobstore)
still works through the setup wizard
4. [x] Confirmed `sg start enterprise-codeintel` starts and runs
Signed-off-by: Stephen Gutekanst <stephen@sourcegraph.com>
A few `x_defs` attributes were missing on binaries, which is now fixed.
Also moved stamping from `go_library` rules to `go_binary` rules to ease
caching.
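For reference, `x_defs` overwrites package-level string variables at link time, the Bazel equivalent of `go build -ldflags "-X ..."`. A sketch of the Go side, with hypothetical names; the `x_defs` entry on the `go_binary` must reference the matching fully-qualified symbol:

```go
// Package version exposes build metadata injected at link time, e.g.
//
//	x_defs = {"<module>/internal/version.version": "{STABLE_VERSION}"}
//
// on the go_binary rule (the entry shown here is illustrative).
package version

// version defaults to "dev" and is overwritten by the linker when the
// binary is built with stamping enabled.
var version = "dev"

// Version reports the stamped version, or "dev" for unstamped builds.
func Version() string { return version }
```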
## Test plan
<!-- All pull requests REQUIRE a test plan:
https://docs.sourcegraph.com/dev/background-information/testing_principles
-->
CI + main-dry-run + locally tested + `strings github-proxy | grep
version` on `us.gcr.io/sourcegraph-dev/github-proxy:323c450504f8`
The previous approach to enabling race detection was too radical and
accidentally led to building our binaries with the race flag enabled,
which caused issues when building images down the line.
This happened because putting a `test --something` in bazelrc also sets
it on `build`, which is absolutely not what we wanted. Usually folks get
this working by having a `--stamp` config setting that fixes it when
releasing binaries, which we don't have at this stage, as we're still
learning Bazel.
Luckily, this was caught swiftly. The current approach is more granular:
it makes the `go_test` rule use our own variant, which injects the
`race = "on"` attribute, but only on `go_test`.
## Test plan
<!-- All pull requests REQUIRE a test plan:
https://docs.sourcegraph.com/dev/background-information/testing_principles
-->
CI: being a main-dry-run, this covers the container-building jobs,
which were the ones failing.
---------
Co-authored-by: Alex Ostrikov <alex.ostrikov@sourcegraph.com>
Copy the scip-ctags binary from a pre-built container, the same as
syntax-highlighter.
Fixes
```
2023-06-01T15:01:05.315027853Z [0;34m15:01:05 symbols | [0mscip-ctags: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.29' not found (required by scip-ctags)
2023-06-01T15:01:05.315056702Z [0;34m15:01:05 symbols | [0mscip-ctags: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by scip-ctags)
2023-06-01T15:01:05.315062326Z [0;34m15:01:05 symbols | [0mscip-ctags: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by scip-ctags)
2023-06-01T15:01:05.315066789Z [0;34m15:01:05 symbols | [0mscip-ctags: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.27' not found (required by scip-ctags)
```
## Test plan
no more errors in logs during upgrade test
This PR ships our freshly rewritten container images built with
rules_oci and Wolfi, which for now will only be used on S2.
*What is this about*
This work is the conjunction of [hardening container
images](https://github.com/orgs/sourcegraph/projects/302?pane=issue&itemId=25019223)
and fully building our container images with Bazel.
* All base images are now distroless, based on Wolfi, meaning we fully
control every little package version and we are no longer subject to,
for example, Alpine maintainers dropping a Postgres version.
* Container images are now built with `rules_oci`, meaning we no longer
have Dockerfiles; images are instead created through [Bazel
rules](https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph@bzl/oci_wolfi/-/blob/enterprise/cmd/gitserver/BUILD.bazel).
Don't be scared: while this will look a bit strange at first, it's much
saner and simpler than our Dockerfiles and their muddy shell scripts
calling each other in cascade.
🗒️ *Plan*:
*1/ (NOW) We merge our branch on `main` today; here is what it changes
for you 👇:*
* On `main`:
* It introduces a new job on `main`, _Bazel Push_, which pushes those
new images to our registries with all tags prefixed by `bazel-`.
* These new images will be picked up by S2 and S2 only.
* The existing jobs that build and push Docker images will stay in
place until we have QA'ed the new images enough and are confident
enough to roll them out on Dotcom.
* Because we'll be building both sets of images, there will be more jobs
running on `main`, but this should not affect the wall clock time.
* On all branches (so your PRs and `main`)
* The _Bazel Test_ job will now run: Backend Integration Tests, E2E
Tests and CodeIntel QA
* This will increase the duration of your test jobs in PRs, but as we
haven't yet removed the `sg lint` step, it should not affect the wall
clock time of your PRs too much.
* But it will also increase your confidence in your changes, as the
coverage is vastly increased compared to before.
* If you have ongoing branches that affect the Docker images (like
adding a new binary, such as the recent `scip-ctags`), reach out to us
on #job-fair-bazel so we can help you port your changes. It's much, much
simpler than before, but it's going to be unfamiliar at first.
* If something goes awfully wrong, we'll roll back and update this
thread.
*2/ (EOW / Early next week) Once we're confident enough with what we saw
on S2, we'll roll out the new images on Dotcom.*
* After the first successful deploy and a few sanity checks, we will
drop the old image-building jobs.
* At this point, we'll reach out to all TLs asking for their help to
exercise all features of our product to ensure we catch any potential
breakage.
## Test plan
<!-- All pull requests REQUIRE a test plan:
https://docs.sourcegraph.com/dev/background-information/testing_principles
-->
* We tested our new images on `scale-testing` and they worked.
* The new container-building rules come with _container tests_, which
ensure that the produced images contain, and are configured with, what
should be in there:
[example](https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph@bzl/oci_wolfi/-/blob/enterprise/cmd/gitserver/image_test.yaml).
---------
Co-authored-by: Dave Try <davetry@gmail.com>
Co-authored-by: Will Dollman <will.dollman@sourcegraph.com>
- Write a one-for-one, protocol-compatible Rust replacement for
universal-ctags in JSON streaming mode (scip-ctags)
- Use tree-sitter to generate scip symbols, then emit those through
scip-ctags
- These symbols will be reused for Cody context
- Ensure code is built with musl libc
## Test plan
Unit and snapshot tests in the Rust symbol generation code - verified
working in the symbols sidebar and symbol search for enabled languages.
---------
Co-authored-by: TJ DeVries <devries.timothyj@gmail.com>
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
Co-authored-by: Eric Fritz <eric@eric-fritz.com>
Co-authored-by: Eric Fritz <eric@sourcegraph.com>
Co-authored-by: Jean-Hadrien Chabran <jh@chabran.fr>
- Write a one-for-one, protocol-compatible Rust replacement for
universal-ctags in JSON streaming mode (scip-ctags)
- Use tree-sitter to generate scip symbols, then emit those through
scip-ctags
- These symbols will be reused for Cody context
Currently, only zig is enabled (so other languages should remain unaffected by this change).
We will add other languages throughout the next week as we're able to check them off.
## Test plan
Unit and snapshot tests in the Rust symbol generation code - verified
working in the symbols sidebar and symbol search for enabled languages.
---------
Co-authored-by: SuperAuguste <19855629+SuperAuguste@users.noreply.github.com>
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
Co-authored-by: Eric Fritz <eric@eric-fritz.com>
Co-authored-by: Eric Fritz <eric@sourcegraph.com>
For some Bazel targets we want Bazel to detect and use the host cc.
This puts the `--incompatible_enable_cc_toolchain_resolution` flag
behind a config setting one has to opt into with
`--config incompat-zig-linux-amd64`.
I also split out parts of the bazelrc so the CI bazelrc can be added
selectively, making it easier to run these scripts locally.
## Test plan
* green ci
* executed the symbols and server scripts locally
<!-- All pull requests REQUIRE a test plan:
https://docs.sourcegraph.com/dev/background-information/testing_principles
-->
As we're using
https://github.com/GoogleContainerTools/container-structure-test when
building images with Bazel, we can write tests that ensure the binaries
we produce are executable on the current platform.
The code using this feature is not present in this PR; shipping it
independently makes this much more readable for everyone.
We may still be bitten by `init()` functions, but we'll set low timeouts
on those container structure tests to ensure they stay quick.
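A minimal sketch of the guard this implies in each binary's entrypoint, assuming the check is driven by the `SANITY_CHECK` environment variable seen in the test plan below:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// When SANITY_CHECK=true, exit successfully right after startup:
	// the container structure test only needs to prove the binary can
	// execute on the target platform (correct libc, arch, and so on).
	if os.Getenv("SANITY_CHECK") == "true" {
		fmt.Println("Sanity check passed, exiting without error")
		os.Exit(0)
	}

	// ... normal service startup continues here ...
}
```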
## Test plan
<!-- All pull requests REQUIRE a test plan:
https://docs.sourcegraph.com/dev/background-information/testing_principles
-->
```
~/work/other jh/sanity_check $ SANITY_CHECK=true ./bazel-bin/docker-images/syntax-highlighter/syntect_server
Sanity check passed, exiting without error
~/work/other U jh/sanity_check $ ./bazel-bin/docker-images/syntax-highlighter/syntect_server
## Embedded themes:
- `InspiredGitHub`
- `Monokai`
- `Solarized (dark)`
# ...
```
```
~/work/other jh/sanity_check $ SANITY_CHECK=true bazel-bin/cmd/worker/worker_/worker
Sanity check passed, exiting without error
```