Using the spread operator with large arrays can trigger a
stack overflow in Chrome/V8.
In a highlighting context, we can have 10k-100k occurrences
in a file, so let's avoid using the spread operator.
Fixes https://linear.app/sourcegraph/issue/GRAPH-772
## Test plan
Manually tested against a sample file.

## Changelog
- Fixes a Chrome-specific stack overflow when highlighting large files.
<br> Backport 2644e24244 from #64072
Co-authored-by: Varun Gandhi <varun.gandhi@sourcegraph.com>
With the changes in https://github.com/sourcegraph/sourcegraph/pull/63985/files,
PatchRelease is matched before InternalRelease, leading to the wrong
build being generated.
We therefore move the Promote and Internal Release run types higher in
priority so that they are matched first.
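To illustrate why the ordering matters, here is a simplified Go sketch (the run type names and matching rules are stand-ins, not the actual `dev/ci/runtype` implementation): run types are checked in order and the first matcher that accepts the build environment wins, so the more specific run types have to sit above the broader patch-release matcher.
```
// Simplified sketch, not the real dev/ci/runtype code: run types are checked
// in order and the first match wins, so Promote/Internal must precede Patch.
package main

import "fmt"

type runType struct {
	name    string
	matches func(env map[string]string) bool
}

func compute(env map[string]string, ordered []runType) string {
	for _, rt := range ordered {
		if rt.matches(env) {
			return rt.name
		}
	}
	return "pull request"
}

func main() {
	env := map[string]string{"RELEASE_INTERNAL": "true", "VERSION": "5.5.2463"}
	ordered := []runType{
		// Internal release now sits above the broader patch-release matcher.
		{"Internal release", func(e map[string]string) bool { return e["RELEASE_INTERNAL"] == "true" }},
		{"Patch release", func(e map[string]string) bool { return e["VERSION"] != "" }},
	}
	fmt.Println(compute(env, ordered)) // prints "Internal release"
}
```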
## Test plan
```
export RELEASE_INTERNAL=true
export VERSION="5.5.2463"
go run ./dev/sg ci preview
```
👇🏼
```
go run ./dev/sg ci preview
⚠️ Running sg with a dev build, following flags have different default value unless explictly set: skip-auto-update, disable-analytics
If the current branch were to be pushed, the following pipeline would be run:
Parsed diff:
changed files: [WORKSPACE client/web-sveltekit/BUILD.bazel client/web-sveltekit/playwright.config.ts client/web-sveltekit/src/lib/navigation/GlobalHeader.svelte client/web-
sveltekit/src/routes/[...repo=reporev]/(validrev)/(code)/page.spec.ts client/web/src/cody/chat/new-chat/NewCodyChatPage.tsx client/web/src/cody/sidebar/new-cody-sidebar/NewCodySidebar.tsx
client/web/src/cody/sidebar/new-cody-sidebar/NewCodySidebarWebChat.tsx client/web/src/enterprise/batches/settings/AddCredentialModal.tsx
client/web/src/enterprise/batches/settings/BatchChangesCreateGitHubAppPage.tsx client/web/src/repo/blame/hooks.ts client/web/src/repo/blame/shared.ts cmd/frontend/auth/user.go
cmd/frontend/auth/user_test.go cmd/frontend/internal/codycontext/context.go cmd/frontend/internal/codycontext/context_test.go deps.bzl dev/ci/push_all.sh dev/ci/runtype/runtype.go go.mod go.sum
internal/codeintel/uploads/BUILD.bazel internal/codeintel/uploads/internal/background/backfiller/BUILD.bazel internal/codeintel/uploads/internal/background/backfiller/mocks_test.go
internal/codeintel/uploads/internal/background/commitgraph/BUILD.bazel internal/codeintel/uploads/internal/background/commitgraph/job_commitgraph.go
internal/codeintel/uploads/internal/background/expirer/BUILD.bazel internal/codeintel/uploads/internal/background/expirer/mocks_test.go
internal/codeintel/uploads/internal/background/processor/BUILD.bazel internal/codeintel/uploads/internal/background/processor/mocks_test.go internal/codeintel/uploads/internal/store/BUILD.bazel
internal/codeintel/uploads/internal/store/commitdate.go internal/codeintel/uploads/internal/store/commitdate_test.go internal/codeintel/uploads/internal/store/observability.go
internal/codeintel/uploads/internal/store/store.go internal/codeintel/uploads/mocks_test.go internal/database/migration/shared/data/cmd/generator/consts.go
internal/database/migration/shared/data/stitched-migration-graph.json package.json pnpm-lock.yaml schema/schema.go schema/site.schema.json]
diff changes: "Go, Client, pnpm, Docs, Shell"
The generated build pipeline will now follow, see you next time!
• Detected run type: Internal release
• Detected diffs: Go, Client, pnpm, Docs, Shell
• Computed variables:
• VERSION=5.5.2463
• Computed build steps:
• Aspect Workflow specific steps
• 🤖 Generated steps that include Buildifier, Gazelle, Test and Integration/E2E tests
• Image builds
• :bazel::packer: 🚧 Build executor image
• :bazel: Bazel prechecks & build sg
• :bazel:⏳ BackCompat Tests
• :bazel:🧹 Go mod tidy
• Linters and static analysis
• 🍍:lint-roller: Run sg lint → depends on bazel-prechecks
• Client checks
• :java: Build (client/jetbrains)
• :vscode: Tests for VS Code extension
• :stylelint: Stylelint (all)
• Security Scanning
• Semgrep SAST Scan
• Publish candidate images
• :bazel::docker: Push candidate Images
• End-to-end tests
• :bazel::docker::packer: Executors E2E → depends on bazel-push-images-candidate
• Publish images
• :bazel::packer: ✅ Publish executor image → depends on executor-vm-image:candidate
• :bazel:⤴️ Publish executor binary
• :bazel::docker: Push final images → depends on main::test main::test_2
• Release
• Release tests → depends on bazel-push-images
• Finalize internal release
```
## Changelog
<br> Backport 0309564f93 from #64049
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
A Bitbucket Cloud incident caused its APIs to return errors, which in turn
caused Bitbucket Cloud OAuth tokens to fail to refresh. This revealed that the
Bitbucket Cloud client called `oauthutil.DoRequest` with a `nil` logger,
causing a nil pointer dereference.
This PR simply creates the logger before calling `DoRequest`, which is
what the other clients do.
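As a rough, self-contained sketch of the failure mode (the `Logger` interface and `doRequest` helper below are illustrative stand-ins, not the real `oauthutil` API): calling a method through a nil logger panics, so the logger has to exist before it is handed to the request helper.
```
// Illustrative stand-ins only, not the real oauthutil API or logger type.
package main

import (
	"errors"
	"fmt"
)

type Logger interface {
	Warn(msg string)
}

type consoleLogger struct{}

func (consoleLogger) Warn(msg string) { fmt.Println("WARN:", msg) }

// doRequest stands in for a helper like oauthutil.DoRequest that logs when the
// token refresh fails. If logger is nil, the Warn call panics with a nil
// pointer dereference, which is what took the worker down.
func doRequest(logger Logger, refreshErr error) error {
	if refreshErr != nil {
		logger.Warn("refreshing OAuth token failed: " + refreshErr.Error())
		return refreshErr
	}
	return nil
}

func main() {
	// The fix: construct a logger up front instead of passing nil.
	_ = doRequest(consoleLogger{}, errors.New("bitbucket API unavailable"))
}
```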
## Test plan
Verified there are no more cases of `DoRequest` being called with a nil logger.
## Changelog
- Fixed an issue where a Bitbucket Cloud OAuth token failing to refresh
would crash the `worker` service.
<br> Backport bc036ad2ba from #64028
Co-authored-by: Petri-Johan Last <petri.last@sourcegraph.com>
Automatically generated PR to update package lockfiles for Sourcegraph
base images.
Built from Buildkite run
[#283970](https://buildkite.com/sourcegraph/sourcegraph/builds/283970).
## Test Plan
- CI build verifies image functionality
Co-authored-by: Buildkite <buildkite@sourcegraph.com>
As part of the [Vuln Scanning
Improvements](https://linear.app/sourcegraph/project/[p0]-vulnerability-scanning-improvements-75299c4312dd/issues)
project, I've been working on tooling to automate the security
approval step of the release process.
This PR integrates these improvements into the release pipeline:
* Internal releases will run a vulnerability scan
* Promote-to-public releases will check for security approval
If a public release does not have security approval, it will block the
promotion process. The step happens at the start of the pipeline, so it
should fail fast. You can also check for release approval before
running a promotion by running `@secbot cve approve-release
<version>` in the #secbot-commands channel. In an ideal world we
(security) will already have gone through and approved the release ahead of time.
I've tested this PR as much as I can without running an actual
release! We have a 5.5.x release tomorrow so it'll be a good test.
If it does cause problems that can't be easily solved, it can always
be temporarily disabled.
I've tagged this PR to be backported to `5.5.x`.
## Pre-merge checklist
- [x] Revert commit that disables release promotion
## Test plan
Manual testing of the release process:
- [x] [Successful test
run](https://buildkite.com/sourcegraph/sourcegraph/builds/283774#0190dfd6-fa70-4cea-9711-f5b8493c7714)
that shows the security scan being triggered
- [x] [Promote to public test
run](https://buildkite.com/sourcegraph/sourcegraph/builds/283826) that
shows the security approval approving a release
- [x] [Promote to public test
run](https://buildkite.com/sourcegraph/sourcegraph/builds/283817#0190e0ec-0641-4451-b7c7-171e664a3127)
that shows the security approval rejecting a release with un-accepted
CVEs
## Changelog
<br> Backport 9dd901f3c9 from #63990
Co-authored-by: Will Dollman <will.dollman@sourcegraph.com>
In order to run nightly vulnerability scans of Sourcegraph releases, we
need to publish a new set of images whenever the release branch is
pushed to.
Previously, this was implemented in
https://github.com/sourcegraph/sourcegraph/pull/63379, but with RFC 795
the release branch format changed from `5.5.1234` to `5.5.x`.
This PR updates the regex to catch this new format.
The end result is that whenever Buildkite runs on a branch
matching `\d.\d.x`, it will push images to the
`us.gcr.io/sourcegraph-dev/gitserver` registry with the tag
`$branch-insiders`.
I've also tagged this PR for backport as we want it on the current
patch release branch 5.5.x :)
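A hedged sketch of the idea; the exact pattern lives in the pipeline's run-type detection and may differ from what is shown here:
```
// Sketch only; the real branch pattern lives in the CI pipeline generator.
package main

import (
	"fmt"
	"regexp"
)

// Accepts RFC 795-style release branches such as "5.5.x".
var releaseBranch = regexp.MustCompile(`^\d+\.\d+\.x$`)

func main() {
	branch := "5.5.x"
	if releaseBranch.MatchString(branch) {
		// Images are pushed with the "$branch-insiders" tag, e.g. "5.5.x-insiders".
		fmt.Printf("pushing images tagged %s-insiders\n", branch)
	}
}
```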
## Test plan
- Test buildkite run on branch `will-0.0.x` (with modified regex to
match that branch)
https://buildkite.com/sourcegraph/sourcegraph/builds/283608
## Changelog
<br> Backport b7242d280f from #63985
Co-authored-by: Will Dollman <will.dollman@sourcegraph.com>
Currently, events are triggered whenever a user signs in with
`http-header` auth, because the `GetAndSaveUser` function
always records an event.
However, before the new telemetry events, these events were only created
when a new user was created.
This PR brings the new telemetry code in line with the old telemetry
code to stop the massive amount of event spam caused by this.
Closes SRC-461
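A minimal sketch of the behavioural change, with hypothetical names that don't match the real `GetAndSaveUser` implementation: the event is only recorded when a new account is actually created, not on every `http-header` sign-in.
```
// Hypothetical names; illustrates gating the event on account creation.
package main

import "fmt"

func recordEvent(name string) { fmt.Println("telemetry event:", name) }

// getAndSaveUser stands in for auth.GetAndSaveUser: it upserts the account and
// reports whether it was newly created.
func getAndSaveUser(username string, existing map[string]bool) {
	newUserCreated := !existing[username]
	existing[username] = true
	if newUserCreated { // previously the event was recorded unconditionally
		recordEvent("user signed up")
	}
}

func main() {
	users := map[string]bool{}
	getAndSaveUser("alice", users) // first sign-in: one event
	getAndSaveUser("alice", users) // repeat sign-ins: no event, no spam
}
```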
## Test plan
Adjust expected events in unit test.
## Changelog
- Fixed an issue where `http-header` auth would cause a massive
amount of event log spam
<br> Backport cd65951961 from #63843
Co-authored-by: Petri-Johan Last <petri.last@sourcegraph.com>
Contributes to SRCH-738
Notably, this does not yet identify the root cause of SRCH-738, but it
does identify and fix some confounding bugs. It's possible that these
actually also _cause_ some of the issues in SRCH-738, but I wanted to at
least push these to dotcom, where we can reproduce some of the
weirdness. At the very least, it doesn't explain the auth errors being
reported.
(cherry picked from commit d91fab39e2)
Co-authored-by: Michael Bahr <michael.bahr@sourcegraph.com>
Backport of https://github.com/sourcegraph/sourcegraph/pull/63863
S2 Cody Web is broken at the moment. The new client-config handlers fail
with a 401 status because we don't send custom headers. This works for GraphQL
queries, since they are all POST requests and the browser automatically sends
an Origin header for them, which is enough for our auth middleware to check
cookies. The client-config endpoint, however, is REST, so that isn't the case,
and we need to send the `X-Requested-Client: Sourcegraph` header for our auth
middleware to let the request through correctly.
Note that this problem doesn't exist in local builds, since we proxy all
requests and add `X-Requested-Client: Sourcegraph` in the dev server.
See the latest Cody build PR for more details:
https://github.com/sourcegraph/cody/pull/4898
## Test plan
CI
Co-authored-by: Vova Kulikov <vovakulikov@icloud.com>
Closes SRCH-723
The baseURL for GitHub apps defaults to `https://github.com` when no
`externalServiceURL` is set; we somehow missed this during our testing.

## Test plan
Manual testing with the GHE instance.
## Changelog
<br> Backport 1c40c9e5bc from #63803
Co-authored-by: Bolaji Olajide <25608335+BolajiOlajide@users.noreply.github.com>
Co-authored-by: Anish Lakhwara <anish+github@lakhwara.com>
See https://github.com/sourcegraph/sourcegraph/pull/63870
cc @sourcegraph/release
## Test plan
Covered by existing tests
## Changelog
- Adds an experimental feature `commitGraphUpdates` to control how
upload visibility is calculated.
This PR upgrades the experimental Cody Web package to 0.2.5. The new
version includes:
- A fix for a telemetry problem with init extension-related events (we don't
send install extension events anymore)
- The most recent updates on LLM availability for enterprise instances
## Test plan
- CI is green
- Manual check on basic Cody Web functionality (highly recommended)
<br> Backport e6bd85e4b7 from #63839
Co-authored-by: Vova Kulikov <vovakulikov@icloud.com>
This PR fixes an important bug in #62976, where we didn't properly map
the symbol line match to the return type. Instead, we accidentally treated
symbol matches like file matches and returned the start of the file.
## Test plan
Add new unit test for symbol match conversion. Extensive manual testing.
<br> Backport 004eb0fd83 from #63773
Co-authored-by: Julie Tibshirani <julietibs@apache.org>
The OTEL upgrade https://github.com/sourcegraph/sourcegraph/pull/63171
bumps the `prometheus/common` package too far via transitive deps,
causing us to generate configuration for alertmanager that alertmanager
doesn't accept, at least until the alertmanager project cuts a new
release with a newer version of `prometheus/common`.
For now we forcibly downgrade with a replace. Everything still builds,
so we should be good to go.
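For reference, the downgrade is a standard `go.mod` replace directive along these lines (the pinned version shown is illustrative, not necessarily the one used in the actual change):
```
// go.mod excerpt (illustrative version): pin prometheus/common to an older
// release that alertmanager still accepts, overriding the transitive bump.
replace github.com/prometheus/common => github.com/prometheus/common v0.48.0
```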
## Test plan
`sg start` and `sg run prometheus`. On `main`, editing
`observability.alerts` will cause Alertmanager to refuse to accept the
generated configuration. With this patch, all seems well: config
changes go through as expected. This is similar to the test plan for
https://github.com/sourcegraph/sourcegraph/pull/63329
## Changelog
- Fix Prometheus Alertmanager configuration failing to apply
`observability.alerts` from site config
<br> Backport ffa873f3ad from #63790
Co-authored-by: Robert Lin <robert@bobheadxi.dev>
This will correct the upgrade path for MVU plan creation
## Test plan
CI test
## Changelog
<br> Backport cb19d6f0a9 from #63764
Co-authored-by: Warren Gifford <warren@sourcegraph.com>
Missing bit for the minor release version bump
## Test plan
CI
<br> Backport 087ad83995 from #63767
Co-authored-by: Jean-Hadrien Chabran <jean-hadrien.chabran@sourcegraph.com>
We created a decoder that was never used, and the package is otherwise
unused. It recently had a CVE, so this just removes it so that it's no longer
part of our security surface area.
Make SetupEnvtest slightly lower-level by asking callers to construct
their own client from the returned k8s REST config. This is because
there are two kinds of official Kubernetes client in Go: a
`kubernetes.Clientset` and a `client.Client`. The latter is
more traditionally used in operators, because it's what a
`ControllerManager.GetClient()` returns, and indeed this is what our
reconciler uses.
We ended up using a kubernetes.Clientset in the envtest-using golden
tests for the reconciler, because its mechanics for listing resources
were simpler. Now, I want to reuse SetupEnvtest somewhere that needs a
client.Client. We could undertake work to use only one flavor of
kubernetes client, but this commit seems like a decent low-cost first
step.
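As an illustration of what a caller now does with the returned REST config, here is a sketch using the standard client-go and controller-runtime constructors; it shows the two client flavors side by side rather than the exact call sites in our tests.
```
// Sketch: building both client flavors from the *rest.Config that
// SetupEnvtest now returns. Imports are from client-go and controller-runtime.
package clients

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// clientsFromConfig returns the typed kubernetes.Clientset used by the golden
// tests and the generic client.Client that controller-runtime reconcilers use.
func clientsFromConfig(cfg *rest.Config) (*kubernetes.Clientset, client.Client, error) {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, nil, err
	}
	ctrlClient, err := client.New(cfg, client.Options{})
	if err != nil {
		return nil, nil, err
	}
	return clientset, ctrlClient, nil
}
```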
The executor, executor-kubernetes, and bundled-executor Docker images have
high/critical CVEs reported against the Go standard library (CVE-2024-24790,
CVE-2023-45288). Upon testing, src version 5.3.0 was using Go `1.20.x` as per
e8e79e0311.
This pull request upgrades the src version to 5.4.0.
## Test plan
- CI 🟢
- src version should report 5.4.0 (I built the image locally and tested it):
`docker run --platform linux/amd64 -it --entrypoint /bin/sh executor:candidate`
## Changelog
- Upgrade src-cli version to 5.4.0 to address CVE-2024-24790 and
CVE-2023-45288
Currently, if a Cloud Ephemeral build is triggered, it is triggered on the
`main` sourcegraph pipeline. Once a build is triggered and a commit is
subsequently pushed, the previous build is cancelled, which means the
Cloud Ephemeral build is cancelled, leading to a failed deployment.
In this PR, we instead trigger a build on the Cloud Ephemeral pipeline,
which is the _exact_ same pipeline as `sourcegraph` `main`, but:
- sets the pipeline env to always have `CLOUD_EPHEMERAL=true`
- does not cancel previous builds
## Test plan
https://buildkite.com/sourcegraph/cloud-ephemeral/builds/1
## Changelog
* `sg cloud eph` will now trigger builds on the `cloud-ephemeral`
pipeline
Automatically generated PR to update package lockfiles for Sourcegraph
base images.
Built from Buildkite run
[#281769](https://buildkite.com/sourcegraph/sourcegraph/builds/281769).
## Test Plan
- CI build verifies image functionality
Co-authored-by: Buildkite <buildkite@sourcegraph.com>
Since we removed on-demand cloning, the scheduler is now expected to always contain all repositories. Thus, we no longer need to constrain the set of uncloned repos to a ginormous ID list.
Test plan:
CI still passes.
This syncer doesn't depend on anything in repo updater, so we're moving it to worker instead, where it can selectively be disabled and is properly monitored.
Test plan:
CI passes, code review.
We have been using v2 data for more than 5 years now, so this should be safe
to remove.
As a side effect, we have one less background task running in frontend; that
task ran N times in horizontally scaled environments, which isn't exactly
useful.
Test plan:
Code review.
Currently, nothing really indicates that Cody Gateway needs Redis; the env
var for finding the address is hidden somewhere deep in the redispool
package.
In practice, we only use one Redis instance, but at some point we
started using both redispool.Cache and redispool.Store, which means we
maintain two connection pools, leading to more connections than
expected.
Test plan:
Code review and CI.
This PR restructures the packages to move all symbols-only code into the
symbols service. This makes it easier to reason about which service is
accessing which data stores.
Test plan:
Just moved code, compiler and CI are happy.
Recently, this was refactored to also allow using the redispool.Store.
However, that makes it very implicit where something is being written, so
instead we pass down the pool instance at instantiation.
This also gives a slightly better overview of where redispool is
actually required.
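As a rough sketch of the direction, with hypothetical names rather than the actual call sites: the consumer receives the key-value pool it writes to at construction time instead of reaching for a package-level redispool value, which makes the write target explicit.
```
// Hypothetical names; illustrates passing the pool at instantiation rather
// than reading a package-level redispool value inside the methods.
package example

// KeyValue stands in for the redispool key-value interface.
type KeyValue interface {
	Set(key string, value any) error
}

// Store writes to exactly the pool it was given, so callers can see (and
// choose) whether that is redispool.Cache or redispool.Store.
type Store struct {
	kv KeyValue
}

// NewStore makes the target pool an explicit constructor argument.
func NewStore(kv KeyValue) *Store {
	return &Store{kv: kv}
}

func (s *Store) Record(key string, value any) error {
	return s.kv.Set(key, value)
}
```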
Test plan: CI passes.