## UI Updates for Perforce Depots and Git Repos
Fixes SRCH-530
**NOTE: This PR is a refactor of an earlier
[PR](https://github.com/sourcegraph/sourcegraph/pull/64014) that was
reverted. For that reason, the PR description is largely the same.**
This PR introduces changes to the UI to differentiate between Perforce
Depots and Git repositories. Below are the key changes included in this
commit:
### 1. Dynamic Top-Level Navigation
**For Perforce Depots:**

**For Git Repos:**

### 2. Tabs on Revision Picker
**For Perforce Depots:**
Since we only need one tab for changelists, no tabs are shown.

**For Git Repos:**
We have tabs for Branches, Tags, and Commits.

### 3. Commits/Changelists Page
**For Git Repos:**
The page displays Git commits.

**For Perforce Depots:**
The page displays Perforce changelists.

### 4. Vocabulary Adjustments
- We display either Git commit SHAs or Changelist IDs based on the
project type.
- For authorship, we use "submitted by" for Perforce and "committed by"
for Git.
- We refer to "Commits" for Git projects and "Changelists" for Perforce
projects.
**Examples:**
- **For Git Commits:**

- **For Perforce Changelists:**

### 5. URL Mapping
URLs are now structured differently based on the project type:
- **Commits Page:**
- Git: `/[repo-name]/-/commits`
- Perforce: `/[repo-name]/-/changelists`
- **Individual Item Page:**
- Git: `/[repo-name]/-/commit/[commit-hash]`
- Perforce: `/[depot-name]/-/changelist/[changelist-ID]`
When viewing a specific commit or changelist:
- **Git:** `/[repo-name]@[git-commit-hash]`
- **Perforce:** `/[repo-name]@changelist/[changelist-id]`
_NOTE: The value displayed in the search field will also change
accordingly._
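The URL mapping above can be sketched as a small helper. This is an illustrative sketch only; the function and parameter names are hypothetical, not the actual Sourcegraph implementation:

```go
package main

import "fmt"

// revisionURL sketches the mapping described above: Git repos embed the
// commit SHA directly after "@", while Perforce depots use an explicit
// "changelist/" segment. Names here are hypothetical.
func revisionURL(repoName, revision string, isPerforceDepot bool) string {
	if isPerforceDepot {
		return fmt.Sprintf("/%s@changelist/%s", repoName, revision)
	}
	return fmt.Sprintf("/%s@%s", repoName, revision)
}

func main() {
	fmt.Println(revisionURL("github.com/org/repo", "e83c5163", false))
	fmt.Println(revisionURL("public.perforce.com/depot", "12345", true))
}
```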
### What is left to be done?
**On repo search results, when searching a revision, we still show the
Git commit SHA instead of the changelist ID for Perforce depots:**

I plan to make a follow-up issue for this and begin work on it
immediately. It's a little trickier than the other changes because the
RepositoryMatch type has no field that tells us whether a project is a
depot or a repo, so we need to find another way to fetch that data.
### Request for reviewers:
1. Please try to break these new features and tell me what you find. I
stumbled on a number of little gotchas while working on this, and I'm
sure I've missed some.
## Test plan
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
- Manual/Visual testing
- Adjust e2e and integration tests to obtain a passing CI
- Test directly visiting a URL versus getting there via click
- Add unit tests for new/updated helper functions
---------
Co-authored-by: Camden Cheek <camden@ccheek.com>
We're currently evaluating pushing to Docker Hub only when releasing, as
everything else uses GCR. Until then, we lower the concurrency to 4 when
pushing there and keep 8 for the others.
Follow-up to https://github.com/sourcegraph/devx-support/issues/1163
## Test plan
CI
ALTER TABLE with a foreign key constraint will _always_ add the
constraint, which means we always require a table lock even if this
migration has already run. So we now check whether the column exists first.
The motivation for this change was noticing slow migrations against S2,
even though it had already been migrated. This change speeds things up
since it avoids a bunch of locks.
In testing I kept running into issues with the lsif_* tables. We agreed
we will only migrate them later.
Test Plan: ran against my local instance and S2 multiple times and CI.
Also manually ran the down and up migration against dotcom
This PR makes `WithPrefix` visible on the `KeyValue` interface.
Previously, we had other interface implementations that did not support
`WithPrefix`, but now `redisKeyValue` is the only implementation.
`WithPrefix` is currently just used in tests, but it's broadly useful
and will help clean up a bunch of places that wrap Redis and manually
add key prefixes.
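To make the idea concrete, here is a minimal sketch of a key-value interface with `WithPrefix` over a toy in-memory store. The shape is simplified; the real `redispool.KeyValue` has many more methods and different return types:

```go
package main

import "fmt"

// KeyValue is a simplified stand-in for the real interface.
type KeyValue interface {
	Get(key string) (string, bool)
	Set(key, value string)
	// WithPrefix returns a KeyValue whose keys are all transparently
	// namespaced under prefix.
	WithPrefix(prefix string) KeyValue
}

// mapKeyValue is a toy in-memory implementation used for illustration.
type mapKeyValue struct {
	prefix string
	data   map[string]string
}

func newMapKeyValue() KeyValue {
	return &mapKeyValue{data: map[string]string{}}
}

func (m *mapKeyValue) Get(key string) (string, bool) {
	v, ok := m.data[m.prefix+key]
	return v, ok
}

func (m *mapKeyValue) Set(key, value string) {
	m.data[m.prefix+key] = value
}

func (m *mapKeyValue) WithPrefix(prefix string) KeyValue {
	// Share the underlying store but extend the prefix.
	return &mapKeyValue{prefix: m.prefix + prefix, data: m.data}
}

func main() {
	kv := newMapKeyValue()
	test := kv.WithPrefix("test-namespace:")
	test.Set("counter", "1")
	// The prefixed key is what actually gets stored.
	if v, ok := kv.Get("test-namespace:counter"); ok {
		fmt.Println(v)
	}
}
```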
The improvement to the triggers I tried to ship yesterday failed on
dotcom. This is because it has the same issue as the tenant_id column:
we can't acquire the ShareRowExclusiveLock for multiple important
tables in a transaction without deadlocking. We use the same technique
of committing and restarting the transaction. This is safe since, even
if it fails, the migration is idempotent and a partial migration
still works.
Additionally, after reading the documentation on triggers more deeply
[1], I noticed we can add a WHEN clause to triggers. We already have
this condition in the trigger functions, but a WHEN clause should be
more efficient.
I didn't want to remove the condition from the functions just yet, since
that makes the migration more risky under partial failure.
## Test Plan
CI. Also added BEGIN and COMMIT to the migration, then ran locally with
``` shell
psql -f migrations/frontend/1723557836_faster_soft_delete_trigger_for_permissions/up.sql
```
Then I inspected the DB state for repo and got a good-looking trigger
back:
```
\d+ repo
...snip
Triggers:
trig_delete_user_repo_permissions_on_repo_soft_delete AFTER UPDATE
OF deleted_at ON repo FOR EACH ROW WHEN (new.deleted_at IS NOT NULL
AND old.deleted_at IS NULL) EXECUTE FUNCTION
delete_user_repo_permissions_on_repo_soft_delete()
```
Additionally before landing I am going to manually run this migration on
S2 first, then on dotcom to see how it behaves when there is lots of
traffic and scale.
This test ensures that newly added tables also have the tenant_id
column. We will later extend this test to also check that RLS policies
exist, once we have created them.
Test plan: The test passes; when I modify the new migration that adds
tenant_id everywhere to skip a table, it fails with a nice error message.
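The real test presumably inspects the live database schema (e.g. via information_schema); the sketch below only shows the core assertion over an in-memory snapshot of table columns. All table names and the exceptions mechanism are illustrative assumptions:

```go
package main

import "fmt"

// tablesMissingTenantID returns every table (outside the exception set)
// that lacks a tenant_id column. This is a simplified sketch, not the
// actual dbtest assertion.
func tablesMissingTenantID(columnsByTable map[string][]string, exceptions map[string]bool) []string {
	var missing []string
	for table, cols := range columnsByTable {
		if exceptions[table] {
			continue
		}
		found := false
		for _, c := range cols {
			if c == "tenant_id" {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, table)
		}
	}
	return missing
}

func main() {
	schema := map[string][]string{
		"repo":       {"id", "name", "tenant_id"},
		"user_creds": {"id", "user_id"}, // would fail the test
	}
	fmt.Println(tablesMissingTenantID(schema, map[string]bool{}))
}
```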
Fixes CODY-3085
Fixes CODY-3086
Previously, there was no way for OpenAI clients to list the available
models on Sourcegraph or query metadata about a given model ID ("model
ref" using our internal terminology). This PR fixes that problem AND
additionally adds infrastructure to auto-generate Go models from a
TypeSpec specification.
[TypeSpec](https://typespec.io/) is an IDL to document REST APIs,
created by Microsoft. Historically, the Go code in this repository has
been the single source of truth about what exact JSON structures are
expected in HTTP request/response pairs in our REST endpoints. This new
TypeSpec infrastructure allows us to document these shapes at a higher
abstraction level, which has several benefits including automatic
OpenAPI generation, which we can use to generate docs on
sourcegraph.com/docs or automatically generate client bindings in
TypeScript (among many other use-cases).
I am planning to write an RFC to propose we start using TypeSpec for new
REST endpoints going forward. If the RFC is not approved then we can
just delete the new `tools/typespec_codegen` directory and keep the
generated code in the repo. It won't make a big difference in the end
compared to our current manual approach of writing Go structs for HTTP
APIs.
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
See test cases. I additionally wrote a basic python script with the
official OpenAI client to test that it works with this endpoint. First,
I ran `sg start minimal`. Then I wrote this script
```py
import os
from openai import OpenAI
from dotenv import load_dotenv
import httpx

load_dotenv()

openai = OpenAI(
    # base_url="https://api.openai.com/v1",
    # api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://sourcegraph.test:3443/api/v1",
    api_key=os.getenv("SRC_ACCESS_TOKEN"),
    http_client=httpx.Client(verify=False)
)

def main():
    response = openai.models.list()
    for model in response.data:
        print(model.id)

if __name__ == "__main__":
    main()
```
Finally, I ran
```
❯ python3 models.py
anthropic::unknown::claude-3-haiku-20240307
anthropic::unknown::claude-3-sonnet-20240229
fireworks::unknown::starcoder
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
* New `GET /.api/llm/models` and `GET /.api/llm/models/{modelId}` REST
API endpoints to list available LLM models on the instance and to get
information about a given model. These endpoints are compatible with the
`/models` and `/models/{modelId}` endpoints from OpenAI.
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
We want to discourage direct usage of the Redis pool in favor of routing
all calls through the main `KeyValue` interface. This PR removes several
usages of `KeyValue.Pool`. To do so, it adds "PING" and "MGET" to the
`KeyValue` interface.
Docs here: https://github.com/sourcegraph/docs/pull/561
This PR adds support for using Bitbucket Server OAuth2 application links
for sign-in and permission syncing.
When used for permission syncing, the user's oauth token is used to
fetch user permissions (and now permissions are fetched via the server).
## Test plan
Tests added and updated.
## Changelog
- Sourcegraph now supports Bitbucket Server OAuth2 application links for
user sign-in and permission syncing.
Did a ripgrep to see if there are any other `console.log` statements
that should be removed but this was the only one.
## Test plan
code inspection, trivial change
Closes DINF-182
Previously, `sg start otel` would fail if `.bin` did not exist. We now
create the destination directory if it doesn't exist.
## Test plan
1. `rm -rf .bin`
2. `sg start otel`
and unit tests + CI
## Changelog
* when downloading files, ensure the directory we download to exists
**chore(appliance): version list obtained from backend**
Instead of calling release registry directly from the frontend. This
commit is just preparation for a fallback mechanism for users that do
not want the external dependency on release registry.
**feat(appliance): optionally load pinned releases file**
Instead of calling release registry. This is a fallback mechanism for
airgap users.
**feat(appliance): respect pinned release versions during self-update**
This PR removes the `redispool.RedisKeyValue` constructor in favor of
the `New...KeyValue` methods, which do not take a pool directly. This
way callers won't create a `Pool` reference, allowing us to track all
direct pool usage through `KeyValue.Pool()`.
This also simplifies a few things:
* Tests now use `NewTestKeyValue` instead of dialing up localhost
directly
* We can remove duplicated Redis connection logic in Cody Gateway
Closes #srch-906
This commit adds the Cody chat page to Svelte. It simply reuses
the existing wrapper around the React component and renders it in a
standalone page.
When merged, this will cause the Cody chat page to be handled by the new
web app on dotcom by default.
## Test plan
- Verified that chat loads and the tabs are clickable.
- Verified that the scroll behavior works as in the React app.
- Verified that the sidebar chat still works as expected (scroll
behavior, default context loading/switching)
- The time.Second frequency is too frequent for checking whether the job
should run; I think I set this during testing
- Adjust the Slack messaging to say `Active license`
- Adjust the Slack messaging to include the Salesforce subscription ID
## Test plan
Tests
This is a partial revert commit of
cbd12608b5.
We added support for NoTransaction, but it isn't needed anymore. In
fact, avoiding transactions leads to issues like poisoned connections.
Test Plan: CI
We believe the migration to add tenant_id is failing on dotcom due to
this trigger, which runs for any possible change to a row. It only
needs to run if the deleted_at column changes, so this migration
adjusts each trigger to only run when deleted_at is changed.
We don't need to change the trigger functions, since they only read
deleted_at.
If this doesn't resolve our issue, we will need to look into a statement
level trigger for all our triggers.
Test Plan: CI and then will test safely on larger instances.
Fixes srch-882
I noticed that the button wasn't shown anymore (neither in Svelte nor in
React). It seems that this broke when we switched from using the file
extension to getting the language(s) from the server: the server sends
back capitalized names, which we compare against lowercase IDs.
If there is a newer way to find out whether a language supports 'find
implementation', please let me know. For the time being, this seems to
be a simple fix to get it working again like before. Alternatively, we
could also compare the stylized name.
## Test plan
Hovering over a Go interface (e.g.
https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/internal/conf/server.go?L12)
shows the 'find implementations' button in the hovercard again.
Fixes srch-744
It seems that the code for testing whether the target element is part of
the layout didn't work in hovercards because it (possibly?) runs before
the hovercard is rendered.
Moving the logic to `onMount` + `await tick()` seems to work, although
that might still be a coincidence.
## Test plan
Hovering over the 'precise' badge shows the corresponding tooltip.
We dug into our Go Postgres driver: when executing a migration, it is
run as a "simple query". In this case, Postgres automatically wraps the
collection of statements in a transaction, unless it contains
transaction statements. So our last attempt at removing the transaction
failed.
In this attempt we use COMMIT AND CHAIN after each table alter. This
commits the current transaction and immediately starts a new one, so
from the perspective of the Go driver it is as if there was only one
transaction. We then switch the migration to using a transaction to
ensure the Go driver cleans up the Postgres connection in case of
failure.
That is, if a query manually starts a transaction and does not clean up,
the connection will be marked as broken for the next caller that gets it
from the pool. By wrapping in Go's transaction code, the connection will
be properly cleaned up.
Test Plan: All continuous environments have already succeeded or failed
on this migration number. So we will manually run this again against
them with the migrator code to ensure the same code paths. If they
succeed we will keep code as is, otherwise we will rollback.
Additionally we did lots of adhoc testing to understand the
characteristics of go and transaction handling.
Co-authored-by: Erik Seliger <erikseliger@me.com>
Previously, it took ~6 seconds for a single edit/test/debug feedback
loop in the `llmapi` module. After this change, it's now 1-2s.
The reason the feedback loop was slow was that we depended on the
`//cmd/frontend/internal/modelconfig` target, which transitively brings
in `graphqlbackend` and all the migration code, which adds huge overhead
to Go link times. It was relatively easy to untangle this dependency so
I went ahead and removed it to boost my local feedback loop.
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
Green CI.
To measure the timing, I ran the tests, made a tiny change, and ran the
tests again to measure the total time to build+test.
```
# Before
❯ time go test -timeout 30s github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi
ok github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi 2.394s
go test -timeout 30s 4.26s user 4.73s system 166% cpu 5.393 total
# After
❯ time go test -timeout 30s github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi
ok github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi 0.862s
go test -timeout 30s 1.20s user 1.21s system 135% cpu 1.774 total
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Currently, build-tracker keeps track of consecutive build failures
through an in-memory store of failed builds. As this gets deployed more
frequently on MSP, we lose state more frequently, which would produce
incorrect results. Instead, we can use Redis as our external store, as
well as for locking via redsync.
## Test plan
Unit tests have been updated, but proper testing will require live
traffic
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Fixes CODY-3269
Previously, the OpenAI-compatible API endpoints had paths like
`/api/v1/chat/completions`, which went against an existing convention of
keeping all APIs under the `/.api` prefix. We have a fair amount of
internal tooling centered around the assumption that APIs have the
`/.api` prefix so this PR corrects the mistake and moves the
`/api/v1/chat/completions` endpoint to `/.api/llm/chat/completions`.
I went with the prefix `/.api/llm` since these endpoints allow clients
to interact with LLM features like `/chat/completions` or listing model
information. These APIs happen to be compatible with OpenAI APIs, but I
think it would be more confusing to add something like "openai" or
"openai-compatible" to the API endpoint. We can just document on our
website that these endpoints are compatible with OpenAI clients.
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
Green CI. I also manually confirmed that I was able to use an OpenAI
client to send requests to the new APIs.
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
* Moved `/api/v1/chat/completions` to `/.api/llm/chat/completions`. The
functionality is unchanged.
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
When we added a migration that actually relies on the promised property
of migrations always running inside a transaction, we noticed that the
memoryStore implementation we use in the dbtest package does not
actually start a transaction. This implements transactions for the
memoryStore as well, to make it pass.
Test plan: Our migration now passes, and dbtest performance with and
without this change is the same.
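A common way to implement transactions for an in-memory store is to snapshot the data on begin and restore it on rollback. The sketch below illustrates that pattern only; it is not the actual dbtest implementation, and all names are made up:

```go
package main

import "fmt"

// memStore sketches snapshot-based transactions for an in-memory store:
// Begin copies the current state, Rollback restores it, Commit discards
// the snapshot.
type memStore struct {
	data     map[string]string
	snapshot map[string]string // non-nil while a transaction is open
}

func newMemStore() *memStore { return &memStore{data: map[string]string{}} }

func (s *memStore) Begin() {
	s.snapshot = make(map[string]string, len(s.data))
	for k, v := range s.data {
		s.snapshot[k] = v
	}
}

func (s *memStore) Set(k, v string) { s.data[k] = v }

func (s *memStore) Commit() { s.snapshot = nil }

func (s *memStore) Rollback() {
	s.data = s.snapshot
	s.snapshot = nil
}

func main() {
	s := newMemStore()
	s.Begin()
	s.Set("a", "1")
	s.Rollback()
	fmt.Println(len(s.data)) // the write was rolled back
}
```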
Fixes CODY-3267
Previously, requests to `/api/` matched the new `publicrestapi` module,
which meant that requests to the GraphQL `/api/console` no longer
worked. This PR fixes the problem by narrowing the prefix-match to
`/api/v1` so that it no longer matches `/api/console`.
I kept the scope of this PR narrow and only fixed the /api/console bug.
I will share a separate RFC to seek input on the tradeoffs between
/api/v1 and /.api. I can make that change separately if there's wide
consensus in #wg-architecture that we want to keep all API endpoints
(public and internal-facing) under /.api.
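The routing behavior can be reproduced with a toy `net/http` mux (illustrative only; the real router and handlers differ): an exact `/api/console` pattern coexists with the narrowed `/api/v1/` prefix, whereas a broad `/api/` prefix would have shadowed it.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// newMux registers an exact pattern for the GraphQL console next to the
// narrowed /api/v1/ prefix for the new public REST API. With a broad
// "/api/" registration, /api/console would have been shadowed.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/console", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "graphql console")
	})
	mux.HandleFunc("/api/v1/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "public rest api")
	})
	return mux
}

// get serves a request in-process and returns the response body.
func get(mux *http.ServeMux, path string) string {
	rec := httptest.NewRecorder()
	mux.ServeHTTP(rec, httptest.NewRequest("GET", path, nil))
	return rec.Body.String()
}

func main() {
	mux := newMux()
	fmt.Println(get(mux, "/api/console"))
	fmt.Println(get(mux, "/api/v1/chat/completions"))
}
```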
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
Ran `sg start minimal` and confirmed that I'm able to visit the URL
https://sourcegraph.test:3443/api/console
On the main branch, the same URL leads to a 404 page.
I also confirmed that the `/api/v1/chat/completions` endpoint still
works as expected.
```hurl
POST https://sourcegraph.test:3443/api/v1/chat/completions
Content-Type: application/json
Authorization: Bearer {{token}}
{
"model": "anthropic::unknown::claude-3-sonnet-20240229",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Respond with \"no\" and nothing else"
}
]
}
],
"temperature": 1,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0
}
```
```sh
❯ hurl hurl-scratchpad/openai-sg.hurl
{"id":"chat-1727acdf-6850-4387-950b-2e89850071fa","choices":[{"finish_reason":"end_turn","index":0,"message":{"content":"no","role":"assistant"}}],"created":1723536215,"model":"anthropic::unknown::claude-3-sonnet-20240229","system_fingerprint":"","object":"chat.completion","usage":{"completion_tokens":0,"prompt_tokens":0,"total_tokens":0}}
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Our redis `KeyValue` interface has a built-in way to prefix all keys.
This is helpful for integration tests, to contain all keys within a test
namespace.
This PR solidifies the implementation by pushing down the prefixing as
much as possible. In doing so, it fixes a bug with the `Keys()` method
where we forgot to prefix the pattern. This makes it easier to add new
methods to the interface without forgetting to add the prefix.
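The `Keys()` bug can be illustrated with a toy matcher: the glob pattern must be prefixed before matching, and the prefix stripped from the results so callers never see it. This sketch uses `path.Match` over a map as a rough stand-in for Redis glob matching, whose semantics differ slightly:

```go
package main

import (
	"fmt"
	"path"
	"sort"
	"strings"
)

// keysWithPrefix sketches the fixed behavior: prefix the pattern before
// matching, then strip the prefix from the returned keys.
func keysWithPrefix(store map[string]string, prefix, pattern string) []string {
	var out []string
	for k := range store {
		if ok, _ := path.Match(prefix+pattern, k); ok {
			out = append(out, strings.TrimPrefix(k, prefix))
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	store := map[string]string{
		"test:job:1":  "a",
		"test:job:2":  "b",
		"other:job:3": "c", // outside the namespace, never returned
	}
	fmt.Println(keysWithPrefix(store, "test:", "job:*"))
}
```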
We hit a deadlock when deploying this migration to S2. This is because,
within the migration's transaction, we likely didn't obtain table locks
in the same order as a transaction in our application code.
So this commit introduces a new migration metadata field,
"noTransaction". The migrator documentation says you should create a
migration per needed transaction; however, this would require us to
create hundreds of migrations. We believe the better approach is
introducing this field and barely advertising it.
When reading the code that actually runs migrations, there is no extra
logic outside of BEGIN; run_migration; COMMIT, so this change is
safe.
We update the migrations to avoid duplicating the function name we
introduce in case something goes wrong (now that the transaction could
leak out the function name).
Test Plan: The actual migrations are tested by go test. I added a test
assertion that we don't call Transact, but to be honest that is super
sketchy. However, we couldn't actually find any test fixtures which
actually run against the DB. So that would require a much deeper
investment for how simple the code change is.
Co-authored-by: Erik Seliger <erikseliger@me.com>
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
This PR adds the ability to use `--type minor` when running `sg release
create` during the release process.
For the time being this step is the _same_ as `--type patch`, which is
the default; however, it allows us to differentiate the two types now
and prepares for when/if the two types diverge. This also clears up
some confusion, as the `sg release` command _can_ already accept
`--type minor`, and one would expect that to be the choice when you are
in fact cutting a minor release.
Closes:
https://linear.app/sourcegraph/issue/REL-351/sourcegraphsourcegraph64377-fixrelease-add-minor-step-to-internal
## Test plan
Tested locally with `--type minor` tag.
```shell
➜ sourcegraph git:(08-08-jdp_release_minor-flag-addition) sg release create --version 5.6.877 --type minor
👉 [ setup] Finding release manifest in "."
[ setup] No explicit branch name was provided, assuming current branch is the target: 08-08-jdp_release_minor-flag-addition
[ setup] Found manifest for "sourcegraph" (github.com/sourcegraph/sourcegraph)
[ meta] Owners: @sourcegraph/release
[ meta] Repository: github.com/sourcegraph/sourcegraph
👉 [ vars] Variables
[ vars] version="v5.6.877"
[ vars] tag="5.6.877"
[ vars] config="{\"version\":\"v5.6.877\",\"inputs\":\"\",\"type\":\"minor\"}"
[ vars] git.branch="08-08-jdp_release_minor-flag-addition"
[ vars] is_development="false"
.... Stuff here
[ buildkite] Build created, see:
[ buildkite] "https://buildkite.com/sourcegraph/sourcegraph/builds/287192"
[ step] Step "buildkite" succeeded
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
Internal change, N/A
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
In codenav, we use very few APIs from gitserver, so let's use a narrower
interface (which we can potentially fake in the future) instead
of depending on gitserver.Client directly.
In the future this will allow us to attribute stack traces collected by
pprof to a tenant. This only sets it for the HTTP middleware. I am
unsure how to achieve the same thing for gRPC, since that uses
propagators.
Test Plan: captured a goroutine profile and saw some goroutines with a
tenant label
---------
Co-authored-by: Erik Seliger <erikseliger@me.com>
I was tired of seeing this warning when running gazelle:
gazelle: finding module path for import github.com/go-critic/go-critic/framework/linter: go: downloading github.com/go-critic/go-critic v0.11.4
go: module github.com/go-critic/go-critic@upgrade found (v0.11.4), but does not contain package github.com/go-critic/go-critic/framework/linter
This updates go-critic to the latest version, which makes the warning go
away since it now uses the new package path.
This change ensures we correctly handle:
1. In Enterprise Portal, where no active license is available, we return
ratelimit=0 intervalduration=0, from the source `PLAN` (as this is
determined by the lack of a plan)
2. In Cody Gateway, where intervalduration=0, we do not grant access to
that feature
## Test plan
Unit tests