## UI Updates for Perforce Depots and Git Repos
Fixes SRCH-530
**NOTE: This PR is a refactor of an earlier
[PR](https://github.com/sourcegraph/sourcegraph/pull/64014) that was
reverted. For that reason, the PR description is largely the same.**
This PR introduces changes to the UI to differentiate between Perforce
Depots and Git repositories. Below are the key changes included in this
commit:
### 1. Dynamic Top-Level Navigation
**For Perforce Depots:**

**For Git Repos:**

### 2. Tabs on Revision Picker
**For Perforce Depots:**
Since changelists would be the only tab, no tab bar is shown.

**For Git Repos:**
We have tabs for Branches, Tags, and Commits.

### 3. Commits/Changelists Page
**For Git Repos:**
The page displays Git commits.

**For Perforce Depots:**
The page displays Perforce changelists.

### 4. Vocabulary Adjustments
- We display either Git commit SHAs or Changelist IDs based on the
project type.
- For authorship, we use "submitted by" for Perforce and "committed by"
for Git.
- We refer to "Commits" for Git projects and "Changelists" for Perforce
projects.
**Examples:**
- **For Git Commits:**

- **For Perforce Changelists:**

### 5. URL Mapping
URLs are now structured differently based on the project type:
- **Commits Page:**
  - Git: `/[repo-name]/-/commits`
  - Perforce: `/[repo-name]/-/changelists`
- **Individual Item Page:**
  - Git: `/[repo-name]/-/commit/[commit-hash]`
  - Perforce: `/[depot-name]/-/changelist/[changelist-ID]`
When viewing a specific commit or changelist:
- **Git:** `/[repo-name]@[git-commit-hash]`
- **Perforce:** `/[repo-name]@changelist/[changelist-id]`
_NOTE: The value displayed in the search field will also change
accordingly._
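To make the mapping concrete, here's a minimal Go sketch of the revision URL logic, assuming a hypothetical `revisionURL` helper and `isPerforceDepot` flag (the real logic lives in the web app's URL utilities):
```go
package main

import "fmt"

// revisionURL builds the revision-specific URL described above. The helper
// name and flag are illustrative, not the actual implementation.
func revisionURL(repoName, revision string, isPerforceDepot bool) string {
	if isPerforceDepot {
		// Perforce: /[repo-name]@changelist/[changelist-id]
		return fmt.Sprintf("/%s@changelist/%s", repoName, revision)
	}
	// Git: /[repo-name]@[git-commit-hash]
	return fmt.Sprintf("/%s@%s", repoName, revision)
}

func main() {
	fmt.Println(revisionURL("my/depot", "12345", true))           // /my/depot@changelist/12345
	fmt.Println(revisionURL("github.com/o/r", "d34db33f", false)) // /github.com/o/r@d34db33f
}
```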
### What is left to be done?
**On repo search results, when searching a revision, we still show the
Git commit SHA instead of the changelist ID for Perforce depots:**

I plan to make a follow-up issue for this and begin work on it
immediately. It's a little trickier than the other changes because in
the RepositoryMatch type, there is no value that can help us determine
whether a project is a depot or a repo. We need to find another way to
fetch that data.
### Request for reviewers:
1. Please try to break these new features and tell me what you find. I
stumbled on a number of little gotchas while working on this, and I'm
sure I've missed some.
## Test plan
- Manual/Visual testing
- Adjust e2e and integration tests to obtain a passing CI
- Test directly visiting a URL versus getting there via click
- Add unit tests for new/updated helper functions
---------
Co-authored-by: Camden Cheek <camden@ccheek.com>
Fixes CODY-3085
Fixes CODY-3086
Previously, there was no way for OpenAI clients to list the available
models on Sourcegraph or query metadata about a given model ID ("model
ref" using our internal terminology). This PR fixes that problem AND
additionally adds infrastructure to auto-generate Go models from a
TypeSpec specification.
[TypeSpec](https://typespec.io/) is an IDL to document REST APIs,
created by Microsoft. Historically, the Go code in this repository has
been the single source of truth about what exact JSON structures are
expected in HTTP request/response pairs in our REST endpoints. This new
TypeSpec infrastructure allows us to document these shapes at a higher
abstraction level, which has several benefits including automatic
OpenAPI generation, which we can use to generate docs on
sourcegraph.com/docs or automatically generate client bindings in
TypeScript (among many other use-cases).
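For a flavor of what the generated Go models might look like, here is a sketch based on the OpenAI `/models` response shape; the field names mirror the OpenAI schema, but the actual generated code may differ:
```go
// Package llmapi sketch; not the actual generated code.
package llmapi

// Model mirrors the OpenAI "model" object returned by the models endpoints.
type Model struct {
	ID      string `json:"id"`       // e.g. "anthropic::unknown::claude-3-haiku-20240307"
	Object  string `json:"object"`   // always "model"
	Created int64  `json:"created"`  // Unix creation timestamp
	OwnedBy string `json:"owned_by"` // owning organization
}

// ModelList mirrors the OpenAI list wrapper.
type ModelList struct {
	Object string  `json:"object"` // always "list"
	Data   []Model `json:"data"`
}
```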
I am planning to write an RFC to propose we start using TypeSpec for new
REST endpoints going forward. If the RFC is not approved then we can
just delete the new `tools/typespec_codegen` directory and keep the
generated code in the repo. It won't be a big difference in the end
compared to our current manual approach of writing Go structs for HTTP
APIs.
## Test plan
See test cases. I additionally wrote a basic python script with the
official OpenAI client to test that it works with this endpoint. First,
I ran `sg start minimal`. Then I wrote this script
```py
import os
from openai import OpenAI
from dotenv import load_dotenv
import httpx
load_dotenv()
openai = OpenAI(
    # base_url="https://api.openai.com/v1",
    # api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://sourcegraph.test:3443/api/v1",
    api_key=os.getenv("SRC_ACCESS_TOKEN"),
    http_client=httpx.Client(verify=False),
)


def main():
    response = openai.models.list()
    for model in response.data:
        print(model.id)


if __name__ == "__main__":
    main()
```
Finally, I ran
```
❯ python3 models.py
anthropic::unknown::claude-3-haiku-20240307
anthropic::unknown::claude-3-sonnet-20240229
fireworks::unknown::starcoder
```
## Changelog
* New `GET /.api/llm/models` and `GET /.api/llm/models/{modelId}` REST
API endpoints to list available LLM models on the instance and to get
information about a given model. These endpoints are compatible with the
`/models` and `/models/{modelId}` endpoints from OpenAI.
We want to discourage direct usage of the Redis pool in favor of routing
all calls through the main `KeyValue` interface. This PR removes several
usages of `KeyValue.Pool`. To do so, it adds "PING" and "MGET" to the
`KeyValue` interface.
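A trimmed-down sketch of the extension (the real `KeyValue` interface in `internal/redispool` has many more operations and richer reply types, so treat the shapes below as assumptions):
```go
package redispool

// Value is a simplified stand-in for the interface's Redis reply type.
type Value any

// KeyValue is the main entry point for Redis access.
type KeyValue interface {
	Get(key string) Value
	Set(key string, value any) error

	// Added so callers no longer need to reach for KeyValue.Pool:
	Ping() error               // replaces raw "PING" calls against the pool
	MGet(keys ...string) Value // replaces raw "MGET" calls against the pool
}
```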
Docs here: https://github.com/sourcegraph/docs/pull/561
This PR adds support for using Bitbucket Server OAuth2 application links
for sign-in and permission syncing.
When used for permission syncing, the user's OAuth token is used to
fetch user permissions (and permissions are now fetched via the server).
## Test plan
Tests added and updated.
## Changelog
- Sourcegraph now supports Bitbucket Server OAuth2 application links for
user sign-in and permission syncing.
**chore(appliance): version list obtained from backend**
Instead of calling the release registry directly from the frontend.
This commit is just preparation for a fallback mechanism for users that
do not want the external dependency on the release registry.
**feat(appliance): optionally load pinned releases file**
Instead of calling the release registry. This is a fallback mechanism
for airgapped users.
**feat(appliance): respect pinned release versions during self-update**
This PR removes the `redispool.RedisKeyValue` constructor in favor of
the `New...KeyValue` methods, which do not take a pool directly. This
way callers won't create a `Pool` reference, allowing us to track all
direct pool usage through `KeyValue.Pool()`.
This also simplifies a few things:
* Tests now use `NewTestKeyValue` instead of dialing up localhost
directly (see the sketch below)
* We can remove duplicated Redis connection logic in Cody Gateway
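A minimal sketch of the new test setup (the helper's exact signature is assumed here):
```go
package example

import (
	"testing"

	"github.com/sourcegraph/sourcegraph/internal/redispool"
)

func TestUsesRedis(t *testing.T) {
	// Previously, tests built a *redis.Pool against localhost and wrapped it
	// with the now-removed RedisKeyValue constructor. The test helper hides
	// all connection details.
	kv := redispool.NewTestKeyValue(t)
	if err := kv.Set("k", "v"); err != nil {
		t.Fatal(err)
	}
}
```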
Closes #srch-906
This commit adds the Cody chat page to Svelte. It simply reuses the
existing wrapper around the React component and renders it in a
standalone page.
When merged, this will cause the Cody chat page to be handled by the new
web app on dotcom by default.
## Test plan
- Verified that chat loads and the tabs are clickable.
- Verified that the scroll behavior works as in the React app.
- Verified that the sidebar chat still works as expected (scroll
behavior, default context loading/switching)
- The `time.Second` frequency is too frequent for checking whether the
job should run; I think I set this during testing
- Adjust the Slack messaging to say `Active license`
- Adjust the Slack messaging to include the Salesforce subscription ID
## Test plan
Tests
Previously, it took ~6 seconds for a single edit/test/debug feedback
loop in the `llmapi` module. After this change, it's now 1-2s.
The reason the feedback loop was slow was that we depended on the
`//cmd/frontend/internal/modelconfig` target, which transitively brings
in `graphqlbackend` and all the migration code, which adds huge overhead
to Go link times. It was relatively easy to untangle this dependency so
I went ahead and removed it to boost my local feedback loop.
## Test plan
Green CI.
To measure the timing, I ran the tests, made a tiny change, and ran the
tests again to measure the total time to build+test.
```
# Before
❯ time go test -timeout 30s github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi
ok github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi 2.394s
go test -timeout 30s 4.26s user 4.73s system 166% cpu 5.393 total
# After
❯ time go test -timeout 30s github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi
ok github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi 0.862s
go test -timeout 30s 1.20s user 1.21s system 135% cpu 1.774 total
```
Fixes CODY-3269
Previously, the OpenAI-compatible API endpoints had paths like
`/api/v1/chat/completions`, which went against an existing convention of
keeping all APIs under the `/.api` prefix. We have a fair amount of
internal tooling centered around the assumption that APIs have the
`/.api` prefix so this PR corrects the mistake and moves the
`/api/v1/chat/completions` endpoint to `/.api/llm/chat/completions`.
I went with the prefix `/.api/llm` since these endpoints allow clients
to interact with LLM features like `/chat/completions` or listing model
information. These APIs happen to be compatible with OpenAI APIs, but I
think it would be more confusing to add something like "openai" or
"openai-compatible" to the API endpoint. We can just document on our
website that these endpoints are compatible with OpenAI clients.
## Test plan
Green CI. I also manually confirmed that I was able to use an OpenAI
client to send requests to the new APIs.
## Changelog
* Moved `/api/v1/chat/completions` to `/.api/llm/chat/completions`. The
functionality is unchanged.
Fixes CODY-3267
Previously, requests to `/api/` matched the new `publicrestapi` module,
which meant that requests to the GraphQL `/api/console` no longer
worked. This PR fixes the problem by narrowing the prefix-match to
`/api/v1` so that it no longer matches `/api/console`.
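A toy sketch of the narrowed matching, with illustrative route names:
```go
package main

import (
	"fmt"
	"strings"
)

// route shows why the fix works: before, a prefix match on "/api/" also
// captured /api/console; matching "/api/v1/" lets the console through.
func route(path string) string {
	if strings.HasPrefix(path, "/api/v1/") {
		return "publicrestapi"
	}
	if path == "/api/console" {
		return "graphql console"
	}
	return "other"
}

func main() {
	fmt.Println(route("/api/v1/chat/completions")) // publicrestapi
	fmt.Println(route("/api/console"))             // graphql console
}
```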
I kept the scope of this PR narrow and only fixed the `/api/console`
bug. I will share a separate RFC to seek input on the tradeoffs between
`/api/v1` and `/.api`. I can make that change separately if there's wide
consensus in #wg-architecture that we want to keep all API endpoints
(public and internal-facing) under `/.api`.
## Test plan
Ran `sg start minimal` and confirmed that I'm able to visit the URL
https://sourcegraph.test:3443/api/console
On the main branch, the same URL leads to a 404 page.
I also confirmed that the `/api/v1/chat/completions` endpoint still
works as expected.
```hurl
POST https://sourcegraph.test:3443/api/v1/chat/completions
Content-Type: application/json
Authorization: Bearer {{token}}
{
  "model": "anthropic::unknown::claude-3-sonnet-20240229",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Respond with \"no\" and nothing else"
        }
      ]
    }
  ],
  "temperature": 1,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```
```sh
❯ hurl hurl-scratchpad/openai-sg.hurl
{"id":"chat-1727acdf-6850-4387-950b-2e89850071fa","choices":[{"finish_reason":"end_turn","index":0,"message":{"content":"no","role":"assistant"}}],"created":1723536215,"model":"anthropic::unknown::claude-3-sonnet-20240229","system_fingerprint":"","object":"chat.completion","usage":{"completion_tokens":0,"prompt_tokens":0,"total_tokens":0}}
```
This change ensures we correctly handle:
1. In Enterprise Portal, when no active license is available, we return
ratelimit=0 and intervalduration=0 from the source `PLAN` (as this is
determined by the lack of a plan)
2. In Cody Gateway, when intervalduration=0, we do not grant access to
that feature (see the sketch below)
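A minimal sketch of the Cody Gateway side of the check, with illustrative names (the real logic lives in Cody Gateway's access code):
```go
package main

import (
	"fmt"
	"time"
)

type rateLimit struct {
	Limit    int64
	Interval time.Duration
}

// hasAccess treats a zero interval explicitly as "no access" rather than
// letting it read as an unlimited window.
func hasAccess(rl rateLimit) bool {
	return rl.Interval > 0 && rl.Limit > 0
}

func main() {
	fmt.Println(hasAccess(rateLimit{}))                                 // false: no active license
	fmt.Println(hasAccess(rateLimit{Limit: 10, Interval: time.Minute})) // true
}
```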
## Test plan
Unit tests
The `DeleteAllKeysWithPrefix` method is now only used in tests to ensure
the test keyspace is clear. This PR makes that test-only status explicit,
and simplifies the implementation so it no longer needs a script + direct
Redis connection.
Another tiny Redis refactor to prepare for multi-tenancy work.
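For illustration, one plausible shape of the simplified test-only helper, written against a hypothetical key-value interface (the method names are assumptions):
```go
package example

import "testing"

// keyValue is a hypothetical subset of the store's interface.
type keyValue interface {
	Keys(pattern string) ([]string, error) // wraps KEYS/SCAN
	Del(key string) error
}

// deleteAllKeysWithPrefix clears the test keyspace with plain key-value
// calls instead of a script on a dedicated Redis connection.
func deleteAllKeysWithPrefix(t *testing.T, kv keyValue, prefix string) {
	t.Helper()
	keys, err := kv.Keys(prefix + "*")
	if err != nil {
		t.Fatal(err)
	}
	for _, k := range keys {
		if err := kv.Del(k); err != nil {
			t.Fatal(err)
		}
	}
}
```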
This PR passes the 'boost' annotation down to searcher, so that it can
apply phrase boosting. For now, we just pass the boost to the Zoekt
query in hybrid search, which already gives a nice benefit since the
Zoekt results are streamed back first.
Note: this doesn't completely implement boosting in searcher, but it was
really simple and seemed worth it. We're purposefully not investing in
big searcher ranking improvements, since we think a better investment is
to unify logic across Zoekt + searcher.
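A rough sketch of the idea using a hypothetical query AST (the real change wraps the Zoekt query built during hybrid search); note that boosting only reweights scores, it never filters matches:
```go
package main

import "fmt"

// Query is a hypothetical search query node.
type Query interface{ String() string }

// Phrase matches a literal phrase.
type Phrase struct{ Text string }

func (p Phrase) String() string { return fmt.Sprintf("%q", p.Text) }

// Boost wraps a child query and multiplies its match scores by Weight.
type Boost struct {
	Child  Query
	Weight float64
}

func (b Boost) String() string {
	return fmt.Sprintf("boost(%s, x%.1f)", b.Child, b.Weight)
}

func main() {
	// Hybrid search can wrap the Zoekt-bound query like this, so boosted
	// phrase matches surface first in the streamed results.
	q := Boost{Child: Phrase{Text: "parse config"}, Weight: 20}
	fmt.Println(q) // boost("parse config", x20.0)
}
```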
Adds a super simple E2E test suite that must be run with `sg test
enterprise-portal-e2e` against a locally running Enterprise Portal
instance. This is not intended to be filled with super granular
assertions - it simply tests that everything is wired up correctly and
runs end-to-end.
Caught ~3 issues with this already and amended various downstack PRs
with the fixes 😆
Closes
https://linear.app/sourcegraph/issue/CORE-229/enterprise-portal-basic-manual-e2e-testing-setup
## Test plan
```
sg start dotcom
sg test enterprise-portal-e2e
```
No additional configuration is required; the defaults work as-is.
What: This PR does the bare minimum to migrate the current community
search pages to Svelte. A better strategy for managing them is needed in
the medium/long term.
How: The community pages live at the root (e.g. `/kubernetes`) which
complicates things, but I'll get to that later. The page is implemented
as a single parameterized route. A parameter matcher is used to validate
the community name. Because these pages should only be accessible on
dotcom, the matcher also validates whether or not we are on dotcom (if
not, the path will be matched against a different route).
The page config is stored in a separate module so that it's not included
in every page and so that it can be used in the integration test.
The loader and page implementation themselves are straightforward. I
made a couple of changes in other modules to make implementation easier:
- Extracted the parameter type of the `marked` function so that it can
be used as prop type.
- Added an `inline` option to `marked` that allows formatting markdown
as 'inline', i.e. without `p` wrapper.
- Added a `wrap` prop to `SyntaxHighlightedQuery.svelte` to configure
line wrapping of syntax highlighted search queries (instead of having to
overwrite styles with `:global`).
- Extended the route code generator to be able to handle single
parameter segments and the `communitySearchContext` matcher.
Because the community routes should only be available on dotcom, I added
a new tag to the code generator that allows it to include routes only
for dotcom.
Once we change how all this works and have community search pages live
under a different path we can simplify this again.
Result: side-by-side screenshots comparing the React and Svelte
implementations (images omitted).
## Test plan
- New integration tests.
- Verified that `/kubernetes` shows a 'repo not found error' when
running against S2.
- Verified that `/kubernetes` shows the community page when running
against dotcom.
- Verified that `window.context.svelteKit.enabledRoutes` contains the
community page route in dotcom mode but not in enterprise mode.
Required to build an updated subscriptions management UI.
Most of the diff is generated proto, for some reason.
Closes https://linear.app/sourcegraph/issue/CORE-226
## Test plan
Integration tests
Connecting to the wrong DSN leads to a cryptic error:
```
Message: {"SeverityText":"FATAL","Timestamp":1723196934012096886,"InstrumentationScope":"init db (syntactic-codeintel-worker)","Caller":"shared/shared.go:87","Function":"github.com/sourcegraph/sourcegraph/cmd/syntactic-code-intel-worker/shared.initCodeintelDB","Body":"Failed to connect to codeintel database","Resource":{"service.name":"syntactic-code-intel-worker","service.version":"286647_2024-08-08_5.6-34a7914fb884","service.instance.id":"syntactic-code-intel-worker-7bb9ccc75c-mkzpk"},"Attributes":{"error":"database schema out of date"}}
```
## Test plan
- existing tests should continue to pass
Fixes CODY-3081
This is the first PR for the project [⏭️ Launch Cody API
Experimental](https://linear.app/sourcegraph/project/launch-cody-api-experimental-8fd5ec338bf4),
which falls under the umbrella of moving Cody's brains to the cloud.
Previously, there was no publicly available REST API for our customers
to interact with Cody. This is a frequently requested feature from
customers and prospects.
This PR adds a new `POST /api/v1/chat/completions` endpoint, which
should be compatible with existing OpenAI clients. The OpenAI API format
is increasingly becoming an industry standard so this seems like a good
first step towards exposing a stable publicly facing API for our
customers.
The goal is to add more Cody-specific APIs in the coming weeks to send
chat messages and reference context.
## Test plan
See added test cases.
## Changelog
* API: new publicly available `/api/v1/chat/completions` REST endpoint
that is compatible with OpenAI clients with some restrictions. The detailed list of restrictions will eventually be documented on sourcegraph.com/docs
On double-checking the various Cody Gateway access management CRUD, I
noticed that I did not properly implement the ability to remove an
override entirely - right now, removing a rate limit with `field_mask`
and no values results in a zero-value rate limit rather than a null one
as desired.
This change makes it so that if you provide a nil `*_rate_limit`
object, all field paths for that object are also set to nil, removing
the override and allowing the default active-license-plan-based rate
limits to apply.
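A sketch of the fix, using illustrative stand-ins for the proto-generated types:
```go
package example

import "google.golang.org/protobuf/types/known/fieldmaskpb"

// RateLimit and UpdateAccessRequest are illustrative stand-ins, not the
// actual generated messages.
type RateLimit struct {
	Limit            int64
	IntervalDuration int64
}

type UpdateAccessRequest struct {
	ChatCompletionsRateLimit *RateLimit
	UpdateMask               *fieldmaskpb.FieldMask
}

// expandNilRateLimits adds every field path of a nil *_rate_limit object
// to the field mask, so the override is cleared to NULL instead of being
// written back as a zero-value rate limit.
func expandNilRateLimits(req *UpdateAccessRequest) {
	if req.ChatCompletionsRateLimit == nil {
		req.UpdateMask.Paths = append(req.UpdateMask.Paths,
			"chat_completions_rate_limit.limit",
			"chat_completions_rate_limit.interval_duration",
		)
	}
}
```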
This also adds a minor adjustment to Cody Gateway to correctly handle
zero durations for rate limit intervals, which should be treated as "no
access".
## Test plan
Unit/integration tests, and a manual test:
- Check out https://github.com/sourcegraph/sourcegraph/pull/64090 and
run `sg start dotcom`
- Go to a subscription, add a rate limit override

- Click the 🗑️ button to remove the rate limit override

- Repeat above on each property
This change follows
https://github.com/sourcegraph/sourcegraph/pull/63858 by making the Cody
Access APIs _read_ from the Enterprise Portal database, instead of
dotcomdb, using the data that we sync from dotcomdb into Enterprise
Portal.
As part of this, I also expanded the existing "compatibility" test suite
(which compares the results of our dotcomdb queries against the existing
GraphQL resolvers in dotcom) to also compare the results of our new Cody
Access APIs, validating that they return the same access.
> [!WARNING]
> There is one behavioural change, which is that hashes of _expired
licenses_ will no longer be valid as access tokens. This shouldn't be an
issue if customers use zero-config (implied access token from their
license key) - I will do some outreach before rolling this out.
Subsequent PRs will implement write APIs.
Part of https://linear.app/sourcegraph/issue/CORE-218
Part of https://linear.app/sourcegraph/issue/CORE-160
## Test plan
Integration and unit tests at various layers
The utils are currently used outside of cmd/frontend, so to break the
internal -> cmd import I'm moving and merging the utils package with
internal/gqlutil.
Test plan: Just moved around and renamed a package; the Go compiler
doesn't complain.
This PR aims to craft the /internal/tenant package for use by all Sourcegraph cluster-internal services to properly scope data visibility to the correct requesting tenant.
For now, we only expose methods we know we will DEFINITELY need.
This PR also adds the required middlewares so we can start to tinker with it in implementations.
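A rough sketch of what such a middleware could look like; the header name and context API here are assumptions, not the actual `internal/tenant` surface:
```go
package tenant

import (
	"context"
	"net/http"
	"strconv"
)

type contextKey struct{}

// FromContext returns the tenant ID stored by the middleware, if any.
func FromContext(ctx context.Context) (int, bool) {
	id, ok := ctx.Value(contextKey{}).(int)
	return id, ok
}

// HTTPMiddleware rejects unparseable tenant IDs and stores valid ones in
// the request context for downstream data-visibility scoping.
func HTTPMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if raw := r.Header.Get("X-Sourcegraph-Tenant-ID"); raw != "" {
			id, err := strconv.Atoi(raw)
			if err != nil {
				http.Error(w, "invalid tenant ID", http.StatusBadRequest)
				return
			}
			r = r.WithContext(context.WithValue(r.Context(), contextKey{}, id))
		}
		next.ServeHTTP(w, r)
	})
}
```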
## Test plan
CI passes. We don't enforce anything for now except rejecting unparseable tenant IDs, which should be fine.
There was a TODO comment saying we wanted to consolidate them but hadn't yet, to keep an earlier diff smaller - so now we're doing that.
Test plan: Integration tests for search still pass.
To prevent cross-cmd imports in the future, moving the backend package into internal.
Test plan: Just moved a package around; the Go compiler doesn't complain and CI still passes.
These functions are not required to be called outside of frontend, so there's no need to reexport them. Instead, we consolidate the signout cookie logic in the session package.
Test plan: Just moved some code around; the Go compiler doesn't complain.
This PR makes the calls to create the OIDC provider explicit, so that we don't implicitly need to call a Refresh method, even if we might end up not needing `p.oidc`.
This is a start toward being able to create providers on the fly cheaply vs having a globally managed list of providers in memory.
To the best of my understanding, we already called refresh everywhere we do now, so the passive background goroutine didn't add much here.
Test plan: Auth with SAMS locally still works.
2nd attempt of #63111, a follow-up to
https://github.com/sourcegraph/sourcegraph/pull/63085
rules_oci 2.0 brings a lot of performance improvements around oci_image
and oci_pull, which will benefit Sourcegraph. It will also make RBE
faster and put less load on the remote cache.
However, 2.0 makes some breaking changes:
- oci_tarball's default output is no longer a tarball
- oci_image no longer compresses layers that are uncompressed; somebody
has to make sure all `pkg_tar` targets have a `compression` attribute
set to compress them beforehand.
- there is no curl fallback, but this is fine for Sourcegraph as it
already uses Bazel 7.1.
I checked all targets that use oci_tarball as much as I could, to make
sure nothing depends on the default tarball output of oci_tarball. There
was one target that used the default output, for which I left a TODO for
somebody else (somebody who is more on top of the repo) to tackle
**later**.
## Test plan
CI. Also run delivery on this PR (don't land those changes)
---------
Co-authored-by: Noah Santschi-Cooney <noah@santschi-cooney.ch>
Turns out, the `dev` tag is not a reliable indicator of whether a
license is for dev use only - let's just import all subscriptions into
prod, and drop the customer records from dev later.
## Test plan
CI
Adds a background job that can periodically import subscriptions,
licenses, and Cody Gateway access from dotcom.
Note that subscriptions and licenses cannot be deleted, so we don't need
to worry about that. Additionally licenses cannot be updated, so we only
need to worry about creation and revocation.
The importer can be configured with `DOTCOM_IMPORT_INTERVAL` - if zero,
the importer is disabled.
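A minimal sketch of the interval wiring (env handling assumed; the real importer runs as a background job, not a bare loop):
```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// An unset or unparseable DOTCOM_IMPORT_INTERVAL yields zero, which
	// disables the importer.
	interval, _ := time.ParseDuration(os.Getenv("DOTCOM_IMPORT_INTERVAL"))
	if interval <= 0 {
		fmt.Println("dotcom importer disabled")
		return
	}
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		fmt.Println("importing subscriptions, licenses, and Cody Gateway access")
	}
}
```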
Closes https://linear.app/sourcegraph/issue/CORE-216
## Test plan
```
DOTCOM_IMPORT_INTERVAL=10s sg start dotcom
```
Look for `service.importer` logs. Play around in
https://sourcegraph.test:3443/site-admin/dotcom/product/subscriptions/
to create and edit subscriptions, licenses, and Cody Gateway access.
Watch them show up in the database:
```
psql -d sourcegraph
sourcegraph# select * from enterprise_portal_subscriptions;
sourcegraph# select * from enterprise_portal_subscription_licenses;
sourcegraph# select * from enterprise_portal_cody_gateway_access;
```
---------
Co-authored-by: James Cotter <35706755+jac@users.noreply.github.com>
Previously, the QueuedCount method was confusing because:
1. By default, it actually returned the count for both the 'queued' and
'errored' states (despite the name just saying 'Queued').
2. There was an additional boolean flag for also returning entries in
the 'processing' state, which reduced clarity at call sites.
So I've changed the method to take a bitset instead, mirroring the
just-added Exists API, and renamed it to the more generic
'CountByState'.
While this does make call sites a bit more verbose, I think the
clarity win makes the change an overall positive one.
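A minimal sketch of the new shape, with illustrative bitset values:
```go
package example

// StateSet is a bitset of queue states.
type StateSet uint8

const (
	StateQueued StateSet = 1 << iota
	StateErrored
	StateProcessing
)

type Store struct {
	counts map[StateSet]int // toy stand-in for the real database query
}

// CountByState makes callers say exactly which states to count, e.g. the
// old QueuedCount(true) becomes:
//
//	s.CountByState(StateQueued | StateErrored | StateProcessing)
func (s *Store) CountByState(states StateSet) int {
	total := 0
	for state, n := range s.counts {
		if states&state != 0 {
			total += n
		}
	}
	return total
}
```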