This PR is stacked on top of all the prior work @chrsmith has done for
shuffling configuration data around; it implements the new "Self hosted
models" functionality.
## Configuration
Configuring a Sourcegraph instance to use self-hosted models involves
adding configuration like the following to the site config (if you set
`modelConfiguration`, you are opting in to the new system, which is in
early access):
```jsonc
// Setting this field means we are opting into the new Cody model configuration system.
"modelConfiguration": {
  // Disable use of Sourcegraph's servers for model discovery.
  "sourcegraph": null,
  // Create two model providers.
  "providerOverrides": [
    {
      // Our first model provider, "mistral", is a Huggingface TGI deployment which
      // hosts our Mistral model for chat functionality.
      "id": "mistral",
      "displayName": "Mistral",
      "serverSideConfig": {
        "type": "huggingface-tgi",
        "endpoints": [{ "url": "https://mistral.example.com/v1" }]
      }
    },
    {
      // Our second model provider, "bigcode", is a Huggingface TGI deployment which
      // hosts our bigcode/starcoder model for code completion functionality.
      "id": "bigcode",
      "displayName": "Bigcode",
      "serverSideConfig": {
        "type": "huggingface-tgi",
        "endpoints": [{ "url": "http://starcoder.example.com/v1" }]
      }
    }
  ],
  // Make these two models available to Cody users.
  "modelOverridesRecommendedSettings": [
    "mistral::v1::mixtral-8x7b-instruct",
    "bigcode::v1::starcoder2-7b"
  ],
  // Configure which models Cody will use by default.
  "defaultModels": {
    "chat": "mistral::v1::mixtral-8x7b-instruct",
    "fastChat": "mistral::v1::mixtral-8x7b-instruct",
    "codeCompletion": "bigcode::v1::starcoder2-7b"
  }
}
```
More advanced configurations are possible; the above is our blessed
configuration for today.
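The model references above appear to follow a `<providerID>::<apiVersion>::<modelID>` shape. As a minimal illustration (a hypothetical parser written for this description, not the actual `modelconfig` code), splitting such a reference looks like:

```go
package example

import (
	"fmt"
	"strings"
)

// parseModelRef splits a reference like "mistral::v1::mixtral-8x7b-instruct"
// into its provider ID, API version, and model ID components.
func parseModelRef(ref string) (provider, apiVersion, model string, err error) {
	parts := strings.SplitN(ref, "::", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("malformed model ref %q", ref)
	}
	return parts[0], parts[1], parts[2], nil
}
```

For example, `parseModelRef("bigcode::v1::starcoder2-7b")` yields provider `bigcode`, API version `v1`, and model `starcoder2-7b`.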
## Hosting models
Another major component of this work is starting to build up
recommendations around how to self-host models, which ones to use, how
to configure them, etc.
For now, we've been testing with these two on a machine with dual A100s:
* Huggingface TGI (this is a Docker container for model inference, which
provides an OpenAI-compatible API - and is widely popular)
* Two models:
* Starcoder2 for code completion; specifically `bigcode/starcoder2-15b`
with `eetq` 8-bit quantization.
* Mixtral 8x7b instruct for chat; specifically
`casperhansen/mixtral-instruct-awq` which uses `awq` 4-bit quantization.
This is our 'starter' configuration. Other models - specifically other
Starcoder2 and Mixtral instruct variants - certainly work too, and
higher-parameter versions may of course provide better results.
Documentation for how to deploy Huggingface TGI, suggested configuration,
and debugging tips is coming soon.
## Advanced configuration
As part of this effort, I have added a quite extensive set of
configuration knobs to the client-side model configuration (see `type
ClientSideModelConfigOpenAICompatible` in this PR).
Some of these configuration options are needed for things to work at a
basic level, while others (e.g. prompt customization) are not needed for
basic functionality but are very important for customers interested in
self-hosting their own models.
Today, Cody clients have a number of different _autocomplete provider
implementations_, each tying the model-specific logic that enables
autocomplete to a provider. For example, if you use a GPT model through
Azure OpenAI, the autocomplete provider for that is entirely different
from what you'd get if you used a GPT model through OpenAI officially.
This can lead to some subtle issues for us, so it is worth exploring
ways to have a _generalized autocomplete provider_. Since we _must_
address this problem for self-hosted models anyway, these configuration
knobs fed to the client from the server are a pathway to doing that -
initially just for self-hosted models, but in the future possibly
generalized to other providers.
## Debugging facilities
Working with customers in the past to use OpenAI-compatible APIs, we've
learned that debugging can be quite a pain: if you can't see what
requests the Sourcegraph backend is making and what it is getting back,
diagnosing problems is hard.
This PR implements quite extensive logging, plus a `debugConnections`
flag which can be turned on to enable logging of the actual request
payloads and responses. This is critical when a customer is trying to
add support for a new model, their own custom OpenAI API service, etc.
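To illustrate the kind of logging `debugConnections` enables (a hedged sketch; the flag's actual wiring and log output differ), one way to dump raw HTTP traffic behind a flag is a wrapping `http.RoundTripper`:

```go
package example

import (
	"log"
	"net/http"
	"net/http/httputil"
)

// debugTransport wraps an http.RoundTripper and, when enabled, logs the
// full request payload and the response headers. Illustration only; the
// real debugConnections logging goes through Sourcegraph's logger.
type debugTransport struct {
	base    http.RoundTripper
	enabled bool
}

func (t *debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	if t.enabled {
		if dump, err := httputil.DumpRequestOut(req, true); err == nil {
			log.Printf("request:\n%s", dump)
		}
	}
	resp, err := t.base.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	if t.enabled {
		// Dump headers only: for SSE responses the body must be logged
		// incrementally as the stream arrives, not all at once.
		if dump, err := httputil.DumpResponse(resp, false); err == nil {
			log.Printf("response:\n%s", dump)
		}
	}
	return resp, nil
}
```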
## Robustness
Working with customers in the past, we also learned that various parts
of our backend `openai` provider were not very robust. For example, [if
more than one message was present it was a fatal
error](https://github.com/sourcegraph/sourcegraph/blob/main/internal/completions/client/openai/openai.go#L305),
and if the SSE stream yielded `{"error"}` payloads, they were silently
ignored. Similarly, the SSE event stream parser we use is heavily
tailored to [the exact response
structure](https://github.com/sourcegraph/sourcegraph/blob/main/internal/completions/client/openai/decoder.go#L15-L19)
which OpenAI's official API returns, and is therefore quite brittle when
connecting to a different SSE stream.
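To make the `{"error"}` case concrete, here is a deliberately simplified, stdlib-only sketch (not the forked provider itself, and real SSE allows multi-line `data` fields, which is exactly why a spec-compliant parser matters) that surfaces error payloads instead of dropping them:

```go
package example

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// scanSSEData reads "data:" lines from an SSE stream and fails fast when
// the server reports an error payload, instead of silently ignoring it.
func scanSSEData(r io.Reader, onData func(json.RawMessage) error) error {
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		data, ok := strings.CutPrefix(scanner.Text(), "data:")
		if !ok {
			continue // other fields, comments, blank event separators
		}
		data = strings.TrimSpace(data)
		if data == "[DONE]" {
			return nil
		}
		// Surface {"error": ...} payloads as real errors.
		var payload struct {
			Error json.RawMessage `json:"error"`
		}
		if json.Unmarshal([]byte(data), &payload) == nil && payload.Error != nil {
			return fmt.Errorf("SSE stream returned error: %s", payload.Error)
		}
		if err := onData(json.RawMessage(data)); err != nil {
			return err
		}
	}
	return scanner.Err()
}
```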
For this work, I _started by forking_ our
`internal/completions/client/openai` provider and made a number of major
improvements to it to make it more robust, handle errors better, etc.
I have also replaced the custom SSE event stream parser - which was not
spec-compliant and was brittle - with a proper SSE event stream parser
that recently popped up in the Go community:
https://github.com/tmaxmax/go-sse
My intention is that after more extensive testing, this new
`internal/completions/client/openaicompatible` provider will be more
robust, more correct, and all around better than
`internal/completions/client/openai` (and possibly the azure one) so
that we can just supersede those with this new `openaicompatible` one
entirely.
## Client implementation
Much of the work done in this PR is just "let the site admin configure
things, and broadcast that config to the client through the new model
config system."
Actually getting the clients to respect the new configuration is a task
I am tackling in future `sourcegraph/cody` PRs.
## Test plan
1. This change currently lacks unit/regression tests; that is a major
noteworthy point. I will follow up with those in a future PR.
* However, these changes are **incredibly** isolated, clearly affecting
only customers who opt in to this new self-hosted models configuration.
* Most of the heavy lifting (SSE streaming, shuffling data around) is
done in other, well-tested codebases.
2. Manual testing has played a big role here, specifically:
* Running a dev instance with the new configuration, actually connected
to Huggingface TGI deployed on a remote server.
* Using the new `debugConnections` mechanism (which customers would use)
to directly confirm requests are going to the right places, with the
right data and payloads.
* Confirming with a new client (changes not yet landed) that
autocomplete and chat functionality work.
Can we use more testing? Hell yeah, and I'm going to add it soon. Does
it work quite well, with little room for error? Also yes.
## Changelog
Cody Enterprise: added a new configuration for self-hosting models.
Reach out to support if you would like to use this feature as it is in
early access.
---------
Signed-off-by: Stephen Gutekanst <stephen@sourcegraph.com>
The Prompt Library lets you create, share, and browse chat prompts for
use with Cody. Prompts are owned by users or organizations, and site
admins can make prompts public so that all users on the instance can see
and use them.
A prompt is just plain text for now, and you can see a list of prompts
in your Prompt Library from within Cody chat
(https://github.com/sourcegraph/cody/pull/4903).
See https://www.loom.com/share/f3124269300c481ebfcbd0a1e300be1b.
Depends on https://github.com/sourcegraph/cody/pull/4903.

## Test plan
Add a prompt on the web. Ensure you can access it from Cody.
## Changelog
- The Prompt Library lets you create, share, and browse chat prompts for
use with Cody. Prompts are owned by users or organizations, and site
admins can make prompts public so that all users on the instance can see
and use them. To use a prompt from your Prompt Library in Cody, select
it in the **Prompts** dropdown in the Cody chat message field.
Closes https://linear.app/sourcegraph/issue/SRC-459/
This PR adds support for saving and retrieving the IP addresses
associated with each path rule in the sub_repo_permissions store.
It does this by:
**Adding a new permissions type to the internal/authz package**:
1be7df6d79/internal/authz/iface.go (L52-L96)
**Adding new `*WithIPs` versions of all the setter and getter methods**
The new setter methods use the `authz.SubRepoPermissionsWithIPs` type
above and write to the appropriate `ips` column in the DB.
The new getter methods retrieve the IP addresses associated with each
path entry. However, there is an additional complication here: it's
possible for someone to call the `*WithIPs` getters when the ips column
is still NULL (indicating that the Perforce syncer hasn't been updated /
run in order to save the IP addresses from the protections table yet).
| repo_id | user_id | version | updated_at | paths | ips |
|---------|---------|---------|------------|-------|-----|
| 1 | 1 | 1 | 2023-07-01 10:00:00 | {`"/depot/main/..."`, `"/depot/dev/..."`, `"-/depot/secret/..."`} | NULL |
| 2 | 1 | 1 | 2023-07-01 11:00:00 | {`"/depot/public/..."`, `"-/depot/private/..."`} | NULL |
In order to address this, the getters each take a `backfill` boolean
that lets the caller choose the behavior they want (see the sketch after
this list).
- If `backfill = true`, paths without IP entries are returned with a `*`
(wildcard) IP, indicating that any client IP address is okay. (This is
effectively the behavior we have today, since we don't check IPs for
sub_repo_permissions.) I imagine this can be used when callers don't
care about enforcing IP-based permissions (such as when IP address
enforcement is disabled in site configuration).
- If `backfill = false` and the IPs column is NULL, an error is returned
instead of backfilling ("The IP addresses associated with this
sub-repository-permissions entry have not been synced yet."). This lets
callers that care about IP address enforcement know _explicitly_ that
the IP address information hasn't been updated yet - and that we
therefore can't know whether or not the user is able to view the file
(e.g. when IP-based enforcement is enabled).
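For illustration, the getter behavior described above, sketched with hypothetical names and signatures (the real ones are in this PR):

```go
package example

import (
	"context"
	"errors"
)

// SubRepoPermissionsWithIPs pairs each path rule with the IP pattern it
// applies to; a sketch mirroring the authz type referenced above.
type SubRepoPermissionsWithIPs struct {
	Paths []string
	IPs   []string // same length as Paths
}

var errIPsNotSynced = errors.New(
	"the IP addresses associated with this sub-repository-permissions entry have not been synced yet")

// getWithIPs sketches the *WithIPs getter semantics when the ips column
// is NULL (nil here): backfill=true substitutes the "*" wildcard
// (today's effective behavior), while backfill=false returns an explicit
// error so callers that enforce IP-based permissions know the data isn't
// there yet.
func getWithIPs(ctx context.Context, paths, ips []string, backfill bool) (*SubRepoPermissionsWithIPs, error) {
	if ips == nil {
		if !backfill {
			return nil, errIPsNotSynced
		}
		ips = make([]string, len(paths))
		for i := range ips {
			ips[i] = "*" // any client IP address is okay
		}
	}
	return &SubRepoPermissionsWithIPs{Paths: paths, IPs: ips}, nil
}
```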
**Ensuring that the old setter methods set the IPs column to NULL**:
self-explanatory - if someone uses the non-`*WithIPs` variants of the
setters, we want to zero out that column so that we don't leave stale /
inconsistent information for those path entries.
---
Overall, this design adds the new IP address functionality without
having to immediately update all the call sites in the codebase and
force them to interpret all this information (which would make for a
gargantuan PR). Eventually, we should be able to simply delete the old
versions of the setters/getters once the IP address functionality has
been threaded through everywhere.
## Test plan
Extensive unit tests.
For each new setter and getter, I added unit tests along all of the
following dimensions:
- **initial store state**: empty database, database seeded with
permissions with no IP information (paths column only), database seeded
with permissions that have the IP information synced
- **insertion method**: was the data for the test inserted **with IP
information** (using the `withIP` variant of upsert, etc.), or was it
inserted the old legacy way with no IP information
- **retrieval method**: was the data retrieved with the legacy getters
(which don't look at the IP information), with the new IP getters that
backfill (if the IP information for a paths entry hasn't been synced
yet, they return `*` for that entry), or with the ones that avoid
backfilling (return the information in the IPs column, or hard-error)?
## Changelog
- The sub_repo_permissions database store can now save and retrieve the
IP addresses associated with each path rule.
**Public saved searches will let us make global saved searches for
dotcom and for customers to help them discover and share awesome search
queries!**
Saved searches now have:
- Visibility (public vs. secret). Only site admins may make a saved
search public. Secret saved searches are visible only to their owners
(either a user, or all members of the owning org). A public saved search
can be viewed by everyone on the instance.
- Draft status: If a saved search's "draft" checkbox is checked, that
means that other people shouldn't use that saved search yet. You're
still working on it.
- Timestamps: The last user to update a saved search and the creator of
the saved search are now recorded.
Also adds a lot more tests for saved search UI and backend code.



## Test plan
Create a saved search. Ensure it's in secret visibility to begin with.
As a site admin, make it public. Ensure other users can view it, and no
edit buttons are shown. Try changing visibility back and forth.
## Changelog
- Saved searches can now be made public (by site admins), which means
all users can view them. This is a great way to share useful search
queries with all users of a Sourcegraph instance.
- Saved searches can be marked as a "draft", which is a gentle indicator
that other people shouldn't use it yet.
I noticed that the commit page doesn't render well on mobile when the
commit has a commit message.
This commit refactors how the `Commit` component is rendered, including
on mobile, which affects both the commit**s** and the commit page.
The two most important changes:
- The component now uses CSS grid to be more flexible about how
individual elements are arranged.
- On mobile we don't expand the message inline anymore but instead show
it full screen. I think that works well for the commits list too,
because now you can open and read a longer commit message without having
to scroll the commits list itself.
## Test plan
Manually inspecting the commits and commit pages. Opened a long commit
message to test that the message is properly scrollable.
Opsgenie alert notifications for critical alerts should be enabled by
default for production projects or where `env.alerting.opsgenie` is set
to true.
Closes CORE-223
## Test plan
Tested locally by running `sg msp gen` for a `prod` env which doesn't
have an alerting config, and verifying that notification suppression was
disabled.
Set `env.alerting.opsgenie` to false, which enabled suppression again.
No changes to `test` environments unless `env.alerting.opsgenie` is set
to true.
This PR modifies the sub_repo_permissions database table to store the IP
addresses associated with each Perforce path rule, as part of the
IP-based sub-repo permissions work.
The new IP column is implemented as a `text[]`, similar to the paths
column. The IP address associated with `paths[0]` is stored in the ips
column at `ips[0]`.
For example, the following protections table
```
Protections:
read user emily * //depot/elm_proj/...
write group devgrp * //...
write user * 192.168.41.0/24 -//...
write user * [2001:db8:1:2::]/64 -//...
write user joe * -//...
write user lisag * -//depot/...
write user lisag * //depot/doc/...
super user edk * //...
```
turns into the following rows in the sub_repo_permissions table
| repo_id | user_id | version | updated_at | paths | ips |
|---------|---------|---------|------------|-------|-----|
| 1 | 1 | 1 | 2023-07-01 10:00:00 | {`"//depot/elm_proj/..."`} | {`"*"`} |
| 1 | 2 | 1 | 2023-07-01 10:00:00 | {`"//..."`} | {`"*"`} |
| 1 | 3 | 1 | 2023-07-01 10:00:00 | {`"-//..."`} | {`"192.168.41.0/24"`} |
| 1 | 4 | 1 | 2023-07-01 10:00:00 | {`"-//..."`} | {`"[2001:db8:1:2::]/64"`} |
| 1 | 5 | 1 | 2023-07-01 10:00:00 | {`"-//..."`} | {`"*"`} |
| 1 | 6 | 1 | 2023-07-01 10:00:00 | {`"-//depot/..."`, `"//depot/doc/..."`} | {`"*"`, `"*"`} |
| 1 | 7 | 1 | 2023-07-01 10:00:00 | {`"//..."`} | {`"*"`} |
## Test plan
The unit tests are in the sub_repo_permissions store PR built on top of
this one:
https://app.graphite.dev/github/pr/sourcegraph/sourcegraph/63811/internal-database-sub_repo_permissions-modify-store-to-be-able-to-insert-ip-based-permissions
## Changelog
- The sub_repo_permissions table now has an ips column to store the IP
address associated with each path rule.
The branches page didn't work well on mobile and neither did the tags
page if long tag names were present.
This commit changes how the information is displayed and rendered,
especially on mobile.
I also added additional links to each row to make navigating to relevant
places easier.
## Test plan
Manual inspection of pages in various screen sizes.
This improves the typings to remove some inscrutable inferred types and
some weird type errors if you didn't use React.PropsWithChildren. It
also refactors the code and exposes a new, simpler
`AuthenticatedUserOnly` component that can be used when you don't need
prop propagation of `authenticatedUser`.
## Test plan
CI
Contributes to SRCH-738
Notably, this does not yet identify the root cause of SRCH-738, but it
does identify and fix some confounding bugs. It's possible that these
actually also _cause_ some of the issues in SRCH-738, but I wanted to at
least push these to dotcom, where we can reproduce some of the
weirdness. At the very least, it doesn't explain the auth errors being
reported.
The goal of this PR is to increase the stability of web-sveltekit
e2e-tests so that we don't have to rely on manual runs anymore. They
have previously been disabled due to a high number of failures:
https://github.com/sourcegraph/sourcegraph/pull/63874
---
To improve the stability of web-sveltekit e2e-tests, I used `sg ci bazel
test //client/web-sveltekit:e2e_test --runs_per_test=N` with N=5,10,15
to see which tests break under different levels of pressure on the
machine. The logs suggested it was mostly timeouts, which got worse when
increasing N. That means we can check where tests will break due to
timeouts, but we don't need to raise timeouts so far that they would
survive N=20.
With N=5, 10 we get a good understanding of whether our timeouts are
high enough.
You can see two CI runs here after applying higher timeouts and skipping
a consistently failing test:
- N=5:
https://buildkite.com/sourcegraph/sourcegraph/builds/283011/waterfall
- N=10:
https://buildkite.com/sourcegraph/sourcegraph/builds/283013/waterfall
---
From the logs of another run (I don't have the link anymore), we can see
that some tests take up to 50s, so a timeout of 60s (instead of the
default 30s) should be a good new ceiling for CI.
```
Slow test file: [chromium] › src/routes/[...repo=reporev]/(validrev)/(code)/-/blob/[...path]/page.spec.ts (48.9s)
--
| Slow test file: [chromium] › src/routes/[...repo=reporev]/(validrev)/(code)/page.spec.ts (45.0s)
| Slow test file: [chromium] › src/routes/search/page.spec.ts (40.6s)
| Slow test file: [chromium] › src/routes/[...repo=reporev]/(validrev)/(code)/-/tree/[...path]/page.spec.ts (31.7s)
| Slow test file: [chromium] › src/routes/layout.spec.ts (31.5s)
```
## Test plan
CI
It turns out that Kubernetes objects constructed using client-go don't
know their own TypeMeta. There is code in client-go that figures out
which resource-scoped HTTP path to call on the kube-apiserver by looking
up a mapping of Go object kinds to k8s kinds. A similar facility appears
to be exposed to the user by apiutil.GVKForObject().
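A sketch of the kind of workaround this enables (assuming controller-runtime's `apiutil` package; the exact call sites in our reconciler differ): look the GVK up in the scheme and stamp it back onto the object before logging it.

```go
package example

import (
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
)

// withTypeMeta stamps the object's GroupVersionKind back onto it so that
// subsequent logging sees Kind/APIVersion; objects constructed via
// client-go otherwise carry an empty TypeMeta.
func withTypeMeta(obj runtime.Object, scheme *runtime.Scheme) runtime.Object {
	gvk, err := apiutil.GVKForObject(obj, scheme)
	if err != nil {
		return obj // not registered in the scheme; leave as-is
	}
	obj.GetObjectKind().SetGroupVersionKind(gvk)
	return obj
}
```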
Closes
https://linear.app/sourcegraph/issue/REL-275/reconciler-logs-dont-contain-resource-metadata
Secrets fetched from GSM should probably not be stored locally. As we
increase our usage of external secret fetching, these secrets become
increasingly sensitive, particularly for SAMS - every time one is used,
we should ensure that the user has the required permissions, and we
should only store external secrets in memory.
It looks like several other callsites rely on the persistence of other
secrets, e.g. those prompted from users, so this change specifically
targets the `GetExternal` method. Additionally, I added a check on load
that deletes any legacy external secrets stored on disk - we can remove
this after a few weeks.
## Test plan
- Unit tests assert the old behaviour and the new desired behaviour
- `sg start -cmd cody-gateway` uses external secrets and works as expected
- After running `sg`, `sg secret list` has no external secrets anymore
Closes
[DINF-58](https://linear.app/sourcegraph/issue/DINF-58/overwrite-ordering-in-sg)
https://github.com/user-attachments/assets/d8e59a5f-9390-47f7-a6a7-9ccbf97423f8
## Test plan
- Add a `commandset` to the `sg.config.overwrite.yaml`
- This commandset should depend on an existing command in the
`sg.config.yaml` file.
- The commandset should also include an `env var` that should override
what's set in the `command` contained in the `sg.config.yaml` file.
- Running `sg start <commandset name>` should respect the env ordering
shown below (see the sketch after the block)
```
Priority: overwrite.command.env > overwrite.commandset.env > command.env > commandset.env.
```
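A minimal sketch of that precedence (hypothetical helper; `sg`'s actual config merging is more involved): apply the sources from lowest to highest priority so later writes win.

```go
package example

// mergeEnv applies env maps from lowest to highest priority, so a later
// source overwrites keys set by an earlier one. Per the ordering above,
// callers would pass: commandset.env, command.env,
// overwrite.commandset.env, overwrite.command.env.
func mergeEnv(sources ...map[string]string) map[string]string {
	merged := make(map[string]string)
	for _, src := range sources {
		for k, v := range src {
			merged[k] = v
		}
	}
	return merged
}
```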
## Changelog
N/A
The Delivery Manifest step has started to run `bazel build` commands, clobbering our execlog artifacts in the process. We should only emit it for the test Buildkite jobs (at least for the time being), as it currently doesn't make sense for e.g. the image push jobs, which contain multiple invocations.
## Test plan
CI
## Changelog
Follow-up to
https://sourcegraph.slack.com/archives/C07KZF47K/p1720639713491779?thread_ts=1720636753.404169&cid=C07KZF47K
Basically, if we see that the local assets are above 500MB, we just nuke
them. It's a bandage, to be clear. The `.gitkeep` is there so the build
doesn't break when there's nothing to embed.
@eseliger and @burmudar can you test this a bit further and land it if
it's all good? My tests are good, but I don't want to hastily land
something and go on PTO five seconds before I'm out for two weeks.
## Test plan
Locally tested.
---------
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
Fixes CODY-2888
Previously, Sourcegraph Enterprise instances with context filters
enabled rejected requests from all unknown clients, out of concern that
they might not respect context filters. This behavior makes it
incredibly impractical to release new agent-based clients (CLI, Eclipse,
Visual Studio, Neovim, ...) that respect context filters out of the box
thanks to the reused logic in the Cody agent.
This logic suffers from both false positives and false negatives:
- False negatives: upcoming Cody clients (CLI, Eclipse, Visual Studio)
already support context filters out of the box thanks to using the Cody
agent, but they can't send requests unless we add a special case for
them. These clients may have to wait months for all Enterprise instances
to upgrade to a version that adds exceptions for their names.
- False positives: a malicious client can always claim to be "jetbrains"
with a valid version number even if it doesn't respect context filters.
This gives a false sense of security because it doesn't prevent
malicious traffic from bypassing context filters. In fact, I am leaning
towards using the
Now, with this change, Sourcegraph Enterprise instances only reject
requests from old versions of Cody clients that are known to not support
context filters. This ensures we never have false positives or false
negatives.
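Sketched, the inverted check looks roughly like this (client names and minimum versions here are hypothetical; the real implementation is in this PR's handler):

```go
package example

import "golang.org/x/mod/semver"

// minVersionForContextFilters maps known client names to the first
// version that supports context filters. Illustrative entries only.
var minVersionForContextFilters = map[string]string{
	"jetbrains": "v6.0.0",
	"vscode":    "v1.20.0",
}

// rejectRequest only rejects clients we positively know predate context
// filter support; unknown clients are allowed through.
func rejectRequest(clientName, clientVersion string) bool {
	minVersion, known := minVersionForContextFilters[clientName]
	if !known {
		return false
	}
	// A known client with a malformed version is treated as too old.
	if !semver.IsValid(clientVersion) {
		return true
	}
	return semver.Compare(clientVersion, minVersion) < 0
}
```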
## Test plan
See updated test case which accepts a request from an unknown "sublime"
client.
Instead of a maximum of 2 minor versions beyond the deployed Sourcegraph
instance.
The main advantage of removing the 2-minor-version constraint is that it
allows admins to always update to the latest Sourcegraph with one click,
instead of having to repeat that process through intermediate versions
if they have fallen far behind.
Otherwise Apollo Client complains that it does not know how to cache the
result because it does not know which repository ID to associate the
result with.
## Test plan
CI
We previously improved the performance of Language Stats Insights by
introducing parallel requests to gitserver:
https://github.com/sourcegraph/sourcegraph/pull/62011
This PR replaces the previous approach, where we would iterate through
and request each file from gitserver, with one where we request just a
single archive. This eliminates a lot of network traffic and gives us an
additional(!) performance improvement of 70-90%.
Even repositories like chromium (42GB) can now be processed (on my
machine, in just one minute).
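The shape of the new approach, as a sketch (assuming a tar archive streamed from gitserver and go-enry for language detection; the real inventory code also handles size limits, symlinks, and caching):

```go
package example

import (
	"archive/tar"
	"io"

	"github.com/go-enry/go-enry/v2"
)

// languageBytes consumes a single tar archive and accumulates byte
// counts per detected language, instead of issuing one gitserver request
// per file.
func languageBytes(archive io.Reader) (map[string]int64, error) {
	stats := make(map[string]int64)
	tr := tar.NewReader(archive)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return stats, nil
		}
		if err != nil {
			return nil, err
		}
		if hdr.Typeflag != tar.TypeReg {
			continue
		}
		// A small content prefix is enough for enry's classifiers.
		head := make([]byte, 2048)
		n, _ := io.ReadFull(tr, head)
		if lang := enry.GetLanguage(hdr.Name, head[:n]); lang != "" {
			stats[lang] += hdr.Size
		}
	}
}
```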
---
Caching: We dropped most of the caching and kept only the top-level
caching (repo@commit). This means that we only need to compute the
language stats once per commit, and subsequent users/requests see the
cached data. We dropped the file/directory-level caching because (1) the
code to do that got very complex, and (2) we can assume that most
repositories can be computed within the 5-minute timeout (which can be
increased via the environment variable `GET_INVENTORY_TIMEOUT`). The
timeout is not bound to the user's request anymore. Before, the frontend
would request the stats up to three times to let the computation move
forward and pick up where the last request aborted. While we still have
this frontend retry mechanism, we no longer have to worry about an
abort-and-continue mechanism in the backend.
---
Credits for the code to @eseliger:
https://github.com/sourcegraph/sourcegraph/issues/62019#issuecomment-2119278481
I've taken the diff, and updated the caching methods to allow for more
advanced use cases should we decide to introduce more caching. We can
take that out again if the current caching is sufficient.
Todos:
- [x] Check if CI passes, manual testing seems to be fine
- [x] Verify that insights are cached at the top level
---
Test data:
- sourcegraph/sourcegraph: 9.07s (main) -> 1.44s (current): 74% better
- facebook/react: 17.52s (main) -> 0.87s (current): 95% better
- godotengine/godot: 28.92s (main) -> 1.98s (current): 93% better
- chromium/chromium: ~1 minute: 100% better, because it didn't compute
before
## Changelog
- Language stats queries now request one archive from gitserver instead
of individual file requests. This leads to a huge performance
improvement. Even extra large repositories like chromium are now able to
compute within one minute. Previously they timed out.
## Test plan
- New unit tests
- Plenty of manual testing
Precursor for
https://linear.app/sourcegraph/issue/GRAPH-735/test-syntactic-usages
This PR introduces the `MappedIndex` abstraction, which wraps an upload
together with a target commit. Its APIs then take care of mapping
between upload-relative and repo-relative paths, and of mapping ranges
across commits.
My main motivation for making this change is that I can now test this
logic in isolation (which this PR does), and also have an interface that
is easy to fake and use to test syntactic usages.
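Roughly, the interface shape (names and signatures here are hypothetical; see the PR for the real definition):

```go
package example

import "context"

// Range is a span within a file.
type Range struct{ StartLine, StartChar, EndLine, EndChar int }

// MappedIndex wraps a SCIP upload together with a target commit. Callers
// speak in repo-relative paths and target-commit coordinates; the
// implementation translates to upload-relative paths and maps ranges
// across the commit gap. Sketch of the abstraction, not the real API.
type MappedIndex interface {
	// GetUploadPath translates a repo-relative path at the target commit
	// into the corresponding upload-relative path, if the file is indexed.
	GetUploadPath(ctx context.Context, repoRelativePath string) (uploadPath string, ok bool, err error)
	// MapRangeToTarget maps a range from the uploaded commit's
	// coordinates to the target commit's coordinates, reporting whether
	// the range survived the intervening edits.
	MapRangeToTarget(ctx context.Context, uploadRelativePath string, r Range) (Range, bool, error)
}
```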
## Test plan
Added unit tests for the `MappedIndex` component; manual testing of the
GraphQL APIs shows no changes in the syntactic usages output between
this PR and master.
Turns out, `gorm` does not auto-migrate constraints you've removed from
the struct schema. You must drop them by hand, or run explicit commands
to drop them.
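For example, with gorm's `Migrator` API (the model and constraint name below are hypothetical stand-ins, not the real Enterprise Portal schema):

```go
package example

import "gorm.io/gorm"

// Subscription is a stand-in model whose struct tags no longer declare
// the constraint we want gone.
type Subscription struct {
	ID          string `gorm:"primaryKey"`
	DisplayName string
}

// dropRemovedConstraint drops a constraint that was deleted from the
// struct schema; AutoMigrate alone will not remove it.
func dropRemovedConstraint(db *gorm.DB) error {
	const name = "chk_display_name_nonempty"
	if db.Migrator().HasConstraint(&Subscription{}, name) {
		return db.Migrator().DropConstraint(&Subscription{}, name)
	}
	return nil
}
```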
## Test plan
```
sg start -cmd enterprise-portal
```
```
\d+ enterprise_portal_*
```
then comment out the custom migrations and run `sg start -cmd
enterprise-portal` again to re-do the migrations. The output of `\d+`
matches, and the unexpected constraints are gone.
This commit adds the Svelte version of the compare page. Now that we
have a revision picker, this was easy to implement.
A couple of notes:
- The URL parameter is a rest parameter ([...spec]) because revisions
can contain slashes, and we don't encode those.
- I changed the API of the revision selector so that it works on pages
that don't have `resolvedRevision` available (also we never needed the
`repo` property from that object), and to provide custom select
handlers.
- Moved the revision to lib since it's used on various different pages.
- Unlike the React version, this version
- Provides forward/backward buttons to browse the commit list
- Uses infinity scrolling to load the next diffs
- Doesn't use a different style for the commit list. I experimented with
adding a "compact" version of the commit but it didn't feel quite right.
I'm sure we'll eventually redesign this.
- The `filePath` parameter is supported (for backwards compatibility)
but we don't use it anywhere atm.
(note that the avatar size has changed since the video was recorded;
it's larger now)
## Test plan
Manual testing.
We recently updated the completions APIs to use the `modelconfig` system
for managing LLM model configuration. Behind the scenes, we
automatically converted the existing site configuration ("completions
config") into the newer format so things work as expected.
However, the GraphQL view of the Sourcegraph instance's LLM
configuration was not updated to use the `modelconfig` system. And so if
the site admin opted into using the new style of configuration data, the
data returned would be all sorts of wrong.
(Because the GraphQL handler looked for the "completions config" part of
the site config, and not the newer "model configuration" section.)
This PR updates the `CodyLLMConfiguration` GraphQL resolver to return
the data from the modelconfig component of the Sourcegraph instance.
Some careful refactoring was needed to avoid a circular dependency in
the Go code. So the resolver's type _declaration_ is in the
`graphqlbackend` package, but its _definition_ is in
`internal/modelconfig`.
## Test plan
I only tested these changes manually.
If you open the Sourcegraph instance's API console, this is the GraphQL
query that returns all of the data:
```gql
{
site {
codyLLMConfiguration {
chatModel
fastChatModel
completionModel
provider
disableClientConfigAPI
chatModelMaxTokens
fastChatModelMaxTokens
completionModelMaxTokens
}
}
}
```
## Changelog
NA
Part of: https://github.com/sourcegraph/devx-support/issues/1093
If we get 3 errors in a row trying to write to bigquery ... chances are
we are not going to succeed. So we exit early.
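Sketched (hypothetical helper, not the actual `sg` code), the early exit looks like:

```go
package example

import "fmt"

// insertAll writes rows one at a time and gives up after three
// consecutive failures, since further attempts are unlikely to succeed.
func insertAll(rows []string, insert func(string) error) error {
	const maxConsecutiveErrs = 3
	consecutive := 0
	for _, row := range rows {
		if err := insert(row); err != nil {
			consecutive++
			if consecutive >= maxConsecutiveErrs {
				return fmt.Errorf("aborting after %d consecutive BigQuery insert failures: %w", consecutive, err)
			}
			continue
		}
		consecutive = 0
	}
	return nil
}
```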
## Test plan
CI
## Changelog
- sg: provide a better error message when we fail to insert into
BigQuery
- sg: stop publishing to BigQuery if we get 3 errors in a row
This adds tooling for the display-name setting we recently added; it is
useful in the interim to set display names on the go for subscriptions
EP already tracks.
Note: I don't anticipate doing this for every field we make updateable,
especially since the next step(s) will be updating the UI.
## Test plan
```
sg enterprise subscription set-name es_4dae04ba-5f5b-431a-b90b-e8e3dd449181 "robert's test subscription"
```
Adds site-config configuration for RFC 969 intent detection, making the
Intent Detection API endpoint and token configurable without code
changes. Additionally, adds an option to hit multiple intent detection
backends with the same query.
Previously, the URL was hardcoded, so if the backend changed, we had to
redeploy sourcegraph.com.
As we iterate on intent detection, we want to be able to test multiple
models in parallel, so this PR adds a setting for `extra` backends - if
provided, additional .com -> backend requests will be sent, but the
client-initiated request will not wait for them.
Closes AI-128.
## Test plan
- Tested locally - add
```jsonc
"cody.serverSideContext": {
  "intentDetectionAPI": {
    "default": {
      "url": "http://35.188.42.13:8000/predict/linearv2"
    },
    "extra": [
      {
        "url": "http://35.188.42.13:8000/predict/linearv2"
      }
    ]
  }
}
```
to `experimentalFeatures` in dev-private.
This is an input element that constrains the input to look like a path
component, used for the names of batch changes (and for prompts in the
Prompt Library in the future).
Also fixes an invalid regexp (it needs to escape `-` because the `'v'`
flag is used; see
https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/pattern).
## Test plan
Test against existing batch change create form name input. Try invalid
inputs.
This was added to the old global navbar but, by accident (mine), not the
new one.
## Test plan
Ensure that the new global navbar shows a Saved Searches link.
Ideally, we would introduce copies of the old names, deprecate the old
ones, migrate the frontend code, and cut over after a couple of minor
releases.
I don't have time for that now, so I'm just updating the docs a bit for
clarity.
Just a couple of things I noticed while working on unrelated tasks.
1) Removes the unused `Separator` component, which has been replaced by
the Panel API
2) Makes the Sourcegraph mark a proper icon so it can be used via the
`Icon` component like all our other icons.
While attempting to use `observeIntersection` with the new references
panel, I was running into performance issues with large lists of
references (the API does not actually paginate yet). When I took a look
at the profile, a good chunk of the time came from finding the nearest
scrolling ancestor, specifically the `overflowY` part, which requires
computing large chunks of the layout.
So this changes the `observeIntersection` action to take an explicit
target container so we don't need to do any searching for a scroll
container. For the example that I was debugging, this reduced the time
it took to render the list ~5x. Additionally, it establishes a cache for
all created `IntersectionObserver`s rather than just the root observer.
S2 Cody Web is broken at the moment. The new client-config handlers fail
with a 401 status because we don't send custom headers. This works for
GraphQL queries since they are all POST requests and the browser
automatically sends an Origin header for them, which is enough for our
auth middleware to check cookies. But client-config, which is REST, is
not covered by that, so we should send the `X-Requested-Client:
Sourcegraph` header to make our auth middleware pass this query
correctly.
Note that this problem doesn't exist in local builds, since we proxy all
requests and add `X-Requested-Client: Sourcegraph` in the dev server.
See the latest Cody build PR for more details:
https://github.com/sourcegraph/cody/pull/4898
## Test plan
- CI is passing
`sg run` is supposed to be deprecated in favour of `sg start -cmd`, but
the `sg start` completions don't work with `-cmd` the way `sg run`'s do.
This change updates `sg start` completion to check for the `-cmd` flag
and, if it is provided, offer completions for commands instead of
command _sets_ (the default `sg start` behaviour).
## Test plan
<img width="1023" alt="image"
src="https://github.com/user-attachments/assets/9b887180-f58f-4aef-9dbb-718c71ba15e6">
<img width="1077" alt="image"
src="https://github.com/user-attachments/assets/927b4562-fce1-48c0-a8c5-453bfc60fe35">
## Changelog
- Completions for `sg start -cmd` now offer valid suggestions.
Noticed several `Usage` strings using newlines, which makes `-h` output
pretty annoying to read as it breaks up the formatting. It tickled me
enough to add a formatting check against it, and to update the existing
usages that were incorrect to use `Description` or `UsageText` instead
:-)
## Test plan
CI, `sg -h` is pretty(er) again (but still very long)