See https://github.com/sourcegraph/sourcegraph/pull/63870
cc @sourcegraph/release
## Test plan
Covered by existing tests
## Changelog
- Adds an experimental feature `commitGraphUpdates` to control how
upload visibility is calculated.
This is part of the Keyword GA Project.
Batch Changes uses Sourcegraph queries to define the list of repositories on which the batch change will run.
With this change we default to pattern type "keyword" instead of "standard".
To make this a backward compatible change, we also introduce a version identifier to batch specs. Authors can specify `version: 2` in the spec, in which case we default to pattern type "keyword". Existing specs (without a specified version) and specs with `version: 1` will keep using pattern type "standard".
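For illustration (the spec name and query below are made up, not from this PR), a batch spec opting into the new behavior might start like this:

```yaml
# version: 2 makes "on:" queries default to patternType "keyword"
version: 2
name: hello-world
on:
  - repositoriesMatchingQuery: repo:example TODO
```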
Notes:
- Corresponding doc update [PR](https://github.com/sourcegraph/docs/pull/477)
- We don't have a query input field, but instead the query is defined in a batch spec YAML. It didn't feel right to edit the YAML and append "patternType: " on save, which is what we do for Code Monitors and Insights.
- I misuse the pattern type query parameter to effectively override the version. Once we introduce "V4" we should come back here and clean up. I left a TODO in the code.
Test plan:
- New and updated unit tests
- Manual testing:
  - New batch changes use `version: 2` by default.
  - Using an unsupported version returns an error.
  - I ran various `on:` queries to verify that version 2 uses keyword search and version 1 uses standard search.
These commits do a few things:
---
46b1303e62ea7e01ba6a441cc55bbe4c166ef5ce corrects a few minor mistakes
with the new site config which I introduced in #63654 - namely fixing
`examples` entries and nullability in a few cases. Nothing controversial
here, just bug fixes.
---
750b61e7dfa661338c9b40042087aed8e795f900 makes it so that the
`/.api/client-config` endpoint returns `"modelsAPIEnabled": true,` if
`"modelConfiguration"` is set in the site config. For context,
`"modelConfiguration"` is a new site config field, which is not used
anywhere before this PR, and has this description:
> BETA FEATURE, only enable if you know what you are doing. If set, Cody
will use the new model configuration system and ignore the old
'completions' site configuration entirely.
I will send a change to the client logic next so that it uses this
`modelsAPIEnabled` field instead of the client-side feature flag
`dev.useServerDefinedModels`.
---
Finally, f52fba342dd2e62a606b885802f7f6bc37f4f4ac and
bde67d57c39f4566dc9287f8793cb5ffd25955b3 make a few site config changes
that @chrsmith and I discussed to enable Self-hosted models support.
Specifically, it makes it possible to specify the following
configuration in the site config:
```
// Setting this field means we are opting into the new Cody model configuration system which is in beta.
"modelConfiguration": {
  // Disable use of Sourcegraph's servers for model discovery
  "sourcegraph": null,
  // Configure the OpenAI-compatible API endpoints that Cody should use to provide
  // mistral and bigcode (starcoder) models.
  "providerOverrides": [
    {
      "displayName": "Mistral",
      "id": "mistral",
      "serverSideConfig": {
        "type": "openaicompatible",
        "endpoint": "...",
        "accessToken": "...",
      },
    },
    {
      "displayName": "Bigcode",
      "id": "bigcode",
      "serverSideConfig": {
        "type": "openaicompatible",
        "endpoint": "...",
        "accessToken": "...",
      },
    },
  ],
  // Configure which exact mistral and starcoder models we want available
  "modelOverridesRecommendedSettings": [
    "bigcode::v1::starcoder2-7b",
    "mistral::v1::mixtral-8x7b-instruct"
  ],
  // Configure which models Cody will use by default
  "defaultModels": {
    "chat": "mistral::v1::mixtral-8x7b-instruct",
    "fastChat": "mistral::v1::mixtral-8x7b-instruct",
    "codeCompletion": "bigcode::v1::starcoder2-7b",
  }
}
```
Currently this site config is not actually used, so Sourcegraph should not be configured like this today; wiring it up will come in a future PR of mine.
@chrsmith one divergence from what we discussed: you and I had planned
to support this:
```
"modelOverrides": [
  {
    "bigcode::v1::starcoder2-7b": {
      "useRecommendSettings": true,
    },
    "mistral::v1::mixtral-8x22b-instruct": {
      "useRecommendSettings": true,
    },
  }
],
```
However, being able to specify `"useRecommendSettings": true,` inside of
a `ModelOverride` in the site configuration means that all other
`ModelOverride` fields (the ones we are accepting as recommended
settings) must be optional, which seems quite bad and opens up a number
of misconfiguration possibilities.
Instead, I opted to introduce a new top-level field for model overrides
_with recommended settings_, so the above becomes this instead:
```
"modelOverridesRecommendedSettings": [
  "bigcode::v1::starcoder2-7b",
  "mistral::v1::mixtral-8x7b-instruct"
],
```
This has the added benefit of making it impossible to set both
`"useRecommendSettings": true,` and other fields.
I will make it a site config error (prevents admins from saving
configuration) to specify the same model in both `modelOverrides` and
`modelOverridesRecommendedSettings` in a future PR.
---
## Test plan
Doesn't affect users yet. Careful review.
## Changelog
N/A
---------
Signed-off-by: Stephen Gutekanst <stephen@sourcegraph.com>
## User-facing impacts
This PR adds a new top-level field to the site configuration
`"modelConfiguration"`
which is intended to replace the old `"completions"` field. For now,
this is 100%
opt-in (beta feature) and customers should only use this new field if
they have
talked with us to confirm it should work in their case.
Today, this configuration is unused, but in future PRs it will actually
be used (planned for the 5.5.0 release.)
Additionally, `"modelConfiguration"` acts as a feature-flag. If it is
set
and not `null`, then Cody will enable this whole new model configuration
system
end-to-end which involves many new components:
* Clients will make use of a new
`/.api/modelconfig/supported-models.json` endpoint to query which models
the server has available.
* Cody will respect this new site configuration, discovering new models
from Sourcegraph's servers without an upgrade by default, etc.
* Clients will enable the "select an LLM model" dropdown menu for
enterprise customers.
* The old `"completions"` configuration, if present, will be ignored if
`"modelConfiguration"` is set.
## Implementation notes
This schema mirrors [the new model configuration
schema](https://github.com/sourcegraph/sourcegraph/tree/main/internal/modelconfig/types)
that Chris and I have been working on tirelessly to enable a myriad of
use cases in the near future, including enabling customers to get
support for new models without upgrading Sourcegraph, enabling multiple
models / a dropdown model selector in enterprise, improved self-hosted
model support, and more.
The translation is pretty much 1:1, with a few notable aspects:
* I broke out
[`GenericProviderConfig`](https://github.com/sourcegraph/sourcegraph/blob/main/internal/modelconfig/types/configuration.go#L49-L73)
into distinct types for each provider.
* I represent `ClientSideProviderConfig` and `ClientSideModelConfig` as
plain _objects_ (specifying multiple fields)
* I represent `ServerSideProviderConfig` and `ServerSideModelConfig` as
_tagged unions_ / _discriminated unions_, i.e. where you must write a
`{"type": "foo"}` field as part of the object.
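As a sketch of the tagged-union decoding (the type names and helper below are illustrative, not the actual `modelconfig` code), the `type` field can be peeked at first to select the concrete config:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// openAICompatibleConfig is one variant of the server-side provider
// config union, selected when the object carries "type": "openaicompatible".
type openAICompatibleConfig struct {
	Endpoint    string `json:"endpoint"`
	AccessToken string `json:"accessToken"`
}

// decodeProviderConfig first reads only the "type" discriminator, then
// unmarshals the full object into the matching concrete type.
func decodeProviderConfig(raw []byte) (any, error) {
	var tag struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(raw, &tag); err != nil {
		return nil, err
	}
	switch tag.Type {
	case "openaicompatible":
		var c openAICompatibleConfig
		if err := json.Unmarshal(raw, &c); err != nil {
			return nil, err
		}
		return c, nil
	default:
		return nil, fmt.Errorf("unknown provider config type %q", tag.Type)
	}
}

func main() {
	cfg, err := decodeProviderConfig([]byte(`{"type":"openaicompatible","endpoint":"https://llm.internal","accessToken":"secret"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.(openAICompatibleConfig).Endpoint)
}
```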
My next PR will be to handle conversion from the site config types to
the `modelconfig/types`.
## Test plan
No impact to product behavior yet, nothing to test.
## Changelog
Changelog entry will come later with proper docs link, when we are ready
for customers to use this.
---------
Signed-off-by: Stephen Gutekanst <stephen@sourcegraph.com>
Co-authored-by: Chris Smith <chrsmith@users.noreply.github.com>
This PR removes the keyword search toggle as part of making the feature GA. It
removes the toggle and its popover, but keeps the "call to action"
on the search landing page.
Main changes:
* Remove toggle on search results page
* Stop checking `experimentalFeatures.keywordSearch`. (Instead, users should
set `search.defaultPatternType: standard`)
* Remove `LegacyToggles` and all references. This duplicated `Toggles` and is
no longer needed since we unified the implementations.
Closes SPLF-111
This new feature is WIP, so put it behind an off-by-default feature flag.
As of now (Jul 2 2024), the feature flag is enabled on S2, Sourcegraph.com
and in dev environments.
Makes partial progress towards
https://linear.app/sourcegraph/issue/GRAPH-721
After this, I'll make necessary changes to the various configs to enable
this feature flag for dev, S2 and Sourcegraph.com. After that, I'll change
the default to be `false`.
[Linear Issue
](https://linear.app/sourcegraph/issue/CODY-2586/fix-completions-models-api-for-azure-to-use-the-right-model-with-the)
The purpose of this PR is to make a backward compatible change so that
the Azure completions logic in our codebase supports both the
completions API (which is old) and the chat/completions API (which is
new). This way we can use models from both of them with autocomplete.
Note: we can't figure out which model we are using, because Azure
exposes the deployment name instead of the model name, so we can't
decide up front which API to use for a given model. Instead, we try both
APIs; the API that works is cached for that model, and subsequent
completion calls use the cached choice. This way we can support either
of the APIs without adding latency to completions.
## Test plan
I used the azure keys to try out different deployment models that we
have both with the old and the new api.
Old API -> Completions (gpt-3.5-turbo-instruct, gpt-3.5-turbo(301),
gpt-3.5-turbo(613))
New API -> Chat Completions(gpt-3.5-turbo(301), gpt-4o,
gpt-3.5-turbo(613), gpt-3.5-turbo-16k)
Note: both sets of models work seamlessly with this PR.
## Changelog
Description:
This PR introduces support for counting tokens within the Azure code and
updating these counts in Redis. The token counting logic is embedded
directly in the Azure code rather than using a standardized point for
all token counting logic.
Reasoning:
- Azure does not currently support obtaining token usage from their streaming endpoint, unlike OpenAI.
- To enable immediate functionality, the token counting logic is placed within the Azure code itself.
- The implementation supports GPT-4o.
• The implementation supports GPT-4o.
Future Considerations:
- When Azure eventually adds support for token usage from the streaming endpoint, we will migrate to using Azure's built-in capabilities.
- This will ensure full utilization of Azure OpenAI features as they achieve parity with OpenAI.
Changes:
- Added token counting logic to the Azure code.
- Updated Redis with the token counts.
Testing:
- Verified the implementation works with GPT-4o.
Conclusion:
This is a temporary solution to enable token counting in Azure. We will
adapt our approach as Azure enhances its feature set to include token
usage from their streaming endpoint.
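A minimal sketch of the approach, assuming a whitespace-based approximation in place of a real tokenizer and an in-memory stand-in for Redis (all names here are hypothetical, not the actual implementation):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// countTokens approximates token usage. The real implementation would
// use a proper tokenizer for the model; we count ourselves only because
// Azure's streaming endpoint does not report usage.
func countTokens(text string) int {
	return len(strings.Fields(text))
}

// usageStore stands in for Redis: it accumulates per-model token counts.
type usageStore struct {
	mu     sync.Mutex
	counts map[string]int
}

// IncrBy mirrors the Redis INCRBY pattern on a keyed counter.
func (s *usageStore) IncrBy(key string, n int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.counts == nil {
		s.counts = map[string]int{}
	}
	s.counts[key] += n
}

// recordCompletion updates input/output token counts for one completion.
func recordCompletion(store *usageStore, model, prompt, completion string) {
	store.IncrBy("tokens:"+model+":input", countTokens(prompt))
	store.IncrBy("tokens:"+model+":output", countTokens(completion))
}

func main() {
	var store usageStore
	recordCompletion(&store, "gpt-4o", "Who are you?", "I am a large language model.")
	fmt.Println(store.counts["tokens:gpt-4o:input"], store.counts["tokens:gpt-4o:output"])
}
```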
## Test plan
Tested locally with debugger
## Changelog
This integrates the new occurrences API into the Svelte webapp. This
fixes a number of issues where the syntax highlighting data is not an
accurate way to determine hoverable tokens. It is currently behind the
setting `experimentalFeatures.enablePreciseOccurrences`
"lucky" was an experimental pattern type we added about 2 years ago.
Judging from the git history and the current code, it was at some point
replaced by "smart search" and "search mode", which we also plan to
remove soon.
See https://github.com/sourcegraph/sourcegraph/pull/43140 for more
context
Test plan:
CI
Historically, sourcegraph.com has been the only instance. It was
connected to GitHub.com and GitLab.com only.
Configuration should be as simple as possible, and we wanted everyone to
try it on any repo. So public repos were added on-demand when browsed
from these code hosts.
Since then, dotcom is no longer the only instance, and on-demand repo
addition is a special case that exists only for sourcegraph.com.
This causes a bunch of additional complexity and various extra code
paths that we don't test well enough today.
We want to make dotcom simpler to understand, so we've made the decision
to disable that feature, and instead we will maintain a list of
repositories that we have on the instance.
We already disallowed many repos half a year ago by heavily restricting
the allowed size for repos with few stars.
This is basically just a continuation of that.
In the diff, you'll mostly find deletions. This PR does not do much
other than removing the code paths that were only enabled in dotcom mode
in the repo syncer, and then removes code that became unused as a result
of that.
## Test plan
Ran a dotcom mode instance locally, it did not behave differently than a
regular instance wrt. repo cloning.
We will need to verify during the rollout that we're not suddenly
hitting code paths that don't scale to the dotcom size.
## Changelog
Dotcom no longer clones repos on demand.
Part of https://github.com/sourcegraph/sourcegraph/issues/62448
Linear issue
[SRCH-573](https://linear.app/sourcegraph/issue/SRCH-573/integrate-cody-web-package-into-the-sourcegraph-ui)
This is highly experimental usage of the new package (not currently
merged, but published on NPM as `cody-web-experimental`).
## How to run it
- (Optional) if you previously linked any local packages make sure they
don't exist in your node_modules anymore, `rm -rf node_modules` in the
root then `pnpm install`
- Run standard `sg start web-standalone`
- Turn on `newCodyWeb: true` in your `experimentalFeatures`
## How to run it locally with prototype PR in Cody repository
- Open Cody repository on the `vk/integrate-cody-web-chat-2` branch
- At the root of the repo, run `pnpm install` to make sure you're up to
date with all of the dependencies.
- Go to the web package (`cd web`)
- Build it with `pnpm build`
- Create a global link with `pnpm link --global` (Ignore the warning
message about no binary)
- Open sourcegraph/sourcegraph repository on this PR branch
- Make sure you are in the root of the repo.
- Run `pnpm link --global cody-web-experimental`
- Run `sg start web-standalone` to bundle the web app and launch an
instance that uses S2 for the backend. You'll need to create a login on
S2 that is not federated by GitHub.
- Turn on `newCodyWeb: true` in your `experimentalFeatures`
- Have fun experimenting!
## Test plan
- Check that the old version of Cody has no regressions
Adds `dotcom.codyProConfig.useEmbeddedUI` site config param.
This param defines whether the Cody Pro subscription and team management
UI should be served from the connected instance running in the dotcom
mode. The default value is `false`. This change allows us to enable the
SSC proxy on the instance without enabling the new embedded Cody Pro UI.
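For illustration, the resulting site config shape might look like this (other `codyProConfig` fields are elided):

```
"dotcom": {
  "codyProConfig": {
    // Enable the SSC proxy without serving the embedded Cody Pro UI
    "useEmbeddedUI": false
  }
}
```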
Previously, whether the embedded Cody Pro UI was enabled was determined by
`dotcom.codyProConfig` being set, which prevented us from enabling
the SSC proxy without enabling the embedded UI:
> Whether the SSC proxy is enabled is [defined based on
`dotcom.codyProConfig`](41fb56d619/cmd/frontend/internal/ssc/ssc_proxy.go (L227-L231))
being set in the site config. This value is also partially
[propagated](41fb56d619/cmd/frontend/internal/app/jscontext/jscontext.go (L481))
to the frontend via jscontext. And the frontend [uses this
value](41fb56d619/client/web/src/cody/util.ts (L8-L18))
to define whether to use new embedded UI or not.
For more details see [this Slack
thread](https://sourcegraph.slack.com/archives/C05PC7AKFQV/p1719010292837099?thread_ts=1719000927.962429&cid=C05PC7AKFQV).
## Test plan
- CI
- Tested manually:
  - Ran a Sourcegraph instance locally in dotcom mode
  - Set `dotcom.codyProConfig` in the site config
  - Checked that `context.frontendCodyProConfig` returns the [correct values from the site config](184da4ce4a/cmd/frontend/internal/app/jscontext/jscontext.go (L711-L715))
## Changelog
The search console page is broken, is not used or maintained, and is
only referenced by a series of blog posts years ago. We have product
support to remove it.
It seems many of our doc links for code hosts are broken in production
due to a URL change from `external_services` to `code_hosts`. I did a
find-and-replace to update all the ones I could find.
CLOSE https://github.com/sourcegraph/cody-issues/issues/211 &
https://github.com/sourcegraph/cody-issues/issues/412
UNBLOCK https://github.com/sourcegraph/cody/pull/4360
* Add support for Google Gemini AI models as chat completions provider
* Add new `google` package to handle Google Generative AI client
* Update `client.go` and `codygateway.go` to handle the new Google
provider
* Set default models for chat, fast chat, and completions when Google is
the configured provider
* Add gemini-pro to the allowed list
## Test plan
For Enterprise instances using Google as the provider:
1. In your local Sourcegraph instance's site config, add the following:
```
"accessToken": "REDACTED",
"chatModel": "gemini-1.5-pro-latest",
"provider": "google",
```
Note: You can get the accessToken for Gemini API in 1Password.
2. After saving the site config with the above change, run the following
curl command:
```
curl 'https://sourcegraph.test:3443/.api/completions/stream' -i \
-X POST \
-H "authorization: token $LOCAL_INSTANCE_TOKEN" \
--data-raw '{"messages":[{"speaker":"human","text":"Who are you?"}],"maxTokensToSample":30,"temperature":0,"stopSequences":[],"timeoutMs":5000,"stream":true,"model":"gemini-1.5-pro-latest"}'
```
3. Expected Output:
```
❯ curl 'https://sourcegraph.test:3443/.api/completions/stream' -i \
-X POST \
-H 'authorization: token <REDACTED>' \
--data-raw '{"messages":[{"speaker":"human","text":"Who are you?"}],"maxTokensToSample":30,"temperature":0,"stopSequences":[],"timeoutMs":5000,"stream":true,"model":"gemini-1.5-pro-latest"}'
HTTP/2 200
access-control-allow-credentials: true
access-control-allow-origin:
alt-svc: h3=":3443"; ma=2592000
cache-control: no-cache
content-type: text/event-stream
date: Tue, 04 Jun 2024 05:45:33 GMT
server: Caddy
server: Caddy
vary: Accept-Encoding, Authorization, Cookie, Authorization, X-Requested-With, Cookie
x-accel-buffering: no
x-content-type-options: nosniff
x-frame-options: DENY
x-powered-by: Express
x-trace: d4b1f02a3e2882a3d52331335d217b03
x-trace-span: 728ec33860d3b5e6
x-trace-url: https://sourcegraph.test:3443/-/debug/jaeger/trace/d4b1f02a3e2882a3d52331335d217b03
x-xss-protection: 1; mode=block
event: completion
data: {"completion":"I","stopReason":"STOP"}
event: completion
data: {"completion":"I am a large language model, trained by Google. \n\nThink of me as","stopReason":"STOP"}
event: completion
data: {"completion":"I am a large language model, trained by Google. \n\nThink of me as a computer program that can understand and generate human-like text.","stopReason":"MAX_TOKENS"}
event: done
data: {}
```
Verified locally:

#### Before
Cody Gateway returns `no client known for upstream provider google`
```sh
curl -X 'POST' -d '{"messages":[{"speaker":"human","text":"Who are you?"}],"maxTokensToSample":30,"temperature":0,"stopSequences":[],"timeoutMs":5000,"stream":true,"model":"google/gemini-1.5-pro-latest"}' -H 'Accept: application/json' -H 'Authorization: token $YOUR_DOTCOM_TOKEN' -H 'Content-Type: application/json' 'https://sourcegraph.com/.api/completions/stream'
event: error
data: {"error":"no client known for upstream provider google"}
event: done
data: {
```
## Changelog
Added support for Google as an LLM provider for Cody, with the following
models available through Cody Gateway: Gemini Pro (`gemini-pro-latest`),
Gemini 1.5 Flash (`gemini-1.5-flash-latest`), and Gemini 1.5 Pro
(`gemini-1.5-pro-latest`).
This has historically been set to 1 hour.
We've seen several reports of users running into the limit for clones of very large repositories, but we have seen no complaints of processes hanging for very long and clogging any queues.
So it feels sensible to me to increase the default for this value to 2h.
We might come back here later and decide that we don't really need a deadline here at all and instead hard-code a day or so to prevent infinite clogging, but let's see how far 2x gets us for now.
Test plan:
CI still passes.
* Adding the User param to the site config so that it can be supported by Azure as an extra param
* Rename smartContext to smartContextWindow
* Update CHANGELOG.md
Co-authored-by: Kalan <51868853+kalanchan@users.noreply.github.com>
---------
Co-authored-by: Kalan <51868853+kalanchan@users.noreply.github.com>
Previously, too many fields were required, which meant you needed
to specify unnecessary fields when trying to modify only a single
field, such as mapping specific extensions to specific languages.
Fixes https://linear.app/sourcegraph/issue/GRAPH-612
* feat(completions): add smart context site config
- Add `SmartContext` field to `CompletionsConfig` struct
- Implement `SmartContext()` method in `codyLLMConfigurationResolver`
- Set default `SmartContext` value to "enabled" if not provided
- Update schema and documentation to describe `SmartContext` feature
* Update unit test
* Add Changelog entry
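A sketch of what this looks like in the site config (the surrounding `completions` fields are elided):

```
"completions": {
  // New field; "enabled" is also the default when omitted
  "smartContext": "enabled"
}
```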
* Add config item, get it to the front end
* Use config on the front end
* Send team=1 if the team button is clicked
* Unrelated: Event logging cleanup
- On the frontend:
- Added a new field named `search.displayLimit` to the User settings
- Started using the `search.displayLimit` value while performing stream search
- On the backend:
- No changes
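For illustration (the value below is arbitrary), the new user setting looks like:

```
{
  "search.displayLimit": 500
}
```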
---------
Co-authored-by: Stefan Hengl <stefan@sourcegraph.com>
This adds a couple of configuration options which will allow us to tune the cost of code monitors in the event that we are seeing issues. It does not change the default values.
This removes qdrant from this codebase entirely.
All the docker images, dependencies, (dead) usage in code.
My understanding is that we don't use this feature and never properly rolled it out.
Test plan:
CI passes and code review from owners.
Noticed this test failing once locally and worked out the source of the
flakiness. Could reproduce with the below test plan.
Test plan: `go test -race -count=100 ./schema` passes
This aligns the configuration for Bitbucket Cloud with most other code hosts, where we allow naming repos explicitly. This makes it easier to first test the connection with a single repo, instead of having to sync potentially thousands of repos right away.
## Test plan
Added a test using real-world data with VCR and verified manually on my local instance that I can successfully sync single repos.
To bring Gerrit support more in line with other code hosts and because a customer recently ran into this limitation, this PR adds support for the SSH flag.
The diff is mostly straightforward, with two things to watch out for:
- The clone URL construction in makeRepo was previously wrong: it missed the `/a/` prefix. This is only used for visuals, but it was still annoying, so I moved the whole construction here from the gitserver cloneURL package.
- The SSH hostname and port are configurable in Gerrit, so to construct the right URL we need to fetch some instance metadata. This would be costly to do in the gitserver method, so we persist all the info needed to construct clone URLs "offline" during the cloning process by storing all the data for HTTP and SSH clones on the repo metadata column. This is mostly in line with other code hosts as well, see GitLab right above in the gitserver/cloneurl package.
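A sketch of the corrected HTTP clone URL construction (the helper name is illustrative, not the actual makeRepo code):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// httpCloneURL builds an authenticated Gerrit HTTP clone URL. Gerrit
// serves authenticated git traffic under the "/a/" path prefix, which
// the previous construction omitted.
func httpCloneURL(base, project string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	u.Path = strings.TrimSuffix(u.Path, "/") + "/a/" + project
	return u.String(), nil
}

func main() {
	got, err := httpCloneURL("https://gerrit.example.com", "myteam/myrepo")
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```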
Closes https://github.com/sourcegraph/sourcegraph/issues/60597
## Test plan
Added tests for the various stages, recreated recorded responses, and tried both HTTP and SSH cloning locally; both worked with our Gerrit instance.
Closes #61166
This PR adds support for Claude 3 and the /messages API to the existing
anthropic provider in the Sourcegraph instance.
To ensure a smooth experience, there are a couple of edge cases that we
need to handle.
Because the URL is configurable, a customer could hard-set it to
/complete, and we need to error properly in that case. Clients might not
know what is set, so they can send requests in the "old" or the "new"
format. We handle conversion as best as possible; however, for better
instruction, clients will eventually only send prompts in the /messages
format. We introduce Cody API versioning for this case, and support
/complete-style prompts (with a trailing assistant message, "holes" of
no response, and the system prompt in the messages) when a legacy client
connects.
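A sketch of the legacy-prompt conversion (the types and field names are hypothetical, not the actual provider code):

```go
package main

import "fmt"

type message struct {
	Speaker string // "system", "human", or "assistant"
	Text    string
}

// toMessagesAPI converts a legacy /complete-style prompt into a shape
// the /messages API accepts: the system prompt is pulled out of the
// message list, and the trailing empty assistant message (the "hole"
// legacy clients append for the model to fill) is dropped.
func toMessagesAPI(legacy []message) (system string, msgs []message) {
	for _, m := range legacy {
		if m.Speaker == "system" {
			system = m.Text
			continue
		}
		msgs = append(msgs, m)
	}
	if n := len(msgs); n > 0 && msgs[n-1].Speaker == "assistant" && msgs[n-1].Text == "" {
		msgs = msgs[:n-1]
	}
	return system, msgs
}

func main() {
	system, msgs := toMessagesAPI([]message{
		{Speaker: "system", Text: "You are Cody."},
		{Speaker: "human", Text: "Hello"},
		{Speaker: "assistant", Text: ""}, // trailing hole
	})
	fmt.Printf("%q %d\n", system, len(msgs))
}
```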
---------
Co-authored-by: Chris Warwick <christopher.warwick@sourcegraph.com>
When using an external identity provider, Sourcegraph by default doesn't care how many external identities you're adding to your account.
In some cases, this can be undesirable though, for example when trying to map a Sourcegraph user to a single identity in the outside world.
Since we cannot change the behavior here silently as people might rely on it, I decided to add a new option for this.
When enabled, the following scenario will no longer be allowed:
- I sign in to Sourcegraph for the first time using an Okta account called supererik@sourcegraph.com.
- I then go to the auth provider flow a second time (accounts tab, or through a redirect) and this time I choose my Okta account erik@sourcegraph.com
- Since I am already authenticated to Sourcegraph, and it appears to Sourcegraph as if I wanted to add a new external identity, I would have gotten a second link to an Okta identity.
Now, with this new behavior, linking a second entity for the same {ClientID, ServiceType, ServiceID} tuple but with a different externalID will be forbidden.
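A sketch of the new check (the types and names are hypothetical, not the actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

type externalAccount struct {
	ServiceType, ServiceID, ClientID, AccountID string
}

// checkLinkAllowed rejects linking a second external identity for the
// same {ClientID, ServiceType, ServiceID} tuple when the externalID
// differs, matching the new opt-in restriction described above.
func checkLinkAllowed(existing []externalAccount, candidate externalAccount) error {
	for _, acct := range existing {
		if acct.ServiceType == candidate.ServiceType &&
			acct.ServiceID == candidate.ServiceID &&
			acct.ClientID == candidate.ClientID &&
			acct.AccountID != candidate.AccountID {
			return errors.New("another identity from this provider is already linked to this user")
		}
	}
	return nil
}

func main() {
	existing := []externalAccount{{
		ServiceType: "openidconnect",
		ServiceID:   "https://okta.example.com",
		ClientID:    "abc",
		AccountID:   "supererik@sourcegraph.com",
	}}
	err := checkLinkAllowed(existing, externalAccount{
		ServiceType: "openidconnect",
		ServiceID:   "https://okta.example.com",
		ClientID:    "abc",
		AccountID:   "erik@sourcegraph.com",
	})
	fmt.Println(err != nil)
}
```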
## Test plan
Verified locally using an Okta integration and trying to link two different accounts to the same user and see it fail.
Also added a new test to cover this behavior.
We have a number of docs links in the product that point to the old doc site.
Method:
- Search the repo for `docs.sourcegraph.com`
- Exclude the `doc/` dir, all test fixtures, and `CHANGELOG.md`
- For each, replace `docs.sourcegraph.com` with `sourcegraph.com/docs`
- Navigate to the resulting URL ensuring it's not a dead link, updating the URL if necessary
Many of the URLs updated are just comments, but since I'm doing a manual audit of each URL anyway, I felt it was worth updating these while I was at it.
We already had a rate limiter in the code, it was just not configurable, so we used Inf always. This was a quick win.
## Test plan
Manually set a very low limit locally, ran again, saw requests getting queued. Also the rate limiter state in the UI correctly reflects the configuration.