This PR is what the past dozen or so
[cleanup](https://github.com/sourcegraph/sourcegraph/pull/63359),
[refactoring](https://github.com/sourcegraph/sourcegraph/pull/63731),
and [test](https://github.com/sourcegraph/sourcegraph/pull/63761) PRs
were all about: using the new `modelconfig` system for the completion
APIs.
This will enable users to:
- Use the new site config schema for specifying LLM configuration, added
in https://github.com/sourcegraph/sourcegraph/pull/63654. Sourcegraph
admins who use these new site config options will be able to support
many more LLM models and providers than is possible using the older
"completions" site config.
- Specify any LLM model when using Cody Enterprise: we no longer ignore
the `CodyCompletionRequest.Model` field, so users can request any model
(provided it is "supported" by the Sourcegraph instance).
Beyond those two things, everything should continue to work as before,
with any existing "completions" configuration data being converted into
the `modelconfig` system (see
https://github.com/sourcegraph/sourcegraph/pull/63533).
## Overview
In order to understand how this all fits together, I'd suggest reviewing
this PR commit-by-commit.
### [Update internal/completions to use modelconfig](e6b7eb171e)
The first change was to update the code we use to serve LLM completions
(the various implementations of the `types.CompletionsProvider`
interface). The key changes here were as follows:
1. Update the `CompletionRequest` type to include the `ModelConfigInfo`
field (to make the new Provider- and Model-specific configuration data
available).
2. Rename the `CompletionRequest.Model` field to
`CompletionRequest.RequestedModel` (with a JSON annotation to maintain
compatibility with existing callers). This is to catch any bugs related
to using the field directly, since that is now almost guaranteed to be a
mistake. (See below, and the sketch after this list.)
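To make that concrete, here's a rough sketch of the reshaped request
type. Only `RequestedModel`, `ModelConfigInfo`, the `model` JSON name,
and `Model.ModelName` come from this PR; the stand-in `Model`/`Provider`
types and the other details are illustrative assumptions:

```go
// A self-contained sketch, not the actual Sourcegraph code: only
// RequestedModel, ModelConfigInfo, Model.ModelName, and the "model" JSON
// name are taken from this PR; everything else is illustrative.
package types

// Model is a stand-in for the modelconfig model record.
type Model struct {
	ModelID   string // human-friendly identifier, e.g. "gpt-3.5-turbo"
	ModelName string // the name actually sent to the API provider
}

// Provider is a stand-in for the modelconfig provider record.
type Provider struct {
	ID string
}

// ModelConfigInfo makes the Provider- and Model-specific configuration
// available to the completion providers.
type ModelConfigInfo struct {
	Provider Provider
	Model    Model
}

type CompletionRequestParameters struct {
	// Renamed from Model. The JSON name stays "model" so existing callers
	// keep working, while the new Go name flags any direct use in review.
	RequestedModel string `json:"model"`
}

type CompletionRequest struct {
	Parameters      CompletionRequestParameters
	ModelConfigInfo ModelConfigInfo
}
```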
All of the `CompletionProvider` implementations were then updated to
reflect these changes:
- Anywhere we previously used
`CompletionRequest.Parameters.RequestedModel` now refers to
`CompletionRequest.ModelConfigInfo.Model.ModelName` instead. The "model
name" is the value that gets passed to the API provider, e.g.
`gpt-3.5-turbo`.
- In some situations (`azureopenai`) we needed to rely on the Model ID
as a more human-friendly identifier. This isn't 100% accurate, but it
matches the behavior we have today; a long doc comment calls out the
details of what is wrong with it.
- In other situations (`awsbedrock`, `azureopenai`) we read the new
`modelconfig` data to configure the API provider (e.g.
`Azure.UseDeprecatedAPI`) or to surface model-specific metadata (e.g.
AWS Provisioned Throughput ARNs). While the code is a little clunky in
order to avoid a larger refactoring, this is the heart and soul of how
we will write new completion providers in the future: each one takes a
specific configuration bag with whatever data it requires. (A rough
sketch of the idea follows this list.)
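To illustrate that pattern, here's a hedged sketch of how a provider
might pull what it needs out of `ModelConfigInfo`. It reuses the
illustrative types from the sketch above; the package, import path,
function name, and `preferModelID` knob are all assumptions rather than
the actual provider code:

```go
// A hedged sketch of the pattern, not the actual provider code. It reuses
// the illustrative types from the sketch above; the package name, import
// path, function name, and preferModelID knob are all assumptions.
package example

import "github.com/sourcegraph/sourcegraph/internal/completions/types"

// resolveAPIModel picks the identifier a provider sends to the LLM API.
func resolveAPIModel(req types.CompletionRequest, preferModelID bool) string {
	model := req.ModelConfigInfo.Model

	if preferModelID {
		// azureopenai leans on the more human-friendly Model ID. That isn't
		// 100% accurate, but it matches today's behavior (the real code has
		// a long doc comment explaining what is wrong with it).
		return model.ModelID
	}

	// Everyone else sends the provider-facing model name, e.g. "gpt-3.5-turbo".
	return model.ModelName
}
```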
### [Fix bugs in modelconfig](75a51d8cb5)
While we had lots of tests for converting the existing "completions"
site config data into the `modelconfig.ModelConfiguration` structure,
there were a couple of subtle bugs that I found while testing the larger
change.
The updated unit tests and comments should make that clear.
### [Update frontend/internal/httpapi/completions to use modelconfig](084793e08f)
The final step was to update the HTTP endpoints that serve the
completion requests. There weren't any logic changes here, just
refactoring how we look up the required data (e.g. converting the user's
requested model into an actual model found in the site configuration).
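As a rough sketch of what that lookup amounts to (the function name and
the exact shape of `modelconfig.ModelConfiguration` are assumptions
beyond what is described here):

```go
// A hedged sketch of the lookup rather than the actual httpapi code; the
// function name and the exact shape of modelconfig.ModelConfiguration are
// assumptions beyond what this PR describes.
package completions

import (
	"fmt"

	"github.com/sourcegraph/sourcegraph/internal/modelconfig"
)

// resolveRequestedModel converts the user's requested model into an actual
// model found in the site's model configuration.
func resolveRequestedModel(cfg modelconfig.ModelConfiguration, requested string) (*modelconfig.Model, error) {
	for i := range cfg.Models {
		m := &cfg.Models[i]
		if string(m.ModelRef) == requested {
			return m, nil
		}
	}
	return nil, fmt.Errorf("model %q is not supported by this Sourcegraph instance", requested)
}
```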
We support Cody clients sending either "legacy mrefs" of the form
`provider/model` like before, or the newer mref form
`provider::apiversion::model`, although it will likely be a while before
Cody clients are updated to only use the newer-style model references.
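A hedged sketch of how the two forms can be told apart (the function
name and return shape are illustrative, not the actual parser):

```go
// A hedged sketch of distinguishing the two forms; the function name and
// return shape are assumptions, not the actual parser.
package completions

import (
	"fmt"
	"strings"
)

// parseModelRef accepts the newer "provider::apiversion::model" mref as well
// as the legacy "provider/model" form.
func parseModelRef(raw string) (provider, apiVersion, model string, err error) {
	if parts := strings.Split(raw, "::"); len(parts) == 3 {
		return parts[0], parts[1], parts[2], nil
	}
	if parts := strings.SplitN(raw, "/", 2); len(parts) == 2 {
		// Legacy mrefs carry no API version, so the real code fills in a
		// sensible default here.
		return parts[0], "", parts[1], nil
	}
	return "", "", "", fmt.Errorf("unrecognized model reference %q", raw)
}
```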
The existing unit tests for the completions APIs just worked, which was
the plan; for the few changes that were required, I've added comments
to explain the situation.
### [Fix: Support requesting models just by their ID](99715feba6)
> ... We support Cody clients sending either "legacy mrefs" of the form
`provider/model` like before ...
Yeah, so apparently I lied 😅. After doing more testing, the extension
_also_ sends requests where the requested model is just `"model"`,
without the provider prefix.
So that now works too: we just blindly match "gpt-3.5-turbo" to the
first mref with a matching model ID, such as
"anthropic::unknown::gpt-3.5-turbo".
## Test plan
Existing unit tests pass, and a few new tests were added. I also
manually tested my Sg instance configured both in "dotcom" mode and as a
prototypical Cody Enterprise instance.
## Changelog
Update the Cody APIs for chat and code completions to use the "new
style" model configuration. This allows for greater flexibility in
configuring LLM providers and exposing new models, and also allows Cody
Enterprise users to select different models for chats.
This will warrant a longer, more detailed changelog entry for the patch
release next week, as this unlocks many other exciting features.