# Self-hosted models (#63899)
This PR is stacked on top of all the prior work @chrsmith has done for
shuffling configuration data around; it implements the new "self-hosted
models" functionality.

## Configuration

Configuring a Sourcegraph instance to use self-hosted models involves
adding configuration like the following to the site config. Setting
`modelConfiguration` opts you into the new system, which is in early
access:

```
  // Setting this field means we are opting into the new Cody model configuration system.
  "modelConfiguration": {
    // Disable use of Sourcegraph's servers for model discovery
    "sourcegraph": null,

    // Create two model providers
    "providerOverrides": [
      {
        // Our first model provider "mistral" will be a Huggingface TGI deployment which hosts our
        // mistral model for chat functionality.
        "id": "mistral",
        "displayName": "Mistral",
        "serverSideConfig": {
          "type": "huggingface-tgi",
          "endpoints": [{"url": "https://mistral.example.com/v1"}]
        }
      },
      {
        // Our second model provider "bigcode" will be a Huggingface TGI deployment which hosts our
        // bigcode/starcoder model for code completion functionality.
        "id": "bigcode",
        "displayName": "Bigcode",
        "serverSideConfig": {
          "type": "huggingface-tgi",
          "endpoints": [{"url": "http://starcoder.example.com/v1"}]
        }
      }
    ],

    // Make these two models available to Cody users
    "modelOverridesRecommendedSettings": [
      "mistral::v1::mixtral-8x7b-instruct",
      "bigcode::v1::starcoder2-7b"
    ],

    // Configure which models Cody will use by default
    "defaultModels": {
      "chat": "mistral::v1::mixtral-8x7b-instruct",
      "fastChat": "mistral::v1::mixtral-8x7b-instruct",
      "codeCompletion": "bigcode::v1::starcoder2-7b"
    }
  }
```

More advanced configurations are possible; the above is our blessed
configuration for today. (Model references follow the
`${providerID}::${apiVersionID}::${modelID}` syntax, e.g.
`mistral::v1::mixtral-8x7b-instruct`.)

## Hosting models

Another major component of this work is starting to build up
recommendations around how to self-host models, which ones to use, how
to configure them, etc.

For now, we've been testing with these two on a machine with dual A100s:

* Huggingface TGI (a Docker container for model inference which provides
an OpenAI-compatible API, and is widely popular)
* Two models:
  * Starcoder2 for code completion; specifically `bigcode/starcoder2-15b`
with `eetq` 8-bit quantization.
  * Mixtral 8x7b instruct for chat; specifically
`casperhansen/mixtral-instruct-awq`, which uses `awq` 4-bit quantization.

This is our 'starter' configuration. Other models, specifically other
Starcoder2 and Mixtral instruct variants, certainly work too, and
higher-parameter versions may of course provide better results.

Documentation covering how to deploy Huggingface TGI, suggested
configuration, and debugging tips is coming soon.
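
In the meantime, here is a rough sketch of what the two deployments look
like, assuming a recent TGI image (exact image tags, ports, and flags may
differ in your environment):

```
# Starcoder2 for code completion, with eetq 8-bit quantization
docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id bigcode/starcoder2-15b --quantize eetq

# Mixtral 8x7b instruct for chat, with awq 4-bit quantization
docker run --gpus all --shm-size 1g -p 8081:80 -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id casperhansen/mixtral-instruct-awq --quantize awq
```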

## Advanced configuration

As part of this effort, I have added an extensive set of configuration
knobs to the client-side model configuration (see `type
ClientSideModelConfigOpenAICompatible` in this PR).

Some of these configuration options are needed for things to work at a
basic level, while others (e.g. prompt customization) are not needed for
basic functionality, but are very important for customers interested in
self-hosting their own models.
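
To give a flavor of what this looks like (the `openAICompatible` knob
names below are hypothetical placeholders for illustration; see
`ClientSideModelConfigOpenAICompatible` in this PR for the real fields),
a model override carrying client-side configuration might look roughly
like:

```
  "modelOverrides": [
    {
      "modelRef": "mistral::v1::mixtral-8x7b-instruct",
      "displayName": "Mixtral 8x7b Instruct",
      "clientSideConfig": {
        "openAICompatible": {
          // Hypothetical knobs, for illustration only.
          "stopSequences": ["\n\n"],
          "chatPreInstruction": "You are a helpful coding assistant."
        }
      }
    }
  ]
```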

Today, Cody clients have a number of different _autocomplete provider
implementations_ that tie the model-specific logic needed for
autocomplete to a particular provider. For example, if you use a GPT
model through Azure OpenAI, the autocomplete provider is entirely
different from the one you'd get using a GPT model through OpenAI
directly. This can lead to subtle issues for us, so it is worth exploring
a _generalized autocomplete provider_. Since self-hosted models force us
to address this problem, the configuration knobs the server feeds to the
client are a pathway to doing that: initially just for self-hosted
models, but possibly generalized to other providers in the future.

## Debugging facilities

Working with customers on OpenAI-compatible APIs in the past, we've
learned that debugging can be quite a pain: if you can't see what
requests the Sourcegraph backend is making and what it is getting back,
issues are hard to track down.

This PR implements quite extensive logging, and a `debugConnections`
flag which can be turned on to enable logging of the actual request
payloads and responses. This is critical when a customer is trying to
add support for a new model, their own custom OpenAI API service, etc.
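
For example (the placement of the flag below is illustrative, not
authoritative; check the site config schema for where it actually lives):

```
  "modelConfiguration": {
    // ... providerOverrides, defaultModels, etc. as above ...

    // Illustrative placement: log the actual request payloads and
    // responses exchanged with model providers.
    "debugConnections": true
  }
```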

## Robustness

Working with customers in the past, we also learned that various parts
of our backend `openai` provider were not especially robust. For example,
[if more than one message was present it was a fatal
error](https://github.com/sourcegraph/sourcegraph/blob/main/internal/completions/client/openai/openai.go#L305),
and if the SSE stream yielded `{"error"}` payloads, they were silently
ignored. Similarly, the SSE event stream parser we use is heavily
tailored towards [the exact response
structure](https://github.com/sourcegraph/sourcegraph/blob/main/internal/completions/client/openai/decoder.go#L15-L19)
which OpenAI's official API returns, and is therefore quite brittle when
connected to a different SSE stream.
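
As a concrete example of the first issue, a more robust provider can
translate however many messages are present into OpenAI-style chat
messages instead of bailing out. A minimal sketch with simplified
stand-in types (not the PR's actual code):

```go
package example

import "fmt"

// Message is a simplified stand-in for the message type in
// internal/completions/types.
type Message struct {
	Speaker string // "human", "assistant", or "system"
	Text    string
}

type openAIMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// toOpenAIMessages converts any number of messages, rather than treating
// more than one message as a fatal error.
func toOpenAIMessages(msgs []Message) ([]openAIMessage, error) {
	roles := map[string]string{"human": "user", "assistant": "assistant", "system": "system"}
	out := make([]openAIMessage, 0, len(msgs))
	for _, m := range msgs {
		role, ok := roles[m.Speaker]
		if !ok {
			return nil, fmt.Errorf("unknown speaker %q", m.Speaker)
		}
		out = append(out, openAIMessage{Role: role, Content: m.Text})
	}
	return out, nil
}
```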

For this work, I _started by forking_ our
`internal/completions/client/openai` provider and made a number of major
improvements to make it more robust, handle errors better, etc.

I have also replaced the custom SSE event stream parser, which was
brittle and not spec-compliant, with a proper SSE event stream parser
that recently emerged in the Go community:
https://github.com/tmaxmax/go-sse
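
A minimal sketch of consuming an OpenAI-compatible stream with go-sse's
connection API, surfacing `{"error"}` payloads instead of dropping them
(simplified types; not the PR's actual code):

```go
package example

import (
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/tmaxmax/go-sse"
)

// chunk models the two payload shapes an OpenAI-compatible stream can
// send: normal delta chunks, and inline {"error": ...} payloads which
// the old decoder silently dropped.
type chunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
	Error *struct {
		Message string `json:"message"`
	} `json:"error"`
}

func streamCompletion(req *http.Request) error {
	conn := sse.NewConnection(req)

	var streamErr error
	conn.SubscribeMessages(func(ev sse.Event) {
		if streamErr != nil || ev.Data == "[DONE]" { // OpenAI end-of-stream sentinel
			return
		}
		var c chunk
		if err := json.Unmarshal([]byte(ev.Data), &c); err != nil {
			streamErr = fmt.Errorf("malformed chunk: %w", err)
			return
		}
		if c.Error != nil { // surface error payloads instead of ignoring them
			streamErr = fmt.Errorf("upstream error: %s", c.Error.Message)
			return
		}
		for _, choice := range c.Choices {
			fmt.Print(choice.Delta.Content)
		}
	})
	if err := conn.Connect(); err != nil {
		return err
	}
	return streamErr
}
```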

My intention is that after more extensive testing, this new
`internal/completions/client/openaicompatible` provider will be more
robust, more correct, and all around better than
`internal/completions/client/openai` (and possibly the azure one) so
that we can just supersede those with this new `openaicompatible` one
entirely.

## Client implementation

Much of the work done in this PR is just "let the site admin configure
things, and broadcast that config to the client through the new model
config system."

Actually getting the clients to respect the new configuration is a task
I am tackling in future `sourcegraph/cody` PRs.

## Test plan

1. This change currently lacks any unit/regression tests; that is a
major noteworthy point. I will follow up with those in a future PR.
   * However, these changes are **incredibly** isolated, clearly only
affecting customers who opt in to this new self-hosted models
configuration.
   * Most of the heavy lifting (SSE streaming, shuffling data around) is
done in other well-tested codebases.
2. Manual testing has played a big role here, specifically:
   * Running a dev instance with the new configuration, actually
connected to Huggingface TGI deployed on a remote server.
   * Using the new `debugConnections` mechanism (which customers would
use) to directly confirm requests are going to the right places, with
the right data and payloads.
   * Confirming with a new client (changes not yet landed) that
autocomplete and chat functionality work.

Can we use more testing? Hell yeah, and I'm going to add it soon. Does
it work well, with little room for error? Also yes.

## Changelog

Cody Enterprise: added a new configuration for self-hosting models.
Reach out to support if you would like to use this feature as it is in
early access.

---------

Signed-off-by: Stephen Gutekanst <stephen@sourcegraph.com>

# Sourcegraph JSON Schemas

JSON Schema is a way to define the structure of a JSON document. It enables typechecking and code intelligence on JSON documents.

Sourcegraph uses the JSON Schemas defined by the `*.schema.json` files in this directory (site configuration, settings, code host connections, batch specs, and so on).

## Modifying a schema

1. Edit the `*.schema.json` file in this directory.
2. Run `bazel run //schema:write_generated_schema`.
3. Commit the changes to both files.
4. Run `sg start` to automatically update TypeScript schema files.

## Known issues

* The JSON Schema IDs (URIs) are of the form `https://sourcegraph.com/v1/*.schema.json#`, but these are not actually valid URLs. This means you generally need to supply them to JSON Schema validation libraries manually instead of having the validator fetch the schema from the web.