This commit is in preparation for other work. It refactors how we work
with resolved repository and resolved revision information.
The idea was that the top level repo loader should try to resolve the
repository information and error if that wasn't possible. Not being able
to resolve the revision was accepted, hence this check:
```js
// still render the main repo navigation and header
if (!isRevisionNotFoundErrorLike(repoError)) {
error(400, asError(repoError))
}
```
However, the way it was implemented meant that we wouldn't pass any
resolved repository information to the sub-pages/layouts when the
revision couldn't be resolved, which seems wrong.
With these changes, the top level repo loader now provides a
`resolvedRepository` object (where `commit`/`changelist` might be unset)
and the `(validrev)` loader creates the `resolvedRevision` object, just
as its sub-pages/layouts expect.
And instead of returning error objects from `resolveRepoRevision` and
checking them in the loader, we now throw errors/redirects directly in
that function. IMO that makes the whole flow easier to understand.
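A minimal sketch of the revised flow, with simplified, illustrative names (the real loaders live in the SvelteKit app and use its `error`/`redirect` helpers; the in-memory lookup below is a stand-in):

```typescript
interface ResolvedRepository {
    name: string
    commit?: string // may stay unset when the revision cannot be resolved
}

class HTTPError extends Error {
    constructor(public status: number, message: string) {
        super(message)
    }
}

// Hypothetical in-memory stand-in for the actual repository/revision lookups.
const repositories = new Map<string, Map<string, string>>([
    ['example/repo', new Map([['main', 'abc123']])],
])

function resolveRepoRevision(repoName: string, revision?: string): ResolvedRepository {
    const revisions = repositories.get(repoName)
    if (!revisions) {
        // Unresolvable repository: throw directly instead of returning an error object.
        throw new HTTPError(404, `repository not found: ${repoName}`)
    }
    // An unresolvable revision is tolerated: `commit` simply stays unset and
    // the `(validrev)` loader decides how to handle it.
    return { name: repoName, commit: revision ? revisions.get(revision) : undefined }
}
```

The point of the sketch is that callers no longer need to inspect returned error objects; repository errors propagate as thrown errors, while a missing revision is just an unset field.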
## Test plan
Manual testing, CI integration tests
In preparation for other work, this commit substantially refactors the
"infinity query" store implementation. The internals have been changed
completely which allows us to simplify its public API.
- Simpler configuration, especially merging previous and next results.
- Restoration support. So far pages/components had to implement
restoring the state of an infinity store on their own. Now the
restoration strategy is part of the configuration. Pages/components only
have to get an opaque snapshot via `store.capture()` and restore it via
`store.restore(snapshot)`.
- More predictable state. It wasn't always obvious whether the store
contained stale data, e.g. while restoring. Now `data` will only be set
when the data is 'fresh'.
- Smarter 'incremental restoration' strategy. This strategy makes
multiple requests to restore the previous state, because requests are
normally cached and replaying them is fast. When the data is not cached,
though, there is a noticeable delay due to waterfall requests. Now we
use a simple heuristic to determine whether or not the GraphQL data
might be cached. If not, we make a single request to restore the state.
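The resulting capture/restore surface can be sketched as follows (a minimal stand-in with a synchronous `fetchPage` for illustration; the real store is asynchronous and Svelte-specific, and the snapshot type is opaque to callers):

```typescript
// Opaque to callers; pages/components only pass it back to restore().
interface Snapshot {
    count: number
}

class InfinityStore<T> {
    private items: T[] = []

    constructor(
        private fetchPage: (offset: number, limit: number) => T[],
        private pageSize = 20
    ) {}

    get data(): readonly T[] {
        return this.items
    }

    loadMore(): void {
        this.items.push(...this.fetchPage(this.items.length, this.pageSize))
    }

    capture(): Snapshot {
        return { count: this.items.length }
    }

    restore(snapshot: Snapshot): void {
        // When the data is unlikely to be cached, a single request for the
        // whole range avoids the waterfall of page-by-page requests.
        this.items = this.fetchPage(0, snapshot.count)
    }
}
```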
For review, I suggest turning off whitespace changes.
## Test plan
Manual testing, unit tests.
Previously, we would attempt to recreate the clone URL of a repo based
on event data. This matching is lossy and can cause events to be
rejected even though we have the repo cloned.
This PR changes the matching to instead use the external ID of the repo,
which we already store in the repo table in a separate column.
Closes SRC-40
Test plan:
Tests still pass; I set up webhooks locally and they still matched
(though I only tried GitHub).
- Remove long-deprecated and long-ineffective notifications for saved
searches (removed in
de8ae5ee28
2.5 years ago). Note that code monitors were the replacement for saved
searches and work great.
- Clean up UI.
- Make the UI global instead of in the user/org area.
- Convert React class components to function components.
- Add default `patterntype:` because it's required.
- Use `useQuery` and `useMutation` instead of `requestGraphQL`.
- Use a single namespace `owner` GraphQL arg instead of separating out
`userID` and `orgID`.
- Clean up GraphQL resolver code and factor out common auth checking.
- Support transferring ownership of saved searches among owners (the
user's own user account and the orgs they're a member of).
(I know this is not in Svelte.)
SECURITY: There is one substantive change. Site admins may now view any
user's and any org's saved searches. This is so that they can audit and
delete them if needed.
## Test plan
Try creating, updating, and deleting saved searches, and transferring
ownership of them.
## Changelog
- Improved the saved searches feature, which lets you save search
queries to easily reuse them later and share them with other people in
an organization.
- Added the ability to transfer ownership of a saved search to a user's
organizations or from an organization to a user's account.
- Removed the long-deprecated and ineffective `search.savedQueries`
settings field. You can manage saved searches in a user's or
organization's profile area (e.g., at `/user/searches`).
This is a refactor with a few incidental user-facing changes (e.g., not
showing a sometimes-incorrect total count in the UI).
---
Make `usePageSwitcherPagination` and `useShowMorePaginationUrl` support
storing filter and query params, not just pagination params, in the URL.
This is commonly desired behavior, and there are many ways we do it
across the codebase. This attempts to standardize how it's done. It does
not update all places this is done to standardize them yet.
Previously, you could use the `options: { useURL: true }` arg to
`usePageSwitcherPagination` and `useShowMorePaginationUrl`. This was not
good because it only updated the pagination URL querystring params and
not the filter params. Some places had a manual way to update the filter
params, but it was incorrect (reloading the page would not get you back
to the same view state) and had a lot of duplicated code. There was
actually no way to have everything (filters and pagination params)
updated in the URL all together, except using the deprecated
`<FilteredConnection>`.
Now, callers that want the URL to be updated with the connection state
(including pagination *and* filters) do:
```typescript
const connectionState = useUrlSearchParamsForConnectionState(filters)
const { ... } = usePageSwitcherPagination({ query: ..., state: connectionState }) // or useShowMorePaginationUrl
```
Callers that do not want the connection state to be reflected in the URL
can just not pass any `state:` value.
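As a rough illustration of the round-trip between connection state and the URL query string (the actual hook internals are not shown in this description, so the shapes below are assumptions):

```typescript
interface ConnectionState {
    first?: number
    after?: string
    filters: Record<string, string>
}

// Serialize both pagination params and filter params into the querystring.
function stateToParams(state: ConnectionState): URLSearchParams {
    const params = new URLSearchParams(state.filters)
    if (state.first !== undefined) {
        params.set('first', String(state.first))
    }
    if (state.after !== undefined) {
        params.set('after', state.after)
    }
    return params
}

// Reload the same view state from the URL, so refreshing keeps filters + page.
function paramsToState(params: URLSearchParams, filterKeys: string[]): ConnectionState {
    const filters: Record<string, string> = {}
    for (const key of filterKeys) {
        const value = params.get(key)
        if (value !== null) {
            filters[key] = value
        }
    }
    const first = params.get('first')
    return {
        first: first !== null ? Number(first) : undefined,
        after: params.get('after') ?? undefined,
        filters,
    }
}
```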
This PR also has some other refactors:
- remove `<ConnectionSummary first>` that was used in an erroneous
calculation. It was only used as a hack to determine the `totalCount` of
the connection. This is usually returned in the connection result itself
and that is the only value that should be used.
- remove `?visible=N` param from some connection pages, just use
`?first=N`. This was intended to make it so that if you reloaded or
navigated directly to a page that had a list, subsequently fetching the
next page would only get `pageSize` additional records (e.g., 20), not
`first` (which is however many records were already showing) for
batch-based navigation. This is not worth the additional complexity that
`?visible=` introduces, and it is not even clearly desirable.
- 2 other misc. ones that do not affect user-facing behavior (see commit
messages)
## Test plan
Visit the site admin repositories and packages pages. Ensure all filter
options work and pagination works.
This PR stubs out the URI needed for the React UI to interface with the
appliance, and removes the previously implemented UI and components of
the React UI that were only around for a demo.
A number of helper and safety methods have also been added for
interfacing with JSON reads/writes and handling common errors.
While the HTTP handlers are still only stubs, this PR was growing in
size so I decided to cut it here and break apart the rest in upcoming
PRs. React UI is able to parse status and auth correctly at this time.
## Test plan
Unit tests
Implements CRUD on the new licenses DB. I had to make significant
changes from the initial setup after spending more time working on this.
There are lots of schema changes, but that's okay, as we have no data yet.
As in the RPC design, this is intended to accommodate new "types" of
licensing in the future, and the DB is structured accordingly.
There's also feedback that context around license management events is
very useful - this is encoded in the conditions table, and can be
extended to include more types of conditions in the future.
Part of https://linear.app/sourcegraph/issue/CORE-158
Part of https://linear.app/sourcegraph/issue/CORE-100
## Test plan
Integration tests
Locally, running `sg run enterprise-portal` indicates migrations proceed
as expected
For simple, MVP integrations with Telemetry Gateway, and also as a
reference. See the example; this can be consumed with:
```go
import telemetrygatewayv1 "github.com/sourcegraph/sourcegraph/lib/telemetrygateway/v1"
```
## Test plan
n/a
While testing the modelconfig system working end-to-end with the data
coming from the site configuration, I ran into a handful of minor
issues.
They are all kinda subtle so I'll just leave comments to explain the
what and why.
## Test plan
Added new unit tests.
## Changelog
NA
This PR adds experimental endpoints implementing context ranking and
storage, as required by RFC 969.
Also removes the deprecated and unused `getCodyIntent` query (we have
`chatIntent`).
## Test plan
- tested locally, not integrated with any clients
## Linked Issues
- Closes https://github.com/sourcegraph/sourcegraph/issues/35027
## Motivation and Context:
<!--- Why is this change required? What problem does it solve? -->
- To improve the UX by displaying the invalid token error message on the
UI to the users while adding a code host connection
## Existing Issue Root Cause:
- After the submit button is clicked while adding a code host
connection, the existing code navigates to the
`/site-admin/external-services/${data.addExternalService.id}` page (the
page which displays information of the newly added code host connection)
before the error message could even be displayed on the current page
## Changes Made:
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. List any
dependencies that are required for this change. -->
- Modified the onSubmit function (the one called when the submit button
is clicked on the code host connection addition page) to navigate to the
`/site-admin/external-services/${data.addExternalService.id}` page (the
newly added code host's information page) only if no error/warning is
returned from the gRPC call that validates the added code host's token
## Type of change:
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Refactoring (altering code without changing its external
behaviour)
- [ ] Documentation change
- [ ] Other
## Checklist:
- [x] Development completed
- [x] Comments added to code (wherever necessary)
- [x] Documentation updated (if applicable)
- [x] Tested changes locally
## Follow-up tasks (if any):
- None
## Test Plan
<!--- Please describe the tests that you ran to verify your changes.
Provide instructions so we can reproduce. -->
- Set up the codebase locally and tested the entire flow end-to-end
- Screen recording which shows the invalid token error message on the UI
in the code host connection addition flow:
https://github.com/sourcegraph/sourcegraph/assets/51479159/1857f32d-56a6-42c1-af88-ea3c9edc46a5
## Additional Comments
- Please let me know if any further changes are required elsewhere
In SvelteKit, we respect the setting `search.defaultPatternType` for the
initial search, but if you set and then unset the regexp toggle, it
always reverts to `keyword`. Now we revert to the default when a toggle
is unset.
This is important because some customers have disabled keyword search,
and for 5.5 we've directed them to use `search.defaultPatternType:
standard`.
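The fix can be sketched as a small helper (illustrative names; the actual toggle logic lives in the SvelteKit search input code, and the assumed behavior is simply falling back to the configured `search.defaultPatternType` instead of a hard-coded `keyword`):

```typescript
type SearchPatternType = 'keyword' | 'standard' | 'regexp'

function patternTypeAfterRegexpToggle(
    toggledOn: boolean,
    // Value of `search.defaultPatternType`; 'keyword' when unset.
    defaultPatternType: SearchPatternType = 'keyword'
): SearchPatternType {
    // Toggle on forces regexp; toggle off reverts to the configured default,
    // not unconditionally to 'keyword'.
    return toggledOn ? 'regexp' : defaultPatternType
}
```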
## Test plan
Manually tested search + toggles with following config:
* No settings
* `search.defaultPatternType: standard`, `keyword`, and `regexp`
This PR is what the past dozen or so
[cleanup](https://github.com/sourcegraph/sourcegraph/pull/63359),
[refactoring](https://github.com/sourcegraph/sourcegraph/pull/63731),
and [test](https://github.com/sourcegraph/sourcegraph/pull/63761) PRs
were all about: using the new `modelconfig` system for the completion
APIs.
This will enable users to:
- Use the new site config schema for specifying LLM configuration, added
in https://github.com/sourcegraph/sourcegraph/pull/63654. Sourcegraph
admins who use these new site config options will be able to support
many more LLM models and providers than is possible using the older
"completions" site config.
- For Cody Enterprise users, we no longer ignore the
`CodyCompletionRequest.Model` field, and we now support users specifying
any LLM model (provided it is "supported" by the Sourcegraph instance).
Beyond those two things, everything should continue to work as before,
with any existing "completions" configuration data being converted into
the `modelconfig` system (see
https://github.com/sourcegraph/sourcegraph/pull/63533).
## Overview
In order to understand how this all fits together, I'd suggest reviewing
this PR commit-by-commit.
### [Update internal/completions to use
modelconfig](e6b7eb171e)
The first change was to update the code we use to serve LLM completions.
(Various implementations of the `types.CompletionsProvider` interface.)
The key changes here were as follows:
1. Update the `CompletionRequest` type to include the `ModelConfigInfo`
field (to make the new Provider and Model-specific configuration data
available.)
2. Rename the `CompletionRequest.Model` field to
`CompletionRequest.RequestedModel`. (But with a JSON annotation to
maintain compatibility with existing callers.) This is to catch any bugs
related to using the field directly, since that is now almost guaranteed
to be a mistake. (See below.)
All of the `CompletionProvider`s were then updated to reflect these
changes:
- Any situation where we used the
`CompletionRequest.Parameters.RequestedModel` should now refer to
`CompletionRequest.ModelConfigInfo.Model.ModelName`. The "model name"
being the thing that should be passed to the API provider, e.g.
`gpt-3.5-turbo`.
- In some situations (`azureopenai`) we needed to rely on the Model ID
as a more human-friendly identifier. This isn't 100% accurate, but will
match the behavior we have today. A long doc comment calls out the
details of what is wrong with that.
- In other situations (`awsbedrock`, `azureopenai`) we read the new
`modelconfig` data to configure the API provider (e.g.
`Azure.UseDeprecatedAPI`), or surface model-specific metadata (e.g. AWS
Provisioned Throughput ARNs). While the code is a little clunky to avoid
larger refactoring, this is the heart and soul of how we will be writing
new completion providers in the future. That is, taking specific
configuration bags with whatever data that is required.
### [Fix bugs in
modelconfig](75a51d8cb5)
While we had lots of tests for converting the existing "completions"
site config data into the `modelconfig.ModelConfiguration` structure,
there were a couple of subtle bugs that I found while testing the larger
change.
The updated unit tests and comments should make that clear.
### [Update frontend/internal/httpapi/completions to use
modelconfig](084793e08f)
The final step was to update the HTTP endpoints that serve the
completion requests. There weren't any logic changes here, just
refactoring how we lookup the required data. (e.g. converting the user's
requested model into an actual model found in the site configuration.)
We support Cody clients sending either "legacy mrefs" of the form
`provider/model` like before, or the newer mref
`provider::apiversion::model`. Although it will likely be a while before
Cody clients are updated to only use the newer-style model references.
The existing unit tests for the completions APIs just worked, which was
the plan. But for the few changes that were required, I've added
comments to explain the situation.
### [Fix: Support requesting models just by their
ID](99715feba6)
> ... We support Cody clients sending either "legacy mrefs" of the form
`provider/model` like before ...
Yeah, so apparently I lied 😅. After doing more testing, the extension
_also_ sends requests where the requested model is just `"model"`.
(Without the provider prefix.)
So that now works too, and we just blindly match "gpt-3.5-turbo" to the
first mref with a matching model ID, such as
"anthropic::unknown::gpt-3.5-turbo".
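The three accepted request shapes can be sketched as a lookup against the instance's known mrefs (the real resolution lives in the Go backend; this TypeScript sketch only illustrates the matching rules, and the function name is hypothetical):

```typescript
// Known mrefs have the form "provider::apiversion::modelid".
function resolveModelRef(requested: string, knownMRefs: string[]): string | undefined {
    if (requested.includes('::')) {
        // Newer-style mref: must match exactly.
        return knownMRefs.find(mref => mref === requested)
    }
    if (requested.includes('/')) {
        // Legacy "provider/model" form: match provider and model ID.
        const [provider, modelID] = requested.split('/')
        return knownMRefs.find(mref => {
            const [p, , id] = mref.split('::')
            return p === provider && id === modelID
        })
    }
    // Bare model ID: blindly take the first mref with a matching model ID.
    return knownMRefs.find(mref => mref.split('::')[2] === requested)
}
```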
## Test plan
Existing unit tests pass, and I added a few tests. I also manually
tested my Sg instance configured to act in both "dotcom" mode and as a
prototypical Cody Enterprise instance.
## Changelog
Update the Cody APIs for chat or code completions to use the "new style"
model configuration. This allows for greater flexibility in configuring
LLM providers and exposing new models, and also allows Cody Enterprise
users to select different models for chats.
This will warrant a longer, more detailed changelog entry for the patch
release next week, as this unlocks many other exciting features.
This adds two new components for the common situation of needing to
display a styled path.
- `DisplayPath` handles splitting, coloring, spacing, slotting in a file
icon, adding a copy button, and ensuring that no spaces get introduced
when copying the path manually
- `ShrinkablePath` is built on top of `DisplayPath` and adds the ability
to collapse path elements into a dropdown menu
These are used in three places:
- The file header. There should be no change in behavior except maybe a
simplified DOM. This makes use of the "Shrinkable" version of the
component.
- The file search result header. This required carefully ensuring that
the text content of the node is exactly equal to the path so that the
character offsets are correct.
- The file popover, where it is used for both the repo name (unlinkified
version) and the file name (linkified version).
Fixes SRCH-718
Fixes SRCH-690
The "Exclude" options in the filter panels are very useful, but many are specific to Go. This change generalizes them so they apply in many more cases:
* All files with suffix `_test` plus extension (covers Go, Python, some Ruby, C++, C, more)
* All files with suffix `.test` plus extension (covers JavaScript, some Ruby)
* Ruby specs
* Third party folders (common general naming pattern)
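The generalized patterns can be sketched as regular expressions (the product's actual filter syntax may differ; these are illustrative stand-ins for the four cases above):

```typescript
const excludePatterns: Record<string, RegExp> = {
    // foo_test.go, foo_test.py, foo_test.cc, foo_test.c, ...
    testSuffix: /_test\.\w+$/,
    // foo.test.ts, foo.test.js, some Ruby, ...
    dotTestSuffix: /\.test\.\w+$/,
    // Ruby specs
    rubySpec: /_spec\.rb$/,
    // third_party/, third-party/, thirdparty/ folders
    thirdParty: /(^|\/)third[-_]?party\//,
}

function isExcluded(path: string): boolean {
    return Object.values(excludePatterns).some(pattern => pattern.test(path))
}
```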
Relates to SPLF-70
We register ctrl+backspace to go to the repository root, but that should
not trigger when an input field, such as the fuzzy finder, is focused.
Fixes srch-681
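The guard can be sketched as below (a DOM-free stand-in type so the sketch is self-contained; in the app this check would run against the keyboard event's target before handling the shortcut):

```typescript
// Structural stand-in for the currently focused DOM element.
interface FocusedElement {
    tagName: string
    isContentEditable?: boolean
}

function shouldHandleGlobalShortcut(target: FocusedElement | null): boolean {
    if (!target) {
        return true
    }
    const tag = target.tagName.toUpperCase()
    // Typing contexts (such as the fuzzy finder's input) swallow the shortcut.
    if (tag === 'INPUT' || tag === 'TEXTAREA' || tag === 'SELECT') {
        return false
    }
    return !target.isContentEditable
}
```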
## Test plan
Manual testing.
## Linked Issues
- Closes https://github.com/sourcegraph/sourcegraph/issues/38348
## Motivation and Context:
<!--- Why is this change required? What problem does it solve? -->
- Improves the UX of the password reset flow
## Changes Made:
<!--- Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context. List any
dependencies that are required for this change. -->
- Made changes to the following 2 flows:
- On the new password entry screen:
- Added a text which displays the email of the account for which the
password change request has been raised
- Added a back button to allow the users to go back to the previous
email entry screen if the account they want to reset the password for is
different
- On the sign-in screen which comes after successful password reset
request completion:
- Made changes to auto-populate the email text-box with the email linked
to the account on which the password reset request was completed
recently
## Type of change:
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Refactoring (altering code without changing its external
behaviour)
- [ ] Documentation change
- [ ] Other
## Checklist:
- [x] Development completed
- [ ] Comments added to code (wherever necessary)
- [ ] Documentation updated (if applicable)
- [x] Tested changes locally
## Follow-up tasks (if any):
- None
## Test Plan
<!--- Please describe the tests that you ran to verify your changes.
Provide instructions so we can reproduce. -->
- Setup the codebase locally along with configuring a custom SMTP to
enable the email delivery functionality with the help of this
documentation:
https://docs.sourcegraph.com/admin/config/email#configuring-sourcegraph-to-send-email-using-another-provider
- Tested the entire flow locally end-to-end.
- Screen recording of the password reset screen where the current email
ID is added along with a back button:
https://github.com/sourcegraph/sourcegraph/assets/51479159/a79fc338-ace0-4281-86d2-de7cc68eae20
- Screen recording of the sign-in screen after password reset is
successfully done where the email ID is auto-populated in the text-box:
https://github.com/sourcegraph/sourcegraph/assets/51479159/be7db65d-9421-4621-a1e9-a04a546b9757
## Additional Comments
- Please let me know if I need to make any further design changes from
the frontend side or any API contract related changes from the backend
side
---------
Co-authored-by: Vincent <evict@users.noreply.github.com>
Co-authored-by: Shivasurya <s.shivasurya@gmail.com>
This PR overhauls the UI a bunch to make it look more in line with other
pages, and fixes various smaller papercuts and bugs.
Closes SRC-377
Test plan:
Added storybooks and made sure existing ones still look good, created,
updated, deleted various webhooks locally.
The background publisher was started regardless of whether analytics was
disabled. This PR makes it so that we only publish analytics if it is
enabled.
To make it work and not duplicate the disabled analytics check, I moved
the usershell + background context creation to happen earlier.
## Test plan
CI and tested locally
## Changelog
* sg - only start the analytics background publisher when analytics are
enabled
---------
Co-authored-by: Jean-Hadrien Chabran <jh@chabran.fr>
I went through all call sites of the 3 search APIs (Stream API, GQL API,
SearchClient (internal)) and made sure that the query syntax version is
set to "V3".
Why?
Without this change, a new default search syntax version might have
caused a change in behavior for some of the call sites.
## Test plan
- No functional change, so relying mostly on CI
- The codeintel GQL queries set the patternType explicitly, so this
change is a NOP.
I tested manually
- search based code intel sends GQL requests with version "V3"
- repo badge still works
- compute GQL returns results
A couple of tweaks to the commit / diff view:
- Linking both file paths in the header for renamed files
- Collapse renamed file diffs without changes by default
- Move "no changes" out of `FileDiffHunks` to not render a border around
the text.
- Add description for binary files
- Adjust line height and font size to match what we use in the file view
- Added the `visibly-hidden` utility class to render content for a11y
purposes (I didn't test the changes I made with a screenreader though)
Contributes to SRCH-523
## Test plan
Manual testing
The OTEL upgrade https://github.com/sourcegraph/sourcegraph/pull/63171
bumps the `prometheus/common` package too far via transitive deps,
causing us to generate configuration for alertmanager that alertmanager
doesn't accept, at least until the alertmanager project cuts a new
release with a newer version of `prometheus/common`.
For now we forcibly downgrade with a replace. Everything still builds,
so we should be good to go.
## Test plan
`sg start` and `sg run prometheus`. On `main`, editing
`observability.alerts` will cause Alertmanager to refuse to accept the
generated configuration. With this patch, all seems well: config
changes go through as expected. This is a similar test plan to
https://github.com/sourcegraph/sourcegraph/pull/63329
## Changelog
- Fix Prometheus Alertmanager configuration failing to apply
`observability.alerts` from site config
`REDIS_ENDPOINT` is now registered by gateway, but it is also registered
in `internal/redispool` as the fallback for when the other values are
not set.
The real fix would be to not have env vars in that package and instead
have each service create one instance of each of those two in their
`cmd/`, but that's a lot of work, so in the short term we fix it by
reading the fallback using `os.Getenv`.
Test plan:
`sg run cody-gateway` doesn't panic.
---------
Co-authored-by: Jean-Hadrien Chabran <jh@chabran.fr>
**chore(appliance): extract constant for configmap name**
To the reconciler, this is just a value, but to higher-level packages
like appliance, there is a single configmap that is an entity. Let's
make sure all high-level orchestration packages can reference our name
for it. This could itself be extracted to injected config if there was a
motivation for it.
**chore(appliance): extract NewRandomNamespace() in k8senvtest**
From reconciler tests, so that we can reuse it in self-update tests.
**feat(appliance): self-update**
Add a worker thread to the appliance that periodically polls release
registry for newer versions, and updates its own Kubernetes deployment.
If the APPLIANCE_DEPLOYMENT_NAME environment variable is not set, this
feature is disabled. This PR will be accompanied by one to the
appliance's helm chart to add this variable by default.
**fix(appliance): only self-update 2 minor versions above deployed SG**
**chore(appliance): self-update integration test extra case**
Check that self-update doesn't run when SG is not yet deployed.
https://linear.app/sourcegraph/issue/REL-212/appliance-can-self-upgrade
This PR fixes an important bug in #62976, where we didn't properly map the
symbol line match to the return type. Instead, we accidentally treated symbol
matches like file matches and returned the start of the file.
## Test plan
Add new unit test for symbol match conversion. Extensive manual testing.
Instead of fetching the file for every node, this passes in a
request-scoped cache to minimize the number of gitserver roundtrips, but
does no fetching if `surroundingContent` is not requested by the caller.
This commit removes files/dependencies that we are not using (anymore).
In the case of `@sourcegraph/wildcard`, we never want to import
dependencies from it, but have done so accidentally in the past. I hope
that we can prevent this by removing it from dependencies (which we
don't need anyway).
## Test plan
`pnpm build` and CI
This PR fixes the following:
- Handles source range translation in the occurrences API
(Fixes https://linear.app/sourcegraph/issue/GRAPH-705)
- Handles range translation when comparing with document occurrences in
search-based and syntactic usagesForSymbol implementations
Throwing this PR up in its current state as I think adding the bulk
conversion API will be a somewhat complex task, so we should split them
into separate PRs anyways, and I don't have time to continue working on
this right now.
Some design notes:
- We want to avoid passing around full CompletedUpload and RequestState
objects, which is why I chose to create a smaller UploadSummary type and
decided to pass around GitTreeTranslator, as that is the minimal thing
we need to handle range re-mapping.
- Yes, this PR increases the surface of the UploadLike type, but I think
it's still quite manageable.
## Test plan
manual testing, existing tests on gittreetranslator
---------
Co-authored-by: Christoph Hegemann <christoph.hegemann@sourcegraph.com>
Removes the `sg telemetry` command that pertains to the legacy V1
exporter that is specific to Cloud instances.
I got asked about this recently, and especially with the new `sg
analytics` for usage of the `sg` CLI, this has the potential to be
pretty confusing.
Part of https://linear.app/sourcegraph/issue/CORE-104
## Test plan
n/a
## Changelog
- `sg`: the deprecated `sg telemetry` command for allowlisting export of
V1 telemetry from Cloud instances has been removed. Use telemetry V2
instead.
A couple of minor changes to minimize the diff for "large completions
API refactoring".
Most changes are just a refactoring of the `openai` completions
provider, which I apparently missed in
https://github.com/sourcegraph/sourcegraph/pull/63731. (There are still
some smaller tweaks that can be made to the `fireworks` or `google`
completion providers, but they aren't as meaningful.)
This PR also removes a couple of unused fields and methods. e.g.
`types.CompletionRequestParameters::Prompt`. There was a comment to the
effect of it being long since deprecated, and it is no longer read
anywhere on the server side. So I'm assuming that a green CI/CD build
means it is safe to remove.
## Test plan
CI/CD
## Changelog
NA
This PR adds more unit tests for the "Chat Completions" HTTP endpoint.
The goal is to have unit tests for more of the one-off quirks that we
support today, so that we can catch any regressions when refactoring
this code.
This PR adds _another layer_ of test infrastructure to use to streamline
writing completion tests. (Since they are kinda involved, and are
mocking out multiple interactions, it's kinda necessary.)
It introduces a new data type `completionsRequestTestData` which
contains all of the "inputs" to the test case, as well as some of the
things we want to validate.
```go
type completionsRequestTestData struct {
SiteConfig schema.SiteConfiguration
UserCompletionRequest types.CodyCompletionRequestParameters
WantRequestToLLMProvider map[string]any
WantRequestToLLMProviderPath string
ResponseFromLLMProvider map[string]any
WantCompletionResponse types.CompletionResponse
}
```
Then to run one of these tests, you just call the new function:
```go
func runCompletionsTest(t *testing.T, infra *apiProviderTestInfra, data completionsRequestTestData) {
```
With this, the new pattern for completion tests is of the form:
```go
func TestProviderX(t *testing.T) {
// Return a valid site configuration, and the expected API request body
// we will send to the LLM API provider X.
getValidTestData := func() completionsRequestTestData {
...
}
t.Run("TestDataIsValid", func(t *testing.T) {
// Just confirm that the stock test data works as expected,
// without any test-specific modifications.
data := getValidTestData()
runCompletionsTest(t, infra, data)
})
}
```
And then, for more sophisticated tests, we would just overwrite whatever
subset of fields are necessary from the stock test data.
For example, testing the way AWS Bedrock provisioned throughput ARNs get
reflected in the completions API can be done by creating a function to
return the specific site configuration data, and then:
```go
t.Run("Chat", func(t *testing.T) {
data := getValidTestData()
data.SiteConfig.Completions = getProvisionedThroughputSiteConfig()
// The chat model is using provisioned throughput, so the
// URLs are different.
data.WantRequestToLLMProviderPath = "/model/arn:aws:bedrock:us-west-2:012345678901:provisioned-model/abcdefghijkl/invoke"
runCompletionsTest(t, infra, data)
})
t.Run("FastChat", func(t *testing.T) {
data := getValidTestData()
data.SiteConfig.Completions = getProvisionedThroughputSiteConfig()
data.UserCompletionRequest.Fast = true
// The fast chat model does not have provisioned throughput, and
// so the request path to bedrock just has the model's name. (No ARN.)
data.WantRequestToLLMProviderPath = "/model/anthropic.claude-v2-fastchat/invoke"
runCompletionsTest(t, infra, data)
})
```
## Test plan
Added more unit tests.
## Changelog
NA
Closes srch-494
This adds the search query syntax introduction component to the search
homepage. I tried to replicate the React version as closely as possible.
I originally wanted to reuse the logic to generate the example sections,
but since it had dependencies on wildcard, I duplicated it instead.
Notable additional changes:
- Added a `value` method to the temporary settings store to make it
easier to get the current value of the settings store. It only resolves
(or rejects) once the data is loaded.
- Extended the tabs component to not show the tab header if there is
only a single panel. This makes it easier for consumers to render tabs
conditionally.
- Added the `ProductStatusBadge` component
- Various style adjustments
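The added `value` method can be sketched as follows (a minimal, framework-free stand-in; the real store is Svelte-specific, and the assumed behavior is just that the promise settles only once the settings data has loaded):

```typescript
class TemporarySettingsStore<T> {
    private data: T | undefined
    private isLoaded = false
    private waiters: Array<(value: T) => void> = []

    setData(data: T): void {
        this.data = data
        this.isLoaded = true
        // Settle any pending value() promises now that data is available.
        for (const waiter of this.waiters.splice(0)) {
            waiter(data)
        }
    }

    value(): Promise<T> {
        if (this.isLoaded) {
            return Promise.resolve(this.data as T)
        }
        // Not loaded yet: resolve later, once setData is called.
        return new Promise(resolve => this.waiters.push(resolve))
    }
}
```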
For reference, the relevant parts of the React version are in
https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/client/branded/src/search-ui/components/useQueryExamples.tsx
and
https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/client/branded/src/search-ui/components/QueryExamples.tsx
## Test plan
Manual testing. I manually set the value received from temporary
settings to `null` (in code) to force trigger the compute logic.