The current "name" thing is not used anywhere for subscriptions - all
internal capabilities and APIs depend on the UUID, and Enterprise
Portal will use the UUID as well.
This change replaces all name/IDs with the UUID, prefixed in Enterprise
Portal format, as we prepare to launch Enterprise Portal in more places
(such as Cody Analytics: https://linear.app/sourcegraph/issue/CORE-101).
This is particularly relevant for Cody Analytics so I can document how
to find the UUID in a way that isn't "get it from the URL".
It's not super beautiful in the subscriptions list, but as we progress
on the migration to Enterprise Portal I plan to replace the ID in the
list with "Display name", which is a first-class citizen in Enterprise
Portal.
## Test plan
<img width="953" alt="image"
src="https://github.com/sourcegraph/sourcegraph/assets/23356519/30c4ae6b-e50b-485c-a2c8-e4ab6445fc01">

This cuts building `./cmd/sourcegraph` from ~60s to ~6s when using `sg
start single-program-experimental-blame-sqs` (which is dev only and
never needs bundled assets).
## Test plan
n/a, dev experimental only
Previously, enabling then disabling the regexp toggle would always set the
patterntype to `keyword`, even when the user has set
`search.defaultPatternType: standard`. Now, toggles always revert back to the
default pattern type.
To support this, this PR adds `defaultPatternType` to the nav bar query state,
which is updated every time there's a settings update. Having
`defaultPatternType` available also lets us fix another bug: when
the default pattern type has been set to `standard` we no longer awkwardly show
`patterntype: standard` in the search bar. (This confusing behavior was
introduced recently in #63326.)
**Overall, this PR set us up to remove `experimentalFeatures.keywordSearch`,
along with the keyword search toggle.** To opt out of keyword search, users can
just set `search.defaultPatternType`, and have it work everywhere.
While staring at execution logs locally I noticed that the stamp files
stable-status.txt and volatile-status.txt were inputs to targets such as
`//wolfi-images/sourcegraph-base:wolfi_config`, causing these `yq`
targets to be executed every time `--stamp` changes or stable stamp vars
change. I don't know how this interacts with BwoB; I will ask in the
Bazel Slack for clarification:
https://bazelbuild.slack.com/archives/CA31HN1T3/p1719324079626239
## Test plan
`bazel build` on an oci_image target works fine, confirmed by CI
## Changelog
Addresses the following error, which was missed because it didn't happen
for me locally; see the original PR:
https://github.com/sourcegraph/sourcegraph/pull/59638
```
ERROR: Traceback (most recent call last):
File "/Users/will/code/sourcegraph/client/backstage-frontend/node_modules/@sourcegraph/build-config/BUILD.bazel", line 9, column 22, in <toplevel>
npm_link_all_packages(name = "node_modules")
File "/private/var/tmp/_bazel_will/dcc801c077cac507d08c05a871989466/external/npm/defs.bzl", line 3214, column 13, in npm_link_all_packages
fail(msg)
Error in fail: The npm_link_all_packages() macro loaded from @npm//:defs.bzl and called in bazel package 'client/backstage-frontend/node_modules/@sourcegraph/build-config' may only be called in bazel packages that correspond to the pnpm root package or pnpm workspace projects. Projects are discovered from the pnpm-lock.yaml and may be missing if the lockfile is out of date. Root package: '', pnpm workspace projects: '', 'client/branded', 'client/browser', 'client/build-config', 'client/client-api', 'client/codeintellify', 'client/cody-shared', 'client/cody-ui', 'client/common', 'client/eslint-plugin-wildcard', 'client/extension-api', 'client/extension-api-types', 'client/http-client', 'client/jetbrains', 'client/observability-client', 'client/observability-server', 'client/shared', 'client/storybook', 'client/template-parser', 'client/testing', 'client/vscode', 'client/web', 'client/web-sveltekit', 'client/wildcard', 'schema'
```
## Test plan
@willdollman
## Changelog
Previously, the code had several problems:
- There was code which seemed like it was handling path renames across
commits, but it was just a no-op. It was not documented why it was
implemented that way.
- The Position adjustment API and Range adjustment APIs were
inconsistent.
- The Position and Range adjustment APIs suggested that they could
handle file renames, but they don't/cannot.
- The mocking logic was just returning wrong outputs (commits
instead of paths in one place).
This patch documents the rationale behind the lack of rename handling,
as well as adds a slow-but-correct form of Document checking.
Previously, there was no way to enable the "tracing" feature from
Fireworks https://readme.fireworks.ai/docs/enabling-tracing This PR
solves the problem by forwarding the `X-Fireworks-Genie` HTTP header to
Fireworks if this HTTP header is set by the Gateway client.
Fixes CODY-2555
## Test plan
N/A
## Changelog
Follow-up to #63448 - we now get Redis spans, but not the database
operations that happen throughout a migration. Maybe this will do the
thing?
## Test plan
n/a
We were aligned on using non-prefixed UUIDs internally, only prefixing
them for external consumption. This lets us use the native UUID type.
Part of https://linear.app/sourcegraph/issue/CORE-155
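A minimal sketch of the convention (the `es_` prefix is an assumption for illustration; the real Enterprise Portal prefix format may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// subscriptionIDPrefix is a hypothetical external prefix for illustration.
const subscriptionIDPrefix = "es_"

// externalID renders the bare UUID stored internally in its prefixed,
// externally-visible form.
func externalID(uuid string) string {
	return subscriptionIDPrefix + uuid
}

// internalID strips the external prefix back down to the bare UUID that the
// database's native uuid column stores.
func internalID(external string) (string, error) {
	uuid, ok := strings.CutPrefix(external, subscriptionIDPrefix)
	if !ok {
		return "", fmt.Errorf("invalid external ID %q: missing %q prefix", external, subscriptionIDPrefix)
	}
	return uuid, nil
}

func main() {
	ext := externalID("2c8cf0b8-1111-2222-3333-444444444444")
	fmt.Println(ext) // prints "es_2c8cf0b8-1111-2222-3333-444444444444"
}
```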
## Test plan
CI, `sg run enterprise-portal`
`select * from enterprise_portal_subscriptions;` on dev shows we are
using the non-prefixed UUIDs only. Manual migration, since it doesn't
seem like GORM can do this for you:
```sql
ALTER TABLE enterprise_portal_subscriptions ALTER COLUMN id TYPE uuid USING id::uuid;
```
Checked that data is intact.
This makes it easier to run Sourcegraph in local dev by compiling a few
key services (frontend, searcher, repo-updater, gitserver, and worker)
into a single Go binary and running that.
Compared to `sg start` (which compiles and runs ~10 services), it's
faster to start up (by ~10% or a few seconds), takes a lot less memory
and CPU when running, has less log noise, and rebuilds faster. It is
slower to recompile for changes just to `frontend` because it needs to
link in more code on each recompile, but it's faster for most other Go
changes that require recompilation of multiple services.
This is only intended for local dev as a convenience. There may be
different behavior in this mode that could result in problems when your
code runs in the normal deployment. Usually our e2e tests should catch
this, but to be safe, you should run in the usual mode if you are making
sensitive cross-service changes.
Partially reverts "svcmain: Simplify service setup (#61903)" (commit
9541032292).
## Test plan
Existing tests cover any regressions to existing behavior. This new
behavior is for local dev only.
Codeintel v1 telemetry/event logging was broken in
https://github.com/sourcegraph/sourcegraph/pull/62586 due to the lack of
parens around a ternary operator. This simply fixes that issue.
## Test plan
CI
## Changelog
Co-authored-by: Dan Adler <5589410+dadlerj@users.noreply.github.com>
As part of the v1 -> v2 telemetry transition, the in-product analytics
("IPA", aka admin analytics) pages all need to be updated as well.
The codeintel IPA page shows codeintel actions by language. We aren't
capturing this metadata for v2 events. This PR adds it to private
metadata (since this is only needed on the instance itself).
## Test plan
CI
---------
Co-authored-by: Dan Adler <5589410+dadlerj@users.noreply.github.com>
Co-authored-by: Aditya Kalia <32119652+akalia25@users.noreply.github.com>
The `internal/modelconfig` package provides the schema by which
Sourcegraph instances will keep track of LLM providers, models, and
various configuration settings. The `cmd/cody-gateway-config` tool
writes a JSON file containing the model configuration data for Cody
Gateway.
This PR fixes an assortment of problems with these components.
d8963daf6d - Due to a logic error, we were
saving the model ID of "unknown" into `models.json`. We now put the
actual `ModelID` from the `ModelRef`.
d9baa65534 - Updates the model information
that gets rendered, so that it matches what is hard-coded into the
`sourcegraph/cody` repo. (We were missing any reference to the newly
added Google Gemini models.)
c28780c4d9 - Relaxes the resource ID
regular expression so that it is now legal to add periods. So
"gemini-1.5-latest" is now considered a valid `ModelID`.
... however, the validation checks were incorrectly passing because
there was a bug in the regular expression. And after writing some unit
tests for the `validateModelRef` function, I found several other
problems with that regular expression 😅 . But we should be much closer
to things working as intended now.
## Test plan
Added tests
The general principle here is that we want to remove non-functional
upsells in our product. They add clutter and complexity and sometimes
annoy admins and users. As we unify the products, we will be adding a
lot more functional CTAs in the product to the same effect.
## Test plan
n/a; removal only
This page (at `/cody/search`) let you type in a natural-language
description of what to search for, and then it used an LLM to rewrite
that to Sourcegraph search query syntax. This was enabled using the
`cody-web-search` feature flag, which has been disabled on dotcom for a
while and was never documented or shared with customers, so it's safe to
remove.
It was a cool idea, but we decided to focus on making Cody's search in
the editor better, and we haven't seriously improved this for ~12
months.
## Test plan
n/a; removed feature
Repository embeddings were removed in Feb 2024 as part of the Cody
Enterprise GA. They have not been used since. Some Sourcegraph instances
still running an older pre-GA version may still rely on Cody Gateway
(deployed by us) for embeddings generation, but they do not rely on this
UI code at all, so it is safe to remove.
No changelog entry needed since this code's UI has been disabled since
Feb 2024.
## Test plan
Existing tests suffice since this is removing functionality.
Previously, the code would prevent us from using the AccessTokensAdmin
config setting on dotcom entirely, instead of just restricting it when
site admins create an access token for a different user, which was the
intent.
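The intended rule can be sketched as follows (names are hypothetical; the real check lives in the access-token creation path):

```go
package main

import (
	"errors"
	"fmt"
)

// checkAccessTokenCreation rejects only the problematic case: an actor on
// dotcom creating a token on behalf of a *different* user. Creating a token
// for yourself remains allowed everywhere.
func checkAccessTokenCreation(isDotcom bool, actorUID, subjectUID int32) error {
	if isDotcom && actorUID != subjectUID {
		return errors.New("creating access tokens for other users is disabled on Sourcegraph.com")
	}
	return nil
}

func main() {
	fmt.Println(checkAccessTokenCreation(true, 1, 1)) // own token: allowed
	fmt.Println(checkAccessTokenCreation(true, 1, 2)) // other user's token on dotcom: rejected
}
```

Previously the check was effectively `if isDotcom { reject }`, disabling the setting on dotcom entirely.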
## Test plan
CI
This is a slow and CPU-intensive task, and there is little value in
syntax-highlighting lockfiles. Just disable this on all instances, not
only dotcom.
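For illustration, the skip logic amounts to a basename check along these lines (the exact file list here is an assumption):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// isLockfile reports whether a path is a dependency lockfile whose syntax
// highlighting we skip. These files are large, machine-generated, and
// expensive to highlight.
func isLockfile(path string) bool {
	switch filepath.Base(path) {
	case "package-lock.json", "yarn.lock", "pnpm-lock.yaml",
		"Gemfile.lock", "Cargo.lock", "go.sum", "poetry.lock":
		return true
	}
	return false
}

func main() {
	fmt.Println(isLockfile("client/web/package-lock.json")) // true
	fmt.Println(isLockfile("cmd/frontend/main.go"))         // false
}
```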
## Test plan
CI
## Changelog
- Syntax highlighting is disabled on lockfiles (such as
`package-lock.json`) because it is CPU-intensive on these large files
and very rarely desirable.
Previously, all users were allowed to view a user's usage stats. This is
an admin feature and it is not needed nor desirable for other users to
be able to view a user's usage stats.
## Test plan
CI
**feat(appliance): local developer mode**
- Expose a toggle in the web UI to enable dev mode
- dev mode is currently defined as: no container resource
requests/limits
**fix(appliance): fix misconfigurations to 2 services**
Gitserver: the configured probe timeouts were too aggressive.
Indexed-search: the image name was wrong.
Both of these were drift from Helm that we didn't catch. Luckily the
appliance is still pre-release!
---
Closes
https://linear.app/sourcegraph/issue/REL-199/populate-accurate-list-of-versions-to-install
Adds the `dotcom.codyProConfig.useEmbeddedUI` site config param.
This param defines whether the Cody Pro subscription and team management
UI should be served from the connected instance running in dotcom mode.
The default value is `false`. This change allows us to enable the SSC
proxy on the instance without enabling the new embedded Cody Pro UI.
Previously, whether the embedded Cody Pro UI was enabled was determined
by `dotcom.codyProConfig` being set at all, which prevented us from
enabling the SSC proxy without enabling the embedded UI:
> Whether the SSC proxy is enabled is [defined based on
`dotcom.codyProConfig`](41fb56d619/cmd/frontend/internal/ssc/ssc_proxy.go (L227-L231))
being set in the site config. This value is also partially
[propagated](41fb56d619/cmd/frontend/internal/app/jscontext/jscontext.go (L481))
to the frontend via jscontext. And the frontend [uses this
value](41fb56d619/client/web/src/cody/util.ts (L8-L18))
to define whether to use new embedded UI or not.
For more details see [this Slack
thread](https://sourcegraph.slack.com/archives/C05PC7AKFQV/p1719010292837099?thread_ts=1719000927.962429&cid=C05PC7AKFQV).
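The decoupling can be sketched as two independent predicates (the accessor names here are hypothetical):

```go
package main

import "fmt"

// CodyProConfig mirrors the relevant shape of dotcom.codyProConfig.
type CodyProConfig struct {
	UseEmbeddedUI bool
}

// sscProxyEnabled: the proxy only requires codyProConfig to be set at all.
func sscProxyEnabled(c *CodyProConfig) bool {
	return c != nil
}

// embeddedUIEnabled: the embedded Cody Pro UI additionally requires the new
// useEmbeddedUI flag, which defaults to false.
func embeddedUIEnabled(c *CodyProConfig) bool {
	return c != nil && c.UseEmbeddedUI
}

func main() {
	proxyOnly := &CodyProConfig{UseEmbeddedUI: false}
	fmt.Println(sscProxyEnabled(proxyOnly), embeddedUIEnabled(proxyOnly)) // true false
}
```

With the flag unset, the proxy can be enabled while the frontend continues to use the old (non-embedded) UI.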
## Test plan
- CI
- Tested manually:
- Run a Sourcegraph instance locally in dotcom mode
- Set `dotcom.codyProConfig` in the site config
- Check that `context.frontendCodyProConfig` returns the [correct
values from the site
config](184da4ce4a/cmd/frontend/internal/app/jscontext/jscontext.go (L711-L715))
## Changelog
The use of different types makes it clear which kind of path is needed
in which place. This also makes the CodeNavService layering clearer;
it has the responsibility of taking in RepoRelPaths and correctly interfacing
with LsifStore, which deals in UploadRelPath values.
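This is the classic newtype pattern; roughly (the type shapes and helper below are illustrative, not the actual definitions):

```go
package main

import (
	"fmt"
	"strings"
)

// RepoRelPath is a path relative to the repository root; this is what
// CodeNavService accepts from callers.
type RepoRelPath struct{ raw string }

// UploadRelPath is a path relative to the root of a precise-indexing upload;
// this is what LsifStore deals in.
type UploadRelPath struct{ raw string }

// toUploadRelPath converts a repo-relative path for an upload rooted at the
// given directory. (Hypothetical helper for illustration.)
func toUploadRelPath(uploadRoot string, p RepoRelPath) UploadRelPath {
	return UploadRelPath{raw: strings.TrimPrefix(p.raw, uploadRoot)}
}

func main() {
	p := RepoRelPath{raw: "lib/codeintel/util.go"}
	fmt.Println(toUploadRelPath("lib/", p).raw) // prints "codeintel/util.go"
}
```

Because the two types are distinct, passing an upload-relative path where a repo-relative one is expected becomes a compile error rather than a silent bug.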
The [Server-side Cody Model
Selection](https://linear.app/sourcegraph/project/server-side-cody-model-selection-cca47c48da6d)
project requires refactoring a lot of how the Completions API endpoint
works for the Sourcegraph backend. It will need to update any place
where we interact with the site config as it relates to LLMs, or any
time we figure out which models can-or-can-not be used.
In order to safely land that refactoring without breaking existing
users, who will be using the "older config format", we need tests. LOTS
and LOTS of tests.
This PR adds the necessary infrastructure for writing unit tests against
the Completions API, and adds a few basic ones for the Anthropic API
provider. I wouldn't say it's as easy as we'd like to write these tests,
but at least now it is _possible_. And we can further streamline things
from here.
## Overview
The bulk of the functionality is in `entconfig_base_test.go`. That file
contains some basic unit tests for how the enterprise configuration data
is loaded, and provides the mocks and test infrastructure.
The crux of this is the `apiProviderTestInfra` struct. It bundles the
mocks and the HTTP handlers under test, and provides high-level methods
for invoking the completions API.
```go
type apiProviderTestInfra struct {}
func (ti *apiProviderTestInfra) PushGetModelResult(model string, err error)
func (ti *apiProviderTestInfra) SetSiteConfig(siteConfig schema.SiteConfiguration)
func (ti *apiProviderTestInfra) CallChatCompletionAPI(t *testing.T, reqObj types.CodyCompletionRequestParameters) (int, string)
func (ti *apiProviderTestInfra) CallCodeCompletionAPI(t *testing.T, reqObj types.CodyCompletionRequestParameters) (int, string)
func (ti *apiProviderTestInfra) AssertCompletionsResponse(t *testing.T, rawResponseJSON string, wantResponse types.CompletionResponse)
```
What gets tricky, however, is how we actually hook into the
implementation details of the completions API endpoint. You see, the
user makes an HTTP request to the Sourcegraph instance, but we then
figure out which LLM model to use and build an HTTP request to send to
the specific LLM API provider.
So the test infrastructure allows you to mock out that "middle part".
The
`AssertCodyGatewayReceivesRequestWithResponse`/`AssertGenericExternalAPIRequestWithResponse`
functions will:
1. Verify that the Sourcegraph instance is making an API call to the LLM
provider in the format we expect (e.g. a correctly shaped Anthropic API
request).
2. Verify that the outbound HTTP request looks like it should: that it
contains the right authorization headers, URL path, etc. (e.g. is it
using the API key from the site config?).
3. Finally, return the HTTP response from the API provider (i.e.
whatever Anthropic or Cody Gateway would have returned).
```go
type assertLLMRequestOptions struct {
	// WantRequestObj is what we expect the outbound HTTP request's JSON body
	// to be equal to. Required.
	WantRequestObj any
	// OutResponseObj is serialized to JSON and sent to the caller, i.e. our
	// LLM API Provider which is making the API request. Required.
	OutResponseObj any
	// WantRequestPath is the URL Path we expect in the outbound HTTP request.
	// No check is done if empty.
	WantRequestPath string
	// WantHeaders are HTTP header key/value pairs that must be present.
	WantHeaders map[string]string
}

func (ti *apiProviderTestInfra) AssertCodyGatewayReceivesRequestWithResponse(
	t *testing.T, opts assertLLMRequestOptions)

func (ti *apiProviderTestInfra) AssertGenericExternalAPIRequestWithResponse(
	t *testing.T, opts assertLLMRequestOptions)
```
Unfortunately it's super gnarly because we aren't really exposing the
API data types from the API providers in a useful way. (e.g. perhaps we
should just use the standard Anthropic Golang API client library.) So
generating the test data relies on a lot of `map[string]any` to make it
easier to construct arbitrary data types that can serialize to JSON the
way that we need them to.
Anyways, with all of this test infrastructure in-place, you can write
API provider tests like the following. Here's one such test from
`entconfig_anthropic_test.go` which confirms that various aspects of the
site-configuration are honored correctly when configured to use BYOK
mode.
I've added "ℹ️ comments" to highlight some of the trickier parts.
```go
t.Run("ViaBYOK", func(t *testing.T) {
	const (
		anthropicAPIKeyInConfig      = "secret-api-key"
		anthropicAPIEndpointInConfig = "https://byok.anthropic.com/path/from/config"
		chatModelInConfig            = "anthropic/claude-3-opus"
		codeModelInConfig            = "anthropic/claude-3-haiku"
	)
	infra.SetSiteConfig(schema.SiteConfiguration{
		CodyEnabled:                  pointers.Ptr(true),
		CodyPermissions:              pointers.Ptr(false),
		CodyRestrictUsersFeatureFlag: pointers.Ptr(false),
		// LicenseKey is required in order to use Cody.
		LicenseKey: "license-key",
		Completions: &schema.Completions{
			Provider:        "anthropic",
			AccessToken:     anthropicAPIKeyInConfig,
			Endpoint:        anthropicAPIEndpointInConfig,
			ChatModel:       chatModelInConfig,
			CompletionModel: codeModelInConfig,
		},
	})

	t.Run("ChatModel", func(t *testing.T) {
		// ℹ️ Generating the "wantAnthropicRequest" and "outAnthropicResponse"
		// data is super-tedious. So we instead have a single function
		// that returns "a valid set", that we then customize.
		// So here, we just update the model we expect to see in the API
		// call to Anthropic.

		// Start with the stock test data, but customize it to reflect
		// what we expect to see based on the site configuration.
		testData := getValidTestData()
		testData.OutboundAnthropicRequest["model"] = "anthropic/claude-3-opus"

		// Register our hook to verify Cody Gateway got called with
		// the requested data.
		infra.AssertGenericExternalAPIRequestWithResponse(
			t, assertLLMRequestOptions{
				WantRequestPath: "/path/from/config",
				WantRequestObj:  &testData.OutboundAnthropicRequest,
				OutResponseObj:  &testData.InboundAnthropicRequest,
				WantHeaders: map[string]string{
					// Yes, Anthropic's API uses "X-Api-Key" rather than the "Authorization" header. 🤷
					"X-Api-Key": anthropicAPIKeyInConfig,
				},
			})

		// ℹ️ This `PushGetModelResult` is just a quirk of how the code
		// under test works. We mock out the `getModelFn` that is invoked
		// to resolve the _actual_ LLM model to use. (And not necessarily
		// use the one from the HTTP request.)
		infra.PushGetModelResult(chatModelInConfig, nil)

		status, responseBody := infra.CallChatCompletionAPI(t, testData.InitialCompletionRequest)
		assert.Equal(t, http.StatusOK, status)
		infra.AssertCompletionsResponse(t, responseBody, types.CompletionResponse{
			// ℹ️ The "totally rewrite it in Rust!" is coming from the
			// fake Anthropic response, from `getValidTestData`.
			Completion: "you should totally rewrite it in Rust!",
			StopReason: "max_tokens",
			Logprobs:   nil,
		})
	})
})
```
## Next steps
Once this gets checked-in, my plan is to carefully add new unit tests
for the existing functionality before DISMANTLING the code to write
through an entirely different site configuration object. 🤞 this will
allow me to do so in such a way that I can confirm my changes won't
alter any existing Sourcegraph instances that are using the older
configuration format.
## Test plan
Adds more tests.
## Changelog
NA. Just trivial changes and adding more tests.
The name of the product is "Cody", not "Cody AI". Also, "AI" just looks
dumb and hype-y.
## Test plan
View the navbar and ensure it reads "Cody" not "Cody AI".
## Changelog
- In the navbar, Cody is now just "Cody" not "Cody AI".
As we set up to introduce more tables to Enterprise Portal, I think
it'll be more sustainable for things to get their own subpackages. No
real code changes, just moving things around.
## Test plan
CI
This reverts commit 5cf81e0210
(https://github.com/sourcegraph/sourcegraph/pull/63378).
This PR introduced a call to the SSC backend via a proxy on the
Sourcegraph backend (see `cmd/frontend/internal/ssc/ssc_proxy.go`). It
turned out that the proxy is not properly configured on dotcom, causing
requests to fail with 503 "proxy not configured" errors.
Reverting this PR until we fix the configuration.
## Test plan
- CI
## Changelog
Currently the fuzzy finder filters and ranks results received from the
server locally. This was done to improve performance since local
filtering is much faster.
However, it can introduce inconsistencies (as reported) because the
local filtering logic works differently than the one on the server. That
means the shown results are dependent on the local cache, which is not
obvious to the user.
This commit removes the client side filtering and ranking and instead
relies only on the server for this. This makes things more consistent
and predictable, at the expense of being a little slower. However it
still feels quite fast to me.
Note that I didn't implement some aspects due to the limitations of the
GraphQL-based search API:
- No match highlighting. Personally I haven't missed it so far, and I
don't know if the highlighting actually provides a lot of value.
- No total result count. It seems that since we are adding `count:50`,
the server cannot actually give us an approximate total count. But we
should probably still convey somehow that we are limiting results to the
top 50.
Because we no longer have locally cached data that can be shown
immediately, I decided to increase the throttle time to prevent the
result list from flickering in and out when typing at a moderate speed.
This change enables three additional features: 'search all' mode,
multi-word search, and regex search via `/.../` literals (just like for
normal search queries). This is consistent with our existing search
query language (currently regex literals are not syntax highlighted, but
we should consider doing that).
Fixes srch-139
Fixes srch-133
Fixes srch-134
Fixes srch-543
https://github.com/sourcegraph/sourcegraph/assets/179026/81e24345-9e06-4df6-bb4a-8a55e433bfd1
## Test plan
Manual testing.
## Changelog
- Add 'search all' tab
- Support multi-word search
- Support regular expression patterns
- Fix matching reliability