Cody no longer needs it, and it is now obsolete!
Since App added a not-insignificant number of new concepts and alternative code paths, I decided to take some time and remove it from our codebase.
This PR removes ~21k lines of code. If we ever want parts of the single binary (app), the Redis KV alternatives, or the release pipeline for a native Mac app back, we can look at this PR and revert parts of it. But maintaining 21k lines of code and many extra code paths, for which I had to delete a surprisingly small number of tests, more than justifies this move for me.
Technically, SG App and Cody App both still existed in the codebase to some extent, but we don't distribute either of them anymore, so IMO we shouldn't keep this weight in our code.
So... here we go.
This should not affect any existing deployments; we only remove functionality that was special-cased for App.
This change introduces an `mi2`-like experience for interacting with MSP databases. By default, we connect with the new read-only SA introduced in https://github.com/sourcegraph/sourcegraph/pull/59105; with the `--write-access` flag, you can instead connect as the same user as the Cloud Run workload, which has far broader permissions.
An alias, `sg msp pg connect`, is available for those less inclined to type out the entire `postgresql` subcommand name.
Closes https://github.com/sourcegraph/managed-services/issues/207
## Test plan
Applied to `msp-testbed`, which has a PG instance and provisioned tables.
Try both service accounts:
```
sg msp pg connect --write-access msp-testbed test
sg msp pg connect msp-testbed test
```

With both of the above:
```
primary=> select * from users;
id | created_at | updated_at | deleted_at | external_id | name | avatar_url
----+------------+------------+------------+-------------+------+------------
(0 rows)
```
Adds the search pattern type to the `SearchSubmitted` event. By looking for the
pattern type `newStandardRC1` vs. `newStandard`, this lets us see how many
users have enabled / disabled the query language improvements using the
toggle.
Found this package's tests frustratingly slow, so I looked at the worst offenders.
Top 5 before:
```
TestOutboundWebhookJobs,3.61
TestRecentViewSignalStore_InsertPaths_OverBatchSize,4.9
TestFeatureFlagStore,6.28
TestEventLogs_OwnershipFeatureActivity,7.05
TestExternalServicesStore_Upsert,9.72
```
Top 5 after:
```
TestPermsStore_GrantPendingPermissions,2.28
TestExternalServicesStore_DeleteExtServiceWithManyRepos,2.43
TestAccessTokens,2.47
TestWebhookUpdate,2.67
TestFeatureFlagStore,4.95
```
This reverts the default from loveable search to smart search for the
local dev environment. This is mainly to disentangle our prototype
running on S2 from everyone's dev experience.
Devs who want to use loveable search locally can set the feature flag
"search-new-keyword".
This updates suggestions to insert literals instead of regex-escaped values for repo and file filters.
Co-authored-by: Felix Kling <felix@felix-kling.de>
fix(sg): start Caddy admin before `caddy trust`
From the docs:
> This command will attempt to connect to Caddy's admin API to fetch the root certificate,
> using the `GET /pki/ca/<id>/certificates` endpoint.
> If the admin server is not running at the moment, the GET request
> will not succeed, and the certificate won't be installed correctly.
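For reference, a minimal Go sketch (not the actual `sg` implementation; the config path and helper name are illustrative) of starting the admin API before installing the root certificate:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// runCaddy shells out to the caddy CLI with the given arguments.
func runCaddy(args ...string) error {
	cmd := exec.Command("caddy", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Start Caddy in the background first so its admin API is listening.
	if err := runCaddy("start", "--config", "Caddyfile"); err != nil {
		log.Fatal(err)
	}
	// Only then ask Caddy to install the root certificate, which it fetches
	// through the admin API's /pki/ca/<id>/certificates endpoint.
	if err := runCaddy("trust"); err != nil {
		log.Fatal(err)
	}
}
```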
* remove little-used `web-standalone-http-prod`
This let you run a local web app built in production mode against a remote Sourcegraph endpoint. You can still run a local web app built in *dev* mode.
* add `sg test bazel-backend-integration`
* fix DeveloperDialog positioning
It was taking up 100% width and was translated -50% so the left half of it was off-viewport.
* use vite for web builds
[Vite](https://vitejs.dev/) is a fast web bundler. With Vite, a local dev server is available as fast as with esbuild, and incremental builds are (1) hot (no page reload needed) and (2) much faster (<500ms).
* fix "manifestPlugin.d.ts" was not created
* sg lint
* small lint fix
* added events shim to client/web/BUILD.bazel
* updated via bazel configure
* added in side-effectful import for EventEmitter
* added in side-effectful import for EventEmitter
* ran bazel configure
* re-run bazel configure
* pnpm dedupe
---------
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
Co-authored-by: jamesmcnamara <jamesscottmcnamara@gmail.com>
Co-authored-by: Jean-Hadrien Chabran <jh@chabran.fr>
The default is from Cloud, and is way too long for the usual MSP use case - almost all services set a custom suffix length today to avoid hitting project name limits. https://github.com/sourcegraph/managed-services/pull/214 pins anything using the default today.
This change adds a stack, `tfcworkspaces`, dedicated to managing MSP TFC workspaces. In this stack:
1. We apply additional configuration, like notifications! This way we don't have to re-implement the provider entirely; we just need to create/configure the initial workspaces via the API and do the rest in Terraform.
2. We provision runs for the other workspaces - this will greatly improve the environment creation experience. The new workflow goes `msp tfc sync`, and hopefully the new `queueAllRuns` option will work to queue a run for the `tfcworkspaces` workspace. This workspace will then queue apply plans for each workspace, _in sequence_.
---------
Co-authored-by: Michael Lin <mlzc@hey.com>
When clicking the "expand results" button, the expanded lines were not syntax highlighted in the Svelte prototype. This happened because the ranges we were fetching for highlights were pre-calculated from the collapsed groups. Instead, we want to fetch ranges for every group so that when we expand, we already have the highlighted result. The first commit does that.
This change bundles the `sg msp` toolchain in default `sg` builds. This was previously in a very experimental state and introduced a significant increase in the `sg` binary size, so we had it behind a build flag. However, we've discussed this a few times with Dev Infra and the consensus is that it is okay to include by default.
We have a demo prepared for this week, and with many of our core features now available, we want to make a renewed push for adoption, so now is a good time to start bundling this command by default.
There are no functionality changes - this PR just removes the overwrite-command-on-init-with-build-flag stuff and exports the full toolchain by default from the `dev/sg/msp` package.
Close https://github.com/sourcegraph/managed-services/issues/182
## Test plan
```
➜ go build -o ./sg ./dev/sg
➜ ./sg msp init -h
NAME:
   sg managed-services-platform init - Initialize a template Managed Services Platform service spec

USAGE:
   sg msp init -owner core-services -name "MSP Example Service" msp-example

OPTIONS:
   --kind value   Kind of service (one of: 'service', 'job') (default: "service")
   --owner value  Name of team owning this new service
   --name value   Specify a human-readable name for this service
   --dev          Generate a dev environment as the initial environment (d
```
I don't know if this is only an issue for me, but I'm not able to run `pnpm build` locally anymore. The build fails with the error mentioned in the comment. However, seeing that CI is green for other PRs and SvelteKit is deployed to S2 successfully, I don't really know why this is happening.
This change fixes it though.
Closes https://github.com/sourcegraph/sourcegraph/issues/54836
## Test plan
Brought dbconn into the dependency graph of //cmd/executor in a few ways:
1. directly
2. indirectly through a direct dependency
3. indirectly through a transitive dependency
4. inserting/removing from the graph in a few ways to try to catch any Fact caching issues (a sketch of the Fact-propagation idea follows below)
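For illustration, a minimal, hypothetical `go/analysis` sketch of propagating a "depends on dbconn" package fact transitively (the real linter in `dev/linters/dbconn` may be structured differently; the analyzer name and import-path check here are assumptions):

```go
package dbconncheck

import (
	"strings"

	"golang.org/x/tools/go/analysis"
)

// dependsOnDBConn is exported as a package fact for every package that
// depends on dbconn directly or transitively.
type dependsOnDBConn struct{}

func (*dependsOnDBConn) AFact() {}

var Analyzer = &analysis.Analyzer{
	Name:      "dbconncheck",
	Doc:       "reports packages that transitively depend on dbconn",
	FactTypes: []analysis.Fact{new(dependsOnDBConn)},
	Run: func(pass *analysis.Pass) (interface{}, error) {
		depends := false
		for _, imp := range pass.Pkg.Imports() {
			// Direct dependency on the dbconn package.
			if strings.HasSuffix(imp.Path(), "/internal/database/dbconn") {
				depends = true
			}
			// Transitive dependency: a directly imported package already
			// carries the fact, so it propagates upwards.
			if pass.ImportPackageFact(imp, new(dependsOnDBConn)) {
				depends = true
			}
		}
		if depends {
			// Record the fact so that packages importing this one see it too.
			pass.ExportPackageFact(new(dependsOnDBConn))
		}
		return nil, nil
	},
}
```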
Relates to #58815
With this change, a quoted pattern like `"foo bar"` is interpreted literally, i.e. spaces are treated as spaces instead of as a logical `AND`. Quotes that should be matched literally have to be escaped.
Example:
To search for the text `foo "bar"`, where `bar` is surrounded by quotes, the query is either `"foo \"bar\""` or, if we use single quotes, `'foo "bar"'`.
Note: This change only affects our keyword search prototype.
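To make the behavior concrete, here is a small, standalone Go illustration (not the actual query parser) of the difference between splitting an unquoted pattern into AND-ed terms and unquoting a quoted pattern into a single literal:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// An unquoted pattern is split on spaces into terms that are AND-ed together.
	unquoted := `foo bar`
	fmt.Println(strings.Fields(unquoted)) // [foo bar] -> two AND-ed terms

	// A quoted pattern is unescaped and kept as one literal, spaces included.
	quoted := `"foo \"bar\""`
	literal, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}
	fmt.Println(literal) // foo "bar"
}
```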
Test plan:
- updated unit test
Turns out the theme values stored in temporary settings are different
from what the prototype expected. It only accidentally worked when I
tested it.
This commit introduces a helper method that accepts an arbitrary string
value and updates the theme state accordingly, instead of using an
arbitrary string value as theme state (ideally TS would have complained
about this before, but it looks like it didn't).
This builds on the change in #58943 to take advantage of the newly-complete chunks being streamed back from the backend. Now, search results are rendered immediately on receipt using the streamed chunk content.
Currently searcher can only handle patterns with a single atom. We'd like to update searcher to handle AND/OR patterns directly.
This refactor takes us a step towards natively handling AND/OR patterns by changing the way we match files. It pulls out a `matcher` interface, which for now has just a single implementation (`regexMatcher`) but can eventually represent nested matchers combined through AND/OR (a simplified sketch follows the list below). Even without this goal, I think the refactor stands on its own and helps clean up the logic.
Specific changes:
* Lots of renames to make things clearer, like `readerGrep` -> `regexMatcher`
* Define `matcher` interface, which finds matches given file contents, and related logic into new file
* Pull `pathMatcher` out of `matcher`, since it's really its own top-level concept
* Move all file content and buffer management to `search_regex`. This lets each `matcher` object become thread-safe and avoids the need to copy them for every goroutine.
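Roughly, the shape of the new interface (a simplified sketch with hypothetical method names; the actual code in searcher may differ):

```go
package search

import "regexp"

// matcher finds matches given file contents. Implementations must be safe for
// concurrent use, since file content and buffer management now live in the caller.
type matcher interface {
	// Matches reports whether the pattern matches anywhere in the content.
	Matches(content []byte) bool
}

// regexMatcher (formerly readerGrep) is the only implementation for now;
// later, nested matchers combined through AND/OR can implement the same interface.
type regexMatcher struct {
	re *regexp.Regexp
}

func (m *regexMatcher) Matches(content []byte) bool {
	return m.re.Match(content)
}
```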
Before this commit the theme was always initialized as 'system'. This
loads the configured theme from temporary settings. There will be a
flash of the wrong theme on load, but that's not avoidable atm.
New output: `dev/linters/dbconn/dbconn.go:83:16: use of fmt.Errorf forbidden because “errors.Newf should be used instead” (forbidigo)`
Previous output: `dev/linters/dbconn/dbconn.go:83:16: use of fmt.Errorf forbidden by pattern ^fmt\.Errorf$ (forbidigo)`
## Test plan
Used `fmt.Errorf` in `//dev/linters/dbconn` and ran `bazel build` on it
Closes https://github.com/sourcegraph/sourcegraph/issues/54838
## Test plan
Added `github.com/opentracing/opentracing-go` to some package and did `bazel build //<pkg>` to observe:
`cmd/cody-gateway/internal/actor/limits.go:15:2: import 'github.com/opentracing/opentracing-go/log' is not allowed from list 'tracing libraries': use "go.opentelemetry.io/otel/trace" instead (depguard)`
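As a rough sketch of how such a deny list might be declared, assuming the custom linter is built on depguard v2's Go API (the exact settings in `dev/linters` may differ, and the API shape here is an assumption):

```go
package main

import (
	"log"

	"github.com/OpenPeeDeeP/depguard/v2"
)

func main() {
	// Hypothetical sketch: a named list denying the opentracing import with
	// the message shown in the linter output above.
	settings := &depguard.LinterSettings{
		"tracing libraries": &depguard.List{
			Deny: map[string]string{
				"github.com/opentracing/opentracing-go": `use "go.opentelemetry.io/otel/trace" instead`,
			},
		},
	}
	analyzer, err := depguard.NewAnalyzer(settings)
	if err != nil {
		log.Fatal(err)
	}
	_ = analyzer // wired into the nogo/Bazel linter setup elsewhere
}
```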
Currently, we sync topics from GitHub and GitLab, and it is possible to
filter repositories by their topics with the `repo:has.topic` filter.
However, unlike user-added metadata, the synced topics are not visible in the UI,
either in repo matches or on the repo tree page. This makes it impossible
for users to figure out which topics they can filter by.
With this PR we display topics, such as "language" or "golang", in addition to
user-added metadata for repo matches and on the repo tree page.
Note: Our search language distinguishes between user-provided metadata (`has.meta`) and automatically synced metadata (`has.topic`).
## Test plan
- manual testing
- I checked that clicking on a "topic" badge adds a "has.topic" filter, while clicking on a "metadata" badge adds a "has.meta" filter
- Adding and deleting metadata works like before
- Topics and metadata are sorted alphabetically
- Results without metadata or topics are displayed correctly
- added new unit test
This is some setup for getting rid of all the cruft from extensions. The first step is to make the browser extension use the codeintel packages directly rather than requiring codeintel extensions running in a web worker. This extracts and simplifies a few functions to prepare for that.
Currently, we do not respect the search.contextLines setting in the backend. Specifically, the search backend currently always yields results with zero lines of context.
One effect of this is that, in order to display matches with context lines, any client needs to make a follow-up request to fetch the full contents of the file. This makes the UI feel laggy because results filter in slowly as you scroll. This is exacerbated by the fact that we load the highlighted code, and highlighting can be unpredictable, sometimes taking a couple of seconds to return.
We already stream the matched chunk back to the client, so this just updates the backend so that the streamed results include the number of context lines a user requested. Zoekt already supports this, so it was just a matter of taking advantage of that setting and updating searcher to do the same.
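As a simplified illustration of what "N lines of context" means for a streamed chunk (not the actual searcher/Zoekt code), extending a matched line range by the requested number of context lines looks roughly like this:

```go
package main

import "fmt"

// expandRange widens a matched line range [start, end) by contextLines on each
// side, clamped to the bounds of the file.
func expandRange(start, end, contextLines, totalLines int) (int, int) {
	start -= contextLines
	if start < 0 {
		start = 0
	}
	end += contextLines
	if end > totalLines {
		end = totalLines
	}
	return start, end
}

func main() {
	// A match covering lines [10, 12) of a 100-line file with
	// search.contextLines set to 2 is streamed back as lines [8, 14).
	start, end := expandRange(10, 12, 2, 100)
	fmt.Println(start, end) // 8 14
}
```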