This PR adds a "developer settings dialog" that is only available when Sourcegraph is run locally in dev mode, and provides easy access to feature flags and temporary settings. This can also be useful for people testing a PR.
QOL features:
- Filter flags/settings by status and name
- Indicator showing whether overrides exist and how many there are
- Save/restore dialog state to/from local storage
We can use this dialog to add more development-specific settings in the future.
## Changes for feature flags
We already have a way to locally override feature flags and the dialog makes use of the same system. The only change I had to make was converting the type to a runtime value so that I can list all flags.
## Changes for temporary settings
Similar to feature flags, I had to add a runtime representation of the available temporary settings, but since they have more complex types I had to go about it a different way (maybe someone has a better idea though). This requires the setting names to be listed twice, but I think that's OK.
Temporary settings didn't have a system for local overrides. I added one by wrapping an existing backend with a new override backend. Once a temporary setting is overridden with setTemporarySettingOverride, the backend returns the overridden value instead. Similarly, it writes an overridden setting to local storage rather than to the real backend. To notify/update all subscribers when a value is overridden, I added a global observer which the override backend listens to.
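Roughly, the override backend behaves like this (a minimal Go sketch with made-up names; the actual change lives in the web client):

```go
// Illustrative decorator pattern: an override layer wrapping a real settings
// backend. The map stands in for local storage; all names are hypothetical.
package settings

type Backend interface {
	Get(name string) (string, bool)
	Set(name, value string)
}

type OverrideBackend struct {
	Inner     Backend
	Overrides map[string]string
}

func (b *OverrideBackend) Get(name string) (string, bool) {
	if v, ok := b.Overrides[name]; ok {
		return v, true // an override wins over the real backend
	}
	return b.Inner.Get(name)
}

func (b *OverrideBackend) Set(name, value string) {
	if _, ok := b.Overrides[name]; ok {
		b.Overrides[name] = value // overridden values never reach the real backend
		return
	}
	b.Inner.Set(name, value)
}
```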
does what it says on the tin
## Test plan
Tested that the agent accepts the URI we send, and it does: `2023-09-28 13:22:47,242 [ 4056] WARN - #c.s.c.a.CodyAgent - Cody Agent: handshake with client 'JetBrains' (version '3.4.0-alpha.1') at workspace root path 'file:///home/noah/Sourcegraph/lsif-kotlin'`
This code path was required for cloud v1 because we would have private repos alongside public repos, and cloud default services would usually not be able to pull those.
Since we don't have private repos anymore, we can skip this check.
Worst case, we will have a repo marked as private in the DB and we won't attempt to update it. The update would have failed anyway; now it fails with a "no sources for X" error instead.
We only allow flat queries in the backend, which implies no AND or OR.
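For illustration, a hedged sketch of the kind of check this implies (the real query AST lives in Sourcegraph's internal query package and looks different):

```go
// Illustrative only: reject queries containing boolean operators before
// creating a search job, matching the "flat queries only" rule above.
package query

import "errors"

// Node is a stand-in for a parsed query node.
type Node struct {
	Operator string // "AND", "OR", or "" for a leaf pattern/filter
	Children []Node
}

func validateFlat(q Node) error {
	if q.Operator == "AND" || q.Operator == "OR" {
		return errors.New("search jobs only support flat queries (no AND/OR)")
	}
	for _, c := range q.Children {
		if err := validateFlat(c); err != nil {
			return err
		}
	}
	return nil
}
```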
Test plan:
- updated test
- checked locally that it is not possible to create a search job from a query with an "AND" operator.
We already don't write headers if we don't find matches, so all we have
to do is skip the upload.
We now also return 404 if the job wasn't found and 204 if there were
no blobs.
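A hedged sketch of the resulting status codes (handler, store, and error names are illustrative, not the actual service code):

```go
// Illustrative status-code handling for serving a job's uploaded CSV blobs.
package csvserve

import (
	"errors"
	"io"
	"net/http"
)

var ErrJobNotFound = errors.New("job not found")

// JobStore is a stand-in for the real results store.
type JobStore interface {
	Blobs(jobID string) ([]io.Reader, error) // ErrJobNotFound if the job doesn't exist
}

func serveJobResults(w http.ResponseWriter, r *http.Request, store JobStore) {
	blobs, err := store.Blobs(r.URL.Query().Get("id"))
	switch {
	case errors.Is(err, ErrJobNotFound):
		http.Error(w, err.Error(), http.StatusNotFound) // 404: unknown job
		return
	case err != nil:
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	case len(blobs) == 0:
		w.WriteHeader(http.StatusNoContent) // 204: job exists but produced no blobs
		return
	}
	w.Header().Set("Content-Type", "text/csv")
	for _, b := range blobs {
		_, _ = io.Copy(w, b) // stream each uploaded CSV blob
	}
}
```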
Test plan:
- new and updated unit test
- ran a "needle in the haystack" query locally and confirmed that we
upload 1 CSV (for the needle) instead of 1 per revision.
Previously we would always write a header line before actually checking whether we
can view a job. This automatically led to status 200 and caused us to ignore the
subsequent error. This adjusts the service call to return before any writes.
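The root cause is that the first write implicitly commits status 200, after which a later error can no longer change it. A minimal sketch of the fixed ordering, with illustrative names:

```go
// Illustrative only: perform the visibility check before any bytes are
// written, so a failure can still produce a non-200 status.
package csvserve

import "net/http"

func writeCSV(w http.ResponseWriter, r *http.Request, canViewJob func(*http.Request) error) {
	if err := canViewJob(r); err != nil { // check first: nothing written yet
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	// Only after the check passes do we write the header line.
	w.Header().Set("Content-Type", "text/csv")
	_, _ = w.Write([]byte("repository,revision,match_count\n"))
	// ... stream result rows ...
}
```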
Test Plan: a test was added which ensures we don't write anything in case of
auth failure.
External account deletion soft-deletes accounts. This means that on subsequent SOAP logins, SOAP users with more than one external account are more or less instantly demoted, because the cleanup job detects their expired (but deleted) SOAP external accounts.
This change updates the cleaner to ignore deleted external accounts, and also to hard-delete SOAP external accounts using a new option so that we don't have endless cruft building up.
No backport needed as https://github.com/sourcegraph/sourcegraph/pull/56866, which introduced this bug, did not make it into the 5.2 branch cut.
The `event_logs` table requires that either the user or an anonymous user ID is provided, so some events can fail to insert when translated by `telemetry/teestore`. This change makes sure we always have either the user ID or anonymous ID.
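A hedged sketch of the idea, with illustrative names (github.com/google/uuid stands in for whatever ID generator the code actually uses):

```go
// Illustrative only: before inserting into event_logs, guarantee that either
// a user ID or an anonymous user ID is present, generating an anonymous ID
// when both are missing.
package teestore

import "github.com/google/uuid"

func ensureActorIDs(userID int32, anonymousUID string) (int32, string) {
	if userID == 0 && anonymousUID == "" {
		// Satisfies the event_logs constraint that one of the two is present.
		anonymousUID = uuid.New().String()
	}
	return userID, anonymousUID
}
```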
## Test plan
I added an integration test to make sure translated events will always insert successfully without being caught by constraints. Existing unit tests + autogold show the new generated anonymous ID.
Currently we use our default limits which are tuned for interactive and
regular searching. This commit introduces significantly higher limits
for search jobs (exhaustive):
- 1,000,000 file match limit (per repo@rev).
- 1 hour timeout.
- Ignore site config MaxTimeoutSeconds.
This is done by introducing a new Protocol, "Exhaustive". All previous uses of
Protocol were for tuning limits, so this is the perfect place to use it.
Additionally, it allows us to remove the hacky Exhaustive boolean field.
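For illustration, a rough sketch of tuning limits by protocol; the real option types differ, and the non-exhaustive numbers below are placeholders, not our actual defaults:

```go
// Illustrative only: select limits based on the search protocol.
package limits

import "time"

type Protocol int

const (
	ProtocolStreaming Protocol = iota
	ProtocolExhaustive
)

// limitsFor returns the per-repo@rev file match limit and the timeout.
func limitsFor(p Protocol) (maxFileMatches int, timeout time.Duration, ignoreSiteMaxTimeout bool) {
	if p == ProtocolExhaustive {
		return 1_000_000, time.Hour, true // also ignores site config MaxTimeoutSeconds
	}
	return 10_000, 30 * time.Second, false // placeholder interactive defaults
}
```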
There was a bunch of duplicated logic around how we computed file match
limits. This commit additionally removes that duplication and makes all call
sites use MaxResults. This is a tiny no-op refactor.
Test Plan: Added a unit test which helps ensure we have higher limits.
Otherwise normal go test for catching regressions.
I started post-processing some of our CSV output and wanted this change.
It makes the output much more ergonomic to process. For example, compare these
two queries using SQLite's CSV mode to calculate the total match count:
```sql
SELECT SUM("Match count") FROM results;
SELECT SUM(match_count) FROM results;
```
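The renaming this implies is straightforward; a tiny illustrative sketch (not the actual CSV writer code):

```go
// Illustrative only: lower-case and underscore column names so downstream
// SQL can reference them unquoted.
package csvutil

import "strings"

func toSnakeCase(header string) string {
	return strings.ReplaceAll(strings.ToLower(header), " ", "_")
}

// toSnakeCase("Match count") == "match_count"
```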
Test Plan: expected diff of `go test -update`
This change introduces license version 3, which records a CreatedAt timestamp marking when the license was created. This will apply to all licenses created once this change rolls out.
Recording this allows us to gate capabilities based on the time a license was created, namely for telemetry export configuration.
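A hedged sketch of what such gating could look like; the struct and cutoff are illustrative, not our actual licensing types:

```go
// Illustrative only: a v3 license records CreatedAt, which can gate newer
// capabilities such as telemetry export configuration.
package license

import "time"

type Info struct {
	Version   int
	CreatedAt time.Time // recorded for version >= 3 licenses
}

// canConfigureTelemetryExport is a hypothetical gate: only licenses created
// at or after some cutoff may configure telemetry export.
func canConfigureTelemetryExport(l Info, cutoff time.Time) bool {
	return l.Version >= 3 && !l.CreatedAt.Before(cutoff)
}
```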
Migration of some lengthy POJOs to nicer Kotlin equivalents. Includes some drive-by stylistic improvements, as well as aligning some types that cross the agent boundary with https://sourcegraph.com/github.com/sourcegraph/cody/-/blob/agent/src/protocol.ts
## Test plan
Compilation still works, and the Cody Agent still accepts all types.
This PR moves the logic for computing the aggregate state from the resolver to the store.
Why?
To filter and sort by the aggregate state and make use of the paging helpers, we have to return the aggregate state from the DB instead of computing it in the resolver.
Note:
There is a bit of awkwardness because only `ListSearchJobs` computes the aggregate state, but not `GetSearchJob`. I don't want to set `AggState` in `GetSearchJob` because it is fairly expensive to calculate, and we call `GetSearchJob` a lot as part of the security checks, so I wanted to keep it as lean as possible.
In the future we should have a dedicated call to the db for security checks and use `GetSearchJob` only when we create a search job. Then we can calculate the aggregate state on GetSearchJob, too, and remove the special casing in the code.
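For illustration, one plausible aggregation rule sketched in Go; the actual precedence lives in the store's SQL and may differ:

```go
// Hypothetical aggregation: any failure dominates; anything still in flight
// keeps the job "processing"; otherwise the job is complete.
package searchjobs

func aggregateState(repoRevStates []string) string {
	agg := "completed"
	for _, s := range repoRevStates {
		switch s {
		case "failed", "errored":
			return "failed"
		case "queued", "processing":
			agg = "processing"
		}
	}
	return agg
}
```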
Test plan:
- New unit test
- manual testing: I ran a couple of search jobs locally to confirm that the aggregate state properly reflects the state of the underlying jobs.
This PR adds the onboarding setup flow for the onboarding tour.
If a user has fewer than X (currently 5) queries and hasn't gone through the setup flow, they will see the setup dialog when they open a page that would normally show the tour (search homepage, search results, file page). The user can either answer the questions or skip the whole setup. If they skip the setup, the tour will still be shown, but it will only include tasks that don't depend on this information.
## Implementation details
I didn't want to completely reinvent the wheel, but I also didn't have the time to build a truly reusable autocompletion system for repositories and languages. Therefore I moved some code around to make the caching logic used by the new query input reusable (now CachedAsyncCompletionSource); it is used by the query input as well as the repository input. The language autocompletion is fully synchronous, using fzf. (I think it makes sense in the long run to make autocompletion "sources" more reusable.)
The user provided values are validated (not empty, repository exists).
Exposing the PopoverContentProps on ComboboxPopover is necessary for controlling (and fixing) the position of the popover.
Co-authored-by: numbers88s <cesar.r.jimenez@gmail.com>
The cookie anonymous UID continues to take precedence, but this is helpful for clients that don't use cookies to track the anonymous UID (e.g. VSCode), and it is especially relevant in the context of our new telemetry mutations, which now use real actors to determine the user instead of asking clients to tell us who the user is.
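A minimal sketch of that precedence (names hypothetical):

```go
// Hypothetical precedence: the cookie's anonymous UID wins; otherwise fall
// back to an anonymous UID resolved another way, e.g. for clients like
// VSCode that don't use cookies.
package telemetry

func resolveAnonymousUID(cookieUID, fallbackUID string) string {
	if cookieUID != "" {
		return cookieUID
	}
	return fallbackUID
}
```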