I had a suspicion another test was failing because it raced with reads of
dotcom.SourcegraphDotComMode after another test set it and didn't unset it.
It turned out this wasn't the case, but I ended up improving the API to
avoid this class of issue. Most call sites should be easier to read.
Test Plan: go test
Simple refactor: moved the methods into internal/dotcom so that we don't import across package boundaries.
This simply moves code, so the existing test suites will catch any issues.
We have a number of docs links in the product that point to the old doc site.
Method:
- Search the repo for `docs.sourcegraph.com`
- Exclude the `doc/` dir, all test fixtures, and `CHANGELOG.md`
- For each, replace `docs.sourcegraph.com` with `sourcegraph.com/docs`
- Navigate to the resulting URL to ensure it's not a dead link, updating the URL if necessary
Many of the URLs updated are just in comments, but since I'm doing a manual audit of each URL anyway, I felt it was worth updating these while I was at it.
Cody no longer needs it and it is obsolete now!
Since App added a not-insignificant number of new concepts and alternative code paths, I decided to take some time and remove it from our codebase.
This PR removes ~21k lines of code. If we ever want parts of it back (the single binary (app), the Redis KV alternatives, or the release pipeline for a native Mac app), we can look back at this PR and revert parts of it. But the burden of maintaining 21k lines of code and many code paths, for which I had to delete a surprisingly small number of tests, justifies this move for me very well.
Technically, to some extent SG App and Cody App both still existed in the codebase, but we don't distribute either of them anymore, so IMO we shouldn't keep this weight in our code.
So.. here we go.
This should not affect any of the existing deployments, we only remove functionality that was special-cased for app.
Currently, we do not respect the search.contextLines setting in the backend. Specifically, the search backend currently always yields results with zero lines of context.
One effect of this is that, in order to display matches with context lines, any client needs to make a follow-up request to fetch the full contents of the file. This makes the UI feel laggy because results filter in slowly as you scroll. This is exacerbated by the fact that we load the highlighted code, and highlighting can be unpredictable, sometimes taking a couple of seconds to return.
We already stream the matched chunk back to the client, so this just updates the backend so that the streamed results include the number of context lines a user requested. Zoekt already supports this, so it was just a matter of taking advantage of that setting and updating searcher to do the same.
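For reference, the Zoekt side boils down to passing the user's setting through in the search options. A minimal sketch, assuming the `ChunkMatches` / `NumContextLines` option names on `zoekt.SearchOptions` (treat the exact field names as an assumption):

```go
package search

import "github.com/sourcegraph/zoekt"

// searchOpts plumbs the user's search.contextLines setting into the options
// sent to Zoekt, so the streamed chunk matches already carry the surrounding
// lines and the client needs no follow-up fetch of file contents.
func searchOpts(contextLines int) *zoekt.SearchOptions {
	opts := &zoekt.SearchOptions{
		ChunkMatches: true, // stream matches as chunks
	}
	if contextLines > 0 {
		// Assumption: Zoekt exposes the number of context lines as an option.
		opts.NumContextLines = contextLines
	}
	return opts
}
```

Searcher gets the same treatment: the requested number of context lines is included in its request and reflected in the chunks it streams back.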
* log: remove use of description parameter in Scoped
* temporarily point to sglog branch
* bazel configure + gazelle
* remove additional use of description param
* use latest versions of zoekt, log, mountinfo
* go.mod
The goal is to mock the gitserver client in another commit. This commit
contains refactorings to the search client to make that possible: it stores
the runtime clients inside the search client and moves the creation of the
gitserver client into that field.
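Roughly, the shape is something like the sketch below (hypothetical type and field names, not the actual ones in the search client):

```go
package client

import (
	"github.com/sourcegraph/sourcegraph/internal/gitserver"
	"github.com/sourcegraph/zoekt"
)

// runtimeClients is a hypothetical stand-in for the field that now holds the
// clients the search client talks to at runtime. Because the gitserver client
// is created here rather than deep inside the search code, a later commit can
// substitute a mock in tests without touching call sites.
type runtimeClients struct {
	gitserver gitserver.Client
	zoekt     zoekt.Streamer
}

type searchClient struct {
	runtime runtimeClients
}
```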
Test Plan: CI is sufficient for this change.
Originally I started working on this because of
[comment on another
PR](https://github.com/sourcegraph/sourcegraph/pull/53373#discussion_r1228058455).
I quickly wrote an implementation of Ptr and NonZeroPtr. Then I started
refactoring existing code only to find out that @efritz already beat me
to it. But Eric's implementation was hidden in
internal/codeintel/resolvers/utils.go.
Moved all of the Ptr functions into the `pointers` package instead and
refactored all existing code I could find to use it rather than
redefining the same functions over and over.
Usage is mostly in tests, so hopefully the impact is not as huge as the
diff size might suggest.
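For context, the helpers are tiny generics along these lines (a sketch of the general shape rather than a verbatim copy of the package):

```go
package pointers

// Ptr returns a pointer to the given value. Handy for taking the address of
// a literal, which Go does not allow directly.
func Ptr[T any](v T) *T {
	return &v
}

// NonZeroPtr returns nil if the value is the zero value of its type,
// otherwise a pointer to it.
func NonZeroPtr[T comparable](v T) *T {
	var zero T
	if v == zero {
		return nil
	}
	return Ptr(v)
}
```

So a test can write `pointers.Ptr("foo")` instead of declaring a throwaway variable just to take its address.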
## Test plan
A whole lot of unit tests.
We always passed in envvar.SourcegraphDotComMode() for dot com mode.
Additionally, this value doesn't change per request, so it makes more
sense to set it on the client at construction time.
We always called settings.CurrentUserFinal before calling Plan. Now that
settings can be a service, we make it part of the client struct as well.
This also makes sense from the perspective that Plan should only take in
the per-request information, and the client then acts based on that.
Both of these changes simplify many of the call sites, which is a nice
side effect.
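In rough shape, the idea looks like this (a sketch with stand-in types; the real interfaces in the repo differ):

```go
package client

import "context"

// settingsService stands in for the real settings service. The point is that
// the client resolves the current user's final settings itself, rather than
// every caller doing so before calling Plan.
type settingsService interface {
	CurrentUserFinal(ctx context.Context) (map[string]any, error)
}

// searchClient captures the values that do not change per request.
type searchClient struct {
	settings          settingsService
	sourcegraphDotCom bool // set once at construction instead of reading envvar on every call
}

// Plan now only takes per-request information; the client supplies the rest.
func (c *searchClient) Plan(ctx context.Context, query string) error {
	if _, err := c.settings.CurrentUserFinal(ctx); err != nil {
		return err
	}
	// ... build the plan from query, the settings, and c.sourcegraphDotCom ...
	return nil
}
```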
Test Plan: CI
This commit updates all calls to client.NewSearchClient to not specify
zoektStreamer, searcherURLs and searcherGRPCConnectionCache anymore. We
always passed exactly the same values every time, so we might as well
simplify the call sites.
In tests we only ever mocked zoekt, so we add a new constructor for
tests which captures the pattern used there.
Additionally we rename the method from client.NewSearchClient to just
client.New to avoid the stutter. Call sites generally make it clear the
variable being captured is related to search, so this reads better.
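A hypothetical sketch of the resulting constructor shapes (the real signatures and remaining parameters differ):

```go
package client

import (
	"github.com/sourcegraph/log"
	"github.com/sourcegraph/zoekt"
)

// SearchClient stands in for the real interface; only the constructor shapes
// matter for this sketch.
type SearchClient interface{}

type searchClient struct {
	logger log.Logger
	zoekt  zoekt.Streamer
	// searcherURLs and searcherGRPCConnectionCache are wired up internally
	// in New and omitted here for brevity.
}

// New wires up the production Zoekt streamer and searcher endpoints itself,
// since every call site passed the same values anyway.
func New(logger log.Logger) SearchClient {
	return &searchClient{logger: logger /* production zoekt streamer, etc. */}
}

// NewTestClient keeps only what tests ever varied: the mocked Zoekt.
func NewTestClient(logger log.Logger, z zoekt.Streamer) SearchClient {
	return &searchClient{logger: logger, zoekt: z}
}
```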
Test Plan: CI
The previous approach to enabling race detection was too radical and
accidentally led to building our binaries with the race flag enabled, which
caused issues when building images down the line.
This happened because putting a `test --something` in bazelrc also sets
it on `build`, which is absolutely not what we wanted. Usually folks get
this working by having a `--stamp` config setting that fixes this when
releasing binaries, which we don't have at this stage, as we're still
learning Bazel.
Luckily, this was caught swiftly. The current approach is more granular:
it makes the `go_test` rule use our own variant, which injects the
`race = "on"` attribute, but only on `go_test`.
## Test plan
CI: being a main-dry-run, it covers the container-building jobs,
which were the ones failing.
---------
Co-authored-by: Alex Ostrikov <alex.ostrikov@sourcegraph.com>
Based on https://github.com/sourcegraph/sourcegraph/pull/51666
## Background
As you may know, prior to this PR we used a custom GQL handler that
used AppleScript to call the native file picker/dialog UI. In this PR,
we use a native picker via the Tauri dialog API.
## Test plan
- Check new local repositories setup UI page
- Check setup wizard local repositories step
- Check both UIs on Mac and Windows (if you have a Windows machine)
This is quite useful in this tool, especially since it includes
information like how selectors are applied.
Test Plan: go run . -dotcom -pattern_type literal 'type:file select:repo
content:"hello\nworld"'
I was trying to understand exactly how a query I write is translated
into a zoekt query. So this adds a command which allows me to easily try
stuff out and then see the s-expression output of the jobs plan.
Test Plan: A simple unit test is added to ensure this command doesn't
break.
Closes #50833
On Mac, allows picking multiple folders with Cmd+click in the file
browser in the setup wizard. This was done by modifying the AppleScript
used to launch the file browsers; eventually, we probably want to make
this more system-agnostic by using the [Tauri file browse
dialog](https://tauri.app/v1/api/js/dialog/) instead.
This also modifies the whole end-to-end flow to accept a list of paths,
rather than a single string path, as the directory. Multiple code hosts
will be created, one for each path selected.
## Test plan
Manually verify on the Tauri app:
`sg start app` in one terminal, `pnpm tauri dev` in another.
This is an optional field, set only by App, which holds the path to the
repository on disk. The intention of this field is for the wizard to
show where on disk a repository resides.
Test Plan: Ran app-discover-repos and validated the absolute paths
printed. Otherwise updated unit tests.
go run ./dev/internal/cmd/app-discover-repos -root ~/src/github.com/sourcegraph/ -v
Co-authored-by: Keegan Carruthers-Smith <keegan.csmith@gmail.com>
to be merged into feature branch of PR
https://github.com/sourcegraph/sourcegraph/pull/47673/
In PR 47673, serve-git identifies at startup a relevant root directory to
be walked for local repo syncing. The external service configuration
created at startup defines the URL and not much else. That ext svc
configuration does not define the directories, so we can't update
which directories are relevant; that is, it's fixed to what's defined
by the `SRC` env var.
This pull request allows external services of kind Other to store the root
directories in the configuration. We also change how serve-git handles
list-repos so that it accepts a list of the directories to walk.
There are still open questions that I'll try to cover in the comments.
## Test plan
sg start app
unit tests
---------
Co-authored-by: Keegan Carruthers-Smith <keegan.csmith@gmail.com>
Update buildfiles. There was an issue with some of our own code: a
buildfile had been copied over and regenerated, but it carried over the
previous rules, which were outdated and never picked up by gazelle-go.
I'll add a FAQ entry.
## Test plan
`bazel build //...` is green locally, minus an exception with nogo that
@burmudar has a fix for in a different PR.
This package implements a best-effort approach to running a file dialog
selector. Instead of linking a GUI framework into Sourcegraph App, we
rely on shelling out to common utilities used by shell scripts to run a
dialog.
For Mac we can use osascript, which is always available. For Linux (and
unofficially BSD/etc.) we rely on what the desktop environment provides
(zenity for GNOME/GTK environments, kdialog for KDE).
For now this commit only integrates it into the test program
app-discover-repos. We intend to integrate it into Sourcegraph App via
an API, though.
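The detection itself is simple. A minimal sketch of the best-effort flavour, not the actual package API (command names and flags are the standard ones for these utilities):

```go
package dialog

import (
	"context"
	"errors"
	"os/exec"
	"runtime"
)

// pickDirectoryCmd returns a command that opens a native directory picker,
// preferring whatever the platform provides, and failing gracefully when
// nothing suitable is installed.
func pickDirectoryCmd(ctx context.Context) (*exec.Cmd, error) {
	if runtime.GOOS == "darwin" {
		// osascript ships with macOS, so it is always available.
		return exec.CommandContext(ctx, "osascript", "-e",
			`POSIX path of (choose folder)`), nil
	}
	// On Linux (and unofficially BSD/etc.) fall back to what the desktop
	// environment provides.
	if _, err := exec.LookPath("zenity"); err == nil {
		return exec.CommandContext(ctx, "zenity", "--file-selection", "--directory"), nil
	}
	if _, err := exec.LookPath("kdialog"); err == nil {
		return exec.CommandContext(ctx, "kdialog", "--getexistingdirectory"), nil
	}
	return nil, errors.New("no supported dialog utility found")
}
```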
Test Plan: app-discover-repos -picker on linux with and without zenity
and kdialog. The same on darwin, which always worked. Also manually
changed app-discover-repos to pass in a short context timeout, an error
was returned as expected.
Based on that manual testing I created unit tests which simulate what
the programs do.
Part of https://github.com/sourcegraph/sourcegraph/issues/48127
Now that we don't have the tricky post-processing, we can move all the
important logic of discovering repos into the Walk function. I imagine
in the future we will have external callers of Walk, so making this
correct will help.
Test Plan: go test
If the root is a repository, then we just return that as a repository. We
had some complicated logic here which handled the case where we used to
recurse further into repositories looking for more. But now it can be
simplified so that we don't need to post-process the names.
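The check itself boils down to something like this sketch (the real Walk code does more, and the helper name here is made up):

```go
package discover

import (
	"os"
	"path/filepath"
)

// rootIsRepository reports whether the walk root is itself a git repository,
// in which case discovery returns just that one repo and skips walking.
func rootIsRepository(root string) bool {
	// A .git entry (a directory for normal clones, a file for worktrees) is
	// a good-enough signal for this sketch.
	_, err := os.Stat(filepath.Join(root, ".git"))
	return err == nil
}
```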
Test Plan: a test case was added for the repository-is-root case. Tested that
it broke in the state where I removed the older post-filtering code but not
the new is-repo-root check.
With these two parameters we should avoid overly long discovery
operations. Right now, if we hit these limits we just log a warning to the
console, which isn't the best API. But as we shape this API up we should
be able to report these errors better.
The defaults of max depth 10 and 5s are plucked out of thin air and will
likely need tuning.
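A sketch of how the two knobs might be read, using the env var names from the test plan below and the defaults above (the actual parsing in the repo may differ):

```go
package discover

import (
	"os"
	"strconv"
	"time"
)

// Defaults are plucked out of thin air (see above) and will likely need tuning.
const (
	defaultMaxDepth = 10
	defaultTimeout  = 5 * time.Second
)

// discoverLimits reads the depth and timeout limits, falling back to the
// defaults when the env vars are unset or unparseable.
func discoverLimits() (maxDepth int, timeout time.Duration) {
	maxDepth, timeout = defaultMaxDepth, defaultTimeout
	if v := os.Getenv("SRC_DISCOVER_MAX_DEPTH"); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			maxDepth = n
		}
	}
	if v := os.Getenv("SRC_DISCOVER_TIMEOUT"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			timeout = d
		}
	}
	return maxDepth, timeout
}
```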
Test Plan: ran app-discover-repos -root ~ with the following different
values
- n/a :: everything worked as usual
- SRC_DISCOVER_TIMEOUT=10ms :: got timeout warning
- SRC_DISCOVER_MAX_DEPTH=3 :: only got repo names with at most 3
components
I tweaked this code until it ran in under 1s for my home directory on my
MacBook, so all the heuristics in it are based on that. They could easily
not apply to another user, so another commit will try to bound the
work done.
Test Plan: Ran locally via app-discover-repos
$ go install ./dev/internal/cmd/app-discover-repos
$ /usr/bin/time app-discover-repos -root ~ | wc -l
0.43 real 0.78 user 0.94 sys
43
This is an internal command to help debug and profile local repository
discovery for Sourcegraph App.
Test Plan: go install ./dev/internal/cmd/app-discover-repos, then ran it
from various places.