I was tired of seeing this warning when running gazelle:
gazelle: finding module path for import github.com/go-critic/go-critic/framework/linter: go: downloading github.com/go-critic/go-critic v0.11.4
go: module github.com/go-critic/go-critic@upgrade found (v0.11.4), but does not contain package github.com/go-critic/go-critic/framework/linter
This updated go-critic to the latest version which makes the warning go
away since it now uses the new package path.
Test Plan: gazelle is happy and CI is happy
This patch wires up the newly changed APIs in #64118
to the GraphQL API, enabling precise support in the
usagesForSymbol API. It also handles pagination.
Fixes https://linear.app/sourcegraph/issue/GRAPH-573
## Test plan
Manually tested. Will add automated tests in follow-up PR.
## Changelog
- Adds support for precise code navigation to the experimental
`usagesForSymbol` API, which can be used to implement a
reference panel or similar functionality.
For the implementation of precise usagesForSymbol, I need to be able
to access some of these types directly in the codenav package, so this moves
a bunch of types there to avoid an import cycle: codenav -> resolvers -> codenav.
At the heart of the loop for extracting usages across a Sourcegraph
instance is the `extractLocationsFromPosition` function, which
extracts related symbols and source ranges from a single SCIP
Document. (Source ranges are returned to the user directly,
while related symbols are used for further lookups, e.g. in the case
of inheritance.)
Since we want to perform matching based on symbol names in the upcoming
precise usagesForSymbol API, and also return symbol names for each
associated source range, this function needs to be updated to:
1. Be able to take a symbol name for doing lookups. This is done using
the new `FindUsagesKey` type, which allows two cases: position-based and
symbol-based (see the sketch after this list).
2. Be able to return symbol names associated with every source range.
This is done by creating a new `UsageBuilder` type, which somewhat subsumes
the `Location` type. For clarity, we avoid eagerly copying the same 'UploadID'
and 'Path' fields; in the future, callers will supply those when they mix
`UsageBuilder` values across different Documents (by first calling `build`).
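Roughly, the new shapes look like this (a sketch with hypothetical field and helper names; the actual definitions in the codenav package differ in detail):

```go
package codenav

// Position and Range are simplified stand-ins for the real SCIP range types.
type Position struct{ Line, Character int }
type Range struct{ Start, End Position }

// FindUsagesKey captures the two lookup modes: position-based and
// symbol-based. Exactly one of the two constructors below should be used.
type FindUsagesKey struct {
	position   *Position // set for position-based lookups
	symbolName string    // set for symbol-based lookups, empty otherwise
}

func FindUsagesKeyForPosition(p Position) FindUsagesKey {
	return FindUsagesKey{position: &p}
}

func FindUsagesKeyForSymbol(name string) FindUsagesKey {
	return FindUsagesKey{symbolName: name}
}

// UsageBuilder pairs a source range with its symbol name. UploadID and Path
// are deliberately not copied eagerly; callers supply them via build when
// mixing UsageBuilder values across different Documents.
type UsageBuilder struct {
	Range      Range
	SymbolName string
}

type Usage struct {
	UploadID   int
	Path       string
	Range      Range
	SymbolName string
}

func (b UsageBuilder) build(uploadID int, path string) Usage {
	return Usage{UploadID: uploadID, Path: path, Range: b.Range, SymbolName: b.SymbolName}
}
```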
For the above, I've introduced a new func `extractRelatedUsagesAndSymbolNames`,
and `extractLocationsFromPosition` delegates to that. In the future,
`extractLocationsFromPosition` will be removed.
For precise usagesForSymbol, we want to propagate usages everywhere
(with associated symbol names, not just 'Location' values). This PR
introduces the new Usage type, and unifies the old GetBulkSymbolUsages and
GetMinimalBulkSymbolUsages APIs into a single GetSymbolUsages API.
We convert the Usage values to Location to avoid changing a lot of code
at once.
We also change the DB query to do grouping and aggregation for us
instead of doing it in Go code.
---------
Co-authored-by: Christoph Hegemann <christoph.hegemann@sourcegraph.com>
This patch changes the location querying code so that:
1. We populate structures corresponding to SCIP instead of LSIF
(with "scheme" and "identifier" inside "MonikerData").
2. We avoid repeatedly allocating the constant string 'scip' for the scheme
only to throw it away later.
3. The two queries and their scanning code become more similar, for easier
comparison. When I land precise usagesForSymbol, I will de-duplicate
some of the scanning code between these two queries.
I have avoided renaming all of the local variables to avoid creating
more noise.
## Test plan
Covered by existing tests.
For certain types, we do not want zero-value initialization of struct fields.
This means we need to trade off readability vs exhaustive initialization
checking: the Go syntax `Foo{Bar: bar}` is more readable but doesn't do
exhaustiveness checking, while `Foo{bar}` does check for exhaustiveness but can be less
readable depending on context.
For now, the check is only introduced for one type, and is meant to be
opt-in so that code authors may choose stricter checking.
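As a generic illustration of the tradeoff (not a type from this change):

```go
package example

type Options struct {
	Host string
	Port int
	TLS  bool
}

// Keyed initialization: readable, but if a new field is added to Options
// this still compiles and the new field is silently zero-valued.
var keyed = Options{Host: "localhost", Port: 8080}

// Positional initialization: every field must be supplied, so adding a field
// to Options turns this into a compile error ("too few values"), at the cost
// of readability - you need to know the field order to read it.
var positional = Options{"localhost", 8080, true}
```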
This makes it easier to run Sourcegraph in local dev by compiling a few
key services (frontend, searcher, repo-updater, gitserver, and worker)
into a single Go binary and running that.
Compared to `sg start` (which compiles and runs ~10 services), it's
faster to start up (by ~10% or a few seconds), takes a lot less memory
and CPU when running, has less log noise, and rebuilds faster. It is
slower to recompile for changes just to `frontend` because it needs to
link in more code on each recompile, but it's faster for most other Go
changes that require recompilation of multiple services.
This is only intended for local dev as a convenience. There may be
different behavior in this mode that could result in problems when your
code runs in the normal deployment. Usually our e2e tests should catch
this, but to be safe, you should run in the usual mode if you are making
sensitive cross-service changes.
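Conceptually, the combined binary just runs the services' entry points in one process. A rough sketch with hypothetical entry-point names (the real wiring reuses each service's existing svcmain setup, not a hand-rolled loop like this):

```go
package main

import (
	"context"
	"log"

	"golang.org/x/sync/errgroup"
)

// service is a hypothetical stand-in for a service entry point.
type service struct {
	name string
	run  func(ctx context.Context) error
}

func main() {
	// The handful of key services compiled into the one binary.
	services := []service{
		{name: "frontend", run: runStub},
		{name: "gitserver", run: runStub},
		{name: "searcher", run: runStub},
		{name: "repo-updater", run: runStub},
		{name: "worker", run: runStub},
	}

	g, ctx := errgroup.WithContext(context.Background())
	for _, svc := range services {
		log.Printf("starting %s", svc.name)
		g.Go(func() error { return svc.run(ctx) })
	}
	if err := g.Wait(); err != nil {
		log.Fatal(err)
	}
}

// runStub stands in for a real service's entry point.
func runStub(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err()
}
```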
Partially reverts "svcmain: Simplify service setup (#61903)" (commit
9541032292).
## Test plan
Existing tests cover any regressions to existing behavior. This new
behavior is for local dev only.
This patch does a few things:
- Adds `go-enry` packages to depguard, so that people do not
  accidentally use enry APIs instead of the corresponding APIs
  in the `languages` package (a sketch of the deny rule follows this list)
- Adds more tests for different functions in the languages package
to ensure mutual consistency in how language<->extension mappings
are handled.
- Adds tests for enry upgrades
- Adds comments with IDs so that related parts of the code can be
  pieced together easily
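A hedged sketch of the depguard addition, assuming the upstream `depguard/v2` settings type; the actual wiring in `dev/linters/depguard` may differ:

```go
package depguard

import (
	"golang.org/x/tools/go/analysis"

	depguardv2 "github.com/OpenPeeDeeP/depguard/v2"
)

// Deny direct use of go-enry so that callers go through the languages
// package instead.
var settings = depguardv2.LinterSettings{
	"go-enry": {
		Deny: map[string]string{
			"github.com/go-enry/go-enry/v2": "use the languages package instead of calling go-enry directly",
		},
	},
}

// Analyzer is the analyzer nogo would run.
var Analyzer = mustNewAnalyzer()

func mustNewAnalyzer() *analysis.Analyzer {
	a, err := depguardv2.NewAnalyzer(&settings)
	if err != nil {
		panic(err)
	}
	return a
}
```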
See [RFC 885 Sourcegraph Enterprise Portal (go/enterprise-portal)](https://docs.google.com/document/d/1tiaW1IVKm_YSSYhH-z7Q8sv4HSO_YJ_Uu6eYDjX7uU4/edit#heading=h.tdaxc5h34u7q) - closes CORE-6. The only files requiring in-depth review are the `.proto` files, as everything else is generated:
- `lib/enterpriseportal/subscriptions/v1/subscriptions.proto`
- `lib/enterpriseportal/codyaccess/v1/codyaccess.proto`
This PR only introduces API definitions - implementation will come as subsequent PRs, tracked in the ["Launch Enterprise Portal" Linear project](https://linear.app/sourcegraph/project/launch-sourcegraph-enterprise-portal-ee5d9ea105c2).
Before reviewing the diffs, **please review this PR description in depth**.
### Design goals
This initial schema aims to help achieve CORE-97 by adding our initial "get subscription Cody access", as well as our general Stage 1 goal of providing read-only access to our existing enterprise subscription mechanisms. In doing so, we can start to reshape the API in a way that accommodates future growth and addresses some debt we have accumulated over time, before the Stage 2 goal of having the new Enterprise Portal be the source-of-truth for all things subscriptions.
I am also aiming for a conservative approach with the Cody Gateway access-related RPCs, to ease migration risks and allow the Cody teams to follow up quickly with more drastic changes in a V2 of the service after a Core-Services-driven migration to use the new service: https://github.com/sourcegraph/sourcegraph/pull/62263#issuecomment-2101874114
### Design overview
- **Multiple services**: Enterprise Portal aims to be the home of most Enterprise-related subscription and access management, but each component should be defined as a separate service to maintain clear boundaries between "core" capabilities and future extensions. One problem we see in `dotcom { productSubscriptions }` is that embedding additional concepts like Cody Gateway access makes the API surface unwieldy and brittle, and encourages an internal design that bundles everything together (the `product_subscriptions` table has 10 `cody_gateway_*` columns today). More concretely, this PR designs 2 services that Enterprise Portal will implement:
- `EnterprisePortalSubscriptionsService` (`subscriptions.proto`): subscriptions and licenses CRUD
- `EnterprisePortalCodyGatewayService` (`codygateway.proto`): Enterprise Cody Gateway access
- **Multiple protocols**: We use [ConnectRPC](https://connectrpc.com/) to generate traditional gRPC handlers for service-to-service use, but also a plain HTTP/1 "REST"-ish protocol (the ["Connect Protocol"](https://connectrpc.com/docs/protocol)) that works for web clients and simple integrations. Go bindings for the Connect protocol are generated into the `v1connect` subpackages.
- **Future licensing model/mechanism changes**: The _Subscription_ model is designed to remain static, but _Licenses_ are designed to accommodate future changes - `EnterpriseSubscriptionLicenseType` and `EnterpriseSubscriptionLicense` in this PR describe only the current type of license, referred to as "classic licenses", but we can extend this in the future for e.g. new models (refreshable licenses?) or new products (Cody-only? PLG enterprise?), or existing problems (test instance licenses?).
- **Granular history**: Instead of `createdAt`, `isArchived`, `revokedAt`, and so on, the new API defines Kubernetes-style `conditions` for licenses and subscriptions to describe creation, archival, and revocation events, and these can be more flexibly extended for future events and a lightweight audit log of major changes to a subscription or license. In particular, `revokedAt` already has a `revokedReason` - this allows us to extend these important events with additional metadata in a flexible manner.
- **Pagination**: I couldn't find a shared internal or off-the-shelf representation of pagination attributes, but each `List*` RPC describes `page_size`, `page_token`, and `next_page_token`
- **Querying/filtering**: I couldn't find a strong standard for this either, but in general:
  - `Get*` accepts a `query` that is a `oneof`, with the goal of providing exact matches only.
  - `List*` accepts `repeated filter`, where each `filter` is a `oneof` over a set of strategies relevant to a particular `List*` RPC. Multiple filters are treated as `AND`-concatenated (a client sketch follows this list).
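For instance, a consumer of the Connect bindings might page through licenses roughly like this. This is a hedged sketch: the package aliases, constructor name, and request/response field names follow connect-go's usual conventions plus the names in this PR, but are not copied from generated code:

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"connectrpc.com/connect"

	subscriptionsv1 "github.com/sourcegraph/sourcegraph/lib/enterpriseportal/subscriptions/v1"
	"github.com/sourcegraph/sourcegraph/lib/enterpriseportal/subscriptions/v1/v1connect"
)

func main() {
	client := v1connect.NewEnterprisePortalSubscriptionsServiceClient(
		http.DefaultClient, "https://enterprise-portal.example.com")

	// Page through all licenses, 50 at a time, using the page_token /
	// next_page_token convention described above.
	var pageToken string
	for {
		resp, err := client.ListEnterpriseSubscriptionLicenses(context.Background(),
			connect.NewRequest(&subscriptionsv1.ListEnterpriseSubscriptionLicensesRequest{
				PageSize:  50,
				PageToken: pageToken,
			}))
		if err != nil {
			panic(err)
		}
		fmt.Println(resp.Msg) // inspect the page of results
		pageToken = resp.Msg.GetNextPageToken()
		if pageToken == "" {
			break
		}
	}
}
```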
Some major changes from the existing model:
- **Downgrade the concept of "subscription access token"**: this was built for Cody Gateway but I am not sure it has aged well, as the mechanism is still tied to individual licenses, has not found new non-Cody-Gateway use cases (except for license checks, though those do not require an "access token" model either), and these tokens are still not "true" access tokens today, as they cannot be expired or managed properly. This PR relegates the concept to remain Cody-specific, as it effectively is today, so that we might be able to introduce a better subscription-wide model if the use case arises. Over time, we may want to make this even more opaque, relying entirely on zero-config instead (generating from license keys).
- **Subscriptions are no longer attached to a single dotcom user**: Most of these users today are not real users anyway, as our license creation process asks that you create a fake user account (["User account: [...] We create a company-level account for this."](https://handbook.sourcegraph.com/departments/technical-success/ce/process/license_keys/#license-key-mechanics)). The new API removes the concept entirely, in favour of a true user access management system in CORE-102.
- **Database/GraphQL IDs** are no longer exposed - we use external, prefixed UUIDs for representing entities over APIs in a universal manner.
- **Per-subscription Cody Gateway access no longer exposes `allowed models`**: I suggested this to @rafax in light of recent problems with propagating new models to Enterprise customers. He agreed that the general product direction is "model options as a selling point" - it no longer makes sense to configure these at a per-subscription level. Instead, the Cody Gateway service should configure globally allowed models directly, and each Sourcegraph instance can determine what models they trust. If we really need this back we can add it later, but for now I think this removal is the right direction.
### Direct translations
`cmd/cody-gateway/internal/dotcom/operations.graphql` defines our key dependencies for achieving CORE-97. The concepts referred to in `operations.graphql` translate to this new API as follows:
- `dotcom { productSubscriptionByAccessToken(accessToken) }`: `codygateway.v1.GetCodyGatewayAccess({ access_token })`
- `dotcom { productSubscriptions }`: `codygateway.v1.ListCodyGatewayAccess()`
- `fragment ProductSubscriptionState`:
- `id`: **n/a**
- `uuid`: `subscriptions.v1.EnterpriseSubscription.id`
- `account { username }`: `subscriptions.v1.EnterpriseSubscription.display_name`
- `isArchived`: `subscriptions.v1.EnterpriseSubscription.conditions`
- `codyGatewayAccess { ... }`: **separate RPC to `codygateway.v1.GetCodyGatewayAccess`**
- `activeLicense { ... }`: **separate RPC to `subscriptions.v1.ListEnterpriseSubscriptionLicenses`**
### Why `lib/enterpriseportal`?
We recently had to move another service, Telemetry Gateway, to `lib`: #62061. Inevitably, there will be services that live outside the monorepo that want to integrate with Enterprise Portal (one is on our roadmap: Cody Analytics in https://github.com/sourcegraph/cody-analytics). This allows us to share generated bindings and some useful helpers, while keeping things in the monorepo.
### Implications for Cody Clients
For now (and in the future), nothing is likely to change. Here's how I imagine things playing out:
```mermaid
graph TD
cc["Cody Clients"] -- unified API --> cg[services like Cody Gateway]
cg -- PLG users --> ssc[Self-Serve Cody]
cg -- Enterprise users --> ep[Enterprise Portal]
```
## Test plan
CI passes; the schemas can be generated by hand:
```
sg gen buf \
lib/enterpriseportal/subscriptions/v1/buf.gen.yaml \
lib/enterpriseportal/codyaccess/v1/buf.gen.yaml
```
---------
Co-authored-by: Joe Chen <joe@sourcegraph.com>
Co-authored-by: Chris Smith <chrsmith@users.noreply.github.com>
Go 1.22 changes for-loop variables to have more sensible
semantics: each iteration now gets a fresh variable instead of the
compiler reusing one across iterations, so we can simplify a bunch of
code that worked around the old, counterintuitive behavior.
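The typical cleanup looks like this (a generic example of the pattern, not code from a specific file):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	repos := []string{"a", "b", "c"}
	var wg sync.WaitGroup

	// Before Go 1.22 the loop variable was shared across iterations, so
	// closures needed a per-iteration copy:
	//
	//     for _, repo := range repos {
	//         repo := repo // copy to avoid capturing the shared variable
	//         go func() { fmt.Println(repo) }()
	//     }
	//
	// With Go 1.22 each iteration gets a fresh variable, so the copy (and
	// similar workarounds in our code) can simply be deleted:
	for _, repo := range repos {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(repo)
		}()
	}
	wg.Wait()
}
```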
In a recent incident it was brought up that hotfixing searcher carried some
risk since it speaks to the DB. I investigated and noticed it doesn't
actually connect, but it does have dbconn as an unused transitive dep.
This commit refactors out the bit of logic used by searcher into a pure
package. To do so I did a few renames, but functionally there is no
change in logic other than the searcher binary shedding some heavy
dependencies.
Additionally, I updated the go-dbconn-import check to ensure searcher
isn't accidentally using the database again.
Test Plan: CI
Currently the worker itself does nothing: it only exposes a health endpoint and loads basic environment configuration.
- Bazel build for the Docker container
- Wire in scip-treesitter-cli to make it available in the container
- Dev setup for scip-treesitter-cli (copied from the scip-ctags setup for local development)
- Run configuration for the worker: `sg run codeintel-syntactic-worker` to test
- Start configuration `sg start codeintel-syntactic`: contains only the minimal dependencies required to run the worker; we will expand the configuration gradually as we add more features
* First pass at moving logging linting into Bazel
* fixed negation operators
* Update dev/linters/logging/logging.go
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
* added more exceptions and refactored one or two impls
* added nogo lint pragmas to offending files
* ran configure
* reverted git-combine refactor
* ran configure
* reverted test as well
---------
Co-authored-by: William Bezuidenhout <william.bezuidenhout@sourcegraph.com>
From https://github.com/sourcegraph/sourcegraph/pull/59170#discussion_r1435025135
## Test plan
Bazel build attempt using smithy-go/ptr: `dev/linters/depguard/depguard.go:7:2: import 'github.com/aws/smithy-go/ptr' is not allowed from list 'Main': use github.com/sourcegraph/sourcegraph/lib/pointers instead (depguard)`
Closes https://github.com/sourcegraph/sourcegraph/issues/54833
## Test plan
Ran `bazel run //dev/linters/staticcheck:write_generated_bazelfile` and `cd dev/linters/staticcheck && go generate`, as well as adding staticcheck analyzers to be ignored.
Closes https://github.com/sourcegraph/sourcegraph/issues/54836
## Test plan
Brought dbconn into the dependency graph of //cmd/executor in a few ways:
1. directly
2. indirectly through a direct dependency
3. indirectly through a transitive dependency
4. inserting/removing from the graph in a few ways to try to catch any Fact caching issues
New output: `dev/linters/dbconn/dbconn.go:83:16: use of fmt.Errorf forbidden because “errors.Newf should be used instead” (forbidigo)`
Previous output: `dev/linters/dbconn/dbconn.go:83:16: use of fmt.Errorf forbidden by pattern ^fmt\.Errorf$ (forbidigo)`
## Test plan
Used `fmt.Errorf` in `//dev/linters/dbconn` and ran `bazel build` on it
Closes https://github.com/sourcegraph/sourcegraph/issues/54838
## Test plan
Added `github.com/opentracing/opentracing-go` to some package and did `bazel build //<pkg>` to observe:
`cmd/cody-gateway/internal/actor/limits.go:15:2: import 'github.com/opentracing/opentracing-go/log' is not allowed from list 'tracing libraries': use "go.opentelemetry.io/otel/trace" instead (depguard)`
This fixes the `unused` lint to correctly report its line number so the
`nolint` wrapper can successfully ignore it. It's kinda hacky because
the information needed to do this isn't properly exposed (see comment).
This adds the `unused` lint to the set of linters supported by `nogo`.
This PR leaves it disabled because we currently have a _lot_ of dead
code lying around.
For anyone who wants to work on sniping some unused code, just uncomment
the line in `BUILD.bazel`
This removes the `sg lint` check for incorrectly formatted
`go:generate` directives and replaces it with a proper
linter handled by `nogo`.
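The nogo replacement boils down to a small go/analysis pass over file comments. A minimal sketch of the idea (not the exact implementation in this change):

```go
package gogenerate

import (
	"strings"

	"golang.org/x/tools/go/analysis"
)

// Analyzer flags malformed go:generate directives, e.g. "// go:generate"
// (with a space after the slashes), which the go tool silently ignores.
var Analyzer = &analysis.Analyzer{
	Name: "gogenerate",
	Doc:  "checks that go:generate directives are correctly formatted",
	Run: func(pass *analysis.Pass) (interface{}, error) {
		for _, file := range pass.Files {
			for _, group := range file.Comments {
				for _, c := range group.List {
					// A valid directive must start exactly with "//go:generate".
					if strings.HasPrefix(c.Text, "//go:generate") {
						continue
					}
					trimmed := strings.TrimSpace(strings.TrimPrefix(c.Text, "//"))
					if strings.HasPrefix(trimmed, "go:generate") {
						pass.Reportf(c.Pos(), "malformed go:generate directive: %q", c.Text)
					}
				}
			}
		}
		return nil, nil
	},
}
```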
## Test plan
Locally tested + CI
Adds https://github.com/mvdan/unparam as a nogo linter
Without `//nolint:unparam`
```
bb //dev/linters/...
INFO: Analyzed 136 targets (0 packages loaded, 0 targets configured).
INFO: Found 136 targets...
ERROR: /Users/william/code/sourcegraph/dev/linters/unparam/BUILD.bazel:3:11: GoCompilePkg dev/linters/unparam/unparam.a failed: (Exit 1): builder failed: error executing command (from target //dev/linters/unparam:unparam) bazel-out/darwin_arm64-opt-exec-2B5CBBC6/bin/external/go_sdk/builder_reset/builder compilepkg -sdk external/go_sdk -installsuffix darwin_arm64 -src dev/linters/unparam/unparam.go -embedroot '' ... (remaining 30 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
compilepkg: nogo: errors found by nogo during build-time code analysis:
dev/linters/unparam/unparam.go:27:21: Test - b is unused (unparam)
INFO: Elapsed time: 0.374s, Critical Path: 0.17s
INFO: 3 processes: 2 internal, 1 darwin-sandbox.
FAILED: Build did NOT complete successfully
```
With `//nolint:unparam`
```
bb //dev/linters/...
INFO: Analyzed 136 targets (0 packages loaded, 0 targets configured).
INFO: Found 136 targets...
INFO: Elapsed time: 0.261s, Critical Path: 0.11s
INFO: 3 processes: 1 internal, 2 darwin-sandbox.
INFO: Build completed successfully, 3 total actions
```
## Test plan
* Checked that a small function in the linter is picked up when the
`//nolint:unparam` directive is removed
* Green CI
We cannot use the staticcheck analyzer directly, since it's actually a
collection of analyzers. So we have to pull out each "micro" analyzer
and create a separate bazel target for it.
Each bazel target actually just uses `staticcheck.go` and embeds the
analyzer it SHOULD use when invoked as that target. Ex:
```
bazel build //dev/linters/staticcheck:SA6001
```
The above target will do the following:
* Set `AnalyzerName` to `SA6001` in `staticcheck.go` via Bazel `x_defs`
* Set the importpath to
`github.com/sourcegraph/sourcegraph/dev/linters/staticcheck/SA6001`.
**This is very important, otherwise there won't be different libraries
with different analyzer names, just one library with one analyzer set
invoked by different targets**.
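Conceptually, `staticcheck.go` does something along these lines (a simplified sketch; `AnalyzerName` is stamped per target via Bazel `x_defs`, and the real file may differ):

```go
package staticcheck

import (
	"golang.org/x/tools/go/analysis"

	"honnef.co/go/tools/staticcheck"
)

// AnalyzerName is overridden per Bazel target via x_defs, which stamps a
// different value for each per-check importpath at link time.
var AnalyzerName = "SA1000"

// Analyzer is the single staticcheck check selected for this target.
var Analyzer = findAnalyzer(AnalyzerName)

func findAnalyzer(name string) *analysis.Analyzer {
	for _, a := range staticcheck.Analyzers {
		if a.Analyzer.Name == name {
			return a.Analyzer
		}
	}
	panic("unknown staticcheck analyzer: " + name)
}
```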
## Test plan
bazel build //cmd/frontend/...
green ci
---------
Co-authored-by: Jean-Hadrien Chabran <jh@chabran.fr>
~Quick~ update to catch up with the latest changes and fix edge cases.
## Test plan
Bazel jobs should be green.
---------
Co-authored-by: Jason Bedard <jason@aspect.dev>
Co-authored-by: Valery Bugakov <skymk1@gmail.com>
Add the ineffassign linter and let it get executed as part of build
analysis.
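For context, ineffassign reports assignments whose values are never used before being overwritten, e.g. (a generic example, not one of the fixes here):

```go
package example

import "strconv"

func parsePort(s string) int {
	port := 3080 // ineffassign: this value is overwritten before ever being read
	port, err := strconv.Atoi(s)
	if err != nil {
		return 3080
	}
	return port
}
```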
### Linter fixes
Made a few fixes the linter found.
## Test plan
`bazel build //cmd/...`
`bazel build //internal/...`
`bazel build //enterprise/...`
`bazel build //dev/...`
Adds forbidigo as a linter that can be executed by nogo.
### What are these other BUILD files for?
The build didn't work without them, so I had to add them. They were
auto-generated by `bazel run //:gazelle`.
### Example of errors reported by this linter
```
external/com_github_prometheus_common/model/time.go:164:10: use of `fmt.Errorf` forbidden by pattern `^fmt\.Errorf$` (forbidigo)
external/com_github_prometheus_common/model/time.go:196:13: use of `fmt.Errorf` forbidden by pattern `^fmt\.Errorf$` (forbidigo)
external/com_github_prometheus_common/model/time.go:200:13: use of `fmt.Errorf` forbidden by pattern `^fmt\.Errorf$` (forbidigo)
external/com_github_prometheus_common/model/value.go:53:10: use of `fmt.Errorf` forbidden by pattern `^fmt\.Errorf$` (forbidigo)
external/com_github_prometheus_common/model/value.go:279:10: use of `fmt.Errorf` forbidden by pattern `^fmt\.Errorf$` (forbidigo)
external/com_github_prometheus_common/model/value.go:327:10: use of `fmt.Errorf` forbidden by pattern `^fmt\.Errorf$` (forbidigo)
[609 / 1,838] 10 actions running
GoCompilePkg external/com_github_yuin_goldmark_emoji/definition/definition.a; 3s darwin-sandbox
GoCompilePkg external/com_github_go_enry_go_enry_v2/data/data.a; 3s darwin-sandbox
ERROR: build interrupted
INFO: Elapsed time: 10.957s, Critical Path: 4.42s
INFO: 403 processes: 101 internal, 302 darwin-sandbox.
FAILED: Build did NOT complete successfully
```
### Why do we have our own runAnalysis
Annoyingly, the `analyzer.NewAnalyzer` from forbidigo does not allow you
to configure it in any way, and we want to add our own patterns to it. So
`runAnalysis` was copied from `forbidigo` and slightly modified so that
we can specify patterns.
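As a rough illustration of what pattern-based forbidding looks like inside a go/analysis pass, here is a deliberately simplified stand-in that matches selector expressions against a hard-coded pattern list; the real `runAnalysis` delegates the matching to forbidigo with our patterns:

```go
package forbidigo

import (
	"go/ast"
	"regexp"

	"golang.org/x/tools/go/analysis"
)

// Patterns we want forbidden; the real linter passes patterns like this
// through to forbidigo rather than matching them itself.
var forbiddenPatterns = []*regexp.Regexp{
	regexp.MustCompile(`^fmt\.Errorf$`),
}

var Analyzer = &analysis.Analyzer{
	Name: "forbidigo",
	Doc:  "forbids use of specific identifiers",
	Run: func(pass *analysis.Pass) (interface{}, error) {
		for _, file := range pass.Files {
			ast.Inspect(file, func(n ast.Node) bool {
				sel, ok := n.(*ast.SelectorExpr)
				if !ok {
					return true
				}
				pkg, ok := sel.X.(*ast.Ident)
				if !ok {
					return true
				}
				name := pkg.Name + "." + sel.Sel.Name
				for _, p := range forbiddenPatterns {
					if p.MatchString(name) {
						pass.Reportf(sel.Pos(), "use of %s forbidden by pattern %s", name, p)
					}
				}
				return true
			})
		}
		return nil, nil
	},
}
```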
## Test plan
`bazel build //cmd/frontend/...`
This adds nogo to lint our code from within bazel.
## Test plan
`bazel build //cmd/frontend`
Still need to do a full build.
---------
Co-authored-by: Jean-Hadrien Chabran <jh@chabran.fr>