- The time.Second frequency is too frequent for checking whether the job
should be run; I think I set this during testing
- Adjust the Slack messaging to say `Active license`
- Adjust the Slack messaging to include the Salesforce subscription ID
## Test plan
Tests
This is a partial revert commit of
cbd12608b5.
We added support for NoTransaction, but it isn't needed anymore. In
fact, avoiding transactions leads to issues like poisoning connections.
Test Plan: CI
We believe the migration to add tenant_id is failing on dotcom due to
this trigger, which runs on any possible change to a row. It only needs
to run if the deleted_at column changes, so this migration adjusts each
trigger to fire only when deleted_at is changed.
We don't need to change the trigger functions, since they only read
deleted_at.
If this doesn't resolve our issue, we will need to look into a statement
level trigger for all our triggers.
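A sketch of the intended trigger shape, with illustrative trigger,
table, and function names (Postgres supports both an `UPDATE OF` column
list and a `WHEN` condition):

```sql
-- Hypothetical migration sketch: fire only when deleted_at actually
-- changes, rather than on every UPDATE to the row.
DROP TRIGGER IF EXISTS trig_soft_delete ON repo;
CREATE TRIGGER trig_soft_delete
    AFTER UPDATE OF deleted_at ON repo
    FOR EACH ROW
    WHEN (OLD.deleted_at IS DISTINCT FROM NEW.deleted_at)
    EXECUTE FUNCTION soft_delete_trigger();
```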
Test Plan: CI and then will test safely on larger instances.
Fixes srch-882
I noticed that the button wasn't shown anymore (neither in Svelte nor in
React). It seems that this broke when we switched from using the file
extension to getting the language(s) from the server.
The server sends back capitalized names, which we compare against
lowercase IDs.
If there is a newer way to find out whether a language supports 'find
implementations', please let me know. For the time being this seems to
be a simple fix to get it working again like before. Alternatively, we
could compare the stylized names.
## Test plan
Hovering over a Go interface (e.g.
https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/internal/conf/server.go?L12)
shows the 'find implementations' button in the hovercard again.
Fixes srch-744
It seems that the code for testing whether the target element is part of
the layout didn't work in hovercards because it (possibly?) runs before
the hovercard is rendered.
Moving the logic to `onMount` + `await tick()` seems to work, although
that might still be a coincidence.
## Test plan
Hovering over the 'precise' badge shows the corresponding tooltip.
We dug into our Go Postgres driver: when executing a migration, it is
executed as a "simple query". In this case, Postgres automatically
wraps the collection of statements in a transaction, unless it contains
transaction statements. So our last attempt at removing the transaction
failed.
In this attempt we use COMMIT AND CHAIN after each table alter, which
commits the current transaction and then immediately starts a new one.
From the perspective of the Go driver, it is as if there was only one
transaction. We then switch the migration to using a transaction to
ensure the Go driver cleans up the Postgres connection in case of
failure.
I.e., if a query manually starts a transaction and does not clean up,
the connection will be marked as broken for the next caller who gets
the connection from the pool. By wrapping in Go's transaction code, the
connection will be properly cleaned up.
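A sketch of what the migration body could look like under this
approach (table and column names are illustrative):

```sql
BEGIN;
ALTER TABLE repo ADD COLUMN tenant_id integer;
-- Commit the work so far, then immediately start a new transaction;
-- the driver still sees one continuous multi-statement query.
COMMIT AND CHAIN;
ALTER TABLE users ADD COLUMN tenant_id integer;
COMMIT AND CHAIN;
ALTER TABLE orgs ADD COLUMN tenant_id integer;
COMMIT;
```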
Test Plan: All continuous environments have already succeeded or failed
on this migration number. So we will manually run this again against
them with the migrator code to ensure the same code paths. If they
succeed we will keep code as is, otherwise we will rollback.
Additionally, we did lots of ad hoc testing to understand the
characteristics of Go and transaction handling.
Co-authored-by: Erik Seliger <erikseliger@me.com>
Previously, it took ~6 seconds for a single edit/test/debug feedback
loop in the `llmapi` module. After this change, it's now 1-2s.
The reason the feedback loop was slow was that we depended on the
`//cmd/frontend/internal/modelconfig` target, which transitively brings
in `graphqlbackend` and all the migration code, which adds huge overhead
to Go link times. It was relatively easy to untangle this dependency so
I went ahead and removed it to boost my local feedback loop.
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
Green CI.
To measure the timing, I ran the tests, made a tiny change, and ran the
tests again to measure the total time to build+test.
```
# Before
❯ time go test -timeout 30s github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi
ok github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi 2.394s
go test -timeout 30s 4.26s user 4.73s system 166% cpu 5.393 total
# After
❯ time go test -timeout 30s github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi
ok github.com/sourcegraph/sourcegraph/cmd/frontend/internal/llmapi 0.862s
go test -timeout 30s 1.20s user 1.21s system 135% cpu 1.774 total
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Currently, build-tracker keeps track of consecutive build failures
through an in-memory store of failed builds. As this gets deployed more
frequently on MSP, we lose state more frequently, which would result in
incorrect results. Instead, we can use Redis as our external store, as
well as for locking via redsync.
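A minimal sketch of the idea, using an in-memory map as a stand-in for
Redis (the real service keys counts per pipeline in Redis and guards
updates with a redsync mutex; all names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// failureStore tracks consecutive failures per pipeline in a shared
// store, so a redeploy of build-tracker does not reset the counts.
type failureStore struct {
	mu     sync.Mutex     // stand-in for the redsync distributed mutex
	counts map[string]int // stand-in for Redis keys
}

// Record updates the consecutive-failure count for a pipeline and
// returns the new count; a passing build resets it to zero.
func (s *failureStore) Record(pipeline string, failed bool) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	if failed {
		s.counts[pipeline]++
	} else {
		s.counts[pipeline] = 0
	}
	return s.counts[pipeline]
}

func main() {
	s := &failureStore{counts: map[string]int{}}
	s.Record("main", true)
	s.Record("main", true)
	fmt.Println(s.Record("main", true)) // three consecutive failures
}
```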
## Test plan
Unit tests have been updated, but proper testing will require live
traffic.
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Fixes CODY-3269
Previously, the OpenAI-compatible API endpoints had paths like
`/api/v1/chat/completions`, which went against an existing convention of
keeping all APIs under the `/.api` prefix. We have a fair amount of
internal tooling centered around the assumption that APIs have the
`/.api` prefix so this PR corrects the mistake and moves the
`/api/v1/chat/completions` endpoint to `/.api/llm/chat/completions`.
I went with the prefix `/.api/llm` since these endpoints allow clients
to interact with LLM features like `/chat/completions` or listing model
information. These APIs happen to be compatible with OpenAI APIs, but I
think it would be more confusing to add something like "openai" or
"openaicompatible" to the API endpoint. We can simply document on our
website that these endpoints are compatible with OpenAI clients.
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
Green CI. I also manually confirmed that I was able to use an OpenAI
client to send requests to the new APIs.
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
* Moved `/api/v1/chat/completions` to `/.api/llm/chat/completions`. The
functionality is unchanged.
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
When we added a migration that actually relies on the promised property
of migrations always running inside a transaction, we noticed that the
memoryStore implementation we use in the dbtest package does not
actually start a transaction. This implements transactions for the
memoryStore as well, to make it pass.
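A hypothetical sketch of what transaction support for an in-memory
store can look like (names are illustrative, not the actual dbtest
code): Begin snapshots the data and Rollback restores it, so a failing
migration leaves no partial writes behind.

```go
package main

import "fmt"

type memoryStore struct{ data map[string]string }

type memoryTx struct {
	store    *memoryStore
	snapshot map[string]string
}

// Begin copies the current state so it can be restored on rollback.
func (s *memoryStore) Begin() *memoryTx {
	snap := make(map[string]string, len(s.data))
	for k, v := range s.data {
		snap[k] = v
	}
	return &memoryTx{store: s, snapshot: snap}
}

// Rollback discards everything written since Begin.
func (tx *memoryTx) Rollback() { tx.store.data = tx.snapshot }

func main() {
	s := &memoryStore{data: map[string]string{}}
	tx := s.Begin()
	s.data["k"] = "partial write"
	tx.Rollback()
	fmt.Println(len(s.data)) // 0: the failed migration left no trace
}
```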
Test plan: Our migration now passes, and dbtest performance with and
without this change is the same.
Fixes CODY-3267
Previously, requests to `/api/` matched the new `publicrestapi` module,
which meant that requests to the GraphQL `/api/console` no longer
worked. This PR fixes the problem by narrowing the prefix-match to
`/api/v1` so that it no longer matches `/api/console`.
I kept the scope of this PR narrow and only fixed the /api/console bug.
I will share a separate RFC to seek input on the tradeoffs between
/api/v1 and /.api. I can make that change separately if there's wide
consensus in #wg-architecture that we want to keep all API endpoints
(public and internal-facing) under /.api.
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
## Test plan
Ran `sg start minimal` and confirmed that I'm able to visit the URL
https://sourcegraph.test:3443/api/console
On the main branch, the same URL leads to a 404 page.
I also confirmed that the `/api/v1/chat/completions` endpoint still
works as expected.
```hurl
POST https://sourcegraph.test:3443/api/v1/chat/completions
Content-Type: application/json
Authorization: Bearer {{token}}
{
"model": "anthropic::unknown::claude-3-sonnet-20240229",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Respond with \"no\" and nothing else"
}
]
}
],
"temperature": 1,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0
}
```
```sh
❯ hurl hurl-scratchpad/openai-sg.hurl
{"id":"chat-1727acdf-6850-4387-950b-2e89850071fa","choices":[{"finish_reason":"end_turn","index":0,"message":{"content":"no","role":"assistant"}}],"created":1723536215,"model":"anthropic::unknown::claude-3-sonnet-20240229","system_fingerprint":"","object":"chat.completion","usage":{"completion_tokens":0,"prompt_tokens":0,"total_tokens":0}}
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
Our redis `KeyValue` interface has a built-in way to prefix all keys.
This is helpful for integration tests, to contain all keys within a test
namespace.
This PR solidifies the implementation by pushing down the prefixing as
much as possible. In doing so, it fixes a bug with the `Keys()` method
where we forgot to prefix the pattern. This makes it easier to add new
methods to the interface without forgetting to add the prefix.
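A minimal sketch of the pushed-down prefixing (a toy in-memory store,
not the real Redis-backed implementation): every operation, including
the `Keys()` pattern, goes through the prefix.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// prefixedKV applies a namespace prefix on every operation, so callers
// never see or supply prefixed keys.
type prefixedKV struct {
	prefix string
	data   map[string]string
}

func (kv *prefixedKV) Set(key, value string) { kv.data[kv.prefix+key] = value }

func (kv *prefixedKV) Keys(pattern string) []string {
	var keys []string
	for k := range kv.data {
		// Prefix the pattern too -- forgetting this was the Keys() bug.
		if ok, _ := path.Match(kv.prefix+pattern, k); ok {
			// Strip the prefix before returning keys to the caller.
			keys = append(keys, strings.TrimPrefix(k, kv.prefix))
		}
	}
	return keys
}

func main() {
	kv := &prefixedKV{prefix: "test:", data: map[string]string{}}
	kv.Set("a", "1")
	fmt.Println(kv.Keys("*"))
}
```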
We hit a deadlock when deploying this migration to s2. This is likely
because, within the migration's transaction, we didn't acquire table
locks in the same order as a transaction in our application code.
So this commit introduces a new migration metadata field
"noTransaction". The migrator documentation says you should create a
migration per needed transaction; however, that would require us to
create 100s of migrations. We believe the better approach is
introducing this field and barely advertising it.
Reading the code that actually runs migrations, there is no extra logic
outside of BEGIN; run_migration; COMMIT;, so this change is safe.
We update the migrations to avoid duplicating the function name we
introduce, in case something goes wrong (now that the transaction could
leak the function name).
Test Plan: The actual migrations are tested by go test. I added a test
assertion that we don't call Transact, but to be honest that is super
sketchy. However, we couldn't find any test fixtures that actually run
against the DB, so that would require a much deeper investment for how
simple the code change is.
Co-authored-by: Erik Seliger <erikseliger@me.com>
<!-- PR description tips:
https://www.notion.so/sourcegraph/Write-a-good-pull-request-description-610a7fd3e613496eb76f450db5a49b6e
-->
This PR adds the ability to use `--type minor` when running `sg release
create` during the release process.
For the time being this step is the _same_ as `--type patch` (the
default); however, this allows us to differentiate the two types now
and prepares for when/if they diverge. This also clears up some
confusion, as the `sg release` command _can_ already accept `--type
minor`, and one would expect that to be the choice when you are in fact
cutting a minor release.
Closes:
https://linear.app/sourcegraph/issue/REL-351/sourcegraphsourcegraph64377-fixrelease-add-minor-step-to-internal
## Test plan
Tested locally with `--type minor` tag.
```shell
➜ sourcegraph git:(08-08-jdp_release_minor-flag-addition) sg release create --version 5.6.877 --type minor
👉 [ setup] Finding release manifest in "."
[ setup] No explicit branch name was provided, assuming current branch is the target: 08-08-jdp_release_minor-flag-addition
[ setup] Found manifest for "sourcegraph" (github.com/sourcegraph/sourcegraph)
[ meta] Owners: @sourcegraph/release
[ meta] Repository: github.com/sourcegraph/sourcegraph
👉 [ vars] Variables
[ vars] version="v5.6.877"
[ vars] tag="5.6.877"
[ vars] config="{\"version\":\"v5.6.877\",\"inputs\":\"\",\"type\":\"minor\"}"
[ vars] git.branch="08-08-jdp_release_minor-flag-addition"
[ vars] is_development="false"
.... Stuff here
[ buildkite] Build created, see:
[ buildkite] "https://buildkite.com/sourcegraph/sourcegraph/builds/287192"
[ step] Step "buildkite" succeeded
```
<!-- REQUIRED; info at
https://docs-legacy.sourcegraph.com/dev/background-information/testing_principles
-->
## Changelog
Internal change, N/A
<!-- OPTIONAL; info at
https://www.notion.so/sourcegraph/Writing-a-changelog-entry-dd997f411d524caabf0d8d38a24a878c
-->
In codenav, we use very few APIs from gitserver, so let's use a
narrower interface (which we can potentially fake in the future)
instead of depending on gitserver.Client directly.
In the future this will allow us to attribute stack traces collected by
pprof to a tenant. This only sets it for the HTTP middleware. I am
unsure how to achieve the same thing for gRPC, since that uses
propagators.
Test Plan: captured a goroutine profile and saw some goroutines with a
tenant label
---------
Co-authored-by: Erik Seliger <erikseliger@me.com>
I was tired of seeing this warning when running gazelle:
gazelle: finding module path for import github.com/go-critic/go-critic/framework/linter: go: downloading github.com/go-critic/go-critic v0.11.4
go: module github.com/go-critic/go-critic@upgrade found (v0.11.4), but does not contain package github.com/go-critic/go-critic/framework/linter
Updating go-critic to the latest version makes the warning go away,
since it now uses the new package path.
Test Plan: gazelle is happy and CI is happy
This change ensures we correctly handle the following:
1. In Enterprise Portal, where no active license is available, we return
ratelimit=0 intervalduration=0 from the source `PLAN` (as this is
determined by the lack of a plan)
2. In Cody Gateway, where intervalduration=0, we do not grant access to
that feature
## Test plan
Unit tests
This change caches what we download from google storage.
Before this change on my desktop computer this test would timeout after
10 minutes. It now takes 4s.
Test Plan: go test ./internal/database/migration/stitch
These all fail and are only run outside of bazel (which is why we didn't
notice it). I believe the root cause is google getting more strict with
the URLs you pass it.
Should we consider removing these tests given no one has reported it not
working? Or should we get it working in Bazel?
Test Plan: go test ./internal/database/migration/stitch/
The goal of this PR is to make search blitz usable for customers who
want to run it against their instance for continuous performance
analysis.
Specifically, this does two things:
1) Enables configuring the set of queries to run with the
`SEARCH_BLITZ_QUERY_FILE` env var
2) Packages the image in our standard format so we can publish it
Instead of adding instrumentation for each place we expect a tenant, we
assume that a tenant is always expected whenever FromContext is called.
This should ensure we don't miss any places.
Note: this does remove the logging we had in diskcache, but it seems
more reasonable to just use pprof for something that can be noisy?
Test Plan: just gonna rely on CI since this defaults to off.
The `DeleteAllKeysWithPrefix` method is now only used in tests to ensure
the test keyspace is clear. This PR makes it clear this method is only
used in tests, and simplifies the implementation so it no longer needs a
script + direct Redis connection.
Another tiny Redis refactor to prepare for multi-tenancy work.
This PR passes the 'boost' annotation down to searcher, so that it can
apply phrase boosting. For now, we just pass the boost to the Zoekt
query in hybrid search, which already gives a nice benefit since the
Zoekt results are streamed back first.
Note: this doesn't completely implement boosting in searcher, but it was
really simple and seemed worth it. We're purposefully not investing in
big searcher ranking improvements, since we think a better investment is
to unify logic across Zoekt + searcher.
Adds a super simple E2E test suite that must be run with `sg test
enterprise-portal-e2e` against a locally running Enterprise Portal
instance. This is not intended to be filled with super granular
assertions - it simply tests that everything is wired up correctly and
runs end-to-end.
Caught ~3 issues with this already; amended various downstack PRs with
the fix 😆
Closes
https://linear.app/sourcegraph/issue/CORE-229/enterprise-portal-basic-manual-e2e-testing-setup
## Test plan
```
sg start dotcom
sg test enterprise-portal-e2e
```
No additional configuration is required; the defaults work as-is.
What: This PR does the bare minimum to migrate the current community
search pages to Svelte. A better strategy for managing them is needed in
the medium/long term.
How: The community pages live at the root (e.g. `/kubernetes`) which
complicates things, but I'll get to that later. The page is implemented
as a single parameterized route. A parameter matcher is used to validate
the community name. Because these pages should only be accessible on
dotcom the matcher also validates whether or not we are on dotcom (if
not, the path will be matched against a different route).
The page config is stored in a separate module so that it's not
included in every page and so that it can be used in the integration
test.
The loader and page implementation themselves are straightforward. I
made a couple of changes in other modules to make implementation easier:
- Extracted the parameter type of the `marked` function so that it can
be used as prop type.
- Added an `inline` option to `marked` that allows formatting markdown
as 'inline', i.e. without a `p` wrapper.
- Added a `wrap` prop to `SyntaxHighlightedQuery.svelte` to configure
line wrapping of syntax highlighted search queries (instead of having to
overwrite styles with `:global`).
- Extended the route code generator to handle single-parameter segments
and the `communitySearchContext` matcher.
Because the community routes should only be available on dotcom, I
added a new tag to the code generator that allows it to include routes
only for dotcom.
Once we change how all this works and have community search pages live
under a different path we can simplify this again.
Result:
(Side-by-side screenshots comparing the React and Svelte pages
omitted.)
## Test plan
- New integration tests.
- Verified that `/kubernetes` shows a 'repo not found error' when
running against S2.
- Verified that `/kubernetes` shows the community page when running
against dotcom.
- Verified that `window.context.svelteKit.enabledRoutes` contains the
community page route in dotcom mode but not in enterprise mode.
This adds support for navigating between search results with keyboard
shortcuts. Similar to the React app, `j` or `down` means "next result",
and `k` or `up` means "previous result". To accompany this change, when a
search is submitted, the first result is focused by default to
facilitate iterating over results with the keyboard.
The first version of the file tree for the revision panel had flat file
names. This meant that files were not organized, they were very
horizontal-space-sensitive, and you could not filter to a directory
(only repo and file name).
This updates the file filter to be a full tree, which I find much easier
to use.
Required to build an updated subscriptions management UI.
Most of the diff is generated proto code, for some reason.
Closes https://linear.app/sourcegraph/issue/CORE-226
## Test plan
Integration tests