Commit Graph

398 Commits

Author SHA1 Message Date
Erik Seliger
83d0f6876c
dotcom: Remove on-demand cloning of repositories (#63321)
Historically, sourcegraph.com has been the only instance. It was
connected to GitHub.com and GitLab.com only.
Configuration should be as simple as possible, and we wanted everyone to
try it on any repo. So public repos were added on-demand when browsed
from these code hosts.

Since then, dotcom is no longer the only instance, and on-demand cloning
is a special case that exists only for sourcegraph.com.
This causes a bunch of additional complexity and various extra code
paths that we don't test well enough today.

We want to make dotcom simpler to understand, so we've made the decision
to disable that feature, and instead we will maintain a list of
repositories that we have on the instance.
We already disallowed several repos half a year ago by heavily
restricting the size limit for repos with few stars.
This is basically just a continuation of that.

In the diff, you'll mostly find deletions. This PR does not do much
other than removing the code paths that were only enabled in dotcom mode
in the repo syncer, and then removes code that became unused as a result
of that.

## Test plan

Ran a dotcom mode instance locally; it did not behave differently from a
regular instance wrt. repo cloning.
We will need to verify during the rollout that we're not suddenly
hitting code paths that don't scale to the dotcom size.

## Changelog

Dotcom no longer clones repos on demand.
2024-06-26 14:53:14 -07:00
Matthew Manela
92b8ffb8e1
fix(Source): Fix documentation URLs for code hosts help pages (#63274)
It seems many of our doc links for code hosts are broken in production
due to a URL change from external_services to code_hosts. I did a find
and replace to update all the ones I could find.
2024-06-17 14:32:46 -04:00
Erik Seliger
246b53ecc3
Reapply "gitserver(client): Reintroduce 500 maximum connections limit" (#63134)
The first attempt didn't work as there are other exit conditions for the
stream version than just calling RecvMsg until io.EOF. I found that gRPC
has a callback for onFinish, and this seems to work properly locally.

See commit number 2 for the diff over the initial implementation.

## Test plan

Verified locally that all connection counts drop to zero eventually.
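The decrement-exactly-once pattern this relies on can be sketched in plain Go: a stand-in connection gauge is released through a callback guarded by `sync.Once`, so no matter which exit path a stream takes (io.EOF, cancellation, an error from RecvMsg), the count drops by exactly one. The names here are illustrative, not the actual implementation; the real fix registers the callback via gRPC's onFinish hook.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// activeConns is a stand-in for the real active-connection gauge.
var activeConns atomic.Int64

// trackStream registers a stream and returns an onFinish callback that
// decrements the gauge exactly once, however many exit paths invoke it.
func trackStream() (onFinish func()) {
	activeConns.Add(1)
	var once sync.Once
	return func() {
		once.Do(func() { activeConns.Add(-1) })
	}
}

func main() {
	finish := trackStream()
	finish()
	finish() // a second exit path fires: the gauge is not decremented twice
	fmt.Println("active connections:", activeConns.Load())
}
```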
2024-06-07 10:46:19 +02:00
Erik Seliger
9e724bc596
Revert "gitserver(client): Reintroduce 500 maximum connections limit (#63064)" (#63132)
This reverts commit 9185da3c3e.

Noticed there are some bad callers in worker and symbols that don't
properly return a connection. Will need to investigate and fix that
first.

## Test plan

Worked before, CI passes.
2024-06-06 18:10:56 +00:00
Erik Seliger
9185da3c3e
gitserver(client): Reintroduce 500 maximum connections limit (#63064)
This used to exist in the HTTP world, and we currently have zero
safeguards to prevent clients from making one billion requests
concurrently.
Until we invest more into server-side rate limiting, or per tenant rate
limiting, we reintroduce this limiter, to prevent resource usage spikes.

Test plan:

Added a test suite.

---------

Co-authored-by: Geoffrey Gilmore <geoffrey@sourcegraph.com>
2024-06-06 15:18:01 +02:00
Erik Seliger
6d142c833f
gitserver: Add observability for repo service (#63026)
Since we split out this service, we lost a few metrics on call counts and latencies.
This PR adds them back.

Closes #62785

Test plan:

Ran the dashboards locally and they return data. These dashboards are a 1:1 replica of the git service observability.
2024-06-03 16:37:20 +02:00
Petri-Johan Last
df0c59ed12
Remove echo test critical alert (#63004)
The 1s echo test alert for gitserver triggers on dotcom and doesn't have any actionable consequences, so we are removing it. The warning will remain.
2024-06-03 14:11:28 +02:00
Michael Bahr
e85028b8bd
fix: update links for dev docs (#62758)
* fix: license checker info is in docs-legacy

* fix: update remaining dev links
2024-05-17 13:47:34 +02:00
Noah S-C
9b6ba7741e
bazel: transcribe test ownership to bazel tags (#62664) 2024-05-16 15:51:16 +01:00
Erik Seliger
ffd5b0a639
gitserver: Fixup confusing label on monitoring dashboard (#62424)
Turns out I blindly copy-pasted this from elsewhere and the graphs always said
GraphQL operations, which is wrong and confused people.

Test plan:

Reads better now.
2024-05-03 19:58:23 +02:00
Erik Seliger
67fd07b624
gitserver: RUsage and high mem exec logging (#62029)
This PR adds additional observation tools and warning logs for git commands that required a lot of memory.
That should help us better identify where potential for OOMs exists and what endpoints could benefit from optimization.

```
[    gitserver-0] WARN gitserver.cleanup gitcli/command.go:307 High memory usage exec request {"TraceId": "f70c73e500ed7831207ce9a7c6dc63fb", "SpanId": "705d1dcfd0b44a06", "ev.Fields": {"exit_status": "0", "cmd_duration_ms": "1944", "user_time": "234.915ms", "cmd_ru_minflt": "10231", "cmd_ru_majflt": "7", "duration_ms": "1944", "trace": "https://sourcegraph.test:3443/-/debug/jaeger/trace/f70c73e500ed7831207ce9a7c6dc63fb", "cmd_ru_maxrss_kib": "160672", "actor": "0", "traceID": "f70c73e500ed7831207ce9a7c6dc63fb", "repo": "github.com/sourcegraph/sourcegraph", "args": "[git commit-graph write --reachable --changed-paths --size-multiple=4 --split]", "system_time": "1.679428s", "cmd_ru_inblock": "0", "cmd_ru_oublock": "0", "cmd": "commit-graph"}}
```

Test plan:

Tried this locally using some command I know will use a bunch of memory, see test output above.
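The `cmd_ru_maxrss_kib` figure in the log above comes from the process's rusage; on Linux, Go exposes it after the command finishes via `ProcessState.SysUsage()`. A minimal sketch (the command, threshold, and function name are illustrative; on Linux `Rusage.Maxrss` is in KiB, on macOS it is in bytes):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

// maxRSSKiB runs a command and reports its peak resident set size.
func maxRSSKiB(name string, args ...string) (int64, error) {
	cmd := exec.Command(name, args...)
	if err := cmd.Run(); err != nil {
		return 0, err
	}
	// SysUsage is populated once the command has been waited on.
	ru, ok := cmd.ProcessState.SysUsage().(*syscall.Rusage)
	if !ok {
		return 0, fmt.Errorf("rusage unavailable on this platform")
	}
	return ru.Maxrss, nil
}

func main() {
	kib, err := maxRSSKiB("true")
	if err != nil {
		panic(err)
	}
	const warnThresholdKiB = 100 * 1024 // hypothetical warn threshold
	if kib > warnThresholdKiB {
		fmt.Printf("WARN high memory usage exec request maxrss=%d KiB\n", kib)
	} else {
		fmt.Println("ok: maxrss below threshold")
	}
}
```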
2024-04-23 20:21:54 +02:00
Keegan Carruthers-Smith
2fef03886e
monitoring: fix "instance" dropdown for Zoekt (#61836)
The index_num_assigned metric no longer exists. This should fix the
graphs that let you select which indexserver to look at.

Test Plan:
2024-04-12 15:47:19 +02:00
Keegan Carruthers-Smith
2685c8c324
monitoring: add golang monitoring for zoekt (#61731)
Noticed this omission when I was wondering if we had goroutine leaks.
Our other services define this.

I added a simple way to indicate the container name in the title, since
this is the first service we added that needs this.

Test Plan: go test. Copy paste generated query into grafana explore on
dotcom.
2024-04-12 13:46:11 +00:00
Geoffrey Gilmore
bee764a523
vscsyncer: introduce syncer wrapper which calculates latencies for all operations (#61708)
Closes #61692 


## Test plan

Created the following Grafana screenshot using `sg start monitoring`:

![screencapture-sourcegraph-test-3443-debug-grafana-d-gitserver-git-server-2024-04-11-13_46_51](https://github.com/sourcegraph/sourcegraph/assets/9022011/c4e6d8cf-31fd-444d-bd49-905d4003813f)
2024-04-11 14:21:14 -07:00
Stefan Hengl
e33f9528b0
rockskip: monitor indexing queue (#61588)
This adds a p99.9 and p95 panel to monitor how long index requests have
been waiting on the queue.

Test plan:
- manual testing
2024-04-04 17:46:12 +02:00
Stefan Hengl
db43f32431
symbols: add metrics and panels for Rockskip (#61547)
This adds basic metrics to Rockskip and exposes them in a new panel on
the dashboard of the Symbol service.

Test plan:
- manual testing: I enabled Rockskip locally and verified that the dashboards make sense
2024-04-04 09:29:51 +02:00
Erik Seliger
c32cfe58b8
monitoring: Make gitserver alert less trigger-friendly (#61543) 2024-04-03 15:28:17 +02:00
ggilmore
d3786cb9fd httpcli: add prometheus metric for monitoring the rate that Sourcegraph issues requests to external services
commit-id:0f3120ed
2024-04-02 13:50:07 -07:00
Erik Seliger
faf189f892
gitserver: Cleanup grafana dashboard (#60870)
- Removes duplicative disk IO panels
- Adds some warnings and next steps descriptions to the most critical things
- Hides the backend panel by default
- Adds a metric and alert for repo corruption events
2024-03-12 22:11:13 +01:00
Erik Seliger
29b2dd8323
gitserver: Add some simple observability layer to backend (#59920)
To better track latencies and future performance improvements, this adds a lightweight observability layer on top of the existing backend implementation.

I'm really no Prometheus expert, so this might be very wrong, but I do get somewhat okay-looking timing information.
I'd love to get some additional eyes on the monitoring dashboards.

## Test plan

Ran monitoring stack locally and verified the graphs show something.
2024-02-14 15:06:37 +01:00
Camden Cheek
1ead945267
Docs: update links to point to new site (#60381)
We have a number of docs links in the product that point to the old doc site. 

Method:
- Search the repo for `docs.sourcegraph.com`
- Exclude the `doc/` dir, all test fixtures, and `CHANGELOG.md`
- For each, replace `docs.sourcegraph.com` with `sourcegraph.com/docs`
- Navigate to the resulting URL ensuring it's not a dead link, updating the URL if necessary

Many of the URLs updated are just comments, but since I'm doing a manual audit of each URL anyways, I felt it was worth it to update these while I was at it.
2024-02-13 00:23:47 +00:00
Varun Gandhi
ac49f74baa
codeintel: Downgrade queue size critical alert to warning (#60165)
We've been running into spurious alerts on-and-off.

We should add observability here to better narrow down why
we're getting backlogs that are getting cleared later.

In the meantime, it doesn't make sense for this to
be a critical alert.
2024-02-05 14:58:55 +00:00
Erik Seliger
ff1332f0d8
gitserver: Remove CloneableLimiter (#59935)
IMO, this is an unnecessary optimization that increases complexity, and in the current implementation it holds the lock longer than it needs to, because in blocking clone mode the lock is only released once the clone has completed, limiting concurrency more than desired.
The clone limiter AND the RPS limiter are also still in place, so we have more than enough rate limiters here, IMO.
2024-01-31 09:46:39 +01:00
Erik Seliger
ab8746e864
gitserver: Simplify client (#59772)
This PR consolidates and simplifies the client code a lot, in anticipation of moving more of the logic to the server-side. This allows us to have a better overview of what is actually used, and keeps the client interface as simple as possible.

Most notable changes:

- `Head` has been removed in favor of `GetDefaultBranch`. Currently, this is 1 gRPC -> 2 gRPC but with specialized endpoints, we should get it down to 1 again. They were otherwise the same, and I even noticed there's a slight bug in the `Head` implementation that didn't really handle the "empty repo" case. 
- `GetCommit` no longer exposes the option to also ensure the revision exists. 99% of call sites disabled this feature and I opted for an explicit call to EnsureRevision instead. This will keep the endpoint performant and not cause latency spikes.
- `ListBranches` no longer takes options. They were unused.
- The `BatchLog` endpoint has been deprecated.
- `CommitsExist` has been removed; it was the only method that used the batch log method and accounts for probably 1500 of the deleted lines in this PR. We're now making individual requests instead. *We should monitor this, but it doesn't seem like we're making crazy request counts on dotcom, our largest instance, and we have much worse offenders, like the indexserver endpoint that runs rev-parse for 3M repos.* I do believe that with the new API we can build better generic batching implementations that aren't this specific and that use gRPC streaming instead of creating very large responses and latency.
- `CommitDate` has been removed in favor of `GetCommit` which returns the date as well, one less endpoint.
- `GetCommits` has been unexported from the client API, it was only used internally.
- `Addrs` has been unexported.
- The unused properties `RefGlob` and `ExcludeRefGlob` have been removed from `RevisionSpecifier`.
2024-01-24 13:56:54 +01:00
Petri-Johan Last
1bba959307
Remove blobstore latency alert (#59665) 2024-01-17 19:12:37 -08:00
Geoffrey Gilmore
616e3df4b9
monitoring: fix alert definition for site configuration by adding scrape job label (#59687)
We discovered recently that the definition for the alert that fires if the site configuration hasn't been fetched within 5 minutes strips out the regex that targets individual services (since it uses a grafana variable). This means that every instance of this alert will fire if any individual service trips over this threshold.

This PR fixes the issue by adding a new `job` filter for this alert that targets only the services that have that Prometheus scrape target name. This works around the previous issue by using a fixed value for the `job` value instead of a dynamic grafana value.

The value of the job filter generally looks like `job=~.*$container_name` (following the strategy from https://sourcegraph.com/github.com/sourcegraph/sourcegraph@9a780f2e694238b5326e3e121d6a1828463001b9/-/blob/monitoring/monitoring/monitoring.go?L161 ) unless I noticed that there was different logic in the existing dashboard for the services. 

Ex:

- `frontend`: already used `job=~"(sourcegraph-)?frontend"` for some metrics, so I used it again here
- `worker`: already used `job=~"^worker.*"` in some metrics, so I used it again and standardized the other existing panels to use the same shared variable

## Test plan

I eyeballed the generated alert.md and dashboards.md to verify that my changes looked correct (that is, my refactors resulted in either no diff, or that the diff I generated still looked like valid regex).
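The fixed-matcher strategy described above can be sketched as a small helper; the service names and bespoke overrides follow the examples in this message, but the function itself is hypothetical, not the monitoring generator's actual API:

```go
package main

import "fmt"

// jobFilter returns a fixed Prometheus job matcher for a service,
// replacing the dynamic grafana variable in alert definitions.
func jobFilter(containerName string) string {
	// Some services already used bespoke matchers on their dashboards;
	// reuse those instead of the generic `job=~".*<name>"` pattern.
	overrides := map[string]string{
		"frontend": `job=~"(sourcegraph-)?frontend"`,
		"worker":   `job=~"^worker.*"`,
	}
	if f, ok := overrides[containerName]; ok {
		return f
	}
	return fmt.Sprintf(`job=~".*%s"`, containerName)
}

func main() {
	fmt.Println(jobFilter("gitserver"))
	fmt.Println(jobFilter("worker"))
}
```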
2024-01-17 15:19:54 -08:00
Geoffrey Gilmore
9a780f2e69
grpc: fix retry count dashboard (forgot to add label to filter only retried requests) (#59680)
Follow up to #59680. I accidentally forgot to only display retried requests on the dashboard that shows the request counts.

## Test plan

Ran locally, saw that the "Client retry count per-method over 2m" dashboards now have the proper is_retried label:

![Screenshot 2024-01-17 at 11.52.34 AM.png](https://graphite-user-uploaded-assets-prod.s3.amazonaws.com/5VKJ5spRdhDRvKQ0TTIe/61278d74-eaaf-4a11-9f57-43c8e06aaaba.png)
2024-01-17 12:57:18 -08:00
Geoffrey Gilmore
70d5012674
grpc: retry: create Prometheus dashboards (#59607)
This PR implements the logic in the monitoring generator for displaying the gRPC retry metrics included in https://github.com/sourcegraph/sourcegraph/pull/59399 .

## Test plan

I created the following screenshots (look at the bottom panel) from the gitserver + zoekt-webserver grafana dashboards after running `sg start --except zoekt-web-0 --except gitserver-1` and executing searches like `context:global type:diff test count:all r:hashicorp` or `context:global test count:all r:hashicorp`


<img width="1716" alt="Screenshot 2024-01-15 at 1 28 11 PM" src="https://github.com/sourcegraph/sourcegraph/assets/9022011/a503b6d9-3e21-451c-b98b-3b6e634d4ec9">


<img width="1715" alt="Screenshot 2024-01-15 at 1 28 41 PM" src="https://github.com/sourcegraph/sourcegraph/assets/9022011/ed07244e-340d-4ae3-933d-3416abff91cc">
2024-01-16 10:45:43 -08:00
Rafał Gajdulewicz
39d34cd8c9
Remove owner of NoAlert observable (#59384)
* Remove owner from NoAlert observable

* Add generated+pre-commit

* chore: handle trailing spaces

* Regen docs

---------

Co-authored-by: Jean-Hadrien Chabran <jh@chabran.fr>
2024-01-15 16:25:20 +01:00
Erik Seliger
4b5e9f3b8d
Move repo perms syncer to worker (#59510)
Since we have distributed rate limits now, the last dependency is broken and we can move this subsystem around freely.
To make repo-updater more lightweight, worker will be the new home of this system.

## Test plan

Ran stack locally, CI still passes including integration tests.
2024-01-11 21:09:46 +01:00
Erik Seliger
bb09a4ac1f
Remove HTTP for inter-service RPC (#59093)
In the upcoming release, we will only support gRPC going forward. This PR removes the old HTTP client and server implementations and a few leftovers from the transition.
2024-01-11 19:46:32 +01:00
Robert Lin
fc37f74865
monitoring: relax mean_blocked_seconds_per_conn_request alerts (#59507)
https://github.com/sourcegraph/sourcegraph/pull/59284 dramatically reduced the `mean_blocked_seconds_per_conn_request` issues we've been seeing, but overall delays are still higher, even with generally healthy Cloud SQL resource utilization.

<img width="1630" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/23356519/91615471-5187-4d15-83e7-5cc94595303c">

Spot-checking the spikes in load in Cloud SQL, it seems that there is a variety of causes for each spike (analytics workloads, Cody Gateway syncs, code intel workloads, gitserver things, `ListSourcegraphDotComIndexableRepos` etc) so I'm chalking this up to "expected". Since this alert is seen firing on a Cloud instance, let's just relax it for now so that it only fires a critical alert on very significant delays.
2024-01-11 01:14:28 +00:00
Petri-Johan Last
9efa6c7e2e
Adjust blobstore latency alert (#59382) 2024-01-09 08:34:27 +02:00
Robert Lin
8bce54ee62
monitoring: remove very long description (#59338) 2024-01-04 22:56:28 -08:00
Robert Lin
55825e9939
monitoring: test owners for valid Opsgenie teams and handbook pages (#59251)
In INC-264 it seems that certain alerts - such as [zoekt: less than 90% percentage pods available for 10m0s](https://opsg.in/a/i/sourcegraph/178a626f-0f28-4295-bee9-84da988bb473-1703759057681) - don't seem to end up going anywhere because the ObservableOwner is defunct. This change adds _opt-in_ testing to report:

1. How many owners have valid Opsgenie teams
2. How many owners have valid handbook pages

In addition, we collect ObservableOwners that pass the test and use it to generate configuration for `site.json` in Sourcegraph.com: https://github.com/sourcegraph/deploy-sourcegraph-cloud/pull/18338 - this helps ensure the list is valid and not deceptively high-coverage.

The results are not great, but **enforcing** that owners are valid isn't currently in scope:

```
6/10 ObservableOwners do not have valid Opsgenie teams
3/10 ObservableOwners do not point to valid handbook pages
```

I also removed some defunct/unused functionality/owners.

## Test plan

To run these tests:

```
export OPSGENIE_API_KEY="..."
go test -timeout 30s  github.com/sourcegraph/sourcegraph/monitoring/monitoring -update -online                       
```
2023-12-29 14:07:35 -08:00
Dax McDonald
c7f7460061
Remove centralized observability deps (#58484)
* Remove centralized observability deps

* Update generated file
2023-11-22 15:08:30 -07:00
Robert Lin
95b47b7a97
monitoring: assign obvious owners to DatabaseConnectionsMonitoringGroup (#58474)
updates `DatabaseConnectionsMonitoringGroup` to accept an owner - before it was just hardcoded to `ObservableOwnerDevOps`, which is not very helpful. This assigns some of the obvious service owners:

1. Source: gitserver, repo-updater
2. Cody: embeddings (but should eventually be @sourcegraph/search-platform, along with all embeddings alerts: https://github.com/sourcegraph/sourcegraph/pull/58474#issuecomment-1821505062)

Source is an active owner based on [thread](https://sourcegraph.slack.com/archives/C0652SSUA20/p1700592165408089?thread_ts=1700549423.860019&cid=C0652SSUA20), and Cody is a fairly recent addition so hopefully it's valid.
I'm not sure the Search one is still up-to-date, so I didn't change some of the obvious search services - for now, these still point to DevOps as they did before. If it becomes problematic we can revisit later.
2023-11-22 01:06:53 +00:00
Erik Seliger
0236f9e240
Remove global lock around GitHub.com requests (#58190)
Looks like GitHub.com has become more lenient, or transparent on their docs page: https://docs.github.com/en/rest/overview/rate-limits-for-the-rest-api?apiVersion=2022-11-28#about-secondary-rate-limits. The paragraph about single request per token is gone from this page! Instead, they describe secondary rate limits quite well now:

```
You may encounter a secondary rate limit if you:

Make too many concurrent requests. No more than 100 concurrent requests are allowed. This limit is shared across the REST API and GraphQL API.
Make too many requests to a single endpoint per minute. No more than 900 points per minute are allowed for REST API endpoints, and no more than 2,000 points per minute are allowed for the GraphQL API endpoint. For more information about points, see "Calculating points for the secondary rate limit."
Make too many requests per minute. No more than 90 seconds of CPU time per 60 seconds of real time is allowed. No more than 60 seconds of this CPU time may be for the GraphQL API. You can roughly estimate the CPU time by measuring the total response time for your API requests.
Create too much content on GitHub in a short amount of time. In general, no more than 80 content-generating requests per minute and no more than 500 content-generating requests per hour are allowed. Some endpoints have lower content creation limits. Content creation limits include actions taken on the GitHub web interface as well as via the REST API and GraphQL API.
```

So the limit is no longer 1, it is roughly 100. Well, that depends on what APIs you’re calling, but whatever. Strangely, the best practices section still advises a single concurrent request; I followed up with a support ticket with GitHub to clarify.

### Outcome

They said 100 is the limit but for certain requests the number can be lower. This doesn't convince us (team source) that it's worth keeping it.

Besides, they also document that they return a Retry-After header in this event and we already handle that with retries (if the retry is not in the too distant future). So.. I want to say that this is “no different than any other API” at this point. Sure, there are some limits that they enforce, but that’s true for all the APIs. The 1-concurrency only one was quite gnarly which totally justified the GitHub-Proxy and now the redis-based replacement IMO, but I don’t think with the recent changes here it does warrant a github.com-only special casing (pending talking to GitHub about that docs weirdness), and instead of investing into moving the concurrency lock into the transport layer, I think we should be fine dropping it altogether.
2023-11-15 14:20:06 +01:00
Erik Seliger
80df730701
proposal: Add scopes to gitserver clients (#57321)
This PR proposes a new pattern for instantiating gitserver clients.
When we instantiate a new gitserver client, we should pass in a scope, a description of the environment it's used in.
When a client is passed down to an environment, we can augment the client with an additional scope.

What is this for?

Looking at Grafana charts for dotcom, we see that we make about 2000 requests per second to gitserver. We know what endpoints we're hitting, and what _container_ is making the request.
In Sourcegraph, containers are not a great boundary for services though. Some components stretch across multiple containers, and one container runs many different components, for example our worker container.
While there are probably at least 50 routines owned by various different teams in that container, our current metrics only tell us that worker is making a large amount of requests.
But we don't know who to talk to about it, because, again, worker is basically every team.

With scopes, we get more fine-grained insights and can group the metric by (container, op, scope), to get full insight into what _component_ (not _container_) is talking to gitserver.
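The proposed pattern can be sketched like this — a client carries a scope string that sub-components extend as the client is passed down, so metrics can be grouped by (container, op, scope). All names here are hypothetical illustrations of the proposal, not the real client API:

```go
package main

import "fmt"

// scopedClient is a gitserver client that knows which component owns it.
type scopedClient struct {
	scope string
}

// NewClient instantiates a client with a base scope describing its environment.
func NewClient(scope string) *scopedClient { return &scopedClient{scope: scope} }

// Scoped returns a copy of the client with an additional scope segment,
// for handing down to a sub-component.
func (c *scopedClient) Scoped(sub string) *scopedClient {
	return &scopedClient{scope: c.scope + "." + sub}
}

// Exec stands in for a gitserver operation; a real client would record
// a metric labeled with (op, c.scope) here.
func (c *scopedClient) Exec(op string) {
	fmt.Printf("op=%s scope=%s\n", op, c.scope)
}

func main() {
	worker := NewClient("worker")
	permsSyncer := worker.Scoped("permssyncer")
	permsSyncer.Exec("rev-parse")
}
```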
2023-10-27 21:47:47 +02:00
Robert Lin
9009bb3d04
monitoring/telemetry: un-hide panels, improve docstrings (#57740) 2023-10-19 12:52:21 -07:00
Geoffrey Gilmore
9d34a48425
conf: add metric and associated alert if clients fail to update site configuration within 5 minutes (#57682) 2023-10-18 23:53:55 +00:00
Erik Seliger
58fe87f6b5
enterprise: Move last directory out (#57392)
This is the end of the PR train to remove the enterprise directory from our repo, since we have consolidated to a single license.

Bye rough code split :)
2023-10-05 20:15:40 +00:00
Erik Seliger
a2fbaf830f
Remove dead code from org invites (#57279)
Whether we remove email org invitations or not, this code is currently not used and increases our ownership surface area, thus I'm removing it here. We can always revert, should we need it again.
2023-10-04 15:50:44 +02:00
Robert Lin
255f7eda39
monitoring: add queue growth panel and alert (#57222) 2023-10-02 10:59:41 -04:00
Robert Lin
96f2d595e0
monitoring: add telemetrygatewayexporter panels, improve metrics (#57171)
Part of https://github.com/sourcegraph/sourcegraph/issues/56970 - this adds some dashboards for the export side of things, as well as improves the existing metrics. Only includes warnings.

## Test plan

Had to test locally only because I ended up changing the metrics a bit, but validated that the queue size metric works in S2.

Testing locally:

```yaml
# sg.config.overwrite.yaml
env:
  TELEMETRY_GATEWAY_EXPORTER_EXPORT_INTERVAL: "30s"
  TELEMETRY_GATEWAY_EXPORTER_EXPORTED_EVENTS_RETENTION: "5m"
  TELEMETRY_GATEWAY_EXPORTER_QUEUE_CLEANUP_INTERVAL: "10m"
```

```
sg start
sg start monitoring
```

Do lots of searches to generate events. Note that the `telemetry-export` feature flag must be enabled.

Data is not realistic because of the super high interval I configured for testing, but it shows that things work:

![image](https://github.com/sourcegraph/sourcegraph/assets/23356519/c44cd60e-514e-4b62-a6b6-890582d8059c)
2023-09-29 17:10:07 +00:00
Robert Lin
660996100c
monitoring: make email_delivery_failures on percentage, not count (#57045)
This changes the threshold to critical-alert on 10% failure rate, and warn on any non-zero failure rate (as all email delivery failures impact user experience).
2023-09-26 09:04:29 -07:00
Geoffrey Gilmore
09d5386604
grpc: add dashboard for site-configuration service to frontend page (#56799) 2023-09-19 22:40:20 +00:00
Geoffrey Gilmore
c1f887415b
update zoekt to include updated grpc prometheus 2.0.0 version (#56735) 2023-09-18 19:51:59 +00:00
Erik Seliger
711ee1a495
Remove GitHub proxy service (#56485)
This service is being replaced by a redsync.Mutex that lives directly in the GitHub client.
With this change we:
- Simplify deployments by removing one service
- Centralize GitHub access control in the client instead of splitting it across services
- Remove the dependency on a non-HA service to talk to GitHub.com successfully

Other repos referencing this service will be updated once this has shipped to dotcom and proven to work over the course of a couple days.
2023-09-14 19:43:40 +02:00
Geoffrey Gilmore
b32587f4f0
zoekt: update to 3ce1f2b24c80b3dea89d237600d2b0879342ed1b (#56525) 2023-09-12 16:10:49 +00:00