mirror of
https://github.com/sourcegraph/sourcegraph.git
synced 2026-02-06 15:31:48 +00:00
This change makes Cody Gateway always apply a wildcard model allowlist, irrespective of what the configured model allowlist is for an Enterprise subscription is in dotcom (see #62909). The next PR in the stack, https://github.com/sourcegraph/sourcegraph/pull/62912, makes the GraphQL queries return similar results, and removes model allowlists from the subscription management UI. Closes https://linear.app/sourcegraph/issue/CORE-135 ### Context In https://sourcegraph.slack.com/archives/C05SZB829D0/p1715638980052279 we shared a decision we landed on as part of #62263: > Ignoring (then removing) per-subscription model allowlists: As part of the API discussions, we've also surfaced some opportunities for improvements - to make it easier to roll out new models to Enterprise, we're not including per-subscription model allowlists in the new API, and as part of the Cody Gateway migration (by end-of-June), we will update Cody Gateway to stop enforcing per-subscription model allowlists. Cody Gateway will still retain a Cody-Gateway-wide model allowlist. [@chrsmith](https://sourcegraph.slack.com/team/U061QHKUBJ8) is working on a broader design here and will have more to share on this later. This means there is one less thing for us to migrate as part of https://github.com/sourcegraph/sourcegraph/pull/62934, and avoids the need to add an API field that will be removed shortly post-migration. As part of this, rolling out new models to Enterprise customers no longer require additional code/override changes. ## Test plan Set up Cody Gateway locally as documented, then `sg start dotcom`. Set up an enterprise subscription + license with a high seat count (for a high quota), and force a Cody Gateway sync: ``` curl -v -H 'Authorization: bearer sekret' http://localhost:9992/-/actor/sync-all-sources ``` Verify we are using wildcard allowlist: ```sh $ redis-cli -p 6379 get 'v2:product-subscriptions:v2:slk_...' "{\"key\":\"slk_...\",\"id\":\"6ad033f4-c6da-43a9-95ef-f653bf59aaac\",\"name\":\"bobheadxi\",\"accessEnabled\":true,\"endpointAccess\":{\"/v1/attribution\":true},\"rateLimits\":{\"chat_completions\":{\"allowedModels\":[\"*\"],\"limit\":660,\"interval\":86400000000000,\"concurrentRequests\":330,\"concurrentRequestsInterval\":10000000000},\"code_completions\":{\"allowedModels\":[\"*\"],\"limit\":66000,\"interval\":86400000000000,\"concurrentRequests\":33000,\"concurrentRequestsInterval\":10000000000},\"embeddings\":{\"allowedModels\":[\"*\"],\"limit\":220000000,\"interval\":86400000000000,\"concurrentRequests\":110000000,\"concurrentRequestsInterval\":10000000000}},\"lastUpdated\":\"2024-05-24T20:28:58.283296Z\"}" ``` Using the local enterprise subscription's access token, we run the QA test suite: ```sh $ bazel test --runs_per_test=2 --test_output=all //cmd/cody-gateway/qa:qa_test --test_env=E2E_GATEWAY_ENDPOINT=http://localhost:9992 --test_env=E2E_GATEWAY_TOKEN=$TOKEN INFO: Analyzed target //cmd/cody-gateway/qa:qa_test (0 packages loaded, 0 targets configured). INFO: From Testing //cmd/cody-gateway/qa:qa_test (run 1 of 2): ==================== Test output for //cmd/cody-gateway/qa:qa_test (run 1 of 2): PASS ================================================================================ INFO: From Testing //cmd/cody-gateway/qa:qa_test (run 2 of 2): ==================== Test output for //cmd/cody-gateway/qa:qa_test (run 2 of 2): PASS ================================================================================ INFO: Found 1 test target... 
Target //cmd/cody-gateway/qa:qa_test up-to-date: bazel-bin/cmd/cody-gateway/qa/qa_test_/qa_test Aspect @@rules_rust//rust/private:clippy.bzl%rust_clippy_aspect of //cmd/cody-gateway/qa:qa_test up-to-date (nothing to build) Aspect @@rules_rust//rust/private:rustfmt.bzl%rustfmt_aspect of //cmd/cody-gateway/qa:qa_test up-to-date (nothing to build) INFO: Elapsed time: 13.653s, Critical Path: 13.38s INFO: 7 processes: 1 internal, 6 darwin-sandbox. INFO: Build completed successfully, 7 total actions //cmd/cody-gateway/qa:qa_test PASSED in 11.7s Stats over 2 runs: max = 11.7s, min = 11.7s, avg = 11.7s, dev = 0.0s Executed 1 out of 1 test: 1 test passes. ```
1928 lines
61 KiB
YAML
1928 lines
61 KiB
YAML
# Documentation for how to override sg configuration for local development:
|
|
# https://github.com/sourcegraph/sourcegraph/blob/main/doc/dev/background-information/sg/index.md#configuration
|
|
env:
|
|
PGPORT: 5432
|
|
PGHOST: localhost
|
|
PGUSER: sourcegraph
|
|
PGPASSWORD: sourcegraph
|
|
PGDATABASE: sourcegraph
|
|
PGSSLMODE: disable
|
|
SG_DEV_MIGRATE_ON_APPLICATION_STARTUP: 'true'
|
|
INSECURE_DEV: true
|
|
|
|
SRC_REPOS_DIR: $HOME/.sourcegraph/repos
|
|
SRC_LOG_LEVEL: info
|
|
SRC_LOG_FORMAT: condensed
|
|
SRC_TRACE_LOG: false
|
|
# Set this to true to show an iTerm link to the file:line where the log message came from
|
|
SRC_LOG_SOURCE_LINK: false
|
|
|
|
# Use two gitserver instances in local dev
|
|
SRC_GIT_SERVER_1: 127.0.0.1:3501
|
|
SRC_GIT_SERVER_2: 127.0.0.1:3502
|
|
SRC_GIT_SERVERS: 127.0.0.1:3501 127.0.0.1:3502
|
|
|
|
# Enable sharded indexed search mode:
|
|
INDEXED_SEARCH_SERVERS: localhost:3070 localhost:3071
|
|
|
|
GO111MODULE: 'on'
|
|
|
|
DEPLOY_TYPE: dev
|
|
|
|
SRC_HTTP_ADDR: ':3082'
|
|
|
|
# I don't think we even need to set these?
|
|
SEARCHER_URL: http://127.0.0.1:3181
|
|
REPO_UPDATER_URL: http://127.0.0.1:3182
|
|
REDIS_ENDPOINT: 127.0.0.1:6379
|
|
SYMBOLS_URL: http://localhost:3184
|
|
EMBEDDINGS_URL: http://localhost:9991
|
|
SRC_SYNTECT_SERVER: http://localhost:9238
|
|
SRC_FRONTEND_INTERNAL: localhost:3090
|
|
GRAFANA_SERVER_URL: http://localhost:3370
|
|
PROMETHEUS_URL: http://localhost:9090
|
|
JAEGER_SERVER_URL: http://localhost:16686
|
|
|
|
SRC_DEVELOPMENT: 'true'
|
|
SRC_PROF_HTTP: ''
|
|
SRC_PROF_SERVICES: |
|
|
[
|
|
{ "Name": "frontend", "Host": "127.0.0.1:6063" },
|
|
{ "Name": "gitserver-0", "Host": "127.0.0.1:3551" },
|
|
{ "Name": "gitserver-1", "Host": "127.0.0.1:3552" },
|
|
{ "Name": "searcher", "Host": "127.0.0.1:6069" },
|
|
{ "Name": "symbols", "Host": "127.0.0.1:6071" },
|
|
{ "Name": "repo-updater", "Host": "127.0.0.1:6074" },
|
|
{ "Name": "codeintel-worker", "Host": "127.0.0.1:6088" },
|
|
{ "Name": "worker", "Host": "127.0.0.1:6089" },
|
|
{ "Name": "worker-executors", "Host": "127.0.0.1:6996" },
|
|
{ "Name": "embeddings", "Host": "127.0.0.1:6099" },
|
|
{ "Name": "zoekt-index-0", "Host": "127.0.0.1:6072" },
|
|
{ "Name": "zoekt-index-1", "Host": "127.0.0.1:6073" },
|
|
{ "Name": "syntactic-code-intel-worker-0", "Host": "127.0.0.1:6075" },
|
|
{ "Name": "syntactic-code-intel-worker-1", "Host": "127.0.0.1:6076" },
|
|
{ "Name": "zoekt-web-0", "Host": "127.0.0.1:3070", "DefaultPath": "/debug/requests/" },
|
|
{ "Name": "zoekt-web-1", "Host": "127.0.0.1:3071", "DefaultPath": "/debug/requests/" }
|
|
]
|
|
# Settings/config
|
|
SITE_CONFIG_FILE: ./dev/site-config.json
|
|
SITE_CONFIG_ALLOW_EDITS: true
|
|
GLOBAL_SETTINGS_FILE: ./dev/global-settings.json
|
|
GLOBAL_SETTINGS_ALLOW_EDITS: true
|
|
|
|
# Point codeintel to the `frontend` database in development
|
|
CODEINTEL_PGPORT: $PGPORT
|
|
CODEINTEL_PGHOST: $PGHOST
|
|
CODEINTEL_PGUSER: $PGUSER
|
|
CODEINTEL_PGPASSWORD: $PGPASSWORD
|
|
CODEINTEL_PGDATABASE: $PGDATABASE
|
|
CODEINTEL_PGSSLMODE: $PGSSLMODE
|
|
CODEINTEL_PGDATASOURCE: $PGDATASOURCE
|
|
CODEINTEL_PG_ALLOW_SINGLE_DB: true
|
|
|
|
# Required for `frontend` and `web` commands
|
|
SOURCEGRAPH_HTTPS_DOMAIN: sourcegraph.test
|
|
SOURCEGRAPH_HTTPS_PORT: 3443
|
|
|
|
# Required for `web` commands
|
|
NODE_OPTIONS: '--max_old_space_size=8192'
|
|
# Default `NODE_ENV` to `development`
|
|
NODE_ENV: development
|
|
|
|
# Required for codeintel uploadstore
|
|
PRECISE_CODE_INTEL_UPLOAD_AWS_ENDPOINT: http://localhost:9000
|
|
PRECISE_CODE_INTEL_UPLOAD_BACKEND: blobstore
|
|
|
|
# Required for embeddings job upload
|
|
EMBEDDINGS_UPLOAD_AWS_ENDPOINT: http://localhost:9000
|
|
|
|
# Required for upload of search job results
|
|
SEARCH_JOBS_UPLOAD_AWS_ENDPOINT: http://localhost:9000
|
|
|
|
# Point code insights to the `frontend` database in development
|
|
CODEINSIGHTS_PGPORT: $PGPORT
|
|
CODEINSIGHTS_PGHOST: $PGHOST
|
|
CODEINSIGHTS_PGUSER: $PGUSER
|
|
CODEINSIGHTS_PGPASSWORD: $PGPASSWORD
|
|
CODEINSIGHTS_PGDATABASE: $PGDATABASE
|
|
CODEINSIGHTS_PGSSLMODE: $PGSSLMODE
|
|
CODEINSIGHTS_PGDATASOURCE: $PGDATASOURCE
|
|
|
|
# Disable code insights by default
|
|
DB_STARTUP_TIMEOUT: 120s # codeinsights-db needs more time to start in some instances.
|
|
DISABLE_CODE_INSIGHTS_HISTORICAL: true
|
|
DISABLE_CODE_INSIGHTS: true
|
|
|
|
# # OpenTelemetry in dev - use single http/json endpoint
|
|
# OTEL_EXPORTER_OTLP_ENDPOINT: http://127.0.0.1:4318
|
|
# OTEL_EXPORTER_OTLP_PROTOCOL: http/json
|
|
|
|
# Enable gRPC Web UI for debugging
|
|
GRPC_WEB_UI_ENABLED: 'true'
|
|
|
|
# Enable full protobuf message logging when an internal error occurred
|
|
SRC_GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_ENABLED: 'true'
|
|
SRC_GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_JSON_TRUNCATION_SIZE_BYTES: '1KB'
|
|
SRC_GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_HANDLING_MAX_MESSAGE_SIZE_BYTES: '100MB'
|
|
## zoekt-specific message logging
|
|
GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_ENABLED: 'true'
|
|
GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_JSON_TRUNCATION_SIZE_BYTES: '1KB'
|
|
GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_HANDLING_MAX_MESSAGE_SIZE_BYTES: '100MB'
|
|
|
|
# Telemetry V2 export configuration. By default, this points to a test
|
|
# instance (go/msp-ops/telemetry-gateway#dev). Set the following:
|
|
#
|
|
# TELEMETRY_GATEWAY_EXPORTER_EXPORT_ADDR: 'http://127.0.0.1:6080'
|
|
#
|
|
# in 'sg.config.overwrite.yaml' to point to a locally running Telemetry
|
|
# Gateway instead (via 'sg run telemetry-gateway')
|
|
TELEMETRY_GATEWAY_EXPORTER_EXPORT_ADDR: "https://telemetry-gateway.sgdev.org:443"
|
|
SRC_TELEMETRY_EVENTS_EXPORT_ALL: 'true'
|
|
|
|
# By default, allow temporary edits to external services.
|
|
EXTSVC_CONFIG_ALLOW_EDITS: true
|
|
|
|
commands:
|
|
server:
|
|
description: Run an all-in-one sourcegraph/server image
|
|
cmd: ./dev/run-server-image.sh
|
|
env:
|
|
TAG: insiders
|
|
CLEAN: 'true'
|
|
DATA: '/tmp/sourcegraph-data'
|
|
URL: 'http://localhost:7080'
|
|
|
|
frontend:
|
|
description: Frontend
|
|
cmd: |
|
|
# TODO: This should be fixed
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
# If EXTSVC_CONFIG_FILE is *unset*, set a default.
|
|
export EXTSVC_CONFIG_FILE=${EXTSVC_CONFIG_FILE-'../dev-private/enterprise/dev/external-services-config.json'}
|
|
|
|
.bin/frontend
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/frontend github.com/sourcegraph/sourcegraph/cmd/frontend
|
|
checkBinary: .bin/frontend
|
|
env:
|
|
CONFIGURATION_MODE: server
|
|
USE_ENHANCED_LANGUAGE_DETECTION: false
|
|
SITE_CONFIG_FILE: '../dev-private/enterprise/dev/site-config.json'
|
|
SITE_CONFIG_ESCAPE_HATCH_PATH: '$HOME/.sourcegraph/site-config.json'
|
|
# frontend processes need this to be so that the paths to the assets are rendered correctly
|
|
WEB_BUILDER_DEV_SERVER: 1
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/frontend
|
|
|
|
gitserver-template: &gitserver_template
|
|
cmd: .bin/gitserver
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/gitserver github.com/sourcegraph/sourcegraph/cmd/gitserver
|
|
checkBinary: .bin/gitserver
|
|
env:
|
|
HOSTNAME: 127.0.0.1:3178
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/gitserver
|
|
|
|
# This is only here to stay backwards-compatible with people's custom
|
|
# `sg.config.overwrite.yaml` files
|
|
gitserver:
|
|
<<: *gitserver_template
|
|
|
|
gitserver-0:
|
|
<<: *gitserver_template
|
|
env:
|
|
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3501
|
|
GITSERVER_ADDR: 127.0.0.1:3501
|
|
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_1
|
|
SRC_PROF_HTTP: 127.0.0.1:3551
|
|
|
|
gitserver-1:
|
|
<<: *gitserver_template
|
|
env:
|
|
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3502
|
|
GITSERVER_ADDR: 127.0.0.1:3502
|
|
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_2
|
|
SRC_PROF_HTTP: 127.0.0.1:3552
|
|
|
|
repo-updater:
|
|
cmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
.bin/repo-updater
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/repo-updater github.com/sourcegraph/sourcegraph/cmd/repo-updater
|
|
checkBinary: .bin/repo-updater
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/repo-updater
|
|
|
|
symbols:
|
|
cmd: .bin/symbols
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
|
|
# Ensure scip-ctags-dev is installed to avoid prompting the user to
|
|
# install it manually.
|
|
if [ ! -f $(./dev/scip-ctags-install.sh which) ]; then
|
|
./dev/scip-ctags-install.sh
|
|
fi
|
|
|
|
go build -gcflags="$GCFLAGS" -o .bin/symbols github.com/sourcegraph/sourcegraph/cmd/symbols
|
|
checkBinary: .bin/symbols
|
|
env:
|
|
CTAGS_COMMAND: dev/universal-ctags-dev
|
|
SCIP_CTAGS_COMMAND: dev/scip-ctags-dev
|
|
CTAGS_PROCESSES: 2
|
|
USE_ROCKSKIP: 'false'
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/symbols
|
|
- internal/rockskip
|
|
|
|
embeddings:
|
|
cmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
.bin/embeddings
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
|
|
go build -gcflags="$GCFLAGS" -o .bin/embeddings github.com/sourcegraph/sourcegraph/cmd/embeddings
|
|
checkBinary: .bin/embeddings
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/embeddings
|
|
- internal/embeddings
|
|
|
|
worker:
|
|
cmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
.bin/worker
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/worker github.com/sourcegraph/sourcegraph/cmd/worker
|
|
checkBinary: .bin/worker
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/worker
|
|
|
|
cody-gateway:
|
|
cmd: |
|
|
.bin/cody-gateway
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
|
|
go build -gcflags="$GCFLAGS" -o .bin/cody-gateway github.com/sourcegraph/sourcegraph/cmd/cody-gateway
|
|
checkBinary: .bin/cody-gateway
|
|
env:
|
|
SRC_LOG_LEVEL: info
|
|
# Enables metrics in dev via debugserver
|
|
SRC_PROF_HTTP: '127.0.0.1:6098'
|
|
# Set in 'sg.config.overwrite.yaml' if you want to test local Cody Gateway:
|
|
# https://docs-legacy.sourcegraph.com/dev/how-to/cody_gateway
|
|
CODY_GATEWAY_DOTCOM_ACCESS_TOKEN: ''
|
|
CODY_GATEWAY_DOTCOM_API_URL: https://sourcegraph.test:3443/.api/graphql
|
|
CODY_GATEWAY_ALLOW_ANONYMOUS: true
|
|
CODY_GATEWAY_DIAGNOSTICS_SECRET: sekret
|
|
# Set in 'sg.config.overwrite.yaml' if you want to test upstream
|
|
# integrations from local Cody Gateway:
|
|
# Entitle: https://app.entitle.io/request?data=eyJkdXJhdGlvbiI6IjIxNjAwIiwianVzdGlmaWNhdGlvbiI6IldSSVRFIEpVU1RJRklDQVRJT04gSEVSRSIsInJvbGVJZHMiOlt7ImlkIjoiYjhmYTk2NzgtNDExZC00ZmU1LWE2NDYtMzY4Y2YzYzUwYjJlIiwidGhyb3VnaCI6ImI4ZmE5Njc4LTQxMWQtNGZlNS1hNjQ2LTM2OGNmM2M1MGIyZSIsInR5cGUiOiJyb2xlIn1dfQ%3D%3D
|
|
# GSM: https://console.cloud.google.com/security/secret-manager?project=cody-gateway-dev
|
|
CODY_GATEWAY_ANTHROPIC_ACCESS_TOKEN: sekret
|
|
CODY_GATEWAY_OPENAI_ACCESS_TOKEN: sekret
|
|
CODY_GATEWAY_FIREWORKS_ACCESS_TOKEN: sekret
|
|
CODY_GATEWAY_SOURCEGRAPH_EMBEDDINGS_API_TOKEN: sekret
|
|
CODY_GATEWAY_GOOGLE_ACCESS_TOKEN: sekret
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/cody-gateway
|
|
|
|
telemetry-gateway:
|
|
cmd: |
|
|
# Telemetry Gateway needs this to parse and validate incoming license keys.
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
.bin/telemetry-gateway
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/telemetry-gateway github.com/sourcegraph/sourcegraph/cmd/telemetry-gateway
|
|
checkBinary: .bin/telemetry-gateway
|
|
env:
|
|
PORT: '6080'
|
|
DIAGNOSTICS_SECRET: sekret
|
|
TELEMETRY_GATEWAY_EVENTS_PUBSUB_ENABLED: false
|
|
SRC_LOG_LEVEL: info
|
|
GRPC_WEB_UI_ENABLED: true
|
|
# Set for convenience - use real values in sg.config.overwrite.yaml if you
|
|
# are interacting with RPCs that enforce SAMS M2M auth. See
|
|
# https://github.com/sourcegraph/accounts.sourcegraph.com/wiki/Operators-Cheat-Sheet#create-a-new-idp-client
|
|
TELEMETRY_GATEWAY_SAMS_CLIENT_ID: 'foo'
|
|
TELEMETRY_GATEWAY_SAMS_CLIENT_SECRET: 'bar'
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/telemetry-gateway
|
|
- internal/telemetrygateway
|
|
|
|
pings:
|
|
cmd: |
|
|
.bin/pings
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
|
|
go build -gcflags="$GCFLAGS" -o .bin/pings github.com/sourcegraph/sourcegraph/cmd/pings
|
|
checkBinary: .bin/pings
|
|
env:
|
|
PORT: '6080'
|
|
SRC_LOG_LEVEL: info
|
|
DIAGNOSTICS_SECRET: 'lifeisgood'
|
|
PINGS_PUBSUB_PROJECT_ID: 'telligentsourcegraph'
|
|
PINGS_PUBSUB_TOPIC_ID: 'server-update-checks-test'
|
|
HUBSPOT_ACCESS_TOKEN: ''
|
|
# Enables metrics in dev via debugserver
|
|
SRC_PROF_HTTP: '127.0.0.1:7011'
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/pings
|
|
|
|
msp-example:
|
|
cmd: .bin/msp-example
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/msp-example github.com/sourcegraph/sourcegraph/cmd/msp-example
|
|
checkBinary: .bin/msp-example
|
|
env:
|
|
PORT: '9080'
|
|
DIAGNOSTICS_SECRET: sekret
|
|
SRC_LOG_LEVEL: debug
|
|
STATELESS_MODE: 'true'
|
|
watch:
|
|
- cmd/msp-example
|
|
- lib/managedservicesplatform
|
|
|
|
enterprise-portal:
|
|
cmd: |
|
|
# Connect to local development database, with the assumption that it will
|
|
# have dotcom database tables.
|
|
export DOTCOM_PGDSN_OVERRIDE="postgres://$PGUSER:$PGPASSWORD@$PGHOST:$PGPORT/$PGDATABASE?sslmode=$PGSSLMODE"
|
|
.bin/enterprise-portal
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/enterprise-portal github.com/sourcegraph/sourcegraph/cmd/enterprise-portal
|
|
checkBinary: .bin/enterprise-portal
|
|
env:
|
|
PORT: '6081'
|
|
DIAGNOSTICS_SECRET: sekret
|
|
SRC_LOG_LEVEL: debug
|
|
GRPC_WEB_UI_ENABLED: 'true'
|
|
# Connects to local database, so include all licenses from local DB
|
|
DOTCOM_INCLUDE_PRODUCTION_LICENSES: 'true'
|
|
# Used for authentication
|
|
SAMS_URL: https://accounts.sgdev.org
|
|
externalSecrets:
|
|
ENTERPRISE_PORTAL_SAMS_CLIENT_ID:
|
|
project: sourcegraph-local-dev
|
|
name: SG_LOCAL_DEV_SAMS_CLIENT_ID
|
|
ENTERPRISE_PORTAL_SAMS_CLIENT_SECRET:
|
|
project: sourcegraph-local-dev
|
|
name: SG_LOCAL_DEV_SAMS_CLIENT_SECRET
|
|
|
|
watch:
|
|
- lib
|
|
- cmd/enterprise-portal
|
|
|
|
searcher:
|
|
cmd: .bin/searcher
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/searcher github.com/sourcegraph/sourcegraph/cmd/searcher
|
|
checkBinary: .bin/searcher
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/searcher
|
|
|
|
caddy:
|
|
ignoreStdout: true
|
|
ignoreStderr: true
|
|
cmd: .bin/caddy_${CADDY_VERSION} run --watch --config=dev/Caddyfile
|
|
install_func: installCaddy
|
|
env:
|
|
CADDY_VERSION: 2.7.3
|
|
|
|
web:
|
|
description: Enterprise version of the web app
|
|
cmd: pnpm --filter @sourcegraph/web dev
|
|
install: |
|
|
pnpm install
|
|
pnpm run generate
|
|
env:
|
|
ENABLE_OPEN_TELEMETRY: true
|
|
# Needed so that node can ping the caddy server
|
|
NODE_TLS_REJECT_UNAUTHORIZED: 0
|
|
|
|
web-sveltekit:
|
|
description: Enterprise version of the web sveltekit app
|
|
cmd: pnpm --filter @sourcegraph/web-sveltekit dev:enterprise
|
|
install: |
|
|
pnpm install
|
|
|
|
web-standalone-http:
|
|
description: Standalone web frontend (dev) with API proxy to a configurable URL
|
|
cmd: pnpm --filter @sourcegraph/web serve:dev --color
|
|
install: |
|
|
pnpm install
|
|
pnpm run generate
|
|
env:
|
|
WEB_BUILDER_SERVE_INDEX: true
|
|
SOURCEGRAPH_API_URL: https://sourcegraph.sourcegraph.com
|
|
|
|
web-standalone-http-prod:
|
|
description: Standalone web frontend (production) with API proxy to a configurable URL
|
|
cmd: pnpm --filter @sourcegraph/web serve:prod
|
|
install: pnpm --filter @sourcegraph/web run build
|
|
env:
|
|
NODE_ENV: production
|
|
WEB_BUILDER_SERVE_INDEX: true
|
|
SOURCEGRAPH_API_URL: https://k8s.sgdev.org
|
|
|
|
web-integration-build:
|
|
description: Build development web application for integration tests
|
|
cmd: pnpm --filter @sourcegraph/web run build
|
|
env:
|
|
INTEGRATION_TESTS: true
|
|
|
|
web-integration-build-prod:
|
|
description: Build production web application for integration tests
|
|
cmd: pnpm --filter @sourcegraph/web run build
|
|
env:
|
|
INTEGRATION_TESTS: true
|
|
NODE_ENV: production
|
|
|
|
web-sveltekit-standalone:
|
|
description: Standalone SvelteKit web frontend (dev) with API proxy to a configurable URL
|
|
cmd: pnpm --filter @sourcegraph/web-sveltekit run dev
|
|
install: |
|
|
pnpm install
|
|
pnpm generate
|
|
|
|
web-sveltekit-prod-watch:
|
|
description: Builds the prod version of the SvelteKit web app and rebuilds on changes
|
|
cmd: pnpm --filter @sourcegraph/web-sveltekit run build --watch
|
|
install: |
|
|
pnpm install
|
|
pnpm generate
|
|
|
|
docsite:
|
|
description: Docsite instance serving the docs
|
|
env:
|
|
RUN_SCRIPT_NAME: .bin/bazel_run_docsite.sh
|
|
cmd: |
|
|
# We tell bazel to write out a script to run docsite and run that script via sg otherwise
|
|
# when we get a SIGINT ... bazel gets killed but docsite doesn't get killed properly. So we use --script_path
|
|
# which tells bazel to write out a script to run docsite, and let sg run that script rather, which means
|
|
# any signal gets propagated and docsite gets properly terminated.
|
|
#
|
|
# We also specifically put this in .bin, since that directory is gitignored, otherwise the run script is left
|
|
# around and currently there is no clean way to remove it - even using a bash trap doesn't work, since the trap
|
|
# never gets executed due to sg running the script.
|
|
bazel run --script_path=${RUN_SCRIPT_NAME} --noshow_progress --noshow_loading_progress //doc:serve
|
|
|
|
./${RUN_SCRIPT_NAME}
|
|
|
|
syntax-highlighter:
|
|
ignoreStdout: true
|
|
ignoreStderr: true
|
|
cmd: |
|
|
docker run --name=syntax-highlighter --rm -p9238:9238 \
|
|
-e WORKERS=1 -e ROCKET_ADDRESS=0.0.0.0 \
|
|
sourcegraph/syntax-highlighter:insiders
|
|
install: |
|
|
# Remove containers by the old name, too.
|
|
docker inspect syntect_server >/dev/null 2>&1 && docker rm -f syntect_server || true
|
|
docker inspect syntax-highlighter >/dev/null 2>&1 && docker rm -f syntax-highlighter || true
|
|
# Pull syntax-highlighter latest insider image, only during install, but
|
|
# skip if OFFLINE=true is set.
|
|
if [[ "$OFFLINE" != "true" ]]; then
|
|
docker pull -q sourcegraph/syntax-highlighter:insiders
|
|
fi
|
|
|
|
zoekt-indexserver-template: &zoekt_indexserver_template
|
|
cmd: |
|
|
env PATH="${PWD}/.bin:$PATH" .bin/zoekt-sourcegraph-indexserver \
|
|
-sourcegraph_url 'http://localhost:3090' \
|
|
-index "$HOME/.sourcegraph/zoekt/index-$ZOEKT_NUM" \
|
|
-hostname "localhost:$ZOEKT_HOSTNAME_PORT" \
|
|
-interval 1m \
|
|
-listen "127.0.0.1:$ZOEKT_LISTEN_PORT" \
|
|
-cpu_fraction 0.25
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
mkdir -p .bin
|
|
export GOBIN="${PWD}/.bin"
|
|
go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-archive-index
|
|
go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-git-index
|
|
go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-sourcegraph-indexserver
|
|
checkBinary: .bin/zoekt-sourcegraph-indexserver
|
|
env: &zoektenv
|
|
CTAGS_COMMAND: dev/universal-ctags-dev
|
|
SCIP_CTAGS_COMMAND: dev/scip-ctags-dev
|
|
GRPC_ENABLED: true
|
|
|
|
zoekt-index-0:
|
|
<<: *zoekt_indexserver_template
|
|
env:
|
|
<<: *zoektenv
|
|
ZOEKT_NUM: 0
|
|
ZOEKT_HOSTNAME_PORT: 3070
|
|
ZOEKT_LISTEN_PORT: 6072
|
|
|
|
zoekt-index-1:
|
|
<<: *zoekt_indexserver_template
|
|
env:
|
|
<<: *zoektenv
|
|
ZOEKT_NUM: 1
|
|
ZOEKT_HOSTNAME_PORT: 3071
|
|
ZOEKT_LISTEN_PORT: 6073
|
|
|
|
zoekt-web-template: &zoekt_webserver_template
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
mkdir -p .bin
|
|
env GOBIN="${PWD}/.bin" go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-webserver
|
|
checkBinary: .bin/zoekt-webserver
|
|
env:
|
|
JAEGER_DISABLED: true
|
|
OPENTELEMETRY_DISABLED: false
|
|
GOGC: 25
|
|
|
|
zoekt-web-0:
|
|
<<: *zoekt_webserver_template
|
|
cmd: env PATH="${PWD}/.bin:$PATH" .bin/zoekt-webserver -index "$HOME/.sourcegraph/zoekt/index-0" -pprof -rpc -indexserver_proxy -listen "127.0.0.1:3070"
|
|
|
|
zoekt-web-1:
|
|
<<: *zoekt_webserver_template
|
|
cmd: env PATH="${PWD}/.bin:$PATH" .bin/zoekt-webserver -index "$HOME/.sourcegraph/zoekt/index-1" -pprof -rpc -indexserver_proxy -listen "127.0.0.1:3071"
|
|
|
|
codeintel-worker:
|
|
cmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
.bin/codeintel-worker
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/codeintel-worker github.com/sourcegraph/sourcegraph/cmd/precise-code-intel-worker
|
|
checkBinary: .bin/codeintel-worker
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/precise-code-intel-worker
|
|
- lib/codeintel
|
|
|
|
syntactic-codeintel-worker-template: &syntactic_codeintel_worker_template
|
|
cmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
.bin/syntactic-code-intel-worker
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
|
|
if [ ! -f $(./dev/scip-syntax-install.sh which) ]; then
|
|
echo "Building scip-syntax"
|
|
./dev/scip-syntax-install.sh
|
|
fi
|
|
|
|
echo "Building codeintel-outkline-scip-worker"
|
|
go build -gcflags="$GCFLAGS" -o .bin/syntactic-code-intel-worker github.com/sourcegraph/sourcegraph/cmd/syntactic-code-intel-worker
|
|
checkBinary: .bin/syntactic-code-intel-worker
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/syntactic-code-intel-worker
|
|
- lib/codeintel
|
|
env:
|
|
SCIP_SYNTAX_PATH: dev/scip-syntax-dev
|
|
|
|
syntactic-code-intel-worker-0:
|
|
<<: *syntactic_codeintel_worker_template
|
|
env:
|
|
SYNTACTIC_CODE_INTEL_WORKER_ADDR: 127.0.0.1:6075
|
|
|
|
syntactic-code-intel-worker-1:
|
|
<<: *syntactic_codeintel_worker_template
|
|
cmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
.bin/syntactic-code-intel-worker
|
|
env:
|
|
SYNTACTIC_CODE_INTEL_WORKER_ADDR: 127.0.0.1:6076
|
|
|
|
executor-template:
|
|
&executor_template # TMPDIR is set here so it's not set in the `install` process, which would trip up `go build`.
|
|
cmd: |
|
|
env TMPDIR="$HOME/.sourcegraph/executor-temp" .bin/executor
|
|
install: |
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/executor github.com/sourcegraph/sourcegraph/cmd/executor
|
|
checkBinary: .bin/executor
|
|
env:
|
|
# Required for frontend and executor to communicate
|
|
EXECUTOR_FRONTEND_URL: http://localhost:3080
|
|
# Must match the secret defined in the site config.
|
|
EXECUTOR_FRONTEND_PASSWORD: hunter2hunter2hunter2
|
|
# Disable firecracker inside executor in dev
|
|
EXECUTOR_USE_FIRECRACKER: false
|
|
EXECUTOR_QUEUE_NAME: TEMPLATE
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/executor
|
|
|
|
executor-kubernetes-template: &executor_kubernetes_template
|
|
cmd: |
|
|
cd $MANIFEST_PATH
|
|
cleanup() {
|
|
kubectl delete jobs --all
|
|
kubectl delete -f .
|
|
}
|
|
kubectl delete -f . --ignore-not-found
|
|
kubectl apply -f .
|
|
trap cleanup EXIT SIGINT
|
|
while true; do
|
|
sleep 1
|
|
done
|
|
install: |
|
|
bazel run //cmd/executor-kubernetes:image_tarball
|
|
|
|
env:
|
|
IMAGE: executor-kubernetes:candidate
|
|
# TODO: This is required but should only be set on M1 Macs.
|
|
PLATFORM: linux/arm64
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/executor
|
|
|
|
codeintel-executor:
|
|
<<: *executor_template
|
|
cmd: |
|
|
env TMPDIR="$HOME/.sourcegraph/indexer-temp" .bin/executor
|
|
env:
|
|
EXECUTOR_QUEUE_NAME: codeintel
|
|
|
|
# If you want to use this, either start it with `sg run batches-executor-firecracker` or
|
|
# modify the `commandsets.batches` in your local `sg.config.overwrite.yaml`
|
|
codeintel-executor-firecracker:
|
|
<<: *executor_template
|
|
cmd: |
|
|
env TMPDIR="$HOME/.sourcegraph/codeintel-executor-temp" \
|
|
sudo --preserve-env=TMPDIR,EXECUTOR_QUEUE_NAME,EXECUTOR_FRONTEND_URL,EXECUTOR_FRONTEND_PASSWORD,EXECUTOR_USE_FIRECRACKER \
|
|
.bin/executor
|
|
env:
|
|
EXECUTOR_USE_FIRECRACKER: true
|
|
EXECUTOR_QUEUE_NAME: codeintel
|
|
|
|
codeintel-executor-kubernetes:
|
|
<<: *executor_kubernetes_template
|
|
env:
|
|
MANIFEST_PATH: ./cmd/executor/kubernetes/codeintel
|
|
|
|
batches-executor:
|
|
<<: *executor_template
|
|
cmd: |
|
|
env TMPDIR="$HOME/.sourcegraph/batches-executor-temp" .bin/executor
|
|
env:
|
|
EXECUTOR_QUEUE_NAME: batches
|
|
EXECUTOR_MAXIMUM_NUM_JOBS: 8
|
|
|
|
# If you want to use this, either start it with `sg run batches-executor-firecracker` or
|
|
# modify the `commandsets.batches` in your local `sg.config.overwrite.yaml`
|
|
batches-executor-firecracker:
|
|
<<: *executor_template
|
|
cmd: |
|
|
env TMPDIR="$HOME/.sourcegraph/batches-executor-temp" \
|
|
sudo --preserve-env=TMPDIR,EXECUTOR_QUEUE_NAME,EXECUTOR_FRONTEND_URL,EXECUTOR_FRONTEND_PASSWORD,EXECUTOR_USE_FIRECRACKER \
|
|
.bin/executor
|
|
env:
|
|
EXECUTOR_USE_FIRECRACKER: true
|
|
EXECUTOR_QUEUE_NAME: batches
|
|
|
|
batches-executor-kubernetes:
|
|
<<: *executor_kubernetes_template
|
|
env:
|
|
MANIFEST_PATH: ./cmd/executor/kubernetes/batches
|
|
|
|
# This tool rebuilds the batcheshelper image every time the source of it is changed.
|
|
batcheshelper-builder:
|
|
# Nothing to run for this, we just want to re-run the install script every time.
|
|
cmd: exit 0
|
|
install: |
|
|
bazel build //cmd/batcheshelper:image_tarball
|
|
docker load --input $(bazel cquery //cmd/batcheshelper:image_tarball --output=files)
|
|
env:
|
|
IMAGE: batcheshelper:candidate
|
|
# TODO: This is required but should only be set on M1 Macs.
|
|
PLATFORM: linux/arm64
|
|
watch:
|
|
- cmd/batcheshelper
|
|
- lib/batches
|
|
continueWatchOnExit: true
|
|
|
|
multiqueue-executor:
|
|
<<: *executor_template
|
|
cmd: |
|
|
env TMPDIR="$HOME/.sourcegraph/multiqueue-executor-temp" .bin/executor
|
|
env:
|
|
EXECUTOR_QUEUE_NAME: ''
|
|
EXECUTOR_QUEUE_NAMES: 'codeintel,batches'
|
|
EXECUTOR_MAXIMUM_NUM_JOBS: 8
|
|
|
|
blobstore:
|
|
cmd: .bin/blobstore
|
|
install: |
|
|
# Ensure the old blobstore Docker container is not running
|
|
docker rm -f blobstore
|
|
if [ -n "$DELVE" ]; then
|
|
export GCFLAGS='all=-N -l'
|
|
fi
|
|
go build -gcflags="$GCFLAGS" -o .bin/blobstore github.com/sourcegraph/sourcegraph/cmd/blobstore
|
|
checkBinary: .bin/blobstore
|
|
watch:
|
|
- lib
|
|
- internal
|
|
- cmd/blobstore
|
|
env:
|
|
BLOBSTORE_DATA_DIR: $HOME/.sourcegraph-dev/data/blobstore-go
|
|
|
|
redis-postgres:
|
|
# Add the following overwrites to your sg.config.overwrite.yaml to use the docker-compose
|
|
# database:
|
|
#
|
|
# env:
|
|
# PGHOST: localhost
|
|
# PGPASSWORD: sourcegraph
|
|
# PGUSER: sourcegraph
|
|
#
|
|
# You could also add an overwrite to add `redis-postgres` to the relevant command set(s).
|
|
description: Dockerized version of redis and postgres
|
|
cmd: docker-compose -f dev/redis-postgres.yml up $COMPOSE_ARGS
|
|
env:
|
|
COMPOSE_ARGS: --force-recreate
|
|
|
|
jaeger:
|
|
cmd: |
|
|
echo "Jaeger will be available on http://localhost:16686/-/debug/jaeger/search"
|
|
.bin/jaeger-all-in-one-${JAEGER_VERSION} --log-level ${JAEGER_LOG_LEVEL}
|
|
install_func: installJaeger
|
|
env:
|
|
JAEGER_VERSION: 1.45.0
|
|
JAEGER_DISK: $HOME/.sourcegraph-dev/data/jaeger
|
|
JAEGER_LOG_LEVEL: error
|
|
QUERY_BASE_PATH: /-/debug/jaeger
|
|
|
|
grafana:
|
|
cmd: |
|
|
if [[ $(uname) == "Linux" ]]; then
|
|
# Linux needs an extra arg to support host.internal.docker, which is how grafana connects
|
|
# to the prometheus backend.
|
|
ADD_HOST_FLAG="--add-host=host.docker.internal:host-gateway"
|
|
|
|
# Docker users on Linux will generally be using direct user mapping, which
|
|
# means that they'll want the data in the volume mount to be owned by the
|
|
# same user as is running this script. Fortunately, the Grafana container
|
|
# doesn't really care what user it runs as, so long as it can write to
|
|
# /var/lib/grafana.
|
|
DOCKER_USER="--user=$UID"
|
|
fi
|
|
|
|
echo "Grafana: serving on http://localhost:${PORT}"
|
|
echo "Grafana: note that logs are piped to ${GRAFANA_LOG_FILE}"
|
|
docker run --rm ${DOCKER_USER} \
|
|
--name=${CONTAINER} \
|
|
--cpus=1 \
|
|
--memory=1g \
|
|
-p 0.0.0.0:3370:3370 ${ADD_HOST_FLAG} \
|
|
-v "${GRAFANA_DISK}":/var/lib/grafana \
|
|
-v "$(pwd)"/dev/grafana/all:/sg_config_grafana/provisioning/datasources \
|
|
grafana:candidate >"${GRAFANA_LOG_FILE}" 2>&1
|
|
install: |
|
|
mkdir -p "${GRAFANA_DISK}"
|
|
mkdir -p "$(dirname ${GRAFANA_LOG_FILE})"
|
|
docker inspect $CONTAINER >/dev/null 2>&1 && docker rm -f $CONTAINER
|
|
bazel build //docker-images/grafana:image_tarball
|
|
docker load --input $(bazel cquery //docker-images/grafana:image_tarball --output=files)
|
|
env:
|
|
GRAFANA_DISK: $HOME/.sourcegraph-dev/data/grafana
|
|
# Log file location: since we log outside of the Docker container, we should
|
|
# log somewhere that's _not_ ~/.sourcegraph-dev/data/grafana, since that gets
|
|
# volume mounted into the container and therefore has its own ownership
|
|
# semantics.
|
|
# Now for the actual logging. Grafana's output gets sent to stdout and stderr.
|
|
# We want to capture that output, but because it's fairly noisy, don't want to
|
|
# display it in the normal case.
|
|
GRAFANA_LOG_FILE: $HOME/.sourcegraph-dev/logs/grafana/grafana.log
|
|
IMAGE: grafana:candidate
|
|
CONTAINER: grafana
|
|
PORT: 3370
|
|
# docker containers must access things via docker host on non-linux platforms
|
|
DOCKER_USER: ''
|
|
ADD_HOST_FLAG: ''
|
|
CACHE: false
|
|
|
|
prometheus:
|
|
cmd: |
|
|
if [[ $(uname) == "Linux" ]]; then
|
|
DOCKER_USER="--user=$UID"
|
|
|
|
# Frontend generally runs outside of Docker, so to access it we need to be
|
|
# able to access ports on the host. --net=host is a very dirty way of
|
|
# enabling this.
|
|
DOCKER_NET="--net=host"
|
|
SRC_FRONTEND_INTERNAL="localhost:3090"
|
|
fi
|
|
|
|
echo "Prometheus: serving on http://localhost:${PORT}"
|
|
echo "Prometheus: note that logs are piped to ${PROMETHEUS_LOG_FILE}"
|
|
docker run --rm ${DOCKER_NET} ${DOCKER_USER} \
|
|
--name=${CONTAINER} \
|
|
--cpus=1 \
|
|
--memory=4g \
|
|
-p 0.0.0.0:9090:9090 \
|
|
-v "${PROMETHEUS_DISK}":/prometheus \
|
|
-v "$(pwd)/${CONFIG_DIR}":/sg_prometheus_add_ons \
|
|
-e SRC_FRONTEND_INTERNAL="${SRC_FRONTEND_INTERNAL}" \
|
|
-e DISABLE_SOURCEGRAPH_CONFIG="${DISABLE_SOURCEGRAPH_CONFIG:-""}" \
|
|
-e DISABLE_ALERTMANAGER="${DISABLE_ALERTMANAGER:-""}" \
|
|
-e PROMETHEUS_ADDITIONAL_FLAGS="--web.enable-lifecycle --web.enable-admin-api" \
|
|
${IMAGE} >"${PROMETHEUS_LOG_FILE}" 2>&1
|
|
install: |
|
|
mkdir -p "${PROMETHEUS_DISK}"
|
|
mkdir -p "$(dirname ${PROMETHEUS_LOG_FILE})"
|
|
|
|
docker inspect $CONTAINER >/dev/null 2>&1 && docker rm -f $CONTAINER
|
|
|
|
if [[ $(uname) == "Linux" ]]; then
|
|
PROM_TARGETS="dev/prometheus/linux/prometheus_targets.yml"
|
|
fi
|
|
|
|
cp ${PROM_TARGETS} "${CONFIG_DIR}"/prometheus_targets.yml
|
|
|
|
bazel build //docker-images/prometheus:image_tarball
|
|
docker load --input $(bazel cquery //docker-images/prometheus:image_tarball --output=files)
|
|
env:
|
|
PROMETHEUS_DISK: $HOME/.sourcegraph-dev/data/prometheus
|
|
# See comment above for `grafana`
|
|
PROMETHEUS_LOG_FILE: $HOME/.sourcegraph-dev/logs/prometheus/prometheus.log
|
|
IMAGE: prometheus:candidate
|
|
CONTAINER: prometheus
|
|
PORT: 9090
|
|
CONFIG_DIR: docker-images/prometheus/config
|
|
DOCKER_USER: ''
|
|
DOCKER_NET: ''
|
|
PROM_TARGETS: dev/prometheus/all/prometheus_targets.yml
|
|
SRC_FRONTEND_INTERNAL: host.docker.internal:3090
|
|
ADD_HOST_FLAG: ''
|
|
DISABLE_SOURCEGRAPH_CONFIG: false
|
|
|
|
postgres_exporter:
|
|
cmd: |
|
|
if [[ $(uname) == "Linux" ]]; then
|
|
# Linux needs an extra arg to support host.internal.docker, which is how grafana connects
|
|
# to the prometheus backend.
|
|
ADD_HOST_FLAG="--add-host=host.docker.internal:host-gateway"
|
|
fi
|
|
|
|
# Use psql to read the effective values for PG* env vars (instead of, e.g., hardcoding the default
|
|
# values).
|
|
get_pg_env() { psql -c '\set' | grep "$1" | cut -f 2 -d "'"; }
|
|
PGHOST=${PGHOST-$(get_pg_env HOST)}
|
|
PGUSER=${PGUSER-$(get_pg_env USER)}
|
|
PGPORT=${PGPORT-$(get_pg_env PORT)}
|
|
# we need to be able to query migration_logs table
|
|
PGDATABASE=${PGDATABASE-$(get_pg_env DBNAME)}
|
|
|
|
ADJUSTED_HOST=${PGHOST:-127.0.0.1}
|
|
if [[ ("$ADJUSTED_HOST" == "localhost" || "$ADJUSTED_HOST" == "127.0.0.1" || -f "$ADJUSTED_HOST") && "$OSTYPE" != "linux-gnu" ]]; then
|
|
ADJUSTED_HOST="host.docker.internal"
|
|
fi
|
|
|
|
NET_ARG=""
|
|
DATA_SOURCE_NAME="postgresql://${PGUSER}:${PGPASSWORD}@${ADJUSTED_HOST}:${PGPORT}/${PGDATABASE}?sslmode=${PGSSLMODE:-disable}"
|
|
|
|
if [[ "$OSTYPE" == "linux-gnu" ]]; then
|
|
NET_ARG="--net=host"
|
|
DATA_SOURCE_NAME="postgresql://${PGUSER}:${PGPASSWORD}@${ADJUSTED_HOST}:${PGPORT}/${PGDATABASE}?sslmode=${PGSSLMODE:-disable}"
|
|
fi
|
|
|
|
echo "postgres_exporter: serving on http://localhost:${PORT}"
|
|
docker run --rm ${DOCKER_USER} \
|
|
--name=${CONTAINER} \
|
|
-e DATA_SOURCE_NAME="${DATA_SOURCE_NAME}" \
|
|
--cpus=1 \
|
|
--memory=1g \
|
|
-p 0.0.0.0:9187:9187 ${ADD_HOST_FLAG} \
|
|
"${IMAGE}"
|
|
install: |
|
|
docker inspect $CONTAINER >/dev/null 2>&1 && docker rm -f $CONTAINER
|
|
bazel build //docker-images/postgres_exporter:image_tarball
|
|
docker load --input $(bazel cquery //docker-images/postgres_exporter:image_tarball --output=files)
|
|
env:
|
|
IMAGE: postgres-exporter:candidate
|
|
CONTAINER: postgres_exporter
|
|
# docker containers must access things via docker host on non-linux platforms
|
|
DOCKER_USER: ''
|
|
ADD_HOST_FLAG: ''
|
|
|
|
monitoring-generator:
|
|
cmd: echo "monitoring-generator is deprecated, please run 'sg generate go' or 'bazel run //dev:write_all_generated' instead"
|
|
env:
|
|
|
|
otel-collector:
|
|
install: |
|
|
bazel build //docker-images/opentelemetry-collector:image_tarball
|
|
docker load --input $(bazel cquery //docker-images/opentelemetry-collector:image_tarball --output=files)
|
|
description: OpenTelemetry collector
|
|
cmd: |
|
|
JAEGER_HOST='host.docker.internal'
|
|
if [[ $(uname) == "Linux" ]]; then
|
|
# Jaeger generally runs outside of Docker, so to access it we need to be
|
|
# able to access ports on the host, because the Docker host only exists on
|
|
# MacOS. --net=host is a very dirty way of enabling this.
|
|
DOCKER_NET="--net=host"
|
|
JAEGER_HOST="localhost"
|
|
fi
|
|
|
|
docker container rm -f otel-collector
|
|
docker run --rm --name=otel-collector $DOCKER_NET $DOCKER_ARGS \
|
|
-p 4317:4317 -p 4318:4318 -p 55679:55679 -p 55670:55670 \
|
|
-p 8888:8888 \
|
|
-e JAEGER_HOST=$JAEGER_HOST \
|
|
-e HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY \
|
|
-e HONEYCOMB_DATASET=$HONEYCOMB_DATASET \
|
|
$IMAGE --config "/etc/otel-collector/$CONFIGURATION_FILE"
|
|
env:
|
|
IMAGE: opentelemetry-collector:candidate
|
|
# Overwrite the following in sg.config.overwrite.yaml, based on which collector
|
|
# config you are using - see docker-images/opentelemetry-collector for more details.
|
|
CONFIGURATION_FILE: 'configs/jaeger.yaml'
|
|
# HONEYCOMB_API_KEY: ''
|
|
# HONEYCOMB_DATASET: ''
|
|
|
|
storybook:
|
|
cmd: pnpm storybook
|
|
install: pnpm install
|
|
|
|
# This will execute `env`, a utility to print the process environment. Can
|
|
# be used to debug which global vars `sg` uses.
|
|
debug-env:
|
|
description: Debug env vars
|
|
cmd: env
|
|
|
|
bext:
|
|
cmd: pnpm --filter @sourcegraph/browser dev
|
|
install: pnpm install
|
|
|
|
bazelCommands:
|
|
blobstore:
|
|
target: //cmd/blobstore
|
|
env:
|
|
BLOBSTORE_DATA_DIR: $HOME/.sourcegraph-dev/data/blobstore-go
|
|
cody-gateway:
|
|
target: //cmd/cody-gateway
|
|
env:
|
|
SRC_LOG_LEVEL: info
|
|
# Enables metrics in dev via debugserver
|
|
SRC_PROF_HTTP: '127.0.0.1:6098'
|
|
# Set in override if you want to test local Cody Gateway: https://docs-legacy.sourcegraph.com/dev/how-to/cody_gateway
|
|
CODY_GATEWAY_DOTCOM_ACCESS_TOKEN: ''
|
|
CODY_GATEWAY_DOTCOM_API_URL: https://sourcegraph.test:3443/.api/graphql
|
|
CODY_GATEWAY_ALLOW_ANONYMOUS: true
|
|
CODY_GATEWAY_DIAGNOSTICS_SECRET: sekret
|
|
# Set in 'sg.config.overwrite.yaml' if you want to test upstream
|
|
# integrations from local Cody Gateway:
|
|
# Entitle: https://app.entitle.io/request?data=eyJkdXJhdGlvbiI6IjIxNjAwIiwianVzdGlmaWNhdGlvbiI6IldSSVRFIEpVU1RJRklDQVRJT04gSEVSRSIsInJvbGVJZHMiOlt7ImlkIjoiYjhmYTk2NzgtNDExZC00ZmU1LWE2NDYtMzY4Y2YzYzUwYjJlIiwidGhyb3VnaCI6ImI4ZmE5Njc4LTQxMWQtNGZlNS1hNjQ2LTM2OGNmM2M1MGIyZSIsInR5cGUiOiJyb2xlIn1dfQ%3D%3D
|
|
# GSM: https://console.cloud.google.com/security/secret-manager?project=cody-gateway-dev
|
|
CODY_GATEWAY_ANTHROPIC_ACCESS_TOKEN: sekret
|
|
CODY_GATEWAY_OPENAI_ACCESS_TOKEN: sekret
|
|
CODY_GATEWAY_FIREWORKS_ACCESS_TOKEN: sekret
|
|
CODY_GATEWAY_SOURCEGRAPH_EMBEDDINGS_API_TOKEN: sekret
|
|
CODY_GATEWAY_GOOGLE_ACCESS_TOKEN: sekret
|
|
docsite:
|
|
runTarget: //doc:serve
|
|
searcher:
|
|
target: //cmd/searcher
|
|
syntax-highlighter:
|
|
target: //docker-images/syntax-highlighter:syntect_server
|
|
ignoreStdout: true
|
|
ignoreStderr: true
|
|
env:
|
|
# Environment copied from Dockerfile
|
|
WORKERS: '1'
|
|
ROCKET_ENV: 'production'
|
|
ROCKET_LIMITS: '{json=10485760}'
|
|
ROCKET_SECRET_KEY: 'SeerutKeyIsI7releuantAndknvsuZPluaseIgnorYA='
|
|
ROCKET_KEEP_ALIVE: '0'
|
|
ROCKET_PORT: '9238'
|
|
QUIET: 'true'
|
|
frontend:
|
|
description: Enterprise frontend
|
|
target: //cmd/frontend
|
|
precmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
# If EXTSVC_CONFIG_FILE is *unset*, set a default.
|
|
export EXTSVC_CONFIG_FILE=${EXTSVC_CONFIG_FILE-'../dev-private/enterprise/dev/external-services-config.json'}
|
|
env:
|
|
CONFIGURATION_MODE: server
|
|
USE_ENHANCED_LANGUAGE_DETECTION: false
|
|
SITE_CONFIG_FILE: '../dev-private/enterprise/dev/site-config.json'
|
|
SITE_CONFIG_ESCAPE_HATCH_PATH: '$HOME/.sourcegraph/site-config.json'
|
|
# frontend processes need this to be so that the paths to the assets are rendered correctly
|
|
WEB_BUILDER_DEV_SERVER: 1
|
|
worker:
|
|
target: //cmd/worker
|
|
precmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
repo-updater:
|
|
target: //cmd/repo-updater
|
|
precmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
symbols:
|
|
target: //cmd/symbols
|
|
checkBinary: .bin/symbols
|
|
env:
|
|
CTAGS_COMMAND: dev/universal-ctags-dev
|
|
SCIP_CTAGS_COMMAND: dev/scip-ctags-dev
|
|
CTAGS_PROCESSES: 2
|
|
USE_ROCKSKIP: 'false'
|
|
gitserver-template: &gitserver_bazel_template
|
|
target: //cmd/gitserver
|
|
env: &gitserverenv
|
|
HOSTNAME: 127.0.0.1:3178
|
|
# This is only here to stay backwards-compatible with people's custom
|
|
# `sg.config.overwrite.yaml` files
|
|
gitserver:
|
|
<<: *gitserver_bazel_template
|
|
gitserver-0:
|
|
<<: *gitserver_bazel_template
|
|
env:
|
|
<<: *gitserverenv
|
|
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3501
|
|
GITSERVER_ADDR: 127.0.0.1:3501
|
|
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_1
|
|
SRC_PROF_HTTP: 127.0.0.1:3551
|
|
gitserver-1:
|
|
<<: *gitserver_bazel_template
|
|
env:
|
|
<<: *gitserverenv
|
|
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3502
|
|
GITSERVER_ADDR: 127.0.0.1:3502
|
|
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_2
|
|
SRC_PROF_HTTP: 127.0.0.1:3552
|
|
|
|
codeintel-worker:
|
|
precmd: |
|
|
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
|
|
target: //cmd/precise-code-intel-worker
|
|
executor-template: &executor_template_bazel
|
|
target: //cmd/executor
|
|
env:
|
|
EXECUTOR_QUEUE_NAME: TEMPLATE
|
|
TMPDIR: $HOME/.sourcegraph/executor-temp
|
|
# Required for frontend and executor to communicate
|
|
EXECUTOR_FRONTEND_URL: http://localhost:3080
|
|
# Must match the secret defined in the site config.
|
|
EXECUTOR_FRONTEND_PASSWORD: hunter2hunter2hunter2
|
|
# Disable firecracker inside executor in dev
|
|
EXECUTOR_USE_FIRECRACKER: false
|
|
codeintel-executor:
|
|
<<: *executor_template_bazel
|
|
env:
|
|
EXECUTOR_QUEUE_NAME: codeintel
|
|
TMPDIR: $HOME/.sourcegraph/indexer-temp
|
|
|
|
dockerCommands:
|
|
batcheshelper-builder:
|
|
# Nothing to run for this, we just want to re-run the install script every time.
|
|
cmd: exit 0
|
|
target: //cmd/batcheshelper:image_tarball
|
|
image: batcheshelper:candidate
|
|
env:
|
|
# TODO: This is required but should only be set on M1 Macs.
|
|
PLATFORM: linux/arm64
|
|
continueWatchOnExit: true
|
|
|
|
grafana:
|
|
target: //docker-images/grafana:image_tarball
|
|
docker:
|
|
image: grafana:candidate
|
|
ports:
|
|
- 3370
|
|
flags:
|
|
cpus: 1
|
|
memory: 1g
|
|
volumes:
|
|
- from: $HOME/.sourcegraph-dev/data/grafana
|
|
to: /var/lib/grafana
|
|
- from: $(pwd)/dev/grafana/all
|
|
to: /sg_config_grafana/provisioning/datasources
|
|
|
|
linux:
|
|
flags:
|
|
# Linux needs an extra arg to support host.internal.docker, which is how grafana connects
|
|
# to the prometheus backend.
|
|
add-host: host.docker.internal:host-gateway
|
|
|
|
# Docker users on Linux will generally be using direct user mapping, which
|
|
# means that they'll want the data in the volume mount to be owned by the
|
|
# same user as is running this script. Fortunately, the Grafana container
|
|
# doesn't really care what user it runs as, so long as it can write to
|
|
# /var/lib/grafana.
|
|
user: $UID
|
|
# Log file location: since we log outside of the Docker container, we should
|
|
# log somewhere that's _not_ ~/.sourcegraph-dev/data/grafana, since that gets
|
|
# volume mounted into the container and therefore has its own ownership
|
|
# semantics.
|
|
# Now for the actual logging. Grafana's output gets sent to stdout and stderr.
|
|
# We want to capture that output, but because it's fairly noisy, don't want to
|
|
# display it in the normal case.
|
|
logfile: $HOME/.sourcegraph-dev/logs/grafana/grafana.log
|
|
|
|
env:
|
|
# docker containers must access things via docker host on non-linux platforms
|
|
CACHE: false
|
|
|
|
otel-collector:
|
|
target: //docker-images/opentelemetry-collector:image_tarball
|
|
description: OpenTelemetry collector
|
|
args: '--config "/etc/otel-collector/$CONFIGURATION_FILE"'
|
|
docker:
|
|
image: opentelemetry-collector:candidate
|
|
ports:
|
|
- 4317
|
|
- 4318
|
|
- 55679
|
|
- 55670
|
|
- 8888
|
|
linux:
|
|
flags:
|
|
# Jaeger generally runs outside of Docker, so to access it we need to be
|
|
# able to access ports on the host, because the Docker host only exists on
|
|
# MacOS. --net=host is a very dirty way of enabling this.
|
|
net: host
|
|
env:
|
|
JAEGER_HOST: localhost
|
|
env:
|
|
JAEGER_HOST: host.docker.internal
|
|
# Overwrite the following in sg.config.overwrite.yaml, based on which collector
|
|
# config you are using - see docker-images/opentelemetry-collector for more details.
|
|
CONFIGURATION_FILE: 'configs/jaeger.yaml'
|
|
|
|
postgres_exporter:
|
|
target: //docker-images/postgres_exporter:image_tarball
|
|
docker:
|
|
image: postgres-exporter:candidate
|
|
flags:
|
|
cpus: 1
|
|
memory: 1g
|
|
ports:
|
|
- 9187
|
|
linux:
|
|
flags:
|
|
# Linux needs an extra arg to support host.internal.docker, which is how
|
|
# postgres_exporter connects to the prometheus backend.
|
|
add-host: host.docker.internal:host-gateway
|
|
net: host
|
|
precmd: |
|
|
# Use psql to read the effective values for PG* env vars (instead of, e.g., hardcoding the default
|
|
# values).
|
|
get_pg_env() { psql -c '\set' | grep "$1" | cut -f 2 -d "'"; }
|
|
PGHOST=${PGHOST-$(get_pg_env HOST)}
|
|
PGUSER=${PGUSER-$(get_pg_env USER)}
|
|
PGPORT=${PGPORT-$(get_pg_env PORT)}
|
|
# we need to be able to query migration_logs table
|
|
PGDATABASE=${PGDATABASE-$(get_pg_env DBNAME)}
|
|
|
|
ADJUSTED_HOST=${PGHOST:-127.0.0.1}
|
|
if [[ ("$ADJUSTED_HOST" == "localhost" || "$ADJUSTED_HOST" == "127.0.0.1" || -f "$ADJUSTED_HOST") && "$OSTYPE" != "linux-gnu" ]]; then
|
|
ADJUSTED_HOST="host.docker.internal"
|
|
fi
|
|
env:
|
|
DATA_SOURCE_NAME: postgresql://${PGUSER}:${PGPASSWORD}@${ADJUSTED_HOST}:${PGPORT}/${PGDATABASE}?sslmode=${PGSSLMODE:-disable}
|
|
|
|
prometheus:
|
|
target: //docker-images/prometheus:image_tarball
|
|
logfile: $HOME/.sourcegraph-dev/logs/prometheus/prometheus.log
|
|
docker:
|
|
image: prometheus:candidate
|
|
volumes:
|
|
- from: $HOME/.sourcegraph-dev/data/prometheus
|
|
to: /prometheus
|
|
- from: $(pwd)/$CONFIG_DIR
|
|
to: /sg_prometheus_add_ons
|
|
flags:
|
|
cpus: 1
|
|
memory: 4g
|
|
ports:
|
|
- 9090
|
|
linux:
|
|
flags:
|
|
net: host
|
|
user: $UID
|
|
env:
|
|
PROM_TARGETS: dev/prometheus/linux/prometheus_targets.yml
|
|
SRC_FRONTEND_INTERNAL: localhost:3090
|
|
precmd: cp ${PROM_TARGETS} "${CONFIG_DIR}"/prometheus_targets.yml
|
|
env:
|
|
CONFIG_DIR: docker-images/prometheus/config
|
|
PROM_TARGETS: dev/prometheus/all/prometheus_targets.yml
|
|
SRC_FRONTEND_INTERNAL: host.docker.internal:3090
|
|
DISABLE_SOURCEGRAPH_CONFIG: false
|
|
DISABLE_ALERTMANAGER: false
|
|
PROMETHEUS_ADDITIONAL_FLAGS: '--web.enable-lifecycle --web.enable-admin-api'
|
|
|
|
syntax-highlighter:
|
|
ignoreStdout: true
|
|
ignoreStderr: true
|
|
docker:
|
|
image: sourcegraph/syntax-highlighter:insiders
|
|
pull: true
|
|
ports:
|
|
- 9238
|
|
env:
|
|
WORKERS: 1
|
|
ROCKET_ADDRESS: 0.0.0.0
|
|
|
|
#
|
|
# CommandSets ################################################################
|
|
#
|
|
defaultCommandset: enterprise
|
|
commandsets:
|
|
enterprise-bazel: &enterprise_bazel_set
|
|
checks:
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- bazelisk
|
|
- ibazel
|
|
- dev-private
|
|
bazelCommands:
|
|
- blobstore
|
|
- docsite
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
# - syntax-highlighter
|
|
commands:
|
|
- web
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- caddy
|
|
|
|
# If you modify this command set, please consider also updating the dotcom runset.
|
|
enterprise: &enterprise_set
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- web
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- caddy
|
|
- symbols
|
|
# TODO https://github.com/sourcegraph/devx-support/issues/537
|
|
# - docsite
|
|
- syntax-highlighter
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
- embeddings
|
|
env:
|
|
DISABLE_CODE_INSIGHTS_HISTORICAL: false
|
|
DISABLE_CODE_INSIGHTS: false
|
|
|
|
enterprise-e2e:
|
|
<<: *enterprise_set
|
|
env:
|
|
# EXTSVC_CONFIG_FILE being set prevents the e2e test suite to add
|
|
# additional connections.
|
|
EXTSVC_CONFIG_FILE: ''
|
|
|
|
dotcom:
|
|
# This is 95% the enterprise runset, with the addition of Cody Gateway.
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- web
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- caddy
|
|
- docsite
|
|
- syntax-highlighter
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
- embeddings
|
|
- cody-gateway
|
|
env:
|
|
SOURCEGRAPHDOTCOM_MODE: true
|
|
|
|
codeintel-bazel: &codeintel_bazel_set
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- bazelisk
|
|
- ibazel
|
|
- dev-private
|
|
bazelCommands:
|
|
- blobstore
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- syntax-highlighter
|
|
- codeintel-worker
|
|
- codeintel-executor
|
|
commands:
|
|
- web
|
|
- docsite
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- caddy
|
|
- jaeger
|
|
- grafana
|
|
- prometheus
|
|
|
|
codeintel-syntactic:
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- web
|
|
- worker
|
|
- blobstore
|
|
- repo-updater
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- syntactic-code-intel-worker-0
|
|
- syntactic-code-intel-worker-1
|
|
|
|
codeintel:
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- web
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- caddy
|
|
- docsite
|
|
- syntax-highlighter
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
- codeintel-worker
|
|
- codeintel-executor
|
|
# - otel-collector
|
|
- jaeger
|
|
- grafana
|
|
- prometheus
|
|
|
|
codeintel-kubernetes:
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- web
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- caddy
|
|
- docsite
|
|
- syntax-highlighter
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
- codeintel-worker
|
|
- codeintel-executor-kubernetes
|
|
# - otel-collector
|
|
- jaeger
|
|
- grafana
|
|
- prometheus
|
|
|
|
enterprise-codeintel:
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- web
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- caddy
|
|
- docsite
|
|
- syntax-highlighter
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
- codeintel-worker
|
|
- codeintel-executor
|
|
- otel-collector
|
|
- jaeger
|
|
- grafana
|
|
- prometheus
|
|
enterprise-codeintel-multi-queue-executor:
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- web
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- caddy
|
|
- docsite
|
|
- syntax-highlighter
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
- codeintel-worker
|
|
- multiqueue-executor
|
|
# - otel-collector
|
|
- jaeger
|
|
- grafana
|
|
- prometheus
|
|
|
|
enterprise-codeintel-bazel:
|
|
<<: *codeintel_bazel_set
|
|
|
|
enterprise-codeinsights:
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- web
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- caddy
|
|
- docsite
|
|
- syntax-highlighter
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
env:
|
|
DISABLE_CODE_INSIGHTS_HISTORICAL: false
|
|
DISABLE_CODE_INSIGHTS: false
|
|
|
|
api-only:
|
|
checks:
|
|
- docker
|
|
- redis
|
|
- postgres
|
|
- git
|
|
- dev-private
|
|
commands:
|
|
- frontend
|
|
- worker
|
|
- repo-updater
|
|
- gitserver-0
|
|
- gitserver-1
|
|
- searcher
|
|
- symbols
|
|
- zoekt-index-0
|
|
- zoekt-index-1
|
|
- zoekt-web-0
|
|
- zoekt-web-1
|
|
- blobstore
|
|
|
|
  batches:
    checks:
      - docker
      - redis
      - postgres
      - git
      - dev-private
    commands:
      - frontend
      - worker
      - repo-updater
      - web
      - gitserver-0
      - gitserver-1
      - searcher
      - symbols
      - caddy
      - docsite
      - syntax-highlighter
      - zoekt-index-0
      - zoekt-index-1
      - zoekt-web-0
      - zoekt-web-1
      - blobstore
      - batches-executor
      - batcheshelper-builder

  batches-kubernetes:
    checks:
      - docker
      - redis
      - postgres
      - git
      - dev-private
    commands:
      - frontend
      - worker
      - repo-updater
      - web
      - gitserver-0
      - gitserver-1
      - searcher
      - symbols
      - caddy
      - docsite
      - syntax-highlighter
      - zoekt-index-0
      - zoekt-index-1
      - zoekt-web-0
      - zoekt-web-1
      - blobstore
      - batches-executor-kubernetes
      - batcheshelper-builder

  iam:
    checks:
      - docker
      - redis
      - postgres
      - git
      - dev-private
    commands:
      - frontend
      - repo-updater
      - web
      - gitserver-0
      - gitserver-1
      - caddy

  monitoring:
    checks:
      - docker
    commands:
      - jaeger
    dockerCommands:
      - otel-collector
      - prometheus
      - grafana
      - postgres_exporter

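  # Note (assumption): unlike entries under `commands:`, the entries under
  # `dockerCommands:` in the monitoring set above refer to Docker-based command
  # definitions kept in a separate `dockerCommands` section of this file.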
  monitoring-og:
    checks:
      - docker
    commands:
      - jaeger
      - otel-collector
      - prometheus
      - grafana
      - postgres_exporter

  monitoring-alerts:
    checks:
      - docker
      - redis
      - postgres
    commands:
      - prometheus
      - grafana
      # For generated alerts docs
      - docsite
      # For the alerting integration with frontend
      - frontend
      - web
      - caddy

  web-standalone:
    commands:
      - web-standalone-http
      - caddy

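  # Note: the web-standalone set above serves just the web app behind caddy;
  # the web-integration:debug test further down expects it to be running
  # (`sg start web-standalone`).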
  web-sveltekit-standalone:
    commands:
      - web-sveltekit-standalone
      - caddy
    env:
      SK_PORT: 3080

  web-standalone-prod:
    commands:
      - web-standalone-http-prod
      - caddy

  # For testing our OpenTelemetry stack
  otel:
    checks:
      - docker
    commands:
      - otel-collector
      - jaeger

  single-program:
    checks:
      - git
      - dev-private
    commands:
      - sourcegraph
      - web
      - caddy
    env:
      DISABLE_CODE_INSIGHTS: false
      PRECISE_CODE_INTEL_UPLOAD_AWS_ENDPOINT: http://localhost:49000
      EMBEDDINGS_UPLOAD_AWS_ENDPOINT: http://localhost:49000
      USE_EMBEDDED_POSTGRESQL: false

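  # Example usage (illustrative): `sg start cody-gateway` runs Cody Gateway
  # locally; redis is its only prerequisite check.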
  cody-gateway:
    checks:
      - redis
    commands:
      - cody-gateway

  cody-gateway-bazel:
    checks:
      - redis
    bazelCommands:
      - cody-gateway

  enterprise-bazel-sveltekit:
    <<: *enterprise_bazel_set
    env:
      SVELTEKIT: true

  enterprise-sveltekit:
    <<: *enterprise_set
    # Keep in sync with &enterprise_set.commands
    commands:
      - frontend
      - worker
      - repo-updater
      - web
      - web-sveltekit
      - gitserver-0
      - gitserver-1
      - searcher
      - caddy
      - symbols
      # TODO https://github.com/sourcegraph/devx-support/issues/537
      # - docsite
      - syntax-highlighter
      - zoekt-index-0
      - zoekt-index-1
      - zoekt-web-0
      - zoekt-web-1
      - blobstore
      - embeddings
    env:
      SVELTEKIT: true

tests:
  # These can be run with `sg test [name]`
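  # For example, `sg test backend` runs `go test` with the default args `./...`
  # defined below.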
  backend:
    cmd: go test
    defaultArgs: ./...

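  # Note: the bazel-* tests below fetch secrets via `gcloud secrets versions access`
  # from the sourcegraph-ci project, so they need an authenticated gcloud session
  # with access to that project.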
  bazel-backend-integration:
    cmd: |
      export GHE_GITHUB_TOKEN=$(gcloud secrets versions access latest --secret=GHE_GITHUB_TOKEN --quiet --project=sourcegraph-ci)
      export GH_TOKEN=$(gcloud secrets versions access latest --secret=GITHUB_TOKEN --quiet --project=sourcegraph-ci)

      export BITBUCKET_SERVER_USERNAME=$(gcloud secrets versions access latest --secret=BITBUCKET_SERVER_USERNAME --quiet --project=sourcegraph-ci)
      export BITBUCKET_SERVER_TOKEN=$(gcloud secrets versions access latest --secret=BITBUCKET_SERVER_TOKEN --quiet --project=sourcegraph-ci)
      export BITBUCKET_SERVER_URL=$(gcloud secrets versions access latest --secret=BITBUCKET_SERVER_URL --quiet --project=sourcegraph-ci)

      export PERFORCE_PASSWORD=$(gcloud secrets versions access latest --secret=PERFORCE_PASSWORD --quiet --project=sourcegraph-ci)
      export PERFORCE_USER=$(gcloud secrets versions access latest --secret=PERFORCE_USER --quiet --project=sourcegraph-ci)
      export PERFORCE_PORT=$(gcloud secrets versions access latest --secret=PERFORCE_PORT --quiet --project=sourcegraph-ci)

      export SOURCEGRAPH_LICENSE_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_KEY --quiet --project=sourcegraph-ci)
      export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_GENERATION_KEY --quiet --project=sourcegraph-ci)

      bazel test //testing:backend_integration_test --verbose_failures --sandbox_debug

  bazel-e2e:
    cmd: |
      export GHE_GITHUB_TOKEN=$(gcloud secrets versions access latest --secret=GHE_GITHUB_TOKEN --quiet --project=sourcegraph-ci)
      export GH_TOKEN=$(gcloud secrets versions access latest --secret=GITHUB_TOKEN --quiet --project=sourcegraph-ci)
      export SOURCEGRAPH_LICENSE_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_KEY --quiet --project=sourcegraph-ci)
      export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_GENERATION_KEY --quiet --project=sourcegraph-ci)

      bazel test //testing:e2e_test --test_env=HEADLESS=false --test_env=SOURCEGRAPH_BASE_URL="http://localhost:7080" --test_env=GHE_GITHUB_TOKEN=$GHE_GITHUB_TOKEN --test_env=GH_TOKEN=$GH_TOKEN --test_env=DISPLAY=$DISPLAY

  bazel-web-integration:
    cmd: |
      export GH_TOKEN=$(gcloud secrets versions access latest --secret=GITHUB_TOKEN --quiet --project=sourcegraph-ci)
      export PERCY_TOKEN=$(gcloud secrets versions access latest --secret=PERCY_TOKEN --quiet --project=sourcegraph-ci)
      bazel test //client/web/src/integration:integration-tests --test_env=HEADLESS=false --test_env=SOURCEGRAPH_BASE_URL="http://localhost:7080" --test_env=GH_TOKEN=$GH_TOKEN --test_env=DISPLAY=$DISPLAY --test_env=PERCY_TOKEN=$PERCY_TOKEN

  backend-integration:
    cmd: cd dev/gqltest && go test -long -base-url $BASE_URL -email $EMAIL -username $USERNAME -password $PASSWORD ./gqltest
    env:
      # These are defaults. They can be overwritten by setting the env vars when
      # running the command.
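      # For example (hypothetical values):
      #   USERNAME=dev PASSWORD=changeme sg test backend-integration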
      BASE_URL: 'http://localhost:3080'
      EMAIL: 'joe@sourcegraph.com'
      PASSWORD: '12345'

  bext:
    cmd: pnpm --filter @sourcegraph/browser test

  bext-build:
    cmd: EXTENSION_PERMISSIONS_ALL_URLS=true pnpm --filter @sourcegraph/browser build

  bext-integration:
    cmd: pnpm --filter @sourcegraph/browser test-integration

  bext-e2e:
    cmd: pnpm --filter @sourcegraph/browser mocha ./src/end-to-end/github.test.ts ./src/end-to-end/gitlab.test.ts
    env:
      SOURCEGRAPH_BASE_URL: https://sourcegraph.com

  client:
    cmd: pnpm run test

  docsite:
    cmd: .bin/docsite_${DOCSITE_VERSION} check ./doc
    env:
      DOCSITE_VERSION: v1.9.4 # When bumping DOCSITE_VERSION, update it everywhere it appears (including outside this repo)

  web-e2e:
    preamble: |
      A Sourcegraph instance must already be running for these tests to work, most
      commonly with: `sg start enterprise-e2e`

      See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-end-to-end-tests
    cmd: pnpm test-e2e
    env:
      TEST_USER_EMAIL: test@sourcegraph.com
      TEST_USER_PASSWORD: supersecurepassword
      SOURCEGRAPH_BASE_URL: https://sourcegraph.test:3443
      BROWSER: chrome
    externalSecrets:
      GH_TOKEN:
        project: 'sourcegraph-ci'
        name: 'BUILDKITE_GITHUBDOTCOM_TOKEN'

  web-regression:
    preamble: |
      A Sourcegraph instance must already be running for these tests to work, most
      commonly with: `sg start enterprise-e2e`

      See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-regression-tests

    cmd: pnpm test-regression
    env:
      SOURCEGRAPH_SUDO_USER: test
      SOURCEGRAPH_BASE_URL: https://sourcegraph.test:3443
      TEST_USER_PASSWORD: supersecurepassword
      BROWSER: chrome

  web-integration:
    preamble: |
      The web application must be built before these tests can run, most
      commonly with: `sg run web-integration-build`, or `sg run web-integration-build-prod` for a production build.

      See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-integration-tests

    cmd: pnpm test-integration

  web-integration:debug:
    preamble: |
      A Sourcegraph instance must already be running for these tests to work, most
      commonly with: `sg start web-standalone`

      See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-integration-tests

    cmd: pnpm test-integration:debug