Experiment: Natively run SSBC in docker (#44034)

This adds an experimental code path that I will use to test a docker-only execution mode for server-side batch changes. The code path is never executed on customer instances until we deem it ready and make the switch, which lets me dogfood it while it's not yet available to customers.

Ultimately, the goal is to make executors simply "the job runner platform behind a generic interface". Today, server-side batch changes depend on src-cli to do a good chunk of the work. This is a blocker for going fully docker-based with executors, which in turn will be a requirement on the road to k8s-based executors.

As this removes the dependency on src-cli, nothing but the job interface and the API endpoints ties executor and Sourcegraph instance together. Ultimately, this will allow us to support larger version spans between the two (pending executors going GA and becoming feature-complete).

Known issues/limitations:

- Steps that are skipped in between steps that run don't work yet.
- Skipping steps dynamically is inefficient: we cannot tell the executor to skip a step if a condition holds, so we replace the step's script with `exit 0`.
- It is unclear if all variants of file mounts still work; basic cases do. Files used to be read-only in src-cli. They aren't now, but their content is still reset in between steps.
- The assumption that everything operates in /work is broken here, because we need to use what executors give us to persist out-of-repo state in between containers (like the step result from the previous step).
- It is unclear if workspace mounts work.
- Cache keys are not correctly computed when using workspace mounts: the metadata retriever is nil.
- We still use log outputs to transfer the AfterStepResults to the Sourcegraph instance; this should finally become an artifact instead. Then we would no longer have to rely on the execution_log_entries column and could theoretically prune it after some time. That column currently grows indefinitely.
- It depends on `tee` being available in the docker images to capture cmd.stdout/cmd.stderr properly for template variable rendering (see the sketch after this list).
- Env vars are not rendered in their evaluated form post-execution.
- File permissions are unclear and might be similarly broken to how they are today, or even worse.
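For illustration, a minimal sketch of the tee-based capture mentioned above, assuming the `stdoutN.log`/`stderrN.log` naming convention that batcheshelper reads further down in this diff; the exact command line the executor generates is not part of this change:

```sh
# Stream step output live while also persisting both streams for
# template variable rendering; requires tee inside the step's
# container image. POSIX fd-swap idiom to tee stdout and stderr
# into separate files:
(sh step0.sh 2>&1 1>&3 | tee stderr0.log >&2) 3>&1 | tee stdout0.log
```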
Disclaimer: This is not feature-complete today! But it also doesn't hit any default code paths. As development progresses, we can eventually remove the feature flag and run the new job format on all instances. This PR already handles falling back to rendering old records correctly in the UI.
Commit d04e821ceb (parent 53c394b9a8), authored by Erik Seliger on 2022-11-10 00:20:43 +01:00, committed by GitHub.
36 changed files with 1117 additions and 148 deletions

View File

@@ -285,16 +285,18 @@ export const mockWorkspace = (
__typename: 'ExecutionLogEntry',
},
],
srcExec: {
command: ['src', 'batch', 'exec', '-f', 'input.json'],
durationMilliseconds: null,
exitCode: null,
key: 'step.src.batch-exec',
out:
'stdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.055Z","status":"STARTED","metadata":{}}\nstdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.055Z","status":"PROGRESS","metadata":{"total":1}}\nstdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.188Z","status":"PROGRESS","metadata":{"done":1,"total":1}}\nstdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.188Z","status":"SUCCESS","metadata":{}}\nstdout: {"operation":"DETERMINING_WORKSPACE_TYPE","timestamp":"2022-04-21T06:26:59.188Z","status":"STARTED","metadata":{}}\n',
startTime: subMinutes(now, 10).toISOString(),
__typename: 'ExecutionLogEntry',
},
srcExec: [
{
command: ['src', 'batch', 'exec', '-f', 'input.json'],
durationMilliseconds: null,
exitCode: null,
key: 'step.src.batch-exec',
out:
'stdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.055Z","status":"STARTED","metadata":{}}\nstdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.055Z","status":"PROGRESS","metadata":{"total":1}}\nstdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.188Z","status":"PROGRESS","metadata":{"done":1,"total":1}}\nstdout: {"operation":"PREPARING_DOCKER_IMAGES","timestamp":"2022-04-21T06:26:59.188Z","status":"SUCCESS","metadata":{}}\nstdout: {"operation":"DETERMINING_WORKSPACE_TYPE","timestamp":"2022-04-21T06:26:59.188Z","status":"STARTED","metadata":{}}\n',
startTime: subMinutes(now, 10).toISOString(),
__typename: 'ExecutionLogEntry',
},
],
teardown: [],
...workspace?.stages,
},

View File

@@ -132,13 +132,13 @@ const batchPreviewStage = (
if (execution.stages === null) {
return undefined
}
return !execution.stages.srcExec
return execution.stages.srcExec.length === 0
? undefined
: {
text: 'Create batch spec preview',
details: (
<ExecutionLogEntry key={execution.stages.srcExec.key} logEntry={execution.stages.srcExec} now={now} />
),
details: execution.stages.srcExec.map(logEntry => (
<ExecutionLogEntry key={logEntry.key} logEntry={logEntry} now={now} />
)),
...genericStage(execution.stages.srcExec, expandedByDefault),
}
}

View File

@@ -896,7 +896,7 @@ type ResolvedBatchSpecWorkspaceResolver interface {
type BatchSpecWorkspaceStagesResolver interface {
Setup() []ExecutionLogEntryResolver
SrcExec() ExecutionLogEntryResolver
SrcExec() []ExecutionLogEntryResolver
Teardown() []ExecutionLogEntryResolver
}

View File

@@ -2080,10 +2080,9 @@ type BatchSpecWorkspaceStages {
setup: [ExecutionLogEntry!]!
"""
Execution log entry related to running src batch exec.
This field is null, if the step had not been executed.
Execution log entries related to running the steps of the batch spec.
"""
srcExec: ExecutionLogEntry
srcExec: [ExecutionLogEntry!]!
"""
Execution log entries related to tearing down the workspace.

View File

@@ -79,8 +79,8 @@ The default run type.
- Pipeline for `DockerImages` changes:
- **Metadata**: Pipeline metadata
- **Test builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build opentelemetry-collector, Build server, Build sg
- **Scan test builds**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan opentelemetry-collector, Scan server, Scan sg
- **Test builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build batcheshelper, Build opentelemetry-collector, Build server, Build sg
- **Scan test builds**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan batcheshelper, Scan opentelemetry-collector, Scan server, Scan sg
- Upload build trace
### Release branch nightly healthcheck build
@@ -126,8 +126,8 @@ Base pipeline (more steps might be included based on branch changes):
- **Metadata**: Pipeline metadata
- **Pipeline setup**: Trigger async
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary, Build docker registry mirror image
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan opentelemetry-collector, Scan server, Scan sg
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build batcheshelper, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary, Build docker registry mirror image
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan batcheshelper, Scan opentelemetry-collector, Scan server, Scan sg
- **Linters and static analysis**: GraphQL lint, Run sg lint
- **Client checks**: Puppeteer tests prep, Puppeteer tests for chrome extension, Puppeteer tests chunk #1, Puppeteer tests chunk #2, Puppeteer tests chunk #3, Puppeteer tests chunk #4, Puppeteer tests chunk #5, Puppeteer tests chunk #6, Puppeteer tests chunk #7, Puppeteer tests chunk #8, Puppeteer tests chunk #9, Puppeteer tests chunk #10, Puppeteer tests chunk #11, Puppeteer tests chunk #12, Puppeteer tests chunk #13, Puppeteer tests chunk #14, Puppeteer tests chunk #15, Upload Storybook to Chromatic, Test (all), Build, Enterprise build, Test (client/web), Test (client/browser), Test (client/jetbrains), Build TS, ESLint (all), Stylelint (all)
- **Go checks**: Test (all), Test (internal/codeintel/shared/dbstore), Test (internal/codeintel/shared/lsifstore), Test (enterprise/internal/insights), Test (internal/database), Test (internal/repos), Test (enterprise/internal/batches), Test (cmd/frontend), Test (enterprise/internal/database), Test (enterprise/cmd/frontend/internal/batches/resolvers), Test (dev/sg), Build
@@ -135,7 +135,7 @@ Base pipeline (more steps might be included based on branch changes):
- **CI script tests**: test-trace-command.sh
- **Integration tests**: Backend integration tests, Code Intel QA
- **End-to-end tests**: Sourcegraph E2E, Sourcegraph QA, Sourcegraph Cluster (deploy-sourcegraph) QA, Sourcegraph Upgrade
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, opentelemetry-collector, server, sg, Publish executor image, Publish executor binary, Publish docker registry mirror image
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, batcheshelper, opentelemetry-collector, server, sg, Publish executor image, Publish executor binary, Publish docker registry mirror image
- Upload build trace
### Release branch
@@ -146,8 +146,8 @@ Base pipeline (more steps might be included based on branch changes):
- **Metadata**: Pipeline metadata
- **Pipeline setup**: Trigger async
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary, Build docker registry mirror image
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan opentelemetry-collector, Scan server, Scan sg
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build batcheshelper, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary, Build docker registry mirror image
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan batcheshelper, Scan opentelemetry-collector, Scan server, Scan sg
- **Linters and static analysis**: GraphQL lint, Run sg lint
- **Client checks**: Puppeteer tests prep, Puppeteer tests for chrome extension, Puppeteer tests chunk #1, Puppeteer tests chunk #2, Puppeteer tests chunk #3, Puppeteer tests chunk #4, Puppeteer tests chunk #5, Puppeteer tests chunk #6, Puppeteer tests chunk #7, Puppeteer tests chunk #8, Puppeteer tests chunk #9, Puppeteer tests chunk #10, Puppeteer tests chunk #11, Puppeteer tests chunk #12, Puppeteer tests chunk #13, Puppeteer tests chunk #14, Puppeteer tests chunk #15, Upload Storybook to Chromatic, Test (all), Build, Enterprise build, Test (client/web), Test (client/browser), Test (client/jetbrains), Build TS, ESLint (all), Stylelint (all)
- **Go checks**: Test (all), Test (internal/codeintel/shared/dbstore), Test (internal/codeintel/shared/lsifstore), Test (enterprise/internal/insights), Test (internal/database), Test (internal/repos), Test (enterprise/internal/batches), Test (cmd/frontend), Test (enterprise/internal/database), Test (enterprise/cmd/frontend/internal/batches/resolvers), Test (dev/sg), Build
@@ -155,7 +155,7 @@ Base pipeline (more steps might be included based on branch changes):
- **CI script tests**: test-trace-command.sh
- **Integration tests**: Backend integration tests, Code Intel QA
- **End-to-end tests**: Sourcegraph E2E, Sourcegraph QA, Sourcegraph Cluster (deploy-sourcegraph) QA, Sourcegraph Upgrade
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, opentelemetry-collector, server, sg
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, batcheshelper, opentelemetry-collector, server, sg
- Upload build trace
### Browser extension release build
@@ -195,8 +195,8 @@ Base pipeline (more steps might be included based on branch changes):
- **Metadata**: Pipeline metadata
- **Pipeline setup**: Trigger async
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan opentelemetry-collector, Scan server, Scan sg
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build batcheshelper, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan batcheshelper, Scan opentelemetry-collector, Scan server, Scan sg
- **Linters and static analysis**: GraphQL lint, Run sg lint
- **Client checks**: Puppeteer tests prep, Puppeteer tests for chrome extension, Puppeteer tests chunk #1, Puppeteer tests chunk #2, Puppeteer tests chunk #3, Puppeteer tests chunk #4, Puppeteer tests chunk #5, Puppeteer tests chunk #6, Puppeteer tests chunk #7, Puppeteer tests chunk #8, Puppeteer tests chunk #9, Puppeteer tests chunk #10, Puppeteer tests chunk #11, Puppeteer tests chunk #12, Puppeteer tests chunk #13, Puppeteer tests chunk #14, Puppeteer tests chunk #15, Upload Storybook to Chromatic, Test (all), Build, Enterprise build, Test (client/web), Test (client/browser), Test (client/jetbrains), Build TS, ESLint (all), Stylelint (all)
- **Go checks**: Test (all), Test (internal/codeintel/shared/dbstore), Test (internal/codeintel/shared/lsifstore), Test (enterprise/internal/insights), Test (internal/database), Test (internal/repos), Test (enterprise/internal/batches), Test (cmd/frontend), Test (enterprise/internal/database), Test (enterprise/cmd/frontend/internal/batches/resolvers), Test (dev/sg), Build
@@ -204,7 +204,7 @@ Base pipeline (more steps might be included based on branch changes):
- **CI script tests**: test-trace-command.sh
- **Integration tests**: Backend integration tests, Code Intel QA
- **End-to-end tests**: Sourcegraph E2E, Sourcegraph QA, Sourcegraph Cluster (deploy-sourcegraph) QA, Sourcegraph Upgrade
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, gitserver-ms-git, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, opentelemetry-collector, server, sg, Publish executor image, Publish executor binary
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, gitserver-ms-git, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, batcheshelper, opentelemetry-collector, server, sg, Publish executor image, Publish executor binary
- Upload build trace
### Main dry run
@@ -220,8 +220,8 @@ Base pipeline (more steps might be included based on branch changes):
- **Metadata**: Pipeline metadata
- **Pipeline setup**: Trigger async
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan opentelemetry-collector, Scan server, Scan sg
- **Image builds**: Build alpine-3.14, Build cadvisor, Build codeinsights-db, Build codeintel-db, Build frontend, Build github-proxy, Build gitserver, Build gitserver-ms-git, Build grafana, Build indexed-searcher, Build jaeger-agent, Build jaeger-all-in-one, Build minio, Build node-exporter, Build postgres-12-alpine, Build postgres_exporter, Build precise-code-intel-worker, Build prometheus, Build prometheus-gcp, Build redis-cache, Build redis-store, Build redis_exporter, Build repo-updater, Build search-indexer, Build searcher, Build symbols, Build syntax-highlighter, Build worker, Build migrator, Build executor, Build executor-vm, Build batcheshelper, Build opentelemetry-collector, Build server, Build sg, Build executor image, Build executor binary
- **Image security scans**: Scan alpine-3.14, Scan cadvisor, Scan codeinsights-db, Scan codeintel-db, Scan frontend, Scan github-proxy, Scan gitserver, Scan grafana, Scan indexed-searcher, Scan jaeger-agent, Scan jaeger-all-in-one, Scan minio, Scan node-exporter, Scan postgres-12-alpine, Scan postgres_exporter, Scan precise-code-intel-worker, Scan prometheus, Scan prometheus-gcp, Scan redis-cache, Scan redis-store, Scan redis_exporter, Scan repo-updater, Scan search-indexer, Scan searcher, Scan symbols, Scan syntax-highlighter, Scan worker, Scan migrator, Scan executor, Scan executor-vm, Scan batcheshelper, Scan opentelemetry-collector, Scan server, Scan sg
- **Linters and static analysis**: GraphQL lint, Run sg lint
- **Client checks**: Puppeteer tests prep, Puppeteer tests for chrome extension, Puppeteer tests chunk #1, Puppeteer tests chunk #2, Puppeteer tests chunk #3, Puppeteer tests chunk #4, Puppeteer tests chunk #5, Puppeteer tests chunk #6, Puppeteer tests chunk #7, Puppeteer tests chunk #8, Puppeteer tests chunk #9, Puppeteer tests chunk #10, Puppeteer tests chunk #11, Puppeteer tests chunk #12, Puppeteer tests chunk #13, Puppeteer tests chunk #14, Puppeteer tests chunk #15, Upload Storybook to Chromatic, Test (all), Build, Enterprise build, Test (client/web), Test (client/browser), Test (client/jetbrains), Build TS, ESLint (all), Stylelint (all)
- **Go checks**: Test (all), Test (internal/codeintel/shared/dbstore), Test (internal/codeintel/shared/lsifstore), Test (enterprise/internal/insights), Test (internal/database), Test (internal/repos), Test (enterprise/internal/batches), Test (cmd/frontend), Test (enterprise/internal/database), Test (enterprise/cmd/frontend/internal/batches/resolvers), Test (dev/sg), Build
@@ -229,7 +229,7 @@ Base pipeline (more steps might be included based on branch changes):
- **CI script tests**: test-trace-command.sh
- **Integration tests**: Backend integration tests, Code Intel QA
- **End-to-end tests**: Sourcegraph E2E, Sourcegraph QA, Sourcegraph Cluster (deploy-sourcegraph) QA, Sourcegraph Upgrade
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, opentelemetry-collector, server, sg
- **Publish images**: alpine-3.14, cadvisor, codeinsights-db, codeintel-db, frontend, github-proxy, gitserver, grafana, indexed-searcher, jaeger-agent, jaeger-all-in-one, minio, node-exporter, postgres-12-alpine, postgres_exporter, precise-code-intel-worker, prometheus, prometheus-gcp, redis-cache, redis-store, redis_exporter, repo-updater, search-indexer, searcher, symbols, syntax-highlighter, worker, migrator, executor, executor-vm, batcheshelper, opentelemetry-collector, server, sg
- Upload build trace
### Patch image
@@ -292,6 +292,7 @@ Base pipeline (more steps might be included based on branch changes):
- Build migrator
- Build executor
- Build executor-vm
- Build batcheshelper
- Build opentelemetry-collector
- Build server
- Build sg

View File

@@ -87,6 +87,7 @@ Available commands in `sg.config.yaml`:
* batches-executor
* batches-executor-firecracker
* batcheshelper-builder
* bext
* caddy
* codeintel-executor

View File

@@ -0,0 +1,22 @@
# This Dockerfile was generated from github.com/sourcegraph/godockerize. It
# was not written by a human, and as such looks janky. As you change this
# file, please don't be scared to make it more pleasant / remove hadolint
# ignores.
FROM sourcegraph/alpine-3.14:180512_2022-10-31_84d1e240bb40@sha256:179ad53ab463ebc804f93de967113739fa73efc2cea6d9c53a9106be45f79d5e
ARG COMMIT_SHA="unknown"
ARG DATE="unknown"
ARG VERSION="unknown"
LABEL org.opencontainers.image.revision=${COMMIT_SHA}
LABEL org.opencontainers.image.created=${DATE}
LABEL org.opencontainers.image.version=${VERSION}
LABEL com.sourcegraph.github.url=https://github.com/sourcegraph/sourcegraph/commit/${COMMIT_SHA}
RUN apk add --no-cache \
# We require git 2.38+ because we need a fix for this vulnerability to be included:
# https://github.blog/2022-04-12-git-security-vulnerability-announced/
'git>=2.38.1' --repository=http://dl-cdn.alpinelinux.org/alpine/v3.17/main
COPY batcheshelper /usr/local/bin/

View File

@@ -0,0 +1,26 @@
#!/usr/bin/env bash
cd "$(dirname "${BASH_SOURCE[0]}")"/../../..
set -ex
OUTPUT=$(mktemp -d -t sgdockerbuild_XXXXXXX)
cleanup() {
rm -rf "$OUTPUT"
}
trap cleanup EXIT
# Environment for building linux binaries
export GO111MODULE=on
export GOARCH=amd64
export GOOS=linux
export CGO_ENABLED=0
pkg="github.com/sourcegraph/sourcegraph/enterprise/cmd/batcheshelper"
go build -trimpath -ldflags "-X github.com/sourcegraph/sourcegraph/internal/version.version=$VERSION -X github.com/sourcegraph/sourcegraph/internal/version.timestamp=$(date +%s)" -buildmode exe -tags dist -o "$OUTPUT/$(basename $pkg)" "$pkg"
docker build -f enterprise/cmd/batcheshelper/Dockerfile -t "$IMAGE" "$OUTPUT" \
--platform="${PLATFORM:-linux/amd64}" \
--progress=plain \
--build-arg COMMIT_SHA \
--build-arg DATE \
--build-arg VERSION
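A hypothetical invocation of this build script (the variable names IMAGE, VERSION, COMMIT_SHA, DATE, and PLATFORM come from the script itself; the values are placeholders, and the script's path follows from the `cd .../../../..` line and the Dockerfile path it references):

```sh
IMAGE=sourcegraph/batcheshelper:dev \
VERSION=0.0.0-dev \
COMMIT_SHA="$(git rev-parse HEAD)" \
DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  ./enterprise/cmd/batcheshelper/build.sh
```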

View File

@@ -0,0 +1,76 @@
package main
import (
"context"
"encoding/json"
"fmt"
"os"
"strconv"
batcheslib "github.com/sourcegraph/sourcegraph/lib/batches"
"github.com/sourcegraph/sourcegraph/lib/batches/execution"
"github.com/sourcegraph/sourcegraph/lib/errors"
)
func main() {
if err := doMain(); err != nil {
fmt.Fprintln(os.Stderr, err.Error())
os.Exit(1)
}
os.Exit(0)
}
func doMain() error {
ctx := context.Background()
args := os.Args[1:]
if len(args) != 2 {
return errors.New("invalid argument count")
}
mode := args[0]
stepIdx, err := strconv.Atoi(args[1])
if err != nil {
return errors.Wrap(err, "invalid step index")
}
var executionInput batcheslib.WorkspacesExecutionInput
var previousResult execution.AfterStepResult
c, err := os.ReadFile("input.json")
if err != nil {
return errors.Wrap(err, "failed to read execution input file")
}
if err := json.Unmarshal(c, &executionInput); err != nil {
return errors.Wrap(err, "failed to unmarshal execution input")
}
if stepIdx > 0 {
stepResultPath := fmt.Sprintf("step%d.json", stepIdx-1)
c, err := os.ReadFile(stepResultPath)
if err != nil {
return errors.Wrap(err, "failed to read step result file")
}
if err := json.Unmarshal(c, &previousResult); err != nil {
return errors.Wrap(err, "failed to unmarshal step result file")
}
}
switch mode {
case "pre":
if err := execPre(ctx, stepIdx, executionInput, previousResult); err != nil {
return err
}
case "post":
if err := execPost(ctx, stepIdx, executionInput, previousResult); err != nil {
return err
}
default:
return errors.Newf("invalid mode %q", mode)
}
return nil
}
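Based on the argument parsing above, the executor is expected to wrap each step N with a pre and a post invocation of this helper; a sketch for step 1 (file names follow the conventions used throughout this helper, while the exact orchestration lives in the executor and is not shown in this fragment):

```sh
batcheshelper pre 1   # reads input.json and step0.json, renders step1.sh
sh step1.sh           # run by the executor inside the step's container
batcheshelper post 1  # reads stdout1.log/stderr1.log, writes step1.json,
                      # and emits the cache event on stdout
```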

View File

@@ -0,0 +1,135 @@
package main
import (
"context"
"encoding/json"
"fmt"
"os"
"os/exec"
"time"
batcheslib "github.com/sourcegraph/sourcegraph/lib/batches"
"github.com/sourcegraph/sourcegraph/lib/batches/execution"
"github.com/sourcegraph/sourcegraph/lib/batches/execution/cache"
"github.com/sourcegraph/sourcegraph/lib/batches/git"
"github.com/sourcegraph/sourcegraph/lib/batches/template"
"github.com/sourcegraph/sourcegraph/lib/errors"
)
func execPost(ctx context.Context, stepIdx int, executionInput batcheslib.WorkspacesExecutionInput, previousResult execution.AfterStepResult) error {
step := executionInput.Steps[stepIdx]
// Generate the diff.
if _, err := runGitCmd(ctx, "git", "add", "--all"); err != nil {
return errors.Wrap(err, "git add --all failed")
}
diff, err := runGitCmd(ctx, "git", "diff", "--cached", "--no-prefix", "--binary")
if err != nil {
return errors.Wrap(err, "git diff --cached --no-prefix --binary failed")
}
// Read the stdout of the current step.
stdout, err := os.ReadFile(fmt.Sprintf("stdout%d.log", stepIdx))
if err != nil {
return errors.Wrap(err, "failed to read stdout file")
}
// Read the stderr of the current step.
stderr, err := os.ReadFile(fmt.Sprintf("stderr%d.log", stepIdx))
if err != nil {
return errors.Wrap(err, "failed to read stderr file")
}
// Build the step result.
stepResult := execution.AfterStepResult{
Stdout: string(stdout),
Stderr: string(stderr),
StepIndex: stepIdx,
Diff: string(diff),
// Those will be set below.
Outputs: make(map[string]interface{}),
}
// Render the step outputs.
changes, err := git.ChangesInDiff([]byte(previousResult.Diff))
if err != nil {
return errors.Wrap(err, "failed to get changes in diff")
}
outputs := previousResult.Outputs
if outputs == nil {
outputs = make(map[string]any)
}
stepContext := template.StepContext{
BatchChange: executionInput.BatchChangeAttributes,
Repository: template.Repository{
Name: executionInput.Repository.Name,
Branch: executionInput.Branch.Name,
FileMatches: executionInput.SearchResultPaths,
},
Outputs: outputs,
Steps: template.StepsContext{
Path: executionInput.Path,
Changes: changes,
},
PreviousStep: previousResult,
Step: stepResult,
}
// Render and evaluate outputs.
if err := batcheslib.SetOutputs(step.Outputs, outputs, &stepContext); err != nil {
return errors.Wrap(err, "setting outputs")
}
for k, v := range outputs {
stepResult.Outputs[k] = v
}
// Serialize the step result to disk.
cntnt, err := json.Marshal(stepResult)
if err != nil {
return errors.Wrap(err, "marshalling step result")
}
if err := os.WriteFile(fmt.Sprintf("step%d.json", stepIdx), cntnt, os.ModePerm); err != nil {
return errors.Wrap(err, "failed to write step result file")
}
key := cache.KeyForWorkspace(
&executionInput.BatchChangeAttributes,
batcheslib.Repository{
ID: executionInput.Repository.ID,
Name: executionInput.Repository.Name,
BaseRef: executionInput.Branch.Name,
BaseRev: executionInput.Branch.Target.OID,
FileMatches: executionInput.SearchResultPaths,
},
executionInput.Path,
os.Environ(),
executionInput.OnlyFetchWorkspace,
executionInput.Steps,
stepIdx,
nil, // todo: should not be nil.
)
k, err := key.Key()
if err != nil {
return errors.Wrap(err, "failed to compute cache key")
}
metadata := &batcheslib.CacheAfterStepResultMetadata{
Key: k,
Value: stepResult,
}
e := batcheslib.LogEvent{Operation: batcheslib.LogEventOperationCacheAfterStepResult, Status: batcheslib.LogEventStatusSuccess, Metadata: metadata}
e.Timestamp = time.Now().UTC().Truncate(time.Millisecond)
err = json.NewEncoder(os.Stdout).Encode(e)
if err != nil {
return errors.Wrap(err, "failed to encode after step result event")
}
return nil
}
func runGitCmd(ctx context.Context, args ...string) ([]byte, error) {
cmd := exec.CommandContext(ctx, args[0], args[1:]...)
cmd.Dir = "repository"
return cmd.Output()
}
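The event encoded at the end of execPost, sketched in serialized form. The operation/status/timestamp spelling follows the log events mocked earlier in this diff; the metadata field names are an assumption based on the Go structs above, and all values are placeholders:

```json
{"operation":"CACHE_AFTER_STEP_RESULT","timestamp":"2022-11-10T00:00:00.000Z","status":"SUCCESS","metadata":{"key":"<computed cache key>","value":{"stepIndex":0,"diff":"<unified diff>","outputs":{},"stdout":"<captured stdout>","stderr":"<captured stderr>"}}}
```

The Sourcegraph instance parses this back out of the execution log output (see ParseJSONLogsFromOutput in the resolver changes below), which is exactly the log-as-transport mechanism the known-issues list flags as something that should become an artifact.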

View File

@@ -0,0 +1,181 @@
package main
import (
"bytes"
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/kballard/go-shellquote"
batcheslib "github.com/sourcegraph/sourcegraph/lib/batches"
"github.com/sourcegraph/sourcegraph/lib/batches/execution"
"github.com/sourcegraph/sourcegraph/lib/batches/git"
"github.com/sourcegraph/sourcegraph/lib/batches/template"
"github.com/sourcegraph/sourcegraph/lib/errors"
)
func execPre(ctx context.Context, stepIdx int, executionInput batcheslib.WorkspacesExecutionInput, previousResult execution.AfterStepResult) error {
wd, err := os.Getwd()
if err != nil {
return errors.Wrap(err, "getting working directory")
}
step := executionInput.Steps[stepIdx]
changes, err := git.ChangesInDiff([]byte(previousResult.Diff))
if err != nil {
return errors.Wrap(err, "failed to compute changes")
}
outputs := previousResult.Outputs
if outputs == nil {
outputs = make(map[string]any)
}
stepContext := template.StepContext{
BatchChange: executionInput.BatchChangeAttributes,
Repository: template.Repository{
Name: executionInput.Repository.Name,
Branch: executionInput.Branch.Name,
FileMatches: executionInput.SearchResultPaths,
},
Outputs: outputs,
Steps: template.StepsContext{
Path: executionInput.Path,
Changes: changes,
},
PreviousStep: previousResult,
}
// Render the step.Env variables as templates.
// Resolve step.Env given the current environment.
stepEnv, err := step.Env.Resolve(os.Environ())
if err != nil {
return errors.Wrap(err, "failed to resolve step env")
}
env, err := template.RenderStepMap(stepEnv, &stepContext)
if err != nil {
return errors.Wrap(err, "failed to render step env")
}
envPreamble := ""
for k, v := range env {
envPreamble += shellquote.Join("export", fmt.Sprintf("%s=%s", k, v))
envPreamble += "\n"
}
// Render the step.Run template.
var runScript bytes.Buffer
if err := template.RenderStepTemplate("step-run", step.Run, &runScript, &stepContext); err != nil {
return errors.Wrap(err, "failed to render step.run")
}
var fileMountsPreamble string
// Check if the step needs to be skipped.
cond, err := template.EvalStepCondition(step.IfCondition(), &stepContext)
if err != nil {
return errors.Wrap(err, "failed to evaluate step condition")
}
if !cond {
// Step is skipped.
// TODO: This should somehow communicate to the executor "don't run this step".
// For now, we simply don't run the script, but the step's image still
// has to be pulled, which is a performance annoyance.
runScript = *bytes.NewBufferString("exit 0")
} else {
tmpFileDir := filepath.Join(wd, fmt.Sprintf("step%dfiles", stepIdx))
if err := os.Mkdir(tmpFileDir, os.ModePerm); err != nil {
return errors.Wrap(err, "failed to create directory for file mounts")
}
// Parse and render the step.Files.
filesToMount, err := createFilesToMount(tmpFileDir, step, &stepContext)
if err != nil {
return errors.Wrap(err, "failed to create files to mount")
}
for path, file := range filesToMount {
// TODO: Does file.Name() work?
fileMountsPreamble += fmt.Sprintf("%s\n", shellquote.Join("cp", file.Name(), path))
}
// Mount any paths on the local system to the docker container. The paths have already been validated during parsing.
for _, mount := range step.Mount {
workspaceFilePath, err := getAbsoluteMountPath(wd, mount.Path)
if err != nil {
return errors.Wrap(err, "getAbsoluteMountPath")
}
fileMountsPreamble += fmt.Sprintf("%s\n", shellquote.Join("cp", workspaceFilePath, mount.Mountpoint))
}
}
stepScriptPath := fmt.Sprintf("step%d.sh", stepIdx)
fullScript := []byte(envPreamble + fileMountsPreamble + runScript.String())
if err := os.WriteFile(stepScriptPath, fullScript, os.ModePerm); err != nil {
return errors.Wrap(err, "failed to write step script file")
}
if _, err := exec.CommandContext(ctx, "chmod", "+x", stepScriptPath).CombinedOutput(); err != nil {
return errors.Wrap(err, "failed to chmod step script file")
}
return nil
}
// createFilesToMount creates temporary files with the contents of Step.Files
// that are to be mounted into the container that executes the step.
// TODO: Remove these files in the `after` step.
func createFilesToMount(tempDir string, step batcheslib.Step, stepContext *template.StepContext) (map[string]*os.File, error) {
// Parse and render the step.Files.
files, err := template.RenderStepMap(step.Files, stepContext)
if err != nil {
return nil, errors.Wrap(err, "parsing step files")
}
// Create temp files with the rendered content of step.Files so that we
// can mount them into the container.
filesToMount := make(map[string]*os.File, len(files))
for name, content := range files {
fp, err := os.CreateTemp(tempDir, "")
if err != nil {
return nil, errors.Wrap(err, "creating temporary file")
}
if _, err := fp.WriteString(content); err != nil {
return nil, errors.Wrap(err, "writing to temporary file")
}
if err := fp.Close(); err != nil {
return nil, errors.Wrap(err, "closing temporary file")
}
filesToMount[name] = fp
}
return filesToMount, nil
}
func getAbsoluteMountPath(batchSpecDir string, mountPath string) (string, error) {
p := mountPath
if !filepath.IsAbs(p) {
// Try to build the absolute path since Docker will only mount absolute paths
p = filepath.Join(batchSpecDir, p)
}
pathInfo, err := os.Stat(p)
if os.IsNotExist(err) {
return "", errors.Newf("mount path %s does not exist", p)
} else if err != nil {
return "", errors.Wrap(err, "mount path validation")
}
if !strings.HasPrefix(p, batchSpecDir) {
return "", errors.Newf("mount path %s is not in the same directory or subdirectory as the batch spec", mountPath)
}
// A directory path mounted into Docker must end with the path separator,
// so append it to make users' lives easier.
if pathInfo.IsDir() && !strings.HasSuffix(p, string(filepath.Separator)) {
p += string(filepath.Separator)
}
return p, nil
}
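Putting execPre together: the generated stepN.sh is the rendered env preamble, then the file-mount copies, then the rendered step.run. A hypothetical step0.sh for a step with one env var, one step.Files entry mounted at hello.txt, and a one-line run command (all names, paths, and values invented for illustration):

```sh
export NAME=world
cp /tmp/workdir/step0files/3598741230 hello.txt
echo "Hello $NAME" >> hello.txt
```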

View File

@@ -2,6 +2,7 @@ package resolvers
import (
"context"
"fmt"
"strings"
"sync"
@@ -121,6 +122,76 @@ func (r *batchSpecWorkspaceResolver) computeStepResolvers() ([]graphqlbackend.Ba
return nil, nil
}
if r.execution != nil && r.execution.Version == 2 {
skippedSteps, err := batcheslib.SkippedStepsForRepo(r.batchSpec, r.repoResolver.Name(), r.workspace.FileMatches)
if err != nil {
return nil, err
}
resolvers := make([]graphqlbackend.BatchSpecWorkspaceStepResolver, 0, len(r.batchSpec.Steps))
for idx, step := range r.batchSpec.Steps {
skipped := false
// Mark all steps as skipped when a cached result was found.
if r.CachedResultFound() {
skipped = true
}
// Mark all steps as skipped when a workspace is skipped.
if r.workspace.Skipped {
skipped = true
}
// If we have marked the step as to-be-skipped, we have to translate
// that here into the workspace step info.
if _, ok := skippedSteps[idx]; ok {
skipped = true
}
var (
entry workerutil.ExecutionLogEntry
ok bool
)
if r.execution != nil {
entry, ok = findExecutionLogEntry(r.execution, fmt.Sprintf("step.docker.step.%d.run", idx))
}
resolver := &batchSpecWorkspaceStepV2Resolver{
index: idx,
step: step,
skipped: skipped,
logEntry: entry,
logEntryFound: ok,
store: r.store,
repo: r.repoResolver,
baseRev: r.workspace.Commit,
}
// See if we have a cache result for this step.
if cachedResult, ok := r.workspace.StepCacheResult(idx + 1); ok {
resolver.skipped = true
resolver.cachedResult = cachedResult.Value
} else if r.execution != nil {
e, ok := findExecutionLogEntry(r.execution, fmt.Sprintf("step.docker.step.%d.post", idx))
if ok {
ev := btypes.ParseJSONLogsFromOutput(e.Out)
for _, e := range ev {
if e.Operation == batcheslib.LogEventOperationCacheAfterStepResult {
m, ok := e.Metadata.(*batcheslib.CacheAfterStepResultMetadata)
if ok {
resolver.cachedResult = &m.Value
}
}
}
}
}
resolvers = append(resolvers, resolver)
}
return resolvers, nil
}
var stepInfo = make(map[int]*btypes.StepInfo)
var entryExitCode *int
if r.execution != nil {
@@ -165,11 +236,11 @@ func (r *batchSpecWorkspaceResolver) computeStepResolvers() ([]graphqlbackend.Ba
// If we have marked the step as to-be-skipped, we have to translate
// that here into the workspace step info.
if _, ok := skippedSteps[int32(idx)]; ok {
if _, ok := skippedSteps[idx]; ok {
si.Skipped = true
}
resolver := &batchSpecWorkspaceStepResolver{
resolver := &batchSpecWorkspaceStepV1Resolver{
index: idx,
step: step,
stepInfo: si,
@@ -198,6 +269,9 @@ func (r *batchSpecWorkspaceResolver) Step(args graphqlbackend.BatchSpecWorkspace
if int(args.Index) > len(r.batchSpec.Steps) {
return nil, nil
}
if args.Index <= 0 {
return nil, errors.New("invalid step index")
}
resolvers, err := r.computeStepResolvers()
if err != nil {
@@ -460,17 +534,32 @@ type batchSpecWorkspaceStagesResolver struct {
var _ graphqlbackend.BatchSpecWorkspaceStagesResolver = &batchSpecWorkspaceStagesResolver{}
func (r *batchSpecWorkspaceStagesResolver) Setup() []graphqlbackend.ExecutionLogEntryResolver {
return r.executionLogEntryResolversWithPrefix("setup.")
res := r.executionLogEntryResolversWithPrefix("setup.")
// V2 execution has an additional "setup" step that applies the git diff of the previous
// cached result. This shall land under setup, so we fetch it additionally here.
a, found := findExecutionLogEntry(r.execution, "step.docker.apply-diff")
if found {
res = append(res, graphqlbackend.NewExecutionLogEntryResolver(r.store.DatabaseDB(), a))
}
return res
}
func (r *batchSpecWorkspaceStagesResolver) SrcExec() graphqlbackend.ExecutionLogEntryResolver {
func (r *batchSpecWorkspaceStagesResolver) SrcExec() []graphqlbackend.ExecutionLogEntryResolver {
// V1 execution uses a single `step.src.batch-exec` step; for backcompat we return just
// that here.
if entry, ok := findExecutionLogEntry(r.execution, "step.src.batch-exec"); ok {
return graphqlbackend.NewExecutionLogEntryResolver(r.store.DatabaseDB(), entry)
return []graphqlbackend.ExecutionLogEntryResolver{graphqlbackend.NewExecutionLogEntryResolver(r.store.DatabaseDB(), entry)}
}
// Backcompat: The step was unnamed before.
if entry, ok := findExecutionLogEntry(r.execution, "step.src.0"); ok {
return graphqlbackend.NewExecutionLogEntryResolver(r.store.DatabaseDB(), entry)
return []graphqlbackend.ExecutionLogEntryResolver{graphqlbackend.NewExecutionLogEntryResolver(r.store.DatabaseDB(), entry)}
}
if r.execution.Version == 2 {
// V2 execution: running a spec now involves multiple execution steps:
// for each step N, {N-pre, N, N-post}.
return r.executionLogEntryResolversWithPrefix("step.docker.step.")
}
return nil
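With the V2 naming scheme, the execution log for a two-step spec would contain entries keyed roughly as follows. The apply-diff entry only appears when a cached diff is applied first; the .run and .post suffixes come from the key lookups in computeStepResolvers above, while .pre is inferred from the {N-pre, N, N-post} comment:

```
step.docker.apply-diff
step.docker.step.0.pre
step.docker.step.0.run
step.docker.step.0.post
step.docker.step.1.pre
step.docker.step.1.run
step.docker.step.1.post
```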

View File

@@ -2,17 +2,20 @@ package resolvers
import (
"context"
"strings"
"time"
"github.com/sourcegraph/sourcegraph/cmd/frontend/graphqlbackend"
"github.com/sourcegraph/sourcegraph/enterprise/internal/batches/store"
btypes "github.com/sourcegraph/sourcegraph/enterprise/internal/batches/types"
"github.com/sourcegraph/sourcegraph/internal/gitserver"
"github.com/sourcegraph/sourcegraph/internal/gqlutil"
"github.com/sourcegraph/sourcegraph/internal/workerutil"
batcheslib "github.com/sourcegraph/sourcegraph/lib/batches"
"github.com/sourcegraph/sourcegraph/lib/batches/execution"
)
type batchSpecWorkspaceStepResolver struct {
type batchSpecWorkspaceStepV1Resolver struct {
store *store.Store
repo *graphqlbackend.RepositoryResolver
baseRev string
@@ -23,19 +26,19 @@ type batchSpecWorkspaceStepResolver struct {
cachedResult *execution.AfterStepResult
}
func (r *batchSpecWorkspaceStepResolver) Number() int32 {
func (r *batchSpecWorkspaceStepV1Resolver) Number() int32 {
return int32(r.index + 1)
}
func (r *batchSpecWorkspaceStepResolver) Run() string {
func (r *batchSpecWorkspaceStepV1Resolver) Run() string {
return r.step.Run
}
func (r *batchSpecWorkspaceStepResolver) Container() string {
func (r *batchSpecWorkspaceStepV1Resolver) Container() string {
return r.step.Container
}
func (r *batchSpecWorkspaceStepResolver) IfCondition() *string {
func (r *batchSpecWorkspaceStepV1Resolver) IfCondition() *string {
cond := r.step.IfCondition()
if cond == "" {
return nil
@@ -43,15 +46,15 @@ func (r *batchSpecWorkspaceStepResolver) IfCondition() *string {
return &cond
}
func (r *batchSpecWorkspaceStepResolver) CachedResultFound() bool {
func (r *batchSpecWorkspaceStepV1Resolver) CachedResultFound() bool {
return r.stepInfo.StartedAt.IsZero() && r.cachedResult != nil
}
func (r *batchSpecWorkspaceStepResolver) Skipped() bool {
func (r *batchSpecWorkspaceStepV1Resolver) Skipped() bool {
return r.CachedResultFound() || r.stepInfo.Skipped
}
func (r *batchSpecWorkspaceStepResolver) OutputLines(ctx context.Context, args *graphqlbackend.BatchSpecWorkspaceStepOutputLinesArgs) (*[]string, error) {
func (r *batchSpecWorkspaceStepV1Resolver) OutputLines(ctx context.Context, args *graphqlbackend.BatchSpecWorkspaceStepOutputLinesArgs) (*[]string, error) {
lines := r.stepInfo.OutputLines
if args.After != nil {
lines = lines[*args.After:]
@@ -63,21 +66,21 @@ func (r *batchSpecWorkspaceStepResolver) OutputLines(ctx context.Context, args *
return &lines, nil
}
func (r *batchSpecWorkspaceStepResolver) StartedAt() *gqlutil.DateTime {
func (r *batchSpecWorkspaceStepV1Resolver) StartedAt() *gqlutil.DateTime {
if r.stepInfo.StartedAt.IsZero() {
return nil
}
return &gqlutil.DateTime{Time: r.stepInfo.StartedAt}
}
func (r *batchSpecWorkspaceStepResolver) FinishedAt() *gqlutil.DateTime {
func (r *batchSpecWorkspaceStepV1Resolver) FinishedAt() *gqlutil.DateTime {
if r.stepInfo.FinishedAt.IsZero() {
return nil
}
return &gqlutil.DateTime{Time: r.stepInfo.FinishedAt}
}
func (r *batchSpecWorkspaceStepResolver) ExitCode() *int32 {
func (r *batchSpecWorkspaceStepV1Resolver) ExitCode() *int32 {
if r.stepInfo.ExitCode == nil {
return nil
}
@@ -85,7 +88,7 @@ func (r *batchSpecWorkspaceStepResolver) ExitCode() *int32 {
return &code
}
func (r *batchSpecWorkspaceStepResolver) Environment() ([]graphqlbackend.BatchSpecWorkspaceEnvironmentVariableResolver, error) {
func (r *batchSpecWorkspaceStepV1Resolver) Environment() ([]graphqlbackend.BatchSpecWorkspaceEnvironmentVariableResolver, error) {
// The environment is dependent on the environment of the executor and on template variables
// that aren't known at the time when we resolve the workspace. If the step has already started,
// src-cli has logged the final env. Otherwise, we fall back to the preliminary set of env vars as determined by the
@@ -109,7 +112,7 @@ func (r *batchSpecWorkspaceStepResolver) Environment() ([]graphqlbackend.BatchSp
return resolvers, nil
}
func (r *batchSpecWorkspaceStepResolver) OutputVariables() *[]graphqlbackend.BatchSpecWorkspaceOutputVariableResolver {
func (r *batchSpecWorkspaceStepV1Resolver) OutputVariables() *[]graphqlbackend.BatchSpecWorkspaceOutputVariableResolver {
if r.CachedResultFound() {
resolvers := make([]graphqlbackend.BatchSpecWorkspaceOutputVariableResolver, 0, len(r.cachedResult.Outputs))
for k, v := range r.cachedResult.Outputs {
@@ -129,7 +132,7 @@ func (r *batchSpecWorkspaceStepResolver) OutputVariables() *[]graphqlbackend.Bat
return &resolvers
}
func (r *batchSpecWorkspaceStepResolver) DiffStat(ctx context.Context) (*graphqlbackend.DiffStat, error) {
func (r *batchSpecWorkspaceStepV1Resolver) DiffStat(ctx context.Context) (*graphqlbackend.DiffStat, error) {
diffRes, err := r.Diff(ctx)
if err != nil {
return nil, err
@@ -144,7 +147,7 @@ func (r *batchSpecWorkspaceStepResolver) DiffStat(ctx context.Context) (*graphql
return nil, nil
}
func (r *batchSpecWorkspaceStepResolver) Diff(ctx context.Context) (graphqlbackend.PreviewRepositoryComparisonResolver, error) {
func (r *batchSpecWorkspaceStepV1Resolver) Diff(ctx context.Context) (graphqlbackend.PreviewRepositoryComparisonResolver, error) {
if r.CachedResultFound() {
return graphqlbackend.NewPreviewRepositoryComparisonResolver(ctx, r.store.DatabaseDB(), gitserver.NewClient(r.store.DatabaseDB()), r.repo, r.baseRev, r.cachedResult.Diff)
}
@@ -154,6 +157,160 @@ func (r *batchSpecWorkspaceStepResolver) Diff(ctx context.Context) (graphqlbacke
return nil, nil
}
type batchSpecWorkspaceStepV2Resolver struct {
store *store.Store
repo *graphqlbackend.RepositoryResolver
baseRev string
index int
step batcheslib.Step
skipped bool
logEntry workerutil.ExecutionLogEntry
logEntryFound bool
cachedResult *execution.AfterStepResult
cachedResultFound bool
}
func (r *batchSpecWorkspaceStepV2Resolver) Number() int32 {
return int32(r.index + 1)
}
func (r *batchSpecWorkspaceStepV2Resolver) Run() string {
return r.step.Run
}
func (r *batchSpecWorkspaceStepV2Resolver) Container() string {
return r.step.Container
}
func (r *batchSpecWorkspaceStepV2Resolver) IfCondition() *string {
cond := r.step.IfCondition()
if cond == "" {
return nil
}
return &cond
}
func (r *batchSpecWorkspaceStepV2Resolver) CachedResultFound() bool {
return r.cachedResultFound
}
func (r *batchSpecWorkspaceStepV2Resolver) Skipped() bool {
return r.CachedResultFound() || r.skipped
}
func (r *batchSpecWorkspaceStepV2Resolver) OutputLines(ctx context.Context, args *graphqlbackend.BatchSpecWorkspaceStepOutputLinesArgs) (*[]string, error) {
if !r.logEntryFound {
return nil, nil
}
lines := strings.Split(r.logEntry.Out, "\n")
if args.After != nil {
lines = lines[*args.After:]
}
if int(args.First) < len(lines) {
lines = lines[:args.First]
}
return &lines, nil
}
func (r *batchSpecWorkspaceStepV2Resolver) StartedAt() *gqlutil.DateTime {
if !r.logEntryFound {
return nil
}
return &gqlutil.DateTime{Time: r.logEntry.StartTime}
}
func (r *batchSpecWorkspaceStepV2Resolver) FinishedAt() *gqlutil.DateTime {
if !r.logEntryFound {
return nil
}
if r.logEntry.DurationMs == nil {
return nil
}
finish := r.logEntry.StartTime.Add(time.Duration(*r.logEntry.DurationMs) * time.Millisecond)
return &gqlutil.DateTime{Time: finish}
}
func (r *batchSpecWorkspaceStepV2Resolver) ExitCode() *int32 {
if !r.logEntryFound {
return nil
}
code := r.logEntry.ExitCode
if code == nil {
return nil
}
i32 := int32(*code)
return &i32
}
func (r *batchSpecWorkspaceStepV2Resolver) Environment() ([]graphqlbackend.BatchSpecWorkspaceEnvironmentVariableResolver, error) {
// The environment depends on the environment of the executor and on template variables that
// aren't known at the time when we resolve the workspace.
// TODO: This is only a server-side pass over the environment variables. V2 execution does not
// yet support rendering the actual env var values used at runtime; see the V1 resolver for how
// that is handled there (src-cli logs the final env once the step has started).
env, err := r.step.Env.Resolve([]string{})
if err != nil {
return nil, err
}
resolvers := make([]graphqlbackend.BatchSpecWorkspaceEnvironmentVariableResolver, 0, len(env))
for k, v := range env {
resolvers = append(resolvers, &batchSpecWorkspaceEnvironmentVariableResolver{key: k, value: v})
}
return resolvers, nil
}
func (r *batchSpecWorkspaceStepV2Resolver) OutputVariables() *[]graphqlbackend.BatchSpecWorkspaceOutputVariableResolver {
// If a cached result was found previously, or one was generated for this step, we can
// use it to read the rendered output variables.
// TODO: Should we return the unrendered variables before the cached result is
// available like we do with env vars?
if r.cachedResult != nil {
resolvers := make([]graphqlbackend.BatchSpecWorkspaceOutputVariableResolver, 0, len(r.cachedResult.Outputs))
for k, v := range r.cachedResult.Outputs {
resolvers = append(resolvers, &batchSpecWorkspaceOutputVariableResolver{key: k, value: v})
}
return &resolvers
}
return nil
}
func (r *batchSpecWorkspaceStepV2Resolver) DiffStat(ctx context.Context) (*graphqlbackend.DiffStat, error) {
diffRes, err := r.Diff(ctx)
if err != nil {
return nil, err
}
if diffRes == nil {
return nil, nil
}
fd, err := diffRes.FileDiffs(ctx, &graphqlbackend.FileDiffsConnectionArgs{})
if err != nil {
return nil, err
}
return fd.DiffStat(ctx)
}
func (r *batchSpecWorkspaceStepV2Resolver) Diff(ctx context.Context) (graphqlbackend.PreviewRepositoryComparisonResolver, error) {
// If a cached result was found previously, or one was generated for this step, we can
// use it to return a comparison resolver.
if r.cachedResult != nil {
return graphqlbackend.NewPreviewRepositoryComparisonResolver(ctx, r.store.DatabaseDB(), gitserver.NewClient(r.store.DatabaseDB()), r.repo, r.baseRev, r.cachedResult.Diff)
}
return nil, nil
}
type batchSpecWorkspaceEnvironmentVariableResolver struct {
key string
value string

View File

@ -4,8 +4,11 @@ import (
"context"
"encoding/json"
"fmt"
"path"
"path/filepath"
"strconv"
"github.com/kballard/go-shellquote"
"github.com/sourcegraph/log"
"github.com/sourcegraph/sourcegraph/cmd/frontend/graphqlbackend"
@ -13,6 +16,7 @@ import (
btypes "github.com/sourcegraph/sourcegraph/enterprise/internal/batches/types"
apiclient "github.com/sourcegraph/sourcegraph/enterprise/internal/executor"
"github.com/sourcegraph/sourcegraph/internal/actor"
"github.com/sourcegraph/sourcegraph/internal/conf"
"github.com/sourcegraph/sourcegraph/internal/database"
"github.com/sourcegraph/sourcegraph/internal/encryption/keyring"
batcheslib "github.com/sourcegraph/sourcegraph/lib/batches"
@ -22,6 +26,7 @@ import (
const (
srcInputPath = "input.json"
srcPatchFile = "state.diff"
srcRepoDir = "repository"
srcTempDir = ".src-tmp"
srcWorkspaceFilesDir = "workspace-files"
@ -107,11 +112,8 @@ func transformRecord(ctx context.Context, logger log.Logger, s BatchesStore, job
},
Path: workspace.Path,
OnlyFetchWorkspace: workspace.OnlyFetchWorkspace,
// TODO: We can further optimize here later and tell src-cli to
// not run those steps so there is no discrepancy between the backend
// and src-cli calculating the if conditions.
Steps: batchSpec.Spec.Steps,
SearchResultPaths: workspace.FileMatches,
Steps: batchSpec.Spec.Steps,
SearchResultPaths: workspace.FileMatches,
BatchChangeAttributes: template.BatchChangeAttributes{
Name: batchSpec.Spec.Name,
Description: batchSpec.Spec.Description,
@ -171,21 +173,7 @@ func transformRecord(ctx context.Context, logger log.Logger, s BatchesStore, job
}
}
commands := []string{
"batch",
"exec",
"-f", srcInputPath,
"-repo", srcRepoDir,
// Tell src to store tmp files inside the workspace. Src currently
// runs on the host and we don't want pollution outside of the workspace.
"-tmp", srcTempDir,
}
// Only add the workspaceFiles flag if there are files to mount. This helps with backwards compatibility.
if len(workspaceFiles) > 0 {
commands = append(commands, "-workspaceFiles", srcWorkspaceFilesDir)
}
return apiclient.Job{
aj := apiclient.Job{
ID: int(job.ID),
VirtualMachineFiles: files,
RepositoryName: string(repo.Name),
@ -195,14 +183,130 @@ func transformRecord(ctx context.Context, logger log.Logger, s BatchesStore, job
// Later we might allow tweaking more git parameters, like submodules and LFS.
ShallowClone: true,
SparseCheckout: sparseCheckout,
CliSteps: []apiclient.CliStep{
RedactedValues: redactedEnvVars,
}
if job.Version == 2 {
helperImage := fmt.Sprintf("%s:%s", conf.ExecutorsBatcheshelperImage(), conf.ExecutorsBatcheshelperImageTag())
// Find the step to start with.
startStep := 0
dockerSteps := []apiclient.DockerStep{}
if executionInput.CachedStepResultFound {
cacheEntry := executionInput.CachedStepResult
// Apply the diff if necessary.
if cacheEntry.Diff != "" {
dockerSteps = append(dockerSteps, apiclient.DockerStep{
Key: "apply-diff",
Dir: srcRepoDir,
Commands: []string{
"set -e",
shellquote.Join("git", "apply", "-p0", "../"+srcPatchFile),
shellquote.Join("git", "add", "--all"),
},
Image: helperImage,
})
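// The composed apply-diff step runs, inside the "repository" checkout:
//   set -e
//   git apply -p0 ../state.diff
//   git add --all
// leaving the cached changes staged so that subsequent steps and the final
// diff see them.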
files[srcPatchFile] = apiclient.VirtualMachineFile{
Content: cacheEntry.Diff,
}
}
startStep = cacheEntry.StepIndex + 1
val, err := json.Marshal(cacheEntry)
if err != nil {
return apiclient.Job{}, err
}
// Write the step result for the last cached step.
files[fmt.Sprintf("step%d.json", cacheEntry.StepIndex)] = apiclient.VirtualMachineFile{
Content: string(val),
}
}
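// At this point a resumed execution ships two additional workspace files:
// the cached diff as state.diff (re-applied by the apply-diff step above)
// and the serialized AfterStepResult as step<N>.json, so that the first
// executed step, N+1, can read its predecessor's outputs.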
skipped, err := batcheslib.SkippedStepsForRepo(batchSpec.Spec, string(repo.Name), workspace.FileMatches)
if err != nil {
return apiclient.Job{}, err
}
for i := startStep; i < len(batchSpec.Spec.Steps); i++ {
step := batchSpec.Spec.Steps[i]
// Skip statically skipped steps.
if _, skipped := skipped[i]; skipped {
continue
}
runDir := srcRepoDir
if workspace.Path != "" {
runDir = path.Join(runDir, workspace.Path)
}
runDirToScriptDir, err := filepath.Rel("/"+runDir, "/")
if err != nil {
return apiclient.Job{}, err
}
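// Example: for a workspace at "repository/cmd/server", filepath.Rel yields
// "../../..", the prefix used below to address step<i>.sh and the log files
// from within the checkout, since those files live at the workspace root.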
dockerSteps = append(dockerSteps, apiclient.DockerStep{
Key: fmt.Sprintf("step.%d.pre", i),
Image: helperImage,
Env: secretEnvVars,
Dir: ".",
Commands: []string{
// TODO: This doesn't handle skipped steps correctly; it assumes
// outputs from step i-1 are present at all times.
shellquote.Join("batcheshelper", "pre", strconv.Itoa(i)),
},
})
dockerSteps = append(dockerSteps, apiclient.DockerStep{
Key: fmt.Sprintf("step.%d.run", i),
Image: step.Container,
Dir: runDir,
// Invoke the script file but also write stdout and stderr to separate files, which will then be
// consumed by the post step to build the AfterStepResult.
Commands: []string{
// Disable shell tracing without echoing the `set +x` invocation itself to stderr.
"{ set +x; } 2>/dev/null",
fmt.Sprintf(`(exec "%s/step%d.sh" | tee %s/stdout%d.log) 3>&1 1>&2 2>&3 | tee %s/stderr%d.log`, runDirToScriptDir, i, runDirToScriptDir, i, runDirToScriptDir, i),
},
})
// This step gets the diff, reads stdout and stderr, renders the outputs and builds the AfterStepResult.
dockerSteps = append(dockerSteps, apiclient.DockerStep{
Key: fmt.Sprintf("step.%d.post", i),
Image: helperImage,
Env: secretEnvVars,
Dir: ".",
Commands: []string{
shellquote.Join("batcheshelper", "post", strconv.Itoa(i)),
},
})
aj.DockerSteps = dockerSteps
}
} else {
commands := []string{
"batch",
"exec",
"-f", srcInputPath,
"-repo", srcRepoDir,
// Tell src to store tmp files inside the workspace. Src currently
// runs on the host and we don't want pollution outside of the workspace.
"-tmp", srcTempDir,
}
// Only add the workspaceFiles flag if there are files to mount. This helps with backwards compatibility.
if len(workspaceFiles) > 0 {
commands = append(commands, "-workspaceFiles", srcWorkspaceFilesDir)
}
aj.CliSteps = []apiclient.CliStep{
{
Key: "batch-exec",
Commands: commands,
Dir: ".",
Env: secretEnvVars,
},
},
RedactedValues: redactedEnvVars,
}, nil
}
}
return aj, nil
}
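To make the V2 shape concrete, here is a small illustrative sketch (hypothetical two-step spec; only the key scheme is taken from the code above) of the docker step sequence the V2 branch emits:

package main

import "fmt"

func main() {
    // Each non-skipped batch spec step expands into three docker steps: a
    // batcheshelper "pre" step, the user's container running the step script,
    // and a batcheshelper "post" step that builds the AfterStepResult.
    for i := 0; i < 2; i++ {
        fmt.Printf("step.%d.pre  -> batcheshelper pre (prepares the step, reading the previous AfterStepResult)\n", i)
        fmt.Printf("step.%d.run  -> user container (runs step%d.sh, tees stdout/stderr)\n", i, i)
        fmt.Printf("step.%d.post -> batcheshelper post (diff + outputs -> AfterStepResult)\n", i)
    }
}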

View File

@ -8,14 +8,15 @@ import (
"strings"
"github.com/graph-gophers/graphql-go"
"github.com/graph-gophers/graphql-go/relay"
"github.com/sourcegraph/log"
"github.com/sourcegraph/sourcegraph/cmd/frontend/graphqlbackend"
"github.com/sourcegraph/sourcegraph/enterprise/internal/batches/service"
"github.com/sourcegraph/sourcegraph/enterprise/internal/batches/store"
btypes "github.com/sourcegraph/sourcegraph/enterprise/internal/batches/types"
"github.com/sourcegraph/sourcegraph/internal/actor"
"github.com/sourcegraph/sourcegraph/internal/api"
"github.com/sourcegraph/sourcegraph/internal/database"
"github.com/sourcegraph/sourcegraph/internal/encryption/keyring"
"github.com/sourcegraph/sourcegraph/internal/workerutil"
@ -56,7 +57,7 @@ type workspaceCacheKey struct {
dbWorkspace *btypes.BatchSpecWorkspace
repo batcheslib.Repository
stepCacheKeys []stepCacheKey
skippedSteps map[int32]struct{}
skippedSteps map[int]struct{}
}
// process runs one workspace creation run for the given job utilizing the given
@ -154,7 +155,7 @@ func (r *batchSpecWorkspaceCreator) process(
}
repo := batcheslib.Repository{
ID: string(graphqlbackend.MarshalRepositoryID(w.Repo.ID)),
ID: string(marshalRepositoryID(w.Repo.ID)),
Name: string(w.Repo.Name),
BaseRef: w.Branch,
BaseRev: string(w.Commit),
@ -169,7 +170,7 @@ func (r *batchSpecWorkspaceCreator) process(
stepCacheKeys := make([]stepCacheKey, 0, len(spec.Spec.Steps))
// Generate cache keys for all the steps.
for i := 0; i < len(spec.Spec.Steps); i++ {
if _, ok := skippedSteps[int32(i)]; ok {
if _, ok := skippedSteps[i]; ok {
continue
}
@ -249,6 +250,7 @@ func (r *batchSpecWorkspaceCreator) process(
// TODO: In the future, move this to a separate field, so we can
// tell the two cases apart.
if len(spec.Spec.Steps) == len(workspace.skippedSteps) {
// TODO: Doesn't this mean we don't build changeset specs?
workspace.dbWorkspace.CachedResultFound = true
continue
}
@ -257,7 +259,7 @@ func (r *batchSpecWorkspaceCreator) process(
latestStepIdx := -1
for i := len(spec.Spec.Steps) - 1; i >= 0; i-- {
// Keep skipping steps until we hit the first one that we do want to run.
if _, ok := workspace.skippedSteps[int32(i)]; ok {
if _, ok := workspace.skippedSteps[i]; ok {
continue
}
latestStepIdx = i
@ -267,6 +269,9 @@ func (r *batchSpecWorkspaceCreator) process(
continue
}
// TODO: Should we also do dynamic evaluation, instead of just static?
// We have everything that's needed at this point, including the latest
// execution step result.
res, found := workspace.dbWorkspace.StepCacheResult(latestStepIdx + 1)
if !found {
// There is no cache result available, proceed.
@ -393,7 +398,7 @@ func changesetSpecsForImports(ctx context.Context, s *store.Store, importChanges
repoNameIDs := make(map[string]string, len(repos))
for _, r := range repos {
repoNameIDs[string(r.Name)] = string(graphqlbackend.MarshalRepositoryID(r.ID))
repoNameIDs[string(r.Name)] = string(marshalRepositoryID(r.ID))
}
return repoNameIDs, nil
})
@ -401,7 +406,8 @@ func changesetSpecsForImports(ctx context.Context, s *store.Store, importChanges
return nil, err
}
for _, c := range specs {
repoID, err := graphqlbackend.UnmarshalRepositoryID(graphql.ID(c.BaseRepository))
var repoID api.RepoID
err = relay.UnmarshalSpec(graphql.ID(c.BaseRepository), &repoID)
if err != nil {
return nil, err
}
@ -418,3 +424,7 @@ func changesetSpecsForImports(ctx context.Context, s *store.Store, importChanges
}
return cs, nil
}
func marshalRepositoryID(id api.RepoID) graphql.ID {
return relay.MarshalID("Repository", id)
}

View File

@ -87,6 +87,7 @@ var DeploySourcegraphDockerImages = []string{
"migrator",
"executor",
"executor-vm",
"batcheshelper",
"opentelemetry-collector",
}

View File

@ -10,6 +10,7 @@ import (
btypes "github.com/sourcegraph/sourcegraph/enterprise/internal/batches/types"
"github.com/sourcegraph/sourcegraph/internal/database/basestore"
"github.com/sourcegraph/sourcegraph/internal/database/dbutil"
"github.com/sourcegraph/sourcegraph/internal/featureflag"
"github.com/sourcegraph/sourcegraph/internal/observation"
"github.com/sourcegraph/sourcegraph/internal/workerutil"
dbworkerstore "github.com/sourcegraph/sourcegraph/internal/workerutil/dbworker/store"
@ -38,6 +39,8 @@ var batchSpecWorkspaceExecutionJobColumns = SQLColumns{
"batch_spec_workspace_execution_jobs.created_at",
"batch_spec_workspace_execution_jobs.updated_at",
"batch_spec_workspace_execution_jobs.version",
}
var batchSpecWorkspaceExecutionJobColumnsWithNullQueue = SQLColumns{
@ -62,6 +65,8 @@ var batchSpecWorkspaceExecutionJobColumnsWithNullQueue = SQLColumns{
"batch_spec_workspace_execution_jobs.created_at",
"batch_spec_workspace_execution_jobs.updated_at",
"batch_spec_workspace_execution_jobs.version",
}
const executionPlaceInQueueFragment = `
@ -72,10 +77,11 @@ FROM batch_spec_workspace_execution_queue
const createBatchSpecWorkspaceExecutionJobsQueryFmtstr = `
INSERT INTO
batch_spec_workspace_execution_jobs (batch_spec_workspace_id, user_id)
batch_spec_workspace_execution_jobs (batch_spec_workspace_id, user_id, version)
SELECT
batch_spec_workspaces.id,
batch_specs.user_id
batch_specs.user_id,
%s
FROM
batch_spec_workspaces
JOIN batch_specs ON batch_specs.id = batch_spec_workspaces.batch_spec_id
@ -105,16 +111,17 @@ func (s *Store) CreateBatchSpecWorkspaceExecutionJobs(ctx context.Context, batch
defer endObservation(1, observation.Args{})
cond := sqlf.Sprintf(executableWorkspaceJobsConditionFmtstr)
q := sqlf.Sprintf(createBatchSpecWorkspaceExecutionJobsQueryFmtstr, batchSpecID, cond)
q := sqlf.Sprintf(createBatchSpecWorkspaceExecutionJobsQueryFmtstr, versionForExecution(ctx, s), batchSpecID, cond)
return s.Exec(ctx, q)
}
const createBatchSpecWorkspaceExecutionJobsForWorkspacesQueryFmtstr = `
INSERT INTO
batch_spec_workspace_execution_jobs (batch_spec_workspace_id, user_id)
batch_spec_workspace_execution_jobs (batch_spec_workspace_id, user_id, version)
SELECT
batch_spec_workspaces.id,
batch_specs.user_id
batch_specs.user_id,
%s
FROM
batch_spec_workspaces
JOIN
@ -128,7 +135,7 @@ func (s *Store) CreateBatchSpecWorkspaceExecutionJobsForWorkspaces(ctx context.C
ctx, _, endObservation := s.operations.createBatchSpecWorkspaceExecutionJobsForWorkspaces.With(ctx, &err, observation.Args{LogFields: []log.Field{}})
defer endObservation(1, observation.Args{})
q := sqlf.Sprintf(createBatchSpecWorkspaceExecutionJobsForWorkspacesQueryFmtstr, pq.Array(workspaceIDs))
q := sqlf.Sprintf(createBatchSpecWorkspaceExecutionJobsForWorkspacesQueryFmtstr, versionForExecution(ctx, s), pq.Array(workspaceIDs))
return s.Exec(ctx, q)
}
@ -488,6 +495,7 @@ func ScanBatchSpecWorkspaceExecutionJob(wj *btypes.BatchSpecWorkspaceExecutionJo
&dbutil.NullInt64{N: &wj.PlaceInGlobalQueue},
&wj.CreatedAt,
&wj.UpdatedAt,
&wj.Version,
); err != nil {
return err
}
@ -502,3 +510,12 @@ func ScanBatchSpecWorkspaceExecutionJob(wj *btypes.BatchSpecWorkspaceExecutionJo
return nil
}
func versionForExecution(ctx context.Context, s *Store) int {
version := 1
if featureflag.FromContext(featureflag.WithFlags(ctx, s.DatabaseDB().FeatureFlags())).GetBoolOr("native-ssbc-execution", false) {
version = 2
}
return version
}
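A minimal stand-in for the version selection (the real code reads the native-ssbc-execution boolean flag from the request context via the feature flag store, as above):

package main

import "fmt"

// Stand-in for versionForExecution: jobs are created as V2 only when the
// "native-ssbc-execution" feature flag evaluates to true; everything else
// stays on the src-cli based V1 path.
func versionFor(nativeSSBCEnabled bool) int {
    if nativeSSBCEnabled {
        return 2
    }
    return 1
}

func main() {
    fmt.Println(versionFor(false), versionFor(true)) // 1 2
}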

View File

@ -3,6 +3,7 @@ package store
import (
"context"
"encoding/json"
"strings"
"time"
"github.com/graph-gophers/graphql-go/relay"
@ -334,21 +335,16 @@ func logEventsFromLogEntries(logs []workerutil.ExecutionLogEntry) []*batcheslib.
return nil
}
var (
entry workerutil.ExecutionLogEntry
found bool
)
entries := []*batcheslib.LogEvent{}
for _, e := range logs {
if e.Key == "step.src.0" || e.Key == "step.src.batch-exec" {
entry = e
found = true
break
// V1 executions used either `step.src.0` or `step.src.batch-exec` (after named keys were introduced).
// From V2 on, every step has a post step with a key of the form `step.docker.step.%d.post` that
// emits the AfterStepResult. This will be revised once we are able to upload artifacts from executions.
if strings.HasSuffix(e.Key, ".post") || e.Key == "step.src.0" || e.Key == "step.src.batch-exec" {
entries = append(entries, btypes.ParseJSONLogsFromOutput(e.Out)...)
}
}
if !found {
return nil
}
return btypes.ParseJSONLogsFromOutput(entry.Out)
return entries
}
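To illustrate the key matching, a small runnable sketch (the sample keys other than the ones named above are hypothetical):

package main

import (
    "fmt"
    "strings"
)

func main() {
    // V2 emits AfterStepResults from every per-step post step; V1 from the
    // single src-cli step. Only those entries are parsed for log events.
    keys := []string{
        "setup.git.clone",        // ignored
        "step.docker.step.0.run", // ignored
        "step.docker.step.0.post",
        "step.src.batch-exec",
    }
    for _, k := range keys {
        if strings.HasSuffix(k, ".post") || k == "step.src.0" || k == "step.src.batch-exec" {
            fmt.Println("parse:", k)
        }
    }
}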

View File

@ -105,6 +105,7 @@ func CreateBatchSpecWorkspaceExecutionJob(ctx context.Context, s createBatchSpec
"NULL as place_in_global_queue",
"batch_spec_workspace_execution_jobs.created_at",
"batch_spec_workspace_execution_jobs.updated_at",
"batch_spec_workspace_execution_jobs.version",
},
func(rows dbutil.Scanner) error {
i++

View File

@ -69,6 +69,8 @@ type BatchSpecWorkspaceExecutionJob struct {
CreatedAt time.Time
UpdatedAt time.Time
Version int
}
func (j *BatchSpecWorkspaceExecutionJob) RecordID() int { return int(j.ID) }

View File

@ -12,6 +12,7 @@ import (
"github.com/sourcegraph/sourcegraph/internal/conf/deploy"
"github.com/sourcegraph/sourcegraph/internal/extsvc"
srccli "github.com/sourcegraph/sourcegraph/internal/src-cli"
"github.com/sourcegraph/sourcegraph/internal/version"
"github.com/sourcegraph/sourcegraph/schema"
)
@ -181,6 +182,28 @@ func ExecutorsSrcCLIImageTag() string {
return srccli.MinimumVersion
}
func ExecutorsBatcheshelperImage() string {
current := Get()
if current.ExecutorsBatcheshelperImage != "" {
return current.ExecutorsBatcheshelperImage
}
return "sourcegraph/batcheshelper"
}
func ExecutorsBatcheshelperImageTag() string {
current := Get()
if current.ExecutorsBatcheshelperImageTag != "" {
return current.ExecutorsBatcheshelperImageTag
}
if version.IsDev(version.Version()) {
return "insiders"
}
return version.Version()
}
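For example, a site mirroring images into a private registry could override both settings; a sketch using the generated config struct (values are illustrative):

package main

import (
    "fmt"

    "github.com/sourcegraph/sourcegraph/schema"
)

func main() {
    // Illustrative overrides for the two new site-config settings; when unset,
    // ExecutorsBatcheshelperImage()/ExecutorsBatcheshelperImageTag() fall back
    // to the defaults shown above.
    cfg := schema.SiteConfiguration{
        ExecutorsBatcheshelperImage:    "registry.example.com/sourcegraph/batcheshelper",
        ExecutorsBatcheshelperImageTag: "4.1.0",
    }
    fmt.Println(cfg.ExecutorsBatcheshelperImage, cfg.ExecutorsBatcheshelperImageTag)
}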
func CodeIntelAutoIndexingEnabled() bool {
if enabled := Get().CodeIntelAutoIndexingEnabled; enabled != nil {
return *enabled

View File

@ -2221,6 +2221,19 @@
"GenerationExpression": "",
"Comment": ""
},
{
"Name": "version",
"Index": 19,
"TypeName": "integer",
"IsNullable": false,
"Default": "1",
"CharacterMaximumLength": 0,
"IsIdentity": false,
"IdentityGeneration": "",
"IsGenerated": "NEVER",
"GenerationExpression": "",
"Comment": ""
},
{
"Name": "worker_hostname",
"Index": 11,
@ -20681,7 +20694,7 @@
"Views": [
{
"Name": "batch_spec_workspace_execution_jobs_with_rank",
"Definition": " SELECT j.id,\n j.batch_spec_workspace_id,\n j.state,\n j.failure_message,\n j.started_at,\n j.finished_at,\n j.process_after,\n j.num_resets,\n j.num_failures,\n j.execution_logs,\n j.worker_hostname,\n j.last_heartbeat_at,\n j.created_at,\n j.updated_at,\n j.cancel,\n j.queued_at,\n j.user_id,\n q.place_in_global_queue,\n q.place_in_user_queue\n FROM (batch_spec_workspace_execution_jobs j\n LEFT JOIN batch_spec_workspace_execution_queue q ON ((j.id = q.id)));"
"Definition": " SELECT j.id,\n j.batch_spec_workspace_id,\n j.state,\n j.failure_message,\n j.started_at,\n j.finished_at,\n j.process_after,\n j.num_resets,\n j.num_failures,\n j.execution_logs,\n j.worker_hostname,\n j.last_heartbeat_at,\n j.created_at,\n j.updated_at,\n j.cancel,\n j.queued_at,\n j.user_id,\n j.version,\n q.place_in_global_queue,\n q.place_in_user_queue\n FROM (batch_spec_workspace_execution_jobs j\n LEFT JOIN batch_spec_workspace_execution_queue q ON ((j.id = q.id)));"
},
{
"Name": "batch_spec_workspace_execution_queue",

View File

@ -167,6 +167,7 @@ Foreign-key constraints:
cancel | boolean | | not null | false
queued_at | timestamp with time zone | | | now()
user_id | integer | | not null |
version | integer | | not null | 1
Indexes:
"batch_spec_workspace_execution_jobs_pkey" PRIMARY KEY, btree (id)
"batch_spec_workspace_execution_jobs_batch_spec_workspace_id" btree (batch_spec_workspace_id)
@ -3297,6 +3298,7 @@ Foreign-key constraints:
j.cancel,
j.queued_at,
j.user_id,
j.version,
q.place_in_global_queue,
q.place_in_user_queue
FROM (batch_spec_workspace_execution_jobs j

View File

@ -209,8 +209,8 @@ func IsValidationError(err error) bool {
}
// SkippedStepsForRepo calculates which steps are statically skipped when running on the given repo.
func SkippedStepsForRepo(spec *BatchSpec, repoName string, fileMatches []string) (skipped map[int32]struct{}, err error) {
skipped = map[int32]struct{}{}
func SkippedStepsForRepo(spec *BatchSpec, repoName string, fileMatches []string) (skipped map[int]struct{}, err error) {
skipped = map[int]struct{}{}
for idx, step := range spec.Steps {
// If no if condition is set, the step is always run.
@ -238,7 +238,7 @@ func SkippedStepsForRepo(spec *BatchSpec, repoName string, fileMatches []string)
}
if static && !boolVal {
skipped[int32(idx)] = struct{}{}
skipped[idx] = struct{}{}
}
}

View File

@ -260,7 +260,7 @@ func TestOnQueryOrRepository_Branches(t *testing.T) {
func TestSkippedStepsForRepo(t *testing.T) {
tests := map[string]struct {
spec *BatchSpec
wantSkipped []int32
wantSkipped []int
}{
"no if": {
spec: &BatchSpec{
@ -268,7 +268,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 1"},
},
},
wantSkipped: []int32{},
wantSkipped: []int{},
},
"if has static true value": {
@ -277,7 +277,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 1", If: "true"},
},
},
wantSkipped: []int32{},
wantSkipped: []int{},
},
"one of many steps has if with static true value": {
@ -288,7 +288,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 3"},
},
},
wantSkipped: []int32{},
wantSkipped: []int{},
},
"if has static non-true value": {
@ -297,7 +297,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 1", If: "this is not true"},
},
},
wantSkipped: []int32{0},
wantSkipped: []int{0},
},
"one of many steps has if with static non-true value": {
@ -308,7 +308,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 3"},
},
},
wantSkipped: []int32{1},
wantSkipped: []int{1},
},
"if expression that can be partially evaluated to true": {
@ -317,7 +317,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 1", If: `${{ matches repository.name "github.com/sourcegraph/src*" }}`},
},
},
wantSkipped: []int32{},
wantSkipped: []int{},
},
"if expression that can be partially evaluated to false": {
@ -326,7 +326,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 1", If: `${{ matches repository.name "horse" }}`},
},
},
wantSkipped: []int32{0},
wantSkipped: []int{0},
},
"one of many steps has if expression that can be evaluated to false": {
@ -337,7 +337,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 3"},
},
},
wantSkipped: []int32{1},
wantSkipped: []int{1},
},
"if expression that can NOT be partially evaluated": {
@ -346,7 +346,7 @@ func TestSkippedStepsForRepo(t *testing.T) {
{Run: "echo 1", If: `${{ eq outputs.value "foobar" }}`},
},
},
wantSkipped: []int32{},
wantSkipped: []int{},
},
}
@ -358,12 +358,12 @@ func TestSkippedStepsForRepo(t *testing.T) {
}
want := tt.wantSkipped
sort.Sort(sortableInt32(want))
have := make([]int32, 0, len(haveSkipped))
sort.Sort(sortableInt(want))
have := make([]int, 0, len(haveSkipped))
for s := range haveSkipped {
have = append(have, s)
}
sort.Sort(sortableInt32(have))
sort.Sort(sortableInt(have))
if diff := cmp.Diff(have, want); diff != "" {
t.Fatal(diff)
}
@ -371,13 +371,13 @@ func TestSkippedStepsForRepo(t *testing.T) {
}
}
type sortableInt32 []int32
type sortableInt []int
func (s sortableInt32) Len() int { return len(s) }
func (s sortableInt) Len() int { return len(s) }
func (s sortableInt32) Less(i, j int) bool { return s[i] < s[j] }
func (s sortableInt) Less(i, j int) bool { return s[i] < s[j] }
func (s sortableInt32) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s sortableInt) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func TestBatchSpec_RequiredEnvVars(t *testing.T) {
for name, tc := range map[string]struct {

View File

@ -1,8 +1,6 @@
package execution
import (
"github.com/sourcegraph/sourcegraph/lib/batches/git"
)
import "github.com/sourcegraph/sourcegraph/lib/batches/git"
// AfterStepResult is the execution result after executing a step with the given
// index in Steps.

lib/batches/outputs.go (new file, 48 lines)
View File

@ -0,0 +1,48 @@
package batches
import (
"bytes"
"encoding/json"
"fmt"
"github.com/sourcegraph/sourcegraph/lib/batches/template"
"github.com/sourcegraph/sourcegraph/lib/errors"
yamlv3 "gopkg.in/yaml.v3"
)
// SetOutputs renders the outputs of the current step into the global outputs
// map using templating.
func SetOutputs(stepOutputs Outputs, global map[string]interface{}, stepCtx *template.StepContext) error {
for name, output := range stepOutputs {
var value bytes.Buffer
if err := template.RenderStepTemplate("outputs-"+name, output.Value, &value, stepCtx); err != nil {
return errors.Wrap(err, "rendering step output")
}
fmt.Printf("Rendering step output %s %s: %q (stdout is %q)\n", name, output.Value, value.String(), stepCtx.Step.Stdout)
switch output.Format {
case "yaml":
var out interface{}
// We use yamlv3 here because it unmarshals YAML into
// map[string]interface{}, which we need in order to serialize it back to
// JSON when we cache the results.
// See https://github.com/go-yaml/yaml/issues/139 for context
if err := yamlv3.NewDecoder(&value).Decode(&out); err != nil {
return err
}
global[name] = out
case "json":
var out interface{}
if err := json.NewDecoder(&value).Decode(&out); err != nil {
return err
}
global[name] = out
default:
global[name] = value.String()
}
}
return nil
}
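A runnable aside on the yaml.v3 choice (assumes gopkg.in/yaml.v3, as imported above):

package main

import (
    "bytes"
    "encoding/json"
    "fmt"

    yamlv3 "gopkg.in/yaml.v3"
)

func main() {
    // yaml.v3 decodes mappings into map[string]interface{} (yaml.v2 produced
    // map[interface{}]interface{}), so a YAML-formatted output can be
    // round-tripped through encoding/json when the step result is cached.
    var out interface{}
    if err := yamlv3.NewDecoder(bytes.NewBufferString("a: 1\nb: [x, y]\n")).Decode(&out); err != nil {
        panic(err)
    }
    b, err := json.Marshal(out)
    fmt.Println(string(b), err) // {"a":1,"b":["x","y"]} <nil>
}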

View File

@ -7,14 +7,18 @@ import (
type WorkspacesExecutionInput struct {
BatchChangeAttributes template.BatchChangeAttributes
Repository WorkspaceRepo `json:"repository"`
Branch WorkspaceBranch `json:"branch"`
Path string `json:"path"`
OnlyFetchWorkspace bool `json:"onlyFetchWorkspace"`
Steps []Step `json:"steps"`
SearchResultPaths []string `json:"searchResultPaths"`
CachedStepResultFound bool `json:"cachedStepResultFound"`
CachedStepResult execution.AfterStepResult `json:"cachedStepResult,omitempty"`
Repository WorkspaceRepo `json:"repository"`
Branch WorkspaceBranch `json:"branch"`
Path string `json:"path"`
OnlyFetchWorkspace bool `json:"onlyFetchWorkspace"`
Steps []Step `json:"steps"`
SearchResultPaths []string `json:"searchResultPaths"`
// CachedStepResultFound is only required for V1 executions.
// TODO: Remove me once V2 is the only execution format.
CachedStepResultFound bool `json:"cachedStepResultFound"`
// CachedStepResult is only required for V1 executions.
// TODO: Remove me once V2 is the only execution format.
CachedStepResult execution.AfterStepResult `json:"cachedStepResult,omitempty"`
}
type WorkspaceRepo struct {

View File

@ -17,7 +17,6 @@ github.com/alecthomas/kingpin v2.2.6+incompatible h1:5svnBTFgJjZvGKyYBtMB0+m5wvr
github.com/alecthomas/kingpin v2.2.6+incompatible/go.mod h1:59OFYbFVLKQKq+mqrL6Rw5bR0c3ACQaawgXx0QYndlE=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20210208195552-ff826a37aa15/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
@ -55,8 +54,7 @@ github.com/dave/astrid v0.0.0-20170323122508-8c2895878b14/go.mod h1:Sth2QfxfATb/
github.com/dave/brenda v1.1.0/go.mod h1:4wCUr6gSlu5/1Tk7akE5X7UorwiQ8Rij0SKH3/BGMOM=
github.com/dave/courtney v0.3.0/go.mod h1:BAv3hA06AYfNUjfjQr+5gc6vxeBVOupLqrColj+QSD8=
github.com/dave/gopackages v0.0.0-20170318123100-46e7023ec56e/go.mod h1:i00+b/gKdIDIxuLDFob7ustLAVqhsZRk2qVZrArELGQ=
github.com/dave/jennifer v1.4.1 h1:XyqG6cn5RQsTj3qlWQTKlRGAyrTcsk1kUmWdZBzRjDw=
github.com/dave/jennifer v1.4.1/go.mod h1:7jEdnm+qBcxl8PC0zyp7vxcpSRnzXSt9r39tpTVGlwA=
github.com/dave/jennifer v1.5.0 h1:HmgPN93bVDpkQyYbqhCHj5QlgvUkvEOzMyEvKLgCRrg=
github.com/dave/jennifer v1.5.0/go.mod h1:4MnyiFIlZS3l5tSDn8VnzE6ffAhYBMB2SZntBsZGUok=
github.com/dave/kerr v0.0.0-20170318121727-bc25dd6abe8e/go.mod h1:qZqlPyPvfsDJt+3wHJ1EvSXDuVjFTK0j2p/ca+gtsb8=
github.com/dave/patsy v0.0.0-20210517141501-957256f50cba/go.mod h1:qfR88CgEGLoiqDaE+xxDCi5QA5v4vUoW0UCX2Nd5Tlc=
@ -64,8 +62,6 @@ github.com/dave/rebecca v0.9.1/go.mod h1:N6XYdMD/OKw3lkF3ywh8Z6wPGuwNFDNtWYEMFWE
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/derision-test/go-mockgen v1.1.2 h1:bMNCerr4I3dz2/UlguwgMuMuJDQXqRnBw17ezXkGvyI=
github.com/derision-test/go-mockgen v1.1.2/go.mod h1:9H3VGTWYnL1VJoHHCuPKDpPFmNQ1uVyNlpX6P63l5Sk=
github.com/derision-test/go-mockgen v1.3.6 h1:56qoOxncBNM/eWrrgX++XR00gtQuXSFXpp9ee+OjRd4=
github.com/derision-test/go-mockgen v1.3.6/go.mod h1:/TXUePlhtHmDDCaDAi/a4g6xOHqMDz3Wf0r2NPGskB4=
github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4=
@ -101,7 +97,6 @@ github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclK
github.com/go-errors/errors v1.0.1 h1:LUHzmkK3GUKUrL/1gfBUxAHzcev3apQlezX/+O7ma6w=
github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
github.com/go-martini/martini v0.0.0-20170121215854-22fa46961aab/go.mod h1:/P9AEU963A2AYjv4d1V5eVL1CQbEJq6aCNHDDjibzu8=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/gobwas/httphead v0.0.0-20180130184737-2c6c146eadee/go.mod h1:L0fX3K22YWvt/FAX9NnzrNzcI4wNYi9Yku4O0LKYflo=
@ -274,17 +269,14 @@ github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OS
github.com/nightlyone/lockfile v1.0.0 h1:RHep2cFKK4PonZJDdEl4GmkabuhbsRMgk/k3uAmxBiA=
github.com/nightlyone/lockfile v1.0.0/go.mod h1:rywoIealpdNse2r832aiD9jRk8ErCatROs6LzC841CI=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.13.0/go.mod h1:+REjRxOmWfHCjfv9TTWB1jD1Frx4XydAD3zm1lskyM0=
github.com/onsi/ginkgo v1.16.2/go.mod h1:CObGmKUOKaSC0RjmoAK7tKyn4Azo5P2IWuoMnvwxz1E=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.13.0/go.mod h1:lRk9szgn8TxENtWd0Tp4c3wjlRfMTMH27I+3Je41yGY=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
@ -418,7 +410,6 @@ golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211008194852-3b03d305991f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
@ -450,14 +441,12 @@ golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210218084038-e8e29180ff58/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210611083646-a4fc73990273/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@ -495,7 +484,6 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200624163319-25775e59acb7/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210101214203-2dba1e4ea05c/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=

View File

@ -0,0 +1,28 @@
DROP VIEW batch_spec_workspace_execution_jobs_with_rank;
CREATE VIEW batch_spec_workspace_execution_jobs_with_rank AS (
SELECT
j.id,
j.batch_spec_workspace_id,
j.state,
j.failure_message,
j.started_at,
j.finished_at,
j.process_after,
j.num_resets,
j.num_failures,
j.execution_logs,
j.worker_hostname,
j.last_heartbeat_at,
j.created_at,
j.updated_at,
j.cancel,
j.queued_at,
j.user_id,
q.place_in_global_queue,
q.place_in_user_queue
FROM
batch_spec_workspace_execution_jobs j
LEFT JOIN batch_spec_workspace_execution_queue q ON j.id = q.id
);
ALTER TABLE batch_spec_workspace_execution_jobs DROP COLUMN IF EXISTS version;

View File

@ -0,0 +1,2 @@
name: Add SSBC execution V2 flag
parents: [1667433265, 1667497565, 1667500111]

View File

@ -0,0 +1,12 @@
ALTER TABLE batch_spec_workspace_execution_jobs ADD COLUMN IF NOT EXISTS version integer NOT NULL DEFAULT 1;
DROP VIEW batch_spec_workspace_execution_jobs_with_rank;
CREATE VIEW batch_spec_workspace_execution_jobs_with_rank AS (
SELECT
j.*,
q.place_in_global_queue,
q.place_in_user_queue
FROM
batch_spec_workspace_execution_jobs j
LEFT JOIN batch_spec_workspace_execution_queue q ON j.id = q.id
);

View File

@ -917,7 +917,8 @@ CREATE TABLE batch_spec_workspace_execution_jobs (
updated_at timestamp with time zone DEFAULT now() NOT NULL,
cancel boolean DEFAULT false NOT NULL,
queued_at timestamp with time zone DEFAULT now(),
user_id integer NOT NULL
user_id integer NOT NULL,
version integer DEFAULT 1 NOT NULL
);
CREATE SEQUENCE batch_spec_workspace_execution_jobs_id_seq
@ -966,6 +967,7 @@ CREATE VIEW batch_spec_workspace_execution_jobs_with_rank AS
j.cancel,
j.queued_at,
j.user_id,
j.version,
q.place_in_global_queue,
q.place_in_user_queue
FROM (batch_spec_workspace_execution_jobs j

View File

@ -2093,6 +2093,10 @@ type SiteConfiguration struct {
EncryptionKeys *EncryptionKeys `json:"encryption.keys,omitempty"`
// ExecutorsAccessToken description: The shared secret between Sourcegraph and executors.
ExecutorsAccessToken string `json:"executors.accessToken,omitempty"`
// ExecutorsBatcheshelperImage description: The image to use for batch changes in executors. Use this value to pull from a custom image registry.
ExecutorsBatcheshelperImage string `json:"executors.batcheshelperImage,omitempty"`
// ExecutorsBatcheshelperImageTag description: The tag to use for the batcheshelper image in executors. Use this value to use a custom tag. Sourcegraph by default uses the best match, so use this setting only if you really need to override it, and make sure to keep it updated.
ExecutorsBatcheshelperImageTag string `json:"executors.batcheshelperImageTag,omitempty"`
// ExecutorsFrontendURL description: The frontend URL for Sourcegraph. Only root URLs are allowed. If not set, falls back to externalURL
ExecutorsFrontendURL string `json:"executors.frontendURL,omitempty"`
// ExecutorsSrcCLIImage description: The image to use for src-cli in executors. Use this value to pull from a custom image registry.

View File

@ -1149,6 +1149,16 @@
"type": "string",
"examples": ["4.1.0"]
},
"executors.batcheshelperImage": {
"description": "The image to use for batch changes in executors. Use this value to pull from a custom image registry.",
"type": "string",
"default": "sourcegraph/batcheshelper"
},
"executors.batcheshelperImageTag": {
"description": "The tag to use for the batcheshelper image in executors. Use this value to use a custom tag. Sourcegraph by default uses the best match, so use this setting only if you really need to overwrite it and make sure to keep it updated.",
"type": "string",
"examples": ["4.1.0"]
},
"executors.accessToken": {
"description": "The shared secret between Sourcegraph and executors.",
"type": "string",

View File

@ -524,6 +524,20 @@ commands:
EXECUTOR_QUEUE_NAME: batches
EXECUTOR_MAXIMUM_NUM_JOBS: 8
# This rebuilds the batcheshelper image every time its source changes.
batcheshelper-builder:
# Nothing to run for this, we just want to re-run the install script every time.
cmd: exit 0
install: ./enterprise/cmd/batcheshelper/build.sh
env:
IMAGE: sourcegraph/batcheshelper:insiders
# TODO: This is required but should only be set on M1 Macs.
PLATFORM: linux/arm64
watch:
- enterprise/cmd/batcheshelper
- lib/batches
continueWatchOnExit: true
# If you want to use this, either start it with `sg run batches-executor-firecracker` or
# modify the `commandsets.batches` in your local `sg.config.overwrite.yaml`
batches-executor-firecracker:
@ -929,6 +943,7 @@ commandsets:
- zoekt-web-0
- zoekt-web-1
- batches-executor
- batcheshelper-builder
iam:
requiresDevPrivate: true