msp: deployment rollout strategies (#59956)

Allows services to define a `rollout` spec that ensures new image releases go through a specified sequence of environments. We do this by using Cloud Deploy with custom targets that update the Cloud Run service image, and by configuring Terraform to ignore image changes.

> [!NOTE]
> We use a custom target (as opposed to the native Cloud Deploy + Cloud Run integration, which wants the entire service spec in YAML for releases - see https://github.com/sourcegraph/managed-services/issues/186#issuecomment-1915196511) because everything else we have is generated in Terraform, and the core Cloud Run configuration extensively references Terraform values - changing how that works would be an extensive undertaking. For the most part, rollouts exist to deploy new versions of the service code, and it can be beneficial to tie that to the service repository's CI so it is clear where a piece of code goes - building the custom target to _only_ roll out images allows us to do that.

Custom targets are not yet supported by the GCP Terraform provider, which is unfortunate - instead, we render some YAML that can be applied with a `gcloud` command. For the most part, this should be a one-time operation. The generated output includes guidance on how to apply it and how to create releases.
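
For reference, that guidance boils down to a one-time `gcloud deploy apply` along these lines (values taken from the testbed example used throughout this PR):

```sh
# One-time apply of the generated Cloud Deploy targets and custom target type.
# Project, region, and filename follow the testbed example in this PR.
gcloud deploy apply \
    --project=msp-testbed-robert-7be9 \
    --region=us-central1 \
    --file=rollout-us-central1.clouddeploy.yaml
```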

Closes https://github.com/sourcegraph/managed-services/issues/186

Kinda rambly, high-level Loom overview: https://www.loom.com/share/55bfa34d173c40a9b78708de2029f34f?sid=6f1b062d-ba02-4bb9-8abe-c9f8f8f9a8fe

### Configuring rollouts

In the top-level service spec:

```yaml
rollout:
  stages:
    - environment: test
    - environment: robert
```

And in each relevant environment:

```yaml
- id: robert
  projectID: msp-testbed-robert-7be9
  category: test
  deploy:
    type: rollout
```

`sg msp generate` will render resources for the "last" stage, which houses the Cloud Deploy infrastructure.
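
As a sketch - assuming the usual `sg msp generate <service>` invocation and the testbed service used elsewhere in this PR - regeneration looks something like:

```sh
# Regenerate infrastructure assets for the testbed service; the service ID
# here is an assumption based on the testbed examples in this PR.
sg msp generate msp-testbed
# Alongside the Terraform stacks, this now also writes a
# rollout-us-central1.clouddeploy.yaml next to the service spec, with
# instructions for applying it and creating releases.
```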

### Creating releases

Creating a release triggers a rollout, which progresses through the specified stages, like so:

<img width="1347" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/23356519/9df0e510-08eb-4fd4-bbd4-1d58c6817bba">

For now, releases are created with `gcloud` commands - we could introduce an `sg msp` command for this later. The command creates a release targeting the Cloud Deploy pipeline that lives in the final-stage project. Example command (one is also generated in the pipeline YAML file's docstring):

```sh
gcloud deploy releases create manual-test-04-2024-01-31 \
    --project=msp-testbed-robert-7be9 \
    --region=us-central1 \
    --delivery-pipeline=msp-testbed-us-central1-rollout \
    --source='gs://msp-testbed-robert-7be9-cloudrun-skaffold/source.tar.gz' \
    --labels="commit=abc123,author=foo" \
    --deploy-parameters="customTarget/tag=dd34d1be076e_2024-01-31"
```

Promotions can happen at any time - not every release needs to be promoted to the subsequent stage - and currently every stage after the first must be promoted manually.
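
Promotion is also available via `gcloud`; for the example release above it would look roughly like this:

```sh
# Promote the example release to the next stage of the pipeline.
# Omitting --to-target advances the release to the next stage in sequence.
gcloud deploy releases promote \
    --release=manual-test-04-2024-01-31 \
    --project=msp-testbed-robert-7be9 \
    --region=us-central1 \
    --delivery-pipeline=msp-testbed-us-central1-rollout
```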

A secret/output is provisioned with a "release creator" SA, which can be plugged into a [workload identity pool](https://sourcegraph.sourcegraph.com/github.com/sourcegraph/infrastructure/-/blob/managed-services/continuous-deployment-pipeline/main.tf?L5-20) so that the `gcloud deploy releases create` command can be run from CI.

After the first apply (which now assumes an `insiders` tag), Terraform no longer touches the image, thanks to a [lifecycle ignore](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle).

### Rollout execution

Rollouts happen via a new `clouddeploy-executor` SA in the last stage's project, which is granted the IAM roles needed to deploy Cloud Run revisions.
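
Concretely, the executor gets `roles/clouddeploy.jobRunner` in its home project, plus `roles/run.developer` and `roles/iam.serviceAccountUser` in each stage's project. These grants are managed in Terraform; the `gcloud` equivalents below are just to illustrate the roles involved:

```sh
# Illustration only - these bindings are provisioned by the generated Terraform.
EXECUTOR="serviceAccount:clouddeploy-executor@msp-testbed-robert-7be9.iam.gserviceaccount.com"

# In the final-stage project, where the pipeline and executor live:
gcloud projects add-iam-policy-binding msp-testbed-robert-7be9 \
    --member="$EXECUTOR" --role=roles/clouddeploy.jobRunner

# In every stage's project (repeat per stage), so releases can deploy revisions:
gcloud projects add-iam-policy-binding msp-testbed-robert-7be9 \
    --member="$EXECUTOR" --role=roles/run.developer
gcloud projects add-iam-policy-binding msp-testbed-robert-7be9 \
    --member="$EXECUTOR" --role=roles/iam.serviceAccountUser
```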

The "render" step in Skaffold prepares a release - in our case, generating a `deploy.sh` with the required arguments. A record is available in the relevant "rollout" page:

<img width="1694" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/23356519/53b70923-c0ce-4661-8e6f-d3444cf256e1">

![image](https://github.com/sourcegraph/sourcegraph/assets/23356519/1a82b2ce-50cc-4411-92a3-02cf9779465e)

The "deploy" step just downloads the artifact and executes it.
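
In essence, simplified from `skaffold.yaml`:

```sh
# Render: write a single gcloud command to deploy.sh and upload it; the
# values are substituted from the customTarget/* deploy parameters.
echo "gcloud run deploy $SERVICE_ID-$CLOUD_DEPLOY_TARGET --project=$PROJECT_ID --image=$IMAGE:$REVISION --region=$REGION" > deploy.sh
gsutil cp deploy.sh $CLOUD_DEPLOY_OUTPUT_GCS_PATH/deploy.sh

# Deploy: fetch the rendered script and run it.
gsutil cp $CLOUD_DEPLOY_MANIFEST_GCS_PATH deploy.sh
bash deploy.sh
```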

### Tracing a release

You can include arbitrary labels on releases - these show up on the release entity in the GCP console, but we don't yet propagate anything down to the Cloud Run revision very well. In particular, the tag information doesn't seem to appear in the revision UI, but if you click "edit" you can see the correct tag populated:

<img width="500" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/23356519/856ccd92-9e0d-41d9-b84c-1846b30a3f79">  <img width="500" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/23356519/f917be4b-f714-4fef-8871-2006fbf83901">
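
If in doubt, the deployed image (and therefore the tag) can always be checked via the CLI instead of the UI, for example:

```sh
# Inspect the currently deployed image for the testbed's 'robert' environment.
gcloud run services describe msp-testbed-robert-us-central1 \
    --project=msp-testbed-robert-7be9 \
    --region=us-central1 \
    --format=yaml | grep 'image:'
```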


See `skaffold.yaml` - we're mostly just executing commands with `gcloud` and reporting the expected outputs. We can extend this with more detailed outputs and additional tagging or scripting if we want - examples I've seen often build a custom binary/image to handle more advanced use cases. Also see https://cloud.google.com/deploy/docs/custom-targets
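
"Reporting the expected outputs" refers to the custom-target convention of uploading a `results.json` to the Cloud Deploy output path - for the render action, roughly:

```sh
# Tell Cloud Deploy the render step succeeded and where the "manifest"
# (our deploy.sh) lives; the deploy action does the same minus manifestFile.
echo '{"resultStatus": "SUCCEEDED", "manifestFile": "'$CLOUD_DEPLOY_OUTPUT_GCS_PATH'/deploy.sh"}' > results.json
gsutil cp results.json $CLOUD_DEPLOY_OUTPUT_GCS_PATH/results.json
```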

### Rollbacks

[Cloud Deploy has a concept of rollbacks](https://cloud.google.com/deploy/docs/roll-back), which you can trigger via the UI - it seems this just re-runs the previous release's configuration:

<img width="868" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/23356519/6bdc8459-61b7-4ce6-9397-c2f9b3a29e8b">
<img width="1426" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/23356519/778241a7-3a97-45f9-b4a6-31bf81f5a8d5">
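
The same rollback also appears to be available from the CLI - something along these lines, with target names following the `<environment>-<region>` convention used by the generated targets:

```sh
# Roll the 'robert' target back to the previously deployed release.
gcloud deploy targets rollback robert-us-central1 \
    --project=msp-testbed-robert-7be9 \
    --region=us-central1 \
    --delivery-pipeline=msp-testbed-us-central1-rollout
```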

## Test plan

See https://github.com/sourcegraph/managed-services/pull/454 and https://console.cloud.google.com/deploy/delivery-pipelines/us-central1/msp-testbed-us-central1-rollout?project=msp-testbed-robert-7be9 . I also specifically tested that deploying a particular image and then applying a Terraform change does not overwrite the image, and that releases deploying images do not cause perpetual drift in Terraform.

Also https://github.com/sourcegraph/managed-services/actions/runs/7744296405

@ -0,0 +1,25 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
load("//dev:go_defs.bzl", "go_test")
go_library(
name = "clouddeploy",
srcs = ["clouddeploy.go"],
embedsrcs = [
"customtarget.yaml",
"skaffold.yaml",
"target.template.yaml",
],
importpath = "github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/clouddeploy",
visibility = ["//visibility:public"],
deps = [
"//dev/managedservicesplatform/spec",
"//lib/errors",
],
)
go_test(
name = "clouddeploy_test",
srcs = ["clouddeploy_test.go"],
embed = [":clouddeploy"],
deps = ["@com_github_stretchr_testify//require"],
)

@ -0,0 +1,105 @@
package clouddeploy
import (
"archive/tar"
"bytes"
"compress/gzip"
"embed"
"fmt"
"html/template"
"io"
"github.com/sourcegraph/sourcegraph/lib/errors"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/spec"
)
//go:embed skaffold.yaml
var skaffoldAssets embed.FS
// NewCloudRunCustomTargetSkaffoldAssetsArchive generates an archive of assets
// required for 'gcloud deploy releases create', to be provided via the
// '--source' flag: https://cloud.google.com/sdk/gcloud/reference/deploy/releases/create#--source
func NewCloudRunCustomTargetSkaffoldAssetsArchive() (*bytes.Buffer, error) {
var buf bytes.Buffer
gw := gzip.NewWriter(&buf)
defer gw.Close()
tw := tar.NewWriter(gw)
defer tw.Close()
files, err := skaffoldAssets.ReadDir(".")
if err != nil {
return nil, err
}
for _, file := range files {
if file.IsDir() {
return nil, errors.New("unexpected dir")
}
info, err := file.Info()
if err != nil {
return nil, err
}
header, err := tar.FileInfoHeader(info, info.Name())
if err != nil {
return nil, err
}
if err := tw.WriteHeader(header); err != nil {
return nil, err
}
f, err := skaffoldAssets.Open(file.Name())
if err != nil {
return nil, err
}
if _, err := io.Copy(tw, f); err != nil {
return nil, err
}
}
return &buf, nil
}
//go:embed customtarget.yaml
var cloudDeployCustomTarget []byte
//go:embed target.template.yaml
var cloudDeployTargetTemplateRaw string
var cloudDeployTargetTemplate = template.Must(template.New("cloudDeployTargetTemplate").
Parse(cloudDeployTargetTemplateRaw))
func RenderSpec(
service spec.ServiceSpec,
build spec.BuildSpec,
config spec.RolloutPipelineConfiguration,
region string,
) (*bytes.Buffer, error) {
var targetsSpec bytes.Buffer
if _, err := targetsSpec.Write(cloudDeployCustomTarget); err != nil {
return nil, err
}
for _, stage := range config.Stages {
if _, err := targetsSpec.WriteString("\n---\n"); err != nil {
return nil, err
}
var b bytes.Buffer
if err := cloudDeployTargetTemplate.Execute(&b, map[string]any{
"Stage": stage,
"Service": service,
"Build": build,
"Region": region,
// Stable naming: always a SA in the last stage's project.
"CloudDeployServiceAccount": fmt.Sprintf("clouddeploy-executor@%s.iam.gserviceaccount.com",
config.Stages[len(config.Stages)-1].ProjectID),
}); err != nil {
return nil, err
}
if _, err := targetsSpec.Write(b.Bytes()); err != nil {
return nil, err
}
}
return &targetsSpec, nil
}

@ -0,0 +1,12 @@
package clouddeploy
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestArchive(t *testing.T) {
_, err := NewCloudRunCustomTargetSkaffoldAssetsArchive()
require.NoError(t, err)
}

@ -0,0 +1,11 @@
apiVersion: deploy.cloud.google.com/v1
kind: CustomTargetType
metadata:
name: cloud-run-service
labels:
msp: "true"
description: "MSP Cloud Run Service"
# customActions are defined in skaffold.yaml
customActions:
renderAction: cloud-run-image-deploy-render
deployAction: cloud-run-image-deploy

@ -0,0 +1,55 @@
# See https://cloud.google.com/deploy/docs/custom-targets for guidance on how
# to build custom targets and the conventions that are expected.
apiVersion: skaffold/v4beta7
kind: Config
metadata:
name: CloudRunServiceImageDeployment
customActions:
- name: cloud-run-image-deploy-render
containers:
- name: Render
# TODO: Pulling this image is super slow (~1 minute)
image: gcr.io/google.com/cloudsdktool/google-cloud-cli
command: ['/bin/bash']
args:
- '-c'
- |-
set -e
set -o pipefail
SERVICE_ID=$CLOUD_DEPLOY_customTarget_serviceID # customTarget/serviceID
REVISION=$CLOUD_DEPLOY_customTarget_tag # customTarget/tag
IMAGE=$CLOUD_DEPLOY_customTarget_image # customTarget/image
PROJECT_ID=$CLOUD_DEPLOY_customTarget_projectID # customTarget/projectID
REGION=$CLOUD_DEPLOY_LOCATION
CLOUDRUN_SERVICE="$SERVICE_ID-$CLOUD_DEPLOY_TARGET"
echo "gcloud run deploy $CLOUDRUN_SERVICE --project=$PROJECT_ID --image=$IMAGE:$REVISION --region=$REGION" > deploy.sh
gsutil cp deploy.sh $CLOUD_DEPLOY_OUTPUT_GCS_PATH/deploy.sh
# Provide results back to Cloud Deploy
echo {\"resultStatus\": \"SUCCEEDED\", \"manifestFile\": \"$CLOUD_DEPLOY_OUTPUT_GCS_PATH/deploy.sh\"} > results.json
gsutil cp results.json $CLOUD_DEPLOY_OUTPUT_GCS_PATH/results.json
- name: cloud-run-image-deploy
containers:
- name: Deploy
# TODO: Pulling this image is super slow (~1 minute)
image: gcr.io/google.com/cloudsdktool/google-cloud-cli
command: ['/bin/bash']
args:
- '-c'
- |-
set -e
set -o pipefail
gsutil cp $CLOUD_DEPLOY_MANIFEST_GCS_PATH deploy.sh
bash deploy.sh
# Provide results back to Cloud Deploy
echo {\"resultStatus\": \"SUCCEEDED\"} > results.json
gsutil cp results.json $CLOUD_DEPLOY_OUTPUT_GCS_PATH/results.json

@ -0,0 +1,17 @@
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
name: "{{ .Stage.EnvironmentID }}-{{ .Region }}"
customTarget:
customTargetType: cloud-run-service
deployParameters:
customTarget/serviceID: {{ .Service.ID }}
customTarget/image: {{ .Build.Image }}
customTarget/projectID: {{ .Stage.ProjectID }}
# Tag must be provided in 'gcloud deploy releases create' via the
# flag '--deploy-parameters="customTarget/tag=$TAG"'.
# customTarget/tag: ""
executionConfigs:
- usages: [RENDER, DEPLOY]
serviceAccount: {{ .CloudDeployServiceAccount }}

@ -0,0 +1,15 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "deliverypipeline",
srcs = ["deliverypipeline.go"],
importpath = "github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/deliverypipeline",
visibility = ["//dev/managedservicesplatform:__subpackages__"],
deps = [
"//dev/managedservicesplatform/internal/resourceid",
"//lib/pointers",
"@com_github_aws_constructs_go_constructs_v10//:constructs",
"@com_github_hashicorp_terraform_cdk_go_cdktf//:cdktf",
"@com_github_sourcegraph_managed_services_platform_cdktf_gen_google//clouddeploydeliverypipeline",
],
)

@ -0,0 +1,62 @@
package deliverypipeline
import (
"github.com/aws/constructs-go/constructs/v10"
"github.com/hashicorp/terraform-cdk-go/cdktf"
"github.com/sourcegraph/managed-services-platform-cdktf/gen/google/clouddeploydeliverypipeline"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resourceid"
"github.com/sourcegraph/sourcegraph/lib/pointers"
)
type Config struct {
// Location currently must also be the location of all targets being
// deployed by this pipeline.
Location string
Name string
Description string
// Stages lists target IDs in order.
Stages []string
// Suspended prevents releases and rollouts from being created, rolled back,
// etc using this pipeline: https://cloud.google.com/deploy/docs/suspend-pipeline
Suspended bool
DependsOn []cdktf.ITerraformDependable
}
type Output struct{}
// New provisions resources for a google_clouddeploy_delivery_pipeline:
// https://cloud.google.com/deploy/docs/overview
func New(scope constructs.Construct, id resourceid.ID, config Config) (*Output, error) {
_ = clouddeploydeliverypipeline.NewClouddeployDeliveryPipeline(scope,
id.TerraformID("pipeline"),
&clouddeploydeliverypipeline.ClouddeployDeliveryPipelineConfig{
Location: &config.Location,
Name: &config.Name,
Description: &config.Description,
Suspended: &config.Suspended,
SerialPipeline: &clouddeploydeliverypipeline.ClouddeployDeliveryPipelineSerialPipeline{
Stages: pointers.Ptr(newStages(config)),
},
DependsOn: &config.DependsOn,
})
return &Output{}, nil
}
func newStages(config Config) []*clouddeploydeliverypipeline.ClouddeployDeliveryPipelineSerialPipelineStages {
var stages []*clouddeploydeliverypipeline.ClouddeployDeliveryPipelineSerialPipelineStages
for _, target := range config.Stages {
stages = append(stages, &clouddeploydeliverypipeline.ClouddeployDeliveryPipelineSerialPipelineStages{
TargetId: pointers.Ptr(target),
})
}
return stages
}

@ -55,10 +55,8 @@ type Renderer struct {
//
// The required workspaces are managed by 'sg msp tfc sync'.
func (r *Renderer) RenderEnvironment(
svc spec.ServiceSpec,
build spec.BuildSpec,
svc spec.Spec,
env spec.EnvironmentSpec,
monitoringSpec spec.MonitoringSpec,
) (*CDKTF, error) {
terraformVersion := terraform.Version
stacks := stack.NewSet(r.OutputDir,
@ -68,7 +66,7 @@ func (r *Renderer) RenderEnvironment(
// provisioned separately.
tfcbackend.With(tfcbackend.Config{
Workspace: func(stackName string) string {
return terraformcloud.WorkspaceName(svc, env, stackName)
return terraformcloud.WorkspaceName(svc.Service, env, stackName)
},
}))
@ -76,24 +74,27 @@ func (r *Renderer) RenderEnvironment(
// destroys.
preventDestroys := !pointers.DerefZero(env.AllowDestroys)
// Only non-nil if this is the last stage in a rollout spec.
rolloutPipeline := svc.BuildRolloutPipelineConfiguration(env)
// Render all required CDKTF stacks for this environment.
//
// This MUST line up with managedservicesplatform.StackNames() in this
// package.
projectOutput, err := project.NewStack(stacks, project.Variables{
ProjectID: env.ProjectID,
DisplayName: fmt.Sprintf("%s - %s", svc.GetName(), env.ID),
DisplayName: fmt.Sprintf("%s - %s", svc.Service.GetName(), env.ID),
Category: env.Category,
Labels: map[string]string{
"service": svc.ID,
"service": svc.Service.ID,
"environment": env.ID,
"category": string(env.Category),
"msp": "true",
},
Services: func() []string {
if svc.IAM != nil && len(svc.IAM.Services) > 0 {
return svc.IAM.Services
if svc.Service.IAM != nil && len(svc.Service.IAM.Services) > 0 {
return svc.Service.IAM.Services
}
return nil
}(),
@ -104,10 +105,12 @@ func (r *Renderer) RenderEnvironment(
}
iamOutput, err := iam.NewStack(stacks, iam.Variables{
ProjectID: *projectOutput.Project.ProjectId(),
Image: build.Image,
Service: svc,
Image: svc.Build.Image,
Service: svc.Service,
SecretEnv: env.SecretEnv,
PreventDestroys: preventDestroys,
IsFinalStageOfRollout: rolloutPipeline != nil,
})
if err != nil {
return nil, errors.Wrap(err, "failed to create IAM stack")
@ -116,10 +119,12 @@ func (r *Renderer) RenderEnvironment(
ProjectID: *projectOutput.Project.ProjectId(),
IAM: *iamOutput,
Service: svc,
Image: build.Image,
Service: svc.Service,
Image: svc.Build.Image,
Environment: env,
RolloutPipeline: rolloutPipeline,
StableGenerate: r.StableGenerate,
PreventDestroys: preventDestroys,
@ -129,12 +134,12 @@ func (r *Renderer) RenderEnvironment(
}
if _, err := monitoring.NewStack(stacks, monitoring.Variables{
ProjectID: *projectOutput.Project.ProjectId(),
Service: svc,
Service: svc.Service,
EnvironmentCategory: env.Category,
EnvironmentID: env.ID,
Alerting: pointers.DerefZero(env.Alerting),
Monitoring: monitoringSpec,
Monitoring: *svc.Monitoring,
MaxInstanceCount: env.Instances.Scaling.GetMaxCount(), // returns nil if not relevant
ExternalDomain: pointers.DerefZero(env.EnvironmentServiceSpec).Domain,
ServiceAuthentication: pointers.DerefZero(env.EnvironmentServiceSpec).Authentication,

@ -8,6 +8,7 @@ go_library(
"environment.go",
"monitoring.go",
"projectid.go",
"rollout.go",
"service.go",
"spec.go",
],

@ -151,11 +151,25 @@ func (c EnvironmentCategory) Validate() error {
return nil
}
type EnvironmentDeployType string
const (
EnvironmentDeployTypeManual = "manual"
EnvironmentDeployTypeSubscription = "subscription"
EnvironmentDeployTypeRollout = "rollout"
)
func (c EnvironmentCategory) IsProduction() bool {
return c == EnvironmentCategoryExternal || c == EnvironmentCategoryInternal
}
type EnvironmentDeploySpec struct {
// Type specifies the deployment method for the environment. There are
// 3 supported types:
//
// - 'manual': Revisions are deployed manually by configuring it in 'deploy.manual.tag'
// - 'subscription': Revisions are deployed via GitHub Action, which pins to the latest image SHA of 'deploy.subscription.tag'.
// - 'rollout': Revisions are deployed via Cloud Deploy - a top-level 'rollout' spec is required, and a 'rollout-<region>.clouddeploy.yaml' is rendered with further instructions.
Type EnvironmentDeployType `yaml:"type"`
Manual *EnvironmentDeployManualSpec `yaml:"manual,omitempty"`
Subscription *EnvironmentDeployTypeSubscriptionSpec `yaml:"subscription,omitempty"`
@ -179,16 +193,7 @@ func (s EnvironmentDeploySpec) Validate() []error {
return errs
}
type EnvironmentDeployType string
const (
EnvironmentDeployTypeManual = "manual"
EnvironmentDeployTypeSubscription = "subscription"
)
// ResolveTag uses the deploy spec to resolve an appropriate tag for the environment.
//
// TODO: Implement ability to resolve latest concrete tag from a source
func (d EnvironmentDeploySpec) ResolveTag(repo string) (string, error) {
switch d.Type {
case EnvironmentDeployTypeManual:
@ -207,8 +212,11 @@ func (d EnvironmentDeploySpec) ResolveTag(repo string) (string, error) {
return "", errors.Wrapf(err, "resolve digest for tag %q", "insiders")
}
return tagAndDigest, nil
case EnvironmentDeployTypeRollout:
// Enforce convention: rollout-managed environments start from the 'insiders' tag; Cloud Deploy manages the image from then on.
return "insiders", nil
default:
return "", errors.New("unable to resolve tag")
return "", errors.Newf("unable to resolve tag for unknown deploy type %q", d.Type)
}
}

@ -0,0 +1,77 @@
package spec
type RolloutSpec struct {
// Stages specifies the order and environments through which releases
// progress.
Stages []RolloutStageSpec `yaml:"stages"`
// Suspended prevents releases and rollouts from being created, rolled back,
// etc. using this rollout pipeline: https://cloud.google.com/deploy/docs/suspend-pipeline
//
// Set to true to prevent all deployments from being created through Cloud
// Deploy. Note that this does NOT prevent manual deploys from happening
// directly in Cloud Run.
Suspended *bool `yaml:"suspended,omitempty"`
}
func (r *RolloutSpec) GetStageByEnvironment(id string) *RolloutStageSpec {
if r == nil {
return nil
}
for _, stage := range r.Stages {
if stage.EnvironmentID == id {
return &stage
}
}
return nil
}
type RolloutStageSpec struct {
// EnvironmentID is the ID of the environment to use in this stage.
// The specified environment MUST have 'deploy: { type: "rollout" }' configured.
EnvironmentID string `yaml:"environment"`
}
// RolloutPipelineConfiguration is rendered by BuildRolloutPipelineConfiguration for use in
// stacks.
type RolloutPipelineConfiguration struct {
// Stages is evaluated from OriginalSpec.Stages to include attributes
// required to actually configure the stages.
Stages []rolloutPipelineTargetConfiguration
OriginalSpec RolloutSpec
}
// rolloutPipelineTargetConfiguration is an internal type that extends
// RolloutStageSpec with other top-level environment spec.
type rolloutPipelineTargetConfiguration struct {
RolloutStageSpec
// ProjectID is the project the target environment lives in.
ProjectID string
}
// BuildRolloutPipelineConfiguration evaluates a configuration for use in
// configuring a Cloud Deploy pipeline in the final environment of a rollout
// spec's stages.
func (s Spec) BuildRolloutPipelineConfiguration(env EnvironmentSpec) *RolloutPipelineConfiguration {
if s.Rollout == nil {
return nil
}
// We only need to render this configuration for the final stage of the rollout.
if s.Rollout.Stages[len(s.Rollout.Stages)-1].EnvironmentID != env.ID {
return nil
}
var targets []rolloutPipelineTargetConfiguration
for _, stage := range s.Rollout.Stages {
env := s.GetEnvironment(stage.EnvironmentID)
targets = append(targets, rolloutPipelineTargetConfiguration{
ProjectID: env.ProjectID,
RolloutStageSpec: stage,
})
}
return &RolloutPipelineConfiguration{
Stages: targets,
OriginalSpec: *s.Rollout,
}
}

@ -28,6 +28,9 @@ type Spec struct {
Build BuildSpec `yaml:"build"`
Environments []EnvironmentSpec `yaml:"environments"`
Monitoring *MonitoringSpec `yaml:"monitoring,omitempty"`
// Rollout can be configured to indicate how releases should roll out
// through a set of environments.
Rollout *RolloutSpec `yaml:"rollout,omitempty"`
}
// Open a specification file, validate it, unmarshal the data as a MSP spec,
@ -134,6 +137,9 @@ func (s Spec) Validate() []error {
if e.EnvironmentServiceSpec != nil {
errs = append(errs, errors.New("service specifications are not supported for 'kind: job'"))
}
if e.Deploy.Type == EnvironmentDeployTypeRollout {
errs = append(errs, errors.New("'deploy { type: \"rollout\" }' not supported for 'kind: job'"))
}
if e.Instances.Scaling != nil {
errs = append(errs, errors.New("'environments.instances.scaling' not supported for 'kind: job'"))
}
@ -171,6 +177,36 @@ func (s Spec) Validate() []error {
configuredDomains[domain] = struct{}{}
}
}
if s.Rollout.GetStageByEnvironment(env.ID) != nil {
if env.Deploy.Type != EnvironmentDeployTypeRollout {
errs = append(errs, errors.Newf("environment %q is referenced in a rollout stage - deploy type must be '%s'",
env.ID, EnvironmentDeployTypeRollout))
}
} else if env.Deploy.Type == EnvironmentDeployTypeRollout {
errs = append(errs, errors.Newf("environment %q has deploy type '%s', but is not referenced in rollout stages",
env.ID, EnvironmentDeployTypeRollout))
}
}
if s.Rollout != nil {
if len(s.Rollout.Stages) == 0 {
errs = append(errs, errors.New("rollout spec is defined but contains no stages"))
}
seenStages := make(map[string]struct{})
for _, stage := range s.Rollout.Stages {
if s.GetEnvironment(stage.EnvironmentID) == nil {
errs = append(errs, errors.Newf("rollout stage references unknown environment %q",
stage.EnvironmentID))
}
if _, seen := seenStages[stage.EnvironmentID]; seen {
errs = append(errs, errors.Newf("rollout stage references environment %q more than once",
stage.EnvironmentID))
} else {
seenStages[stage.EnvironmentID] = struct{}{}
}
}
}
errs = append(errs, s.Service.Validate()...)

@ -12,6 +12,7 @@ go_library(
"//dev/managedservicesplatform/googlesecretsmanager",
"//dev/managedservicesplatform/internal/resource/bigquery",
"//dev/managedservicesplatform/internal/resource/cloudsql",
"//dev/managedservicesplatform/internal/resource/deliverypipeline",
"//dev/managedservicesplatform/internal/resource/gsmsecret",
"//dev/managedservicesplatform/internal/resource/postgresqlroles",
"//dev/managedservicesplatform/internal/resource/privatenetwork",
@ -33,6 +34,9 @@ go_library(
"//lib/errors",
"//lib/pointers",
"@com_github_hashicorp_terraform_cdk_go_cdktf//:cdktf",
"@com_github_sourcegraph_managed_services_platform_cdktf_gen_google//projectiammember",
"@com_github_sourcegraph_managed_services_platform_cdktf_gen_google//storagebucket",
"@com_github_sourcegraph_managed_services_platform_cdktf_gen_google//storagebucketobject",
"@com_github_sourcegraph_managed_services_platform_cdktf_gen_sentry//datasentryorganization",
"@com_github_sourcegraph_managed_services_platform_cdktf_gen_sentry//datasentryteam",
"@com_github_sourcegraph_managed_services_platform_cdktf_gen_sentry//key",

@ -2,6 +2,7 @@ package cloudrun
import (
"bytes"
"fmt"
"html/template"
"slices"
"strconv"
@ -12,6 +13,9 @@ import (
"github.com/hashicorp/terraform-cdk-go/cdktf"
"github.com/sourcegraph/managed-services-platform-cdktf/gen/google/projectiammember"
"github.com/sourcegraph/managed-services-platform-cdktf/gen/google/storagebucket"
"github.com/sourcegraph/managed-services-platform-cdktf/gen/google/storagebucketobject"
"github.com/sourcegraph/managed-services-platform-cdktf/gen/sentry/datasentryorganization"
"github.com/sourcegraph/managed-services-platform-cdktf/gen/sentry/datasentryteam"
"github.com/sourcegraph/managed-services-platform-cdktf/gen/sentry/key"
@ -20,6 +24,7 @@ import (
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/googlesecretsmanager"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/bigquery"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/cloudsql"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/deliverypipeline"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/gsmsecret"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/postgresqlroles"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/privatenetwork"
@ -58,6 +63,11 @@ type Variables struct {
Image string
Environment spec.EnvironmentSpec
// RolloutPipeline is only non-nil if this environment is the final
// environment of a rollout spec - the final environment is where the Cloud
// Deploy pipeline lives.
RolloutPipeline *spec.RolloutPipelineConfiguration
StableGenerate bool
PreventDestroys bool
@ -67,12 +77,18 @@ const StackName = "cloudrun"
const (
OutputCloudSQLConnectionName = "cloudsql_connection_name"
// ScaffoldSourceFile is the file to place in the cloudrun Terraform stack
// directory for upload. We expect this to be generated into the TF dir -
// it's weird but unfortunately placing the file into bucket object 'content'
// directly in Terraform seems to mangle it terribly.
ScaffoldSourceFile = "skaffoldsource.tar.gz"
)
// Hardcoded variables.
var (
// gcpRegion is currently hardcoded.
gcpRegion = "us-central1"
// GCPRegion is currently hardcoded.
GCPRegion = "us-central1"
)
const tfVarKeyResolvedImageTag = "resolved_image_tag"
@ -157,7 +173,7 @@ func NewStack(stacks *stack.Set, vars Variables) (crossStackOutput *CrossStackOu
return privatenetwork.New(stack, privatenetwork.Config{
ProjectID: vars.ProjectID,
ServiceID: vars.Service.ID,
Region: gcpRegion,
Region: GCPRegion,
})
})
@ -177,7 +193,7 @@ func NewStack(stacks *stack.Set, vars Variables) (crossStackOutput *CrossStackOu
resourceid.New("redis"),
redis.Config{
ProjectID: vars.ProjectID,
Region: gcpRegion,
Region: GCPRegion,
Spec: *vars.Environment.Resources.Redis,
Network: privateNetwork().Network,
})
@ -210,7 +226,7 @@ func NewStack(stacks *stack.Set, vars Variables) (crossStackOutput *CrossStackOu
pgSpec := *vars.Environment.Resources.PostgreSQL
sqlInstance, err := cloudsql.New(stack, resourceid.New("postgresql"), cloudsql.Config{
ProjectID: vars.ProjectID,
Region: gcpRegion,
Region: GCPRegion,
Spec: pgSpec,
Network: privateNetwork().Network,
@ -327,7 +343,7 @@ func NewStack(stacks *stack.Set, vars Variables) (crossStackOutput *CrossStackOu
ResolvedImageTag: *imageTag.StringValue,
Environment: vars.Environment,
GCPProjectID: vars.ProjectID,
GCPRegion: gcpRegion,
GCPRegion: GCPRegion,
ServiceAccount: vars.IAM.CloudRunWorkloadServiceAccount,
DiagnosticsSecret: diagnosticsSecret,
ResourceLimits: makeContainerResourceLimits(vars.Environment.Instances.Resources),
@ -342,6 +358,84 @@ func NewStack(stacks *stack.Set, vars Variables) (crossStackOutput *CrossStackOu
return nil, errors.Wrapf(err, "build Cloud Run resource kind %q", cloudRunBuilder.Kind())
}
// We have a rollout pipeline to configure.
if vars.RolloutPipeline != nil {
id := id.Group("rolloutpipeline")
// For now, we only use 1 region everywhere, but also note that ALL
// deployment targets must be in the same location as the delivery
// pipeline, so if we ever do multi-region we'll need multiple delivery
// pipelines for each. In particular, see https://registry.terraform.io/providers/hashicorp/google/5.10.0/docs/resources/clouddeploy_delivery_pipeline#target_id:
//
// > The location of the Target is inferred to be the same as the location of the DeliveryPipeline that contains this Stage.
var rolloutLocation = GCPRegion
// stageTargets enumerates stages in order. Cloud Deploy targets are
// created separately because the TF provider doesn't support Custom
// Targets yet - TODO document
var stageTargets []string
for _, stage := range vars.RolloutPipeline.Stages {
id := id.Group("stage").Group(stage.EnvironmentID)
// Our execution service account needs access to this project's
// resources to deploy releases.
_ = projectiammember.NewProjectIamMember(stack,
id.Group("cloudrun_developer").TerraformID("member"),
&projectiammember.ProjectIamMemberConfig{
Project: pointers.Ptr(stage.ProjectID),
Role: pointers.Ptr("roles/run.developer"),
Member: &vars.IAM.CloudDeployExecutionServiceAccount.Member,
})
_ = projectiammember.NewProjectIamMember(stack,
id.Group("service_account_user").TerraformID("member"),
&projectiammember.ProjectIamMemberConfig{
Project: pointers.Ptr(stage.ProjectID),
Role: pointers.Ptr("roles/iam.serviceAccountUser"),
Member: &vars.IAM.CloudDeployExecutionServiceAccount.Member,
})
// Name targets with environment+location - this is expected by
// our Cloud Deploy Custom Target
stageTargets = append(stageTargets,
fmt.Sprintf("%s-%s", stage.EnvironmentID, rolloutLocation))
}
// Now, apply each target in a rollout pipeline. The targets don't need
// to exist at this point yet, though attempting to use the pipeline
// before creating targets will fail.
_, _ = deliverypipeline.New(stack, id.Group("pipeline"), deliverypipeline.Config{
Location: rolloutLocation,
Name: fmt.Sprintf("%s-%s-rollout", vars.Service.ID, rolloutLocation),
Description: fmt.Sprintf("Rollout delivery pipeline for %s",
vars.Service.GetName()),
Stages: stageTargets,
Suspended: pointers.DerefZero(vars.RolloutPipeline.OriginalSpec.Suspended),
// Make it so that our Cloud Run service is up before we
// configure the rollout pipeline
DependsOn: []cdktf.ITerraformDependable{
cloudRunResource,
},
})
// We also need to synchronize the Skaffold configuration for our custom
// target, so that we can reference it easily without requiring operators
// to have the required Skaffold assets for 'gcloud deploy releases create'
// locally.
skaffoldBucket := storagebucket.NewStorageBucket(stack, id.Group("skaffold").TerraformID("bucket"), &storagebucket.StorageBucketConfig{
Name: pointers.Stringf("%s-cloudrun-skaffold", vars.ProjectID),
Location: &GCPRegion,
})
_ = storagebucketobject.NewStorageBucketObject(stack, id.Group("skaffold").TerraformID("object"), &storagebucketobject.StorageBucketObjectConfig{
Name: pointers.Ptr("source.tar.gz"),
Bucket: skaffoldBucket.Name(),
Source: pointers.Ptr(ScaffoldSourceFile), // see docstring for hack
ContentType: pointers.Ptr("application/gzip"),
})
}
// Collect outputs
locals.Add("cloud_run_resource_name", *cloudRunResource.Name(),
"Cloud Run resource name")

@ -0,0 +1,8 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "cloudrunresource",
srcs = ["cloudrunresource.go"],
importpath = "github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/stacks/cloudrun/cloudrunresource",
visibility = ["//visibility:public"],
)

@ -0,0 +1,10 @@
package cloudrunresource
import "fmt"
// NewName returns the stable name of a Cloud Run resource. It must be
// unique across environments.
func NewName(serviceID, environmentID, gcpRegion string) string {
return fmt.Sprintf("%s-%s-%s",
serviceID, environmentID, gcpRegion)
}

@ -10,6 +10,7 @@ go_library(
"//dev/managedservicesplatform/internal/resource/random",
"//dev/managedservicesplatform/internal/resource/serviceaccount",
"//dev/managedservicesplatform/spec",
"//dev/managedservicesplatform/stacks/cloudrun/cloudrunresource",
"//lib/errors",
"@com_github_hashicorp_terraform_cdk_go_cdktf//:cdktf",
],

@ -1,14 +1,13 @@
package builder
import (
"fmt"
"github.com/hashicorp/terraform-cdk-go/cdktf"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/privatenetwork"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/random"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/internal/resource/serviceaccount"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/spec"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/stacks/cloudrun/cloudrunresource"
"github.com/sourcegraph/sourcegraph/lib/errors"
)
@ -37,10 +36,9 @@ type Variables struct {
// Name returns the name to use for the Cloud Run resource.
func (v *Variables) Name() (string, error) {
name := cloudrunresource.NewName(v.Service.ID, v.Environment.ID, v.GCPRegion)
// Extra guard against long names, just in case - an apply to change the
// name that fails during apply could cause extended downtime.
name := fmt.Sprintf("%s-%s-%s",
v.Service.ID, v.Environment.ID, v.GCPRegion)
if len(name) > 63 {
return name, errors.Newf("evaluated Cloud Run name %q is too long, maximum length is 63 characters", name)
}

@ -108,6 +108,21 @@ func (b *serviceBuilder) Build(stack cdktf.TerraformStack, vars builder.Variable
}
}
var lifecycle *cdktf.TerraformResourceLifecycle
if vars.Environment.Deploy.Type == spec.EnvironmentDeployTypeRollout {
lifecycle = &cdktf.TerraformResourceLifecycle{
IgnoreChanges: &[]*string{
// This will be managed by Cloud Deploy releases issued by
// the service owner, e.g. via their CI.
pointers.Ptr("template[0].containers[0].image"),
// These will be set when a revision is created via our Cloud
// Deploy custom target when a release is deployed.
pointers.Ptr("client"),
pointers.Ptr("client_version"),
},
}
}
name, err := vars.Name()
if err != nil {
return nil, err
@ -116,6 +131,7 @@ func (b *serviceBuilder) Build(stack cdktf.TerraformStack, vars builder.Variable
Name: pointers.Ptr(name),
Location: pointers.Ptr(vars.GCPRegion),
DependsOn: &b.dependencies,
Lifecycle: lifecycle,
// Disallows direct traffic from public internet, we have a LB set up for that.
Ingress: pointers.Ptr("INGRESS_TRAFFIC_INTERNAL_LOAD_BALANCER"),

@ -29,6 +29,9 @@ import (
type CrossStackOutput struct {
CloudRunWorkloadServiceAccount *serviceaccount.Output
OperatorAccessServiceAccount *serviceaccount.Output
// CloudDeployExecutionServiceAccount is only provisioned if
// IsFinalStageOfRollout is true for this environment.
CloudDeployExecutionServiceAccount *serviceaccount.Output
}
type Variables struct {
@ -39,6 +42,10 @@ type Variables struct {
// SecretEnv should be the environment config that sources from secrets.
SecretEnv map[string]string
// IsFinalStageOfRollout should be true if BuildRolloutPipelineConfiguration
// provides a non-nil configuration for an environment.
IsFinalStageOfRollout bool
// PreventDestroys indicates if destroys should be allowed on core components of
// this resource.
PreventDestroys bool
@ -49,6 +56,8 @@ const StackName = "iam"
const (
OutputCloudRunServiceAccount = "cloud_run_service_account"
OutputOperatorServiceAccount = "operator_access_service_account"
OutputCloudDeployReleaserServiceAccountID = "cloud_deploy_releaser_service_account_id"
)
func NewStack(stacks *stack.Set, vars Variables) (*CrossStackOutput, error) {
@ -152,6 +161,10 @@ func NewStack(stacks *stack.Set, vars Variables) (*CrossStackOutput, error) {
},
)
googleBeta := google_beta.NewGoogleBetaProvider(stack, pointers.Ptr("google_beta"), &google_beta.GoogleBetaProviderConfig{
Project: &vars.ProjectID,
})
// Provision the default Cloud Run robot account so that we can grant it
// access to prerequisite resources.
cloudRunIdentity := googleprojectserviceidentity.NewGoogleProjectServiceIdentity(stack,
@ -161,9 +174,7 @@ func NewStack(stacks *stack.Set, vars Variables) (*CrossStackOutput, error) {
Service: pointers.Ptr("run.googleapis.com"),
// Only available via beta provider:
// https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/project_service_identity
Provider: google_beta.NewGoogleBetaProvider(stack, pointers.Ptr("google_beta"), &google_beta.GoogleBetaProviderConfig{
Project: &vars.ProjectID,
}),
Provider: googleBeta,
})
identityMember := pointers.Ptr(fmt.Sprintf("serviceAccount:%s", *cloudRunIdentity.Email()))
@ -207,14 +218,58 @@ func NewStack(stacks *stack.Set, vars Variables) (*CrossStackOutput, error) {
}
}
// Only referenced if vars.IsFinalStageOfRollout is true anyway, so safe
// to leave as nil.
var cloudDeployExecutorServiceAccount *serviceaccount.Output
if vars.IsFinalStageOfRollout {
cloudDeployExecutorServiceAccount = serviceaccount.New(stack,
id.Group("clouddeploy-executor"),
serviceaccount.Config{
ProjectID: vars.ProjectID,
AccountID: "clouddeploy-executor",
DisplayName: fmt.Sprintf("%s Cloud Deploy Executor Service Account", vars.Service.GetName()),
Roles: []serviceaccount.Role{
{
ID: resourceid.New("role_clouddeploy_job_runner"),
Role: "roles/clouddeploy.jobRunner",
},
},
},
)
cloudDeployReleaserServiceAccount := serviceaccount.New(stack,
id.Group("clouddeploy-releaser"),
serviceaccount.Config{
ProjectID: vars.ProjectID,
AccountID: "clouddeploy-releaser",
DisplayName: fmt.Sprintf("%s Cloud Deploy Releases Service Account", vars.Service.GetName()),
Roles: []serviceaccount.Role{
{
ID: resourceid.New("role_clouddeploy_releaser"),
Role: "roles/clouddeploy.releaser",
},
},
},
)
// For use in e.g. https://sourcegraph.sourcegraph.com/github.com/sourcegraph/infrastructure/-/blob/managed-services/continuous-deployment-pipeline/main.tf?L5-20
// For now, just provide the ID and ask users to configure the GH action
// workload identity pool elsewhere. This can be referenced directly from
// GSM of the environment secrets.
locals.Add(OutputCloudDeployReleaserServiceAccountID, cloudDeployReleaserServiceAccount.Email,
"Service Account ID for Cloud Deploy release creation - intended for workload identity federation in CI")
}
// Collect outputs
locals.Add(OutputCloudRunServiceAccount, workloadServiceAccount.Email,
"Service Account email used as Cloud Run resource workload identity")
locals.Add(OutputOperatorServiceAccount, operatorAccessServiceAccount.Email,
"Service Account email used for operator access to other resources")
return &CrossStackOutput{
CloudRunWorkloadServiceAccount: workloadServiceAccount,
OperatorAccessServiceAccount: operatorAccessServiceAccount,
CloudRunWorkloadServiceAccount: workloadServiceAccount,
OperatorAccessServiceAccount: operatorAccessServiceAccount,
CloudDeployExecutionServiceAccount: cloudDeployExecutorServiceAccount,
}, nil
}

@ -36,6 +36,7 @@ var gcpServices = []string{
"cloudprofiler.googleapis.com",
"cloudscheduler.googleapis.com",
"sqladmin.googleapis.com",
"clouddeploy.googleapis.com",
}
const (

@ -1,3 +1,4 @@
load("//dev:go_defs.bzl", "go_test")
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
@ -10,6 +11,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//dev/managedservicesplatform",
"//dev/managedservicesplatform/clouddeploy",
"//dev/managedservicesplatform/googlesecretsmanager",
"//dev/managedservicesplatform/operationdocs",
"//dev/managedservicesplatform/spec",
@ -34,3 +36,10 @@ go_library(
"@org_golang_x_exp//maps",
],
)
go_test(
name = "msp_test",
srcs = ["helpers_test.go"],
embed = [":msp"],
deps = ["@com_github_hexops_autogold_v2//:autogold"],
)

@ -11,7 +11,9 @@ import (
"github.com/urfave/cli/v2"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/clouddeploy"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/spec"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/stacks/cloudrun"
"github.com/sourcegraph/sourcegraph/dev/managedservicesplatform/terraformcloud"
"github.com/sourcegraph/sourcegraph/dev/sg/internal/std"
msprepo "github.com/sourcegraph/sourcegraph/dev/sg/msp/repo"
@ -166,7 +168,7 @@ func generateTerraform(serviceID string, opts generateTerraformOptions) error {
}
// Render environment
cdktf, err := renderer.RenderEnvironment(service.Service, service.Build, env, *service.Monitoring)
cdktf, err := renderer.RenderEnvironment(*service, env)
if err != nil {
return err
}
@ -176,8 +178,44 @@ func generateTerraform(serviceID string, opts generateTerraformOptions) error {
if err := cdktf.Synthesize(); err != nil {
return err
}
if rollout := service.BuildRolloutPipelineConfiguration(env); rollout != nil {
pending.Updatef("[%s] Building rollout pipeline configurations for environment %q...", serviceID, env.ID)
// region is currently fixed
region := cloudrun.GCPRegion
deploySpec, err := clouddeploy.RenderSpec(
service.Service,
service.Build,
*rollout,
region)
if err != nil {
return errors.Wrap(err, "render Cloud Deploy configuration file")
}
deploySpecFilename := fmt.Sprintf("rollout-%s.clouddeploy.yaml", region)
comment := generateCloudDeployDocstring(env.ProjectID, region, deploySpecFilename)
if err := os.WriteFile(
filepath.Join(filepath.Dir(serviceSpecPath), deploySpecFilename),
append([]byte(comment), deploySpec.Bytes()...),
0644,
); err != nil {
return errors.Wrap(err, "write Cloud Deploy configuration file")
}
skaffoldObject, err := clouddeploy.NewCloudRunCustomTargetSkaffoldAssetsArchive()
if err != nil {
return errors.Wrap(err, "create Cloud Deploy custom target skaffold YAML archive")
}
skaffoldObjectPath := filepath.Join(renderer.OutputDir, "stacks/cloudrun", cloudrun.ScaffoldSourceFile)
if err := os.WriteFile(skaffoldObjectPath, skaffoldObject.Bytes(), 0644); err != nil {
return errors.Wrap(err, "write Cloud Run custom target skaffold YAML archive")
}
}
pending.Complete(output.Styledf(output.StyleSuccess,
"[%s] Terraform assets generated in %q!", serviceID, renderer.OutputDir))
"[%s] Infrastructure assets generated in %q!", serviceID, renderer.OutputDir))
}
return nil
@ -208,3 +246,21 @@ func isHandbookRepo(relPath string) error {
}
return errors.Newf("unexpected package %q", packageJSON.Name)
}
func generateCloudDeployDocstring(projectID, gcpRegion, cloudDeployFilename string) string {
return fmt.Sprintf(`# DO NOT EDIT; generated by 'sg msp generate'
#
# This file defines additional Cloud Deploy configuration that is not yet available in Terraform.
# Apply this using the following command:
#
# gcloud deploy apply --project=%[1]s --region=%[2]s --file=%[3]s
#
# Releases can be created using the following command, which can be added to CI pipelines:
#
# gcloud deploy releases create $RELEASE_NAME --labels="commit=$COMMIT,author=$AUTHOR" --deploy-parameters="customTarget/tag=$TAG" --project=%[1]s --region=%[2]s --delivery-pipeline=%[1]s-%[2]s-rollout --source='gs://%[1]s-cloudrun-skaffold/source.tar.gz'
#
# The secret 'cloud_deploy_releaser_service_account_id' provides the ID of a service account
# that can be used to provision workload auth, for example https://sourcegraph.sourcegraph.com/github.com/sourcegraph/infrastructure/-/blob/managed-services/continuous-deployment-pipeline/main.tf?L5-20
`, // TODO improve the releases DX
projectID, gcpRegion, cloudDeployFilename)
}

@ -0,0 +1,25 @@
package msp
import (
"testing"
"github.com/hexops/autogold/v2"
)
func TestGenerateCloudDeployDocstring(t *testing.T) {
comment := generateCloudDeployDocstring("PROJECT_ID", "REGION", "rollout-REGION.clouddeploy.yaml")
autogold.Expect(`# DO NOT EDIT; generated by 'sg msp generate'
#
# This file defines additional Cloud Deploy configuration that is not yet available in Terraform.
# Apply this using the following command:
#
# gcloud deploy apply --project=PROJECT_ID --region=REGION --file=rollout-REGION.clouddeploy.yaml
#
# Releases can be created using the following command, which can be added to CI pipelines:
#
# gcloud deploy releases create $RELEASE_NAME --labels="commit=$COMMIT,author=$AUTHOR" --deploy-parameters="customTarget/tag=$TAG" --project=PROJECT_ID --region=REGION --delivery-pipeline=PROJECT_ID-REGION-rollout --source='gs://PROJECT_ID-cloudrun-skaffold/source.tar.gz'
#
# The secret 'cloud_deploy_releaser_service_account_id' provides the ID of a service account
# that can be used to provision workload auth, for example https://sourcegraph.sourcegraph.com/github.com/sourcegraph/infrastructure/-/blob/managed-services/continuous-deployment-pipeline/main.tf?L5-20
`).Equal(t, comment)
}