Wolfi CI improvements (#55398)

Improvements to how the Wolfi build system works in CI.

This PR combines action items from
https://github.com/sourcegraph/security/issues/556 and
https://github.com/sourcegraph/security/issues/489

- [x] If a package changes, automatically rebuild any images that depend
on it
- [x] Upload packages to branch-specific repos
  * On `main`, packages are uploaded to the `@sourcegraph` repository
  * On branches, packages are uploaded to per-branch repos for testing
- [x] Prevent packages in main repo from being overwritten
  * This should fail the pipeline, and the failure should be visible pre-merge
- [x] Run the wolfi CI pipeline on `main`
- [x] Run the wolfi CI pipeline on all non-`main` branches
- [x] Only push base images to Dockerhub from the `main` branch, but always push them to the dev repo.
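The branch-specific repo behaviour above can be sketched as a small helper (illustrative only; `branch_repo_path` is a hypothetical name, and the real bucket layout is defined in the CI scripts in this PR):

```shell
# Sketch of the branch-to-repo-path mapping used for package uploads.
# On main, packages go to the top-level repo path; on other branches,
# to a sanitized per-branch path under branches/.
branch_repo_path() {
  local branch="$1"
  local sanitized
  # Replace characters that are unsafe in a GCS path with '-'
  sanitized=$(echo "$branch" | sed 's/[^a-zA-Z0-9_-]/-/g')
  if [ "$branch" = "main" ]; then
    echo "packages/main"
  else
    echo "packages/branches/$sanitized"
  fi
}

branch_repo_path "main"            # → packages/main
branch_repo_path "wolfi/my-change" # → packages/branches/wolfi-my-change
```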


## Test plan

<!-- All pull requests REQUIRE a test plan:
https://docs.sourcegraph.com/dev/background-information/testing_principles
-->

- [x] green main-dry-run
https://buildkite.com/sourcegraph/sourcegraph/builds/237546
- [x] Full testing of CI pipeline
This commit is contained in:
Will Dollman 2023-08-10 10:45:52 +01:00, committed by GitHub
parent 87a9d8bde3 · commit fa2f1b510d
18 changed files with 453 additions and 94 deletions

@@ -14,7 +14,6 @@ const (
// RunTypes should be defined by order of precedence.
PullRequest RunType = iota // pull request build
WolfiExpBranch // branch that only builds wolfi images
ManuallyTriggered // build that is manually triggered - typically used to start CI for external contributions
// Nightly builds - must be first because they take precedence
@@ -149,10 +148,6 @@ func (t RunType) Matcher() *RunTypeMatcher {
return &RunTypeMatcher{
Branch: "_manually_triggered_external/",
}
case WolfiExpBranch:
return &RunTypeMatcher{
Branch: "wolfi/",
}
case ImagePatch:
return &RunTypeMatcher{
Branch: "docker-images-patch/",
@@ -189,8 +184,6 @@ func (t RunType) String() string {
switch t {
case PullRequest:
return "Pull request"
case WolfiExpBranch:
return "Wolfi Exp Branch"
case ManuallyTriggered:
return "Manually Triggered External Build"
case ReleaseNightly:

@@ -106,6 +106,7 @@ The default run type.
- Tests
- BackCompat Tests
- **Security Scanning**: Sonarcloud Scan
- **Dependency packages**: Build and sign repository index
- Pipeline for `WolfiBaseImages` changes:
- Perform bazel prechecks
@@ -120,18 +121,6 @@ The default run type.
- **Linters and static analysis**: Run sg lint
- **Security Scanning**: Sonarcloud Scan
### Wolfi Exp Branch
The run type for branches matching `wolfi/`.
You can create a build of this run type for your changes using:
```sh
sg ci build wolfi
```
Base pipeline (more steps might be included based on branch changes):
### Manually Triggered External Build
The run type for branches matching `_manually_triggered_external/`.

@@ -263,7 +263,6 @@ This command is useful when:
Supported run types when providing an argument for 'sg ci build [runtype]':
* wolfi - Wolfi Exp Branch
* _manually_triggered_external - Manually Triggered External Build
* main-dry-run - Main dry run
* docker-images-patch - Patch image

@@ -12,27 +12,30 @@ These configuration files can be processed with apko, which will generate a base
## How to...
### Update base image packages
### Update base images for a new release
In order to pull in updated packages with new features or fixed vulnerabilities, we need to periodically rebuild the base images.
Before each release, we should update the base images to ensure we include any updated packages and vulnerability fixes.
This is currently a two-step process, but will be automated in the future:
This is currently a two-step process, which will be further automated in the future:
- Run the [`wolfi-images/rebuild-images.sh`](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@588463afbb0904c125cdcf78c7b182f43328504e/-/blob/wolfi-images/rebuild-images.sh) script (with an optional argument to just update one base image), commit the updated YAML files, and merge to main.
- This will trigger Buildkite to rebuild the base images and publish them.
- Update the relevant Dockerfiles with the new base image's `sha256` hash, commit the change, and merge to main.
- NOTE: Currently we use the `latest` label, but we will switch to using a `sha256` tag once deployed in production.
- Run [`wolfi-images/rebuild-images.sh`](https://sourcegraph.com/github.com/sourcegraph/sourcegraph@588463afbb0904c125cdcf78c7b182f43328504e/-/blob/wolfi-images/rebuild-images.sh) script, commit the updated YAML files, and merge to main.
- Wait for the `main` branch's Buildkite run to complete.
- Buildkite will rebuild the base images and publish them to Dockerhub.
- Run `sg wolfi update-hashes` locally to update the base image hashes in `dev/oci_deps.bzl`. Commit these changes and merge to `main`.
- This fetches the updated base image hashes from the images that were pushed to Dockerhub in the previous step.
### Modify an existing base image
To modify a base image to add packages, users, or directories:
- Update its apko YAML configuration file, which can be found under [`wolfi-images/`](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/wolfi-images/)
- Build and test it locally using `sg wolfi image <image-name>`.
- You can use this local image in your Dockerfiles, or exec into it directly.
- Once happy with changes, create a PR and merge to main. Buildkite will detect the changes and rebuild the base image.
- Update the relevant Dockerfiles with the new base image's `sha256` hash, commit the change, and merge to main.
- NOTE: Currently we use the `latest` label, but we will switch to using a `sha256` tag once deployed in production.
- Build the image
- To build locally, use `sg wolfi image <image-name>`.
- To build on CI, push your changes and Buildkite will build your image and push it to our `us.gcr.io/sourcegraph-dev/` dev repository.
- Test your changes by exec-ing into the image, or update `dev/oci_deps.bzl` to point at your dev base image and build the full image with Bazel.
- Once happy, merge your changes and wait for the `main` branch's Buildkite run to complete.
- Buildkite will rebuild the base image and publish it to Dockerhub.
- Run `sg wolfi update-hashes <image-name>` to update the hashes for the changed image in `dev/oci_deps.bzl`. Commit and merge these changes.
### Create a new base image
@@ -44,7 +47,10 @@ Otherwise, you can create a new base image configuration file:
- Add any required packages, users, directory structure, or metadata.
- See [apko file format](https://github.com/chainguard-dev/apko/blob/main/docs/apko_file.md) for a full list of supported configuration.
- See the other images under [`wolfi-images/`](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/wolfi-images/) and [`chainguard-images/images`](https://github.com/chainguard-images/images/tree/main/images) for examples and best practices.
- Build your image locally with `sg wolfi image <image-name>`.
- Commit your updated YAML file and merge it to main. Buildkite will build and publish your new image.
- Build the image:
- To build locally, use `sg wolfi image <image-name>`.
- To build on CI, push your changes and Buildkite will build your image and push it to our `us.gcr.io/sourcegraph-dev/` dev repository.
- Test your changes by exec-ing into the image, or update `dev/oci_deps.bzl` to point at your dev base image and build the full image with Bazel.
- Commit your updated YAML file and merge it to main. Buildkite will build and publish your new image to Dockerhub.
Once complete, treat the published image as a standard base image, and use it in your Dockerfile.
Once complete, treat the published image as a standard Docker image, and add it to `dev/oci_deps.bzl`.

@@ -19,8 +19,11 @@ This markdown is used to generate an annotation at the top of every build, displ
<li><strong>ESLint</strong> is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code, with the goal of making code more consistent and avoiding bugs.</li>
<li><strong>golangci-lint</strong> is a fast Go linters runner, providing checks for errors, bugs, performance issues, and style inconsistencies in your Go code.</li>
<li><strong>nogo</strong> is a tool within the Bazel ecosystem that serves as a linter and static analyzer for Go code, checking for programming errors, bugs, stylistic errors, and suspicious constructs.</li>
<li><strong>Packages</strong> are a simple way of managing dependencies in our container images.</li>
<li><strong>pnpm</strong> is a fast, disk space efficient package manager for JavaScript and Node.js that works in a similar manner to npm and Yarn, but uses a different approach to storing and linking packages.</li>
<li><strong>Webpack</strong> is a static module bundler for modern JavaScript applications, transforming a multitude of file types into a single output file.</li>
<li><strong>Wolfi</strong> is a minimal, hardened Linux distro that's designed for containers.</li>
<li><strong>Wolfi Base Images</strong> are minimal container images that contain all the dependencies needed for our final images.</li>
</ul>
<p>Want to update the glossary? See the <a href="https://docs.sourcegraph.com/dev/how-to/update_ci_glossary">how-to in the docs.</a></p>

@@ -16,6 +16,7 @@ go_library(
visibility = ["//enterprise/dev/ci:__subpackages__"],
deps = [
"//dev/ci/runtype",
"//dev/sg/root",
"//enterprise/dev/ci/images",
"//enterprise/dev/ci/internal/buildkite",
"//enterprise/dev/ci/internal/ci/changed",
@@ -25,11 +26,18 @@ go_library(
"//lib/errors",
"@com_github_masterminds_semver//:semver",
"@com_github_sourcegraph_log//:log",
"@in_gopkg_yaml_v2//:yaml_v2",
],
)
go_test(
name = "ci_test",
srcs = ["bazel_operations_test.go"],
srcs = [
"bazel_operations_test.go",
"wolfi_operations_test.go",
],
data = [
"//enterprise/dev/ci/internal/ci/test:test-image-configs",
],
embed = [":ci"],
)

@@ -130,29 +130,6 @@ func GeneratePipeline(c Config) (*bk.Pipeline, error) {
if bzlCmd == "" {
return nil, errors.Newf("no bazel command was given")
}
case runtype.WolfiExpBranch:
// Rebuild packages if package configs have changed
updatePackages := c.Diff.Has(changed.WolfiPackages)
// Rebuild base images if base image OR package configs have changed
updateBaseImages := c.Diff.Has(changed.WolfiBaseImages) || updatePackages
var numUpdatedPackages int
if updatePackages {
var packageOps *operations.Set
packageOps, numUpdatedPackages = WolfiPackagesOperations(c.ChangedFiles[changed.WolfiPackages])
ops.Merge(packageOps)
}
if updateBaseImages {
var baseImageOps *operations.Set
baseImageOps, _ = WolfiBaseImagesOperations(
c.ChangedFiles[changed.WolfiBaseImages], // TODO: If packages have changed need to update all base images. Requires a list of all base images
c.Version,
(numUpdatedPackages > 0),
)
ops.Merge(baseImageOps)
}
case runtype.PullRequest:
// First, we set up core test operations that apply both to PRs and to other run
// types such as main.
@@ -166,6 +143,9 @@ func GeneratePipeline(c Config) (*bk.Pipeline, error) {
securityOps.Append(sonarcloudScan())
ops.Merge(securityOps)
// Wolfi package and base images
addWolfiOps(c, ops)
// Now we set up conditional operations that only apply to pull requests.
if c.Diff.Has(changed.Client) {
// triggers a slow pipeline, currently only affects web. It's optional so we
@@ -328,6 +308,9 @@ func GeneratePipeline(c Config) (*bk.Pipeline, error) {
// testUpgrade(c.candidateImageTag(), minimumUpgradeableVersion),
))
// Wolfi package and base images
addWolfiOps(c, ops)
// All operations before this point are required
ops.Append(wait)
@@ -427,3 +410,33 @@ func withAgentLostRetries(s *bk.Step) {
ExitStatus: -1,
})
}
// addWolfiOps adds operations to rebuild modified Wolfi packages and base images.
func addWolfiOps(c Config, ops *operations.Set) {
// Rebuild Wolfi packages that have config changes
var updatedPackages []string
if c.Diff.Has(changed.WolfiPackages) {
var packageOps *operations.Set
packageOps, updatedPackages = WolfiPackagesOperations(c.ChangedFiles[changed.WolfiPackages])
ops.Merge(packageOps)
}
// Rebuild Wolfi base images
// Inspect package dependencies, and rebuild base images with updated packages
_, imagesWithChangedPackages, err := GetDependenciesOfPackages(updatedPackages, "sourcegraph")
if err != nil {
panic(err)
}
// Rebuild base images with package changes AND with config changes
imagesToRebuild := append(imagesWithChangedPackages, c.ChangedFiles[changed.WolfiBaseImages]...)
imagesToRebuild = sortUniq(imagesToRebuild)
if len(imagesToRebuild) > 0 {
baseImageOps, _ := WolfiBaseImagesOperations(
imagesToRebuild,
c.Version,
(len(updatedPackages) > 0),
)
ops.Merge(baseImageOps)
}
}

@@ -0,0 +1,5 @@
filegroup(
name = "test-image-configs",
srcs = glob(["wolfi-images/*"]),
visibility = ["//enterprise/dev/ci/internal/ci:__pkg__"],
)

@@ -0,0 +1,21 @@
# Base image config, used for unit tests
contents:
packages:
- tini
- mailcap
- git
- wolfi-test-package@sourcegraph
- wolfi-test-package-subpackage@sourcegraph
- foobar-package
accounts:
run-as: sourcegraph
groups:
- groupname: sourcegraph
gid: 101
users:
- username: sourcegraph
uid: 100
gid: 101

@@ -0,0 +1,21 @@
# Base image config, used for unit tests
contents:
packages:
- tini
- mailcap
- git
- foobar-package
- wolfi-test-package-subpackage@sourcegraph
- wolfi-test-package-2@sourcegraph
accounts:
run-as: sourcegraph
groups:
- groupname: sourcegraph
gid: 101
users:
- username: sourcegraph
uid: 100
gid: 101

@@ -2,40 +2,49 @@ package ci
import (
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"github.com/sourcegraph/log"
"gopkg.in/yaml.v2"
"github.com/sourcegraph/sourcegraph/dev/sg/root"
bk "github.com/sourcegraph/sourcegraph/enterprise/dev/ci/internal/buildkite"
"github.com/sourcegraph/sourcegraph/enterprise/dev/ci/internal/ci/operations"
"github.com/sourcegraph/sourcegraph/internal/lazyregexp"
)
const wolfiImageDir = "wolfi-images"
const wolfiPackageDir = "wolfi-packages"
var baseImageRegex = lazyregexp.New(`wolfi-images\/([\w-]+)[.]yaml`)
var packageRegex = lazyregexp.New(`wolfi-packages\/([\w-]+)[.]yaml`)
// WolfiPackagesOperations rebuilds any packages whose configurations have changed
func WolfiPackagesOperations(changedFiles []string) (*operations.Set, int) {
// TODO: Should we require the image name, or the full path to the yaml file?
func WolfiPackagesOperations(changedFiles []string) (*operations.Set, []string) {
ops := operations.NewNamedSet("Dependency packages")
var changedPackages []string
var buildStepKeys []string
for _, c := range changedFiles {
match := packageRegex.FindStringSubmatch(c)
if len(match) == 2 {
changedPackages = append(changedPackages, match[1])
buildFunc, key := buildPackage(match[1])
ops.Append(buildFunc)
buildStepKeys = append(buildStepKeys, key)
}
}
ops.Append(buildRepoIndex("main", buildStepKeys))
ops.Append(buildRepoIndex(buildStepKeys))
return ops, len(buildStepKeys)
return ops, changedPackages
}
// WolfiBaseImagesOperations rebuilds any base images whose configurations have changed
func WolfiBaseImagesOperations(changedFiles []string, tag string, packagesChanged bool) (*operations.Set, int) {
// TODO: Should we require the image name, or the full path to the yaml file?
ops := operations.NewNamedSet("Base image builds")
logger := log.Scoped("gen-pipeline", "generates the pipeline for ci")
@@ -72,10 +81,10 @@ func buildPackage(target string) (func(*bk.Pipeline), string) {
}, stepKey
}
func buildRepoIndex(branch string, packageKeys []string) func(*bk.Pipeline) {
func buildRepoIndex(packageKeys []string) func(*bk.Pipeline) {
return func(pipeline *bk.Pipeline) {
pipeline.AddStep(fmt.Sprintf(":card_index_dividers: Build and sign repository index for branch '%s'", branch),
bk.Cmd(fmt.Sprintf("./enterprise/dev/ci/scripts/wolfi/build-repo-index.sh %s", branch)),
pipeline.AddStep(":card_index_dividers: Build and sign repository index",
bk.Cmd("./enterprise/dev/ci/scripts/wolfi/build-repo-index.sh"),
// We want to run on the bazel queue, so we have a pretty minimal agent.
bk.Agent("queue", "bazel"),
// Depend on all previous package building steps
@@ -129,3 +138,128 @@ var reStepKeySanitizer = lazyregexp.New(`[^a-zA-Z0-9_-]+`)
func sanitizeStepKey(key string) string {
return reStepKeySanitizer.ReplaceAllString(key, "")
}
// GetDependenciesOfPackages takes a list of packages and returns the set of base images that depend on these packages
// Returns two slices: the image names, and the paths to the associated config files
func GetDependenciesOfPackages(packageNames []string, repo string) (images []string, imagePaths []string, err error) {
repoRoot, err := root.RepositoryRoot()
if err != nil {
return nil, nil, err
}
wolfiImageDirPath := filepath.Join(repoRoot, wolfiImageDir)
packagesByImage, err := GetAllImageDependencies(wolfiImageDirPath)
if err != nil {
return nil, nil, err
}
// Create a list of images that depend on packageNames
for _, packageName := range packageNames {
i := GetDependenciesOfPackage(packagesByImage, packageName, repo)
images = append(images, i...)
}
// Dedupe image names
images = sortUniq(images)
// Append paths to returned image names
imagePaths = imagesToImagePaths(wolfiImageDir, images)
return
}
// GetDependenciesOfPackage returns the list of base images that depend on the given package
func GetDependenciesOfPackage(packagesByImage map[string][]string, packageName string, repo string) (images []string) {
// Use a regex to catch cases like the `jaeger` package which builds `jaeger-agent` and `jaeger-all-in-one`
var packageNameRegex = lazyregexp.New(fmt.Sprintf(`^%s(?:-[a-z0-9-]+)?$`, packageName))
if repo != "" {
packageNameRegex = lazyregexp.New(fmt.Sprintf(`^%s(?:-[a-z0-9-]+)?@%s`, packageName, repo))
}
for image, packages := range packagesByImage {
for _, p := range packages {
match := packageNameRegex.FindStringSubmatch(p)
if len(match) > 0 {
images = append(images, image)
}
}
}
// Dedupe image names
images = sortUniq(images)
return
}
// Add directory path and .yaml extension to each image name
func imagesToImagePaths(path string, images []string) (imagePaths []string) {
for _, image := range images {
imagePaths = append(imagePaths, filepath.Join(path, image)+".yaml")
}
return
}
func sortUniq(inputs []string) []string {
unique := make(map[string]bool)
var dedup []string
for _, input := range inputs {
if !unique[input] {
unique[input] = true
dedup = append(dedup, input)
}
}
sort.Strings(dedup)
return dedup
}
// GetAllImageDependencies returns a map of base images to the list of packages they depend upon
func GetAllImageDependencies(wolfiImageDir string) (packagesByImage map[string][]string, err error) {
packagesByImage = make(map[string][]string)
files, err := os.ReadDir(wolfiImageDir)
if err != nil {
return nil, err
}
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".yaml") {
continue
}
filename := filepath.Join(wolfiImageDir, f.Name())
imageName := strings.Replace(f.Name(), ".yaml", "", 1)
packages, err := getPackagesFromBaseImageConfig(filename)
if err != nil {
return nil, err
}
packagesByImage[imageName] = packages
}
return
}
// BaseImageConfig follows a subset of the structure of a Wolfi base image manifest
type BaseImageConfig struct {
Contents struct {
Packages []string `yaml:"packages"`
} `yaml:"contents"`
}
// getPackagesFromBaseImageConfig reads a base image config file and extracts the list of packages it depends on
func getPackagesFromBaseImageConfig(configFile string) ([]string, error) {
var config BaseImageConfig
yamlFile, err := os.ReadFile(configFile)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(yamlFile, &config)
if err != nil {
return nil, err
}
return config.Contents.Packages, nil
}
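The package-to-image matching in `GetDependenciesOfPackage` uses a regex of the form `^<pkg>(?:-[a-z0-9-]+)?@<repo>` so that subpackages (e.g. `jaeger-agent` built from the `jaeger` package) are also caught. A quick shell illustration using the equivalent ERE (the `matches` helper is hypothetical, for demonstration only):

```shell
# Does a package entry from a base image config belong to the 'jaeger'
# package (or one of its subpackages) in the 'sourcegraph' repo?
matches() { echo "$1" | grep -Eq '^jaeger(-[a-z0-9-]+)?@sourcegraph'; }

matches "jaeger@sourcegraph"       && echo match     # the package itself
matches "jaeger-agent@sourcegraph" && echo match     # a subpackage
matches "jaegertracing@sourcegraph" || echo no-match # unrelated package
```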

@@ -0,0 +1,116 @@
package ci
import (
"reflect"
"testing"
)
func Test_sanitizeStepKey(t *testing.T) {
type args struct {
key string
}
tests := []struct {
name string
key string
want string
}{
{
"Test 1",
"foo!@£_bar$%^baz;'-bam",
"foo_barbaz-bam",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := sanitizeStepKey(tt.key); got != tt.want {
t.Errorf("sanitizeStepKey() = %v, want %v", got, tt.want)
}
})
}
}
func TestGetAllImageDependencies(t *testing.T) {
type args struct {
wolfiImageDir string
}
tests := []struct {
name string
wolfiImageDir string
wantPackagesByImage map[string][]string
wantErr bool
}{
{
"Test 1",
"test/wolfi-images",
map[string][]string{
"wolfi-test-image-1": {
"tini",
"mailcap",
"git",
"wolfi-test-package@sourcegraph",
"wolfi-test-package-subpackage@sourcegraph",
"foobar-package",
},
"wolfi-test-image-2": {
"tini",
"mailcap",
"git",
"foobar-package",
"wolfi-test-package-subpackage@sourcegraph",
"wolfi-test-package-2@sourcegraph",
},
},
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wolfiImageDirPath := tt.wolfiImageDir
gotPackagesByImage, err := GetAllImageDependencies(wolfiImageDirPath)
if (err != nil) != tt.wantErr {
t.Errorf("GetAllImageDependencies() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(gotPackagesByImage, tt.wantPackagesByImage) {
t.Errorf("GetAllImageDependencies() = %v, want %v", gotPackagesByImage, tt.wantPackagesByImage)
}
})
}
}
func TestGetDependenciesOfPackage(t *testing.T) {
type args struct {
packageName string
repo string
}
tests := []struct {
name string
args args
wantImages []string
}{
{
"Test wolfi-test-package and subpackage",
args{packageName: "wolfi-test-package", repo: "sourcegraph"},
[]string{"wolfi-test-image-1", "wolfi-test-image-2"},
},
{
"Test wolfi-test-package-2",
args{packageName: "wolfi-test-package-2", repo: "sourcegraph"},
[]string{"wolfi-test-image-2"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wolfiImageDirPath := "test/wolfi-images"
gotPackagesByImage, err := GetAllImageDependencies(wolfiImageDirPath)
if err != nil {
t.Errorf("Error running GetAllImageDependencies() error = %v", err)
return
}
if gotImages := GetDependenciesOfPackage(gotPackagesByImage, tt.args.packageName, tt.args.repo); !reflect.DeepEqual(gotImages, tt.wantImages) {
t.Errorf("GetDependenciesOfPackage() = %v, want %v", gotImages, tt.wantImages)
}
})
}
}

@@ -4,6 +4,10 @@ set -euf -o pipefail
cd "$(dirname "${BASH_SOURCE[0]}")/../../../../.."
MAIN_BRANCH="main"
BRANCH="${BUILDKITE_BRANCH:-default-branch}"
IS_MAIN=$([ "$BRANCH" = "$MAIN_BRANCH" ] && echo "true" || echo "false")
tmpdir=$(mktemp -d -t wolfi-bin.XXXXXXXX)
function cleanup() {
echo "Removing $tmpdir"
@@ -66,13 +70,13 @@ docker load <"$tarball"
# Push to internal dev repo
docker tag "$image_name" "us.gcr.io/sourcegraph-dev/wolfi-${name}-base:$tag"
docker push "us.gcr.io/sourcegraph-dev/wolfi-${name}-base:$tag"
# TODO(will): Limit to main branch when Wolfi CI is running on main
docker tag "$image_name" "us.gcr.io/sourcegraph-dev/wolfi-${name}-base:latest"
docker push "us.gcr.io/sourcegraph-dev/wolfi-${name}-base:latest"
# Push to dockerhub
# TODO(will): Limit to main branch when Wolfi CI is running on main
docker tag "$image_name" "sourcegraph/wolfi-${name}-base:$tag"
docker push "sourcegraph/wolfi-${name}-base:$tag"
docker tag "$image_name" "sourcegraph/wolfi-${name}-base:latest"
docker push "sourcegraph/wolfi-${name}-base:latest"
# Push to Dockerhub only on main branch
if [[ "$IS_MAIN" == "true" ]]; then
docker tag "$image_name" "sourcegraph/wolfi-${name}-base:$tag"
docker push "sourcegraph/wolfi-${name}-base:$tag"
docker tag "$image_name" "sourcegraph/wolfi-${name}-base:latest"
docker push "sourcegraph/wolfi-${name}-base:latest"
fi
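The push gating above can be reduced to a small helper (a sketch under the script's naming; `push_targets` is a hypothetical function): images always go to the dev registry, and Dockerhub only from `main`.

```shell
# Sketch: compute the registries an image will be pushed to for a branch.
push_targets() {
  local branch="$1" name="$2" tag="$3"
  # Dev registry: always
  echo "us.gcr.io/sourcegraph-dev/wolfi-${name}-base:${tag}"
  # Dockerhub: main branch only
  if [ "$branch" = "main" ]; then
    echo "sourcegraph/wolfi-${name}-base:${tag}"
  fi
}

push_targets "main" "redis" "v1"       # dev repo + Dockerhub
push_targets "wolfi/test" "redis" "v1" # dev repo only
```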

@@ -10,7 +10,15 @@ key_path=$(realpath ./wolfi-packages/temporary-keys/)
GCP_PROJECT="sourcegraph-ci"
GCS_BUCKET="package-repository"
TARGET_ARCH="x86_64"
branch="main"
MAIN_BRANCH="main"
BRANCH="${BUILDKITE_BRANCH:-default-branch}"
IS_MAIN=$([ "$BRANCH" = "$MAIN_BRANCH" ] && echo "true" || echo "false")
# shellcheck disable=SC2001
BRANCH_PATH=$(echo "$BRANCH" | sed 's/[^a-zA-Z0-9_-]/-/g')
if [[ "$IS_MAIN" != "true" ]]; then
BRANCH_PATH="branches/$BRANCH_PATH"
fi
tmpdir=$(mktemp -d -t melange-bin.XXXXXXXX)
function cleanup() {
@@ -37,7 +45,7 @@ apkindex_build_dir=$(mktemp -d -t apkindex-build.XXXXXXXX)
pushd "$apkindex_build_dir"
# Fetch all APKINDEX fragments from bucket
gsutil -u "$GCP_PROJECT" -m cp "gs://$GCS_BUCKET/packages/$branch/$TARGET_ARCH/*.APKINDEX.fragment" ./
gsutil -u "$GCP_PROJECT" -m cp "gs://$GCS_BUCKET/packages/$BRANCH_PATH/$TARGET_ARCH/*.APKINDEX.fragment" ./
# Concat all fragments into a single APKINDEX and tar.gz it
touch placeholder.APKINDEX.fragment
@@ -46,8 +54,9 @@ touch DESCRIPTION
tar zcf APKINDEX.tar.gz APKINDEX DESCRIPTION
# Sign index
# TODO: Use separate keys for staging and prod repos
melange sign-index --signing-key "$key_path/melange.rsa" APKINDEX.tar.gz
# Upload signed APKINDEX archive
# Use no-cache to avoid index/packages getting out of sync
gsutil -u "$GCP_PROJECT" -h "Cache-Control:no-cache" cp APKINDEX.tar.gz "gs://$GCS_BUCKET/packages/$branch/$TARGET_ARCH/"
gsutil -u "$GCP_PROJECT" -h "Cache-Control:no-cache" cp APKINDEX.tar.gz "gs://$GCS_BUCKET/packages/$BRANCH_PATH/$TARGET_ARCH/"
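The index-assembly steps in this script can be exercised locally (a minimal sketch: fake fragments stand in for the per-package `APKINDEX.fragment` files fetched from the bucket, and the `melange sign-index` step is omitted since it needs a signing key):

```shell
# Concatenate per-package APKINDEX fragments into a single APKINDEX,
# then tar.gz it alongside a DESCRIPTION file, as build-repo-index.sh does.
workdir=$(mktemp -d -t apkindex-demo.XXXXXXXX)
cd "$workdir"
printf 'pkg-a metadata\n' > a.APKINDEX.fragment
printf 'pkg-b metadata\n' > b.APKINDEX.fragment
touch placeholder.APKINDEX.fragment
cat ./*.APKINDEX.fragment > APKINDEX
touch DESCRIPTION
tar zcf APKINDEX.tar.gz APKINDEX DESCRIPTION
tar tzf APKINDEX.tar.gz  # lists APKINDEX and DESCRIPTION
```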

@@ -8,24 +8,35 @@ cd "$(dirname "${BASH_SOURCE[0]}")/../../../../.."
GCP_PROJECT="sourcegraph-ci"
GCS_BUCKET="package-repository"
TARGET_ARCH="x86_64"
branch="main"
MAIN_BRANCH="main"
BRANCH="${BUILDKITE_BRANCH:-default-branch}"
IS_MAIN=$([ "$BRANCH" = "$MAIN_BRANCH" ] && echo "true" || echo "false")
# shellcheck disable=SC2001
BRANCH_PATH=$(echo "$BRANCH" | sed 's/[^a-zA-Z0-9_-]/-/g')
if [[ "$IS_MAIN" != "true" ]]; then
BRANCH_PATH="branches/$BRANCH_PATH"
fi
cd wolfi-packages/packages/$TARGET_ARCH
# Use GCP tooling to upload new package to repo, ensuring it's on the right branch.
# Check that this exact package does not already exist in the repo - fail if so
# TODO: Support branches for uploading
# TODO: Check for existing files only if we're on main - overwriting is permitted on branches
echo " * Uploading package to repository"
# List all .apk files under wolfi-packages/packages/$TARGET_ARCH/
error="false"
package_usage_list=""
apks=(*.apk)
for apk in "${apks[@]}"; do
echo " * Processing $apk"
dest_path="gs://$GCS_BUCKET/packages/$branch/$TARGET_ARCH/"
echo " -> File path: $dest_path / $apk"
# Generate the branch-specific path to upload the package to
dest_path="gs://$GCS_BUCKET/packages/$BRANCH_PATH/$TARGET_ARCH/"
echo " -> File path: ${dest_path}${apk}"
# Generate the path to the package file on the main branch
dest_path_main="gs://$GCS_BUCKET/packages/$MAIN_BRANCH/$TARGET_ARCH/"
# Generate index fragment for this package
melange index -o "$apk.APKINDEX.tar.gz" "$apk"
@@ -34,18 +45,45 @@ for apk in "${apks[@]}"; do
mv APKINDEX "$index_fragment"
echo " * Generated index fragment '$index_fragment'"
# Check if this version of the package already exists in bucket
echo " * Checking if this package version already exists in repo..."
if gsutil -q -u "$GCP_PROJECT" stat "$dest_path/$apk"; then
echo "$apk: A package with this version already exists, and cannot be overwritten."
echo "Resolve this issue by incrementing the \`epoch\` field in the package's YAML file."
# exit 1
# Check whether this version of the package already exists in the main package repo
echo " * Checking if this package version already exists in the production repo..."
if gsutil -q -u "$GCP_PROJECT" stat "${dest_path_main}${apk}"; then
echo "The production package repository already contains a package with this version: $apk" >&2
echo " -> Production repository file path: ${dest_path_main}${apk}" >&2
echo "Resolve this issue by incrementing the \`epoch\` field in the package's YAML file." >&2
# Soft fail at the end - we still want to allow the package to be uploaded for cases like a Buildkite pipeline being rerun
error="true"
else
echo " * File does not exist, uploading..."
fi
# TODO: Pass -n when on main to avoid accidental overwriting
# no-cache to avoid index/packages getting out of sync
echo " * Uploading package and index fragment to repo"
gsutil -u "$GCP_PROJECT" -h "Cache-Control:no-cache" cp "$apk" "$index_fragment" "$dest_path"
# Concat package names for annotation
package_name=$(echo "$apk" | sed -E 's/(-[0-9].*)//')
package_usage_list="$package_usage_list - ${package_name}@branch\n"
done
# Show package usage message on branches
if [[ "$IS_MAIN" != "true" ]]; then
# TODO: Update keyring when keys change: https://storage.googleapis.com/package-repository/packages/${BRANCH_PATH}/melange.rsa.pub
if [[ -n "${BUILDKITE:-}" ]]; then
echo -e "To use this package locally, add the following lines to your base image config:
\`\`\`
contents:
keyring:
- https://storage.googleapis.com/package-repository/packages/melange.rsa.pub
repositories:
- '@branch https://storage.googleapis.com/package-repository/packages/${BRANCH_PATH}'
packages:
$package_usage_list
\`\`\`" | ../../../enterprise/dev/ci/scripts/annotate.sh -s "custom-repo" -m -t "info"
fi
fi
if [[ "$error" == "true" ]]; then
exit 222 # Soft fail
fi
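The package name shown in the branch annotation is derived from the `.apk` filename by stripping everything from the first `-<digit>` onward, as the `sed` call above does. A quick illustration (the `pkg_name` helper is hypothetical):

```shell
# Strip the version suffix from an apk filename to recover the package name.
pkg_name() { echo "$1" | sed -E 's/(-[0-9].*)//'; }

pkg_name "jaeger-1.45.0-r2.apk"       # → jaeger
pkg_name "jaeger-agent-1.45.0-r2.apk" # → jaeger-agent
# Caveat: a numeric name component is also stripped by this heuristic.
pkg_name "wolfi-test-package-2-1.0.0-r0.apk" # → wolfi-test-package
```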

@@ -31,4 +31,4 @@ work-dir: /redis-data
entrypoint:
command: redis-server
# MANUAL REBUILD: Thu Jun 22 13:43:35 BST 2023
# MANUAL REBUILD: Fri Jul 28 15:24:48 BST 2023

@@ -3,7 +3,7 @@
package:
name: ctags
version: f95bb3497f53748c2b6afc7f298cff218103ab90
epoch: 1
epoch: 2
description: "A maintained ctags implementation"
target-architecture:
- x86_64

@@ -4,7 +4,7 @@
package:
name: jaeger
version: 1.45.0 # Keep in sync with version in sg.config.yaml
epoch: 1
epoch: 2
description: "Distributed Tracing Platform"
target-architecture:
- x86_64