Docs update for new Kustomize setup (#47772)

Docs update for the new base cluster with the new Kustomize setup

Co-authored-by: Jason Hawk Harris <jasonhawkharris@gmail.com>
Co-authored-by: Warren Gifford <warren@sourcegraph.com>
Co-authored-by: Jacob Pleiness <jdpleiness@users.noreply.github.com>
This commit is contained in:
Beatrix 2023-02-23 13:52:41 -08:00 committed by GitHub
parent 7b927ed00e
commit 5b3df7373e
28 changed files with 2566 additions and 1084 deletions


@@ -17,7 +17,8 @@ All notable changes to Sourcegraph are documented in this file.
### Added
-
- Kubernetes Deployments: Introduced a new Kubernetes deployment option ([deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s)) to deploy Sourcegraph with Kustomize. [#46755](https://github.com/sourcegraph/sourcegraph/issues/46755)
- Kubernetes Deployments: The new Kustomize deployment ([deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s)) introduces a new base cluster that runs all Sourcegraph services as non-root users with limited privileges and eliminates the need to create RBAC resources. [#4213](https://github.com/sourcegraph/deploy-sourcegraph/pull/4213)
### Changed


@@ -1,21 +1,32 @@
---
title: Deployment Overview
---
<style>
.markdown-body aside p:before {
content: '';
display: inline-block;
height: 1.2em;
width: 1em;
background-size: contain;
background-repeat: no-repeat;
background-image: url(../code_monitoring/file-icon.svg);
margin-right: 0.2em;
margin-bottom: -0.29em;
}
</style>
# Deployment Overview
Sourcegraph supports different deployment methods for different purposes. Each deployment type requires different levels of investment and technical understanding. What works best for you and your team depends on the needs and desired outcomes for your business.
Sourcegraph offers multiple deployment options to suit different needs. The appropriate option for your organization depends on your goals and requirements, as well as the technical expertise and resources available. The following sections provide an overview of the available options and their associated investments and technical demands.
If you aren't currently working with our Customer Engineering team, this overview will provide a high-level view of what's available and needed depending on the deployment type you choose.
## Deployment types
In general:
Carefully consider your organization's needs and technical expertise when selecting a Sourcegraph deployment method. The method you choose cannot be changed for a running instance, so make an informed decision. The available methods have different capabilities, and the following sections provide recommendations to help you choose.
- For most customers, we recommend Sourcegraph Cloud, managed entirely by Sourcegraph.
- For customers who want to self-host, we recommend one of the single-node deployment options.
- For enterprise customers that require a multi-node, self-hosted deployment, we offer a Kubernetes option. We strongly encourage you to get in touch by email (sales@sourcegraph.com) if you are pursuing this option.
- If you are short on time and looking for a quick way to test Sourcegraph locally, consider running Sourcegraph via our [Docker Single Container](docker-single-container/index.md).
### [Sourcegraph Cloud](https://signup.sourcegraph.com/)
## Recommended
**For Enterprises looking for a Cloud solution.**
A cloud instance hosted and maintained by Sourcegraph
<div>
<a class="cloud-cta" href="https://signup.sourcegraph.com" target="_blank" rel="noopener noreferrer">
@@ -29,20 +40,14 @@ In general:
</div>
</a>
</div>
## Deployment types
To start, you will need to decide on your deployment method, including Kubernetes with or without Helm, as they are not interchangeable. In short, you **cannot** change the deployment type of a running instance.
Each of the deployment types listed below provides a different level of capability. As mentioned previously, you should pick a deployment type based on the needs of your business. However, you should also consider the technical expertise available for your deployment. The sections below provide more detailed recommendations for each deployment type.
Sourcegraph provides a [resource estimator](resource_estimator.md) to help predict and plan the resources required for your deployment. This tool helps ensure you provision appropriate resources to scale your instance.
### [Machine Images](machine-images/index.md)
<span class="badge badge-note">RECOMMENDED</span> Customized machine images allow you to spin up a preconfigured and customized Sourcegraph instance with just a few clicks, all in less than 10 minutes!
**For Enterprises looking for a self-hosted solution on Cloud.**
Currently available in the following hosts:
An option to run Sourcegraph on your own infrastructure using pre-configured machine images.
Customized machine images allow you to spin up a preconfigured and customized Sourcegraph instance with just a few clicks, all in less than 10 minutes. Currently available in the following hosts:
<div class="getting-started">
<a class="btn btn-secondary text-center" href="machine-images/aws-ami"><span>AWS AMIs</span></a>
@@ -50,75 +55,41 @@ Currently available in the following hosts:
<a class="btn btn-secondary text-center" href="machine-images/gce"><span>Google Compute Images</span></a>
</div>
### [Single-machine install-script](single-node/script.md)
### [Install-script](single-node/script.md)
Quickly install Sourcegraph onto a single Linux machine using our install script.
Sourcegraph provides an install script that can deploy Sourcegraph instances to Linux-based virtual machines. This method is recommended for:
### [Kubernetes](kubernetes/index.md) or [Kubernetes with Helm](kubernetes/helm.md)
- On-premises deployments (your own infrastructure)
- Deployments to cloud providers that are not officially supported
Kubernetes is recommended for non-standard deployments where our Machine Images or install script are not viable options.
>NOTE: Deploying with machine images requires technical expertise and the ability to maintain and manage your own infrastructure.
This path will require advanced knowledge of Kubernetes. For teams without the ability to support this, please speak to your Sourcegraph contact about using our other deployments instead.
### [Kubernetes](kubernetes/index.md)
Helm provides a simple mechanism for deployment customizations, as well as a much simpler upgrade experience.
**For large Enterprises that require a multi-node, self-hosted solution.**
If you are unable to use Helm to deploy, but still want to use Kubernetes, follow our [Kubernetes deployment documentation](kubernetes/index.md). This path will require advanced knowledge of Kubernetes. For teams without the ability to support this, please speak to your Sourcegraph contact about using our other deployments instead.
- **Kustomize** utilizes the built-in features of kubectl to provide maximum flexibility in configuring your deployment
- **Helm** offers a simpler deployment process but with less customization flexibility
---
We highly recommend deploying Sourcegraph on Kubernetes with Kustomize due to the flexibility it provides.
## Reference repositories
<div class="getting-started">
<a class="btn btn-secondary text-center" href="kubernetes/index"><span>Kustomize</span></a>
<a class="btn btn-secondary text-center" href="kubernetes/helm"><span>Helm</span></a>
</div>
Sourcegraph provides reference repositories with branches corresponding to the version of Sourcegraph you wish to deploy. The reference repository contains everything you need to spin up and configure your instance depending on your deployment type, which also assists in your upgrade process going forward.
>NOTE: Given the technical knowledge required to deploy and maintain on Kubernetes, teams without these resources should contact their Sourcegraph representative at [sales@sourcegraph.com](mailto:sales@sourcegraph.com) to discuss alternative deployment options.
For more information, follow the installation and configuration docs for your specific deployment type.
### Local machines
## Configuration
**For setting up non-production environments on local machines.**
Configuration at the deployment level focuses on ensuring your Sourcegraph deployment runs optimally, based on the size of your repositories and the number of users. Configuration options vary based on the type of deployment you choose. Consult the configuration section for your specific deployment type for additional information.
- <span class="badge badge-experimental">Experimental</span> [Sourcegraph App](../../app/index.md) - A standalone Desktop application
- [Docker Compose](docker-compose/index.md) - Install Sourcegraph on Docker Compose
- [Docker Single Container](docker-single-container/index.md) - Install Sourcegraph using a single Docker container
- [Minikube](single-node/minikube.md) - Install Sourcegraph using Minikube
- [k3s](single-node/k3s.md) - Install Sourcegraph in a k3s cluster
In addition, you can review our [Configuration docs](../config/index.md) for overall Sourcegraph configuration.
### ARM / ARM64 support
## Operation
In general, operation activities for your Sourcegraph deployment will consist of storage management, database access, database migrations, and backup and restore. Details are provided with the instructions for each deployment type.
## Monitoring
Sourcegraph provides a number of options to monitor the health and usage of your deployment. While high-level guidance is provided as part of your deployment type, you can also review our [Observability docs](../observability/index.md) for more detailed instruction.
## Upgrades
A new version of Sourcegraph is released every month (with patch releases in between as needed). We actively maintain the two most recent monthly releases of Sourcegraph. The [changelog](../../CHANGELOG.md) provides all information related to the changes in each release.
Depending on your current version and the version you are upgrading to, there may be specific upgrade instructions and requirements. Check out the [Upgrade docs](../updates/index.md) for additional information and instructions.
## External services
By default, Sourcegraph provides versions of services it needs to operate, including:
- A [PostgreSQL](https://www.postgresql.org/) instance for storing long-term information, such as user data, when using Sourcegraph's built-in authentication provider instead of an external one.
- A second PostgreSQL instance for storing large-volume code graph data.
- A [Redis](https://redis.io/) instance for storing short-term information such as user sessions.
- A second Redis instance for storing cache data.
- A `sourcegraph/blobstore` instance that serves as a local S3-compatible object storage to hold user uploads before processing. _This data is for temporary storage, and content will be automatically deleted once processed._
- A [Jaeger](https://www.jaegertracing.io/) instance for end-to-end distributed tracing.
> NOTE: As a best practice, configure your Sourcegraph instance to use an external or managed version of these services. Using a managed version of PostgreSQL can make backups and recovery easier to manage and perform. Using a managed object storage service may decrease hosting costs as persistent volumes are often more expensive than object storage space.
### External services guides
See the following guides to use an external or managed version of each service type.
- [PostgreSQL Guide](../postgres.md)
- See [Using your PostgreSQL server](../external_services/postgres.md) to replace the bundled PostgreSQL instances.
- See [Using your Redis server](../external_services/redis.md) to replace the bundled Redis instances.
- See [Using a managed object storage service (S3 or GCS)](../external_services/object_storage.md) to replace the bundled blobstore instance.
- See [Using an external Jaeger instance](../observability/tracing.md#use-an-external-jaeger-instance) in our [tracing documentation](../observability/tracing.md) to replace the bundled Jaeger instance.
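As an illustration, replacing the bundled PostgreSQL typically comes down to pointing the frontend at your server via the standard `PG*` environment variables described in the guide above. A hedged sketch (hostnames and credentials below are placeholders, not defaults):

```yaml
# Hypothetical excerpt of a frontend container spec — values are
# placeholders; see "Using your PostgreSQL server" for the full list of
# variables (including the CODEINTEL_PG* equivalents for the second instance).
env:
  - name: PGHOST
    value: postgres.example.com   # your external PostgreSQL host
  - name: PGPORT
    value: "5432"
  - name: PGUSER
    value: sourcegraph
  - name: PGDATABASE
    value: sourcegraph
  - name: PGSSLMODE
    value: require
```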
> NOTE: Using Sourcegraph with an external service is a [paid feature](https://about.sourcegraph.com/pricing). [Contact us](https://about.sourcegraph.com/contact/sales) to get a trial license.
### Cloud alternatives
- Amazon Web Services: [AWS RDS for PostgreSQL](https://aws.amazon.com/rds/), [Amazon ElastiCache](https://aws.amazon.com/elasticache/redis/), and [S3](https://aws.amazon.com/s3/) for storing user uploads.
- Google Cloud: [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres/), [Cloud Memorystore](https://cloud.google.com/memorystore/), and [Cloud Storage](https://cloud.google.com/storage) for storing user uploads.
- Digital Ocean: [Digital Ocean Managed Databases](https://www.digitalocean.com/products/managed-databases/) for [Postgres](https://www.digitalocean.com/products/managed-databases-postgresql/), [Redis](https://www.digitalocean.com/products/managed-databases-redis/), and [Spaces](https://www.digitalocean.com/products/spaces/) for storing user uploads.
Running Sourcegraph on ARM / ARM64 images is not supported for production deployments.

File diff suppressed because it is too large


@@ -34,10 +34,10 @@ Through this guide, we will be using a number of parameters starting with `$` th
Before getting started, we need to identify the size for our initial cluster. You can use the following chart as reference:
| Users | `$NODE_TYPE` | `$NODE_MIN` | `$NODE_MAX` | Cost est. |
| -------- | ------------ | ----------- | ----------- | ------------ |
| 10-500 | m5.4xlarge | 3 | 6 | $59-118/day |
| 500-2000 | m5.4xlarge | 6 | 10 | $118-195/day |
| Users | `$NODE_TYPE` | `$NODE_MIN` | `$NODE_MAX` | Cost est. |
| -------- | ------------- | ----------- | ----------- | ------------ |
| 10-500 | m6a.4xlarge | 3 | 6 | $59-118/day |
| 500-2000 | m6a.4xlarge | 6 | 10 | $118-195/day |
> **Note:** You can modify these values later on to scale the number of worker nodes up or down using the `eksctl` command line. For more information, please see the [eksctl documentation](https://eksctl.io/).


@@ -1,5 +1,9 @@
# Using Helm
Helm offers a simple deployment process on Kubernetes.
>NOTE: We highly recommend [deploying Sourcegraph on Kubernetes with Kustomize](index.md) due to the flexibility it provides.
## Requirements
* [Helm 3 CLI](https://helm.sh/docs/intro/install/)
@@ -20,7 +24,7 @@
## Why use Helm
Our Helm chart has a lot of sensible defaults baked into the values.yaml. Not only does this make customizations much easier (than either using Kustomize or manually editing Sourcegraph's manifest files) it also means that, when an override file is used to make the changes, you _never_ have to deal with merge conflicts during upgrades (see more about customizations in the [configuration](#configuration) section).
Our Helm chart has a lot of sensible defaults baked into the values.yaml so that when an override file is used to make the changes, you _never_ have to deal with merge conflicts during upgrades (see more about customizations in the [configuration](#configuration) section).
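For instance, an override file only needs to contain the values you change; everything else keeps the chart's defaults. A minimal sketch (the key names below are assumptions for illustration; verify them against the chart's `values.yaml` before use):

```yaml
# override.yaml — hypothetical override sketch, not an authoritative example;
# confirm key names against the Sourcegraph Helm chart's values.yaml.
frontend:
  replicaCount: 2   # example: scale the frontend to two replicas
```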
## High-level overview of how to use Helm with Sourcegraph


@@ -1,119 +1,210 @@
# Sourcegraph with Kubernetes
# Sourcegraph on Kubernetes
<p class="lead">
Deploying Sourcegraph on Kubernetes is for organizations that need highly scalable and available code search and code navigation.
</p>
Deploying on Kubernetes is for organizations that need highly scalable and available code search and code navigation.
> NOTE: Sourcegraph recommends [using Helm to deploy Sourcegraph](helm.md) if possible.
> This page covers a more manual Kubernetes deployment, using `kubectl` to deploy manifests. This is only recommended if Helm cannot be used in your Kubernetes environment. See the Helm guide for more information on why Helm is preferable.
<div class="cta-group">
<a class="btn btn-primary" href="#installation">★ Installation</a>
<a class="btn" href="operations">Operations guides</a>
<a class="btn" href="#about">About Kubernetes</a>
<a class="btn" href="../../../#get-help">Get help</a>
<div class="getting-started">
<a class="btn btn-primary text-center" href="#prerequisites">★ Installation</a>
<a class="btn text-center" href="kustomize">Introduction</a>
<a class="btn text-center" href="configure">Configuration</a>
<a class="btn text-center" href="operations">Maintenance</a>
</div>
## Requirements for using Kubernetes
Below is an overview of installing Sourcegraph on Kubernetes using Kustomize.
Our Kubernetes support has the following requirements:
### Prerequisites
- [Sourcegraph Enterprise license](configure.md#add-license-key). _You can run through these instructions without one, but you must obtain a license for instances of more than 10 users_
- A deployed kubernetes cluster. You can do this yourself, or use [our terraform configs](https://github.com/sourcegraph/tf-k8s-configs) to quickly deploy a cluster that will support a standard Sourcegraph instance on Google Cloud Platform (GKE) or Amazon Web Services (EKS).
- Minimum Kubernetes version: [v1.19](https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/) and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) v1.19 or later (check kubectl docs for backward and forward compatibility with Kubernetes versions)
- Support for Persistent Volumes (SSDs recommended)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (v1.19 or later) with [Kustomize](https://kustomize.io/) (built into [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) in version >= 1.14)
* A [Kubernetes](https://kubernetes.io/) cluster ([v1.19 or later](https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/))
- Support for Persistent Volumes with SSDs
- You can optionally refer to our [terraform configurations](https://github.com/sourcegraph/tf-k8s-configs) for setting up clusters on:
- [Amazon Web Services EKS](https://github.com/sourcegraph/tf-k8s-configs/tree/main/aws)
- [Azure AKS](https://github.com/sourcegraph/tf-k8s-configs/tree/main/azure)
- [Google Cloud Platform GKE](https://github.com/sourcegraph/tf-k8s-configs/tree/main/gcp)
We also recommend some familiarity with the following Kubernetes concepts before proceeding:
>WARNING: **If your Sourcegraph version is older than `4.5.0`, please refer to the [old deployment docs for Kubernetes](https://docs.sourcegraph.com/@v4.4.2/admin/deploy/kubernetes).**
- [Kubernetes Objects](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/)
- [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
- [Role Based Access Control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
- [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
### **Step 1**: Set up a release branch
Not sure if Kubernetes is the right choice for you? Learn more about the various [Sourcegraph installation options](../index.md).
Create a release branch from the default branch in your local fork of the [deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s) repository.
## Installation
See the [docs on reference repository](../repositories.md) for detailed instructions on creating a local fork.
Before starting, we recommend reading the [configuration guide](configure.md#getting-started), ensuring you have prepared the items below so that you're ready to start your installation:
```bash
$ git clone https://github.com/sourcegraph/deploy-sourcegraph-k8s.git
$ cd deploy-sourcegraph-k8s
$ git checkout -b release
```
- [Customization](./configure.md#customizations)
- [Storage class](./configure.md#configure-a-storage-class)
- [Network Access](./configure.md#configure-network-access)
- [PostgreSQL Database](./configure.md#configure-external-databases)
- [Scaling services](./scale.md)
- [Cluster role administrator access](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
### **Step 2**: Set up a directory for your instance
> WARNING: If you are deploying on Azure, you **must** ensure that [your cluster is created with support for CSI storage drivers](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers). This **can not** be enabled after the fact.
Create a copy of the [instances/template](kustomize/index.md#template) directory and rename it to `instances/my-sourcegraph`:
Once you are all set up, either [install Sourcegraph directly](#direct-installation) or [deploy Sourcegraph to a cloud of your choice](#cloud-installation).
```bash
$ cp -R instances/template instances/my-sourcegraph
```
### Reference repository
In Kustomize, this directory is referred to as an [overlay](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#overlay).
Sourcegraph for Kubernetes is configured using our [`sourcegraph/deploy-sourcegraph` reference repository](https://github.com/sourcegraph/deploy-sourcegraph/). This repository contains everything you need to [spin up](#installation) and [configure](./configure.md) a Sourcegraph deployment on Kubernetes.
### **Step 3**: Set up the configuration files
### Direct installation
1\. Rename the [kustomization.template.yaml](kustomize/index.md#kustomization-yaml) file in `instances/my-sourcegraph` to `kustomization.yaml`.
- After meeting all the requirements, make sure you can [access your cluster](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/) with `kubectl`.
- `cd` to the forked local copy of the [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph) repository previously set up during [configuration](./configure.md#getting-started).
- Deploy the desired version of Sourcegraph to your cluster by [applying the Kubernetes manifests](./configure.md#applying-manifests):
The `kustomization.yaml` file is used to configure your Sourcegraph instance.
```sh
./kubectl-apply-all.sh
```

```bash
$ mv instances/my-sourcegraph/kustomization.template.yaml instances/my-sourcegraph/kustomization.yaml
```
2\. Rename the [buildConfig.template.yaml](kustomize/index.md#buildconfig-yaml) file in `instances/my-sourcegraph` to `buildConfig.yaml`.
The `buildConfig.yaml` file is used to configure components included in your `kustomization` file when required.
```bash
$ mv instances/my-sourcegraph/buildConfig.template.yaml instances/my-sourcegraph/buildConfig.yaml
```
### **Step 4**: Set namespace
By default, the provided `kustomization.yaml` template deploys Sourcegraph into the `ns-sourcegraph` namespace.
If you intend to deploy Sourcegraph into a different namespace, replace `ns-sourcegraph` with the name of the existing namespace in your cluster, or set it to `default` to deploy into the default namespace.
```yaml
# instances/my-sourcegraph/kustomization.yaml
namespace: sourcegraph
```
> NOTE: Google Cloud Platform (GCP) users are required to give their user the ability to create roles in Kubernetes
> ([Learn more](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control)):
>
> `kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)`
### **Step 5**: Set storage class
- Monitor the status of the deployment:
A storage class must be created and configured before deploying Sourcegraph. SSD storage is not required but is strongly recommended for optimal performance.
```sh
kubectl get pods -o wide --watch
```

#### Option 1: Create a new storage class
We recommend using a preconfigured storage class component for your cloud provider if you can create cluster-wide resources:
```yaml
# instances/my-sourcegraph/kustomization.yaml
components:
# Select a component that corresponds to your cluster provider
- ../../components/storage-class/aws/aws-ebs
- ../../components/storage-class/aws/ebs-csi
- ../../components/storage-class/azure
- ../../components/storage-class/gke
```
See our [configurations guide](configure.md) for the full list of available storage class components.
#### Option 2: Use an existing storage class
If you cannot create a new storage class and/or want to use an existing one with SSDs:
1\. Include the `storage-class/name-update` component under the components list
```yaml
# instances/my-sourcegraph/kustomization.yaml
components:
# This updates storageClassName to
# the STORAGECLASS_NAME value from buildConfig.yaml
- ../../components/storage-class/name-update
```
- After deployment is completed, verify Sourcegraph is running by temporarily making the frontend port accessible:
2\. Input the storage class name by setting the value of `STORAGECLASS_NAME` in `buildConfig.yaml`.
```sh
kubectl port-forward svc/sourcegraph-frontend 3080:30080
```

For example, set `STORAGECLASS_NAME=sourcegraph` if `sourcegraph` is the name of an existing storage class:
```yaml
# instances/my-sourcegraph/buildConfig.yaml
kind: SourcegraphBuildConfig
metadata:
name: sourcegraph-kustomize-config
data:
STORAGECLASS_NAME: sourcegraph # -- [ACTION] Update storage class name here
```
- Open http://localhost:3080 in your browser and you will see a setup page. Congratulations, you have Sourcegraph up and running! 🎉
#### Option 3: Use default storage class
> NOTE: If you previously [set up an `ingress-controller`](./configure.md#ingress-controller-recommended), you can now also access your deployment via the ingress.
For non-production environments, you can skip this step and use the default storage class without SSD support. For production environments, however, you must later recreate the cluster with SSDs configured.
### Cloud installation
>WARNING: Search performance will suffer tremendously without SSDs provisioned.
> WARNING: If you intend to set this up as a production instance, we recommend you create the cluster in a VPC
> or other secure network that restricts unauthenticated access from the public Internet. You can later expose the
> necessary ports via an
> [Internet Gateway](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html) or equivalent
> mechanism. Note that your security group must expose port 443 for outbound traffic to code hosts and to enable [telemetry](https://docs.sourcegraph.com/admin/pings) with
> Sourcegraph.com. Additionally, port 22 may be opened to enable git SSH cloning by Sourcegraph. Take care to secure your cluster in a manner that meets your
> organization's security requirements.
### **Step 6**: Build manifests with Kustomize
Follow the instructions linked in the table below to provision a Kubernetes cluster for the infrastructure provider of your choice, using the recommended node and disk types in the table.
Generate a new set of manifests locally from the configuration in the `instances/my-sourcegraph` directory, without applying them to the cluster.
|Provider|Node type|Boot/ephemeral disk size|
|--- |--- |--- |
|[Amazon EKS (better than plain EC2)](eks.md)|m5.4xlarge| 100 GB (SSD preferred) |
|[AWS EC2](https://kubernetes.io/docs/getting-started-guides/aws/)|m5.4xlarge| 100 GB (SSD preferred) |
|[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/quickstart)|n1-standard-16|100 GB (default)|
|[Azure](azure.md)|D16 v3|100 GB (SSD preferred)|
|[Other](https://kubernetes.io/docs/setup/production-environment/turnkey-solutions/)|16 vCPU, 60 GiB memory per node|100 GB (SSD preferred)|
```bash
$ kubectl kustomize instances/my-sourcegraph -o cluster.yaml
```
<span class="virtual-br"></span>
### **Step 7**: Review manifests
> NOTE: Sourcegraph can run on any Kubernetes cluster, so if your infrastructure provider is not
> listed, see the "Other" row. Pull requests to add rows for more infrastructure providers are
> welcome!
Review the generated manifests to ensure they match your intended configuration.
<span class="virtual-br"></span>
```bash
$ less cluster.yaml
```
> WARNING: If you are deploying on Azure, you **must** ensure that [your cluster is created with support for CSI storage drivers](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers). This **can not** be enabled after the fact.
### **Step 8**: Deploy the generated manifests
### ARM / ARM64 support
Apply the manifests from the output file `cluster.yaml` to your cluster:
> WARNING: Running Sourcegraph on ARM / ARM64 images is not supported for production deployments.
```bash
$ kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
```
### **Step 9**: Monitor the deployment
Monitor the deployment status to ensure all components are running properly.
```bash
$ kubectl get pods -A -o wide --watch
```
### **Step 10**: Access Sourcegraph in Browser
To verify that the deployment was successful, port-forward the frontend pod with the following command:
```bash
$ kubectl port-forward svc/sourcegraph-frontend 3080:30080
```
Then access your new Sourcegraph instance at http://localhost:3080 to proceed to the site-admin setup step.
```bash
$ open http://localhost:3080
```
---
## Configure
After the initial deployment, additional configuration might be required for Sourcegraph to customize your deployment to suit your specific needs.
Common configurations that are strongly recommended for all Sourcegraph deployments:
- [Enable the Sourcegraph monitoring stack](configure.md#monitoring-stack)
- [Enable tracing](configure.md#tracing)
- [Adjust resource allocations](configure.md#instance-size-based-resources)
- [Adjust storage sizes](configure.md#adjust-storage-sizes)
- [Configure ingress](configure.md#ingress)
- [Enable TLS](configure.md#tls)
Other common configurations include:
- [Set up an external PostgreSQL Database](configure.md#external-postgres)
- [Set up SSH connection for cloning repositories](configure.md#ssh-for-cloning)
See the [configuration guide for Kustomize](configure.md) for more configuration options.
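In the Kustomize setup, most of these configurations are enabled by listing additional components in your `kustomization.yaml`. A hedged sketch (the component paths below are illustrative assumptions; the configuration guide has the authoritative list):

```yaml
# instances/my-sourcegraph/kustomization.yaml — illustrative only;
# the component paths below are examples, not an authoritative list.
components:
  - ../../components/storage-class/gke   # storage class for your provider
  - ../../components/monitoring          # hypothetical path: monitoring stack
  - ../../components/ingress             # hypothetical path: ingress
```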
## Helm Chart
We recommend deploying Sourcegraph on Kubernetes with Kustomize due to the flexibility it provides. If your organization uses Helm to deploy on Kubernetes, please refer to the documentation for the [Sourcegraph Helm chart](helm.md) instead.
## Learn more
- [Migrate from deploy-sourcegraph to deploy-sourcegraph-k8s](kustomize/migrate.md)
- Examples of deploying Sourcegraph to the cloud provider listed below:
- [Amazon EKS](kustomize/eks.md)
- [Google GKE](kustomize/gke.md)
- [Minikube](../single-node/minikube.md)
- [Troubleshooting](troubleshoot.md)
- [Other deployment options](../index.md)


@@ -1,270 +0,0 @@
# Kustomize
> WARNING: Kustomize can be used **with** Helm to configure Sourcegraph (see [this guidance](helm.md#integrate-kustomize-with-helm-chart)) but this is only recommended as a temporary workaround while Sourcegraph adds to the Helm chart to support previously unsupported customizations.
> If you have yet to deploy Sourcegraph, it is highly recommended to use Helm for the deployment and configuration ([Using Helm with Sourcegraph](helm.md)).
Sourcegraph supports the use of [Kustomize](https://kustomize.io) to modify and customize our Kubernetes manifests. Kustomize is a template-free way to customize configuration with a simple configuration file.
Some benefits of using Kustomize to generate manifests instead of modifying the base directly include:
- Reduce the odds of encountering a merge conflict when [updating Sourcegraph](update.md)—they allow you to separate your unique changes from the upstream base files Sourcegraph provides.
- Better enable Sourcegraph to support you if you run into issues, because how your deployment varies from our defaults is encapsulated in a small set of files.
## Using Kustomize
### General premise
In general, we recommend that customizations work like this:
1. Create, customize, and apply overlays for your deployment
2. Ensure the services came up correctly, then commit all the customizations to the new branch
```sh
git add overlays/$MY_OVERLAY/*
# Keeping all overlays contained to a single commit allows for easier cherry-picking
git commit --amend -m "overlays: update $MY_OVERLAY"
```
See the [overlays guide](#overlays) to learn about the [overlays we provide](#provided-overlays) and [how to create your own overlays](#custom-overlays).
## Overlays
An [*overlay*](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#bases-and-overlays) specifies customizations for a base directory of Kubernetes manifests, in this case the `base/` directory in the [reference repository](#reference-repository).
Overlays can:
- Be used for example to change the number of replicas, change a namespace, add a label, etc
- Refer to other overlays that eventually refer to the base (forming a directed acyclic graph with the base as the root)
### Using overlays
Overlays can be used in one of two ways:
- With `kubectl`: Starting with client version 1.14, `kubectl` can handle `kustomization.yaml` files directly. When using `kubectl`, there is no intermediate step that generates actual manifest files; instead, the combined resources from the overlays and the base are sent directly to the cluster. This is done with the `kubectl apply -k` command, whose argument is a directory containing a `kustomization.yaml` file.
- With `kustomize`: This generates manifest files that are then applied in the conventional way using `kubectl apply -f`.
The overlays provided in our [overlays directory](https://github.com/sourcegraph/deploy-sourcegraph/tree/master/overlays) rely on the `kustomize` tool and the `overlay-generate-cluster.sh` script in the `root` directory to generate the manifests. There are two reasons why it was set up like this:
- It avoids having to put a `kustomization.yaml` file in the `base` directory, which would force users who don't use overlays to deal with it (unfortunately, `kubectl apply -f` doesn't work if a `kustomization.yaml` file is in the directory).
- It generates manifests instead of applying them directly. This provides an opportunity to additionally validate the files, and allows using `kubectl apply -f` with the `--prune` flag turned on (`apply -k` with `--prune` does not work correctly).
### Generating Manifests
To generate Kubernetes manifests from an overlay, run the `overlay-generate-cluster.sh` script with two arguments:
- the name of the overlay
- a path to the output directory where the generated manifests will be written
For example:
```sh
# overlay directory name output directory
# | |
./overlay-generate-cluster.sh my-overlay generated-cluster
```
After executing the script you can apply the generated manifests from the `generated-cluster` directory:
```sh
kubectl apply --prune -l deploy=sourcegraph -f generated-cluster --recursive
```
We recommend that you:
- [Update the `./overlay-generate-cluster.sh` script](./operations.md#applying-manifests) to apply the generated manifests from the `generated-cluster` directory with something like the above snippet
- Commit your overlays changes separately—see our [customization guide](#customizations) for more details.
You can now get started with using overlays:
- [Provided overlays](#provided-overlays)
- [Custom overlays](#custom-overlays)
### Provided overlays
Overlays provided out-of-the-box are in the subdirectories of [`deploy-sourcegraph/overlays`](https://github.com/sourcegraph/deploy-sourcegraph/tree/master/overlays) and are documented here.
#### Namespaced overlay
This overlay adds a namespace declaration to all the manifests.
1. Change the namespace by replacing `ns-sourcegraph` with the name of your choice everywhere within the
[overlays/namespaced/](https://github.com/sourcegraph/deploy-sourcegraph/blob/master/overlays/namespaced/) directory.
1. Generate the overlay by running this command from the `root` directory:
```
./overlay-generate-cluster.sh namespaced generated-cluster
```
1. Create the namespace if it doesn't exist yet:
```
kubectl create namespace ns-<EXAMPLE NAMESPACE>
kubectl label namespace ns-<EXAMPLE NAMESPACE> name=ns-sourcegraph
```
1. Apply the generated manifests (from the `generated-cluster` directory) by running this command from the `root` directory:
```
kubectl apply -n ns-<EXAMPLE NAMESPACE> --prune -l deploy=sourcegraph -f generated-cluster --recursive
```
1. Check for the namespaces and their status with:
```
kubectl get pods -A
```
#### Storageclass
By default Sourcegraph is configured to use a storage class called `sourcegraph`. If you wish to use an alternate name, you can use this overlay to change all `storageClass` references in the manifests.
You need to create the storageclass if it doesn't exist yet. See [these docs](./configure.md#configure-a-storage-class) for more instructions.
1. To use it, update the following two files, `replace-storageclass-name-pvc.yaml` and `replace-storageclass-name-sts.yaml` in the `deploy-sourcegraph/overlays/storageclass` directory with your storageclass name.
1. To generate the manifests, execute the following command:
```sh
./overlay-generate-cluster.sh storageclass generated-cluster
```
1. After executing the script you can apply the generated manifests from the `generated-cluster` directory:
```sh
kubectl apply --prune -l deploy=sourcegraph -f generated-cluster --recursive
```
1. Ensure the persistent volumes have been created in the correct storage class by running the following command and inspecting the output:
```sh
kubectl get pvc
```
#### Non-privileged create cluster overlay
This kustomization is for Sourcegraph installations in clusters with security restrictions. It runs all containers as non-root users, removes cluster roles and cluster role bindings, and performs all role bindings within a namespace. It configures Prometheus to work within the namespace without requiring cluster-wide ClusterRole privileges when doing service discovery for scraping targets. It also disables cAdvisor.
This version and `non-privileged` need to stay in sync. This version is only used for cluster creation.
To use it, execute the following command from the `root` directory:
```
./overlay-generate-cluster.sh non-privileged-create-cluster generated-cluster
```
After executing the script you can apply the generated manifests from the generated-cluster directory:
```
kubectl create namespace ns-sourcegraph
kubectl apply -n ns-sourcegraph --prune -l deploy=sourcegraph -f generated-cluster --recursive
```
#### Non-privileged overlay
This overlay is for continued use after you have successfully deployed with `non-privileged-create-cluster`. It runs all containers as non-root users, removes cluster roles and cluster role bindings, and performs all role bindings within a namespace. It configures Prometheus to work within the namespace without requiring cluster-wide ClusterRole privileges when doing service discovery for scraping targets. It also disables cAdvisor.
To use it, execute the following command from the `root` directory:
```sh
./overlay-generate-cluster.sh non-privileged generated-cluster
```
After executing the script you can apply the generated manifests from the generated-cluster directory:
```sh
kubectl apply -n ns-sourcegraph --prune -l deploy=sourcegraph -f generated-cluster --recursive
```
If you are starting a fresh installation, use the `non-privileged-create-cluster` overlay. After creation, you can use the
`non-privileged` overlay.
#### Migrate-to-nonprivileged overlay
If you are already running a Sourcegraph instance as the `root` user and want to convert to running with a non-root user, you
need to apply a migration step that changes the permissions of all persistent volumes so that they can be
used by the non-root user. This migration is provided as the `migrate-to-nonprivileged` overlay. After the migration, you can use the
`non-privileged` overlay. If you have previously deployed your cluster in a non-default namespace, be sure to edit the `kustomization.yaml` file in the overlay directory to ensure the files are generated with the correct namespace.
This kustomization injects initContainers in all pods with persistent volumes to transfer ownership of directories to specified non-root users. It is used for migrating existing installations to a non-privileged environment.
```
./overlay-generate-cluster.sh migrate-to-nonprivileged generated-cluster
```
After executing the script you can apply the generated manifests from the generated-cluster directory:
```
kubectl apply --prune -l deploy=sourcegraph -f generated-cluster --recursive
```
#### minikube overlay
This kustomization deletes resource declarations and storage class names to enable running Sourcegraph on minikube.
To use it, execute the following command from the `root` directory:
```sh
./overlay-generate-cluster.sh minikube generated-cluster
```
After executing the script you can apply the generated manifests from the generated-cluster directory:
```sh
minikube start
kubectl create namespace ns-sourcegraph
kubectl -n ns-sourcegraph apply --prune -l deploy=sourcegraph -f generated-cluster --recursive
kubectl -n ns-sourcegraph expose deployment sourcegraph-frontend --type=NodePort --name sourcegraph --port=3080 --target-port=3080
minikube service list
```
> NOTE: For Mac Users, run `minikube service sourcegraph -n ns-sourcegraph` to open the newly deployed Sourcegraph in your browser
To tear it down:
```sh
kubectl delete namespaces ns-sourcegraph
minikube stop
```
### Custom overlays
To create your own [overlays](#overlays), first [set up your deployment reference repository to enable customizations](./configure.md#getting-started).
Then, within the `overlays` directory of the [reference repository](./index.md#reference-repository), create a new directory for your overlay along with a `kustomization.yaml`.
```text
deploy-sourcegraph
|-- overlays
| |-- my-new-overlay
| | +-- kustomization.yaml
| |-- bases
| +-- ...
+-- ...
```
Within `kustomization.yaml`:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Only include resources from 'overlays/bases' you are interested in modifying
# To learn more about bases: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#base
resources:
- ../bases/deployments
- ../bases/rbac-roles
- ../bases/pvcs
```
You can then define patches, transformations, and more. A complete reference is available [here](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/).
To get started, we recommend you explore writing your own [`patches`](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/), or the more specific variants:
- [`patchesStrategicMerge`](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patchesstrategicmerge/)
- [`patchesJson6902`](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patchesjson6902/)
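For illustration, here is a minimal sketch of how a `patchesStrategicMerge` entry could pair a custom overlay with a partial manifest (the overlay name, patch file name, and replica count are hypothetical):

```yaml
# overlays/my-new-overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../bases/deployments
patchesStrategicMerge:
  # A partial manifest; fields listed there override the base
  - frontend-replicas.yaml
```

```yaml
# overlays/my-new-overlay/frontend-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sourcegraph-frontend
spec:
  replicas: 3
```

Only the fields present in the patch are merged over the matching base manifest; everything else is left untouched.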
To avoid complications with reference cycles, an overlay can only reference resources inside the directory subtree of the directory it resides in (symlinks are not allowed either).
Learn more in the [`kustomization` documentation](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/).
You can also explore how our [provided overlays](#provided-overlays) use patches, for reference: [`deploy-sourcegraph` usage of patches](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/deploy-sourcegraph%24+lang:YAML+patches:+:%5B_%5D+OR+patchesStrategicMerge:+:%5B_%5D+OR+patchesJson6902:+:%5B_%5D+count:999&patternType=structural).
Once you have created your overlays, refer to our [overlays guide](#generating-manifests) to generate and apply your changes.

# Installation Guide - Amazon Elastic Kubernetes Service (EKS)
This section is aimed at providing high-level guidance on deploying Sourcegraph using a Kustomize overlay on Amazon Elastic Kubernetes Service (EKS).
## Overview
The installation instructions below will guide you through deploying Sourcegraph on Elastic Kubernetes Service (EKS) with our quick-start overlay.
The overlay will:
- Deploy a Sourcegraph instance without RBAC resources
- Configure Ingress to use [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) to expose Sourcegraph publicly on your domain
- Configure the Storage Class to use the [AWS EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html) (installed as an add-on)
## Prerequisites
- An EKS cluster (>= 1.19) with the following add-ons enabled:
- [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html)
- [AWS EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html)
- Minimum Kubernetes version: [v1.19](https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/) with [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) v1.19 or later
- [Kustomize](https://kustomize.io/) (built into `kubectl` in version >= 1.14)
## Quick Start
You must complete **all** the prerequisites listed above before installing Sourcegraph with the following steps.
### Step 1: Deploy Sourcegraph
Deploy the main Sourcegraph app to your cluster, without the monitoring stack:
```bash
$ kubectl apply --prune -l deploy=sourcegraph -k https://github.com/sourcegraph/deploy-sourcegraph-k8s/examples/aws
```
Monitor the deployment status to make sure everything is up and running:
```bash
kubectl get pods -o wide --watch
```
### Step 2: Access Sourcegraph in Browser
To check the status of the load balancer and obtain its IP:
```bash
$ kubectl describe ingress sourcegraph-frontend
```
In the output, look for the IP address of the load balancer, which is listed under `Address`.
```bash
# Sample output:
Name: sourcegraph-frontend
Namespace: default
Address: 12.345.678.0
```
Once the load balancer is ready, you can access your new Sourcegraph instance at the returned IP address in your browser via HTTP. Accessing the IP address with HTTPS will return errors because TLS must be enabled first.
It might take about 10 minutes for the load balancer to be fully ready. In the meantime, you can access Sourcegraph using the port forward method as described below.
#### Port forward
Temporarily forward the remote port so that you can access Sourcegraph without further network configuration.
```bash
kubectl port-forward svc/sourcegraph-frontend 3080:30080
```
You should now be able to access your new Sourcegraph instance at http://localhost:3080 🎉
### Further configuration
The steps above have guided you through deploying Sourcegraph using our preconfigured [examples/aws](https://github.com/sourcegraph/deploy-sourcegraph-k8s/tree/master/examples/aws) overlay.
If you would like to make further configurations to your existing instance, you can create a new overlay from its `kustomization.yaml` file (shown below) and build on top of it. For example, you can upgrade your instance from size XS to L, or add the monitoring stack.
```yaml
# overlays/$INSTANCE_NAME/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
# Deploy Sourcegraph main stack
- ../../base/sourcegraph
# Deploy Sourcegraph monitoring stack
- ../../base/monitoring
components:
# Use resources for a size-XS instance
- ../../components/sizes/xs
# Apply configurations for AWS EKS storage class and ALB
- ../../components/clusters/aws/eks-ebs
```
#### Enable TLS
Once you have created a new overlay using the kustomization file from our quick-start overlay for AWS EKS, we strongly recommend that you:
- create a DNS A record for your Sourcegraph instance domain
- enable TLS
If you would like to enable TLS with your own certificate, please read the [TLS configuration guide](../configure.md#tls) for detailed instructions.
##### AWS-managed certificate
In order to use a managed certificate from [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) to enable TLS:
Step 1: Add the `aws/mange-cert` component to your overlay:
```yaml
# overlays/$INSTANCE_NAME/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
# Deploy Sourcegraph main stack
- ../../base/sourcegraph
# Deploy Sourcegraph monitoring stack
- ../../base/monitoring
components:
- ../../components/sizes/xs
- ../../components/clusters/aws/eks-ebs
- ../../components/clusters/aws/mange-cert
```
Step 2: Set the `AWS_MANAGED_CERT_ARN` variable with the `ARN of your AWS-managed TLS certificate` under the [BUILD CONFIGURATIONS](index.md#buildconfig-yaml) section:
```yaml
# overlays/$INSTANCE_NAME/kustomization.yaml
# BUILD CONFIGURATIONS
configMapGenerator:
# Handle updating configs using env vars for kustomize
- name: sourcegraph-kustomize-env
behavior: merge
literals:
...
# ARN of the AWS-managed TLS certificate
- AWS_MANAGED_CERT_ARN=arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
```

# Installation Guide - Google Kubernetes Engine (GKE)
This section is aimed at providing high-level guidance on deploying Sourcegraph using a Kustomize overlay on GKE.
## Overview
The installation guide below will walk you through deploying Sourcegraph on Google Kubernetes Engine (GKE) using our GKE example overlay.
The GKE overlay will:
- Deploy a Sourcegraph instance without RBAC resources
- Create [BackendConfig](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#create_backendconfig) CRD. This is necessary to instruct the GCP load balancer on how to perform health checks on our deployment.
- Configure ingress to use [Container-native load balancing](https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing) to expose Sourcegraph publicly on a domain of your choosing
- Create a Storage Class that uses a [Compute Engine persistent disk](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver).
## Prerequisites
- A running GKE cluster with the following configurations:
- **Enable HTTP load balancing** in Networking
- **SSD persistent disk** as boot disk type
- Minimum Kubernetes version: [v1.19](https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/) with [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) v1.19 or later
- [Kustomize](https://kustomize.io/) (built into `kubectl` in version >= 1.14)
## Quick Start
You must complete **all** the prerequisites listed above before installing Sourcegraph with the following steps.
### Step 1: Deploy Sourcegraph
Deploy Sourcegraph to your cluster:
```bash
$ kubectl apply --prune -l deploy=sourcegraph -k https://github.com/sourcegraph/deploy-sourcegraph-k8s/examples/gke
```
Monitor the deployment status to make sure everything is up and running:
```bash
kubectl get pods -o wide --watch
```
### Step 2: Access Sourcegraph in Browser
To check the status of the load balancer and obtain its IP:
```bash
$ kubectl describe ingress sourcegraph-frontend
```
In the output, look for the IP address of the load balancer, which is listed under `Address`.
```bash
# Sample output:
Name: sourcegraph-frontend
Namespace: default
Address: 12.345.678.0
```
Once the load balancer is ready, you can access your new Sourcegraph instance at the returned IP address in your browser via HTTP. Accessing the IP address with HTTPS returns errors because TLS must be enabled first.
It might take about 10 minutes for the load balancer to be fully ready. In the meantime, you can access Sourcegraph using the port forward method as described below.
#### Port forward
Temporarily forward the remote port so that you can access Sourcegraph without further network configuration.
```bash
kubectl port-forward svc/sourcegraph-frontend 3080:30080
```
You should now be able to access your new Sourcegraph instance at http://localhost:3080 🎉
### Further configuration
The steps above have guided you through deploying Sourcegraph using our preconfigured [examples/gke](https://github.com/sourcegraph/deploy-sourcegraph-k8s/tree/master/examples/gke) overlay.
If you would like to make further configurations to your existing instance, you can create a new overlay from its `kustomization.yaml` file (shown below) and build on top of it. For example, you can upgrade your instance from size XS to L, or add the monitoring stack.
```yaml
# overlays/$INSTANCE_NAME/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
# Deploy Sourcegraph main stack
- ../../base/sourcegraph
# Deploy Sourcegraph monitoring stack
- ../../base/monitoring
components:
# Use resources for a size-XS instance
- ../../components/sizes/xs
# Apply configurations for GKE
- ../../components/gke/configure
```
You can also create the kustomization file from the remote example and build on top of it:
```bash
kustomize create --resources https://github.com/sourcegraph/deploy-sourcegraph-k8s/examples/gke
```
#### Enable TLS
Once you have created a new overlay using the kustomization file from our GKE example overlay, we strongly recommend that you:
- create a DNS A record for your Sourcegraph instance domain
- enable TLS
If you would like to enable TLS with your own certificate, please read the [TLS configuration guide](../configure.md#tls) for detailed instructions.
##### Google-managed certificate
In order to use [Google-managed SSL certificates](https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs) to enable TLS:
Step 1: Add the `gke mange-cert` component to your overlay:
```yaml
# overlays/$INSTANCE_NAME/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
# Deploy Sourcegraph main stack
- ../../base/sourcegraph
# Deploy Sourcegraph monitoring stack
- ../../base/monitoring
components:
- ../../components/sizes/xs
- ../../components/clusters/gke/configure
- ../../components/clusters/gke/mange-cert
```
Step 2: Set the `GKE_MANAGED_CERT_NAME` variable with your Google-managed certificate name under the [BUILD CONFIGURATIONS](index.md#buildconfig-yaml) section:
```yaml
# overlays/$INSTANCE_NAME/kustomization.yaml
# BUILD CONFIGURATIONS
configMapGenerator:
# Handle updating configs using env vars for kustomize
- name: sourcegraph-kustomize-env
behavior: merge
literals:
...
- GKE_MANAGED_CERT_NAME=your-managed-cert-name
```

# Kustomize for Sourcegraph
An introduction to Kustomize created for Sourcegraph.
<div class="getting-started">
<a class="btn text-center" href="../index">Installation</a>
<a class="btn btn-primary text-center" href="#">★ Introduction</a>
<a class="btn text-center" href="../configure">Configuration</a>
<a class="btn text-center" href="../operations">Maintenance</a>
</div>
## Overview
Kustomize enables us to decompose our **[base](#base)** application into smaller building blocks, with multiple versions of each block preconfigured as **[components](#components)** for various use cases. This modular approach enables the mixing and matching of the building blocks to construct a customized version of the application by creating **[overlays](#overlay)**. This feature provides a high degree of flexibility and facilitates the maintenance and evolution of the application over time.
## Build process
During the build process, Kustomize will:
1. First build the resources from the base layer of the application.
2. If generators are used, it will then create ConfigMaps and Secrets. These resources can be generated from files, literal values, or env files.
3. Next, Kustomize will apply patches specified by the components to selectively overwrite resources in the base layer. Patching allows you to modify the resources defined in the base layer without changing the original source files. This is useful for making small, targeted changes to the resources that are needed for your specific deployment.
4. Finally, Kustomize will perform validation to ensure that the modified resources are valid and conform to the Kubernetes API. This is to ensure that the customized deployment is ready for use.
Once the validation is passed, the modified resources are grouped into a single file, known as the output. After that, you can use kubectl to apply the overlaid resources to your cluster.
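As a rough sketch, the stages above map onto the fields of an overlay's kustomization file (the literal value below is a hypothetical placeholder):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # 1. Resources from the base layer are built first
  - ../../base/sourcegraph
configMapGenerator:
  # 2. Generators then produce ConfigMaps (secretGenerator produces Secrets)
  - name: sourcegraph-kustomize-env
    behavior: merge
    literals:
      - EXAMPLE_SETTING=example-value
components:
  # 3. Component patches selectively overwrite the base resources
  - ../../components/sizes/xs
```

The assembled and validated output of step 4 can be inspected before applying it, for example with `kubectl kustomize <overlay-directory>`.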
## Deployment repository
You can find all the configuration files and components needed to deploy Sourcegraph with Kustomize in the [deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s) repository.
Here is the file structure:
```bash
# github.com/sourcegraph/deploy-sourcegraph-k8s
├── base
│ ├── sourcegraph
│ └── monitoring
├── components
├── examples
└── instances
    └── template
        ├── buildConfig.template.yaml
        └── kustomization.template.yaml
```
> WARNING: Please create your own sets of overlays within the 'instances' directory and refrain from making changes to the other directories to prevent potential merge conflicts during future updates.
## Base
**Base** refers to a set of YAML files created for the purpose of deploying a Sourcegraph instance to a Kubernetes cluster. These files come preconfigured with default values that can be used to quickly deploy a Sourcegraph instance.
However, deploying with these default settings may not be suitable for all environments and specific use cases. For example, the default resource allocation may not match the requirements for your specific instance size, or the default deployments may include RBAC resources that you would like to remove. To address these issues, creating a Kustomize overlay can be an effective solution. It allows you to customize the resources defined in the base layer to suit the specific requirements of the deployment.
## Overlays
The **instances directory** is used to store customizations specific to your deployment environment. It allows you to create separate overlays for different instances and purposes, such as one instance for production and another for staging. It is best practice to avoid making changes to files outside of the **instances directory** in order to prevent merge conflicts during future updates.
An [overlay](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#bases-and-overlays) acts as a **customization layer** that contains a [kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/), where components are defined as the **configuration layer** to include a set of instructions for Kustomize to generate and apply configurations to the **base layer**.
Here is an example of a `kustomization` file from one of our [examples](#examples-overlays) that is configured for deploying a size XS instance to a k3s cluster:
```yaml
# examples/k3s/xs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sourcegraph-example
resources:
- ../../base/sourcegraph # Deploy Sourcegraph main stack
- ../../base/monitoring # Deploy Sourcegraph monitoring stack
components:
# Configures the deployment for a k3s cluster
- ../../components/clusters/k3s
# Configures the resources we added above for a size XS instance
- ../../components/sizes/xs
```
### Examples overlays
In addition to providing a template for creating new overlays, we also provide a set of examples that are pre-configured for different environments. These pre-built overlays can be found inside the [examples](https://github.com/sourcegraph/deploy-sourcegraph-k8s/tree/master/examples) directory.
## Overlay
In this section, we will take a quick look at the essential parts that make up a Kustomize overlay tailored for Sourcegraph.
### Template
An overlay is a directory that contains various files used to configure a deployment for a specific scenario. These files include the [kustomization.yaml file](#kustomization-yaml), which is used to specify how the resources defined in the base manifests should be customized and configured, as well as other files such as environment variable files, configuration files, and patches.
File structure:
```bash
# github.com/sourcegraph/deploy-sourcegraph-k8s
└── instances
├── $INSTANCE_NAME
│ ├── kustomization.yaml
│ ├── buildConfig.yaml
│ └── patches
│ └── additional config files go here...
    └── template
        ├── buildConfig.template.yaml
        └── kustomization.template.yaml
```
All custom overlays built for a specific instance should be stored in the [instances directory](https://github.com/sourcegraph/deploy-sourcegraph-k8s/tree/master/instances), where you can find the [instances/template folder](https://github.com/sourcegraph/deploy-sourcegraph-k8s/tree/master/instances/template). This folder contains a [kustomization.template.yaml file](#kustomization-yaml) that is preconfigured to construct an overlay for deploying Sourcegraph, and a [buildConfig.template.yaml](#buildconfig-yaml).
### kustomization.yaml
The [kustomization.yaml file](#kustomization-yaml) is a fundamental element of a Kustomize overlay. It is situated in the root directory of the overlay and serves as a means of customizing and configuring the resources defined in the base manifests, as outlined in our [configuration documentation](../configure.md).
To correctly configure your Sourcegraph deployment, it is crucial to create an overlay using the `kustomization.template.yaml` file provided. This [kustomization.yaml file](#kustomization-yaml) is specifically designed for Sourcegraph deployments, making the configuration process more manageable. The file includes various options and sections, allowing for the creation of a Sourcegraph instance that is tailored to the specific environment.
#### components-list
The order of components in the [kustomization.template.yaml file](kustomize/index.md#kustomization-yaml) is important and should be maintained. The components are listed in a specific order to ensure proper dependency management and compatibility between components. Reordering components can introduce conflicts or prevent components from interacting as expected. Only modify the component order if explicitly instructed to do so by the documentation. Otherwise, leave the component order as-is to avoid issues.
### buildConfig.yaml
Some Kustomize components may require additional configuration. These inputs typically specify environment- or use-case-specific settings, for example, the name of your private registry used to update image references.
Only update the values inside the `buildConfig.yaml` file if a component's documentation explicitly instructs you to do so. Not all components need extra configuration, and some have suitable defaults.
Modifying `buildConfig.yaml` unnecessarily can cause errors or unintended behavior. Always check the [configuration docs](../configure.md) or comments in [kustomization.yaml](#kustomization-yaml) before changing this file.
### patches directory
The `patches directory` is designated to store configuration files that Kustomize uses to customize your deployment. These files can include Kustomize overlays, supplementary kustomization.yaml files, modified ConfigMaps, copies of base manifests, and other configuration files necessary for patching the base cluster.
When instructed by the configuration docs to set up the necessary files for configuring your Sourcegraph instance:
1. Create a directory called 'patches': `mkdir patches`
2. Create the required files within the 'patches' directory
This will ensure the files are in the correct location for the configuration process to access them.
> NOTE: Creating the patches directory is not mandatory unless instructed by the components defined in your overlay.
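As a sketch, a patch file stored in this directory could look like the following (the file name and environment variable are hypothetical; create such a file only when a component's documentation asks for it):

```yaml
# instances/$INSTANCE_NAME/patches/frontend-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sourcegraph-frontend
spec:
  template:
    spec:
      containers:
        - name: frontend
          env:
            # Hypothetical environment variable added via patch
            - name: EXAMPLE_SETTING
              value: "example-value"
```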
### Create a Sourcegraph overlay
The [instances/template](#template) directory serves as a starting point for creating a custom overlay for your deployment. It includes template files with a list of components that are commonly used in Sourcegraph deployments. To create a new overlay, copy this directory to a new directory, then enable or disable specific components by commenting or uncommenting them in the `kustomization.yaml` file inside the new directory. This allows you to customize your deployment to suit your specific needs.
**Step 1**: Set up a directory for your instance
Create a copy of the [instances/template](#template) directory within the `instances` subdirectory.
The name of this directory, `$INSTANCE_NAME`, serves as the name of your overlay for the specific instance, for example, `dev`, `prod`, `staging`, etc.
```bash
# from the root of the deploy-sourcegraph-k8s repository
$ export INSTANCE_NAME=staging # Update 'staging' to your instance name
$ cp -R instances/template instances/$INSTANCE_NAME
```
**Step 2**: Set up the configuration files
As described above, you can find two configuration files within the `$INSTANCE_NAME` directory:
1. The `kustomization.yaml` file is used to configure your Sourcegraph instance.
2. The `buildConfig.yaml` file is used to configure components included in your `kustomization` file when required.
Follow the steps listed below to set up the configuration files for your instance overlay: `$INSTANCE_NAME`.
#### kustomization.yaml
Rename the [kustomization.template.yaml](kustomize/index.md#kustomization-yaml) file in `instances/$INSTANCE_NAME` to `kustomization.yaml`:
```bash
$ mv instances/$INSTANCE_NAME/kustomization.template.yaml instances/$INSTANCE_NAME/kustomization.yaml
```
#### buildConfig.yaml
Rename the [buildConfig.template.yaml](kustomize/index.md#buildconfig-yaml) file in `instances/$INSTANCE_NAME` to `buildConfig.yaml`:
```bash
$ mv instances/$INSTANCE_NAME/buildConfig.template.yaml instances/$INSTANCE_NAME/buildConfig.yaml
```
**Step 3**: You can begin customizing your Sourcegraph deployment by updating the [kustomization.yaml file](#kustomization-yaml) inside your overlay, following our [configuration guides](../configure.md) for guidance.
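Once filled in, a minimal overlay `kustomization.yaml` might look like the sketch below. The resources list mirrors the base resources used throughout these docs; the single component shown is only an example of the kind of entry you would uncomment:

```yaml
# instances/$INSTANCE_NAME/kustomization.yaml -- illustrative sketch only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns-sourcegraph
resources:
  - buildConfig.yaml
  - ../../base/sourcegraph
  - ../../base/monitoring
components:
  # Uncomment or comment components to enable or disable them
  - ../../components/storage-class/name-update
```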
## Components
An overlay in Kustomize is a set of configuration files that are used to customize the base resources. To understand an overlay, it's important to examine its components, which are listed under the components field inside the [kustomization.yaml file](#kustomization-yaml) of the overlay.
Most of our components are designed to be reusable for different environments and use cases. They can be used to add common labels and annotations, apply common configurations, or even generate resources. By using these components, you can minimize the amount of duplicated code in your overlays and make them more maintainable.
### Rules of thumb
It is important to understand how each component covered in the [configuration guide](../configure.md) configures your Sourcegraph deployment. Each component has specific options and settings that must be set correctly for your deployment to function properly. Understanding how each component is used lets you make informed decisions about which components to enable or disable in your overlay file and how to configure them. It also makes troubleshooting easier if something goes wrong.
Here are some **rules of thumb** to follow when combining components, so that they work together seamlessly and without conflicts:
- Understand the dependencies between components: Some components may depend on others to function properly. For example, if you include a component to remove a daemonset, you should also include the monitoring component to make sure that there is something for the component to remove. If you don't, the overlay build process will fail because there is nothing for the component to remove.
- Be aware of the configuration settings of each component: Each component has its own configuration settings that need to be configured correctly. For example, if you include a component that adds RBAC resources to your deployment when your cluster is RBAC-disabled, it will cause the overlay build process to fail.
- Understand the resources each component creates: Each component creates its own set of resources that need to be managed. For example, if you include a component that creates a service and another component that creates a deployment, you need to make sure that the service points to the deployment.
- Be careful when disabling components: Some components may depend on others to function properly. When disabling a component, you need to consider the impact it may have on other components.
By following these rules of thumb, you can ensure the components in your overlay work together seamlessly and avoid conflicts. It is also good practice to review the manifests generated by the overlay before deploying them to a production environment, to confirm that the overlay is configured as desired.
## Remote build
The remote build feature allows you to deploy an overlay from a git URL. Note that it does not support custom configurations, as the resources are hosted remotely.
To create manifests using a remote overlay, you can use the following command:
```bash
# Replace $REMOTE_OVERLAY_URL with the URL of an overlay.
$ kubectl kustomize $REMOTE_OVERLAY_URL -o cluster.yaml
```
The command above downloads the overlay specified by `$REMOTE_OVERLAY_URL`, applies its customizations to the base resources, and outputs the resulting manifests to the file `cluster.yaml`. This lets you preview the resources before running the apply command below to deploy using the remote overlay.
```bash
$ kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
```
## Preview manifests
To create a customized deployment using your overlay, run the following command from the root directory of your deployment repository.
```bash
$ kubectl kustomize $PATH_TO_OVERLAY -o cluster.yaml
```
This command applies the customizations specified in the overlay located at `$PATH_TO_OVERLAY` to the base resources and outputs the customized manifests to the file `cluster.yaml`.
`$PATH_TO_OVERLAY` can be a local path or a remote path. For example:
```bash
# Local
$ kubectl kustomize examples/k3s/xs -o cluster.yaml
# Remote
$ kubectl kustomize https://github.com/sourcegraph/deploy-sourcegraph-k8s/examples/k3s/xs -o cluster.yaml
```
> NOTE: This command will only generate the customized manifests and will not apply them to the cluster. It does not affect your current deployment until you run the apply command.
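Before applying, a quick way to review what an overlay produced is to count the generated resources by kind. The sample file below stands in for real `kubectl kustomize` output, since the exact contents depend on your overlay:

```shell
# Stand-in for real output of: kubectl kustomize $PATH_TO_OVERLAY -o cluster.yaml
cat > cluster.yaml <<'EOF'
kind: Service
---
kind: Service
---
kind: Deployment
EOF

# Summarize the generated resources by kind
grep '^kind:' cluster.yaml | sort | uniq -c
```

On the sample above, this prints a count of 2 for `kind: Service` and 1 for `kind: Deployment`.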
## Kustomize with Helm
Kustomize can be used in conjunction with Helm to configure Sourcegraph, as outlined in [this guidance](helm.md#integrate-kustomize-with-helm-chart). However, this approach is only recommended as a temporary workaround while Sourcegraph adds support for previously unsupported customizations in its Helm chart. This means that using Kustomize with Helm is not a long-term solution.
## Deprecated
The previous Kustomize structure for our Kubernetes deployments depended on scripting to create deployment manifests. It offered little flexibility and required direct changes to the base manifests.
The new Kustomize setup introduced in this documentation avoids these issues. The previous Sourcegraph Kustomize overlays are still available, but they should not be used for any new Kubernetes deployment.
See the [old deployment docs for deploying Sourcegraph on Kubernetes](https://docs.sourcegraph.com/@v4.4.2/admin/deploy/kubernetes).
> NOTE: The latest version of our Kustomize overlays does not work on instances that are older than v4.5.0.
# Migration Docs for Kustomize
The old method of deploying Sourcegraph with custom scripts has been deprecated. Instead, the new setup uses Kustomize, a Kubernetes-native tool, for configurations. This guide explains how to migrate from the old setup ([deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph)) to the new one ([deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s)).
> NOTE: Both the old custom scripts and Kustomize only create manifests for deployment and don't change any existing resources in an active cluster.
### Why migrate?
Here are the benefits of the new base cluster with the new Kustomize setup compared to the old one:
- Improved security defaults:
* Runs in non-privileged mode
* Uses non-root users
* Does not require RBAC resources
- Streamlined resource allocation process:
* Allocates resources based on the size of the instance
* Optimized through load testing
* The searcher and symbols use StatefulSets and do not require ephemeral storage
- Utilizes the Kubernetes-native tool Kustomize:
* Built into kubectl
* No additional scripting required
* More extensible and composable
* Highly reusable, enabling the creation of multiple instances from the same base resources and components
- Effortless configurations:
* A comprehensive list of components pre-configured for different use cases
* Designed to work seamlessly with Sourcegraph's design and functionality
* Prevents merge conflicts during upgrades
---
### Migration process
The migration process for transitioning from the Kustomize setup in [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph) to [deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s) involves the steps shown below.
The goal of this migration process is to create a new overlay that generates resources similar to those in the current cluster, ensuring a smooth deployment without disrupting existing resources.
#### Step 1: Upgrade current instance
Upgrade your current instance to the latest version of Sourcegraph (must be 4.5.0 or above) following the [standard upgrade process](update.md#standard-upgrades) for [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph).
##### From Privileged to Non-privileged
Sourcegraph's deployment mode changed from privileged (containers run as root) to non-privileged (containers run as non-root) as the default in the new Kustomize setup. If your instance is currently running in privileged mode and you want to upgrade to `non-privileged` mode, use the [migrate-to-nonprivileged overlay](https://github.com/sourcegraph/deploy-sourcegraph/tree/master/overlays/migrate-to-nonprivileged) from the Sourcegraph [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph) repository when performing your upgrade.
Follow the [standard upgrade process](update.md#standard-upgrades); applying the [migrate-to-nonprivileged overlay](https://github.com/sourcegraph/deploy-sourcegraph/tree/master/overlays/migrate-to-nonprivileged) during the upgrade will convert your deployment to run in non-privileged mode.
#### Step 2: Set up a release branch
Set up a release branch from the latest version branch in your local fork of the [deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s) repository.
```bash
$ git checkout -b release
```
#### Step 3: Set up a directory
Create a copy of the [instances/template](index.md#template) directory and rename it to `instances/my-sourcegraph`:
```bash
$ cp -R instances/template instances/my-sourcegraph
```
#### Step 4: Set up the configuration files
#### kustomization.yaml
The `kustomization.yaml` file is used to configure your Sourcegraph instance.
1\. Rename the [kustomization.template.yaml](kustomize/index.md#kustomization-yaml) file in `instances/my-sourcegraph` to `kustomization.yaml`:
```bash
$ mv instances/my-sourcegraph/kustomization.template.yaml instances/my-sourcegraph/kustomization.yaml
```
#### buildConfig.yaml
The `buildConfig.yaml` file is used to configure components included in your `kustomization` file when required.
2\. Rename the [buildConfig.template.yaml](kustomize/index.md#buildconfig-yaml) file in `instances/my-sourcegraph` to `buildConfig.yaml`:
```bash
$ mv instances/my-sourcegraph/buildConfig.template.yaml instances/my-sourcegraph/buildConfig.yaml
```
#### Step 5: Set namespace
Replace `ns-sourcegraph` with a namespace that matches the existing namespace for your current instance.
You may set `namespace: default` to deploy to the default namespace.
```yaml
# instances/my-sourcegraph/kustomization.yaml
namespace: ns-sourcegraph
```
#### Step 6: Set storage class
To add the storage class name that your current instance is using for all associated resources:
1\. Include the `storage-class/name-update` component under the components list
```yaml
# instances/my-sourcegraph/kustomization.yaml
components:
# This updates storageClassName to
# the STORAGECLASS_NAME value from buildConfig.yaml
- ../../components/storage-class/name-update
```
2\. Input the storage class name by setting the value of `STORAGECLASS_NAME` in `buildConfig.yaml`.
For example, set `STORAGECLASS_NAME=sourcegraph` if `sourcegraph` is the name of an existing storage class:
```yaml
# instances/my-sourcegraph/buildConfig.yaml
kind: SourcegraphBuildConfig
metadata:
name: sourcegraph-kustomize-config
data:
STORAGECLASS_NAME: sourcegraph # -- [ACTION] Update storage class name here
```
#### Step 7: Recreate overlay
>NOTE: You may skip this step if your instance was not deployed using overlays from the [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph) repository.
If you are currently deployed using an existing overlay from the old setup that depends on custom scripts:
1. Manually merge the contents of the old `kustomization.yaml` file into the new one.
2. Copy and paste everything else from the old overlay directory into the new one.
3. In the new `kustomization.yaml` file, replace the old base resources (`bases/deployments`, `bases/pvcs`) with the new base resources that include:
- `buildConfig.yaml`
- `base/sourcegraph`
- `base/monitoring`
```diff
# # instances/my-sourcegraph/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
- - ../bases/deployments
- - ../bases/pvcs
+ - buildConfig.yaml
+ - ../../base/sourcegraph
+ - ../../base/monitoring
```
#### Step 8: Recreate instance resources
Follow our [configuration guide](../configure.md) to recreate your running instance with the provided components.
Keep in mind that you should not change your deployment's characteristics during the migration process. For example, if your cluster currently runs in privileged mode with root users, deploying the instance in non-privileged mode will cause permission issues.
##### Privileged
If your Sourcegraph instance is currently running in privileged mode, start building your overlay with the `clusters/old-base` component, which generates resources similar to the base cluster in deploy-sourcegraph.
```yaml
# instances/my-sourcegraph/kustomization.yaml
components:
- ../../components/clusters/old-base
```
##### Non-privileged
The default cluster now runs in non-privileged mode.
If your instance was deployed using the non-privileged overlay, you can follow the [configuration guide](../configure.md) without adding the `clusters/old-base` component.
>NOTE: `pgsql`, `codeinsights-db`, `searcher`, `symbols`, and `codeintel-db` have been changed from `Deployments` to `StatefulSets`. However, redeploying these services as StatefulSets should not affect your existing deployment as they are all configured to use the same PVCs.
#### Step 9: Review new manifests
[Compare the manifests](index.md#between-an-overlay-and-a-running-cluster) generated by your new overlay with the ones in your running cluster using the command below:
```bash
$ kubectl diff -f new-cluster.yaml
```
Review the changes to ensure that the manifests generated by your new overlay are similar to the ones currently being used by your active cluster.
##### From Deployment to StatefulSet
`searcher` and `symbols` are now StatefulSets that run as headless services. If your current `searcher` and `symbols` are running as Deployments, you will need to remove their services before redeploying them as StatefulSets:
```bash
$ kubectl delete service/searcher
$ kubectl delete service/symbols
```
#### Step 10: Deploy new manifests
Once you are satisfied with the overlay output, you can now deploy the new overlay using these commands:
```bash
# Build manifests again with overlay
$ kubectl kustomize $PATH_TO_OVERLAY -o cluster.yaml
# Apply manifests to cluster
$ kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
```
> WARNING: Make sure to test the new overlay and the migration process in a non-production environment before applying it to your production cluster.
# Operations guides for Sourcegraph on Kubernetes
Operations guides specific to managing [Sourcegraph on Kubernetes](./index.md) installations.
<div class="getting-started">
<a class="btn text-center" href="index">Installation</a>
<a class="btn text-center" href="kustomize">Introduction</a>
<a class="btn text-center" href="configure">Configuration</a>
<a class="btn text-center btn-primary" href="#">★ Maintenance</a>
</div>
## Featured guides
Trying to deploy Sourcegraph on Kubernetes? Refer to our [installation guide](./index.md#installation).
## Configure
We strongly recommend referring to our [Configuration guide](configure.md) to learn about how to configure your Sourcegraph with Kubernetes instance.
## Deploy
Refer to our [installation guide](./index.md#installation) for details on how to deploy Sourcegraph.
Migrating from another [deployment type](../index.md)? Refer to our [migration guides](../migrate-backup.md).
## Deploy with Kustomize
In order to deploy Sourcegraph that is configured for your cluster:
#### Building manifests
Build a new set of manifests using an overlay you've created following our [configuration guide for Kustomize](kustomize/configure.md):
```bash
kubectl kustomize $PATH_TO_OVERLAY -o cluster.yaml
```
#### Reviewing manifests
Review the manifests generated in the previous step:
```bash
less cluster.yaml
```
#### Applying manifests
Run the command below to apply the manifests from the output file `cluster.yaml` to the connected cluster:
```bash
kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
```
Once you have applied your changes:
- *Log in* - browse to your Sourcegraph deployment, login, and verify the instance is working as expected.
## Compare overlays
Below are commands that output the differences between two overlays, allowing you to review the changes and confirm that the new overlay produces resources similar to those currently used by the active cluster (or by another overlay you want to compare against) before applying it.
### Between two overlays
To compare resources between two different Kustomize overlays:
```bash
$ diff \
<(kubectl kustomize $PATH_TO_OVERLAY_1) \
<(kubectl kustomize $PATH_TO_OVERLAY_2) |\
more
```
Example 1: compare the resources generated by the k3s overlay for a size-xs instance with those generated by the k3s overlay for a size-xl instance:
```bash
$ diff \
<(kubectl kustomize examples/k3s/xs) \
<(kubectl kustomize examples/k3s/xl) |\
more
```
Example 2: compare the new base cluster with the old cluster:
```bash
$ diff \
<(kubectl kustomize examples/base) \
<(kubectl kustomize examples/old-cluster) |\
more
```
Example 3: compare the output files from two different overlay builds:
```bash
$ kubectl kustomize examples/old-cluster -o old-cluster.yaml
$ kubectl kustomize examples/base -o new-cluster.yaml
$ diff old-cluster.yaml new-cluster.yaml
```
### Between an overlay and a running cluster
To compare the manifests generated by an overlay with the resources currently in use by the running cluster that `kubectl` is connected to:
```bash
$ kubectl kustomize $PATH_TO_OVERLAY | kubectl diff -f -
```
The command outputs the differences between the customizations specified in the overlay and the resources currently running in the cluster, allowing you to review the changes and confirm that the overlay produces resources similar to those in use by the active cluster before applying it.
Example: compare the k3s overlay for a size-xl instance with the instance currently connected to `kubectl`:
```bash
$ kubectl kustomize examples/k3s/xl | kubectl diff -f -
```
## List pods in cluster
H. Start the remaining Sourcegraph services by following the steps in [applying manifests](#applying-manifests).
## List of ports
To see a list of ports that are currently being used by your Sourcegraph instance:
```bash
kubectl get services
```
Example output:
```bash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blobstore ClusterIP 10.72.3.144 <none> 9000/TCP 25h
cadvisor ClusterIP 10.72.14.130 <none> 48080/TCP 23h
codeinsights-db ClusterIP 10.72.6.240 <none> 5432/TCP,9187/TCP 25h
codeintel-db ClusterIP 10.72.5.10 <none> 5432/TCP,9187/TCP 25h
github-proxy ClusterIP 10.72.10.117 <none> 80/TCP,6060/TCP 25h
gitserver ClusterIP None <none> 10811/TCP 25h
grafana ClusterIP 10.72.6.245 <none> 30070/TCP 25h
indexed-search ClusterIP None <none> 6070/TCP 25h
indexed-search-indexer ClusterIP None <none> 6072/TCP 25h
kubernetes ClusterIP 10.72.0.1 <none> 443/TCP 25h
node-exporter ClusterIP 10.72.5.60 <none> 9100/TCP 25h
otel-collector ClusterIP 10.72.9.221 <none> 4317/TCP,4318/TCP,8888/TCP 25h
pgsql ClusterIP 10.72.6.23 <none> 5432/TCP,9187/TCP 25h
precise-code-intel-worker ClusterIP 10.72.11.102 <none> 3188/TCP,6060/TCP 25h
prometheus ClusterIP 10.72.12.201 <none> 30090/TCP 25h
redis-cache ClusterIP 10.72.15.138 <none> 6379/TCP,9121/TCP 25h
redis-store ClusterIP 10.72.4.162 <none> 6379/TCP,9121/TCP 25h
repo-updater ClusterIP 10.72.11.176 <none> 3182/TCP,6060/TCP 25h
searcher ClusterIP None <none> 3181/TCP,6060/TCP 23h
sourcegraph-frontend ClusterIP 10.72.12.103 <none> 30080/TCP,6060/TCP 25h
sourcegraph-frontend-internal ClusterIP 10.72.9.155 <none> 80/TCP 25h
symbols ClusterIP None <none> 3184/TCP,6060/TCP 23h
syntect-server ClusterIP 10.72.14.49 <none> 9238/TCP,6060/TCP 25h
worker ClusterIP 10.72.7.72 <none> 3189/TCP,6060/TCP 25h
```
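If you only need the mapping of service names to ports, you can filter the output with standard tools. The sample below stands in for saved `kubectl get services` output:

```shell
# services.txt stands in for: kubectl get services > services.txt
cat > services.txt <<'EOF'
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
blobstore   ClusterIP   10.72.3.144   <none>        9000/TCP            25h
pgsql       ClusterIP   10.72.6.23    <none>        5432/TCP,9187/TCP   25h
EOF

# Print each service name with its ports, skipping the header row
awk 'NR>1 {print $1 ": " $5}' services.txt
# prints:
#   blobstore: 9000/TCP
#   pgsql: 5432/TCP,9187/TCP
```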
## Migrate to Kustomize
See the [migration docs for Kustomize](kustomize/migrate.md) for more information.
## Upgrade
- See the [Updating Sourcegraph docs](update.md) on how to upgrade.<br/>
- See the [Updating a Kubernetes Sourcegraph instance docs](../../updates/kubernetes.md) for details on changes in each version to determine if manual migration steps are necessary.
## Troubleshoot
See the [Troubleshooting docs](troubleshoot.md).
# Updating Sourcegraph with Kubernetes
A new version of Sourcegraph is released every month (with patch releases in between, released as needed). Check the [Sourcegraph blog](https://about.sourcegraph.com/blog) for release announcements.
> WARNING: This guide applies exclusively to Kubernetes deployments **without** Helm. Please refer to the [Updating Sourcegraph in the Helm guide](helm.md#upgrading-sourcegraph) when using Helm.
## Standard upgrades
A [standard upgrade](../../updates/index.md#standard-upgrades) occurs between two minor versions of Sourcegraph. If you are looking to jump forward several versions, you must perform a [multi-version upgrade](#multi-version-upgrades) instead.
### Prerequisites
- Read our [update policy](../../updates/index.md#update-policy) to learn about Sourcegraph updates.
- Find the relevant entry for your update version in the [update notes for Sourcegraph with Kubernetes](../../updates/kubernetes.md).
- [Backup](migrate-backup) (snapshot) your databases before performing upgrades.
- This up-to-date backup can be used for recovery in the event that a database upgrade fails or causes issues.
**The following steps assume that you have created a `release` branch following the [instructions in the configuration guide](configure.md)**.
### Upgrade with Kubernetes
**For Sourcegraph versions prior to 4.5.0**
For instances deployed using the old [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph) repository:
**Step 1**: Merge the new version of Sourcegraph into your release branch.
```bash
cd $DEPLOY_SOURCEGRAPH_FORK
# get updates
git fetch upstream
# to merge the upstream release tag into your release branch.
git checkout release
# Choose which version you want to deploy from https://github.com/sourcegraph/deploy-sourcegraph/tags
git merge $NEW_VERSION
```
**Step 2**: Update your install script `kubectl-apply-all.sh`
By default, the install script `kubectl-apply-all.sh` applies our base manifests using the [`kubectl apply` command](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) with a variety of arguments specific to the layout of the [deploy-sourcegraph reference repository](https://github.com/sourcegraph/deploy-sourcegraph).
If you have specific commands that should be run whenever you apply your manifests, you should modify this script accordingly.
For example, if you use [overlays to make changes to the manifests](https://github.com/sourcegraph/deploy-sourcegraph/tree/master/overlays), you should modify this script to apply the manifests from the `generated cluster` directory instead.
**Step 3**: Apply the updates to your cluster.
```bash
$ ./kubectl-apply-all.sh
```
**Step 4**: Monitor the status of the deployment to determine its success.
```bash
$ kubectl get pods -o wide --watch
```
### Upgrade with Kustomize
**For Sourcegraph version 4.5.0 and above**
For instances deployed using the [deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s) repository:
**Step 1**: Create a backup copy of the deployment configuration file
Make a duplicate of the current `cluster.yaml` deployment configuration file that was used to deploy the current Sourcegraph instance.
If the Sourcegraph upgrade fails, you can redeploy using the current `cluster.yaml` file to roll back and restore the instance to its state before the failed upgrade.
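A minimal sketch of that backup step, with a guard so it runs standalone (in practice, `cluster.yaml` is the file you last deployed from):

```shell
# Guard only so this sketch is self-contained; normally cluster.yaml
# already exists from your last deployment
[ -f cluster.yaml ] || touch cluster.yaml

# Keep a dated copy that you can redeploy from if the upgrade fails
cp cluster.yaml "cluster-$(date +%Y-%m-%d).backup.yaml"
ls cluster-*.backup.yaml
```

To roll back, you would apply the backup file with the same `kubectl apply --prune -l deploy=sourcegraph -f <file>` command used for deployment.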
**Step 2**: Merge the new version of Sourcegraph into your release branch.
```bash
cd $DEPLOY_SOURCEGRAPH_FORK
# get updates
git fetch upstream
# to merge the upstream release tag into your release branch.
git checkout release
# Choose which version you want to deploy from https://github.com/sourcegraph/deploy-sourcegraph-k8s/tags
git merge $NEW_VERSION
```
**Step 3**: Build new manifests with Kustomize
Generate a new set of manifests locally using your current overlay `instances/$INSTANCE_NAME` (e.g. `INSTANCE_NAME=my-sourcegraph`) without applying them to the cluster.
```bash
$ kubectl kustomize instances/my-sourcegraph -o cluster.yaml
```
Review the generated manifests to ensure they match your intended configuration and have the images for the `$NEW_VERSION` version.
```bash
$ less cluster.yaml
```
**Step 4**: Deploy the generated manifests
Apply the new manifests from the output file `cluster.yaml` to your cluster:
```bash
$ kubectl apply --prune -l deploy=sourcegraph -f cluster.yaml
```
**Step 5**: Monitor the status of the deployment to determine its success.
```bash
$ kubectl get pods -o wide --watch
```
---
## Multi-version upgrades
A [multi-version upgrade](../../updates/index.md#multi-version-upgrades) is a downtime-incurring upgrade from version 3.20 or later to any future version. Multi-version upgrades will run both schema and data migrations to ensure the data available from the instance remains available post-upgrade.
- Read our [update policy](../../updates/index.md#update-policy) to learn about Sourcegraph updates.
- Find the entries that apply to the version range you're passing through in the [update notes for Sourcegraph with Kubernetes](../../updates/kubernetes.md#multi-version-upgrade-procedure).
- [Backup](migrate-backup) (snapshot) your databases before performing upgrades.
- This up-to-date backup can be used for recovery in the event that a database upgrade fails or causes issues.
### MVU with Kustomize
Due to limitations of the Kustomize deployment method introduced in Sourcegraph 4.5.0, multi-version upgrades (e.g. 4.2.0 -> 4.5.0) cannot be performed using it.
To upgrade your Sourcegraph instance from a version older than 4.5.0 to 4.5.0 or above:
1. Upgrade to 4.5.0 using the Kubernetes deployment method from the old [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph)
1. This is required as an intermediate step before the Kustomize deployment method can be used
1. Verify that the 4.5.0 upgrade completed successfully using [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph)
1. Migrate to the new Kustomize deployment method following the [Migration Docs for Kustomize](kustomize/migrate.md)
### MVU without Kustomize
To perform a multi-version upgrade on a Sourcegraph instance running on Kubernetes:
@ -78,12 +157,12 @@ To perform a multi-version upgrade on a Sourcegraph instance running on Kubernet
- If using an external database, follow the [upgrading external PostgreSQL instances](../../postgres.md#upgrading-external-postgresql-instances) guide.
- Otherwise, perform the following steps from the [upgrading internal Postgres instances](../../postgres.md#upgrading-internal-postgresql-instances) guide:
1. It's assumed that your fork of `deploy-sourcegraph` is up to date with your instance's current version. Pull the upstream changes for `v3.27.0` and resolve any git merge conflicts. We need to temporarily boot the containers defined at this specific version to rewrite existing data to the new Postgres 12 format.
2. Run `kubectl apply -l deploy=sourcegraph -f base/pgsql` to launch a new Postgres 12 container and rewrite the old Postgres 11 data. This may take a while, but streaming container logs should show progress. **NOTE**: The Postgres migration requires enough capacity in its attached volume to accommodate an additional copy of the data currently on disk. Resize the volume now if necessary—the container will fail to start if there is not enough free disk space.
3. Wait until the database container is accepting connections. Once ready, run `kubectl exec pgsql -- psql -U sg -c 'REINDEX database sg;'` to issue a reindex command to Postgres to repair indexes that were silently invalidated by the previous data rewrite step. **If you skip this step**, some data may become inaccessible under normal operation, the following steps are not guaranteed to work, and **data loss will occur**.
4. Follow the same steps for the `codeintel-db`:
- Run `kubectl apply -l deploy=sourcegraph -f base/codeintel-db` to launch Postgres 12.
- Run `kubectl exec codeintel-db -- psql -U sg -c 'REINDEX database sg;'` to issue a reindex command to Postgres.
5. Leave these versions of the databases running while the subsequent migration steps are performed. If `codeinsights-db` is a container new to your instance, now is a good time to start it as well.
1. Pull the upstream changes for the target instance version and resolve any git merge conflicts. The [standard upgrade procedure](#standard-upgrades) describes this step in more detail.
1. Follow the instructions on [how to run the migrator job in Kubernetes](../../how-to/manual_database_migrations.md#kubernetes) to perform the upgrade migration. For specific documentation on the `upgrade` command, see the [command documentation](../../how-to/manual_database_migrations.md#upgrade). The following specific steps are an easy way to run the upgrade command:
1. Edit the file `configure/migrator/migrator.Job.yaml` and set the value of the `args` key to `["upgrade", "--from=<old version>", "--to=<new version>"]`. It is recommended to also add the `--dry-run` flag on a trial invocation to detect if there are any issues with database connection, schema drift, or mismatched versions that need to be addressed. If your instance has in-use code intelligence data it's recommended to also temporarily increase the CPU and memory resources allocated to this job. A symptom of underprovisioning this job will result in an `OOMKilled`-status container.
1. Start the migrator job via `kubectl apply -f configure/migrator/migrator.Job.yaml`.
1. Run `kubectl wait -f configure/migrator/migrator.Job.yaml --for=condition=complete --timeout=-1s` to wait for the job to complete. Run `kubectl logs job.batch/migrator -f` stream the migrator's stdout logs for progress.
1. The remaining infrastructure can now be updated. The [standard upgrade procedure](#standard-upgrades) describes this step in more detail.
- Ensure that the replica counts adjusted in the previous steps are turned back up.
- Run `./kubectl-apply-all.sh` to deploy the new pods to the Kubernetes cluster.
- Monitor the status of the deployment via `kubectl get pods -o wide --watch`.
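As an illustration of the migrator step above, a dry-run invocation for a hypothetical jump from 3.30.0 to 4.5.0 would set the `args` key in `configure/migrator/migrator.Job.yaml` roughly as follows (version numbers here are examples only):

```yaml
# excerpt: configure/migrator/migrator.Job.yaml (illustrative versions)
args: ["upgrade", "--from=v3.30.0", "--to=v4.5.0", "--dry-run"]
```

Once the dry run reports no connection, schema-drift, or version issues, remove `--dry-run` and re-apply the job.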
---
## Rollback
You can roll back by resetting your `release` branch to the old state before redeploying the instance.
If you are rolling back more than a single version, then you must also [rollback your database](../../how-to/rollback_database.md), as database migrations (which may have run at some point during the upgrade) are guaranteed to be compatible with one previous minor version.
### Rollback with Kustomize
**For Sourcegraph version 4.5.0 and above**
For instances deployed using the [deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s) repository:
```bash
# Re-generate manifests
$ kubectl kustomize instances/$YOUR_INSTANCE -o cluster-rollback.yaml
# Review manifests
$ less cluster-rollback.yaml
# Re-deploy
$ kubectl apply --prune -l deploy=sourcegraph -f cluster-rollback.yaml
```
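Before re-applying, it can help to diff the regenerated rollback manifests against the ones you deployed during the upgrade. A sketch with one-line stand-in files (substitute the `cluster.yaml` files you actually generated):

```shell
# scratch directory; these one-liners stand in for the two generated manifests
cd "$(mktemp -d)"
echo 'image: sourcegraph/frontend:4.5.1' > cluster.yaml           # upgraded manifests
echo 'image: sourcegraph/frontend:4.5.0' > cluster-rollback.yaml  # rollback manifests

# diff exits non-zero when the files differ, so `|| true` keeps scripts going
diff cluster.yaml cluster-rollback.yaml || true
```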
### Rollback without Kustomize
**For Sourcegraph versions prior to 4.5.0**
For instances deployed using the old [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph) repository:
```bash
$ ./kubectl-apply-all.sh
```
---
## Improving update reliability and latency with node selectors
Some of the services that comprise Sourcegraph require more resources than others, especially if the
Note that the need to run the above steps can be prevented altogether with [node
selectors](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector), which
tell Kubernetes to assign certain pods to specific nodes.
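A sketch of what such a node selector looks like on a manifest; the kind, name, and label below are illustrative, not taken from the Sourcegraph manifests:

```yaml
# illustrative: pin a resource-hungry service to a labeled node pool
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitserver
spec:
  template:
    spec:
      nodeSelector:
        node-pool: sourcegraph-heavy  # example label you would apply to your nodes
```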
---
## High-availability updates
Sourcegraph is designed to be a high-availability (HA) service, but upgrades by default require a 10m downtime
determine if an instance goes down.
- Database migrations are handled automatically on update when they are necessary.
---
## Database migrations
By default, database migrations will be performed during application startup by a `migrator` init container running prior to the `frontend` deployment. These migrations **must** succeed before Sourcegraph will become available. If the databases are large, these migrations may take a long time.

---

Please refer to the official [CIS hardening guide](https://docs.k3s.io/security/
> NOTE: See [Sourcegraph Vulnerability Management Policy](https://handbook.sourcegraph.com/departments/engineering/dev/policies/vulnerability-management-policy/#vulnerability-service-level-agreements) to learn more about our vulnerability and patching policy as well as the current [vulnerability service level agreements](https://handbook.sourcegraph.com/departments/engineering/dev/policies/vulnerability-management-policy/#vulnerability-service-level-agreements).
## Additional resources
- [sourcegraph/deploy](https://sourcegraph.com/github.com/sourcegraph/deploy)

---

# Reference Repositories
Sourcegraph provides reference repositories with branches corresponding to the version of Sourcegraph you wish to deploy. Each reference repository contains everything you need to spin up and configure an instance for that deployment type, and it also assists in your upgrade process going forward.
## List
| **Deployment type** | **Link to reference repository** |
|:--------------------------|:---------------------------------------------------------|
| Kubernetes                | https://github.com/sourcegraph/deploy-sourcegraph-k8s    |
| Helm | https://github.com/sourcegraph/deploy-sourcegraph-helm |
| Docker and Docker Compose | https://github.com/sourcegraph/deploy-sourcegraph-docker |
> WARNING: [deploy-sourcegraph](https://github.com/sourcegraph/deploy-sourcegraph) has been deprecated
## Create a private copy
### Step 1: Create an empty repository
Follow the [official GitHub docs](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository) on creating a new **empty** repository.
### Step 2: Set the environment variables
Export the following environment variables for the next steps.
- `SG_DEPLOY_REPO_NAME`: name of the deployment repository
- `deploy-sourcegraph-k8s` for Kubernetes with Kustomize deployment
- `deploy-sourcegraph-docker` for Docker and Docker Compose deployment
- `SG_DEPLOY_GITHUB_USERNAME`: the account name that hosts the empty repository created in step 1
- `SG_PRIVATE_DEPLOY_REPO_NAME`: defaults to the same name as `$SG_DEPLOY_REPO_NAME`
- `SG_DEPLOY_VERSION`: the Sourcegraph version to deploy (e.g. `v4.5.0`)
Update the environment variables in the command below before running it in your terminal:
```bash
export SG_DEPLOY_GITHUB_USERNAME="YOUR_USERNAME"
export SG_DEPLOY_REPO_NAME="deploy-sourcegraph-k8s"
export SG_PRIVATE_DEPLOY_REPO_NAME="$SG_DEPLOY_REPO_NAME"
export SG_DEPLOY_VERSION="v4.5.0"
```
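As a quick sanity check, you can echo the URLs that the next step composes from these variables (the values below repeat the placeholders from the export block above):

```shell
# assumes the same exports as above
export SG_DEPLOY_GITHUB_USERNAME="YOUR_USERNAME"
export SG_DEPLOY_REPO_NAME="deploy-sourcegraph-k8s"
export SG_PRIVATE_DEPLOY_REPO_NAME="$SG_DEPLOY_REPO_NAME"

echo "upstream: https://github.com/sourcegraph/$SG_DEPLOY_REPO_NAME"
echo "private:  https://github.com/$SG_DEPLOY_GITHUB_USERNAME/$SG_PRIVATE_DEPLOY_REPO_NAME.git"
```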
### Step 3: Create remote and local copies
Once the required environment variables are exported, run the following commands in the same terminal:
```bash
git clone --bare https://github.com/sourcegraph/$SG_DEPLOY_REPO_NAME
cd $SG_DEPLOY_REPO_NAME.git
git push --mirror https://github.com/$SG_DEPLOY_GITHUB_USERNAME/$SG_PRIVATE_DEPLOY_REPO_NAME.git
cd ..
rm -rf $SG_DEPLOY_REPO_NAME.git
git clone https://github.com/$SG_DEPLOY_GITHUB_USERNAME/$SG_PRIVATE_DEPLOY_REPO_NAME.git
```
### Step 4: Create a release branch
Create a `release` branch to track all of your customizations to Sourcegraph. This branch will be used to [upgrade Sourcegraph](../updates.md) and [install your Sourcegraph instance](./index.md#installation).
```bash
cd $SG_PRIVATE_DEPLOY_REPO_NAME
git checkout $SG_DEPLOY_VERSION -b release-$SG_DEPLOY_VERSION
```
You can now deploy using your private copy of the repository you've just created. Please follow the installation and configuration docs for your specific deployment type for next steps.
## Update your private copy
Before you can upgrade Sourcegraph, first update your private copy from the upstream repository, then merge the upstream release tag for the next minor version into your release branch.
In the following example, the release branch is being upgraded to v4.5.1.
```bash
export YOUR_RELEASE_BRANCH=release-$SG_DEPLOY_VERSION
# check out your release branch
git checkout $YOUR_RELEASE_BRANCH
# add the upstream remote if you have not already
git remote add upstream https://github.com/sourcegraph/$SG_DEPLOY_REPO_NAME
# fetch updates from upstream, including release tags
git fetch upstream --tags
# merge the upstream release tag into your release branch
git merge v4.5.1
```
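The branch-and-merge flow above can be exercised end to end in a throwaway repository. Everything below is a self-contained demo with illustrative tag names, not commands to run against your real fork:

```shell
# build a tiny "upstream" history: tag v4.5.0, branch release from it, tag v4.5.1
cd "$(mktemp -d)"
git init -q -b main          # -b needs git >= 2.28
git config user.email demo@example.com
git config user.name demo
echo 4.5.0 > VERSION && git add VERSION && git commit -qm "release 4.5.0" && git tag v4.5.0
git checkout -q -b release v4.5.0   # long-lived release branch, as in step 4 above
git checkout -q main
echo 4.5.1 > VERSION && git commit -qam "release 4.5.1" && git tag v4.5.1

# the upgrade: merge the newer release tag into the release branch
git checkout -q release
git merge -q v4.5.1
cat VERSION   # → 4.5.1
```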
A [standard upgrade](../updates.md#standard-upgrades) occurs between two minor versions of Sourcegraph. If you are looking to jump forward several versions, you must perform a [multi-version upgrade](../updates.md#multi-version-upgrades) instead.

---

# Install Sourcegraph with minikube
This guide will take you through how to set up a Sourcegraph deployment locally with [minikube](https://minikube.sigs.k8s.io/docs/), a tool that lets you run a single-node Kubernetes cluster on your local machine, using our [minikube overlay](https://github.com/sourcegraph/deploy-sourcegraph-k8s/tree/main/examples/minikube).
> WARNING: This installation method is not officially supported by Sourcegraph. We do not recommend using it for production deployments.
## Sourcegraph minikube overlay
The [Sourcegraph minikube overlay](https://github.com/sourcegraph/deploy-sourcegraph-k8s/tree/main/examples/minikube) removes resource declarations and storage class names so that Sourcegraph can run locally on minikube with fewer resources than a production deployment normally requires. See our docs on creating [kustomize overlays](../kubernetes/kustomize) if you would like to customize the overlay.
## Prerequisites
Following are the prerequisites for running Sourcegraph with minikube on your local machine:
- [minikube](https://minikube.sigs.k8s.io/docs/start/)
- [Docker Desktop](https://www.docker.com/products/docker-desktop/)
- Enable Kubernetes in Docker Desktop. See the [official docs](https://docs.docker.com/desktop/kubernetes/#enable-kubernetes) for detailed instructions.
- A minimum of **8 CPU** and **30GB memory** assigned to your Kubernetes cluster in Docker.
## Deploy
$ minikube start
```
2\. Apply manifests generated by the remote minikube overlay from [deploy-sourcegraph-k8s](https://github.com/sourcegraph/deploy-sourcegraph-k8s)
```sh
$ kubectl apply --prune -l deploy=sourcegraph -k https://github.com/sourcegraph/deploy-sourcegraph-k8s/examples/minikube/base
```
3\. Make sure all the pods are up and running before moving to the next step
```sh
$ kubectl get pods -A
```
> WARNING: The deployment time depends on the resources assigned to your Kubernetes instance. You will need to add more resources through the Docker Desktop dashboard if some of your pods are stuck in the `Pending` state.
4\. Create a Service object that exposes the deployment
```sh
$ kubectl expose deployment sourcegraph-frontend --type=NodePort --name sourcegraph --port=3080 --target-port=3080
```
5\. Connect to the service endpoint for Sourcegraph
```sh
$ minikube service --url sourcegraph
# Example output: http://127.0.0.1:32034
```
That's it! You can now access Sourcegraph in your browser using the URL from the previous step. 🎉
<img class="screenshot w-100" src="https://user-images.githubusercontent.com/68532117/141357183-905d0dbe-2d40-4dec-98b1-0a1cb13b0cf4.png" alt="minikube-startup-screen"/>
## Upgrade
Please refer to the [upgrade docs for all Sourcegraph kubernetes instances](../kubernetes/update.md).
### Quick upgrade
Replace `$NEW_VERSION` with the target version number (**must be v4.5.0+**).
```sh
$ kubectl apply --prune -l deploy=sourcegraph -k https://github.com/sourcegraph/deploy-sourcegraph-k8s/examples/minikube/base?ref=$NEW_VERSION
```
For example, to upgrade from 4.5.0 to 4.5.1:
```sh
$ kubectl apply --prune -l deploy=sourcegraph -k https://github.com/sourcegraph/deploy-sourcegraph-k8s/examples/minikube/base?ref=v4.5.1
```
## Downgrade
Same instructions as upgrades.
Steps to remove your Sourcegraph minikube instance:
1\. Stop the minikube node
```sh
$ minikube stop
```
2\. Remove the minikube cluster
```sh
$ minikube delete
```
---
## Commands
Below is a list of useful commands when deploying Sourcegraph to minikube.
#### Un-expose sourcegraph
```sh
$ kubectl get svc -n ns-sourcegraph
```
#### Get the service endpoint for Sourcegraph specifically
```sh
$ minikube service sourcegraph -n ns-sourcegraph
```
#### Get the service endpoint URL for Sourcegraph specifically
```sh
$ minikube service --url sourcegraph -n ns-sourcegraph
# Example return: http://127.0.0.1:32034
```
#### Get a list of service endpoints
```sh
$ minikube service list
```
Example output:
```sh
|-------------|-------------------------------|--------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-------------------------------|--------------|-----|
| default | blobstore | No node port |
| default | codeinsights-db | No node port |
| default | codeintel-db | No node port |
| default | github-proxy | No node port |
| default | gitserver | No node port |
| default | indexed-search | No node port |
| default | indexed-search-indexer | No node port |
| default | kubernetes | No node port |
| default | pgsql | No node port |
| default | precise-code-intel-worker | No node port |
| default | redis-cache | No node port |
| default | redis-store | No node port |
| default | repo-updater | No node port |
| default | searcher | No node port |
| default | sourcegraph | 3080 | |
| default | sourcegraph-frontend | No node port |
| default | sourcegraph-frontend-internal | No node port |
| default | symbols | No node port |
| default | syntect-server | No node port |
| default | worker | No node port |
| default | worker-executors | No node port |
| kube-system | kube-dns | No node port |
|-------------|-------------------------------|--------------|-----|
```
## Resources
- [Customizations](https://docs.sourcegraph.com/admin/install/kubernetes/configure#customizations)
- [Deploy Sourcegraph with Kustomize](https://docs.sourcegraph.com/admin/install/kubernetes)
- [Introduction to Kubectl and Kustomize](https://kubectl.docs.kubernetes.io/guides/introduction/)
- [List of commonly used Kubernetes commands](https://sourcegraph.github.io/support-generator/)

---

To add CodeCommit repositories in Docker Container:
1. Confirm you can clone the repository locally.
1. Copy all the files in your `$HOME/.ssh` directory to the `$HOME/.sourcegraph/config/ssh` directory. See [docs](../deploy/docker-single-container/index.md#ssh-authentication-config-keys-knownhosts) for more information about our ssh file system.
1. Read our [guide here](../deploy/docker-compose/index.md#git-ssh-configuration) for Docker Compose deployments
1. Read our [guide here](../deploy/kubernetes/configure.md#ssh-for-cloning) for Kubernetes deployments
1. Start (or restart) the container.
1. Connect Sourcegraph to AWS CodeCommit by going to **Sourcegraph > Site Admin > Manage code hosts > Generic Git host** and add the following:

---

# Using external services with Sourcegraph
Sourcegraph by default provides versions of services it needs to operate, including:
- A [PostgreSQL](https://www.postgresql.org/) instance for storing long-term information, such as user information when using Sourcegraph's built-in authentication provider instead of an external one.
- A `sourcegraph/blobstore` instance that serves as a local S3-compatible object storage to hold user uploads before they can be processed. _This data is for temporary storage and content will be automatically deleted once processed._
- A [Jaeger](https://www.jaegertracing.io/) instance for end-to-end distributed tracing.
Your Sourcegraph instance can be configured to use an external or managed version of these services:
- Using a managed version of PostgreSQL can make backups and recovery easier to manage and perform.
- Using a managed object storage service may decrease your hosting costs as persistent volumes are often more expensive than object storage space.
## External services guides
See the following guides to use an external or managed version of each service type.
- See [Using a managed object storage service (S3 or GCS)](./object_storage.md) to replace the bundled blobstore instance.
- See [Using an external Jaeger instance](../observability/tracing.md#Use-an-external-Jaeger-instance) to replace the bundled Jaeger instance.
> NOTE: Using Sourcegraph with an external service is a [paid feature](https://about.sourcegraph.com/pricing). [Contact us](https://about.sourcegraph.com/contact/sales) to get a trial license.
## Cloud alternatives
- Amazon Web Services: [AWS RDS for PostgreSQL](https://aws.amazon.com/rds/), [Amazon ElastiCache](https://aws.amazon.com/elasticache/redis/), and [S3](https://aws.amazon.com/s3/) for storing user uploads.

---

If using Docker for Desktop, `host.docker.internal` will resolve to the host IP.
[See the Helm Redis guidance here](../deploy/kubernetes/helm.md#using-external-redis-instances)
### Kubernetes without Helm
- See our documentation for Kubernetes [here](../deploy/kubernetes/configure.md#external-redis)
- **Related:** [How to Set a Password for Redis using a ConfigMap](../how-to/redis_configmap.md)

---

Docker image, you can deploy a reverse proxy such as [Caddy](https://caddyserver
If you are running Sourcegraph as a Kubernetes cluster, you have two additional options:
1. If you are using [NGINX
ingress](https://github.com/sourcegraph/deploy-sourcegraph/blob/master/docs/configure.md#ingress-controller)
(`kubectl get ingress | grep sourcegraph-frontend`), modify
[`sourcegraph-frontend.Ingress.yaml`](https://github.com/sourcegraph/deploy-sourcegraph/blob/master/base/frontend/sourcegraph-frontend.Ingress.yaml)
by [adding a rewrite rule](https://kubernetes.github.io/ingress-nginx/examples/rewrite/).
## What external HTTP checks are configured?
This error is expected if your instance was not [deployed with Kubernetes](./dep
## Troubleshooting
Please refer to our [dedicated troubleshooting page](troubleshooting.md).

---

Reference Materials
* [Docs: Configure custom Redis](../deploy/kubernetes/configure.md#external-redis)
* [Docs: Using your own Redis server](../external_services/redis.md)
</pre>
7. Modify the manifests for all services listed in [Configure custom Redis](https://docs.sourcegraph.com/admin/install/kubernetes/configure#external-redis). The listing below is an example of the two environment variables that must be added to the services listed in the documentation.
<pre>
**Note:** Be sure to add both environment variables to all services listed in [Configure custom Redis](https://docs.sourcegraph.com/admin/install/kubernetes/configure#external-redis).

---

@ -1,6 +1,6 @@
# How to setup HTTPS connection with Ingress controller on your Kubernetes instance
This document will take you through how to set up an HTTPS connection using the preinstalled [Ingress controller](../deploy/kubernetes/configure.md#ingress-controller), which allows external users to access your main web server over the network. It installs rules for the default ingress; see the comments to restrict it to a specific host. This is our recommended method of configuring network access for production environments.
## Prerequisites
## Steps for GCE-GKE user
> WARNING: Please visit our [Kubernetes Configuration Docs](../deploy/kubernetes/configure.md#ingress-controller) for more detail on Network-related topics
>
### 1. Install the NGINX ingress controller (ingress-nginx)
Update the ingress controller with the previous changes with the following command:
```bash
kubectl apply -f base/frontend/sourcegraph-frontend.Ingress.yaml
```

View File

@@ -42,7 +42,7 @@ docker container run \
### Sourcegraph Cluster (Kubernetes)
We use the [ingress-nginx](https://kubernetes.github.io/ingress-nginx/) for Sourcegraph Cluster running on Kubernetes. Refer to the [deploy-sourcegraph Configuration](deploy/kubernetes/configure.md#configure-network-access) documentation for more information.
We use the [ingress-nginx](https://kubernetes.github.io/ingress-nginx/) for Sourcegraph Cluster running on Kubernetes. Refer to the [deploy-sourcegraph Configuration](deploy/kubernetes/configure.md#network-access) documentation for more information.
### NGINX SSL/HTTPS configuration

View File

@@ -17,8 +17,8 @@ Refer to the [documentation](https://opentelemetry.io/docs/collector/configurati
For more details on configuring the OpenTelemetry collector for your deployment method, refer to the deployment-specific guidance:
- [Kubernetes (with Helm)](../deploy/kubernetes/helm.md#opentelemetry-collector)
- [Kubernetes (without Helm)](../deploy/kubernetes/configure.md#opentelemetry-collector)
- [Kubernetes with Kustomize](../deploy/kubernetes/configure.md#tracing)
- [Kubernetes with Helm](../deploy/kubernetes/helm.md#opentelemetry-collector)
- [Docker Compose](../deploy/docker-compose/operations.md#opentelemetry-collector)
## Tracing
@@ -211,8 +211,8 @@ Refer to the [`jaeger` exporter documentation](https://github.com/open-telemetry
Most Sourcegraph deployment methods still ship with an opt-in Jaeger instance—to set this up, follow the relevant deployment guides, which will also set up the appropriate configuration for you:
- [Kubernetes (with Helm)](../deploy/kubernetes/helm.md#enable-the-bundled-jaeger-deployment)
- [Kubernetes (without Helm)](../deploy/kubernetes/configure.md#enable-the-bundled-jaeger-deployment)
- [Kubernetes with Kustomize](../deploy/kubernetes/configure.md#deploy-opentelemetry-collector-with-jaeger-as-tracing-backend)
- [Kubernetes with Helm](../deploy/kubernetes/helm.md#enable-the-bundled-jaeger-deployment)
- [Docker Compose](../deploy/docker-compose/operations.md#enable-the-bundled-jaeger-deployment)
If you wish to do additional configuration or connect to your own Jaeger instance, the deployed Collector image is bundled with a [basic configuration with Jaeger exporting](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/docker-images/opentelemetry-collector/configs/jaeger.yaml).
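For reference, a collector configuration along those lines might look roughly like this sketch (the Jaeger endpoint is a placeholder; the linked bundled configuration is authoritative):

```yaml
# Sketch only: receive OTLP traces and export them to Jaeger.
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  jaeger:
    endpoint: jaeger-collector:14250   # placeholder endpoint
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```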

View File

@@ -67,7 +67,7 @@ Generally, no additional steps are required to upgrade the databases shipped alo
#### Sourcegraph with Kubernetes
**The upgrade process is different for [Sourcegraph Kubernetes deployments](./deploy/kubernetes/index.md)** because [by default](https://github.com/sourcegraph/sourcegraph/blob/main/docker-images/postgres-12/build.sh#L10), it uses `sourcegraph/postgres-12` which can be [customized with environment variables](https://github.com/sourcegraph/deploy-sourcegraph/blob/7edcadb/docs/configure.md#configure-custom-postgresql).
**The upgrade process is different for [Sourcegraph Kubernetes deployments](./deploy/kubernetes/index.md)** because [by default](https://github.com/sourcegraph/sourcegraph/blob/main/docker-images/postgres-12/build.sh#L10), it uses `sourcegraph/postgres-12` which can be [customized with environment variables](https://github.com/sourcegraph/deploy-sourcegraph/blob/7edcadb/docs/configure.md#external-postgres).
If you have changed `PGUSER`, `PGDATABASE` or `PGDATA`, then the `PG*OLD` and `PG*NEW` environment variables are
required. Below are the defaults and documentation on what each variable is used for:
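As a sketch, overriding them on the database container might look like the fragment below (the variable names assume the `PG*OLD` / `PG*NEW` convention described above, and all values are placeholders; consult the defaults for your image before changing anything):

```yaml
# Sketch only: assumed variable names and placeholder values.
env:
  - name: PGUSEROLD
    value: sg
  - name: PGUSERNEW
    value: sg
  - name: PGDATABASEOLD
    value: sg
  - name: PGDATABASENEW
    value: sg
  - name: PGDATAOLD
    value: /data/pgdata-11
  - name: PGDATANEW
    value: /data/pgdata-12
```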

View File

@@ -7,7 +7,7 @@ First, ensure your **Site admin > Manage code hosts** code host configuration is
Then, follow the directions below depending on your deployment type:
- [Sourcegraph with Docker Compose](../deploy/docker-compose/index.md): See [the Docker Compose git configuration guide](../deploy/docker-compose/index.md#git-configuration).
- [Sourcegraph with Kubernetes](../deploy/kubernetes/index.md): See [Configure repository cloning via SSH](../deploy/kubernetes/configure.md#configure-repository-cloning-via-ssh).
- [Sourcegraph with Kubernetes](../deploy/kubernetes/index.md): See [Configure repository cloning via SSH](../deploy/kubernetes/configure.md#ssh-for-cloning).
- [Single-container Sourcegraph](../deploy/docker-single-container/index.md): See [the single-container git configuration guide](../deploy/docker-single-container/index.md#git-configuration-and-authentication).
>NOTE: Repository access over SSH is not yet supported on [Sourcegraph Cloud](../../cloud/index.md).
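For illustration, the Kubernetes route typically mounts an SSH key and `known_hosts` file into gitserver from a Secret; a sketch might look like this (the Secret name and contents are placeholders — see the linked guide for the real procedure):

```yaml
# Sketch only: placeholder key material; the Secret is mounted into
# gitserver per the linked configuration guide.
apiVersion: v1
kind: Secret
metadata:
  name: gitserver-ssh   # assumed name
type: Opaque
stringData:
  id_rsa: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    (placeholder private key)
    -----END OPENSSH PRIVATE KEY-----
  known_hosts: |
    github.com ssh-ed25519 AAAA(placeholder)
```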

View File

@@ -5,7 +5,7 @@ Sourcegraph supports customising [git-config](https://git-scm.com/docs/git-confi
This guide documents how to configure git-config. To set up SSH and authentication for repositories, see [Repository authentication](auth.md).
- [Sourcegraph with Docker Compose](../deploy/docker-compose/index.md): See [the Docker Compose git configuration guide](../deploy/docker-compose/index.md#git-configuration).
- [Sourcegraph with Kubernetes](../deploy/kubernetes/index.md): See [Configure repository cloning via SSH](../deploy/kubernetes/configure.md#configure-repository-cloning-via-ssh).
- [Sourcegraph with Kubernetes](../deploy/kubernetes/index.md): See [Configure repository cloning via SSH](../deploy/kubernetes/configure.md#ssh-for-cloning).
- [Single-container Sourcegraph](../deploy/docker-single-container/index.md): See [the single-container git configuration guide](../deploy/docker-single-container/index.md#git-configuration-and-authentication).
## Example: alternate clone URL for repos
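A minimal git-config sketch for this uses `url.<base>.insteadOf` to rewrite clone URLs (the hosts below are placeholders):

```ini
# Sketch only: rewrite HTTPS GitHub clone URLs to SSH.
[url "git@github.com:"]
    insteadOf = https://github.com/
```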

View File

@@ -7,16 +7,16 @@
1. Read our [update policy](index.md#update-policy) to learn about Sourcegraph updates.
1. Find the relevant entry for your update in the update notes on this page. **If the notes indicate a patch release exists, target the highest one.**
1. After checking the relevant update notes, refer to either of the following guides to upgrade your instance:
* [Kubernetes with Helm upgrade guide](../deploy/kubernetes/helm.md#standard-upgrades)
* [Kubernetes without Helm upgrade guide](../deploy/kubernetes/update.md#standard-upgrades)
* [Upgrade guide for Kubernetes](../deploy/kubernetes/update.md#standard-upgrades)
* [Upgrade guide for Kubernetes with Helm](../deploy/kubernetes/helm.md#standard-upgrades)
## Multi-version upgrade procedure
1. Read our [update policy](index.md#update-policy) to learn about Sourcegraph updates.
1. Find the relevant entry for your update in the update notes on this page. **If the notes indicate a patch release exists, target the highest one.** These notes may contain relevant information about the infrastructure update, such as resource requirement changes or versions of dependencies (Docker, Kubernetes, externalized databases).
1. After checking the relevant update notes, refer to either of the following guides to upgrade your instance:
* [Kubernetes with Helm upgrade guide](../deploy/kubernetes/helm.md#multi-version-upgrades)
* [Kubernetes without Helm upgrade guide](../deploy/kubernetes/update.md#multi-version-upgrades)
* [Upgrade guide for Kubernetes](../deploy/kubernetes/update.md#multi-version-upgrades)
* [Upgrade guide for Kubernetes with Helm](../deploy/kubernetes/helm.md#multi-version-upgrades)
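For a rough idea of what the multi-version path involves, the linked guides have you run the standalone `migrator` against your databases before starting the new version. One common shape is a one-off Kubernetes Job along these lines (the image tag and version arguments are placeholders, and database connection settings are omitted; follow the upgrade guide for the real manifest):

```yaml
# Sketch only: placeholder versions; connection env vars omitted.
apiVersion: batch/v1
kind: Job
metadata:
  name: migrator
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrator
          image: sourcegraph/migrator:4.5.0          # placeholder tag
          args: ["upgrade", "--from=v4.1.0", "--to=v4.5.0"]  # placeholder versions
```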
<!-- GENERATE UPGRADE GUIDE ON RELEASE (release tooling uses this to add entries) -->