Compare commits


39 Commits

Author SHA1 Message Date
Don Cote
a5d394b7ac
Merge pull request #1 from FlipsideCrypto/add_docker_image
add docker image to cluster
2021-06-29 12:34:34 -04:00
Don Cote
7d435671b9 add docker image to cluster 2021-06-29 12:32:57 -04:00
Don Cote
8cfac9f148 bump version 2021-04-07 10:07:46 -04:00
Don Cote
99e51e1dbd change package path to flipside 2021-04-07 10:01:52 -04:00
Don Cote
e9c4be6321 fix aws cluster get url 2021-04-07 09:48:41 -04:00
Jacob Zhou
173725fee4
Merge pull request #25 from xinsnake/siliang/req-resp-models
update folder structure under ./azure
2021-01-13 14:54:17 +11:00
Siliang Jiao
078fe7547c give back deepcopy under ./ 2021-01-03 16:37:24 +09:00
Siliang Jiao
b5604443fd separate models 2021-01-03 13:40:32 +09:00
siliang.j
0ceac36918 dos2unix 2021-01-03 13:02:23 +09:00
siliang.j
493390987f change folder structure to [interface]/[models/httpmodels] 2021-01-03 12:47:12 +09:00
Siliang Jiao
e472eabca8 add cluster models 2020-12-10 12:26:47 +09:00
Siliang Jiao
acf360f3c2 add spaces 2020-12-06 17:21:38 +09:00
Siliang Jiao
e9467350d4 add separate folder 2020-12-06 17:19:35 +09:00
Siliang Jiao
aa912a2d00 add cluster create req & resp 2020-12-06 14:09:09 +09:00
siliang-jiao
d12ea40315
Merge pull request #24 from xinsnake/siliang/update-cluster-api
Siliang/update cluster api
2020-12-01 12:49:21 +09:00
Siliang Jiao
f1bb5055af update NewCluster 2020-11-29 17:28:12 +09:00
siliang.j
bde69468a6 update ClusterAPI.create() response 2020-11-29 17:06:02 +09:00
Siliang Jiao
c8ef0a9689 update cluster api (added DockerImage settings) 2020-11-29 15:43:53 +09:00
azadeh khojandi
9a14864c91
Merge pull request #22 from EliiseS/es/add-building-linting-to-make-file
Add building/linting/testing to make file and fix issues
2020-03-14 17:23:11 +11:00
Eliise S
036c7c09f7 Remove commented out lines 2020-03-13 10:48:32 +00:00
Eliise S
2680368906 Pin tooling versions 2020-03-12 20:43:03 +00:00
Eliise S
29481e2b80 Move building 2020-02-19 10:15:09 +00:00
Eliise S
683f436eed remove docker build 2020-02-15 23:14:49 +00:00
Eliise S
c4ff38ed31 Remove unused file 2020-02-14 15:24:39 +00:00
Eliise S
1e2d9428c4 Fix formating, linting and build errors 2020-02-14 14:55:03 +00:00
Eliise S
ff7ebab040 Add command to make file 2020-02-14 14:54:12 +00:00
Eliise S
7026e7445d Fix build error 2020-02-14 11:15:45 +00:00
Xinyun Zhou
0db72db70f
Merge pull request #19 from EliiseS/es/declare-structs-more-consistently
Declare JobsListResponse
2020-02-10 02:55:36 +11:00
Eliise S
8360a2526e Declare JobsListResponse 2020-02-07 10:34:59 +00:00
azadeh khojandi
344b877e34
Merge pull request #18 from Azadehkhojandi/r/v0.1.3
update version
2020-01-10 16:03:55 +11:00
Azadeh Khojandi
58b95de8e7 update version 2020-01-10 04:58:00 +00:00
azadeh khojandi
c3951f12f1
Merge pull request #17 from stuartleeks/esst/Use-global-httpclient
Use global httpclient to fix port exhaustion
2020-01-09 22:27:42 +11:00
Eliise S
1ee73b1324 Use global httpclient 2020-01-09 10:12:41 +00:00
Xinyun Zhou
2ac0180b3c
Merge pull request #16 from xinsnake/err-check
fix: handle err in get cluster
2019-12-18 11:08:16 +11:00
Iain Cardnell
ca2d7b82bc fix: handle err in get cluster 2019-12-18 10:44:03 +11:00
azadeh khojandi
496587c3bf
Merge pull request #15 from Azadehkhojandi/az/cleanup
clean up
2019-11-13 15:52:34 +11:00
Azadeh Khojandi
b278f6dc9f clean up 2019-11-13 15:52:01 +11:00
azadeh khojandi
6c452f0206
Merge pull request #14 from storey247/f-fixclusterinfo
Fixed Cluster struct to correctly map default_tags property from json response
2019-11-13 10:13:11 +11:00
Dave Storey
5ecdd40da8 changed clusterinfo to map 2019-11-01 12:35:33 +00:00
136 changed files with 483 additions and 2042 deletions

View File

@@ -19,36 +19,26 @@ RUN apt-get update \
&& go get -x -d github.com/stamblerre/gocode 2>&1 \
&& go build -o gocode-gomod github.com/stamblerre/gocode \
&& mv gocode-gomod $GOPATH/bin/ \
# Install Go tools
&& go get -u -v \
github.com/mdempsky/gocode \
github.com/uudashr/gopkgs/cmd/gopkgs \
github.com/ramya-rao-a/go-outline \
github.com/acroca/go-symbols \
github.com/godoctor/godoctor \
golang.org/x/tools/cmd/guru \
golang.org/x/tools/cmd/gorename \
github.com/rogpeppe/godef \
github.com/zmb3/gogetdoc \
github.com/haya14busa/goplay/cmd/goplay \
github.com/sqs/goreturns \
github.com/josharian/impl \
github.com/davidrjenni/reftools/cmd/fillstruct \
github.com/fatih/gomodifytags \
github.com/cweill/gotests/... \
golang.org/x/tools/cmd/goimports \
golang.org/x/lint/golint \
golang.org/x/tools/cmd/gopls \
github.com/alecthomas/gometalinter \
honnef.co/go/tools/... \
github.com/golangci/golangci-lint/cmd/golangci-lint \
github.com/mgechev/revive \
github.com/derekparker/delve/cmd/dlv 2>&1 \
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
# Enable go modules
ENV GO111MODULE=on
# Install Go tools
RUN \
# --> Go language server
go get golang.org/x/tools/gopls@v0.3.3 \
# --> GolangCI-lint
&& curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sed 's/tar -/tar --no-same-owner -/g' | sh -s -- -b $(go env GOPATH)/bin \
# --> Delve for debugging
&& go get github.com/go-delve/delve/cmd/dlv@v1.4.0 \
# --> Go-outline for extracting a JSON representation of the declarations in a Go source file
&& go get -v github.com/ramya-rao-a/go-outline \
&& rm -rf /go/src/ && rm -rf /go/pkg
RUN apt-get update \
# Install Docker CE CLI
&& apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common lsb-release \
@@ -61,12 +51,7 @@ RUN apt-get update \
RUN apt-get -y install git procps wget nano zsh inotify-tools jq
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
ENV GO111MODULE=on
COPY ./Makefile ./
RUN mkdir -p /go/src/github.com/xinsnake/databricks-sdk-golang
ENV SHELL /bin/bash
RUN mkdir -p /go/src/github.com/FlipsideCrypto/databricks-sdk-golang
ENV SHELL /bin/bash

View File

@@ -1,25 +1,43 @@
// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.
{
"name": "Go",
"dockerComposeFile": "docker-compose.yml",
"service": "docker-in-docker",
"workspaceFolder": "/go/src/github.com/xinsnake/databricks-sdk-golang",
"postCreateCommand": "",
"shutdownAction": "stopCompose",
"extensions": [
"ms-azuretools.vscode-docker",
"ms-vscode.go"
],
"settings": {
"terminal.integrated.shell.linux": "zsh",
"go.gopath": "/go",
"go.inferGopath": true,
"go.useLanguageServer": true,
"go.toolsEnvVars": {
"GO111MODULE": "on"
},
"remote.extensionKind": {
"ms-azuretools.vscode-docker": "workspace"
}
}
"name": "Go",
"dockerComposeFile": "docker-compose.yml",
"service": "docker-in-docker",
"workspaceFolder": "/go/src/github.com/FlipsideCrypto/databricks-sdk-golang",
"postCreateCommand": "",
"shutdownAction": "stopCompose",
"extensions": ["ms-azuretools.vscode-docker", "ms-vscode.go"],
"settings": {
"terminal.integrated.shell.linux": "zsh",
"go.gopath": "/go",
"go.useLanguageServer": true,
"[go]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
},
// Optional: Disable snippets, as they conflict with completion ranking.
"editor.snippetSuggestions": "none"
},
"[go.mod]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
},
"gopls": {
"usePlaceholders": true, // add parameter placeholders when completing a function
// Experimental settings
"completeUnimported": true, // autocomplete unimported packages
"deepCompletion": true // enable deep completion
},
"go.toolsEnvVars": {
"GO111MODULE": "on"
},
"go.lintTool": "golangci-lint",
"go.lintFlags": ["--fast"],
"remote.extensionKind": {
"ms-azuretools.vscode-docker": "workspace"
}
}
}

View File

@@ -1,13 +1,13 @@
version: '3'
version: "3"
services:
docker-in-docker:
build:
build:
context: ../
dockerfile: .devcontainer/Dockerfile
network_mode: "host"
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/go/src/github.com/xinsnake/databricks-sdk-golang
- ..:/go/src/github.com/FlipsideCrypto/databricks-sdk-golang
# This lets you avoid setting up Git again in the container
- ~/.gitconfig:/root/.gitconfig
@@ -15,7 +15,6 @@ services:
# Forwarding the socket is optional, but lets docker work inside the container if you install the Docker CLI.
# See the docker-in-docker-compose definition for details on how to install it.
- /var/run/docker.sock:/var/run/docker.sock
# Overrides default command so things don't shut down after the process ends - useful for debugging
command: sleep infinity

View File

@@ -1,2 +1,14 @@
all : checks test
checks:
go build all
golangci-lint run
test: checks
go test ./...
fmt:
find . -name '*.go' | grep -v vendor | xargs gofmt -s -w
deepcopy:
./cmd/deepcopy-gen -i ./,./aws/...,./azure/... -h ./hack/boilerplate.go.txt -v 3

View File

@@ -8,9 +8,9 @@ This is a Golang SDK for [DataBricks REST API 2.0](https://docs.databricks.com/a
```go
import (
databricks "github.com/xinsnake/databricks-sdk-golang"
dbAzure "github.com/xinsnake/databricks-sdk-golang/azure"
// dbAws "github.com/xinsnake/databricks-sdk-golang/aws"
databricks "github.com/FlipsideCrypto/databricks-sdk-golang"
dbAzure "github.com/FlipsideCrypto/databricks-sdk-golang/azure"
// dbAws "github.com/FlipsideCrypto/databricks-sdk-golang/aws"
)
var o databricks.DBClientOption
@@ -27,21 +27,21 @@ jobs, err := c.Jobs().List()
Everything except the SCIM API is implemented. Please refer to the progress below:
| API | AWS | Azure |
| :--- | :---: | :---: |
| Clusters API | ✔ | ✔ |
| DBFS API | ✔ | ✔ |
| Groups API | ✔ | ✔ |
| Instance Pools API (preview) | ✗ | ✗ |
| Instance Profiles API | ✔ | N/A |
| Jobs API | ✔ | ✔ |
| Libraries API | ✔ | ✔ |
| MLflow API | ✗ | ✗ |
| SCIM API (preview) | ✗ | ✗ |
| Secrets API | ✔ | ✔ |
| Token API | ✔ | ✔ |
| Workspace API | ✔ | ✔ |
| API                          | AWS | Azure |
| :--------------------------- | :-: | :---: |
| Clusters API                 |  ✔  |   ✔   |
| DBFS API                     |  ✔  |   ✔   |
| Groups API                   |  ✔  |   ✔   |
| Instance Pools API (preview) |  ✗  |   ✗   |
| Instance Profiles API        |  ✔  |  N/A  |
| Jobs API                     |  ✔  |   ✔   |
| Libraries API                |  ✔  |   ✔   |
| MLflow API                   |  ✗  |   ✗   |
| SCIM API (preview)           |  ✗  |   ✗   |
| Secrets API                  |  ✔  |   ✔   |
| Token API                    |  ✔  |   ✔   |
| Workspace API                |  ✔  |   ✔   |
## Notes
- [Deepcopy](https://godoc.org/k8s.io/gengo/examples/deepcopy-gen) is generated should you need it.
- [Deepcopy](https://godoc.org/k8s.io/gengo/examples/deepcopy-gen) is generated should you need it.
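
The README's usage snippet above stops right after client setup; a minimal end-to-end sketch under the new module path might look like the following. `Host` and `Token` are assumed `DBClientOption` field names (the option struct is not shown in this diff), so treat this as illustrative only.

```go
package main

import (
	"fmt"

	databricks "github.com/FlipsideCrypto/databricks-sdk-golang"
	dbAzure "github.com/FlipsideCrypto/databricks-sdk-golang/azure"
)

func main() {
	// Configure the client; Host and Token are assumed field names.
	var o databricks.DBClientOption
	o.Host = "https://<region>.azuredatabricks.net"
	o.Token = "<personal-access-token>"

	// Init stores the option and, as of this diff, also calls option.Init().
	var c dbAzure.DBClient
	c.Init(o)

	// List all jobs, as in the README example.
	jobs, err := c.Jobs().List()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d jobs\n", len(jobs))
}
```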

View File

@@ -1,6 +1,6 @@
package aws
import databricks "github.com/xinsnake/databricks-sdk-golang"
import databricks "github.com/FlipsideCrypto/databricks-sdk-golang"
// DBClient is the client for AWS and implements DBClient
type DBClient struct {
@@ -10,6 +10,7 @@ type DBClient struct {
// Init initializes the client
func (c *DBClient) Init(option databricks.DBClientOption) DBClient {
c.Option = option
option.Init()
return *c
}

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// ClustersAPI exposes the Clusters API
@@ -107,7 +107,11 @@ func (a ClustersAPI) Get(clusterID string) (models.ClusterInfo, error) {
}{
clusterID,
}
resp, err := a.Client.performQuery(http.MethodGet, "/clusters/get-delete", data, nil)
resp, err := a.Client.performQuery(http.MethodGet, "/clusters/get", data, nil)
if err != nil {
return clusterInfo, err
}
err = json.Unmarshal(resp, &clusterInfo)
return clusterInfo, err

View File

@@ -5,7 +5,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// DbfsAPI exposes the DBFS API
@@ -60,6 +60,10 @@ func (a DbfsAPI) Create(path string, overwrite bool) (DbfsCreateResponse, error)
}
resp, err := a.Client.performQuery(http.MethodPost, "/dbfs/create", data, nil)
if err != nil {
return createResponse, err
}
err = json.Unmarshal(resp, &createResponse)
return createResponse, err
}
@@ -88,7 +92,12 @@ func (a DbfsAPI) GetStatus(path string) (models.FileInfo, error) {
}
resp, err := a.Client.performQuery(http.MethodGet, "/dbfs/get-status", data, nil)
if err != nil {
return fileInfo, err
}
err = json.Unmarshal(resp, &fileInfo)
return fileInfo, err
}
@@ -108,6 +117,10 @@ func (a DbfsAPI) List(path string) ([]models.FileInfo, error) {
}
resp, err := a.Client.performQuery(http.MethodGet, "/dbfs/list", data, nil)
if err != nil {
return listResponse.Files, err
}
err = json.Unmarshal(resp, &listResponse)
return listResponse.Files, err
}
@@ -176,6 +189,10 @@ func (a DbfsAPI) Read(path string, offset, length int64) (DbfsReadResponse, erro
}
resp, err := a.Client.performQuery(http.MethodGet, "/dbfs/read", data, nil)
if err != nil {
return readResponse, err
}
err = json.Unmarshal(resp, &readResponseBase64)
if err != nil {
return readResponse, err

View File

@@ -5,7 +5,7 @@
package aws
import (
models "github.com/xinsnake/databricks-sdk-golang/aws/models"
models "github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@@ -195,23 +195,6 @@ func (in *GroupsCreateResponse) DeepCopy() *GroupsCreateResponse {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InstancePoolsAPI) DeepCopyInto(out *InstancePoolsAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InstancePoolsAPI.
func (in *InstancePoolsAPI) DeepCopy() *InstancePoolsAPI {
if in == nil {
return nil
}
out := new(InstancePoolsAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InstanceProfilesAPI) DeepCopyInto(out *InstanceProfilesAPI) {
*out = *in

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// GroupsAPI exposes the Groups API

View File

@@ -1,11 +0,0 @@
package aws
// InstancePoolsAPI exposes the InstancePools API
type InstancePoolsAPI struct {
Client DBClient
}
func (a InstancePoolsAPI) init(client DBClient) InstancePoolsAPI {
a.Client = client
return a
}

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// InstanceProfilesAPI exposes the Instance Profiles API

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// JobsAPI exposes Jobs API endpoints

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// LibrariesAPI exposes the Libraries API

View File

@@ -1,8 +1,8 @@
package models
type ClusterEvent struct {
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
Timestamp int64 `json:"timestamp,omitempty" url:"timestamp,omitempty"`
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
Timestamp int64 `json:"timestamp,omitempty" url:"timestamp,omitempty"`
Type *ClusterEventType `json:"type,omitempty" url:"type,omitempty"`
Details *AwsAttributes `json:"details,omitempty" url:"details,omitempty"`
}

View File

@@ -34,4 +34,5 @@ type ClusterInfo struct {
DefaultTags []ClusterTag `json:"default_tags,omitempty" url:"default_tags,omitempty"`
ClusterLogStatus *LogSyncStatus `json:"cluster_log_status,omitempty" url:"cluster_log_status,omitempty"`
TerminationReason *S3StorageInfo `json:"termination_reason,omitempty" url:"termination_reason,omitempty"`
DockerImage *DockerImage `json:"docker_image,omitempty" url:"docker_image,omitempty"`
}

View File

@@ -1,6 +1,6 @@
package models
type ClusterSize struct {
NumWorkers int32 `json:"num_workers,omitempty" url:"num_workers,omitempty"`
NumWorkers int32 `json:"num_workers,omitempty" url:"num_workers,omitempty"`
Autoscale *AutoScale `json:"autoscale,omitempty" url:"autoscale,omitempty"`
}

View File

@@ -1,7 +1,7 @@
package models
type ClusterSpec struct {
ExistingClusterID string `json:"existing_cluster_id,omitempty" url:"existing_cluster_id,omitempty"`
ExistingClusterID string `json:"existing_cluster_id,omitempty" url:"existing_cluster_id,omitempty"`
NewCluster *NewCluster `json:"new_cluster,omitempty" url:"new_cluster,omitempty"`
Libraries []Library `json:"libraries,omitempty" url:"libraries,omitempty"`
Libraries []Library `json:"libraries,omitempty" url:"libraries,omitempty"`
}

View File

@@ -1,7 +1,7 @@
package models
type DiskSpec struct {
DiskType *DiskType `json:"disk_type,omitempty" url:"disk_type,omitempty"`
DiskType *DiskType `json:"disk_type,omitempty" url:"disk_type,omitempty"`
DiskCount int32 `json:"disk_count,omitempty" url:"disk_count,omitempty"`
DiskSize int32 `json:"disk_size,omitempty" url:"disk_size,omitempty"`
}
}

aws/models/DockerImage.go Normal file
View File

@@ -0,0 +1,11 @@
package models
type DockerImage struct {
URL string `json:"url,omitempty" url:"url,omitempty"`
BasicAuth *BasicAuth `json:"basic_auth,omitempty" url:"basic_auth,omitempty"`
}
type BasicAuth struct {
Username string `json:"username,omitempty" url:"username,omitempty"`
Password string `json:"password,omitempty" url:"password,omitempty"`
}
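
Since this file is new in the diff, a quick sketch of how the model composes with `NewCluster` (which gains a `DockerImage` field later in this diff) may help; the registry URL and credentials below are placeholders.

```go
package main

import "github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"

func main() {
	// Placeholder registry coordinates; BasicAuth can be nil for public images.
	img := models.DockerImage{
		URL: "myregistry.example.com/spark-custom:latest",
		BasicAuth: &models.BasicAuth{
			Username: "<user>",
			Password: "<password>",
		},
	}

	// NewCluster now carries an optional Docker image for container clusters.
	cluster := models.NewCluster{DockerImage: &img}
	_ = cluster
}
```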

View File

@@ -1,8 +1,8 @@
package models
type EventDetails struct {
CurrentNumWorkers int32 `json:"current_num_workers,omitempty" url:"current_num_workers,omitempty"`
TargetNumWorkers int32 `json:"target_num_workers,omitempty" url:"target_num_workers,omitempty"`
CurrentNumWorkers int32 `json:"current_num_workers,omitempty" url:"current_num_workers,omitempty"`
TargetNumWorkers int32 `json:"target_num_workers,omitempty" url:"target_num_workers,omitempty"`
PreviousAttributes *ClusterAttributes `json:"previous_attributes,omitempty" url:"previous_attributes,omitempty"`
Attributes *ClusterAttributes `json:"attributes,omitempty" url:"attributes,omitempty"`
PreviousClusterSize *ClusterSize `json:"previous_cluster_size,omitempty" url:"previous_cluster_size,omitempty"`

View File

@@ -1,8 +1,8 @@
package models
type Job struct {
JobID int64 `json:"job_id,omitempty" url:"job_id,omitempty"`
CreatorUserName string `json:"creator_user_name,omitempty" url:"creator_user_name,omitempty"`
JobID int64 `json:"job_id,omitempty" url:"job_id,omitempty"`
CreatorUserName string `json:"creator_user_name,omitempty" url:"creator_user_name,omitempty"`
Settings *JobSettings `json:"settings,omitempty" url:"settings,omitempty"`
CreatedTime int64 `json:"created_time,omitempty" url:"created_time,omitempty"`
CreatedTime int64 `json:"created_time,omitempty" url:"created_time,omitempty"`
}

View File

@@ -1,19 +1,19 @@
package models
type JobSettings struct {
ExistingClusterID string `json:"existing_cluster_id,omitempty" url:"existing_cluster_id,omitempty"`
ExistingClusterID string `json:"existing_cluster_id,omitempty" url:"existing_cluster_id,omitempty"`
NewCluster *NewCluster `json:"new_cluster,omitempty" url:"new_cluster,omitempty"`
NotebookTask *NotebookTask `json:"notebook_task,omitempty" url:"notebook_task,omitempty"`
SparkJarTask *SparkJarTask `json:"spark_jar_task,omitempty" url:"spark_jar_task,omitempty"`
SparkPythonTask *SparkPythonTask `json:"spark_python_task,omitempty" url:"spark_python_task,omitempty"`
SparkSubmitTask *SparkSubmitTask `json:"spark_submit_task,omitempty" url:"spark_submit_task,omitempty"`
Name string `json:"name,omitempty" url:"name,omitempty"`
Libraries []Library `json:"libraries,omitempty" url:"libraries,omitempty"`
Name string `json:"name,omitempty" url:"name,omitempty"`
Libraries []Library `json:"libraries,omitempty" url:"libraries,omitempty"`
EmailNotifications *JobEmailNotifications `json:"email_notifications,omitempty" url:"email_notifications,omitempty"`
TimeoutSeconds int32 `json:"timeout_seconds,omitempty" url:"timeout_seconds,omitempty"`
MaxRetries int32 `json:"max_retries,omitempty" url:"max_retries,omitempty"`
MinRetryIntervalMillis int32 `json:"min_retry_interval_millis,omitempty" url:"min_retry_interval_millis,omitempty"`
RetryOnTimeout bool `json:"retry_on_timeout,omitempty" url:"retry_on_timeout,omitempty"`
TimeoutSeconds int32 `json:"timeout_seconds,omitempty" url:"timeout_seconds,omitempty"`
MaxRetries int32 `json:"max_retries,omitempty" url:"max_retries,omitempty"`
MinRetryIntervalMillis int32 `json:"min_retry_interval_millis,omitempty" url:"min_retry_interval_millis,omitempty"`
RetryOnTimeout bool `json:"retry_on_timeout,omitempty" url:"retry_on_timeout,omitempty"`
Schedule *CronSchedule `json:"schedule,omitempty" url:"schedule,omitempty"`
MaxConcurrentRuns int32 `json:"max_concurrent_runs,omitempty" url:"max_concurrent_runs,omitempty"`
MaxConcurrentRuns int32 `json:"max_concurrent_runs,omitempty" url:"max_concurrent_runs,omitempty"`
}

View File

@@ -1,9 +1,9 @@
package models
type Library struct {
Jar string `json:"jar,omitempty" url:"jar,omitempty"`
Egg string `json:"egg,omitempty" url:"egg,omitempty"`
Whl string `json:"whl,omitempty" url:"whl,omitempty"`
Jar string `json:"jar,omitempty" url:"jar,omitempty"`
Egg string `json:"egg,omitempty" url:"egg,omitempty"`
Whl string `json:"whl,omitempty" url:"whl,omitempty"`
Pypi *PythonPyPiLibrary `json:"pypi,omitempty" url:"pypi,omitempty"`
Maven *MavenLibrary `json:"maven,omitempty" url:"maven,omitempty"`
Cran *RCranLibrary `json:"cran,omitempty" url:"cran,omitempty"`

View File

@@ -3,6 +3,6 @@ package models
type LibraryFullStatus struct {
Library *Library `json:"library,omitempty" url:"library,omitempty"`
Status *LibraryInstallStatus `json:"status,omitempty" url:"status,omitempty"`
Messages []string `json:"messages,omitempty" url:"messages,omitempty"`
IsLibraryForAllClusters bool `json:"is_library_for_all_clusters,omitempty" url:"is_library_for_all_clusters,omitempty"`
Messages []string `json:"messages,omitempty" url:"messages,omitempty"`
IsLibraryForAllClusters bool `json:"is_library_for_all_clusters,omitempty" url:"is_library_for_all_clusters,omitempty"`
}

View File

@@ -15,4 +15,5 @@ type NewCluster struct {
InitScripts []InitScriptInfo `json:"init_scripts,omitempty" url:"init_scripts,omitempty"`
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty" url:"spark_env_vars,omitempty"`
EnableElasticDisk bool `json:"enable_elastic_disk,omitempty" url:"enable_elastic_disk,omitempty"`
DockerImage *DockerImage `json:"docker_image,omitempty" url:"docker_image,omitempty"`
}

View File

@@ -1,11 +1,11 @@
package models
type NodeType struct {
NodeTypeID string `json:"node_type_id,omitempty" url:"node_type_id,omitempty"`
MemoryMb int32 `json:"memory_mb,omitempty" url:"memory_mb,omitempty"`
NumCores float32 `json:"num_cores,omitempty" url:"num_cores,omitempty"`
Description string `json:"description,omitempty" url:"description,omitempty"`
InstanceTypeID string `json:"instance_type_id,omitempty" url:"instance_type_id,omitempty"`
IsDeprecated bool `json:"is_deprecated,omitempty" url:"is_deprecated,omitempty"`
NodeTypeID string `json:"node_type_id,omitempty" url:"node_type_id,omitempty"`
MemoryMb int32 `json:"memory_mb,omitempty" url:"memory_mb,omitempty"`
NumCores float32 `json:"num_cores,omitempty" url:"num_cores,omitempty"`
Description string `json:"description,omitempty" url:"description,omitempty"`
InstanceTypeID string `json:"instance_type_id,omitempty" url:"instance_type_id,omitempty"`
IsDeprecated bool `json:"is_deprecated,omitempty" url:"is_deprecated,omitempty"`
NodeInfo *ClusterCloudProviderNodeInfo `json:"node_info,omitempty" url:"node_info,omitempty"`
}

View File

@@ -2,6 +2,6 @@ package models
type ObjectInfo struct {
ObjectType *ObjectType `json:"object_type,omitempty" url:"object_type,omitempty"`
Path string `json:"path,omitempty" url:"path,omitempty"`
Path string `json:"path,omitempty" url:"path,omitempty"`
Language *Language `json:"language,omitempty" url:"language,omitempty"`
}

View File

@@ -3,5 +3,5 @@ package models
type RunState struct {
LifeCycleState *RunLifeCycleState `json:"life_cycle_state,omitempty" url:"life_cycle_state,omitempty"`
ResultState *RunResultState `json:"result_state,omitempty" url:"result_state,omitempty"`
StateMessage string `json:"state_message,omitempty" url:"state_message,omitempty"`
StateMessage string `json:"state_message,omitempty" url:"state_message,omitempty"`
}

View File

@@ -1,6 +1,6 @@
package models
type SecretScope struct {
Name string `json:"name,omitempty" url:"name,omitempty"`
Name string `json:"name,omitempty" url:"name,omitempty"`
BackendType *ScopeBackendType `json:"backend_type,omitempty" url:"backend_type,omitempty"`
}

View File

@@ -1,11 +1,11 @@
package models
type SparkNode struct {
PrivateIP string `json:"private_ip,omitempty" url:"private_ip,omitempty"`
PublicDNS string `json:"public_dns,omitempty" url:"public_dns,omitempty"`
NodeID string `json:"node_id,omitempty" url:"node_id,omitempty"`
InstanceID string `json:"instance_id,omitempty" url:"instance_id,omitempty"`
StartTimestamp int64 `json:"start_timestamp,omitempty" url:"start_timestamp,omitempty"`
PrivateIP string `json:"private_ip,omitempty" url:"private_ip,omitempty"`
PublicDNS string `json:"public_dns,omitempty" url:"public_dns,omitempty"`
NodeID string `json:"node_id,omitempty" url:"node_id,omitempty"`
InstanceID string `json:"instance_id,omitempty" url:"instance_id,omitempty"`
StartTimestamp int64 `json:"start_timestamp,omitempty" url:"start_timestamp,omitempty"`
NodeAwsAttributes *SparkNodeAwsAttributes `json:"node_aws_attributes,omitempty" url:"node_aws_attributes,omitempty"`
HostPrivateIP string `json:"host_private_ip,omitempty" url:"host_private_ip,omitempty"`
HostPrivateIP string `json:"host_private_ip,omitempty" url:"host_private_ip,omitempty"`
}

View File

@@ -2,5 +2,5 @@ package models
type TerminationReason struct {
Code *TerminationCode `json:"code,omitempty" url:"code,omitempty"`
Parameters []ParameterPair `json:"parameters,omitempty" url:"parameters,omitempty"`
Parameters []ParameterPair `json:"parameters,omitempty" url:"parameters,omitempty"`
}

View File

@@ -1,7 +1,7 @@
package models
type ViewItem struct {
Content string `json:"content,omitempty" url:"content,omitempty"`
Name string `json:"name,omitempty" url:"name,omitempty"`
Content string `json:"content,omitempty" url:"content,omitempty"`
Name string `json:"name,omitempty" url:"name,omitempty"`
Type *ViewType `json:"type,omitempty" url:"type,omitempty"`
}

View File

@@ -5,7 +5,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// SecretsAPI exposes the Secrets API

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// TokenAPI exposes the Token API

View File

@@ -5,7 +5,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/aws/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/aws/models"
)
// WorkspaceAPI exposes the Workspace API

View File

@@ -1,6 +1,6 @@
package azure
import databricks "github.com/xinsnake/databricks-sdk-golang"
import databricks "github.com/FlipsideCrypto/databricks-sdk-golang"
// DBClient is the client for Azure and implements DBClient
type DBClient struct {
@@ -10,6 +10,7 @@ type DBClient struct {
// Init initializes the client
func (c *DBClient) Init(option databricks.DBClientOption) DBClient {
c.Option = option
option.Init()
return *c
}

View File

@@ -4,8 +4,8 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
databricks "github.com/xinsnake/databricks-sdk-golang"
. "github.com/xinsnake/databricks-sdk-golang/azure"
databricks "github.com/FlipsideCrypto/databricks-sdk-golang"
. "github.com/FlipsideCrypto/databricks-sdk-golang/azure"
)
var _ = Describe("Client", func() {

View File

@@ -4,7 +4,8 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/azure/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/httpmodels"
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)
// ClustersAPI exposes the Clusters API
@@ -18,21 +19,21 @@ func (a ClustersAPI) init(client DBClient) ClustersAPI {
}
// Create creates a new Spark cluster
func (a ClustersAPI) Create(cluster models.NewCluster) (models.ClusterInfo, error) {
var clusterInfo models.ClusterInfo
func (a ClustersAPI) Create(cluster httpmodels.CreateReq) (httpmodels.CreateResp, error) {
var createResp httpmodels.CreateResp
resp, err := a.Client.performQuery(http.MethodPost, "/clusters/create", cluster, nil)
if err != nil {
return clusterInfo, err
return createResp, err
}
err = json.Unmarshal(resp, &clusterInfo)
return clusterInfo, err
err = json.Unmarshal(resp, &createResp)
return createResp, err
}
// Edit edits the configuration of a cluster to match the provided attributes and size
func (a ClustersAPI) Edit(clusterInfo models.ClusterInfo) error {
_, err := a.Client.performQuery(http.MethodPost, "/clusters/edit", clusterInfo, nil)
func (a ClustersAPI) Edit(editReq httpmodels.EditReq) error {
_, err := a.Client.performQuery(http.MethodPost, "/clusters/edit", editReq, nil)
return err
}
@@ -99,8 +100,8 @@ func (a ClustersAPI) PermanentDelete(clusterID string) error {
}
// Get retrieves the information for a cluster given its identifier
func (a ClustersAPI) Get(clusterID string) (models.ClusterInfo, error) {
var clusterInfo models.ClusterInfo
func (a ClustersAPI) Get(clusterID string) (httpmodels.GetResp, error) {
var clusterInfo httpmodels.GetResp
data := struct {
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
@@ -108,6 +109,9 @@ func (a ClustersAPI) Get(clusterID string) (models.ClusterInfo, error) {
clusterID,
}
resp, err := a.Client.performQuery(http.MethodGet, "/clusters/get", data, nil)
if err != nil {
return clusterInfo, err
}
err = json.Unmarshal(resp, &clusterInfo)
return clusterInfo, err
@@ -138,9 +142,9 @@ func (a ClustersAPI) Unpin(clusterID string) error {
// List return information about all pinned clusters, currently active clusters,
// up to 70 of the most recently terminated interactive clusters in the past 30 days,
// and up to 30 of the most recently terminated job clusters in the past 30 days
func (a ClustersAPI) List() ([]models.ClusterInfo, error) {
func (a ClustersAPI) List() ([]httpmodels.GetResp, error) {
var clusterList = struct {
Clusters []models.ClusterInfo `json:"clusters,omitempty" url:"clusters,omitempty"`
Clusters []httpmodels.GetResp `json:"clusters,omitempty" url:"clusters,omitempty"`
}{}
resp, err := a.Client.performQuery(http.MethodGet, "/clusters/list", nil, nil)
@@ -153,9 +157,9 @@ func (a ClustersAPI) List() ([]models.ClusterInfo, error) {
}
// ListNodeTypes returns a list of supported Spark node types
func (a ClustersAPI) ListNodeTypes() ([]models.NodeType, error) {
func (a ClustersAPI) ListNodeTypes() ([]httpmodels.ListNodeTypesRespItem, error) {
var nodeTypeList = struct {
NodeTypes []models.NodeType `json:"node_types,omitempty" url:"node_types,omitempty"`
NodeTypes []httpmodels.ListNodeTypesRespItem `json:"node_types,omitempty" url:"node_types,omitempty"`
}{}
resp, err := a.Client.performQuery(http.MethodGet, "/clusters/list-node-types", nil, nil)
@@ -168,9 +172,9 @@ func (a ClustersAPI) ListNodeTypes() ([]models.NodeType, error) {
}
// SparkVersions return the list of available Spark versions
func (a ClustersAPI) SparkVersions() ([]models.SparkVersion, error) {
func (a ClustersAPI) SparkVersions() ([]httpmodels.SparkVersionsRespItem, error) {
var versionsList = struct {
Versions []models.SparkVersion `json:"versions,omitempty" url:"versions,omitempty"`
Versions []httpmodels.SparkVersionsRespItem `json:"versions,omitempty" url:"versions,omitempty"`
}{}
resp, err := a.Client.performQuery(http.MethodGet, "/clusters/spark-versions", nil, nil)
@@ -182,23 +186,12 @@ func (a ClustersAPI) SparkVersions() ([]models.SparkVersion, error) {
return versionsList.Versions, err
}
// ClustersEventsResponse is the response from Events
type ClustersEventsResponse struct {
Events []models.ClusterEvent `json:"events,omitempty" url:"events,omitempty"`
NextPage struct {
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
EndTime int64 `json:"end_time,omitempty" url:"end_time,omitempty"`
Offset int32 `json:"offset,omitempty" url:"offset,omitempty"`
} `json:"next_page,omitempty" url:"next_page,omitempty"`
TotalCount int32 `json:"total_count,omitempty" url:"total_count,omitempty"`
}
// Events retrieves a list of events about the activity of a cluster
func (a ClustersAPI) Events(
clusterID string, startTime, endTime int64, order models.ListOrder,
eventTypes []models.ClusterEventType, offset, limit int64) (ClustersEventsResponse, error) {
eventTypes []models.ClusterEventType, offset, limit int64) (httpmodels.EventsResp, error) {
var eventsResponse ClustersEventsResponse
var eventsResponse httpmodels.EventsResp
data := struct {
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`

View File

@@ -0,0 +1,28 @@
package httpmodels
import (
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)
type CreateReq struct {
NumWorkers int32 `json:"num_workers,omitempty" url:"num_workers,omitempty"`
Autoscale *models.AutoScale `json:"autoscale,omitempty" url:"autoscale,omitempty"`
ClusterName string `json:"cluster_name,omitempty" url:"cluster_name,omitempty"`
SparkVersion string `json:"spark_version,omitempty" url:"spark_version,omitempty"`
SparkConf map[string]string `json:"spark_conf,omitempty" url:"spark_conf,omitempty"`
NodeTypeID string `json:"node_type_id,omitempty" url:"node_type_id,omitempty"`
DriverNodeTypeID string `json:"driver_node_type_id,omitempty" url:"driver_node_type_id,omitempty"`
CustomTags []models.ClusterTag `json:"custom_tags,omitempty" url:"custom_tags,omitempty"`
ClusterLogConf *models.ClusterLogConf `json:"cluster_log_conf,omitempty" url:"cluster_log_conf,omitempty"`
InitScripts []models.InitScriptInfo `json:"init_scripts,omitempty" url:"init_scripts,omitempty"`
DockerImage models.DockerImage `json:"docker_image,omitempty" url:"docker_image,omitempty"`
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty" url:"spark_env_vars,omitempty"`
EnableElasticDisk bool `json:"enable_elastic_disk,omitempty" url:"enable_elastic_disk,omitempty"`
AutoterminationMinutes int32 `json:"autotermination_minutes,omitempty" url:"autotermination_minutes,omitempty"`
InstancePoolID string `json:"instance_pool_id,omitempty" url:"instance_pool_id,omitempty"`
IdempotencyToken string `json:"idempotency_token,omitempty" url:"idempotency_token,omitempty"`
}
type CreateResp struct {
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
}
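
With `ClustersAPI.Create` now taking `httpmodels.CreateReq` (see the ClustersAPI diff above), a sketch of the new request flow follows. The `Clusters()` accessor is an assumption here, analogous to the `Jobs()` accessor shown in the README, and all field values are placeholders.

```go
package main

import (
	"fmt"

	dbAzure "github.com/FlipsideCrypto/databricks-sdk-golang/azure"
	"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/httpmodels"
	"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)

// createDockerCluster sketches the new Create flow under the restructured
// request/response models.
func createDockerCluster(c dbAzure.DBClient) (string, error) {
	req := httpmodels.CreateReq{
		ClusterName:  "docker-cluster", // placeholder values throughout
		SparkVersion: "7.3.x-scala2.12",
		NodeTypeID:   "Standard_DS3_v2",
		NumWorkers:   2,
		// The Azure model uses Url (not URL) and a value-typed BasicAuth.
		DockerImage: models.DockerImage{Url: "myregistry.example.com/spark:latest"},
	}
	resp, err := c.Clusters().Create(req)
	if err != nil {
		return "", err
	}
	return resp.ClusterID, nil // CreateResp carries only the new cluster's ID
}

func main() {
	var c dbAzure.DBClient // normally initialized via c.Init(o)
	id, err := createDockerCluster(c)
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	fmt.Println("created cluster", id)
}
```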

View File

@@ -0,0 +1,36 @@
package httpmodels
import (
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)
type EditReq struct {
NumWorkers int32 `json:"num_workers,omitempty" url:"num_workers,omitempty"`
AutoScale *models.AutoScale `json:"autoscale,omitempty" url:"autoscale,omitempty"`
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
CreatorUserName string `json:"creator_user_name,omitempty" url:"creator_user_name,omitempty"`
Driver *models.SparkNode `json:"driver,omitempty" url:"driver,omitempty"`
Executors []models.SparkNode `json:"executors,omitempty" url:"executors,omitempty"`
SparkContextID int64 `json:"spark_context_id,omitempty" url:"spark_context_id,omitempty"`
JdbcPort int32 `json:"jdbc_port,omitempty" url:"jdbc_port,omitempty"`
ClusterName string `json:"cluster_name,omitempty" url:"cluster_name,omitempty"`
SparkVersion string `json:"spark_version,omitempty" url:"spark_version,omitempty"`
SparkConf *models.SparkConfPair `json:"spark_conf,omitempty" url:"spark_conf,omitempty"`
NodeTypeID string `json:"node_type_id,omitempty" url:"node_type_id,omitempty"`
DriverNodeTypeID string `json:"driver_node_type_id,omitempty" url:"driver_node_type_id,omitempty"`
ClusterLogConf *models.ClusterLogConf `json:"cluster_log_conf,omitempty" url:"cluster_log_conf,omitempty"`
InitScripts []models.InitScriptInfo `json:"init_scripts,omitempty" url:"init_scripts,omitempty"`
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty" url:"spark_env_vars,omitempty"`
AutoterminationMinutes int32 `json:"autotermination_minutes,omitempty" url:"autotermination_minutes,omitempty"`
State *models.ClusterState `json:"state,omitempty" url:"state,omitempty"`
StateMessage string `json:"state_message,omitempty" url:"state_message,omitempty"`
StartTime int64 `json:"start_time,omitempty" url:"start_time,omitempty"`
TerminateTime int64 `json:"terminate_time,omitempty" url:"terminate_time,omitempty"`
LastStateLossTime int64 `json:"last_state_loss_time,omitempty" url:"last_state_loss_time,omitempty"`
LastActivityTime int64 `json:"last_activity_time,omitempty" url:"last_activity_time,omitempty"`
ClusterMemoryMb int64 `json:"cluster_memory_mb,omitempty" url:"cluster_memory_mb,omitempty"`
ClusterCores float32 `json:"cluster_cores,omitempty" url:"cluster_cores,omitempty"`
DefaultTags map[string]string `json:"default_tags,omitempty" url:"default_tags,omitempty"`
ClusterLogStatus *models.LogSyncStatus `json:"cluster_log_status,omitempty" url:"cluster_log_status,omitempty"`
TerminationReason *models.TerminationReason `json:"termination_reason,omitempty" url:"termination_reason,omitempty"`
}

View File

@@ -0,0 +1,15 @@
package httpmodels
import (
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)
type EventsResp struct {
Events []models.ClusterEvent `json:"events,omitempty" url:"events,omitempty"`
NextPage struct {
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
EndTime int64 `json:"end_time,omitempty" url:"end_time,omitempty"`
Offset int32 `json:"offset,omitempty" url:"offset,omitempty"`
} `json:"next_page,omitempty" url:"next_page,omitempty"`
TotalCount int32 `json:"total_count,omitempty" url:"total_count,omitempty"`
}
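
`EventsResp` keeps the paging cursor in `NextPage`, so callers can walk a cluster's full event history by feeding the offset back into `Events`. A sketch under assumptions — the `Clusters()` accessor and the exact paging contract are not shown in this diff:

```go
package example

import (
	dbAzure "github.com/FlipsideCrypto/databricks-sdk-golang/azure"
	"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)

// allEvents pages through a cluster's events, using NextPage.Offset as the
// cursor and TotalCount as the stop condition.
func allEvents(c dbAzure.DBClient, clusterID string) ([]models.ClusterEvent, error) {
	var (
		events []models.ClusterEvent
		offset int64
		order  models.ListOrder // zero value; the API presumably defaults it
	)
	const limit = 50

	for {
		// startTime/endTime of 0 are assumed to mean "no bound" here.
		resp, err := c.Clusters().Events(clusterID, 0, 0, order, nil, offset, limit)
		if err != nil {
			return events, err
		}
		events = append(events, resp.Events...)
		if len(resp.Events) == 0 || int32(len(events)) >= resp.TotalCount {
			return events, nil
		}
		offset = int64(resp.NextPage.Offset)
	}
}
```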

View File

@@ -0,0 +1,36 @@
package httpmodels
import (
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)
type GetResp struct {
NumWorkers int32 `json:"num_workers,omitempty" url:"num_workers,omitempty"`
AutoScale *models.AutoScale `json:"autoscale,omitempty" url:"autoscale,omitempty"`
ClusterID string `json:"cluster_id,omitempty" url:"cluster_id,omitempty"`
CreatorUserName string `json:"creator_user_name,omitempty" url:"creator_user_name,omitempty"`
Driver *models.SparkNode `json:"driver,omitempty" url:"driver,omitempty"`
Executors []models.SparkNode `json:"executors,omitempty" url:"executors,omitempty"`
SparkContextID int64 `json:"spark_context_id,omitempty" url:"spark_context_id,omitempty"`
JdbcPort int32 `json:"jdbc_port,omitempty" url:"jdbc_port,omitempty"`
ClusterName string `json:"cluster_name,omitempty" url:"cluster_name,omitempty"`
SparkVersion string `json:"spark_version,omitempty" url:"spark_version,omitempty"`
SparkConf *models.SparkConfPair `json:"spark_conf,omitempty" url:"spark_conf,omitempty"`
NodeTypeID string `json:"node_type_id,omitempty" url:"node_type_id,omitempty"`
DriverNodeTypeID string `json:"driver_node_type_id,omitempty" url:"driver_node_type_id,omitempty"`
ClusterLogConf *models.ClusterLogConf `json:"cluster_log_conf,omitempty" url:"cluster_log_conf,omitempty"`
InitScripts []models.InitScriptInfo `json:"init_scripts,omitempty" url:"init_scripts,omitempty"`
SparkEnvVars map[string]string `json:"spark_env_vars,omitempty" url:"spark_env_vars,omitempty"`
AutoterminationMinutes int32 `json:"autotermination_minutes,omitempty" url:"autotermination_minutes,omitempty"`
State *models.ClusterState `json:"state,omitempty" url:"state,omitempty"`
StateMessage string `json:"state_message,omitempty" url:"state_message,omitempty"`
StartTime int64 `json:"start_time,omitempty" url:"start_time,omitempty"`
TerminateTime int64 `json:"terminate_time,omitempty" url:"terminate_time,omitempty"`
LastStateLossTime int64 `json:"last_state_loss_time,omitempty" url:"last_state_loss_time,omitempty"`
LastActivityTime int64 `json:"last_activity_time,omitempty" url:"last_activity_time,omitempty"`
ClusterMemoryMb int64 `json:"cluster_memory_mb,omitempty" url:"cluster_memory_mb,omitempty"`
ClusterCores float32 `json:"cluster_cores,omitempty" url:"cluster_cores,omitempty"`
DefaultTags map[string]string `json:"default_tags,omitempty" url:"default_tags,omitempty"`
ClusterLogStatus *models.LogSyncStatus `json:"cluster_log_status,omitempty" url:"cluster_log_status,omitempty"`
TerminationReason *models.TerminationReason `json:"termination_reason,omitempty" url:"termination_reason,omitempty"`
}

View File

@@ -1,11 +1,15 @@
package models
package httpmodels
type NodeType struct {
import (
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)
type ListNodeTypesRespItem struct {
NodeTypeID string `json:"node_type_id,omitempty" url:"node_type_id,omitempty"`
MemoryMb int32 `json:"memory_mb,omitempty" url:"memory_mb,omitempty"`
NumCores float32 `json:"num_cores,omitempty" url:"num_cores,omitempty"`
Description string `json:"description,omitempty" url:"description,omitempty"`
InstanceTypeID string `json:"instance_type_id,omitempty" url:"instance_type_id,omitempty"`
IsDeprecated bool `json:"is_deprecated,omitempty" url:"is_deprecated,omitempty"`
NodeInfo *ClusterCloudProviderNodeInfo `json:"node_info,omitempty" url:"node_info,omitempty"`
NodeInfo *models.ClusterCloudProviderNodeInfo `json:"node_info,omitempty" url:"node_info,omitempty"`
}

View File

@@ -1,6 +1,6 @@
package models
package httpmodels
type SparkVersion struct {
type SparkVersionsRespItem struct {
Key string `json:"key,omitempty" url:"key,omitempty"`
Name string `json:"name,omitempty" url:"name,omitempty"`
}

View File

@@ -26,7 +26,7 @@ type ClusterInfo struct {
LastActivityTime int64 `json:"last_activity_time,omitempty" url:"last_activity_time,omitempty"`
ClusterMemoryMb int64 `json:"cluster_memory_mb,omitempty" url:"cluster_memory_mb,omitempty"`
ClusterCores float32 `json:"cluster_cores,omitempty" url:"cluster_cores,omitempty"`
DefaultTags []ClusterTag `json:"default_tags,omitempty" url:"default_tags,omitempty"`
DefaultTags map[string]string `json:"default_tags,omitempty" url:"default_tags,omitempty"`
ClusterLogStatus *LogSyncStatus `json:"cluster_log_status,omitempty" url:"cluster_log_status,omitempty"`
TerminationReason *TerminationReason `json:"termination_reason,omitempty" url:"termination_reason,omitempty"`
}

View File

@@ -0,0 +1,6 @@
package models
type DockerBasicAuth struct {
Username string `json:"username,omitempty" url:"username,omitempty"`
Password string `json:"password,omitempty" url:"password,omitempty"`
}

View File

@@ -0,0 +1,6 @@
package models
type DockerImage struct {
Url string `json:"url,omitempty" url:"url,omitempty"`
BasicAuth DockerBasicAuth `json:"basic_auth,omitempty" url:"basic_auth,omitempty"`
}

View File

@@ -5,7 +5,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/azure/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/dbfs/models"
)
// DbfsAPI exposes the DBFS API
@@ -60,6 +60,10 @@ func (a DbfsAPI) Create(path string, overwrite bool) (DbfsCreateResponse, error)
}
resp, err := a.Client.performQuery(http.MethodPost, "/dbfs/create", data, nil)
if err != nil {
return createResponse, err
}
err = json.Unmarshal(resp, &createResponse)
return createResponse, err
}
@@ -88,6 +92,10 @@ func (a DbfsAPI) GetStatus(path string) (models.FileInfo, error) {
}
resp, err := a.Client.performQuery(http.MethodGet, "/dbfs/get-status", data, nil)
if err != nil {
return fileInfo, err
}
err = json.Unmarshal(resp, &fileInfo)
return fileInfo, err
}
@@ -108,6 +116,10 @@ func (a DbfsAPI) List(path string) ([]models.FileInfo, error) {
}
resp, err := a.Client.performQuery(http.MethodGet, "/dbfs/list", data, nil)
if err != nil {
return listResponse.Files, err
}
err = json.Unmarshal(resp, &listResponse)
return listResponse.Files, err
}
@@ -176,6 +188,10 @@ func (a DbfsAPI) Read(path string, offset, length int64) (DbfsReadResponse, erro
}
resp, err := a.Client.performQuery(http.MethodGet, "/dbfs/read", data, nil)
if err != nil {
return readResponse, err
}
err = json.Unmarshal(resp, &readResponseBase64)
if err != nil {
return readResponse, err

View File

@@ -1,375 +0,0 @@
// +build !ignore_autogenerated
// Code generated by deepcopy-gen. DO NOT EDIT.
package azure
import (
models "github.com/xinsnake/databricks-sdk-golang/azure/models"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClustersAPI) DeepCopyInto(out *ClustersAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClustersAPI.
func (in *ClustersAPI) DeepCopy() *ClustersAPI {
if in == nil {
return nil
}
out := new(ClustersAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClustersEventsResponse) DeepCopyInto(out *ClustersEventsResponse) {
*out = *in
if in.Events != nil {
in, out := &in.Events, &out.Events
*out = make([]models.ClusterEvent, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
out.NextPage = in.NextPage
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClustersEventsResponse.
func (in *ClustersEventsResponse) DeepCopy() *ClustersEventsResponse {
if in == nil {
return nil
}
out := new(ClustersEventsResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DBClient) DeepCopyInto(out *DBClient) {
*out = *in
in.Option.DeepCopyInto(&out.Option)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DBClient.
func (in *DBClient) DeepCopy() *DBClient {
if in == nil {
return nil
}
out := new(DBClient)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DbfsAPI) DeepCopyInto(out *DbfsAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DbfsAPI.
func (in *DbfsAPI) DeepCopy() *DbfsAPI {
if in == nil {
return nil
}
out := new(DbfsAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DbfsCreateResponse) DeepCopyInto(out *DbfsCreateResponse) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DbfsCreateResponse.
func (in *DbfsCreateResponse) DeepCopy() *DbfsCreateResponse {
if in == nil {
return nil
}
out := new(DbfsCreateResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DbfsListResponse) DeepCopyInto(out *DbfsListResponse) {
*out = *in
if in.Files != nil {
in, out := &in.Files, &out.Files
*out = make([]models.FileInfo, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DbfsListResponse.
func (in *DbfsListResponse) DeepCopy() *DbfsListResponse {
if in == nil {
return nil
}
out := new(DbfsListResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DbfsReadResponse) DeepCopyInto(out *DbfsReadResponse) {
*out = *in
if in.Data != nil {
in, out := &in.Data, &out.Data
*out = make([]byte, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DbfsReadResponse.
func (in *DbfsReadResponse) DeepCopy() *DbfsReadResponse {
if in == nil {
return nil
}
out := new(DbfsReadResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GroupsAPI) DeepCopyInto(out *GroupsAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GroupsAPI.
func (in *GroupsAPI) DeepCopy() *GroupsAPI {
if in == nil {
return nil
}
out := new(GroupsAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GroupsCreateResponse) DeepCopyInto(out *GroupsCreateResponse) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GroupsCreateResponse.
func (in *GroupsCreateResponse) DeepCopy() *GroupsCreateResponse {
if in == nil {
return nil
}
out := new(GroupsCreateResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InstancePoolsAPI) DeepCopyInto(out *InstancePoolsAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InstancePoolsAPI.
func (in *InstancePoolsAPI) DeepCopy() *InstancePoolsAPI {
if in == nil {
return nil
}
out := new(InstancePoolsAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JobsAPI) DeepCopyInto(out *JobsAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobsAPI.
func (in *JobsAPI) DeepCopy() *JobsAPI {
if in == nil {
return nil
}
out := new(JobsAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JobsRunsGetOutputResponse) DeepCopyInto(out *JobsRunsGetOutputResponse) {
*out = *in
out.NotebookOutput = in.NotebookOutput
in.Metadata.DeepCopyInto(&out.Metadata)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobsRunsGetOutputResponse.
func (in *JobsRunsGetOutputResponse) DeepCopy() *JobsRunsGetOutputResponse {
if in == nil {
return nil
}
out := new(JobsRunsGetOutputResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JobsRunsListResponse) DeepCopyInto(out *JobsRunsListResponse) {
*out = *in
if in.Runs != nil {
in, out := &in.Runs, &out.Runs
*out = make([]models.Run, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobsRunsListResponse.
func (in *JobsRunsListResponse) DeepCopy() *JobsRunsListResponse {
if in == nil {
return nil
}
out := new(JobsRunsListResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *LibrariesAPI) DeepCopyInto(out *LibrariesAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LibrariesAPI.
func (in *LibrariesAPI) DeepCopy() *LibrariesAPI {
if in == nil {
return nil
}
out := new(LibrariesAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *LibrariesClusterStatusResponse) DeepCopyInto(out *LibrariesClusterStatusResponse) {
*out = *in
if in.LibraryStatuses != nil {
in, out := &in.LibraryStatuses, &out.LibraryStatuses
*out = make([]models.LibraryFullStatus, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LibrariesClusterStatusResponse.
func (in *LibrariesClusterStatusResponse) DeepCopy() *LibrariesClusterStatusResponse {
if in == nil {
return nil
}
out := new(LibrariesClusterStatusResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScimAPI) DeepCopyInto(out *ScimAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScimAPI.
func (in *ScimAPI) DeepCopy() *ScimAPI {
if in == nil {
return nil
}
out := new(ScimAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretsAPI) DeepCopyInto(out *SecretsAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SecretsAPI.
func (in *SecretsAPI) DeepCopy() *SecretsAPI {
if in == nil {
return nil
}
out := new(SecretsAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TokenAPI) DeepCopyInto(out *TokenAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TokenAPI.
func (in *TokenAPI) DeepCopy() *TokenAPI {
if in == nil {
return nil
}
out := new(TokenAPI)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TokenCreateResponse) DeepCopyInto(out *TokenCreateResponse) {
*out = *in
out.TokenInfo = in.TokenInfo
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TokenCreateResponse.
func (in *TokenCreateResponse) DeepCopy() *TokenCreateResponse {
if in == nil {
return nil
}
out := new(TokenCreateResponse)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkspaceAPI) DeepCopyInto(out *WorkspaceAPI) {
*out = *in
in.Client.DeepCopyInto(&out.Client)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkspaceAPI.
func (in *WorkspaceAPI) DeepCopy() *WorkspaceAPI {
if in == nil {
return nil
}
out := new(WorkspaceAPI)
in.DeepCopyInto(out)
return out
}

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/azure/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/groups/models"
)
// GroupsAPI exposes the Groups API

View File

@@ -1,7 +1,7 @@
package models
type DiskSpec struct {
DiskType *DiskType `json:"disk_type,omitempty" url:"disk_type,omitempty"`
DiskType *DiskType `json:"disk_type,omitempty" url:"disk_type,omitempty"`
DiskCount int32 `json:"disk_count,omitempty" url:"disk_count,omitempty"`
DiskSize int32 `json:"disk_size,omitempty" url:"disk_size,omitempty"`
}
}

View File

@@ -0,0 +1,21 @@
package models
import (
clusterModels "github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/models"
)
type InstancePoolAndStats struct {
InstancePoolName string `json:"instance_pool_name,omitempty" url:"instance_pool_name,omitempty"`
MinIdleInstances int32 `json:"min_idle_instances,omitempty" url:"min_idle_instances,omitempty"`
MaxCapacity int32 `json:"max_capacity,omitempty" url:"max_capacity,omitempty"`
NodetypeID string `json:"node_type_id,omitempty" url:"node_type_id,omitempty"`
CustomTags []clusterModels.ClusterTag `json:"custom_tags,omitempty" url:"custom_tags,omitempty"`
IdleInstanceAutoterminationMinutes int32 `json:"idle_instance_autotermination_minutes,omitempty" url:"idle_instance_autotermination_minutes,omitempty"`
EnableElasticDisk bool `json:"enable_elastic_disk,omitempty" url:"enable_elastic_disk,omitempty"`
DiskSpec DiskSpec `json:"disk_spec,omitempty" url:"disk_spec,omitempty"`
PreloadedSparkVersions []string `json:"preloaded_spark_versions,omitempty" url:"preloaded_spark_versions,omitempty"`
InstancePoolID string `json:"instance_pool_id,omitempty" url:"instance_pool_id,omitempty"`
DefaultTags []clusterModels.ClusterTag `json:"default_tags,omitempty" url:"default_tags,omitempty"`
State InstancePoolState `json:"state,omitempty" url:"state,omitempty"`
Stats InstancePoolStats `json:"stats,omitempty" url:"stats,omitempty"`
}

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net/http"
"github.com/xinsnake/databricks-sdk-golang/azure/models"
"github.com/FlipsideCrypto/databricks-sdk-golang/azure/jobs/models"
)
// JobsAPI exposes Jobs API endpoints
@@ -30,11 +30,14 @@ func (a JobsAPI) Create(jobSettings models.JobSettings) (models.Job, error) {
return job, err
}
// JobsListResponse is the response type returned by JobsList
type JobsListResponse = struct {
Jobs []models.Job `json:"jobs,omitempty" url:"jobs,omitempty"`
}
// List lists all jobs
func (a JobsAPI) List() ([]models.Job, error) {
var jobsList = struct {
Jobs []models.Job `json:"jobs,omitempty" url:"jobs,omitempty"`
}{}
var jobsList JobsListResponse
resp, err := a.Client.performQuery(http.MethodGet, "/jobs/list", nil, nil)
if err != nil {
@@ -131,7 +134,7 @@ func (a JobsAPI) RunsSubmit(runName string, clusterSpec models.ClusterSpec, jobT
return run, err
}
// JobsRunsListResponse is a bit special because it has a HasMore field
// JobsRunsListResponse is the response type returned by RunsList
type JobsRunsListResponse struct {
Runs []models.Run `json:"runs,omitempty" url:"runs,omitempty"`
HasMore bool `json:"has_more,omitempty" url:"has_more,omitempty"`

View File

@@ -0,0 +1,11 @@
package models
import (
libraryModels "github.com/FlipsideCrypto/databricks-sdk-golang/azure/libraries/models"
clusterHttpModels "github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/httpmodels"
)
type ClusterSpec struct {
ExistingClusterID string `json:"existing_cluster_id,omitempty" url:"existing_cluster_id,omitempty"`
NewCluster *clusterHttpModels.CreateReq `json:"new_cluster,omitempty" url:"new_cluster,omitempty"`
Libraries []libraryModels.Library `json:"libraries,omitempty" url:"libraries,omitempty"`
}

azure/jobs/models/Job.go Normal file
View File

@@ -0,0 +1,8 @@
package models
type Job struct {
JobID int64 `json:"job_id,omitempty" url:"job_id,omitempty"`
CreatorUserName string `json:"creator_user_name,omitempty" url:"creator_user_name,omitempty"`
Settings *JobSettings `json:"settings,omitempty" url:"settings,omitempty"`
CreatedTime int64 `json:"created_time,omitempty" url:"created_time,omitempty"`
}

View File

@@ -0,0 +1,24 @@
package models
import (
libraryModels "github.com/FlipsideCrypto/databricks-sdk-golang/azure/libraries/models"
clusterHttpModels "github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/httpmodels"
)
type JobSettings struct {
ExistingClusterID string `json:"existing_cluster_id,omitempty" url:"existing_cluster_id,omitempty"`
NewCluster *clusterHttpModels.CreateReq `json:"new_cluster,omitempty" url:"new_cluster,omitempty"`
NotebookTask *NotebookTask `json:"notebook_task,omitempty" url:"notebook_task,omitempty"`
SparkJarTask *SparkJarTask `json:"spark_jar_task,omitempty" url:"spark_jar_task,omitempty"`
SparkPythonTask *SparkPythonTask `json:"spark_python_task,omitempty" url:"spark_python_task,omitempty"`
SparkSubmitTask *SparkSubmitTask `json:"spark_submit_task,omitempty" url:"spark_submit_task,omitempty"`
Name string `json:"name,omitempty" url:"name,omitempty"`
Libraries []libraryModels.Library `json:"libraries,omitempty" url:"libraries,omitempty"`
EmailNotifications *JobEmailNotifications `json:"email_notifications,omitempty" url:"email_notifications,omitempty"`
TimeoutSeconds int32 `json:"timeout_seconds,omitempty" url:"timeout_seconds,omitempty"`
MaxRetries int32 `json:"max_retries,omitempty" url:"max_retries,omitempty"`
MinRetryIntervalMillis int32 `json:"min_retry_interval_millis,omitempty" url:"min_retry_interval_millis,omitempty"`
RetryOnTimeout bool `json:"retry_on_timeout,omitempty" url:"retry_on_timeout,omitempty"`
Schedule *CronSchedule `json:"schedule,omitempty" url:"schedule,omitempty"`
MaxConcurrentRuns int32 `json:"max_concurrent_runs,omitempty" url:"max_concurrent_runs,omitempty"`
}
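
With this restructure, a job's `new_cluster` payload is now the clusters package's `CreateReq`. A sketch of building `JobSettings` for `JobsAPI.Create` under that layout — `NotebookPath` is an assumed `NotebookTask` field (its definition is not shown in this diff), and all values are placeholders:

```go
package example

import (
	clusterHttpModels "github.com/FlipsideCrypto/databricks-sdk-golang/azure/clusters/httpmodels"
	jobModels "github.com/FlipsideCrypto/databricks-sdk-golang/azure/jobs/models"
)

// dockerJobSettings assembles JobSettings whose new_cluster is the clusters
// package's CreateReq, per the restructure above.
func dockerJobSettings() jobModels.JobSettings {
	return jobModels.JobSettings{
		Name: "nightly-docker-job",
		NewCluster: &clusterHttpModels.CreateReq{
			SparkVersion: "7.3.x-scala2.12",
			NodeTypeID:   "Standard_DS3_v2",
			NumWorkers:   1,
		},
		NotebookTask: &jobModels.NotebookTask{
			NotebookPath: "/Jobs/nightly", // assumed field name
		},
		TimeoutSeconds: 3600,
		MaxRetries:     1,
	}
}
```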

Some files were not shown because too many files have changed in this diff.