initial commit

sido
2018-10-23 21:22:53 +02:00
commit d365e29472
15 changed files with 879 additions and 0 deletions

12
charts/opal/Chart.yaml Normal file

@ -0,0 +1,12 @@
apiVersion: v1
appVersion: "1.0"
description: Opal - helm stack (in BETA)
name: opal
version: 0.5.4
sources:
- https://git.webhosting.rug.nl/opal/opal-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/opal/opal-ops-docker-helm/
home: https://obiba.org
maintainers:
- name: sidohaakma
- name: fdlk

138
charts/opal/README.md Normal file

@ -0,0 +1,138 @@
# Opal
This chart is used for acceptance and production use cases.
## Containers
The created containers are:
- MOLGENIS
- ElasticSearch
- PostgreSQL **(optional)**
## Provisioning
You can choose which registry to pull from. There are two registries:
- https://registry.molgenis.org
- https://hub.docker.com
registry.molgenis.org contains the bleeding-edge versions (PR builds and master merges), while hub.docker.com contains the released artifacts (MOLGENIS releases and release candidates).
The three properties you need to specify are:
- ```molgenis.image.repository```
- ```molgenis.image.name```
- ```molgenis.image.tag```
Besides choosing which image to pull, you also have to set an administrator password. You can do this by specifying the following property; a combined example is shown below.
- ```molgenis.adminPassword```
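For illustration, a values override that combines these properties might look like the sketch below. This is only a sketch: the image name and tag are placeholders, and you pass the file with `helm install -f`.

```yaml
molgenis:
  image:
    repository: registry.hub.docker.com   # or registry.molgenis.org for bleeding-edge builds
    name: <image-name>                     # placeholder: the image you want to pull
    tag: stable                            # placeholder: or a specific released tag
  adminPassword: "change-me"               # placeholder: set a strong administrator password
```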
### Firewall
The firewall is defined at the service level; you can enable it with this attribute in the values:
- ```molgenis.firewall.enabled``` default 'false'
If set to 'true', the following options become available; exactly one of them has to be set (see the sketch after this list).
- ```molgenis.firewall.umcg.enabled``` default 'false'
- ```molgenis.firewall.cluster.enabled``` default 'false'
UMCG = only reachable from within the UMCG network.
Cluster = only reachable from within the GCC cluster environment.
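For example, to open the stack to the UMCG network only, a sketch of the firewall values would be:

```yaml
molgenis:
  firewall:
    enabled: true
    umcg:
      enabled: true      # reachable from within the UMCG network only
    cluster:
      enabled: false     # keep the cluster-scoped rule disabled
```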
## Services
When you start MOLGENIS you need:
- an Elasticsearch instance (5.5.6)
- a PostgreSQL instance (9.6)
You can attach additional services like:
- an OpenCPU instance
### Elasticsearch
You can configure Elasticsearch by providing the cluster location.
The transport address points at the node communication channel and is used by the native Java API, which MOLGENIS uses to communicate with Elasticsearch.
From Elasticsearch version 6 onwards the Java API is no longer supported, so at the moment you can only use Elasticsearch instances up to major version 5.
- ```molgenis.services.elasticsearch.transportAddresses: localhost:9300```
To configure the index on an Elasticsearch cluster you can specify the clusterName property.
- ```molgenis.services.elasticsearch.clusterName: molgenis```
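Putting both Elasticsearch properties together (the values shown are the defaults mentioned above):

```yaml
molgenis:
  services:
    elasticsearch:
      transportAddresses: localhost:9300   # node communication channel, used by the native Java API
      clusterName: molgenis                # Elasticsearch cluster to index on
```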
### Postgres
You can specify the location of the PostgreSQL instance with the following property:
- ```molgenis.services.postgres.host: localhost```
You can specify the schema by filling out this property:
- ```molgenis.services.postgres.scheme: molgenis```
You can specify credentials for the database scheme with the following properties:
- ```molgenis.services.postgres.user: molgenis```
- ```molgenis.services.postgres.password: molgenis```
For testing you can use the **PostgreSQL** Helm chart from the Kubernetes charts repository with these answers:
```bash
# answers for postgresql chart
postgresUser=molgenis
postgresPassword=molgenis
postgresDatabase=molgenis
persistence.enabled=false
```
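On the MOLGENIS side, a sketch of the matching connection settings built from the properties above:

```yaml
molgenis:
  services:
    postgres:
      host: localhost      # location of the PostgreSQL instance
      scheme: molgenis     # database scheme
      user: molgenis       # credentials for the scheme
      password: molgenis
```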
### OpenCPU
You can set the location of the OpenCPU cluster with this property:
- ```molgenis.services.opencpu.host: localhost```
You can test the OpenCPU settings using the **OpenCPU** Helm chart of MOLGENIS.
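For example:

```yaml
molgenis:
  services:
    opencpu:
      host: localhost      # location of the OpenCPU cluster
```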
## Resources
You can specify resources per resource type. There are two resource types:
- memory of the container
- maximum heap space of the JVM
Specify the memory usage of the container:
- ```molgenis.resources.limits.memory```
Specify the maximum heap space for the JVM:
- ```molgenis.javaOpts.maxHeapSpace```
Select the resources you need depending on the customer you need to serve; a combined sketch is shown below.
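A sketch combining both settings (the sizes are placeholders; make sure the JVM heap fits inside the container memory limit):

```yaml
molgenis:
  javaOpts:
    maxHeapSpace: "2g"     # placeholder: maximum JVM heap
  resources:
    limits:
      memory: 3Gi          # placeholder: container memory limit, leave headroom above the heap
```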
## Persistence
You can enable persistence on your MOLGENIS stack by specifying the following property:
- ```persistence.enabled``` default 'true'
You can also choose to retain the NFS volume:
- ```persistence.retain``` default 'false'
The size and claim name can be specified per service. There are currently three services that can be persisted (one of them optional); a combined sketch follows these lists.
- MOLGENIS
- ElasticSearch
- PostgreSQL **(optional)**
MOLGENIS persistence properties:
- ```molgenis.persistence.claim```
- ```molgenis.persistence.size```
ElasticSearch persistence properties:
- ```elasticsearch.persistence.claim```
- ```elasticsearch.persistence.size```
PostgreSQL persistence properties:
- ```postgres.persistence.claim```
- ```postgres.persistence.size```
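A combined persistence sketch using the properties above (the claim names and sizes are placeholders):

```yaml
persistence:
  enabled: true                       # default 'true'
  retain: false                       # default 'false': do not keep the NFS volume
molgenis:
  persistence:
    claim: molgenis-nfs-claim         # placeholder claim name
    size: 10Gi                        # placeholder size
elasticsearch:
  persistence:
    claim: elasticsearch-nfs-claim
    size: 5Gi
postgres:
  persistence:
    claim: postgres-nfs-claim
    size: 5Gi
```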
### Resolve your persistent volume
You may not know which volume is attached to your MOLGENIS instance. You can find out by executing:
```bash
kubectl get pv
```
You can now view the persistent volume claims and the attached volumes.
| NAME | CAPACITY | ACCESS MODES | RECLAIM POLICY | STATUS | CLAIM | STORAGECLASS | REASON | AGE |
| ---- | -------- | ------------ | -------------- | ------ | ----- | ------------ | ------ | --- |
| pvc-45988f55-900f-11e8-a0b4-005056a51744 | 30G | RWX | Retain | Bound | molgenis-solverd/molgenis-nfs-claim | nfs-provisioner-retain | | 33d |
| pvc-3984723d-220f-14e8-a98a-skjhf88823kk | 30G | RWO | Delete | Bound | molgenis-test/molgenis-nfs-claim | nfs-provisioner | | 33d |
You can see that ```molgenis-test/molgenis-nfs-claim``` is bound to the volume ```pvc-3984723d-220f-14e8-a98a-skjhf88823kk```.
To view the data in this volume, open a shell in the nfs-provisioner pod, go to the ```export``` directory and look up the directory ```pvc-3984723d-220f-14e8-a98a-skjhf88823kk```.

137
charts/opal/questions.yml Normal file

@ -0,0 +1,137 @@
categories:
- OPAL
questions:
- variable: opal.environment
label: Environment
default: "test"
description: "Environment of Opal instance"
type: enum
options:
- development
- test
- acceptance
- production
required: true
group: "Provisioning"
- variable: molgenis.type.kind
label: Type
default: "medium"
description: "Type of MOLGENIS resources"
type: enum
options:
- small
- medium
- large
required: true
group: "Provisioning"
- variable: molgenis.image.tag
label: Version
default: "stable"
description: "Select a MOLGENIS version (check the registry.molgenis.org or hub.docker.com for released tags)"
type: string
required: true
group: "Provisioning"
- variable: molgenis.adminPassword
label: Administrator password
default: ""
description: "Enter an administrator password"
type: password
required: true
group: "Provisioning"
- variable: service.firewall.enabled
label: Firewall enabled
default: false
description: "Firewall enabled (can be cluster or UMCG scoped)"
type: boolean
required: true
group: "Services"
show_subquestion_if: true
subquestions:
- variable: service.firewall.kind
default: "umcg"
description: "Firewall kind. This can be 'umcg' or 'cluster' environment"
type: enum
required: true
options:
- umcg
- cluster
label: Firewall kind
- variable: molgenis.advanced
label: Advanced mode
default: false
description: "Do you want to override the default values in advanced mode"
type: boolean
required: true
group: "Advanced"
show_subquestion_if: true
subquestions:
- variable: molgenis.image.repository
label: Registry
default: "registry.hub.docker.com"
description: "Select a registry to pull from"
type: enum
options:
- "registry.hub.docker.com"
- "registry.molgenis.org"
required: true
group: "Provisioning"
- variable: molgenis.services.opencpu.host
label: OpenCPU cluster
default: "molgenis-opencpu.opencpu"
description: "Specify the OpenCPU cluster"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.embedded
label: Postgres embedded
default: true
description: "Do you want an embedded postgres"
type: boolean
required: true
group: "Services"
- variable: molgenis.services.postgres.host
label: Postgres cluster location
default: "localhost"
description: "Set the location of the postgres cluster. This can be localhost when the postgres is enabled else you need to specify a cluster location if you do not want a embedded postgres instance)"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.scheme
label: Database scheme
default: "molgenis"
description: "Set the database scheme"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.user
label: Database username
default: "molgenis"
description: "Set user of the database scheme"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.password
label: Database password
default: "molgenis"
description: "Set the password of the database scheme"
type: string
required: true
group: "Services"
- variable: persistence.retain
default: false
description: "Do you want to retain the persistent volume"
type: boolean
label: Retain volume
group: "Persistence"
- variable: persistence.molgenis.size
default: "default"
description: "Size of MOLGENIS filestore (PostgreSQL and ElasticSearch excluded)"
type: enum
options:
- "default"
- "5Gi"
- "10Gi"
- "30Gi"
label: Size MOLGENIS filestore
group: "Persistence"


@ -0,0 +1,4 @@
dependencies:
- name: mysql
  version: ^0.16
  repository: https://kubernetes-charts.storage.googleapis.com/


@ -0,0 +1,19 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "opal.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ template "opal.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "opal.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "opal.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
{{- end }}


@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "opal.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "opal.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "opal.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@ -0,0 +1,103 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  {{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
  name: {{ template "opal.fullname" . }}
  labels:
    app: {{ template "opal.name" . }}
    chart: {{ template "opal.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "opal.name" . }}
      release: {{ .Release.Name }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ template "opal.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: opal
          {{- with .Values.opal }}
          image: {{ .image.repository }}/{{ .image.name }}:{{ .image.tag }}
          imagePullPolicy: {{ .image.pullPolicy }}
          env:
            - name: opal.home
              value: /home/opal
            - name: db_uri
              value: jdbc:postgresql://localhost/opal
            - name: db_user
              value: opal
            - name: db_password
              value: opal
            - name: admin.password
              value: "{{ .adminPassword }}"
            - name: CATALINA_OPTS
              {{- if eq .type.kind "small" }}
              value: "-Xmx{{ .type.small.javaOpts.maxHeapSpace }} -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
              {{- else if eq .type.kind "medium" }}
              value: "-Xmx{{ .type.medium.javaOpts.maxHeapSpace }} -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
              {{- else }}
              value: "-Xmx{{ .type.large.javaOpts.maxHeapSpace }} -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
              {{- end }}
          ports:
            - containerPort: 8080
          {{- if $.Values.persistence.enabled }}
          volumeMounts:
            - name: opal-nfs
              mountPath: /home/opal
          {{- end }}
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 5
            failureThreshold: 25
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /api/v2/version
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 30
            failureThreshold: 3
            successThreshold: 1
          resources:
            {{- if eq .type.kind "small" }}
{{ toYaml .type.small.resources | indent 12 }}
            {{- else if eq .type.kind "medium" }}
{{ toYaml .type.medium.resources | indent 12 }}
            {{- else }}
{{ toYaml .type.large.resources | indent 12 }}
            {{- end }}
          {{- end }}
      {{- if .Values.persistence.enabled }}
      volumes:
        - name: opal-nfs
          persistentVolumeClaim:
            claimName: {{ .Values.opal.persistence.claim }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}


@ -0,0 +1,44 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "opal.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "{{ $.Release.Name }}-ingress"
  labels:
    app: {{ template "opal.name" . }}
    chart: {{ template "opal.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  {{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- if eq $.Values.opal.environment "development" }}
    - host: {{ .Release.Name }}.dev.opal.org
    {{- else if eq $.Values.opal.environment "test" }}
    - host: {{ .Release.Name }}.test.opal.org
    {{- else if eq $.Values.opal.environment "acceptance" }}
    - host: {{ .Release.Name }}.accept.opal.org
    {{- else }}
    - host: {{ .Release.Name }}.opal.org
    {{- end }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $.Values.service.port }}
{{- end }}


@ -0,0 +1,29 @@
{{- if .Values.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.mysql.persistence.claim }}
  annotations:
    {{- if .Values.persistence.retain }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner-retain"
    {{- else }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner"
    {{- end }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      {{- if eq .Values.persistence.mysql.size "default" }}
      {{- if eq .Values.opal.type.kind "small" }}
      storage: {{ .Values.mysql.type.small.persistence.size }}
      {{- else if eq .Values.opal.type.kind "medium" }}
      storage: {{ .Values.mysql.type.medium.persistence.size }}
      {{- else }}
      storage: {{ .Values.mysql.type.large.persistence.size }}
      {{- end }}
      {{- else }}
      storage: {{ .Values.persistence.mysql.size }}
      {{- end }}
{{- end }}


@ -0,0 +1,29 @@
{{- if .Values.persistence.enabled -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.opal.persistence.claim }}
  annotations:
    {{- if .Values.persistence.retain }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner-retain"
    {{- else }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner"
    {{- end }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      {{- if eq .Values.persistence.opal.size "default" }}
      {{- if eq .Values.opal.type.kind "small" }}
      storage: {{ .Values.opal.type.small.persistence.size }}
      {{- else if eq .Values.opal.type.kind "medium" }}
      storage: {{ .Values.opal.type.medium.persistence.size }}
      {{- else }}
      storage: {{ .Values.opal.type.large.persistence.size }}
      {{- end }}
      {{- else }}
      storage: {{ .Values.persistence.opal.size }}
      {{- end }}
{{- end }}


@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ template "opal.fullname" . }}
  labels:
    app: {{ template "opal.name" . }}
    chart: {{ template "opal.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: opal
      port: {{ .Values.service.port }}
  selector:
    app: {{ template "opal.name" . }}
    release: {{ .Release.Name }}

143
charts/opal/values.yaml Normal file

@ -0,0 +1,143 @@
# Default values for opal.
replicaCount: 1
service:
  type: LoadBalancer
  firewall:
    enabled: false
    kind: "umcg"
    umcg:
      rules:
        - 127.0.0.1/32
    cluster:
      rules:
        - 127.0.0.1/32
  port: 8080
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  # This will be used again when external domains need to be attached to the instance
  # hosts:
  #   - name: test
  path: /
  tls: []
opal:
  advanced: false
  type:
    kind: medium
    small:
      javaOpts:
        maxHeapSpace: "2g"
      resources:
        limits:
          cpu: 2
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      persistence:
        size: 5Gi
    medium:
      javaOpts:
        maxHeapSpace: "3g"
      resources:
        limits:
          cpu: 3
          memory: 3Gi
        requests:
          cpu: 200m
          memory: 3Gi
      persistence:
        size: 10Gi
    large:
      javaOpts:
        maxHeapSpace: "4g"
      resources:
        limits:
          cpu: 4
          memory: 4Gi
        requests:
          cpu: 200m
          memory: 4Gi
      persistence:
        size: 30Gi
  environment: test
  image:
    repository: registry.hub.docker.com
    name: obiba/opal
    tag: stable
    pullPolicy: Always
  adminPassword:
  persistence:
    claim: opal-nfs-claim
  services:
    rserver:
      host: localhost
    mysql:
      host: localhost
rserver:
  image:
    repository: obiba/opal-rserver
    tag: stable
    pullPolicy: IfNotPresent
mysql:
  type:
    small:
      resources:
        limits:
          cpu: 1
          memory: 512Mi
        requests:
          cpu: 100m
          memory: 512Mi
      persistence:
        size: 5Gi
    medium:
      resources:
        limits:
          cpu: 2
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 2Gi
      persistence:
        size: 10Gi
    large:
      resources:
        limits:
          cpu: 4
          memory: 4Gi
        requests:
          cpu: 100m
          memory: 4Gi
      persistence:
        size: 15Gi
  image:
    repository: postgres
    tag: 9.6-alpine
    pullPolicy: IfNotPresent
  persistence:
    claim: mysql-nfs-claim
persistence:
  enabled: true
  retain: false
  opal:
    size: "default"
  mysql:
    size: "default"
nodeSelector: {
  deployPod: "true"
}
tolerations: []
affinity: {}