
55 Commits

Author SHA1 Message Date
d4a8a3ed42 Merge branch 'chore/pipeline' of P129679/molgenis-ops-docker-helm into master 2018-10-01 12:31:22 +02:00
d4f2dadb06 chore (Jenkinsfile): Only deploy if branch is master 2018-09-29 19:38:49 +02:00
f440437862 chore (Jenkinsfile): package and deploy 2018-09-29 19:35:51 +02:00
71a0889ad8 feat (molgenis-jenkins): Add helm build pod. 2018-09-29 14:53:32 +02:00
f43b21bcc6 fix (molgenis-vault): Add maintainers and home and fix whitespace 2018-09-29 14:52:26 +02:00
367c63eaa5 fix (molgenis-nexus): Add maintainers and home and fix newline 2018-09-29 14:52:26 +02:00
55ad5b26fb fix (molgenis): Add maintainers and home and fix indentation 2018-09-29 14:52:26 +02:00
86d6dfb86b fix (molgenis-httpd): Add maintainers and home and newlines 2018-09-29 14:52:26 +02:00
b58a1d2042 fix (molgenis-jenkins): Fix chart 2018-09-29 14:52:26 +02:00
28bf98e6b2 fix (molgenis-opencpu): Add home and maintainers and fix whitespace 2018-09-29 14:52:26 +02:00
26d366f1a1 chore: Add Jenkinsfile 2018-09-29 14:52:11 +02:00
b64ee00cff refactor: Move charts to charts dir 2018-09-29 14:51:27 +02:00
2190ada376 Merge branch 'revert/nodeSelector' of P129679/molgenis-ops-docker-helm into master 2018-09-29 11:32:29 +02:00
f6905334e1 Revert "Merge branch 'chore/nodeSelector' of P129679/molgenis-ops-docker-helm into master"
This reverts commit f94e6da6e3, reversing
changes made to b73fd578ea.
2018-09-29 11:29:53 +02:00
f94e6da6e3 Merge branch 'chore/nodeSelector' of P129679/molgenis-ops-docker-helm into master 2018-09-28 20:50:18 +02:00
b73fd578ea Merge branch 'feat/add-slack' of p281392/molgenis-ops-docker-helm into master 2018-09-28 14:43:14 +02:00
486ab89b41 Merge branch 'updated-molgenis-documentation' of p281392/molgenis-ops-docker-helm into master 2018-09-28 14:41:49 +02:00
d8b8bd9a22 chore: add nodeSelectors to the charts 2018-09-28 14:26:42 +02:00
4e6349dacb update plugins to install Slack integration 2018-09-28 13:14:01 +02:00
4312e92860 added plugin to plugin range 2018-09-28 12:41:43 +02:00
02f7b7de1b updated to push to registry 2018-09-28 12:24:21 +02:00
2a1e9eacbb Merge branch 'master' of p281392/molgenis-ops-docker-helm into master 2018-09-28 08:34:19 +02:00
395292cf37 updated nexus chart to 0.4.2 2018-09-28 08:33:35 +02:00
62bba3fcdd Merge branch 'master' of p281392/molgenis-ops-docker-helm into master 2018-09-28 08:32:51 +02:00
beeb59bbb3 updated probes 2018-09-28 08:31:29 +02:00
4b5f7deb16 Merge branch 'master' of p281392/molgenis-ops-docker-helm into master 2018-09-28 08:20:44 +02:00
a3d8adcdde version bump to 0.4.1 2018-09-28 08:19:36 +02:00
d0c02147b8 Merge branch 'master' of p281392/molgenis-ops-docker-helm into master 2018-09-28 08:19:08 +02:00
24781ff4ff updated label in ingress 2018-09-28 08:17:50 +02:00
9435220896 Merge branch 'master' of p281392/molgenis-ops-docker-helm into master 2018-09-28 08:03:38 +02:00
e220cb736e bumped version 2018-09-28 08:01:51 +02:00
3e78e896a3 Merge branch 'add-init-container' of p281392/molgenis-ops-docker-helm into master 2018-09-27 16:31:42 +02:00
dfd8872e58 Merge branch 'chore/jenkins-gitsource' of P129679/molgenis-ops-docker-helm into master 2018-09-27 16:28:20 +02:00
a9571dbdcb Merge branch 'master' of https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm into chore/jenkins-gitsource 2018-09-27 16:20:41 +02:00
7048bf3655 updated backup documnentation 2018-09-27 16:19:58 +02:00
fd86066cee added init container and chown for nexus files 2018-09-27 16:17:35 +02:00
364fe53114 Merge branch 'chore/upgrade-jenkins' of P129679/molgenis-ops-docker-helm into master 2018-09-27 16:10:52 +02:00
4f9c9866cf Merge branch 'fix/75' of P129679/molgenis-ops-docker-helm into master 2018-09-27 16:10:03 +02:00
525847fdf5 fix(molgenis): Recreate pods upon upgrade
The default upgrade strategy would cause multiple instances of MOLGENIS to run on the same database.
Use Recreate strategy instead.

Fixes #75
2018-09-27 11:46:11 +02:00
3ae115c429 chore(molgenis-jenkins): Upgrade chart version 2018-09-27 11:26:25 +02:00
76b39cc236 chore(molgenis-jenkins): Upgrade plugins 2018-09-27 11:21:34 +02:00
0a328dd9d3 chore(molgenis-jenkins): Upgrade jenkins to 0.18.0 2018-09-27 11:01:41 +02:00
5760171c4b Merge branch 'use-molgenis-prod-in-helm' of p281392/molgenis-ops-docker-helm into master 2018-09-26 18:05:06 +02:00
e192f5819a Merge branch 'add-skip-build-config' of p281392/molgenis-ops-docker-helm into master 2018-09-26 18:00:56 +02:00
aaad66b40f updated pvc creation of postgres 2018-09-26 17:39:47 +02:00
b201117f9a updated pvc initialization 2018-09-26 17:27:45 +02:00
74a87892fb bumped patch version 2018-09-26 17:18:33 +02:00
6f995f45bd updated postgres instances and firewall configuration 2018-09-26 16:52:06 +02:00
35c7fd79af Merge branch 'implement-nfs-provisioning-nexus' of p281392/molgenis-ops-docker-helm into master 2018-09-26 16:34:34 +02:00
039c9993f6 fix for postgres volume mount, not available when persistence is not enabled 2018-09-26 16:30:33 +02:00
d4d9d5931d added embedded containers 2018-09-26 16:09:20 +02:00
f10b8d7ea8 updated production chart and removed preview chart 2018-09-26 16:04:22 +02:00
a4c4d19fe2 renamed service again 2018-09-20 16:54:34 +02:00
d0c9c91ff3 updated nexus to connect to nfs provisioning 2018-09-20 16:50:46 +02:00
7c9a7a143b add plugin for skipping build after release 2018-08-10 08:20:46 +02:00
102 changed files with 593 additions and 1716 deletions

.gitignore
@@ -1,2 +1,3 @@
.idea
*.iml
*.iml
target

Jenkinsfile (new file)
@@ -0,0 +1,40 @@
pipeline {
  agent {
    kubernetes {
      label 'helm'
    }
  }
  stages {
    stage('Test') {
      steps {
        container('chart-testing') {
          sh "chart_test.sh --no-install --all"
        }
      }
    }
    stage('Package') {
      steps {
        container('chart-testing') {
          sh 'mkdir target'
          sh 'for dir in charts/*; do helm package --destination target "$dir"; done'
        }
      }
    }
    stage('Deploy') {
      when {
        branch 'master'
      }
      steps {
        container('vault') {
          script {
            env.NEXUS_USER = sh(script: 'vault read -field=username secret/ops/account/nexus', returnStdout: true)
            env.NEXUS_PWD = sh(script: 'vault read -field=password secret/ops/account/nexus', returnStdout: true)
          }
        }
        container('alpine') {
          sh 'set +x; for chart in target/*; do curl -L -u $NEXUS_USER:$NEXUS_PWD http://registry.molgenis.org/repository/helm/ --upload-file "$chart"; done'
        }
      }
    }
  }
}
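
For reference, the Deploy stage above can be reproduced by hand when a chart has to be pushed outside of Jenkins. A minimal sketch, assuming the same Vault path and Nexus endpoint as in the Jenkinsfile and that you are already authenticated against Vault:

```bash
# Read the Nexus credentials from the same Vault path the pipeline uses.
NEXUS_USER=$(vault read -field=username secret/ops/account/nexus)
NEXUS_PWD=$(vault read -field=password secret/ops/account/nexus)

# Package every chart in the repository into ./target.
mkdir -p target
for dir in charts/*; do helm package --destination target "$dir"; done

# Upload each package to the MOLGENIS helm repository on Nexus.
for chart in target/*; do
  curl -L -u "$NEXUS_USER:$NEXUS_PWD" http://registry.molgenis.org/repository/helm/ --upload-file "$chart"
done
```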


@@ -104,6 +104,7 @@ This repository also serves as a catalogue for Rancher. We have several apps
- [Jenkins](molgenis-jenkins/README.md)
- [NEXUS](molgenis-nexus/README.md)
- [HTTPD](molgenis-httpd/README.md)
- [MOLGENIS](molgenis/README.md)
- [MOLGENIS preview](molgenis-preview/README.md)
- [MOLGENIS vault](molgenis-vault/README.md)
@@ -122,6 +123,26 @@ All you need to know to easily develop and deploy helm charts
Run it in the root of the project, where the Chart.yaml is located.
It installs a release of a kubernetes stack. You can also store this release as an artifact in a helm repository.
- ```helm package .```
Creates a package which can be uploaded to the molgenis helm repository.
- ```helm publish```
You still have to create an ```index.yaml``` for the chart. You can do this by executing: ```helm repo index #directory name of helm chart#```
Then you can upload it by executing (see the combined sketch after this list):
- ```curl -v --user #username#:#password# --upload-file index.yaml https://registry.molgenis.org/repository/helm/#chart name#/index.yml```
- ```curl -v --user #username#:#password# --upload-file #chart name#-#version#.tgz https://registry.molgenis.org/repository/helm/#chart name#/#chart name#-#version#.tgz```
Now you have to add the repository locally to use it in your ```requirements.yaml```.
- ```helm repo add #repository name# https://registry.molgenis.org/repository/helm/molgenis```
- ```helm dep build```
Builds the dependencies of the helm chart (creates a ```charts``` directory and installs the dependency charts into it).
- ```helm list```
Lists all installed releases
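
The steps above chain together as follows. A sketch for a single chart; `#username#` and `#password#` stay placeholders, and the chart name and version (here the molgenis chart at 0.4.3) are only examples:

```bash
# Package the chart from its root directory (where Chart.yaml lives).
helm package .

# Create or refresh the index.yaml that describes the packaged chart.
helm repo index .

# Upload the index and the package to the MOLGENIS helm repository.
curl -v --user '#username#:#password#' --upload-file index.yaml \
  https://registry.molgenis.org/repository/helm/molgenis/index.yml
curl -v --user '#username#:#password#' --upload-file molgenis-0.4.3.tgz \
  https://registry.molgenis.org/repository/helm/molgenis/molgenis-0.4.3.tgz

# Add the repository locally so helm dep build can resolve requirements.yaml.
helm repo add molgenis https://registry.molgenis.org/repository/helm/molgenis
helm dep build
```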


@ -5,4 +5,8 @@ name: molgenis-httpd
version: 0.1.0
sources:
- https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-httpd/catalogIcon-molgenis-httpd.svg
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-httpd/catalogIcon-molgenis-httpd.svg
home: http://httpd.apache.org
maintainers:
- name: sidohaakma
- name: fdlk


@ -48,4 +48,4 @@ nodeSelector: {}
tolerations: []
affinity: {}
affinity: {}


@ -1,8 +1,11 @@
name: molgenis-jenkins
home: https://jenkins.io/
version: 0.7.0
appVersion: 2.121
version: 0.8.1
appVersion: 2.138.1
description: Molgenis installation for the jenkins chart.
sources:
- https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-jenkins/catalogIcon-molgenis-jenkins.svg
maintainers:
- name: fdlk
- name: sidohaakma


@ -75,6 +75,10 @@ Token used by Jenkins to authenticate on the [RuG Webhosting Gogs](https://git.w
| `secret.gogs.user` | username for the account | `p281392` |
| `secret.gogs.token` | token for the account | `xxxx` |
#### Slack
The Slack integration is done mostly in the Jenkinsfile of each project. It is sufficient to only add the plugin to the Jenkins configuration in Helm.
#### Legacy:
##### Docker Hub



@ -1,6 +1,6 @@
dependencies:
- name: jenkins
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.16.4
version: 0.18.0
digest: sha256:39f694515489598fa545c9a5a4f1347749e8f2a8d7fae6ccae3e2acae1564685
generated: 2018-06-27T14:36:23.172954738+02:00
generated: 2018-09-27T11:00:15.795416984+02:00


@@ -1,3 +1,5 @@
# Helm in Jenkins
To be able to run helm inside a Jenkins pod, you'll need to:
* create a role in the namespace where Tiller is installed
* bind that role to the service account that the Jenkins pods run as (see the sketch below)
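
One possible shape of those two steps, assuming Tiller runs in `kube-system` and the Jenkins agent pods use the `default` service account in a `jenkins` namespace (both assumptions, adjust to your cluster):

```bash
# The helm client reaches Tiller over a port-forward, so the Jenkins service
# account needs to look up and port-forward to pods in the Tiller namespace.
kubectl -n kube-system create role helm-client \
  --verb=get,list --resource=pods
kubectl -n kube-system create role helm-portforward \
  --verb=create --resource=pods/portforward

# Bind both roles to the service account the Jenkins agent pods run as.
kubectl -n kube-system create rolebinding helm-client \
  --role=helm-client --serviceaccount=jenkins:default
kubectl -n kube-system create rolebinding helm-portforward \
  --role=helm-portforward --serviceaccount=jenkins:default
```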


@ -3,17 +3,18 @@ jenkins:
HostName: jenkins.molgenis.org
ServiceType: ClusterIP
InstallPlugins:
- kubernetes:1.12.0
- kubernetes:1.12.6
- workflow-aggregator:2.5
- workflow-job:2.21
- workflow-job:2.25
- credentials-binding:1.16
- git:3.9.1
- github-branch-source:2.3.6
- kubernetes-credentials-provider:0.9
- blueocean:1.6.2
- kubernetes-credentials-provider:0.10
- blueocean:1.8.3
- github-oauth:0.29
- gogs-webhook:1.0.14
- sauce-ondemand:1.176
- github-scm-trait-commit-skip:0.1.1
- slack:2.3
Security:
UseGitHub: false
GitHub:
@ -82,6 +83,7 @@ jenkins:
<strategyId>1</strategyId>
<trust class="org.jenkinsci.plugins.github_branch_source.ForkPullRequestDiscoveryTrait$TrustPermission"/>
</org.jenkinsci.plugins.github__branch__source.ForkPullRequestDiscoveryTrait>
<org.jenkinsci.plugins.scm__filter.GitHubCommitSkipTrait plugin="github-scm-trait-commit-skip@0.1.1"/>
<jenkins.plugins.git.traits.LocalBranchTrait plugin="git@3.9.1">
<extension class="hudson.plugins.git.extensions.impl.LocalBranch">
<localBranch>**</localBranch>
@ -581,8 +583,42 @@ jenkins:
cpu: "1"
memory: "512Mi"
NodeSelector: {}
helm:
Label: helm
NodeUsageMode: EXCLUSIVE
Containers:
chart-testing:
Image: "quay.io/helmpack/chart-testing"
ImageTag: v1.1.0
Command: cat
WorkingDir: /home/jenkins
TTY: true
alpine:
Image: "spotify/alpine"
Command: cat
WorkingDir: /home/jenkins
TTY: true
vault:
Image: "vault"
Command: cat
WorkingDir: /home/jenkins
TTY: true
EnvVars:
- type: Secret
key: VAULT_TOKEN
secretName: molgenis-pipeline-vault-secret
secretKey: token
- type: Secret
key: VAULT_SKIP_VERIFY
secretName: molgenis-pipeline-vault-secret
secretKey: skipVerify
- type: Secret
key: VAULT_ADDR
secretName: molgenis-pipeline-vault-secret
secretKey: addr
NodeSelector: {}
#secret contains configuration for the kubernetes secrets that jenkins can access
# secret contains configuration for the kubernetes secrets that jenkins can access
secret:
# vault configures the vault secret
vault:
@ -604,4 +640,4 @@ secret:
# dockerHubPassword contains password for hub.docker.com
dockerHub:
user: molgenisci
password: xxxx
password: xxxx
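
The three `EnvVars` of the vault container reference a `molgenis-pipeline-vault-secret` that is not part of this diff. A sketch of how such a secret could be created, assuming the Jenkins master runs in a `jenkins` namespace and using the vault address that appears elsewhere in these charts; the token value is a placeholder:

```bash
# The secret keys must match the secretKey fields above: token, skipVerify, addr.
kubectl -n jenkins create secret generic molgenis-pipeline-vault-secret \
  --from-literal=token='s.xxxxxxxx' \
  --from-literal=skipVerify='true' \
  --from-literal=addr='https://vault.vault-operator:8200'
```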


@ -2,7 +2,11 @@ apiVersion: v1
appVersion: "1.0"
description: Nexus stack for MOLGENIS
name: molgenis-nexus
version: 0.3.0
version: 0.4.2
sources:
- https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-nexus/catalogIcon-molgenis-nexus.svg
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-nexus/catalogIcon-molgenis-nexus.svg
home: https://www.sonatype.com/nexus-repository-oss
maintainers:
- name: sidohaakma
- name: fdlk


@ -0,0 +1,69 @@
# MOLGENIS - NEXUS Helm Chart
NEXUS repository for kubernetes to deploy on a kubernetes cluster with NFS-share
## Chart Details
This chart will deploy:
- 1 NEXUS-nfs initialization container
We need this container to avoid permission issues on the NEXUS docker
- 1 NEXUS container
- 1 MOLGENIS-httpd container (to proxy the registry and docker to one domain)
## Backup restore
There are two steps in restoring the NEXUS.
- Database
- Blobstore
### Restore the database
Go to the commandline:
```bash
kubectl get pv
```
```bash
| NAME | CAPACITY | ACCESS | MODES | RECLAIM | POLICY | STATUS | CLAIM | STORAGECLASS | REASON | AGE |
| ---- | -------- | ------ | ----- | ------- | ------ | ------ | ----- | ------------ | ------ | --- |
| pvc-45988f55-900f-11e8-a0b4-005056a51744 | 30G | RWX | | Retain | Bound | molgenis-nexus/molgenis-nfs-claim | nfs-provisioner-retain | | | 33d |
| pvc-3984723d-220f-14e8-a98a-skjhf88823kk | 30G | RWO | | Delete | Bound | molgenis-test/molgenis-nfs-claim | nfs-provisioner | | | 33d |
```
The persistent volume is the one in the molgenis-nexus namespace.
Go to the NFS-provisioner, change into the path of that persistent volume, and copy the most recent backup files into the restore directory:
```bash
ls -t | head -7 | xargs -I{} cp {} ../restore-from-backup/
```
### Restore the blobstore
You can copy the ```blobs``` directory to the target persistent volume under ```/blobs```.
You can now bring the NEXUS back up.
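
The section above assumes you are already on the NFS-provisioner; a sketch of one way to get there with kubectl. The pod name, namespace and export path are assumptions, and the PV name is the one from the table above:

```bash
# Find the volume that backs the molgenis-nexus claim.
kubectl get pv | grep molgenis-nexus

# Open a shell on the NFS provisioner pod that exports the volumes,
# then change into the directory of that volume and run the copy
# commands from the section above.
kubectl -n nfs-provisioner exec -it nfs-provisioner-0 -- sh
cd /export/pvc-45988f55-900f-11e8-a0b4-005056a51744
```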
## Installing the Chart
You can test and install the chart by executing:
```helm lint .```
to check that your helm chart syntax is right,
```helm install . --dry-run --debug```
to test whether your helm chart renders correctly,
```helm install .```
to deploy it on the cluster, and
```curl -L -u xxxx:xxxx http://registry.molgenis.org/repository/helm/ --upload-file molgenis-x.x.x.tgz```
to push it to the registry.


@ -19,18 +19,14 @@ spec:
app: {{ .Values.nexus.name }}
creationTimestamp: null
spec:
volumes:
- name: {{ .Values.persistence.name }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.name }}
restartPolicy: {{ .Values.nexus.restartPolicy }}
initContainers:
- name: volume-mount-nexus
- name: nexus-nfs
image: busybox
command: ["sh", "-c", "chown -R 200:200 {{ .Values.persistence.mountPath }}"]
command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
volumeMounts:
- name: {{ .Values.persistence.name }}
mountPath: "{{ .Values.persistence.mountPath }}"
- name: molgenis-nexus-nfs
mountPath: "/nexus-data"
containers:
- name: {{ .Values.nexus.name }}
image: "{{ .Values.nexus.image.repository }}:{{ .Values.nexus.image.tag }}"
@ -39,6 +35,31 @@ spec:
- containerPort: {{ .Values.nexus.port.ui }}
- containerPort: {{ .Values.nexus.port.docker }}
volumeMounts:
- name: {{ .Values.persistence.name }}
mountPath: "/nexus-data"
- name: molgenis-nexus-nfs
mountPath: /nexus-data
livenessProbe:
httpGet:
path: /
port: {{ .Values.nexus.port.ui }}
initialDelaySeconds: 120
periodSeconds: 20
failureThreshold: 15
successThreshold: 1
readinessProbe:
httpGet:
path: /
port: {{ .Values.nexus.port.ui }}
initialDelaySeconds: 120
periodSeconds: 20
failureThreshold: 15
successThreshold: 1
volumes:
- name: molgenis-nexus-nfs
persistentVolumeClaim:
claimName: {{ .Values.persistence.claim }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
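
A quick check that the renamed `nexus-nfs` init container actually fixed the ownership of the data volume after a rollout; the `molgenis-nexus` namespace and the `app=nexus` label are assumptions based on this chart's values:

```bash
# Verify that /nexus-data is owned by UID/GID 200, the ownership the
# nexus-nfs init container applies before the nexus container starts.
POD=$(kubectl -n molgenis-nexus get pods -l app=nexus -o jsonpath='{.items[0].metadata.name}')
kubectl -n molgenis-nexus exec "$POD" -c nexus -- ls -ldn /nexus-data
```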


@ -0,0 +1,55 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: {{ .Values.nexusProxy.name }}
labels:
app: {{ .Values.nexusProxy.name }}
environment: {{ .Values.environment }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: {{ .Values.nexusProxy.strategy.type }}
selector:
matchLabels:
app: {{ .Values.nexusProxy.selector }}
template:
metadata:
labels:
app: {{ .Values.nexusProxy.name }}
creationTimestamp: null
spec:
restartPolicy: {{ .Values.nexusProxy.restartPolicy }}
containers:
- name: {{ .Values.nexusProxy.name }}
image: "{{ .Values.nexusProxy.image.repository }}:{{ .Values.nexusProxy.image.tag }}"
imagePullPolicy: {{ .Values.nexusProxy.image.pullPolicy }}
env:
- name: PROXY_SERVICE
value: "{{ .Values.nexus.name }}:{{ .Values.nexus.port.ui }},{{ .Values.nexus.name }}:{{ .Values.nexus.port.docker }}:{{ .Values.nexus.path.dockerV2 }}"
- name: SERVER_NAME
value: {{ .Values.nexusProxy.hostname }}
ports:
- containerPort: {{ .Values.nexusProxy.port }}
resources: {}
livenessProbe:
httpGet:
path: /
port: {{ .Values.nexusProxy.port }}
initialDelaySeconds: 1500
periodSeconds: 20
failureThreshold: 5
successThreshold: 1
readinessProbe:
httpGet:
path: /
port: {{ .Values.nexusProxy.port }}
initialDelaySeconds: 150
periodSeconds: 20
failureThreshold: 15
successThreshold: 1
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}


@ -5,7 +5,7 @@ kind: Ingress
metadata:
name: "{{ $.Release.Name }}-ingress"
labels:
app: httpd
app: {{ $.Values.nexusProxy.name }}
chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"
release: "{{ $.Release.Name }}"
heritage: "{{ $.Release.Service }}"
@ -25,8 +25,8 @@ spec:
paths:
- path: {{ default "/" .path }}
backend:
serviceName: httpd
servicePort: 80
serviceName: {{ $.Values.nexusProxy.name }}
servicePort: {{ $.Values.nexusProxy.port }}
{{- if .tls }}
tls:
- hosts:


@@ -0,0 +1,15 @@
{{- if .Values.persistence.enabled -}}
apiVersion: extensions/v1beta1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Values.persistence.claim }}
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner-retain"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
{{- end }}


@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.nexusProxy.name }}
  labels:
    app: {{ .Values.nexusProxy.name }}
spec:
  type: {{ .Values.nexusProxy.service.type }}
  ports:
  - name: {{ .Values.nexusProxy.name }}
    port: {{ .Values.nexusProxy.port }}
  selector:
    app: {{ .Values.nexusProxy.selector }}


@ -0,0 +1,65 @@
# Default values for nexus.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
environment: production
nexus:
name: nexus
strategy:
type: Recreate
selector: nexus
restartPolicy: Always
image:
repository: molgenis/nexus3
tag: latest
pullPolicy: Always
port:
docker: 5000
ui: 8081
path:
dockerV2: v2
service:
type: ClusterIP
nexusProxy:
name: nexus-proxy
hostname: registry.molgenis.org
strategy:
type: Recreate
selector: nexus-proxy
restartPolicy: Always
image:
repository: molgenis/httpd
tag: latest
pullPolicy: Always
port: 80
service:
type: LoadBalancer
ingress:
enabled: true
annotations: {}
path: /
hosts:
- name: registry.molgenis.org
tls: []
persistence:
enabled: true
claim: molgenis-nexus
size: 500Gi
resources: {}
nodeSelector: {
deployPod: "true"
}
tolerations: []
affinity: {}
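
An install of this chart with a few of the defaults above overridden; a sketch using helm 2 syntax, where the release name, namespace, hostname and size are arbitrary examples:

```bash
helm install . \
  --name molgenis-nexus \
  --namespace molgenis-nexus \
  --set nexusProxy.hostname=registry.example.org \
  --set persistence.size=100Gi
```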


@ -5,4 +5,8 @@ name: molgenis-opencpu
version: 0.1.1
sources:
- https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-opencpu/catalogIcon-molgenis-opencpu.svg
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-opencpu/catalogIcon-molgenis-opencpu.svg
home: https://www.opencpu.org
maintainers:
- name: sidohaakma
- name: fdlk


@ -8,7 +8,7 @@ questions:
description: "Enable ingress"
type: boolean
required: true
group: "Loadbalancing"
group: "Load balancing"
- variable: opencpu.image.repository
label: Registry
default: "registry.hub.docker.com"


@ -4,3 +4,7 @@ description: MOLGENIS vault
name: molgenis-vault
version: 0.1.1
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-vault/catalogIcon-molgenis-vault.svg
home: https://github.com/coreos/vault-operator
maintainers:
- name: fdlk
- name: sidohaakma


@ -70,10 +70,10 @@ ui:
# limits:
# cpu: 100m
# memory: 128Mi
#requests:
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
vault:
auth: GITHUB
url: https://vault.vault-operator:8200
url: https://vault.vault-operator:8200


@ -1,8 +1,12 @@
apiVersion: v1
appVersion: "1.0"
description: MOLGENIS - helm stack (in BETA)
name: molgenis-beta
version: 0.3.0
name: molgenis
version: 0.4.3
sources:
- https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis/catalogIcon-molgenis.svg
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis/catalogIcon-molgenis.svg
home: http://molgenis.org
maintainers:
- name: sidohaakma
- name: fdlk


@ -5,6 +5,8 @@ This chart is used for acceptance and production use cases.
This chart spins up a MOLGENIS instance with HTTPD. The created containers are:
- MOLGENIS
- ElasticSearch
- PostgreSQL **(optional)**
## Provisioning
You can choose from which registry you want to pull. There are 2 registries:
@ -21,6 +23,19 @@ The three properties you need to specify are:
Besides determining which image you want to pull, you also have to set an administrator password. You can do this by specifying the following property.
- ```molgenis.adminPassword```
### Firewall
The firewall is defined at service level; you can specify this attribute in the values:
- ```molgenis.firewall.enabled``` default 'false'
If set to 'true' the following options are available. One of the options below has to be set.
- ```molgenis.firewall.umcg.enabled``` default 'false'
- ```molgenis.firewall.cluster.enabled``` default 'false'
UMCG = only available within the UMCG network.
Cluster = only available within the GCC cluster environment.
## Services
When you start MOLGENIS you need:
- an elasticsearch instance (5.5.6)
@@ -82,15 +97,16 @@ Select the resources you need depending on the customer you need to serve.
## Persistence
You can enable persistence on your MOLGENIS stack by specifying the following property.
- ```persistence.enabled```
- ```persistence.enabled``` default 'true'
You can also choose to retain the volume of the NFS.
- ```persistence.retain```
- ```persistence.retain``` default 'false'
The size and claim name can be specified per service. The following services can be persisted.
- MOLGENIS
- ElasticSearch
- PostgreSQL **(optional)**
MOLGENIS persistent properties.
- ```molgenis.persistence.claim```
@ -100,6 +116,9 @@ ElasticSearch persistent properties.
- ```elasticsearch.persistence.claim```
- ```elasticsearch.persistence.size```
PostgreSQL persistent properties.
- ```postgres.persistence.claim```
- ```postgres.persistence.size```
### Resolve your persistent volume
If you do not know which volume is attached to your MOLGENIS instance, you can resolve this by executing:
@ -116,7 +135,4 @@ You can now view the persistent volume claims and the attached volumes.
| pvc-3984723d-220f-14e8-a98a-skjhf88823kk | 30G | RWO | | Delete | Bound | molgenis-test/molgenis-nfs-claim | nfs-provisioner | | | 33d |
You see the ```molgenis-test/molgenis-nfs-claim``` is bound to the volume: ```pvc-3984723d-220f-14e8-a98a-skjhf88823kk```.
When you want to view the data in this volume, go to the nfs-provisioning pod and open a shell. Go to the ```export``` directory and look up the directory ```pvc-3984723d-220f-14e8-a98a-skjhf88823kk```.
## Firewall
The firewall is defined at cluster level. This chart does not facilitate firewall configuration.
When you want to view the data in this volume, go to the nfs-provisioning pod and open a shell. Go to the ```export``` directory and look up the directory ```pvc-3984723d-220f-14e8-a98a-skjhf88823kk```.
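
The options described in this README combine into a single install command. A sketch in helm 2 syntax with placeholder values; note that the README refers to ```molgenis.firewall.*``` while the chart's values.yaml nests the firewall under ```service.firewall```, so check the keys against your chart version:

```bash
helm install . \
  --name molgenis \
  --namespace molgenis-test \
  --set molgenis.adminPassword=xxxx \
  --set molgenis.services.postgres.embedded=true \
  --set service.firewall.enabled=true \
  --set service.firewall.kind=umcg \
  --set persistence.enabled=true \
  --set postgres.persistence.size=5Gi
```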



@ -8,7 +8,7 @@ questions:
description: "Hostname for your stack"
type: hostname
required: true
group: "Load Balancing"
group: "Load balancing"
- variable: molgenis.image.repository
label: Registry
default: "registry.hub.docker.com"
@ -33,6 +33,24 @@ questions:
type: password
required: true
group: "Provisioning"
- variable: service.firewall.enabled
label: Firewall enabled
default: false
description: "Firewall enabled (can be cluster or UMCG scoped)"
type: boolean
required: true
group: "Provisioning"
show_subquestion_if: true
subquestions:
- variable: service.firewall.kind
default: "umcg"
description: "Firewall kind. This can be 'umcg' or 'cluster' environment"
type: enum
required: true
options:
- umcg
- cluster
label: Firewall kind
- variable: molgenis.services.opencpu.host
label: OpenCPU cluster
default: "localhost"
@ -40,34 +58,43 @@ questions:
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.host
label: Postgres cluster location
default: "postgresql.molgenis-postgresql.svc"
description: "Set the location of the postgres cluster"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.scheme
label: Database scheme
default: "molgenis"
description: "Set the database scheme"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.user
label: Database username
default: "molgenis"
description: "Set user of the database scheme"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.password
label: Database password
default: "molgenis"
description: "Set the password of the database scheme"
type: string
- variable: molgenis.services.postgres.embedded
label: Postgres embedded
default: false
description: "Do you want an embedded postgres"
type: boolean
required: true
group: "Services"
show_subquestion_if: false
subquestions:
- variable: molgenis.services.postgres.host
label: Postgres cluster location
default: ""
description: "Set the location of the postgres cluster. This can be localhost when the postgres is enabled else you need to specify a cluster location if you do not want a embedded postgres instance)"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.scheme
label: Database scheme
default: "molgenis"
description: "Set the database scheme"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.user
label: Database username
default: "molgenis"
description: "Set user of the database scheme"
type: string
required: true
group: "Services"
- variable: molgenis.services.postgres.password
label: Database password
default: "molgenis"
description: "Set the password of the database scheme"
type: string
required: true
group: "Services"
- variable: molgenis.resources.limits.memory
label: Container memory limit
default: 1250Mi
@ -98,7 +125,7 @@ questions:
- "2g"
group: "Resources"
- variable: persistence.enabled
default: false
default: true
description: "Do you want to use persistence"
type: boolean
required: true
@ -112,20 +139,29 @@ questions:
type: boolean
label: Retain volume
- variable: molgenis.persistence.size
default: "30Gi"
default: "5Gi"
description: "Size of MOLGENIS filestore (PostgreSQL and ElasticSearch excluded)"
type: enum
options:
- "30Gi"
- "50Gi"
- "100Gi"
- "5Gi"
- "10Gi"
- "20Gi"
label: Size MOLGENIS filestore
- variable: elasticsearch.persistence.size
default: "50Gi"
default: "5Gi"
description: "Size of ElasticSearch data (directory that is persist: /usr/share/elasticsearch/data)"
type: enum
options:
- "5Gi"
- "10Gi"
- "50Gi"
- "100Gi"
- "200Gi"
label: Size for ElasticSearch data
label: Size for ElasticSearch data
- variable: postgres.persistence.size
default: "5Gi"
description: "Size of PostgreSQL data (directory that is persist: /var/lib/postgresql/data/pgdata)"
type: enum
options:
- "5Gi"
- "10Gi"
- "50Gi"
label: Size for PostgreSQL data


@ -17,6 +17,8 @@ spec:
matchLabels:
app: {{ template "molgenis.name" . }}
release: {{ .Release.Name }}
strategy:
type: Recreate
template:
metadata:
labels:
@ -97,11 +99,33 @@ spec:
- name: elasticsearch-nfs
mountPath: /usr/share/elasticsearch/data
{{- end }}
resources:
{{ toYaml .resources | indent 12 }}
{{- end }}
- name: postgres
{{- with .Values.postgres }}
image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.pullPolicy }}
env:
- name: POSTGRES_USER
value: {{ $.Values.molgenis.services.postgres.user }}
- name: POSTGRES_PASSWORD
value: {{ $.Values.molgenis.services.postgres.password }}
- name: POSTGRES_DB
value: {{ $.Values.molgenis.services.postgres.scheme }}
ports:
- containerPort: 5432
resources:
{{ toYaml .resources | indent 12 }}
{{- if $.Values.persistence.enabled }}
volumeMounts:
- name: postgres-nfs
mountPath: /var/lib/postgresql/data
{{- end }}
{{- end }}
{{- if .Values.persistence.enabled }}
volumes:
- name: molgenis-nfs
@ -110,6 +134,9 @@ spec:
- name: elasticsearch-nfs
persistentVolumeClaim:
claimName: {{ .Values.elasticsearch.persistence.claim }}
- name: postgres-nfs
persistentVolumeClaim:
claimName: {{ .Values.postgres.persistence.claim }}
{{- end }}
{{- with .Values.nodeSelector }}

View File

@ -4,7 +4,7 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
name: "{{ $.Release.Name }}-ingress"
labels:
app: {{ template "molgenis.name" . }}
chart: {{ template "molgenis.chart" . }}
@ -33,6 +33,6 @@ spec:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: 8080
servicePort: {{ $.Values.service.port }}
{{- end }}
{{- end }}


@@ -0,0 +1,21 @@
{{- if .Values.molgenis.services.postgres.embedded }}
{{- if .Values.persistence.enabled }}
apiVersion: extensions/v1beta1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Values.postgres.persistence.claim }}
  annotations:
  {{- if .Values.persistence.retain }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner-retain"
  {{- else }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner"
  {{- end }}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.postgres.persistence.size }}
{{- end }}
{{- end }}


@ -9,6 +9,18 @@ metadata:
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
{{- if .Values.service.firewall.enabled }}
loadBalancerSourceRanges:
{{- if .Values.service.firewall.kind eq "umcg" }}
{{- range $index, $rule := .Values.service.firewall.umcg.rules }}
- {{ $rule }}
{{- end }}
{{- else }}
{{- range $index, $rule := .Values.service.firewall.cluster.rules }}
- {{ $rule }}
{{- end }}
{{- end }}
{{- end }}
ports:
- name: molgenis
port: {{ .Values.service.port }}
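
Conditional blocks like this one are easiest to check with a dry render before deploying; Go templates expect the prefix form ```eq .Values.service.firewall.kind "umcg"``` inside the ```if```, and ```helm template``` will fail on the expression if it is not written that way. A sketch:

```bash
# Render the chart locally and inspect the whitelist on the service.
helm template . \
  --set service.firewall.enabled=true \
  --set service.firewall.kind=cluster \
  | grep -A 3 loadBalancerSourceRanges
```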


@ -4,6 +4,15 @@ replicaCount: 1
service:
type: LoadBalancer
firewall:
enabled: false
kind: "umcg"
umcg:
rules:
- 127.0.0.1/32
cluster:
rules:
- 127.0.0.1/32
port: 8080
ingress:
@ -33,7 +42,7 @@ molgenis:
memory: 1250Mi
persistence:
claim: molgenis-nfs-claim
size: 30Gi
size: 5Gi
services:
opencpu:
host: localhost
@ -41,6 +50,7 @@ molgenis:
transportAddresses: localhost:9300
clusterName: molgenis
postgres:
embedded: false
host: localhost
scheme: molgenis
user: molgenis
@ -54,18 +64,34 @@ elasticsearch:
javaOpts: "-Xms1g -Xmx1g"
clusterName: molgenis
resources:
limits:
cpu: 2
memory: 3Gi
requests:
cpu: 100m
memory: 1Gi
limits:
cpu: 2
memory: 3Gi
requests:
cpu: 100m
memory: 1Gi
persistence:
claim: elasticsearch-nfs-claim
size: 50Gi
size: 5Gi
postgres:
image:
repository: postgres
tag: 9.6-alpine
pullPolicy: IfNotPresent
resources:
limits:
cpu: 1
memory: 250Mi
requests:
cpu: 100m
memory: 250Mi
persistence:
claim: postgres-nfs-claim
size: 5Gi
persistence:
enabled: false
enabled: true
retain: false
nodeSelector: {


@ -1,28 +0,0 @@
# MOLGENIS - NEXUS Helm Chart
NEXUS repository for kubernetes to deploy on a kubernetes cluster with NFS-share
## Chart Details
This chart will deploy:
- 1 NEXUS container
- 1 MOLGENIS-httpd container (to proxy the registry and docker to one domain)
## Installing the Chart
You can test and install the chart by executing:
```helm lint .```
to check that your helm chart syntax is right,
```helm install . --dry-run --debug```
to test whether your helm chart renders correctly, and
```helm install .```
to deploy it on the cluster.


@ -1,34 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: {{ .Values.httpd.name }}
labels:
app: {{ .Values.httpd.name }}
environment: {{ .Values.environment }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: {{ .Values.httpd.strategy.type }}
selector:
matchLabels:
app: {{ .Values.httpd.selector }}
template:
metadata:
labels:
app: {{ .Values.httpd.name }}
creationTimestamp: null
spec:
restartPolicy: {{ .Values.httpd.restartPolicy }}
containers:
- name: {{ .Values.httpd.name }}
image: "{{ .Values.httpd.image.repository }}:{{ .Values.httpd.image.tag }}"
imagePullPolicy: {{ .Values.httpd.image.pullPolicy }}
env:
- name: PROXY_SERVICE
value: "{{ .Values.nexus.name }}:{{ .Values.nexus.port.ui }},{{ .Values.nexus.name }}:{{ .Values.nexus.port.docker }}:{{ .Values.nexus.path.dockerV2 }}"
- name: SERVER_NAME
value: {{ .Values.httpd.hostname }}
ports:
- containerPort: {{ .Values.httpd.port }}
resources: {}


@ -1,13 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.httpd.name }}
labels:
app: {{ .Values.httpd.name }}
spec:
type: {{ .Values.httpd.service.type }}
ports:
- name: {{ .Values.httpd.name }}
port: {{ .Values.httpd.port }}
selector:
app: {{ .Values.httpd.selector }}


@ -1,16 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .Values.persistence.name }}
labels:
name: nfs2
spec:
storageClassName: {{ .Values.persistence.storageClass }}
capacity:
storage: {{ .Values.persistence.size }}
accessModes:
- {{ .Values.persistence.accessMode }}
persistentVolumeReclaimPolicy: {{ .Values.persistence.reclaimPolicy }}
nfs:
server: {{ .Values.persistence.server }}
path: {{ .Values.persistence.mountPath }}


@ -1,11 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Values.persistence.name }}
spec:
storageClassName: {{ .Values.persistence.storageClass }}
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.size }}


@ -1,82 +0,0 @@
# Default values for nexus.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
environment: production
nexus:
name: nexus
strategy:
type: Recreate
selector: nexus
restartPolicy: Always
image:
repository: sonatype/nexus3
tag: latest
pullPolicy: Always
port:
docker: 5000
ui: 8081
path:
dockerV2: v2
service:
type: ClusterIP
httpd:
name: httpd
hostname: registry.molgenis.org
strategy:
type: Recreate
selector: httpd
restartPolicy: Always
image:
repository: registry.webhosting.rug.nl/molgenis/httpd
tag: lts
pullPolicy: Always
port: 80
service:
type: LoadBalancer
ingress:
enabled: true
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- name: registry.molgenis.org
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
persistence:
name: molgenis-nexus-data
storageClass: nfs-class
size: 30G
reclaimPolicy: Retain
server: 192.168.64.12
accessMode: ReadWriteMany
mountPath: /gcc/molgenis/nexus
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}


@ -1,8 +0,0 @@
apiVersion: v1
appVersion: "1.0"
description: MOLGENIS - helm stack for testing purposes
name: molgenis-preview
version: 0.2.0
sources:
- https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/molgenis/molgenis-ops-docker-helm/raw/master/molgenis-preview/catalogIcon-molgenis.svg


@ -1,16 +0,0 @@
# MOLGENIS preview
This chart is used for testing purposes. It can be used by data managers or developers to test MOLGENIS (e.g. integration testing).
## Containers
This chart spins up a complete stack to run MOLGENIS. The created containers are:
- MOLGENIS
- PostgreSQL
- Elasticsearch
- OpenCPU
## Rancher
You can spin up a test instance by navigating to https://rancher.molgenis.org:7777 and login with your LDAP-account.
Go to the test-environment and click on "Launch". Search for MOLGENIS.


@ -1,61 +0,0 @@
categories:
- MOLGENIS
questions:
- variable: ingress.hosts[0].name
default: "test.molgenis.org"
description: "Hostname for your stack"
type: hostname
required: true
group: "Services and Load Balancing"
label: Hostname
- variable: molgenis.image.repository
default: "registry.hub.docker.com"
description: "Select a registry to pull from"
type: enum
options:
- "registry.hub.docker.com"
- "registry.molgenis.org"
required: true
group: "MOLGENIS - Version"
label: Registry
- variable: molgenis.image.tag
default: "stable"
description: "Select a MOLGENIS version (check the registry.molgenis.org or hub.docker.com for other tags)"
type: string
required: true
group: "MOLGENIS - Version"
label: Version
- variable: molgenis.resources.limits.cpu
default: 1
description: "CPU limit for this MOLGENIS instance"
type: enum
options:
- "1"
- "2"
- "3"
- "4"
required: true
group: "MOLGENIS - Resource limits"
label: CPU limit
- variable: molgenis.resources.limits.memory
default: 1250Mi
description: "Memory limit for this MOLGENIS instance"
type: enum
options:
- "1250Mi"
- "1500Mi"
- "2000Mi"
- "2500Mi"
required: true
group: "MOLGENIS - Resource limits"
label: Memory limit
- variable: molgenis.javaOpts
default: "-Xmx1g -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
description: "Java runtime options for the MOLGENIS instance"
type: enum
options:
- "-Xmx1g -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
- "-Xmx2g -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
group: "MOLGENIS - Resource limits"
label: Java memory options


@ -1,124 +0,0 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
name: {{ template "molgenis.fullname" . }}
labels:
app: {{ template "molgenis.name" . }}
chart: {{ template "molgenis.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "molgenis.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "molgenis.name" . }}
release: {{ .Release.Name }}
spec:
containers:
- name: molgenis
{{- with .Values.molgenis }}
image: "{{ .image.repository }}/{{ .image.name }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.pullPolicy }}
env:
- name: molgenis.home
value: /home/molgenis
- name: opencpu.uri.host
value: localhost
- name: elasticsearch.transport.addresses
value: localhost:9300
- name: elasticsearch.cluster.name
value: {{ $.Values.elasticsearch.clusterName }}
- name: db_uri
value: "jdbc:postgresql://localhost/{{ $.Values.postgres.db }}"
- name: db_user
value: {{ $.Values.postgres.user }}
- name: db_password
value: {{ $.Values.postgres.password }}
- name: admin.password
value: {{ .adminPassword }}
- name: CATALINA_OPTS
value: "{{ .javaOpts }}"
ports:
- containerPort: 8080
# livenessProbe:
# httpGet:
# path: /
# port: 8080
# readinessProbe:
# httpGet:
# path: /api/v2/version
# port: 8080
resources:
{{ toYaml .resources | indent 12 }}
{{- end }}
- name: elasticsearch
{{- with .Values.elasticsearch }}
image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.pullPolicy }}
env:
- name: cluster.name
value: {{ .clusterName }}
- name: bootstrap.memory_lock
value: "true"
- name: ES_JAVA_OPTS
value: "{{ .javaOpts }}"
- name: xpack.security.enabled
value: "false"
- name: discovery.type
value: single-node
ports:
- containerPort: 9200
- containerPort: 9300
resources:
{{ toYaml .resources | indent 12 }}
{{- end }}
- name: postgres
{{- with .Values.postgres }}
image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.pullPolicy }}
env:
- name: POSTGRES_USER
value: {{ .user }}
- name: POSTGRES_PASSWORD
value: {{ .password }}
- name: POSTGRES_DB
value: {{ .db }}
ports:
- containerPort: 5432
resources:
{{ toYaml .resources | indent 12 }}
{{- end }}
- name: opencpu
{{- with .Values.opencpu }}
image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.pullPolicy }}
ports:
- containerPort: 8004
resources:
{{ toYaml .resources | indent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}


@ -1,38 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "molgenis.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ template "molgenis.name" . }}
chart: {{ template "molgenis.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .name }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: 8080
{{- end }}
{{- end }}


@ -1,82 +0,0 @@
# Default values for molgenis.
replicaCount: 1
service:
type: LoadBalancer
port: 8080
ingress:
enabled: true
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
path: /
hosts:
- name: test.molgenis.org
tls: []
molgenis:
image:
repository: registry.molgenis.org
name: molgenis/molgenis-app
tag: 7.0.0-SNAPSHOT
pullPolicy: Always
adminPassword: admin
javaOpts: "-Xmx1g -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
resources:
limits:
cpu: 1
memory: 1250Mi
requests:
cpu: 200m
memory: 1Gi
postgres:
image:
repository: postgres
tag: 9.6-alpine
pullPolicy: IfNotPresent
user: molgenis
password: molgenis
db: molgenis
resources:
limits:
cpu: 1
memory: 250Mi
requests:
cpu: 100m
memory: 250Mi
elasticsearch:
image:
repository: docker.elastic.co/elasticsearch/elasticsearch
tag: 5.5.3
pullPolicy: IfNotPresent
javaOpts: "-Xms512m -Xmx512m"
clusterName: molgenis
resources:
limits:
cpu: 1
memory: 1500Mi
requests:
cpu: 100m
memory: 1Gi
opencpu:
image:
repository: molgenis/opencpu
tag: latest
pullPolicy: Always
resources:
limits:
cpu: 1
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
nodeSelector: {}
tolerations: []
affinity: {}


@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

File diff suppressed because it is too large.


@ -1,19 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "molgenis.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w {{ template "molgenis.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "molgenis.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "molgenis.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
{{- end }}

Some files were not shown because too many files have changed in this diff.