Helm Overview
Prerequisites:
- kubectl
- helm
The whole chart consists of several components:
- Finex
- FrontDEX
- Kong
- Realtime
- PgMeta
- Admin
- Config
- InfluxDB
- [Optional] NATS cluster
The Finex component starts the trading engine, the P2P node and the gRPC server, so the chart has only one deployment and a service for each of the servers launched by Finex. There are also hooks to prepare the database.
FrontDEX is the frontend for OpenDAX v4. The chart creates a deployment, a service and an ingress, so the frontend becomes available on *global.deploymentId*.*global.externalHostname*. There is also an option to start an almost identical component by setting mockDeployment to true; instead of behaving like the production frontend, it looks up services on localhost for local testing. If this option is enabled, the frontend is available on local-*global.deploymentId*.*global.externalHostname*.
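For example (a sketch reusing the release name, namespace and chart path from the deployment command shown in the troubleshooting section below), the mock frontend could be enabled with:

```bash
# Hypothetical values: "opendax" release, "odax" namespace, chart path from this repo.
helm upgrade opendax config/charts/opendax -n odax -i \
  --set frontdex.mockDeployment=true
```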
If you set *global.deploymentId*.*global.externalHostname* to http://main.opendax.app, the Kong API Gateway configuration is generated as follows:
- Kong: http://main.opendax.app
- FrontDEX: http://main.opendax.app
- Finex Engine: http://main.opendax.app/api/v2/finex
- Finex GRPC: http://main.opendax.app/api/v1-grpc/finex or grpc://main.opendax.app
- GoTrue: http://main.opendax.app/auth/v1
- Realtime: http://main.opendax.app/realtime/v1
- PostgREST: http://main.opendax.app/rest/v1
- Storage API: http://main.opendax.app/storage/v1
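As a rough smoke test of the generated routes (a sketch; it assumes DNS already points at the gateway, and exact status codes depend on each service's authentication):

```bash
# Any HTTP response (even 401/404) means Kong is routing the prefix;
# a connection error means DNS or the route is not set up.
curl -i http://main.opendax.app/auth/v1/health   # GoTrue health endpoint
curl -i http://main.opendax.app/rest/v1/         # PostgREST root
curl -i http://main.opendax.app/realtime/v1/     # Realtime prefix
```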
Optionally, you can enable NATS cluster deployment by setting nats.enabled to true. This creates a NATS cluster and a test box (nats-box) containing everything needed for NATS cluster usage and testing. The cluster includes 3 nodes and a nats Service; the available ports are:
- 4222 - main client port
- 8222 - HTTP management port for information reporting
- 6222 - routing port for clustering
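To check the cluster after it comes up, you can query the monitoring port or exec into nats-box (a sketch; the Service name and ports come from the list above, while the odax namespace, the nats-box Deployment name and the nats CLI inside it are assumptions):

```bash
# Query the NATS HTTP monitoring endpoints for server and route information.
kubectl port-forward svc/nats 8222:8222 -n odax &
curl http://localhost:8222/varz     # general server info
curl http://localhost:8222/routez   # cluster route status

# Publish a test message from the bundled nats-box container.
kubectl exec -it deployment/nats-box -n odax -- nats pub test.subject "hello"
```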
The Config chart prepares all static K8s resources and holds the main configuration of the other components. It also stores the ingress configuration for the above-mentioned Kong, because Kong is an external dependency.
To use AWS S3: set backup.provider to aws and backup.aws.destination to the S3 path of your choice (e.g. "s3://example-bucket/platform-backups"). Then create a Secret containing backupS3AccessKeyId and backupS3SecretAccessKey and pass its name to backups.aws.secretName.
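For example (the Secret name is arbitrary; the namespace mirrors the GCS example below):

```bash
kubectl create secret generic postgres-backup-s3-keys -n odax \
  --from-literal=backupS3AccessKeyId=<access-key-id> \
  --from-literal=backupS3SecretAccessKey=<secret-access-key>
```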
To use GCS: set backup.provider to gcs and backup.gcs.destination to the GCS path of your choice (e.g. "gs://example-bucket/platform-backups"). Then create a Google Cloud Storage bucket and a service account with Storage Admin rights and a JSON key for it. Finally, create a Secret containing the key in the backupGCSCredentials field and pass its name to backup.gcs.secretName:

```bash
kubectl create secret generic postgres-backup-gcs-key -n odax --from-file=backupGCSCredentials=*filename.json*
```
To use a PVC: set backup.provider to pvc and backup.pvc.name to the name of the PVC you'd like to use. You can pass the following template to kubectl apply -f to create a matching PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # The storage value controls how much data the volume can hold.
      # It's set to 100Mi (mebibytes) here, but large deployments can need 10-20Gi (gibibytes).
      storage: 100Mi
```
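After creating the claim with kubectl apply -f, the corresponding values could look like this (a sketch; adjust the prefix if the backup block is nested under a subchart in your values file):

```yaml
backup:
  provider: pvc
  pvc:
    name: backup-pvc   # must match the PVC created above
```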
This is only used for scheduled InfluxDB backups and for hook execution during database seeding.
Here is an explanation of the value parameters that can be configured in the main values.yaml located in this directory:

Parameter | Value | Notes |
---|---|---|
global | | Global configuration |
deploymentId | opendax_example | Normally a string like odax-*branch_name*; also used as the namespace name, with underscores replaced by dashes |
externalHostname | example.opendax.app | Fully qualified domain name |
infra.dbHost | mysql.core | Database hostname |
infra.dbPort | 3306 | Database port |
infra.influxdbDatabaseName | finex_opendax_example | InfluxDB database name; if not set, a string like finex-*global.deploymentId* is used |
infra.vaultHost | vault.core | Vault storage hostname |
storybook.externalHostname | web-sdk-example.opendax.app | Web SDK external hostname |
storybook.enabled | true | Deploy Web SDK or not |
kaigara.storageDriver | sql | Normally sql, but can be set to vault or values |
kaigara.driver | postgres | Database driver, postgres and mysql are supported |
kaigara.encryptor | plaintext | Encryptor type, plaintext , transit and aes are supported |
kaigara.version | v1.0.10 | kaigara and kai versions used in all deployments |
secretPrefix | opendax | Used for most ConfigMaps and Secrets as name prefix |
svcNames.*component* | *component_name* | Service name replacement |
profile.replicas.*component* | 1 | Number of replicas per component, default is 1 for all |
profile.requests.cpu.*component* | 0.1 | CPU request for component pod |
profile.requests.memory.*component* | 128Mi | Memory request for component pod |
profile.limits.cpu.*component* | 1.5 | CPU limit for component pod |
profile.limits.memory.*component* | 1024Mi | Memory limit for component pod |
| | |
config | | General configuration |
networkPolicy.enabled | true | |
kaigara.appNames | [] | List of components for which DB users are configured |
hooks.kaigaraDbUserCreation | true | Configure DB users for a list of components |
| | |
finex | | Finex configuration |
service.name | finex | Engine service name |
service.externalPort | 8080 | Engine external port |
service.internalPort | 8080 | Engine internal port |
grpc_service.name | finex-grpc | GRPC service name |
grpc_service.externalPort | 50051 | GRPC external port |
grpc_service.internalPort | 50051 | GRPC internal port |
p2pService.enabled | true | P2P service enabled |
p2pService.externalIP | 51.68.37.186 | P2P load balancer IP address |
p2pService.internalPort | 4200 | P2P internal port |
metrics.enabled | false | Prometheus scraping enabled |
metrics.port | 4242 | Prometheus scraping port |
hooks.dbCreation | true | Create the database if it does not exist |
hooks.dbUserCreation | true | Create the database user if it does not exist |
hooks.dbMigration | true | Migrate the database |
hooks.dbSeed | true | Import CSV files into the database |
secrets | {"ENV_NAME":"env value"} | Map of optional environment variables |
externalSecret | name-of-secret | Name of the external Secret to be mounted |
| | |
influxdb.enableSeed | true | Create DB and continuous queries for every InfluxDB shard |
| | |
frontdex | | FrontDEX configuration |
service.name | frontdex | Service name |
service.externalPort | 3000 | External port |
service.internalPort | 3000 | Internal port |
ingress.enabled | true | Ingress enabled |
ingress.annotations | {.:.} | Annotations for Ingress (don't replace existing ones) |
ingress.tls.enabled | true | TLS enabled for Ingress |
ingress.redirect.enabled | false | Redirection from secondary domains to main one enabled |
ingress.redirect.from_domains | [] | List of domains to redirect from |
mockDeployment | false | Deploy additional resources for testing with local mockservers |
| | |
gotrue | | GoTrue configuration |
extraEnv | [] | Extra env vars to be passed into the Pod |
externalSecret | [] | External Secret name to be passed into env |
service.name | gotrue | Service name |
service.externalPort | 9999 | External port |
service.internalPort | 9999 | Internal port |
hooks.dbCreation | true | Prepare the database if it is not present |
dbCreationImage | supabase/postgres:13.3.0 | Image used to run hooks |
secrets | {"ENV_NAME":"env value"} | Map of optional environment variables |
externalSecret | name-of-secret | Name of the external Secret to be mounted |
| | |
postgrest | | PostgREST configuration |
service.port | 3000 | Service port |
ingress.enabled | false | Ingress enabled |
hooks.dbCreation | true | Prepare the database if it is not present |
dbCreationImage | supabase/postgres:13.3.0 | Image used to run hooks |
| | |
realtime | | Realtime configuration |
service.port | 4000 | Service port |
ingress.enabled | false | Ingress enabled |
hooks.dbUserCreation | true | Prepare the database if it is not present |
dbCreationImage | supabase/postgres:13.3.0 | Image used to run hooks |
| | |
meta | | PgMeta configuration |
service.name | meta | Service name |
service.externalPort | 8080 | External port |
service.internalPort | 8080 | Internal port |
hooks.dbCreation | true | Prepare the database if it is not present |
dbCreationImage | supabase/postgres:13.3.0 | Image used to run hooks |
| | |
admin | | Studio Admin configuration |
service.name | admin | Service name |
service.externalPort | 3030 | External port |
service.internalPort | 3000 | Internal port |
ingress.enabled | true | Ingress enabled |
ingress.path | admin | Ingress path |
| | |
storage | | Storage configuration |
service.name | storage | Service name |
service.externalPort | 5000 | External port |
service.internalPort | 5000 | Internal port |
hooks.dbCreation | true | Prepare the database if it is not present |
dbCreationImage | supabase/postgres:13.3.0 | Image used to run hooks |
backend.remote | false | Whether to use remote S3 storage or not |
backend.storage.size | 10Gi | Size of the volume to allocate if the remote backend is not used |
postgrest.host | postgrest | Fallback PostgREST service hostname if global is not set |
postgrest.port | 3000 | PostgREST service port |
postgrest.options | -c search_path=storage | Options passed to PostgREST connection |
credentials.path | /root/.aws | S3 credentials mount path |
credentials.volumeName | s3-creds | Credentials volume name |
credentials.secretKeyName | s3Creds | The name of OpenDAX global secret key with S3 credentials |
| | |
nats | | NATS configuration |
enabled | false | Enable NATS cluster deployment (includes 3 nodes and a test box (nats-box)) |
| | |
*component* | | Common configurations, where *component* should be replaced with the name of the component (e.g. finex, frontdex) |
nameOverride | some_new_name | Replacement for chart name in K8s resources names |
fullnameOverride | *component* | Replacement for K8s resources names |
image.repository | quay.io/openware/*component* | Remote repository |
image.tag | 4.0.0 | Tag for image |
image.pullSecret | openware-quay | K8s secret with credentials to pull from private repository |
image.pullPolicy | IfNotPresent | Pull only if the image is not present locally |
kaigara.appName | *component* | Name of the app in the Kaigara secret store |
kaigara.scopes | public,private,secret | Available secret scopes for the app |
vault.token | changeme | Unique Vault token that shouldn't be changed |
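As an illustration, a minimal per-deployment override file built from the table above might look like this (a sketch; every key not listed keeps its chart default):

```yaml
global:
  deploymentId: opendax_example
  externalHostname: example.opendax.app
  infra:
    dbHost: mysql.core
    dbPort: 3306
  kaigara:
    storageDriver: sql
    driver: postgres

finex:
  p2pService:
    enabled: true
    externalIP: 51.68.37.186

frontdex:
  mockDeployment: false

nats:
  enabled: false
```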
Every OpenDAX deployment should be isolated from other namespaces, except for some components from the core namespace, which contains all the deployment dependencies (PostgreSQL etc.). This is achieved with K8s NetworkPolicies.

Note: make sure to whitelist all the required namespaces and external IPs before applying this to existing deployments.

Make sure every whitelisted namespace is labeled with kubernetes.io/metadata.name=*name*. This is done automatically if the Kubernetes version is >= 1.21; otherwise, you need to configure it manually:

```bash
kubectl label ns *name* kubernetes.io/metadata.name=*name*
```
NetworkPolicies are configured from the config.networkPolicy block as follows:

Parameter | Description | Default |
---|---|---|
enabled | Enable NetworkPolicies | true |
k8sApiServer.ipCidr | IP CIDR block of K8s API Servers | "" |
allowedBySelector | List of selectors that are allowed to be accessed by the deployment | [] |
allowedIpCIDRs | List of CIDRs that are allowed to be accessed by the deployment | [] |
Example of a NetworkPolicies config:

```yaml
networkPolicy:
  enabled: true
  allowedBySelector:
    # enables access to port 53 on every namespace
    - ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    # enables access to every port on core namespace, vault app
    - namespace: core
      name: vault
  allowedIpCIDRs:
    - 172.15.56.0/24
```
In case there are Pods that need access to specific hosts, you can use labels to grant them access:
Label | Description | Value |
---|---|---|
networking/allow-db-egress | Allow access to main DB instance in core namespace | true |
networking/allow-internet-egress | Allow access to all public IPs | true |
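For example, a Deployment whose Pods need the main database could carry the label in its pod template (a sketch of the relevant fragment):

```yaml
spec:
  template:
    metadata:
      labels:
        # Lets the NetworkPolicies allow egress from these Pods
        # to the DB instance in the core namespace.
        networking/allow-db-egress: "true"
```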
Listed below are some common issues that can occur while deploying this chart and how to resolve them.
- How to deploy this chart?

Normally some values are unique per deployment, so it is better to set them with CLI parameters. An explanation of all parameters is located above.
```bash
helm upgrade \
  --reset-values \
  --skip-crds \
  --set global.deploymentId=${deployment_id} \
  --set global.externalHostname=${ext_hostname} \
  --set frontdex.image.tag=${tag} \
  --set frontdex.mockDeployment=${mock_deployment} \
  --set finex.image.tag=${tag} \
  --set finex.p2pService.enabled=true \
  --set finex.p2pService.externalIP=${external_ip} \
  --set finex.p2pService.nodePort=${node_port} \
  --set-file "config.postgres.schemas={path-to-schemas}" \
  ${finex_seeds} \
  -n ${namespace} \
  -i \
  opendax \
  config/charts/opendax
```
- helm install/helm upgrade command failed with UPGRADE FAILED: pre-upgrade / pre-install hooks failed

1. Check which hook exactly failed by listing the K8s jobs: kubectl get jobs -n <namespace>.
2. Check whether the pod created by the failed K8s job still exists: kubectl get pod -n <namespace> (the name of the pod will contain the name of the related job).
3. (optional) If the pods created by the jobs were deleted, uninstall the whole release with helm uninstall <release-name> -n <namespace> and install it again. After that, monitor your namespace using the commands from the previous steps.
4. Once you see the pod of the failed job, check its logs to find the root cause of the problem: kubectl logs -f <pod_name> -n <namespace>.
5. After the problem is fixed, uninstall the release using the command from step 3 and install it again.
- Some pod crashes with errors like database not found/wrong password

1. Ensure that all hooks are enabled: *component*.hooks.*hook_name*: true.
2. If all hooks were already enabled, proceed to step 5.
3. Delete the K8s namespace (helm delete -n $ns opendax) and try to deploy again.
4. If all pods are running, you can finish here.
5. Port-forward Postgres/MySQL to localhost or use kubectl exec -it to enter it as the root user; all credential info can be found in the *global.secretPrefix*-global secret.
6. Delete the user and database with commands like DROP USER $DB_USER and DROP DATABASE $DB_NAME.
7. Proceed to step 3.
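A sketch of steps 5-6 (the Postgres Service name and port in the core namespace are assumptions; $DB_USER and $DB_NAME stand for the values taken from the *global.secretPrefix*-global secret):

```bash
# Forward the database from the core namespace to localhost.
kubectl port-forward svc/postgres 5432:5432 -n core &

# Connect as the root user and drop the broken user and database, then redeploy.
psql -h 127.0.0.1 -U postgres -c "DROP USER $DB_USER;"
psql -h 127.0.0.1 -U postgres -c "DROP DATABASE $DB_NAME;"
```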