Deploy on Kubernetes with Helm

You can deploy Timeplus Enterprise on a Kubernetes cluster with Helm.

Prerequisites

  • Ensure you have Helm 3.12 or later installed in your environment. For details about how to install Helm, see the Helm documentation.
  • Ensure you have Kubernetes 1.25 or higher installed in your environment.
  • Ensure you have allocated enough resources for the deployment. For a 3-node cluster deployment, each timeplusd pod requests 2 cores and 4GB of memory by default, so each node should have at least 8 cores and 16GB of memory.
  • Ensure you have network access to Docker Hub.
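
You can quickly confirm both tool versions before proceeding (standard commands; the exact output format varies by client version):

helm version
kubectl version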

Quickstart with minikube

This is a quickstart guide for installing a 3-node Timeplus Enterprise cluster with default configurations on minikube, using the Helm package manager.

Although this guide focuses on minikube, you should be able to install on other Kubernetes environments such as Amazon EKS or your own Kubernetes cluster as well. You may need to update the configurations accordingly to fit your Kubernetes environment. Please refer to the Configuration Guide for the available values of the chart.

Get minikube ready

Please follow https://minikube.sigs.k8s.io/docs/start/ to get minikube ready. For Mac users, you can install and start it via:

brew install minikube
minikube start
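
Before continuing, verify that the local cluster is up (both are standard commands; node names will differ per environment):

minikube status
kubectl get nodes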

Add Timeplus Helm chart repository

Simply run the following commands to add the repo and list its charts.

helm repo add timeplus https://install.timeplus.com/charts
helm search repo timeplus

A sample output would be:

NAME                           CHART VERSION   APP VERSION   DESCRIPTION
timeplus/timeplus-enterprise   v2.3.2          2.2.5         A Helm chart for deploying a Timeplus enterpris...
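
If you added the repo earlier, refresh it so the listing reflects the latest chart versions:

helm repo update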

Create Namespace

You can install Timeplus Enterprise into any namespace you choose. In this guide, we use the namespace timeplus.

export NS=timeplus
kubectl create ns $NS
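
Optionally, make this the default namespace for your current kubectl context, so the -n $NS flag can be omitted from later commands:

kubectl config set-context --current --namespace=$NS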

Prepare the values.yaml

Copy and paste the following sample YAML into a file named values.yaml.

timeplusd:
  storage:
    stream:
      className: <Your storage class name>
      size: 10Gi
      selector: null
    history:
      className: <Your storage class name>
      size: 10Gi
      selector: null
  additionalUsers:
    - username: timeplus_user
      password: changeme
  resources:
    limits:
      cpu: "32"
      memory: "60Gi"
    requests:
      cpu: "2"
      memory: 4Gi
kv:
  storage:
    className: <Your storage class name>
    size: 10Gi
    selector: null

Then make changes to better fit your needs.

  1. Update the storage class names and sizes accordingly. You can check the available storage classes on your cluster by running kubectl get storageclass (see the sample output after this list).
  2. Update the username and password under additionalUsers. You will be able to log in to the Timeplus web console with those users. See the User management section for advanced user management.
  3. Update the resources and make sure your cluster has enough CPU and memory to run the stack. For a 3-node cluster deployment, each timeplusd pod requests 2 cores and 4GB of memory by default, so each node should have at least 8 cores and 16GB of memory.
  4. Optionally, refer to the Configuration Guide and add other configurations.
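
On minikube, for example, kubectl get storageclass typically reports the built-in standard class backed by the hostpath provisioner (names and classes will differ on other clusters):

NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  5m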

Install Helm chart

In this step, we install the Helm chart using the release name timeplus.

export RELEASE=timeplus
helm -n $NS install -f values.yaml $RELEASE timeplus/timeplus-enterprise

It takes a minute or two to start the whole stack. Run kubectl get pods -n $NS to check the status, for example:

NAME                                  READY   STATUS    RESTARTS   AGE
kv-0                                  1/1     Running   0          118s
timeplus-appserver-75dff8f964-g4fl9   1/1     Running   0          118s
timeplus-connector-7c85b7c9c9-gwdtn   1/1     Running   0          118s
timeplus-web-58bcb4f486-s8wmx         1/1     Running   0          118s
timeplusd-0                           1/1     Running   0          118s
timeplusd-1                           1/1     Running   0          118s
timeplusd-2                           1/1     Running   0          118s

Once all the pods are in the Running status, the stack is ready to access. If some pods fail to reach the Running status, run kubectl describe pod <pod_name> -n $NS to get more information.
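
If a pod stays in Pending or keeps restarting, the namespace events usually reveal the cause, such as insufficient resources or an unbound PVC:

kubectl get events -n $NS --sort-by=.lastTimestamp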

Expose the Timeplus Console

There are different ways to expose the services of the Timeplus stack. In this step, we use kubectl port forwarding for quick access. Run kubectl port-forward svc/timeplus-appserver 8000:8000 -n $NS --address 0.0.0.0, then open http://localhost:8000 in your browser to visit the Timeplus Console web UI. After finishing the onboarding, you should be able to log in with the username and password you set in additionalUsers.
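
With the port forward running, you can confirm the console is reachable before opening the browser (any HTTP response indicates the appserver is serving requests):

curl -I http://localhost:8000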

Uninstall and cleanup

To uninstall the Helm release, run helm -n $NS uninstall $RELEASE.

Please note that, by default, the PVCs will not be deleted. You can use kubectl get pvc -n $NS and kubectl delete pvc <pvc_name> -n $NS to delete them manually.
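
To remove all of the release's remaining data in one step, you can delete every PVC in the namespace (this permanently deletes the data, so double-check the namespace first):

kubectl delete pvc --all -n $NS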

Operations

User management

Timeplus web doesn't support user management yet, so you need to deploy the timeplus CLI pod and use the timeplus CLI to manage users. To do so, add the following section to values.yaml and upgrade the Helm chart (see the upgrade command below).

timeplusCli:
  enabled: true
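
Apply the change by upgrading the release with the same values file:

helm -n $NS upgrade -f values.yaml $RELEASE timeplus/timeplus-enterprise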

Once the timeplus-cli pod is up and running, you can run kubectl exec -it timeplus-cli -n $NS -- /bin/bash to open a shell in the pod. Refer to the following commands for user management, making sure you adapt them to your own deployment.

# Set the timeplusd TCP port; 8463 is the default, adjust it if your deployment overrides the port
export TIMEPLUSD_TCP_PORT=8463

# Get the IP:port addresses of the timeplusd pods as a comma-separated list
export TIMEPLUSD_POD_IPS=$(kubectl get pods -n $NS -l app.kubernetes.io/component=timeplusd -o jsonpath='{.items[*].status.podIP}' | tr ' ' '\n' | sed "s/\$/:${TIMEPLUSD_TCP_PORT}/" | paste -sd ',' -)

# List users
timeplus user list --address ${TIMEPLUSD_POD_IPS} --admin-password mypassword

# Create a user with username "hello" and password "world"
timeplus user create --address ${TIMEPLUSD_POD_IPS} --admin-password mypassword --user hello --password world

# Delete the user "hello"
timeplus user delete --address ${TIMEPLUSD_POD_IPS} --admin-password mypassword --user hello
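
The exact subcommands and flags can vary between CLI versions; the built-in help is the reference for your deployment:

timeplus user --help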

Troubleshooting

If something goes wrong, you can run the following commands to collect more information.

  1. kubectl get pods -n $NS: Make sure all the pods are in the Running status and READY is 1/1.
  2. kubectl logs <pod> -n $NS: Check the logs of each pod to make sure there are no obvious errors (see the tip after this list for crash-looping pods).
  3. Run kubectl cluster-info dump -n $NS to dump all the information and send it to us.
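
If a pod is crash-looping, its current logs may be empty; the --previous flag prints the logs of the last terminated container instead:

kubectl logs timeplusd-0 -n $NS --previous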

Configuration Guide

| Key | Description | Default Value |
| --- | ----------- | ------------- |
| **global** | | |
| global.nodeSelector | Node selector for scheduling pods | {} |
| global.tolerations | Tolerations for scheduling pods | [] |
| global.affinity | Affinity settings for scheduling pods | {} |
| global.imageRegistry | Image registry for pulling images | "" |
| global.imagePullPolicy | Image pull policy | "IfNotPresent" |
| global.pvcDeleteOnStsDelete | Delete PVCs when StatefulSet is deleted (K8s >= 1.27.0) | false |
| global.pvcDeleteOnStsScale | Delete PVCs when StatefulSet is scaled (K8s >= 1.27.0) | false |
| **timeplus** | | |
| timeplus.publicDomain | Public domain or IP address of the cluster | timeplus.local |
| timeplus.port | Port for accessing the service, should be quoted | "80" |
| **timeplusWeb** | | |
| timeplusWeb.enabled | Enable Timeplus Web service | true |
| timeplusWeb.imageRegistry | Image registry for Timeplus Web | "" |
| timeplusWeb.imagePullPolicy | Image pull policy for Timeplus Web | "IfNotPresent" |
| timeplusWeb.image | Image name for Timeplus Web | timeplus/timeplus-web |
| timeplusWeb.tag | Image tag for Timeplus Web | 1.4.17 |
| timeplusWeb.labels | Labels for Timeplus Web pods and deployment | {} |
| timeplusWeb.affinity | Affinity settings for Timeplus Web | {} |
| timeplusWeb.resources | Resource requests and limits for Timeplus Web | {} |
| **timeplusAppserver** | | |
| timeplusAppserver.enabled | Enable Timeplus Appserver | true |
| timeplusAppserver.imageRegistry | Image registry for Timeplus Appserver | "" |
| timeplusAppserver.imagePullPolicy | Image pull policy for Timeplus Appserver | "IfNotPresent" |
| timeplusAppserver.image | Image name for Timeplus Appserver | timeplus/timeplus-appserver |
| timeplusAppserver.tag | Image tag for Timeplus Appserver | 1.4.32 |
| timeplusAppserver.labels | Labels for Timeplus Appserver pods and deployment | {} |
| timeplusAppserver.replicas | Number of replicas for Timeplus Appserver | 1 |
| timeplusAppserver.configs | Custom configurations for Timeplus Appserver | {} |
| timeplusAppserver.affinity | Affinity settings for Timeplus Appserver | {} |
| timeplusAppserver.resources | Resource requests and limits for Timeplus Appserver | {} |
| **timeplusd** | | |
| timeplusd.enabled | Enable Timeplus Daemon | true |
| timeplusd.imageRegistry | Image registry for Timeplus Daemon | "" |
| timeplusd.imagePullPolicy | Image pull policy for Timeplus Daemon | "IfNotPresent" |
| timeplusd.image | Image name for Timeplus Daemon | timeplus/timeplusd |
| timeplusd.tag | Image tag for Timeplus Daemon | 2.2.7 |
| timeplusd.labels | Labels for Timeplus Daemon pods and StatefulSet | {} |
| timeplusd.replicas | Number of replicas for Timeplus Daemon | 3 |
| timeplusd.affinity | Affinity settings for Timeplus Daemon | {} |
| timeplusd.defaultAdminUsername | Default admin username | admin |
| timeplusd.defaultAdminPassword | Default admin password | timeplusd@t+ |
| timeplusd.initJob.imageRegistry | Image registry for initialization job | "" |
| timeplusd.initJob.imagePullPolicy | Image pull policy for initialization job | "IfNotPresent" |
| timeplusd.initJob.image | Image name for initialization job | timeplus/boson |
| timeplusd.initJob.tag | Image tag for initialization job | 0.0.2 |
| timeplusd.initJob.resources | Resource requests and limits for initialization job | {} |
| timeplusd.ingress.enabled | Enable ingress for Timeplus Daemon | false |
| timeplusd.ingress.restPath | Path for REST API calls to Timeplus Daemon | "/timeplusd" |
| timeplusd.service.type | Service type for Timeplus Daemon | ClusterIP |
| timeplusd.storage.log.enabled | Enable separate PV for logs | false |
| timeplusd.storage.log.className | Storage class for logs | local-storage |
| timeplusd.storage.log.size | Size of PV for logs | 10Gi |
| timeplusd.storage.log.selector.matchLabels.app | Selector labels for log PV | timeplusd-log |
| timeplusd.storage.stream.className | Storage class for stream data | local-storage |
| timeplusd.storage.stream.size | Size of PV for stream data | 10Gi |
| timeplusd.storage.stream.selector.matchLabels.app | Selector labels for stream data PV | timeplusd-data-stream |
| timeplusd.storage.history.className | Storage class for historical data | local-storage |
| timeplusd.storage.history.size | Size of PV for historical data | 10Gi |
| timeplusd.storage.history.selector.matchLabels.app | Selector labels for historical data PV | timeplusd-data-history |
| timeplusd.resources.limits.cpu | CPU limits for Timeplus Daemon | "32" |
| timeplusd.resources.limits.memory | Memory limits for Timeplus Daemon | "60Gi" |
| timeplusd.resources.requests.cpu | CPU requests for Timeplus Daemon | "2" |
| timeplusd.resources.requests.memory | Memory requests for Timeplus Daemon | 4Gi |
| timeplusd.config | Custom configurations for Timeplus Daemon | {} |
| timeplusd.livenessProbe | Liveness probe settings for Timeplus Daemon | See values.yaml |
| **timeplusConnector** | | |
| timeplusConnector.enabled | Enable Timeplus Connector | true |
| timeplusConnector.imageRegistry | Image registry for Timeplus Connector | "" |
| timeplusConnector.imagePullPolicy | Image pull policy for Timeplus Connector | "IfNotPresent" |
| timeplusConnector.image | Image name for Timeplus Connector | timeplus/timeplus-connector |
| timeplusConnector.tag | Image tag for Timeplus Connector | 1.5.3 |
| timeplusConnector.labels | Labels for Timeplus Connector pods and deployment | {} |
| timeplusConnector.affinity | Affinity settings for Timeplus Connector | {} |
| timeplusConnector.resources | Resource requests and limits for Timeplus Connector | {} |
| **kv** | | |
| kv.enabled | Enable KV service | true |
| kv.imageRegistry | Image registry for KV service | "" |
| kv.imagePullPolicy | Image pull policy for KV service | "IfNotPresent" |
| kv.image | Image name for KV service | timeplus/timeplusd |
| kv.tag | Image tag for KV service | 2.2.7 |
| kv.labels | Labels for KV service pods and StatefulSet | {} |
| kv.storage.className | Storage class for KV service | local-storage |
| kv.storage.size | Size of PV for KV service | 10Gi |
| kv.storage.selector.matchLabels.app | Selector labels for KV service PV | kv |
| kv.resources | Resource requests and limits for KV service | {} |
| kv.affinity | Affinity settings for KV service | {} |
| **ingress** | | |
| ingress.enabled | Enable ingress | |