Pre-Installation
Before installation, there are several considerations to make when configuring your Kubernetes cluster for the Zebrium application. In order for the Zebrium software to be fully functional, the following software requirements must be met. Additional details and examples of these requirements can be found in the sections that follow.
- Kubernetes version 1.19 or higher is required.
- Helm v3 is required for installation of Zebrium.
- Kubernetes cluster availability meeting or exceeding the Zebrium sizing specifications. See Sizing Considerations for additional information.
- Ability to provision block storage. See Storage Considerations for additional information.
- An Ingress Controller with HTTPS support for a fully qualified domain name (FQDN). See Ingress Considerations for additional information.
- Access to Zebrium’s Registry. See Helm Chart and Image Repository Access for additional information.
- A helm override file with your respective configurations (a minimal skeleton is sketched below this list).
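For reference, the sketch below shows the general shape such an override file might take. It simply gathers in one place options that are covered individually in the sections that follow (hostname, storage classes, and the optional packaged ingress controller); the values shown are placeholders, not defaults.
global:
  ingress:
    # FQDN used to reach the Zebrium UI and APIs (see Hostname and DNS Resolution)
    hostname: "zebrium.example.com"
    className: nginx
ingress-nginx:
  # Optional packaged ingress controller (see Ingress Controllers)
  enabled: true
zebrium-core:
  storageProvisioners:
    # Example of bringing your own storage class (see Storage Considerations)
    vertica:
      enabled: false
      customStorageClass: "gp2"
    core:
      enabled: false
      customStorageClass: "gp2"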
Once these requirements have been met, please be sure to review the Additional Configurations documentation for any optional configurations. Otherwise, you are ready to move on to the Installation Steps.
Sizing Considerations
Zebrium provides a sizing calculator at https://sizing.zebrium.com/ to assist with determining the vCPU and Memory requirements of the destination Kubernetes Cluster. The sizing guide provides several key metrics for different Zebrium ingest volumes, detailed below:
- Minimum Node Size (vCPU) - This is the minimum vCPU size for each node. It is based on the sizing of the vertica container, as that is the largest requester of resources.
- Minimum Node Size (Memory) - This is the minimum memory size for each node. It is based on the sizing of the vertica container, as that is the largest requester of resources.
- Total vCPUs required - This is the total number of CPUs that need to be available in the cluster specifically allocated for the Zebrium deployment.
- Total Memory required - This is the total memory that needs to be available in the cluster specifically allocated for the Zebrium deployment.
- Total SSD Storage required - This is the total amount of disk space that must be able to be provisioned within the cluster and that is specifically allocated to the Zebrium deployment. While this is specified as a sum across the entire cluster, the storage requirements per node will vary with the containers that are deployed.
The Zebrium sizing guide also exposes information about the major components that are deployed into the cluster, and their respective configurations under the Show Advanced Info button. This information can be used in conjunction with the metrics above in order to more effectively determine the necessary resource requirements.
Example 1:
ABC Corp is looking to install a Zebrium deployment in their local cluster with the intention to ingest up to 50GB a day. After consulting the sizing guide, they provision a cluster with 2 nodes. Each node has 12 vCPUs and 24GB of RAM. When installed, the vertica pod is deployed to one node and requires 280GB of storage, while the zapi and core-engine pods are deployed to node 2, with 150GB of storage required.
Example 2:
CDE Corp is looking to install a Zebrium deployment in their local cluster with the intention to ingest up to 50GB a day. After consulting the sizing guide, they provision a cluster with 2 nodes. One node has 12 vCPUs and 24GB of RAM, and the other node has 6 vCPUs and 18GB of RAM. When they install the Zebrium application, the vertica and zapi pods are deployed to the first node, requiring 380GB of storage, 9 cores, and 18GB of RAM. However, they notice that the core engine logs container is pending and will not start. This is because, while the second node has 6 vCPUs, the system pods deployed to that node have used 1.1 vCPUs. This leaves only 4.9 vCPUs available to core engine logs, which is requesting 5 vCPUs.
Note: While it is possible to meet the total resource requirements while disregarding the minimum node sizing, you run the risk of pods being unable to schedule due to lack of resources. Please consult the Advanced Info section of the sizing guide to ensure that all pods will have sufficient headroom to provision, while taking into account any other applications/pods deployed on the node.
Storage Considerations
The Zebrium deployment is composed of several StatefulSets, all of which require persistent volumes. Due to the constraints of our current database, Zebrium does not support NFS mounts, and all volumes provisioned will need to be physical or block storage. While the Zebrium deployment provides several ways to define the respective Kubernetes storage classes, it does not configure any external dependencies or permissions that may be needed to ensure provisioning happens correctly. It is the responsibility of the cluster operator to ensure that any and all external dependencies are met for your provisioner of choice.
The Zebrium application separates volumes into two different flavors, core and vertica. This allows operators the flexibility of defining different retention and drive configurations for volumes that are mounted onto our central database (Vertica) versus those used by our workers as buffers and working directories. Zebrium generally recommends that you configure the reclaimPolicy for your vertica storage class to Retain to prevent any unintentional data loss if the corresponding StatefulSet or PVC is lost or deleted. This is not required on the core storage classes, as loss of data stored in those claims will not be detrimental to the system.
Our application offers two options for how cluster operators can provide storage to satisfy the Zebrium deployment's needs. Operators can bring their own storage classes, or use our helm chart to define Zebrium-specific classes. Both of these options are explored in more detail below.
Bring Your Own Storage Classes (BYOSC)
Cluster operators may wish to use existing storage classes rather than redefining them inside of the Zebrium helm charts. In order to bring your own storage class, you will need to add the following section to your override file. In the example below, we wish to use the StorageClass called gp2 instead of the Zebrium-created ones.
zebrium-core:
  storageProvisioners:
    vertica:
      enabled: false
      customStorageClass: "gp2"
    core:
      enabled: false
      customStorageClass: "gp2"
This configuration will disable the creation of the Zebrium storage classes and instead instruct the PVCs to use the class gp2.
Using Zebrium Storage Classes
By default, Zebrium will provision two Storage Classes to be used for the Zebrium application. Configurations for these two storage classes are managed through the storageProvisioners section within the helm chart. The default options for these settings are shown below:
storageProvisioners:
  vertica:
    enabled: true
    provisioner: kubernetes.io/no-provisioner
    reclaimPolicy: Retain
    parameters: {}
  core:
    enabled: true
    provisioner: kubernetes.io/no-provisioner
    reclaimPolicy: Retain
    parameters: {}
As we can see in the example above, there are two separate declarations of a storage provisioner, vertica and core. They both function in exactly the same way, so we will focus on walking through core here. There are 4 configuration options available for the core storage provisioner, and we will dive into what each one does.
- enabled - This option enables or disables the creation of the storage class within helm. See BYOSC for an example of disabling this.
- provisioner - This configures which underlying storage provisioner is used. We support any provisioner that is available to Kubernetes and that has been configured for your system (an example is sketched after this list).
- reclaimPolicy - Since these will be dynamically created volumes, we need to define a reclaim policy for when the resource is deleted. The available options are Retain or Delete. Read more here.
- parameters - Every provisioner provides a series of additional parameters that help to describe the volumes. These are unique to your selected provisioner. Read more here.
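As an illustration of the provisioner, reclaimPolicy, and parameters options, the sketch below configures both classes to use the AWS EBS CSI driver. This assumes the EBS CSI driver is already installed and authorized in your cluster; the provisioner name and the type parameter are specific to that driver and will differ for other provisioners. Placement under zebrium-core matches the override file layout used in the BYOSC example above.
zebrium-core:
  storageProvisioners:
    vertica:
      enabled: true
      provisioner: ebs.csi.aws.com   # AWS EBS CSI driver (assumed to be installed)
      reclaimPolicy: Retain          # recommended for vertica volumes
      parameters:
        type: gp3                    # driver-specific volume type
    core:
      enabled: true
      provisioner: ebs.csi.aws.com
      reclaimPolicy: Delete          # loss of core volumes is not detrimental
      parameters:
        type: gp3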
Dynamic vs Manual Volume Provisioning
While the Zebrium application will dynamically create persistent volume claims as the necessary pods are scheduled, cluster operators may choose to use a provisioner that does not support dynamic provisioning. An example of a provisioner that does not support dynamic provisioning would be the local provisioner. When using such provisioners, it is the responsibility of the cluster operator to ensure that any needed persistent volumes are created and available to the requesting persistent volume claims. A walkthrough of this can be found here.
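For instance, with the default kubernetes.io/no-provisioner classes, a cluster operator might pre-create local PersistentVolumes along the lines of the sketch below. The volume name, size, host path, node name, and storage class name are purely illustrative; the storageClassName must match the class actually created by the chart (or your own class), and one such volume is needed for each claim the deployment makes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vertica-local-pv-0             # illustrative name
spec:
  capacity:
    storage: 280Gi                     # size the volume per the sizing guide
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: zebrium-vertica    # hypothetical; use the chart-created class name
  local:
    path: /mnt/disks/vertica           # pre-formatted disk mounted on the node
  nodeAffinity:                        # local volumes must be pinned to a specific node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1               # illustrative node name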
Ingress Considerations
The Zebrium application leverages Kubernetes ingress as its preferred method for exposing its internal services and UIs to external consumers. Ingress resources are automatically created for each necessary route and can be customized through the helm chart parameters.
Helm Parameter Overrides
The helm chart provides several levels of configuration for modifying the provisioned ingress resources in order to tailor them to your desired ingress controller's requirements. Since ingress frequently uses annotations to configure controller-specific options, our helm chart provides two ways to customize the ingress resources: through a global configuration, as well as application-level configurations. When the chart is templated, global values will override the resource-level configurations when both are set.
Global Overrides
Below are the available global configurations for all ingress resources. For annotations and tls, the helm chart will combine values defined at both the global and individual level. For the most up-to-date list of all options, please see the values.yaml file of the current helm chart.
global:
  ingress:
    # -- Ingress Class to use for all objects.
    className:
    # -- Hostname to expose all ingress objects on.
    hostname: 'zebrium.example.com'
    # -- Global Annotations to add to all ingress objects
    annotations: {}
    tls: []
Resource Overrides
Below are the locations of the individual ingress resources, allowing you to modify only a particular ingress resource instead of all resources. For the most up-to-date list of all options, please see the values.yaml file of the current helm chart. An example of overriding a single resource follows the block below.
zebrium-core:
  zapi:
    ingress:
      path: '/api/v2'
      annotations: {}
      tls: []
  report:
    ingress:
      path: '/report/v1'
      annotations: {}
      tls: []
  mwsd:
    ingress:
      path: '/mwsd/v1'
      annotations: {}
      tls: []
zebrium-ui:
  ingress:
    path: '/'
    annotations: {}
    tls: []
zebrium-auth:
  ingress:
    path: '/auth'
    annotations: {}
    tls: []
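As a brief illustration, the override below adds an annotation to only the zebrium-ui ingress, leaving the other ingress resources untouched. The annotation shown is an nginx-specific example and is only meaningful if your ingress controller honors it.
zebrium-ui:
  ingress:
    annotations:
      # Example nginx annotation applied only to the UI ingress
      nginx.ingress.kubernetes.io/proxy-read-timeout: '300'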
Ingress Controllers
In order to expose the ingress resources defined by the Zebrium deployment, an ingress controller must be installed and configured by your cluster operator.
Packaged Ingress Controller
We do provide the option to install ingress-nginx as part of the Zebrium chart. If you wish to use the provided ingress-nginx, you can use the following configuration to get started.
ingress-nginx:
  enabled: true
global:
  ingress:
    className: nginx
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: '0'
Hostname and DNS Resolution
Currently, the Zebrium deployment requires a DNS hostname that allows access to the ingress endpoints. This endpoint needs to be a fully qualified domain name (FQDN). Cluster operators should also ensure that this FQDN is added as a record to their DNS server and is resolvable from all systems intending to access the Zebrium installation. Network access from the desired systems to the ingress endpoint should also be verified. To set the FQDN in the Zebrium helm chart, please use the following override:
global:
  ingress:
    hostname: ""
TLS
Due to browser security configurations, Zebrium's UI must be served over HTTPS with a backing TLS certificate. Failure to do so will create a sign-in loop within the UI, blocking the user from being able to access the internal system. There are several ways to secure the ingress endpoint with a TLS certificate, including through the ingress resources themselves, through configuration of your ingress controller, using tools like cert-manager, a service mesh, or attaching certificates directly to provisioned resources, like cloud load balancers. It is at the discretion of the cluster operator to determine the best solution for their environment.
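As one example of the first approach, if a TLS secret for your hostname already exists in the release namespace (created manually or by a tool such as cert-manager), the global tls list can reference it so that it is applied to all ingress resources. The secret name below is purely illustrative, and this assumes the chart passes these entries through to the ingress spec in the standard Kubernetes ingress tls format.
global:
  ingress:
    hostname: "zebrium.example.com"
    tls:
      - secretName: zebrium-tls        # hypothetical, pre-created TLS secret
        hosts:
          - zebrium.example.com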
Helm Chart and Image Repository Access
Zebrium hosts its helm charts and associated docker images within its own registry. Zebrium will provide credentials (username/password) for accessing these resources as part of the on-prem onboarding process. As part of the installation process, you will create a Kubernetes image pull secret.
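As a rough sketch of what that pull secret looks like, it is a standard Kubernetes docker-registry secret similar to the manifest below. The secret name, namespace, and data value are illustrative placeholders only; the encoded Docker config would contain the Zebrium registry host and the username/password supplied during onboarding.
apiVersion: v1
kind: Secret
metadata:
  name: zebrium-registry-creds         # illustrative name
  namespace: zebrium                   # illustrative namespace
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config containing the registry host and credentials
  .dockerconfigjson: "<base64-encoded Docker config JSON>"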