
Zebrium OnPrem allows for additional configuration to enable advanced features. Below is a list of these features and the steps needed to configure them within the Zebrium helm chart.

Enabling OpenAI Models

Zebrium supports leveraging OpenAI models to augment and enhance the summarization and titling of root cause reports. Only certain OpenAI model providers are supported; the configuration examples below use Azure.

Zebrium also supports only the following OpenAI models:

  • Davinci
  • GPT-3.5 Turbo
  • GPT-4
  • GPT-4 32k

In order to leverage these models, you must have already created and set up OpenAI services with one of the supported providers. Multiple model configurations are supported, using the following JSON format:

[
  {
    "name":  "gpt-3-davinci",
    "model": "gpt-3-davinci",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": true,
    "provider": "azure"
  },
  {
    "name":  "gpt-35-turbo",
    "model": "gpt-35-turbo",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": false,
    "provider": "azure"
  },
  {
    "name":  "gpt-4",
    "model": "gpt-4",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": false,
    "provider": "azure"
  },
  {
    "name":  "gpt-4-32k",
    "model": "gpt-4-32k",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": false,
    "provider": "azure"
  }
]
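
Before creating the configmap, it can be worth confirming the file is valid JSON and follows the shape shown above. The following short script is a hypothetical helper (not part of Zebrium); it assumes each entry needs the six fields from the example and that exactly one model should be flagged as the default:

```python
import json

# Keys each model entry carries in the example configuration above.
REQUIRED_KEYS = {"name", "model", "key", "url", "default", "provider"}

def validate_models(models):
    """Check each entry for the required keys and a single default model."""
    for m in models:
        missing = REQUIRED_KEYS - m.keys()
        if missing:
            raise ValueError(f"model {m.get('name', '?')} is missing keys: {missing}")
    defaults = [m["name"] for m in models if m["default"]]
    if len(defaults) != 1:
        raise ValueError(f"expected exactly one default model, found: {defaults}")
    return [m["name"] for m in models]

# In practice, load the saved file instead:
#   with open("ai-nlp-models.json") as f:
#       validate_models(json.load(f))
example = json.loads("""[
    {"name": "gpt-4", "model": "gpt-4", "key": "<KEY>",
     "url": "<URL>", "default": true, "provider": "azure"}
]""")
print(validate_models(example))
```

If the script raises an error, fix the file before continuing to the installation steps below.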

Prerequisites

  • You have completed all assumptions/prerequisites from the installation
  • You have created an account with one of the supported OpenAI providers
  • You have onboarded one or more supported models with your provider and have the appropriate URLs and API keys

Installation

  1. Save the above JSON configuration into a JSON file on a machine with access to your Kubernetes cluster. For this example, we will store the file as ai-nlp-models.json.

  2. Create a configmap in the namespace that you are deploying your zebrium-onprem application into, using the following command:

kubectl create configmap -n example ai-nlp-models --from-file ai-nlp-models.json

In this example, we are naming our configmap ai-nlp-models and deploying it into the namespace example. When we created the configmap above, the contents of the file were stored in the configmap under a key corresponding to the filename. So in this example, the key of the configmap is ai-nlp-models.json. We can verify this by running:

kubectl describe configmap -n example ai-nlp-models

  3. Update your helm override file to include the following section:

zebrium-core:
  additionalEnvs:
    - name: AI_NLP_MODELS
      valueFrom:
        configMapKeyRef:
          name: ai-nlp-models
          key: ai-nlp-models.json

Here we are setting the new environment variable AI_NLP_MODELS to the value of the configmap we created in step 2. Be sure to update the name and key references to match the values you used in that step.
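
At runtime, zebrium-core receives the model list through this environment variable. As an illustration only (this is not Zebrium's actual code), a consumer of such a variable could parse it and pick out the default model like this:

```python
import json
import os

# Illustrative setup: simulate the variable injected via configMapKeyRef above.
os.environ["AI_NLP_MODELS"] = json.dumps([
    {"name": "gpt-4", "model": "gpt-4", "key": "<KEY>",
     "url": "<URL>", "default": True, "provider": "azure"},
])

# Parse the JSON value and select the entry flagged as the default model.
models = json.loads(os.environ["AI_NLP_MODELS"])
default_model = next(m for m in models if m["default"])
print(default_model["name"])
```

This also shows why the configmap value must be the raw JSON document: the environment variable holds the file contents verbatim.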

  4. Add any additional configurations, or continue with the installation process.