
Cluster deployment

Prerequisites

Before you begin installing Unblu on a cluster, you need the following:

  • A running Kubernetes cluster. This may be an existing cluster already used in production, a newly installed cluster, or a managed cloud cluster operated by your organization. (This is distinct from Unblu’s own cloud offering, the Unblu Cloud.) Unblu supports both Kubernetes and OpenShift.

  • The Kustomize configuration management tool. Kustomize has been integrated into kubectl since version 1.14, so it may already be available in your Kubernetes installation (see the quick check after this list).

  • The Unblu Kustomize bundle used to deploy the software. This is provided to you by the Unblu delivery team.

  • Access to the Unblu container image registry (gcr.io/unblu-containerrepo-public) to pull the images.
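
To confirm that Kustomize is available, you can query both the standalone binary and the version built into kubectl (a quick check; either one is sufficient):

# Standalone Kustomize binary
kustomize version
# Kustomize integrated into kubectl (version 1.14 and later)
kubectl kustomize --help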

Your cluster must satisfy the following requirements:

  1. It must have at least 3 nodes. If it doesn’t, Unblu’s anti-affinity rules prevent a successful deployment (see the node count check after this list).

  2. You don’t enforce a thread limit, or your thread limit is at least 4096. If you have a lower thread limit, Unblu will run into errors under load.

    Note that OpenShift 4 enforces a default thread limit of 1024.

  3. You have a working ingress controller. On OpenShift, the OpenShift Router fulfills this requirement.

  4. Unblu must be able to request persistent volumes using a PersistentVolumeClaim. If it can’t, the pre-configured monitoring stack won’t work.
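
A minimal check for the first requirement, counting the nodes in the cluster (the command is the same on OpenShift with oc in place of kubectl):

# Unblu requires at least 3 nodes for its anti-affinity rules
kubectl get nodes --no-headers | wc -l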

You might want to check the cluster hardware requirements before you start.

If you’re unable to run kustomize, the Unblu delivery team can send you a prebuilt YAML deployment file.

Access to the Unblu image registry

A Kubernetes cluster requires access to the image registry at all times to pull images. If a company policy prevents this, you can use a company-internal registry as a proxy. Products such as Artifactory can be used either to push images manually or to download images transparently in the background.
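
For example, to mirror an image into an internal registry manually, you can pull, retag, and push it. The sketch below uses the collaboration server image; registry.example.com and <tag> are placeholders for your registry and the delivered image tag:

docker pull gcr.io/unblu-containerrepo-public/collaborationserver-public-centos7:<tag>
docker tag gcr.io/unblu-containerrepo-public/collaborationserver-public-centos7:<tag> \
    registry.example.com/unblu/collaborationserver-public-centos7:<tag>
docker push registry.example.com/unblu/collaborationserver-public-centos7:<tag>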

Access credentials for the Unblu image registry are usually provided as a YAML file named gcr-secret.yaml. Apply this file to your cluster before you perform the installation:

Listing 1. Create a namespace and apply the image pull secret (Kubernetes)
kubectl create namespace unblu-test
kubectl apply -f gcr-secret.yaml --namespace=unblu-test
Listing 2. Create a project and apply the image pull secret (OpenShift)
oc new-project unblu-test \
    --description="Unblu Test Environment" \
    --display-name="unblu-test"
oc project unblu-test
oc apply -f gcr-secret.yaml
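
To verify that the pull secret was created, list the secrets in the namespace (the secret’s name is defined inside gcr-secret.yaml and may vary):

kubectl get secrets --namespace=unblu-test
# or, on OpenShift:
oc get secrets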

Database secret

Unblu stores all data in a relational database. The credentials to access the database must be passed to Unblu as a secret named database.

Listing 3. Database secret
kind: Secret
apiVersion: v1
metadata:
  name: database
type: Opaque
stringData:
  DB_USER: unblu
  DB_PASSWORD: unblu_password
  DB_ADMIN_USER: unblu_admin
  DB_ADMIN_PASSWORD: admin_password
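
Save the secret to a file and apply it to the same namespace as the rest of the deployment (the file name database-secret.yaml is only an example):

kubectl apply -f database-secret.yaml --namespace=unblu-test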

The database secret is used to populate the user configuration. Consequently, you don’t need to declare the following parameters manually in the configuration file unblu-customer.properties:

Listing 4. When using the secret, the following lines are not needed in unblu-customer.properties
com.unblu.storage.database.user=unblu
com.unblu.storage.database.password=<pwd>
com.unblu.storage.database.adminUser=unblu_admin
com.unblu.storage.database.adminPassword=<pwd>

Other database-related configuration is part of the unblu-customer.properties file and follows the Unblu configuration standard. Refer to the section Database configuration for more details.
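
For illustration only, those remaining settings typically include the JDBC connection details; the keys below are hypothetical examples, and the authoritative list is in the section Database configuration:

com.unblu.storage.database.url=jdbc:postgresql://db.example.com:5432/unblu
com.unblu.storage.database.vendor=POSTGRESQL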

Performing the installation

The Unblu delivery team will send a compressed archive containing a set of files. The listing below assumes that you’ve extracted the bundle into a folder called unblu-installation.

Listing 5. Build a kustomize bundle and apply the YAML to a cluster
kustomize build unblu-installation > unblu.yaml
kubectl apply -f unblu.yaml
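
Alternatively, you can pipe the output of kustomize directly into kubectl without an intermediate file:

kustomize build unblu-installation | kubectl apply -f -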

Before deploying Unblu into a cluster, you may want to adjust the following settings in kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: customer (1)

bases: (2)
- unblu-kubernetes-base/collaboration-server
- unblu-kubernetes-base/renderingservice
- unblu-kubernetes-base/k8s-ingress
- unblu-kubernetes-base/k8s-prometheus
- unblu-kubernetes-base/grafana

resources: [] (3)

patchesStrategicMerge: [] (4)

configMapGenerator:
- name: collaboration-server-config
  behavior: merge
  files:
    - unblu-customer.properties (5)

secretGenerator:
- name: ingress-tls (6)
  behavior: merge
  files:
    - certs/tls.crt
    - certs/tls.key
  type: "kubernetes.io/tls"

images: (7)
  - name: gcr.io/unblu-containerrepo-public/collaborationserver-public-centos7
    newName: example.com/unblu/collaborationserver-dev-centos7
  - name: gcr.io/unblu-containerrepo-public/headless-browser-public-ubuntu1804
    newName: example.com/unblu/headless-browser-private-ubuntu1804
  - name: gcr.io/unblu-containerrepo-public/nginx-public-centos7
    newName: example.com/unblu/nginx-private-centos7
  - name: gcr.io/unblu-containerrepo-public/haproxy-public-centos7
    newName: example.com/unblu/haproxy-private-centos7
  - name: gcr.io/unblu-containerrepo-public/coturn-public-centos7
    newName: example.com/unblu/coturn-private-centos7
1 Change the namespace (Kubernetes) or project (OpenShift) to be used.
2 Add or remove base modules, depending on your environment or license.
3 Deploy custom components as part of Unblu.
4 Override individual values of the deployment with strategic merge patches.
5 Add the configuration file unblu-customer.properties to the deployment.
6 Add the TLS certificate as a secret to be used for the Ingress or Route.
7 Rewrite the image sources to point to a different registry.

Instead of updating the kustomization.yaml that was delivered to you, we recommend creating a new one to keep your customizations separate from our deliveries:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: unblu-production

bases:
- unblu-delivery
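
With this layout, you build from the folder containing your new kustomization.yaml. The sketch below assumes that folder is called unblu-overlay and contains the delivered unblu-delivery folder referenced in bases:

kustomize build unblu-overlay > unblu.yaml
kubectl apply -f unblu.yaml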

Update an existing installation

Upgrading an existing Unblu installation involves the following steps:

  1. Remove the existing deployment from the cluster using clean.sh.

  2. Apply the new deployment, identical to a new installation.

  3. Database patches are applied automatically when the Unblu server starts.

For simple configuration updates, the first step may be omitted; for Unblu release upgrades, all steps are mandatory.
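
Put together, a release upgrade might look like the following sketch, using the clean.sh script shown below (adjust the folder name to your setup):

# Remove the existing deployment
./clean.sh
# Apply the new delivery; database patches run automatically at server startup
kustomize build unblu-installation | kubectl apply -f -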

Listing 6. clean.sh for Kubernetes
#!/usr/bin/env bash

NAMESPACE="demo-latest"

read -p "Do you really want to clean environment \"$NAMESPACE\"? (y/N) " -n 1 -r
if [[ ! $REPLY =~ ^[yY]$ ]]
then
  exit 1
fi

echo ""
echo "Dropping Unblu"
kubectl delete deployment,pod -n $NAMESPACE -l "component = collaboration-server"
kubectl delete statefulset,pod -n $NAMESPACE -l "component in (kafka, zookeeper)" \
  --force --grace-period=0
kubectl delete deployment,statefulset,pod,service,configmap,persistentvolumeclaim,secret \
  -n $NAMESPACE -l "app = unblu"

read -p "Do you want to drop the metrics platform (Prometheus, Grafana) as well? (y/N) " -n 1 -r
echo

if [[ $REPLY =~ ^[yY]$ ]]
then
  kubectl delete deployment,pod,service,configmap,persistentvolumeclaim,secret \
    -n $NAMESPACE -l "app in (grafana, prometheus)"
fi

echo "Finished"
Listing 7. clean.sh for OpenShift
#!/usr/bin/env bash

oc whoami &>/dev/null
if [ "$?" != "0" ]
then
  echo "You are not logged in to any openshift cluster. Please login first (oc login) and select the correct project"
  exit 1
fi

if [[ ! $1 = "-f" ]]
then
  read -p "Do you want to delete the contents of $(oc project -q) (y/N) " -r
  echo

  if [[ ! $REPLY =~ ^[nNyY]?$ ]]
  then
    echo "Unexpected answer. Exiting"
    exit 2
  fi

  if [[ ! $REPLY =~ ^[yY]$ ]]
  then
    exit 0
  fi
fi

echo "Dropping Unblu"
oc delete deployment,pod -l "component = collaboration-server"
oc delete statefulset,pod -l "component in (kafka, zookeeper)" --force --grace-period=0
oc delete deployment,statefulset,pod,service,configmap,persistentvolumeclaim,secret -l "app = unblu"

read -p "Do you want to drop the metrics platform (Prometheus, Grafana) as well? (y/N) " -n 1 -r
echo

if [[ $REPLY =~ ^[yY]$ ]]
then
  oc delete deployment,pod,service,configmap,persistentvolumeclaim,secret -l "app in (grafana, prometheus)"
fi

echo "Finished"

Smoke test of an OpenShift installation

Once you have completed an OpenShift installation, you can verify it with the following procedure.

All of the listed instructions must succeed for the smoke test to pass. Perform the tests immediately after installation so that you don’t miss important log messages.

OpenShift deployment status

CLI

oc status

Success criteria

No errors reported.

Example

The following output is from an example project on the server https://example.intranet.ch:443.

$ oc status

svc/alertmanager - 10.1.1.130:80 -> 9093
  deployment/alertmanager deploys docker.io/prom/alertmanager:v0.16.1
    deployment #1 running for 6 days - 1 pod

svc/blackbox-exporter - 10.1.1.246:80 -> 9115
  deployment/blackbox-exporter deploys docker.io/prom/blackbox-exporter:v0.14.0
    deployment #1 running for 6 days - 1 pod

svc/collaboration-server - 10.1.1.113:9001
  deployment/collaboration-server deploys gcr.io/unblu-containerrepo-public/collaborationserver-centos7:6.0.0-beta.1
    deployment #1 running for 4 days - 1 pod

svc/glusterfs-dynamic-bd5fa376-fb0a-11e9-8274-00ffffffffff - 10.1.1.59:1
svc/glusterfs-dynamic-bd66cca4-fb0a-11e9-8274-00ffffffffff - 10.1.2.62:1
svc/glusterfs-cluster - 10.1.1.255:1

svc/grafana - 10.1.1.56:80 -> 3000
  deployment/grafana deploys docker.io/grafana/grafana:6.2.2
    deployment #1 running for 6 days - 1 pod

svc/haproxy - 10.1.1.9:8080
  deployment/haproxy deploys gcr.io/unblu-containerrepo-public/haproxy-public-centos7:1.9.5-0,docker.io/prom/haproxy-exporter:v0.10.0
    deployment #1 running for 6 days - 2 pods

svc/kafka-hs (headless):9092
svc/kafka - 10.1.1.66:9092
  statefulset/kafka manages gcr.io/unblu-containerrepo-public/collaborationserver-centos7:6.0.0-beta.1
    created 4 days ago - 3 pods

https://example.intranet.ch (redirects) to pod port 8080-tcp (svc/nginx)
  deployment/nginx deploys gcr.io/unblu-containerrepo-public/nginx-public-centos7:1.0,docker.io/nginx/nginx-prometheus-exporter:0.3.0
    deployment #1 running for 6 days - 2 pods

svc/prometheus - 10.1.1.108:80 -> 9090
  deployment/prometheus-server deploys docker.io/prom/prometheus:v2.10.0
    deployment #1 running for 6 days - 1 pod

svc/prometheus-kube-state-metrics - 10.1.1.121:80 -> 8080
  deployment/prometheus-kube-state-metrics deploys docker.io/kube-state-metrics:v1.5.0
    deployment #1 running for 6 days - 0/1 pods

svc/zookeeper-hs (headless) ports 2888, 3888
svc/zookeeper - 10.1.1.249:2181
  statefulset/zookeeper manages gcr.io/unblu-containerrepo-public/collaborationserver-centos7:6.0.0-beta.1
    created 4 days ago - 0/3 pods growing to 3

1 info identified, use 'oc status --suggest' to see details.

Unblu server startup status

CLI

$ oc logs <collaboration-server pod name>

On Unix/Linux systems, you can extract the pod name and filter for the relevant message in a single command:

$ oc logs $(oc get pods -l component=collaboration-server -o name | cut -d '/' -f 2) | grep "ready for requests"

Success criteria

A message containing "ready for requests" must exist in the logs.

Example
$ oc logs collaboration-server-123

{"message":"Initializing Timer ","logger":"org.eclipse.gemini.blueprint.extender.internal.support.ExtenderConfiguration$1","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"Start Level: Equinox Container: a46608a9-4214-4f0e-871a-a24812ffffff","@timestamp":"2019-11-01T13:55:15.463Z"}
{"message":"all bundles (247) started in 64039ms ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.753Z"}
{"message":"Removed down state INITIALIZING. New states [ENTITY_CONFIGURATION_IMPORTING] ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.753Z"}
{"message":"No entity import source configured ","logger":"com.unblu.core.server.entityconfig.internal.EntityConfigImport","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.756Z"}
{"message":"Removed down state ENTITY_CONFIGURATION_IMPORTING. New states [] ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.756Z"}
{"message":"product.com.unblu.universe.core 6.0.0-beta.1-WjNnGKRa ready for requests ","logger":"com.unblu.platform.server.core.UnbluPlatform","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"RxComputationThreadPool-1","@timestamp":"2019-11-01T13:55:15.756Z"}
{"message":"disabling the agentAvailability auto updating due to request inactivity ","logger":"com.unblu.core.server.livetracking.agent.internal.AgentAvailabilityService","severity":"INFO","user":"","client":"","page":"","request":"","execution":"","thread":"AgentAvail-timer","@timestamp":"2019-11-01T14:55:03.814Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"","client":"","page":"","request":"ROxknrGXQCuse2Q3CMFu2Q","execution":"","thread":"qtp1897380042-37","@timestamp":"2019-11-05T16:13:49.036Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"","client":"","page":"","request":"TXaZh7OxRhW2N6tFtRHJ9g","execution":"","thread":"qtp1897380042-42","@timestamp":"2019-11-05T16:13:49.067Z"}
{"message":"enabling agentAvailability auto updating ","logger":"com.unblu.core.server.livetracking.agent.internal.AgentAvailabilityService","severity":"INFO","user":"","client":"","page":"","request":"TXaZh7OxRhW2N6tFtRHJ9g","execution":"","thread":"qtp1897380042-42","@timestamp":"2019-11-05T16:13:49.087Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"","client":"","page":"","request":"_Drn9FNIRaODTCAqZiuSug","execution":"","thread":"qtp1897380042-37","@timestamp":"2019-11-05T16:14:07.306Z"}
{"message":"unsupported language: en-US falling back to en ","logger":"com.unblu.platform.server.clientsupport.internal.AbstractEntryPointWrapperServlet","severity":"WARN","user":"superadmin","client":"","page":"","request":"G5UzUttIRmCkc3QXUbR3Pw","execution":"","thread":"qtp1897380042-39","@timestamp":"2019-11-05T16:14:09.965Z"}
{"message":"sessionItem prepared: TrackingItem type: TRACKINGLIST status: OPEN id: null details: accountId=wZvcAnbBSpOps9oteH-Oxw&status=OPEN&type=AGENTFORWARDING session: hPAkysS1Qqa7V5DVLrth7w node: collaboration-server-559b6487c8-qzqkx node instance: 1x2j3Qn_T--dszMYT_MI8g created: Tue Nov 05 16:14:11 UTC 2019  ","logger":"com.unblu.core.server.collaboration.CollaborationSession","severity":"INFO","user":"","client":"","page":"","request":"UR8u7Fh6TJCRaeJfKBPmxA","execution":"CollaborationSessionStore","thread":"RxCachedThreadScheduler-1 - CollaborationSessionStore -  $ FixedContextScheduler#CollaborationSessionStore $ ","@timestamp":"2019-11-05T16:14:11.554Z"}

Check browser access

Browser
  • Open a browser and navigate to the Agent Desk domain, for example https://example.intranet.ch:443/app/desk.

  • Log in.

Success criteria

Unblu displays the login screen and, after you log in, the Agent Desk. There are no errors in the browser console.

JavaScript demo page and documentation

If you need to use the Unblu JavaScript demo page, you can activate it by setting com.unblu.server.resources.enableDemoResources to true. If you also want the Unblu docs available locally, set com.unblu.server.resources.enableDocResources to true.
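
In unblu-customer.properties, that amounts to the following two lines:

com.unblu.server.resources.enableDemoResources=true
com.unblu.server.resources.enableDocResources=true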