Version: v5.0

SMI Installation

Installation procedure steps for SMI#

To install the SMI, perform the following steps:

  1. Create DNS names
  2. Set up network connectivity rules
  3. Set up cloud access policies
  4. Set up Kubernetes network policy
  5. Add configurations to Prometheus
  6. Create Kubernetes secrets
  7. Configure Helm Charts

Click on each step for a detailed description.

Create DNS names#

Create the three DNS names listed in the table below and point them to the load balancer (or ingress method of your choice). The platform expects these DNS names, and their values are needed for the Helm Chart values.yaml configuration file.

| DNS | Usage | Example |
| --- | --- | --- |
| Web Applications | For static files served up for Reference, Console, and CAD Plugins | app.example-domain.com |
| REST API | For the REST APIs / server-side web application (for login) of the Platform Services | api.example-domain.com |
| Identity Service | For the Identity Service for the Platform | id.example-domain.com |
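
For example, if you terminate traffic with a Kubernetes Ingress, the three hostnames can be routed to the platform's front-end services. The sketch below is illustrative only: the ingress class, backend service names, and ports are placeholders, and the actual routing targets are defined by the Helm Charts.

```yaml
# Illustrative sketch only: backend service names and ports are placeholders,
# not the platform's actual service names.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: smi-platform
  namespace: smi                    # placeholder: the platform namespace
spec:
  ingressClassName: nginx           # or the ingress class of your choice
  rules:
    - host: app.example-domain.com  # Web Applications
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-frontend       # placeholder
                port:
                  number: 80
    - host: api.example-domain.com  # REST API
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rest-api           # placeholder
                port:
                  number: 80
    - host: id.example-domain.com   # Identity Service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: identity-service   # placeholder
                port:
                  number: 80
```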

Set up network connectivity rules#

Now set up the network connectivity rules required by the Platform. The rules differ between AWS and OCI, so refer to the appropriate table below.

Amazon Web Services#

| Source | Destination | Port | Description |
| --- | --- | --- | --- |
| Internet | Load Balancer | 80/tcp, 443/tcp | Allow incoming traffic from internet to load balancer to access cluster |
| Kubernetes | PostgreSQL or Proxy | 5432/tcp | Access to PostgreSQL cluster |
| Kubernetes | MongoDB | 27017/tcp | Access to MongoDB cluster |
| Kubernetes | Redis | 6379/tcp | Access to Redis cluster |
| Kubernetes | AWS MSK | 2181/tcp, 9096/tcp | Access to the ZooKeeper and bootstrap ports of the Kafka cluster |
| Kubernetes | AWS MQ | 1183/tcp, 61616/tcp | For MQTT and TCP access to ActiveMQ cluster |
| Kubernetes | Neo4j | 7687/tcp | For access to Neo4j |
| Kubernetes | SMTP | 587/tcp | Access to SMTP service (adjust ports as necessary for customer configuration) |
| Kubernetes | Internet | 443/tcp | Access for services to access public APIs |

Note: These rules do not include the ports required for the operation or management of the private networking gateways, clusters, etc.
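
How you implement these rules depends on how you manage your networking (console, Terraform, CloudFormation, and so on). As one illustration, if you use CloudFormation, the PostgreSQL rule from the AWS table might look like the sketch below; the security group IDs are placeholders.

```yaml
# Illustrative sketch only: security group IDs are placeholders.
# Allows the Kubernetes worker-node security group to reach PostgreSQL on 5432/tcp.
Resources:
  AllowEksToPostgres:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-0123456789abcdef0                # placeholder: PostgreSQL security group
      SourceSecurityGroupId: sg-0fedcba9876543210  # placeholder: EKS node security group
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432
      Description: Kubernetes access to PostgreSQL cluster
```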

Oracle Cloud Infrastructure#

| Source | Destination | Port | Description |
| --- | --- | --- | --- |
| Internet | Load Balancer | 80/tcp, 443/tcp | Allow incoming traffic from internet to load balancer to access cluster |
| Kubernetes | OCI Database PostgreSQL | 5432/tcp | Access to PostgreSQL cluster |
| Kubernetes | ScaleGrid MongoDB | 27017/tcp | Access to MongoDB cluster |
| Kubernetes | OCI Cache Redis | 6379/tcp | Access to Redis cluster |
| Kubernetes | Apache Kafka | 2181/tcp, 9096/tcp | Access to the bootstrap ports of the Kafka cluster |
| Kubernetes | Apache ActiveMQ | 1183/tcp, 61616/tcp | For MQTT and TCP access to ActiveMQ cluster |
| Kubernetes | Neo4j | 7687/tcp | For access to Neo4j |
| Kubernetes | SMTP | 587/tcp | Access to SMTP service (adjust ports as necessary for customer configuration) |
| Kubernetes | Internet | 443/tcp | Access for services to access public APIs |

Set up cloud access policies#

The platform requires a cloud access policy for each blob storage bucket. Refer to the sections below on setting up access policies for Amazon Web Services and Oracle Cloud Infrastructure.

Amazon Web Services access policies#

Note: The platform uses the AWS EKS Pod Identity Agent for credentials to access AWS services. Ensure that this EKS add-on is installed.

For AWS, you need to create a shared trust policy, an access policy for each of the following buckets, and a role for each of the Kubernetes service accounts listed in the table further below:

  • kafka
  • filesvc
  • scriptmanager
  • datasourcesvc
  • workflowsvc

Do the following:

  1. Create a trust policy for the AWS EKS Pod Identity Agent to be used on all roles. See below.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:TagSession",
        "sts:AssumeRole"
      ]
    }
  ]
}
```
  2. Create one policy for each of the buckets to allow access to find the bucket/location and read/write objects.

Note: This might require an additional statement to allow access to custom encryption keys if required by customers.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Buckets",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "${BUCKET_ARN}"
    },
    {
      "Sid": "Objects",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:GetObjectVersion",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "${BUCKET_ARN}/*"
    }
  ]
}
```
  3. Create a role for each of the Kubernetes service accounts in the table below and attach the policies as listed. (An example of associating a role with its service account follows the table.)
| Kubernetes Service Account | Policy |
| --- | --- |
| aisvc | kafka |
| datasourcesvc | kafka, datasourcesvc |
| filesvc-migrations | kafka, filesvc |
| filesvc | kafka, filesvc |
| itemsvc-migrations | kafka, filesvc |
| itemsvc-rdbms-migrations | kafka, filesvc |
| itemsvc-telemetry-worker | kafka, filesvc |
| itemsvc-worker | kafka, filesvc |
| itemsvc | kafka, filesvc |
| objectmodelsvc | kafka |
| passportsvc-migrations | kafka, filesvc |
| passportsvc | kafka, filesvc |
| platform-notificationsvc-api | kafka |
| platform-notificationsvc-worker | kafka |
| scriptmanager | kafka, scriptmanager |
| workflowsvc-api | kafka, workflowsvc |
| workflowsvc-backend | kafka, workflowsvc |
| workflowwkr-backend | kafka, workflowsvc |
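
How you create the Pod Identity associations between the roles and the service accounts is up to you (console, CLI, or infrastructure as code). As an illustration, if you use CloudFormation, an association for the filesvc service account might look like the sketch below; the cluster name, namespace, and role ARN are placeholders.

```yaml
# Illustrative sketch only: cluster name, namespace, and role ARN are placeholders.
Resources:
  FilesvcPodIdentityAssociation:
    Type: AWS::EKS::PodIdentityAssociation
    Properties:
      ClusterName: my-eks-cluster        # placeholder
      Namespace: smi                     # placeholder: the platform namespace
      ServiceAccount: filesvc
      RoleArn: arn:aws:iam::111122223333:role/filesvc-role  # placeholder
```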

Oracle Cloud Infrastructure access policies#

If deploying to OCI, only three statements are needed for a pod to access a bucket. Refer to the statements listed below:

```
Allow any-user to manage objectstorage-namespaces in compartment id ${COMPARTMENT_ID} where all { request.principal.type = 'workload', request.principal.namespace = '${NAMESPACE_NAME}', request.principal.service_account = '${SERVICE_ACCOUNT_NAME}', request.principal.cluster_id = '${KUBERNETES_CLUSTER_ID}' }

Allow any-user to manage buckets in compartment id ${COMPARTMENT_ID} where all { target.bucket.name = '${BUCKET_NAME}', request.permission = 'PAR_MANAGE', request.principal.type = 'workload', request.principal.namespace = '${NAMESPACE_NAME}', request.principal.service_account = '${SERVICE_ACCOUNT_NAME}', request.principal.cluster_id = '${KUBERNETES_CLUSTER_ID}' }

Allow any-user to manage objects in compartment id ${COMPARTMENT_ID} where all { target.bucket.name = '${BUCKET_NAME}', request.principal.type = 'workload', request.principal.namespace = '${NAMESPACE_NAME}', request.principal.service_account = '${SERVICE_ACCOUNT_NAME}', request.principal.cluster_id = '${KUBERNETES_CLUSTER_ID}' }
```

In the statements above, replace the following variables for each service account and bucket combination:

  • COMPARTMENT_ID - The Oracle Cloud ID (OCID) for the compartment that contains the Kubernetes cluster
  • BUCKET_NAME - The Object Storage bucket name
  • NAMESPACE_NAME - The Kubernetes namespace where the services are running
  • SERVICE_ACCOUNT_NAME - The Kubernetes service account name used by the pod needing access
  • KUBERNETES_CLUSTER_ID - The Oracle Cloud ID (OCID) for the Kubernetes cluster running the pods.

Use the table below to create rules for each service account and bucket combination. (Replace the bucket name with your actual bucket name).

| Kubernetes Service Account | Policy |
| --- | --- |
| aisvc | kafka |
| datasourcesvc | kafka, datasourcesvc |
| filesvc-migrations | kafka, filesvc |
| filesvc | kafka, filesvc |
| itemsvc-migrations | kafka, filesvc |
| itemsvc-rdbms-migrations | kafka, filesvc |
| itemsvc-telemetry-worker | kafka, filesvc |
| itemsvc-worker | kafka, filesvc |
| itemsvc | kafka, filesvc |
| objectmodelsvc | kafka |
| passportsvc-migrations | kafka, filesvc |
| passportsvc | kafka, filesvc |
| platform-notificationsvc-api | kafka |
| platform-notificationsvc-worker | kafka |
| scriptmanager | kafka, scriptmanager |
| workflowsvc-api | kafka, workflowsvc |
| workflowsvc-backend | kafka, workflowsvc |
| workflowwkr-backend | kafka, workflowsvc |

Set up Kubernetes network policy#

The Script Worker pods execute JavaScript code created by application developers on the Platform, so you need to isolate the Script Worker pods from the rest of Kubernetes and from the private subnets. However, the Script Worker must still be able to communicate with the Script Manager.

Refer to the example network policy below, which uses Calico as the implementation.

```yaml
apiVersion: crd.projectcalico.org/v1
kind: NetworkPolicy
metadata:
  name: scriptworker-policy
spec:
  selector: app.kubernetes.io/name == 'dtplatform-scriptworker'
  types:
    - Ingress
    - Egress
  order: 1000
  egress:
    - action: Allow
      protocol: UDP
      source: {}
      destination:
        ports:
          - 53
    - action: Allow
      protocol: TCP
      source: {}
      destination:
        ports:
          - 53
    - action: Allow
      source: {}
      destination:
        selector: app.kubernetes.io/name == 'dtplatform-scriptmanager'
    - action: Deny
      source: {}
      destination:
        nets:
          - 10.0.0.0/8
          - 169.254.0.0/16
          - 172.16.0.0/12
          - 192.168.0.0/16
```

Add configurations to Prometheus#

To ensure that the platform’s Script Worker will scale properly, you need to add two configurations to Prometheus.

Do the following:

  1. If using the kube-prometheus-stack with Custom Resource Definitions (CRDs), add a ServiceMonitor for the scriptmanager.

Example:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: scriptmanager-${var.env_name}
  namespace: ${var.prometheus_namespace}
  labels:
    release: ${var.prometheus_release_label_value}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: scriptmanager-${var.env_name}
  namespaceSelector:
    matchNames:
      - ${var.env_name}
  endpoints:
    - port: http-web
      interval: 5s
```
  2. Configure this data as an external metric rule in the Prometheus Adapter to combine the values into a single external metric, as shown below.
```yaml
- seriesQuery: '{__name__=~"scriptmanager_job_queue_size",service!=""}'
  metricsQuery: avg(<<.Series>>{<<.LabelMatchers>>}) by (service)
  resources:
    overrides: { namespace: {resource: "namespace"} }
```

Note: This step is required because the Script Worker horizontal pod autoscaler (HPA) scales on a value scraped from the scriptmanager pods; the per-pod values must be combined into a single external metric for the HPA to consume. (An illustrative HPA sketch follows the example below.)

  3. If using the prometheus-adapter Helm chart, add the rule described above to your values.yaml file.
```yaml
rules:
  external:
    - seriesQuery: '{__name__=~"scriptmanager_job_queue_size",service!=""}'
      metricsQuery: avg(<<.Series>>{<<.LabelMatchers>>}) by (service)
      resources:
        overrides: { namespace: {resource: "namespace"} }
```
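
For context only, a simplified sketch of an HPA that consumes such an external metric is shown below. The Script Worker's actual HPA is installed by the platform's Helm Charts; the deployment name, replica counts, and target value here are placeholders.

```yaml
# Illustrative sketch only: the real HPA ships with the platform's Helm Charts.
# Deployment name, replica counts, and target value are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scriptworker-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dtplatform-scriptworker    # placeholder
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: scriptmanager_job_queue_size
        target:
          type: AverageValue
          averageValue: "5"          # placeholder threshold
```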

Create Kubernetes secrets#

Now you need to create the Kubernetes secrets. Almost every service needs a Kubernetes secret in the same namespace containing sensitive values for external systems, as well as encryption keys, keypairs, or certificates needed by the Platform.

Note: Refer to the page Notes on Kubernetes secrets for useful information.

Create a new Kubernetes namespace for your new installation (only one Digital Twin Platform per namespace) and then create the secrets.

Some of these secrets are generated by the AWS managed services, while others are user-generated. Either way, it is recommended to generate these values with a tool such as Terraform and store them in AWS Secrets Manager for long-term storage. The External Secrets Operator can be helpful for this part of the installation.

Note: Many of the secrets are encryption keys. If lost, encrypted data is not recoverable.
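
For example, if you use the External Secrets Operator with a store backed by AWS Secrets Manager, a secret can be synced into the namespace with a manifest like the sketch below. The store, secret, and key names are placeholders; the actual secret names and keys required by the platform are listed in Notes on Kubernetes secrets.

```yaml
# Illustrative sketch only: store name, secret names, and keys are placeholders.
# The required secret names and keys are documented in Notes on Kubernetes secrets.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: filesvc-secrets              # placeholder
  namespace: smi                     # placeholder: the platform namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager        # placeholder ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: filesvc-secrets            # name of the Kubernetes Secret to create
    creationPolicy: Owner
  data:
    - secretKey: encryption-key      # key in the Kubernetes Secret (placeholder)
      remoteRef:
        key: smi/filesvc/encryption-key   # entry in AWS Secrets Manager (placeholder)
```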

Configure Helm Charts#

The final step is to configure the Helm Charts. The main part of this process is configuring the values.yaml files that Helm uses for the installation.

Note: Configuration information for the Helm Charts can be found in Notes on Helm Charts. Refer to that page and configure your Helm Charts as required.
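
The exact structure of the values.yaml files is defined by the charts and documented in Notes on Helm Charts. Purely as an illustration, the DNS names created in step 1 end up in values along the lines of the hypothetical sketch below; the keys shown are placeholders, not the charts' actual keys.

```yaml
# Hypothetical sketch only: these keys are illustrative placeholders,
# not the actual keys used by the SMI Helm Charts.
global:
  domains:
    webApp: app.example-domain.com
    restApi: api.example-domain.com
    identity: id.example-domain.com
```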