SMI Installation
Installation procedure steps for SMI#
To install the SMI, perform the following steps:
- Create DNS names
- Set up network connectivity rules
- Set up cloud access policies
- Set up Kubernetes network policy
- Add configurations to Prometheus
- Create Kubernetes secrets
- Configure Helm Charts
Each step is described in detail in the sections below.
Create DNS names#
Create the three DNS names listed in the table below and point them to the load balancer (or ingress method of your choice). The platform expects these DNS names, and their values are needed for the Helm Chart values.yaml configuration file.
| DNS | Usage | Example |
|---|---|---|
| Web Applications | For static files served up for Reference, Console, and CAD Plugins. | app.example-domain.com |
| REST API | For the REST APIs / server-side web application (for login) of the Platform Services | api.example-domain.com |
| Identity Service | For the Identity Service for the Platform | id.example-domain.com |
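How these hostnames are wired in depends on the Helm Charts (see Notes on Helm Charts). The following is a purely hypothetical sketch of where they might appear in values.yaml; only the example hostnames come from the table above, and the key names are placeholders:

```yaml
# Hypothetical values.yaml excerpt -- the real key names are defined in the
# platform Helm Charts (see Notes on Helm Charts). Only the hostnames shown
# here come from the DNS table above.
ingress:
  webApp:
    host: app.example-domain.com   # Web Applications (Reference, Console, CAD Plugins)
  restApi:
    host: api.example-domain.com   # REST APIs / server-side web application
  identity:
    host: id.example-domain.com    # Identity Service
```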
Set up network connectivity rules#
Now set up the network connectivity rules required by the Platform. The tables below list the rules for AWS and OCI, which differ.
Amazon Web Services#
| Source | Destination | Port | Description |
|---|---|---|---|
| Internet | Load Balancer | 80/tcp 443/tcp | Allow incoming traffic from internet to load balancer to access cluster |
| Kubernetes | PostgreSQL or Proxy | 5432/tcp | Access to PostgreSQL cluster |
| Kubernetes | MongoDB | 27017/tcp | Access to MongoDB cluster |
| Kubernetes | Redis | 6379/tcp | Access to Redis Cluster |
| Kubernetes | AWS MSK | 2181/tcp 9096/tcp | Access to the ZooKeeper and bootstrap ports of the Kafka cluster |
| Kubernetes | AWS MQ | 1183/tcp 61616/tcp | For MQTT and TCP access to ActiveMQ cluster |
| Kubernetes | Neo4j | 7687/tcp | For access to Neo4j |
| Kubernetes | SMTP | 587/tcp | Access to SMTP service (adjust ports as necessary for customer configuration) |
| Kubernetes | Internet | 443/tcp | Access for services to access public APIs |
Note: These rules do not include the ports required for the operation or management of the private networking gateways, clusters, etc.
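How these rules are realized depends on your VPC design. As one sketch, the Kubernetes-to-PostgreSQL rule could be added to a security group with the AWS CLI; the security group IDs below are placeholders for your node and database groups:

```bash
# Sketch: allow the EKS node security group to reach PostgreSQL on 5432/tcp.
# sg-0123nodes and sg-0123postgres are placeholders for your own security groups.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123postgres \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0123nodes
```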
Oracle Cloud Infrastructure#
| Source | Destination | Port | Description |
|---|---|---|---|
| Internet | Load Balancer | 80/tcp 443/tcp | Allow incoming traffic from internet to load balancer to access cluster |
| Kubernetes | OCI Database PostgreSQL | 5432/tcp | Access to PostgreSQL cluster |
| Kubernetes | ScaleGrid MongoDB | 27017/tcp | Access to MongoDB cluster |
| Kubernetes | OCI Cache Redis | 6379/tcp | Access to Redis cluster |
| Kubernetes | Apache Kafka | 2181/tcp 9096/tcp | Access to the ZooKeeper and bootstrap ports of the Kafka cluster |
| Kubernetes | Apache ActiveMQ | 1183/tcp 61616/tcp | For MQTT and TCP access to ActiveMQ cluster |
| Kubernetes | Neo4j | 7687/tcp | For access to Neo4j |
| Kubernetes | SMTP | 587/tcp | Access to SMTP service (adjust ports as necessary for customer configuration) |
| Kubernetes | Internet | 443/tcp | Access for services to access public APIs |
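On OCI, the same rules can be expressed as network security group (NSG) rules or security list entries. The following is a sketch for the PostgreSQL rule, assuming NSGs are used; the NSG OCIDs are placeholders and the rule fields follow the OCI SecurityRule model:

```bash
# Sketch: allow the worker-node NSG to reach PostgreSQL on 5432/tcp.
# Both NSG OCIDs are placeholders for your own network security groups.
oci network nsg rules add \
  --nsg-id ocid1.networksecuritygroup.oc1..dbnsg \
  --security-rules '[{
    "direction": "INGRESS",
    "protocol": "6",
    "source": "ocid1.networksecuritygroup.oc1..workernsg",
    "sourceType": "NETWORK_SECURITY_GROUP",
    "tcpOptions": {"destinationPortRange": {"min": 5432, "max": 5432}}
  }]'
```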
Set up cloud access policies#
The platform requires a cloud access policy for each blob storage bucket. Refer to the sections below on setting up access policies for Amazon Web Services and Oracle Cloud Infrastructure.
Amazon Web Services access policies#
Note: The platform uses the AWS EKS Pod Identity Agent to obtain credentials for accessing AWS services. Ensure that this EKS add-on is installed.
For AWS, you need to create a role, trust policy, and access policy for each of the following buckets:
- kafka
- filesvc
- scriptmanager
- datasourcesvc
- workflowsvc
Do the following:
- Create a trust policy for the AWS EKS Pod Identity Agent to be used on all roles. See below.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "pods.eks.amazonaws.com" }, "Action": [ "sts:TagSession", "sts:AssumeRole" ] } ]}- Create one policy for each of the buckets to allow access to find the bucket/location and read/write objects.
Note: This might require an additional statement to allow access to custom encryption keys if required by customers.
{ "Statement": [ { "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Effect": "Allow", "Resource": "${BUCKET_ARN}", "Sid": "Buckets" }, { "Action": [ "s3:PutObject", "s3:ListMultipartUploadParts", "s3:GetObjectVersion", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:AbortMultipartUpload" ], "Effect": "Allow", "Resource": "${BUCKET_ARN}/*", "Sid": "Objects" } ], "Version": "2012-10-17"}- Create a role for each of the Kubernetes service accounts in the table below and attach policies as listed.
| Kubernetes Service Account | Policy |
|---|---|
| aisvc | kafka |
| datasourcesvc | kafka, datasourcesvc |
| filesvc-migrations | kafka, filesvc |
| filesvc | kafka, filesvc |
| itemsvc-migrations | kafka, filesvc |
| itemsvc-rdbms-migrations | kafka, filesvc |
| itemsvc-telemetry-worker | kafka, filesvc |
| itemsvc-worker | kafka, filesvc |
| itemsvc | kafka, filesvc |
| objectmodelsvc | kafka |
| passportsvc-migrations | kafka, filesvc |
| passportsvc | kafka, filesvc |
| platform-notificationsvc-api | kafka |
| platform-notificationsvc-worker | kafka |
| scriptmanager | kafka, scriptmanager |
| workflowsvc-api | kafka, workflowsvc |
| workflowsvc-backend | kafka, workflowsvc |
| workflowwkr-backend | kafka, workflowsvc |
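As a sketch of the per-service-account setup, assuming the trust and access policy documents above have been saved locally, and that the cluster name, namespace, role names, and policy ARNs below are placeholders:

```bash
# Sketch for one service account (filesvc); repeat for each row of the table.
# Cluster name, namespace, role names, and policy ARNs are placeholders.
aws iam create-role \
  --role-name dtplatform-filesvc \
  --assume-role-policy-document file://pod-identity-trust-policy.json

aws iam attach-role-policy \
  --role-name dtplatform-filesvc \
  --policy-arn arn:aws:iam::123456789012:policy/kafka

aws iam attach-role-policy \
  --role-name dtplatform-filesvc \
  --policy-arn arn:aws:iam::123456789012:policy/filesvc

# Associate the role with the Kubernetes service account via EKS Pod Identity.
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace dtplatform \
  --service-account filesvc \
  --role-arn arn:aws:iam::123456789012:role/dtplatform-filesvc
```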
Oracle Cloud Infrastructure access policies#
If deploying to OCI, only three policy statements are needed for a pod to access a bucket. The required statements are listed below:
```
Allow any-user to manage objectstorage-namespaces in compartment id ${COMPARTMENT_ID} where all { request.principal.type = 'workload', request.principal.namespace = '${NAMESPACE_NAME}', request.principal.service_account = '${SERVICE_ACCOUNT_NAME}', request.principal.cluster_id = '${KUBERNETES_CLUSTER_ID}' }

Allow any-user to manage buckets in compartment id ${COMPARTMENT_ID} where all { target.bucket.name = '${BUCKET_NAME}', request.permission = 'PAR_MANAGE', request.principal.type = 'workload', request.principal.namespace = '${NAMESPACE_NAME}', request.principal.service_account = '${SERVICE_ACCOUNT_NAME}', request.principal.cluster_id = '${KUBERNETES_CLUSTER_ID}' }

Allow any-user to manage objects in compartment id ${COMPARTMENT_ID} where all { target.bucket.name = '${BUCKET_NAME}', request.principal.type = 'workload', request.principal.namespace = '${NAMESPACE_NAME}', request.principal.service_account = '${SERVICE_ACCOUNT_NAME}', request.principal.cluster_id = '${KUBERNETES_CLUSTER_ID}' }
```
In the statements above, replace the following variables with each service account and bucket combination:
- `COMPARTMENT_ID` - The Oracle Cloud ID (OCID) for the compartment that contains the Kubernetes cluster
- `BUCKET_NAME` - The Object Storage bucket name
- `NAMESPACE_NAME` - The Kubernetes namespace where the services are running
- `SERVICE_ACCOUNT_NAME` - The Kubernetes service account name used by the pod needing access
- `KUBERNETES_CLUSTER_ID` - The Oracle Cloud ID (OCID) for the Kubernetes cluster running the pods
Use the table below to create rules for each service account and bucket combination. (Replace the bucket name with your actual bucket name.) A sketch of creating one such policy with the OCI CLI follows the table.
| Kubernetes Service Account | Policy |
|---|---|
| aisvc | kafka |
| datasourcesvc | kafka, datasourcesvc |
| filesvc-migrations | kafka, filesvc |
| filesvc | kafka, filesvc |
| itemsvc-migrations | kafka, filesvc |
| itemsvc-rdbms-migrations | kafka, filesvc |
| itemsvc-telemetry-worker | kafka, filesvc |
| itemsvc-worker | kafka, filesvc |
| itemsvc | kafka, filesvc |
| objectmodelsvc | kafka |
| passportsvc-migrations | kafka, filesvc |
| passportsvc | kafka, filesvc |
| platform-notificationsvc-api | kafka |
| platform-notificationsvc-worker | kafka |
| scriptmanager | kafka, scriptmanager |
| workflowsvc-api | kafka, workflowsvc |
| workflowsvc-backend | kafka, workflowsvc |
| workflowwkr-backend | kafka, workflowsvc |
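As a sketch, assuming the three statements for one service-account/bucket combination have been saved as a JSON array of strings, the policy could be created with the OCI CLI; the compartment OCID and names below are placeholders:

```bash
# Sketch: create the policy holding the three statements for one
# service-account/bucket combination. The OCID, name, and file are placeholders.
oci iam policy create \
  --compartment-id ocid1.compartment.oc1..example \
  --name dtplatform-filesvc-bucket-access \
  --description "filesvc access to the filesvc bucket" \
  --statements file://filesvc-bucket-statements.json
```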
Set up Kubernetes network policy#
The Script Worker pods execute JavaScript code created by application developers on the Platform. You need to isolate the Script Worker pods from the rest of Kubernetes and from the private subnets; however, the Script Worker must still be able to communicate with the Script Manager.
Refer to the example network policy listed below which uses Calico as the implementation.
```yaml
apiVersion: crd.projectcalico.org/v1
kind: NetworkPolicy
metadata:
  name: scriptworker-policy
spec:
  selector: app.kubernetes.io/name == 'dtplatform-scriptworker'
  types:
    - Ingress
    - Egress
  order: 1000
  egress:
    - action: Allow
      protocol: UDP
      source: {}
      destination:
        ports:
          - 53
    - action: Allow
      protocol: TCP
      source: {}
      destination:
        ports:
          - 53
    - action: Allow
      source: {}
      destination:
        selector: app.kubernetes.io/name == 'dtplatform-scriptmanager'
    - action: Deny
      source: {}
      destination:
        nets:
          - 10.0.0.0/8
          - 169.254.0.0/16
          - 172.16.0.0/12
          - 192.168.0.0/16
```
Add configurations to Prometheus#
To ensure that the platform’s Script Worker will scale properly, you need to add two configurations to Prometheus.
Do the following:
- If using the `kube-prometheus-stack` with Custom Resource Definitions (CRDs), add a `ServiceMonitor` for the scriptmanager.
Example:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: scriptmanager-${var.env_name}
  namespace: ${var.prometheus_namespace}
  labels:
    release: ${var.prometheus_release_label_value}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: scriptmanager-${var.env_name}
  namespaceSelector:
    matchNames:
      - ${var.env_name}
  endpoints:
    - port: http-web
      interval: 5s
```
- Now configure this data as an external metric rule in the Prometheus Adapter to combine the values into a single external metric, as shown below.
```yaml
- seriesQuery: '{__name__=~"scriptmanager_job_queue_size",service!=""}'
  metricsQuery: avg(<<.Series>>{<<.LabelMatchers>>}) by (service)
  resources:
    overrides: { namespace: {resource: "namespace"} }
```
Note: This step is required because the scriptworker horizontal pod autoscaler (HPA) uses a value scraped from the scriptmanager pods, and that value must be combined into a single metric for the HPA to consume.
- If using the `prometheus-adapter` Helm chart, add the rule described above to your `values.yaml` file:
```yaml
rules:
  external:
    - seriesQuery: '{__name__=~"scriptmanager_job_queue_size",service!=""}'
      metricsQuery: avg(<<.Series>>{<<.LabelMatchers>>}) by (service)
      resources:
        overrides: { namespace: {resource: "namespace"} }
```
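Once the external metric is exposed, the scriptworker HPA can consume it. The following is a minimal sketch using autoscaling/v2; the deployment name, replica bounds, and target value are assumptions, not values from this installation:

```yaml
# Minimal sketch of an HPA consuming the external metric exposed above.
# Deployment name, replica bounds, and target value are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scriptworker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dtplatform-scriptworker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: scriptmanager_job_queue_size
        target:
          type: AverageValue
          averageValue: "5"
```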
Create Kubernetes secrets#
Now you need to create the Kubernetes secrets. Almost every service needs a Kubernetes secret in the same namespace containing sensitive values for external systems, as well as encryption keys, keypairs, or certificates needed by the Platform.
Note: Refer to the page Notes on Kubernetes secrets for useful information.
Create a new Kubernetes namespace for your new installation (only one Digital Twin Platform per namespace) and then create the secrets.
Some of these secrets are generated by the AWS managed services, while others are user generated. In either case, it is suggested that these values be generated by a tool such as Terraform and stored in AWS Secrets Manager for long-term storage. The External Secrets Operator can also be helpful for this part of the installation.
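If the External Secrets Operator is used, each Kubernetes secret can be materialized from AWS Secrets Manager. A minimal sketch follows, assuming a SecretStore named aws-secrets-manager already exists; the namespace, secret names, and Secrets Manager key are placeholders:

```yaml
# Minimal sketch: materialize one platform secret from AWS Secrets Manager
# via the External Secrets Operator. Names and the Secrets Manager key are
# placeholders; a SecretStore called aws-secrets-manager is assumed to exist.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: filesvc-secrets
  namespace: dtplatform
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: filesvc-secrets          # the Kubernetes Secret that will be created
  dataFrom:
    - extract:
        key: dtplatform/filesvc    # the AWS Secrets Manager entry
```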
Note: Many of the secrets are encryption keys. If lost, encrypted data is not recoverable.
Configure Helm Charts#
The final step is to configure the Helm Charts. The main part of this process is configuring the values.yaml files that Helm uses for the installation.
Note: Configuration information for the Helm Charts can be found in Notes on Helm Charts. Refer to that page and configure your Helm Charts as required.
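With the values.yaml files configured, the release can then be installed. A sketch follows; the chart reference, release name, and namespace are placeholders:

```bash
# Sketch: install or upgrade the platform release with the configured values.
# The chart reference, release name, and namespace are placeholders.
helm upgrade --install dtplatform ./dtplatform-chart \
  --namespace dtplatform \
  --values values.yaml
```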