View Issue Details

ID: 0007216
Project: 10000-002: Security
Category: Spec
View Status: public
Last Update: 2024-06-14 20:43
Reporter: kristian mo
Assigned To: Paul Hunkar
Priority: normal
Severity: tweak
Reproducibility: always
Status: acknowledged
Resolution: reopened
Fixed in Version: 1.05.04 RC1
Summary: 0007216: Support kubernetes deployment
Description

A "newer" deployment pattern in IT is containers on kubernetes. In kubernetes the hostname of an opc ua server is the pod name. But the pod name is very transient and changes every deployment/change making it hard to present clients with the same certificate, the end result is a server which changes certificate fairly often, as the hostname changes. This is deployed as a single instance.

If one were to take advantage of more kubernetes capabilities and deploy many servers in parallel for horizontal scaling then a client would see different certificates depending on the request routing.

It would be good if the security model would support this deployment pattern.
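For illustration, a minimal sketch assuming the server derives the HostName in its certificate from the pod hostname; the pod template can pin that hostname so it no longer follows the generated pod name. All names and images below are placeholders, not taken from this report.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: opcua-server                      # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opcua-server
  template:
    metadata:
      labels:
        app: opcua-server
    spec:
      hostname: opcua-server              # fixed hostname instead of the generated pod name
      containers:
        - name: server
          image: example/opcua-server:1.0 # placeholder image
          ports:
            - containerPort: 4840         # registered OPC UA TCP port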

Steps To Reproduce
  1. Make a server in a Docker container
  2. Deploy it on Kubernetes with a persistent volume for the PKI folder (see the sketch after this list)
  3. Connect with a client like UAExpert and subscribe to some values
  4. Make a new version of the same server in a container
  5. Deploy the new version on Kubernetes
  6. The client cannot automatically reconnect because the certificate has changed
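A minimal sketch of step 2, assuming the server keeps its certificate store in a PKI folder at /opt/server/pki (path, names and image are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: opcua-pki                          # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Mi
---
# Fragment of the Deployment pod template: mount the claim at the PKI folder
# so certificates and trust lists survive restarts and new server versions.
spec:
  containers:
    - name: server
      image: example/opcua-server:1.1      # placeholder image
      volumeMounts:
        - name: pki
          mountPath: /opt/server/pki       # assumed PKI folder path
  volumes:
    - name: pki
      persistentVolumeClaim:
        claimName: opcua-pki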
Tags: Certificate Management, Docker, Kubernetes, Networking
Commit Version
Fix Due Date

Relationships

related to 0009332 (assigned to Matthias Damm) - 10000-004: Services - Support kubernetes deployment

Activities

Jim Luth

2021-11-30 17:28

administrator   ~0015437

Needs to be cloned to Part 4. The Part 4 redundancy section describes a pool of hot redundant servers that can be accessed round robin by clients for load balancing. This works where the servers in the hot set all have certs from the same CA and it is the CA that is trusted by the Clients (not the individual certs).

Paul Hunkar

2024-01-03 06:14

developer   ~0020559

Added text explaining that unique certificates are required and that, if CA-based certificates are used, the trust issue can be easily handled.

Matthias Damm

2024-01-03 09:11

developer   ~0020565

I am not sure why any solution that addresses the original question would depend on what the hostname/DNS name in the certificate is.

The DNS name in the certificate is used by the client to compare the hostname in the EndpointUrl used for the connection against the certificate.
Does this DNS name of the server really change?
Are the internal names used by clients to connect?

Paul Hunkar

2024-01-08 03:52

developer   ~0020586

As I understand it, there are a couple of possibilities depending on where the client is and how the application is spun up, but typically each container is a lightweight virtual machine. Containers can connect to other containers, a host, or an external network. Containers are deployed in a pod on a host. If two containers are in the same pod, they can share storage and networking resources; each pod gets an IP address. A Server and a Client that are in different containers but in the same pod would communicate via localhost and a port number. A virtual machine (or a physical one) can have multiple pods on it. Communication between pods looks like a normal network connection, but it goes over a virtual network (the Kubernetes network) on that node.

As I understand it, there is a special DNS that further processes IP addresses when communication is between nodes on a single machine. If communication is to a node that is not part of the virtual network on the given node, it is transferred to the external network for standard network processing. Kubernetes clusters have a specialized service for DNS resolution. A service automatically receives a domain name in the form <service>.<namespace>.svc.cluster.local. This service is required to allow external access to the containers.
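For illustration, a minimal sketch of such a Service (service and namespace names are placeholders); inside the cluster it resolves as <service>.<namespace>.svc.cluster.local:

apiVersion: v1
kind: Service
metadata:
  name: opcua                              # placeholder service name
  namespace: plant1                        # placeholder namespace
spec:
  selector:
    app: opcua-server
  ports:
    - name: opc-tcp
      port: 4840                           # registered OPC UA TCP port
      targetPort: 4840

Clients inside the cluster could then use an EndpointUrl such as opc.tcp://opcua.plant1.svc.cluster.local:4840, which stays stable across pod restarts and new deployments.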

Matthias Damm

2024-01-08 07:41

developer   ~0020587

Last edited: 2024-01-16 17:24

There is a related note from Jouni on 0009332:
The containers can share the hostname, although it's not necessarily straightforward how to accomplish it (via a configuration file on the persistent volume or the certificate - or an argument to the container if it's started manually from the Docker host).

They will need to share the connection address anyway, so that the clients can connect.

But, maybe it's a good idea to describe how the containerised applications should be configured.
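One possible way to hand a shared hostname to every container, sketched here with placeholder names (whether the server reads the value from a file, an environment variable, or a command-line argument is application-specific):

apiVersion: v1
kind: ConfigMap
metadata:
  name: opcua-server-config                # placeholder name
data:
  SERVER_HOSTNAME: opcua.plant1.svc.cluster.local   # assumed shared hostname
---
# Fragment of the pod template: expose the shared value to the server process.
spec:
  containers:
    - name: server
      image: example/opcua-server:1.1      # placeholder image
      envFrom:
        - configMapRef:
            name: opcua-server-config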

Paul Hunkar

2024-01-17 03:09

developer   ~0020642

Had a discussion in the UA call today - resulting notes (will also add some notes to the Part 4 issue):

  • Don't use specific technologies in descriptions (i.e. discuss containers, not Kubernetes). The discussion could also include VMs, especially when deployed behind a NAT (the cloud has a number of VMs running, but externally they appear as one address, just different ports).

Paul Hunkar

2024-05-23 13:08

developer   ~0021228

The following update was proposed for Part 2, but pulled until additional text is provided in other parts (first to describe containers).

6.20 Container related deployment issues
The use of containers for the deployment of applications (both Clients and Servers) is a growing trend. Containers can be used for scaling, in that multiple instances of the same OPC UA Application can be activated, each in its own container. Containers can also be used for hot redundant Servers, where each container is a distinct UA Server that can be used for load sharing; hot redundant systems are further described in OPC 10000-4. All distinct OPC UA Applications are required to have a unique security footprint: even if multiple instances of the same OPC UA Application are running, each shall have a unique ApplicationInstance and utilize a unique ApplicationInstanceCertificate that describes the unique instance. The deployment of a new container instance would require security configuration to allow the new OPC UA Application to be trusted, unless the ApplicationInstanceCertificates are all CA-based and the CA is trusted. In that case any new container would automatically be trusted.
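To make the CA-based option concrete, a minimal sketch assuming cert-manager is deployed in the cluster and a CA Issuer named opcua-ca, trusted by the clients, already exists (all names and URIs are placeholders); each application instance requests its own certificate signed by that CA:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: opcua-server-instance-1            # one Certificate per application instance
spec:
  secretName: opcua-server-instance-1-tls
  commonName: opcua.plant1.svc.cluster.local
  dnsNames:
    - opcua.plant1.svc.cluster.local
  uris:
    - "urn:example:opcua:server:instance-1"  # assumed ApplicationUri of this instance
  issuerRef:
    name: opcua-ca                         # shared CA Issuer trusted by the clients
    kind: Issuer

How the issued certificate and key are loaded into the server's PKI store is application-specific and outside the scope of this sketch.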

Issue History

Date Modified Username Field Change
2021-09-07 07:26 kristian mo New Issue
2021-09-07 07:26 kristian mo Tag Attached: Certificate Management
2021-09-07 07:26 kristian mo Tag Attached: Docker
2021-09-07 07:26 kristian mo Tag Attached: Kubernetes
2021-09-07 07:26 kristian mo Tag Attached: Networking
2021-11-30 17:28 Jim Luth Note Added: 0015437
2021-11-30 17:28 Jim Luth Assigned To => Paul Hunkar
2021-11-30 17:28 Jim Luth Status new => assigned
2023-12-22 16:04 Paul Hunkar Issue cloned: 0009332
2023-12-22 16:04 Paul Hunkar Relationship added related to 0009332
2024-01-03 06:14 Paul Hunkar Status assigned => resolved
2024-01-03 06:14 Paul Hunkar Resolution open => fixed
2024-01-03 06:14 Paul Hunkar Fixed in Version => 1.05.04 RC1
2024-01-03 06:14 Paul Hunkar Note Added: 0020559
2024-01-03 09:11 Matthias Damm Status resolved => feedback
2024-01-03 09:11 Matthias Damm Resolution fixed => reopened
2024-01-03 09:11 Matthias Damm Note Added: 0020565
2024-01-08 03:52 Paul Hunkar Note Added: 0020586
2024-01-08 07:41 Matthias Damm Note Added: 0020587
2024-01-08 07:41 Matthias Damm Note Edited: 0020587
2024-01-16 17:24 Paul Hunkar Note Edited: 0020587
2024-01-17 03:09 Paul Hunkar Note Added: 0020642
2024-05-23 13:08 Paul Hunkar Note Added: 0021228
2024-06-14 20:43 Paul Hunkar Status feedback => acknowledged