This is the Hamburg Urban Data Platform - IoT Helm chart (HH-UDP-IoT) repository and cookbook. It provides components for the real-time data infrastructure of the Urban Data Platform.
The cookbook comprises a guide to deploy a FRaunhofer Opensource SensorThings-Server (FROST-Server) along with Keycloak, an identity provider for FROST, and utility applications, providing an integrated, OGC SensorThings API-conformant, production-ready (TLS-encrypted) environment in your Kubernetes cluster.
The FROST-Server is the first complete open-source implementation of the OGC SensorThings API standard. Keycloak is an OpenID Connect implementation for identity and access management developed by Red Hat. It is used to administer user roles (e.g. read, create, update) and to manage users with different access privileges to the FROST-Server.
For production-ready usage this chart installs an NGINX Ingress resource to connect your cloud provider's load balancer DNS name and IP address to the FROST-Server and Keycloak instances. Client-server connections are encrypted by default. The jetstack cert-manager provides and renews TLS certificates from Let's Encrypt fully automatically.
This Readme provides you with the basic setup and installation steps necessary to install the HH-UDP-IoT Helm chart into your Kubernetes cluster.
<img src="pics/flag_yellow_low.jpg" alt="drawing" width="180"/> | This Repository is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 731297. |
The HH-UDP-IoT Helm chart deploys the three FROST-Server main components HTTP, MQTT and Bus to your Kubernetes cluster. For detailed information about its architecture please consult the official FROST-Server documentation page.
With our Helm chart the HTTP component can optionally be deployed twice. This is reasonable if you have more than one database server, i.e. one primary and one or more read replicas. By connecting your public DNS to the external HTTP deployment, it can be used exclusively for read access, whereas the internal HTTP deployment is connected to the primary database server and is used for writing to your database. Below is a diagram displaying the application structure; dashed components are optional.
![app](pics/hh-udp-iot-helm-chart-v3-2-x.svg "application-structure")
HH-UDP-IoT FROST-Server components and connections
```bash
helm repo add stable https://charts.helm.sh/stable
helm repo add codecentric https://codecentric.github.io/helm-charts
helm repo add jetstack https://charts.jetstack.io
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add hh-udp https://api.bitbucket.org/2.0/repositories/geowerkstatt-hamburg/hh-udp-iot/src/master/
helm repo list
helm search repo hh-udp/
# update the helm repositories
helm repo update
```
To receive TLS certificates automatically, install cert-manager by following the steps provided here: https://cert-manager.io/docs/installation/
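If you prefer installing it via Helm instead of the plain manifests, a typical installation looks roughly like this; the chart version handling and the CRD flag name depend on your cert-manager version, so treat this as a sketch and follow the linked documentation for the current procedure:
```bash
# install cert-manager from the jetstack repository added above
# (the CRD flag name may differ between cert-manager versions; see the docs)
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```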
The first thing you'll need to configure after you've installed cert-manager is a ClusterIssuer or an Issuer in every namespace.
Create ClusterIssuers for the prod and staging environments of Let's Encrypt.
```bash
kubectl apply -f manifests/clusterissuer_prod.yaml
kubectl apply -f manifests/clusterissuer_staging.yaml
```
When using a ClusterIssuer, there can be problems with applications/certificates in different namespaces and with more than one ingress class.
Our recommendation is to use an Issuer in every namespace.
Create an Issuer for the prod environment of Let's Encrypt in every namespace (frosta, frostb, keycloak and so on).
Edit the manifest and change the mail, the namespace and the ingress class, then create the issuer:
```bash
kubectl apply -f manifests/issuer_prod.yaml
```
In the FROST-Server and Keycloak values files you have to set `issuerType: Issuer`.
Example:
```yaml
cert:
  ingressClass: "tlf"
  extra: true
  enabled: true
  issuerType: Issuer
```
Connect to the server where your PostgreSQL database engine is installed and switch to the postgres user with `sudo su postgres`.
Start the psql CLI with `psql` and run these queries (edit username and password):
```SQL
CREATE USER YOUR_USER WITH ENCRYPTED PASSWORD 'PASSWORD';
CREATE DATABASE keycloak ENCODING=UTF8 OWNER=YOUR_USER;
```
Check postgresql.conf for `listen_addresses = '*'`, or restrict it to the addresses your PostgreSQL server should listen on (look here for details).
Check pg_hba.conf for an appropriate entry that allows IPv4 connections from your Kubernetes cluster.
Example entry in pg_hba.conf:
```conf
# TYPE  DATABASE    USER          ADDRESS             METHOD
host    keycloakDB  keycloakuser  192.168.178.99/24   md5
```
Verify that IPv4 connectivity is possible between your Kubernetes cluster and the PostgreSQL server.
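One way to check this from inside the cluster is a short-lived pod with the psql client; the image, pod name and credentials below are only examples for your environment:
```bash
# run a temporary pod and try to reach the database from inside the cluster
# (replace <POSTGRES_SERVER_IP> and the credentials with your own values)
kubectl run pg-check -it --rm --restart=Never --image=postgres:15 -- \
  psql -h <POSTGRES_SERVER_IP> -p 5432 -U keycloakuser -d keycloak -c 'SELECT 1;'
```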
```SQL
-- terminate open connections to the existing Keycloak database
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'keycloakdb'
  AND pid <> pg_backend_pid();
-- create the database for the keycloakx deployment as a copy of the existing one
CREATE DATABASE keycloakxdb
  WITH TEMPLATE keycloakdb
  OWNER udhkeycloak;
```
```bash
# create the namespace and the secrets for the keycloakx deployment
kubectl create ns keycloakx
kubectl apply -f 00_keycloakx-secrets.yaml
```
```bash
# create the cert-manager issuer for the keycloakx namespace and verify it
kubectl apply -f 01_keycloakx-issuer.yaml
kubectl get issuers.cert-manager.io -A
```
```bash
# install a dedicated ingress-nginx controller for keycloakx
helm install keycloakxnginxcontroller ingress-nginx/ingress-nginx --version 4.2.5 -f 02_keycloakx_nginx_values.yaml -n keycloakx
```
```bash
# install the codecentric keycloakx chart
helm install keycloakx codecentric/keycloakx -n keycloakx -f 03_keycloakx_values.yaml
```
```SQL
-- create user
CREATE USER sensorthings WITH ENCRYPTED PASSWORD 'ChangeMe';
-- create database
CREATE DATABASE sensorthings ENCODING=UTF8 OWNER=sensorthings;
-- check if the database is there
\l
-- connect to your database
\c sensorthings
-- activate postgis extension for the FROST-Server database
CREATE EXTENSION postgis;
-- quit psql cli
\q
```
```bash
curl -OL https://api.bitbucket.org/2.0/repositories/geowerkstatt-hamburg/helm-charts/src/master/hh-udp-iot/values.yaml
```
You need to adjust at least these values to account for your environment and your specific needs. Before deploying the HH-UDP-IoT Helm chart, you need an external IP address with a DNS entry in your cloud environment. The exact procedure to obtain these resources depends on your cloud provider.
There will be an external FROST-HTTP deployment and an internal FROST-HTTP deployment. The parameters have to be changed in both deployments. The purpose of the second deployment is to separate incoming data from the public read requests.
The external deployment can, for example, serve internet-facing requests from a read-only database replica on a second database server. The scaling and the performance of the pods and database servers can be optimized individually for both deployments.
Parameter | Description | Default |
---|---|---|
frost.serviceHost.fqdn | DNS entry for your Frost-Server | |
frost.auth.keycloakConfigUrl | client registration endpoint of your Keycloak realm for the FROST-Server client (look here for setup details) | http://{your-keycloak-address}/realms/{REALMNAME}/clients-registrations/install/ |
frost.auth.keycloakConfigSecret | client secret that you obtained during the Keycloak configuration steps | |
frost.db.ip | IP address of your external postgres database server, or multiple IPs separated by "," if read replicas are used | |
frost.db.frostConnection.database | name of the created FROST-Server database on the external postgres server | sensorthings |
frost.db.frostConnection.username | username of the created user for the FROST-Server database on the external postgres server | sensorthings |
frost.db.frostConnection.password | password of the created user for the FROST-Server database on the postgres server | |
frost.db.keycloakConnection.database | name of the created keycloak database on the postgres server | keycloak |
frost.db.keycloakConnection.username | username of the created user for the keycloak database on the postgres server | keycloakuser |
frost.db.keycloakConnection.password | password of the created user for the keycloak database on the postgres server | |
nginx-frost.controller.service.type | service type for external access (depending on your environment, usually LoadBalancer) | LoadBalancer |
nginx-frost.controller.service.loadBalancerIP | external static IP of your cloud DNS entry for the FROST-Server; will be added to the LoadBalancer resource | |
cert.email | email to get certificate information from let's encrypt | |
cert.frost.extraTlsHosts | your FROST-Server DNS entry | |
nginx-frost.tcp.1883 | service name and location of the MQTT service the ingress controller forwards requests to; its name consists of {NAMESPACE}/{HELM-RELEASE}-{CHART.NAME}-mqtt:1883 | default/frost-hh-udp-iot-mqtt:1883 |
Note:
The value of the `nginx-frost.tcp.1883` entry needs to be changed to match the Helm release name and the Kubernetes namespace used, e.g. for `helm install iot . -f values.yaml --namespace frost` it needs to be changed to "frost/iot-hh-udp-iot-mqtt:1883".
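For orientation, a minimal override file could look like the sketch below. The nesting is inferred from the parameter paths in the table above and every value is a placeholder, so verify it against the downloaded values.yaml:
```bash
# sketch of a minimal override file; the nesting is inferred from the parameter
# paths in the table above, all values are placeholders for your environment
cat > my-values.yaml <<'EOF'
frost:
  serviceHost:
    fqdn: iot.example.org
  db:
    ip: "203.0.113.20"
    frostConnection:           # keycloakConnection is structured analogously
      database: sensorthings
      username: sensorthings
      password: ChangeMe
cert:
  email: admin@example.org
nginx-frost:
  controller:
    service:
      type: LoadBalancer
      loadBalancerIP: 203.0.113.10
  tcp:
    "1883": "frosta/frost-hh-udp-iot-mqtt:1883"
EOF
# pass it in addition to the chart's values.yaml, e.g.:
# helm install frost . -n frosta -f ./values.yaml -f ./my-values.yaml
```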
```bash
# render the chart with your values first, without installing anything (dry run)
helm install frost -n frosta -f ./values.yaml . --debug --dry-run
```
```bash
# install the chart
helm install frost -n frosta -f ./values.yaml . --debug
```
```bash
# verify the deployment (use the namespace the chart was installed into)
kubectl get svc -n frosta
kubectl get certs -n frosta
kubectl get ingress -n frosta
kubectl get pods -n frosta
kubectl logs -f {POD-NAME} -n frosta
```
To configure streaming replication between two PostgreSQL nodes, do the following:
```bash
# on the primary node, switch to the postgres user
sudo su postgres
```
```SQL
-- create the replication user on the primary (change the password);
-- the name must match the pg_hba.conf entry and the -U flag of pg_basebackup below
CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'password';
```
```conf
# pg_hba.conf entry on the primary: allow the replica (172.16.0.6) to connect for replication
# TYPE  DATABASE     USER         ADDRESS          METHOD
host    replication  replication  172.16.0.6/32    md5
```
```SQL
-- show the PostgreSQL data directory (used as the target directory for pg_basebackup below)
postgres=# SHOW data_directory;
```
```bash
# on the replica: clone the data directory from the primary (172.16.0.5) and write the recovery settings (-R)
pg_basebackup --checkpoint=fast -h 172.16.0.5 -U replication -p 5432 -D /data/postgresql/12/ -Fp -Xs -P -R
```
Check that the files in /data/postgresql/12/ are owned by the postgres user, then restart the replica database (see the sketch below).
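A sketch, assuming a systemd-managed PostgreSQL installation; the service name may differ on your distribution:
```bash
# fix ownership of the cloned data directory and restart the replica
sudo chown -R postgres:postgres /data/postgresql/12/
sudo systemctl restart postgresql
```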
Then log in on the primary to verify that replication is streaming:
```bash
sudo -u postgres psql
```
```SQL
-- list connected replicas and their replication state
postgres=# SELECT client_addr, state FROM pg_stat_replication;
```
```bash
client_addr | state
------------------+-----------
your_replica_IP | streaming
```
If you have trouble acquiring certificates, set `cert.productionEnabled` to false to use the Let's Encrypt staging environment. Otherwise, on multiple retries you will probably hit the Let's Encrypt rate limit (failed validation limit of 5 failures per account, per hostname, per hour). On the staging environment this limit is significantly higher.
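For example, an existing release can be switched to the staging environment like this (release name and namespace taken from the install example above):
```bash
# switch the release to the Let's Encrypt staging environment
helm upgrade frost . -n frosta -f ./values.yaml --set cert.productionEnabled=false
```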
If you deploy from a local folder instead of a Helm repository, configure a .helmignore file to prevent parsing errors during the deployment; see the Helm docs (e.g. add .vscode).
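Typical entries might look like this; .vscode is taken from the hint above, the rest are common additions:
```bash
# keep editor and VCS files out of the packaged chart
cat >> .helmignore <<'EOF'
.vscode/
.idea/
.git/
EOF
```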
Product | more Info | Licence |
---|---|---|
FROST-Server | Fraunhofer IOSB | GNU LGPL |
Keycloak | keycloak.org | Apache License 2.0 |
Ingress-NGINX | kubernetes/ingress-nginx | Apache License 2.0 |
Cert-manager | cert-manager.io | Apache License 2.0 |