- Change affinity type of CoreDNS Pods to "RequiredDuringSchedulingIgnoredDuringExecution"
This avoids downtime in some rare cases and improves load balancing.
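For illustration, a required pod anti-affinity rule of this kind looks as follows in a Pod spec; the label selector and topology key below are the conventional ones for CoreDNS, not taken from this release:

```yaml
# Illustrative sketch: a "required" anti-affinity rule in the style now
# applied to CoreDNS, forcing replicas onto different nodes.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            k8s-app: kube-dns   # label commonly used by CoreDNS Pods
        topologyKey: kubernetes.io/hostname
```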
- Fix UI bug when selecting images
- Enable CSI snapshot support
- Add support for VolumeSnapshots of OpenStack Volumes
- Add support for Kubernetes 1.27.7
- Add support for Kubernetes 1.25.14 & 1.26.9
- Fix image update recommendation in UI
- Raise limit of LoadBalancer services that can share one Octavia LB to 100.
- Patch Calico in canal Pods to v3.23.5 and v3.25.2 respectively
- Enable Calico metrics in canal Pods
- Upgrade node-local-dns-cache to 1.22.23
- Upgrade metrics-server to 0.6.4
- Fix issue where no storage class was annotated as default.
- Fix issue where annotation wasn't removed from storage class after another was annotated.
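The default StorageClass is selected via a standard Kubernetes annotation; a minimal sketch (class name and provisioner are illustrative, not from this release):

```yaml
# Only one StorageClass should carry this annotation at a time;
# the fixes above ensure it is set and moved correctly.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class   # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
```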
- Add priorityClassName to Pods managed by SysEleven
- Replace image registry with registry.k8s.io for several images
This prevents ImagePull errors for clusters with a lot of node churn.
- Add support for Kubernetes 1.24.17, 1.25.13 & 1.26.8
- Update konnectivity to v0.0.37 – Improves stability for connections from the API server to the cluster, e.g. kubectl logs ..., kubectl port-forward ...
- Add support for Ubuntu 22.04 (Jammy)
- Fix: CSI node plugin crashes when node is under high load
- Add StorageClass backed by Ceph in all clusters in the FES OpenStack region
- Add support for Kubernetes 1.24.15, 1.25.11 & 1.26.6
- Fix issue where controller-manager was stuck on 1.26 clusters that underwent ccm migration
- Add support for Kubernetes 1.26.5
- Add support for Kubernetes 1.24.14 and 1.25.10
- Add support for Kubernetes 1.24.13 and 1.25.9
- Fix: Cinder CSI plugin now correctly sets volume attach limits on CSINodes
Nodes will have to be recreated for it to take effect.
- Fix bug where nodes couldn't be cleaned up when Floating IP was already deleted.
- Add support for Kubernetes 1.24.12 and 1.25.8
- Fix performance issue with cluster autoscaler in big clusters
- Replace references to "master" in the UI with "control plane"
- Replace use of registry k8s.gcr.io with registry.k8s.io
- Allow setting default storage class. For more information, see storage classes.
- Add support for Kubernetes 1.25.7 for clusters on SysEleven Stack.
Check our migration guide for details.
- Improve behavior of cluster autoscaler when changing its min/max bounds.
If the current replica count becomes out of bounds, it will now always be adjusted.
This avoids an issue with the autoscaler where it would refuse to scale.
- Add support for Kubernetes 1.24.11.
- Fix issue where changes to clusters weren't propagated in time.
This also showed as cluster control plane components appearing "NotReady" on the dashboard.
- Fix an issue where MachineDeployment updates were refused, because SysEleven Stack API was slow.
- Fix ImagePullErrors with the user-ssh-keys-agent Pods.
- Fix UI error due to the use of a removed API.
- Add support for Kubernetes 1.23.16 and 1.24.10.
- Add support for Kubernetes 1.23.15 and 1.24.9.
- Fix: Permission issue that inhibited project deletion.
- Remove support for Kubernetes version 1.20.
- Add support for Kubernetes 1.24.8.
- Add field tenant-id in Secret cloud-config in the kube-system Namespace when the cluster is using OpenStack username/password.
- Add support for Kubernetes 1.23.13.
- Update OpenStack cloud controller manager to version 1.24 in 1.23 clusters.
- Update Calico to 3.23.4 in clusters with Kubernetes version >=1.22.
- Remove StorageClass sys11-quobyte-multi in clusters that don't reference it in existing PersistentVolumeClaims.
You may also remove the StorageClass yourself if you no longer use it.
- No longer install StorageClass sys11-quobyte-multi in new clusters.
- Remove StorageClass sys11-quobyte in clusters that don't reference it in a StatefulSet.
Likewise, replace the in-tree driver with the CSI driver in StorageClass sys11-quobyte in clusters that do reference it in a StatefulSet.
This ensures that these StatefulSets can be scaled up if wanted. You may also remove the StorageClass yourself if you no longer use it.
- Improve cluster name validation in UI
- Remove option of subnet ID in cluster creation wizard
This didn't work as intended as OpenStack chooses an arbitrary subnet from the network.
- Remove limit range from default namespace.
- Provide support for Kubernetes versions 1.20.15, 1.21.14 and 1.22.13
- Migrate all existing clusters from OpenVPN to Konnectivity.
This improves reliability and performance of the kube-apiserver to cluster tunneling.
For more information, check our documentation on Konnectivity.
- Remove support for Kubernetes 1.19
- Remove Add-on panels in UI
- New clusters use Konnectivity instead of OpenVPN
- Remove automatic updates for addons.
Existing Addon resources remain and can be managed in different ways.
See Migrate from a MetaKube Addon for details.
- Switch the MetaKube API endpoint to
- Fix: metrics-server fails to collect metrics due to RBAC regression
- Add support for Kubernetes 1.23.8
- Fix an issue where horizontal node autoscaler annotations weren't removed when it was disabled
- Add support for Kubernetes 1.22.12
This contains a fix for a Kubelet bug.
- Fix an issue with machine deployments using AWS Spot instances
- Remove the option to leave PVs and Load Balancers behind when deleting a cluster
This could cause a deadlock in the cluster clean up.
- Source coredns image from external registry
- Remove possibility to install new addons
- Fix an issue where the CSI controller plugin was not available in migrated clusters.
- Removed the UI theme picker that allowed choosing between dark and light modes. Light mode is now the default and only mode.
- Fixed unintuitive behavior of the cluster autoscaler: changing max replicas in the UI now changes the current replica count if it's outside of bounds.
- Fixed a race condition that could lead to a deadlock in cluster deletion
- Fixed a bug where a blocking write call caused the DNAT controller to never install NAT rules, which broke apiserver-to-Kubelet communication.
This may have caused issues with kubectl logs|exec|port-forward as well as missing node metrics.
- Remove support for outdated Kubernetes patch versions 1.19.7, 1.20.8, 1.21.3
- Move CSI-controller out of MetaKube cluster
This fixes an issue where clusters got stuck deleting, because PVs could not be cleaned up.
- Fix issue where ssh-key-agent wrongfully removed authorized keys from machines.
- Fix issue with Flatcar nodes that use Containerd
- Add labels addons.syseleven.de/name to all managed addon resources.
- Remove obsolete form fields in UI.
- Add-ons are deprecated. SysEleven plans to disable the add-ons in Q3 and will inform you in time (8 weeks before) about the exact date. You can read about migrating from MetaKube add-ons here.
- Addons can now be deleted without removing their resources from the cluster.
- Fixed an issue where the UI would not show the actual state of a MetaKube cluster
This could lead to certain actions being disabled or deleted clusters not being cleaned up.
- Added labels addons.syseleven.de/name: "<addon name>" to resources managed through SysEleven Add-Ons.
- Fixed UI issue that prevented users from updating their cloud credentials
- Fix bug where new fields unexpectedly triggered rollovers of machine deployments
- Fixed an issue where oidc authentication wasn't possible for clusters in the FES region
- Update cert-manager version and CRDs
- Fixed regression causing failing image pulls for Canal CNI images
- Fixed an issue with OpenStack users that don't use an email address as their username
- Enable docker to containerd Container Runtime Migration
- Make containerd the default container runtime for new clusters
- Add endpoint to restart machine deployments
- Add secret with Helm chart & release information for each Metakube addon
- Add option to allow Cluster autoscaler to delete nodes with local storage volumes on scale down. Please contact support if you want this feature.
- Fixed an issue where CSI components would fail to authenticate when cloud credentials were changed.
- Release region fes1 (Frankfurt)
- Fix coredns pods preventing horizontal node autoscaler from evicting the node
- Fix undesired node-rollovers because of inconsistent ssh-key ordering
- Add option to use OpenStack Application Credentials instead of username/password to create and manage Metakube clusters
- Provide kubernetes version v1.21.3
- Provide the option to disable Proxy Protocol for Ingress Addon at installation
- Move images used by the addons to AWS ECR to avoid DockerHub pull limits
- Provide kubernetes versions 1.19.12 and 1.20.8.
These versions are safe to install for clusters with migrated PVs thanks to a dedicated fix.
- Update cert-manager addon helm chart to version 1.3.1
- Update ingress addon helm chart to version 3.34.0
- Add a beta channel for the monitoring addon.
- Ingress addon configmap can be set via user-interface.
- Disable admission webhooks for ingress addon
- Fix for too many requests from velero addon to kubernetes API
- Fix issues where clusters got stuck at deletion
- Provide kubernetes versions 1.18.15, 1.19.7 and 1.20.2. These are the latest stable versions for clusters with migrated PVs.
- Enable upgrade from 1.17.* to 1.18.* on Azure and AWS
- Provide kubernetes versions 1.18.18, 1.19.10 and 1.20.6
- Update kubernetes dashboard addon
- Update helm exporter of metakube-helm-operator to gather metrics in the background
- Speed up cluster creation
- Fix issue where Azure clusters get stuck at deletion
- Update Weavescope Addon to 1.13.2
- Use images from ECR public registry instead of docker.io
- Fix machine controller bug where new nodes of old MachineDeployments didn't receive floating IPs
- Migrate to the external OpenStack cloud provider
- Enable SSH Key Agent option during cluster creation. Useful for those who want to manage keys manually.
- Added ability to downscale clusters with version >= 1.18 using the horizontal node autoscaler
- Remove Add-ons Linkerd and Vault
- Upgrade machine-controller to 1.24.4-sys11-1 release.
- Switched the kubelet image repository. The change affects newly-provisioned Machines running Flatcar Linux.
- Provide kubernetes version 1.20.4.
- Remove support for helm2 including the tiller deployment.
- Updated Addon External DNS.
- Updated Addon Redis Server.
- Updated OpenStack External Cloud Controller.
- Updated Cinder CSI Plugins.
- Provide kubernetes versions 1.18.16, 1.19.7 and 1.17.17.
- Update kubermatic to v2.15.7.
- Add admission control configuration for the user cluster API deployment.
- Fix deployment of Flatcar clusters.
- Remove CoreOS as option for new deployments.
- Add Flatcar images to image options for Flatcar clusters.
- Dashboard UI changes.
- Fix broken links on RabbitMQ addon UI.
- Add metrics service port to the node problem detector daemonset.
- Provide kubernetes versions 1.18.13, 1.17.15 and 1.16.15.
- Provide dynamic kubelet configuration via a configmap in the MetaKube cluster. Please consult the documentation for additional information.
- Added a second storage class (sys11-quobyte-multi) to support Read Write Many volumes (RWX).
- Updated the external cloud controller to v1.19.2 which adds support for RWX volumes and fixes a few open issues.
- Provide kubernetes versions 1.18.8, 1.17.11 and 1.16.14.
- Provide ingress addon with PROXY protocol on Octavia for clusters in version 1.18.x.
- Finished migration of cluster etcds to local storage nodes.
- New clusters get created with etcd launcher.
- Added tolerations to cinder CSI plugin to be able to mount volumes in tainted nodes.
- Updated seed infrastructure nodes to Ubuntu 20.04 LTS.
- Added validation for taints when creating and updating node deployments using the Web UI.
- Fixed kubeconfig API endpoint.
- Fixed issue with kubelet restart for new/updated clusters.
- On OpenStack every Node now gets an additional label machine-controller/physical-host-id that contains the id of the physical server the Node is scheduled on. It is meant to replace machine-controller/host-id.
- Remove datacenters without valid VM types on Azure.
- Update velero (backups) addon to 1.4.0.
- Provide Octavia load balancers for clusters in version 1.18 on OpenStack. See here for more information on Octavia in SysEleven Stack.
- Provide autoscaling and pod-anti-affinity policies for CoreDNS.
- Kubernetes versions 1.16.10 and 1.17.6 got updated to 1.16.13 and 1.17.9.
- New Add-on Redis has been added to provide redis key-value storage.
- We updated MetaKube to 2.14, which includes many new features and UI improvements. This release includes full support for Flatcar, our replacement for CoreOS, which is EOL. You can also change the UI theme to a dark or light theme.
- We also added a new feature to download the cluster usage data of your running clusters in JSON or CSV format, so that you are always aware of your current usage.
- We now add a soft affinity policy to all clusters in OpenStack. This helps ensure that your worker VMs are distributed across multiple compute nodes. We also add a label with the host id of the compute node to every worker, which allows you to check that the soft affinity policy is working as expected.
- Kubernetes versions 1.15.12, 1.16.10 and 1.17.6 are now available.
- We replaced the current CoreOS image with the new Flatcar image. CoreOS will no longer be maintained by Red Hat. The new Flatcar image is a friendly fork of CoreOS Container Linux and, as such, compatible with it.
- Changes to SSH keys no longer require a rebuild of existing nodes; instead, they are updated in real time.
- The MetaKube dashboard offers an easy way to manage user/group permissions.
- MetaKube supports additional Azure regions in the US and Canada.
- For new OpenStack clusters MetaKube uses the external cloud controller manager and CSI.
- New Add-on Cluster Log Management has been added to provide central log management with Loki and Grafana.
- In the Add-On Cluster Monitoring Stack Prometheus has been updated to version 2.17.1
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc7
- The Kubernetes Software Defined Network canal has been updated: flannel to 0.12.0. This brings several bugfixes and network performance improvements.
- MetaKube now supports creating Kubernetes clusters on Azure regions westeurope, northeurope, francecentral, uksouth, ukwest, germanywestcentral, switzerlandnorth and norwayeast
- You can now configure authentication at MetaKube clusters with SysEleven Login
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.7.1
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc6
- The Add-On weave-scope has been updated to version 1.13.0
- The Add-On external-dns has been updated to version 0.7.0
- The Add-On Nginx Ingress Controller was updated to version 0.30.0
- The Add-On velero has been updated to version 1.3.1
- The Kubernetes Software Defined Network canal has been updated: calico to 3.13.0. This brings several bugfixes and network performance improvements.
- Kubernetes versions 1.15.11, 1.16.8 and 1.17.4 are now available.
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.6.2
- In the Add-On Cluster Monitoring Stack Prometheus has been updated to version 2.16.0
- The Add-On external-dns has been updated to version 0.6.0
- Kubernetes versions 1.15.10, 1.16.7 and 1.17.3 are now available.
- When creating a cluster you can now change the IP ranges for Pods, Services and Nodes
- In the Add-On Linkerd Linkerd has been updated to version 2.7.0
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc5
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.6.1
- In the Add-On Linkerd Flagger has been updated to version 0.23.0
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc4
- Each cluster now runs node-problem-detector with
a customization that sets an initial taint on each newly created node that will only be removed when
pod networking and dns are up and running on the node. This prevents pods, especially from Jobs or CronJobs,
from being scheduled and started even though the node has no working DNS or network.
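Workloads that must run before networking is up (e.g. CNI or node-agent DaemonSets) would need to tolerate such a startup taint. The taint key below is purely hypothetical, since the entry does not name the actual key used:

```yaml
# Hypothetical taint key -- the release note does not specify the real one.
# A DaemonSet tolerating it can be scheduled before the taint is removed.
tolerations:
  - key: node.example.com/network-unavailable
    operator: Exists
    effect: NoSchedule
```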
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc3
- The Kubernetes Software Defined Network canal has been updated: calico to 3.12.0. This brings several bugfixes and network performance improvements.
- Kubernetes versions 1.15.9, 1.16.6 and 1.17.2 are now available.
- We limited the Docker max log size to 100 MB per container.
- On clusters with CoreOS Container Linux nodes, the time window where nodes are rebooted after an update is now configurable
- The Add-On Nginx Ingress Controller was updated to version 0.27.1
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc2
- In the Add-On Linkerd Flagger has been updated to version 0.22.0
- Kubernetes versions 1.15.7, 1.16.4 and 1.17.0 are now available.
- In the Add-On Linkerd Flagger has been updated to version 0.21.0
- We extended our Cluster Monitoring Stack Add-On. The Add-On now creates a persistent volume to store your Grafana users and plugins. You can still continue to use ConfigMaps if you prefer to.
- In the Add-On Cluster Monitoring Stack Prometheus has been updated to version 2.15.2
- For Kubernetes 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc1
- The Kubernetes Software Defined Network canal has been updated: calico to 3.11.1. This brings several bugfixes and network performance improvements.
- The CoreDNS configuration in a MetaKube cluster can now be extended with a ConfigMap
- When creating a new Node, taints from the NodeDeployment are now added synchronously during the registration of the Node instead of asynchronously shortly after
- In the Add-On Linkerd Linkerd has been updated to version 2.6.1
- In the Add-On Linkerd Flagger has been updated to version 0.20.4
- Linkerd can now also be installed in a non-highly-available mode
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.5.2
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-beta8
- In the Add-On Cluster Monitoring Stack alertmanager has been updated to version 0.20.0
- On OpenStack every Node now gets an additional label machine-controller/host-id that contains the id of the physical server the Node is scheduled on.
- Kubernetes version 1.16.3 is now available.
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.5.1
- In the Add-On Cluster Monitoring Stack Prometheus has been updated to version 2.14.0
- The Calico CNI Plugin received an update to version 3.10.1
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-beta6
- It is now possible to activate audit logging in a MetaKube cluster
- AWS clusters can now have NodeDeployments in different availability zones of the chosen region
- Labels can now be added to MetaKube projects and clusters. Nodes inherit these labels automatically
- There is now an indicator that warns you while editing a NodeDeployment if the edit would result in recreating the nodes
- Kubernetes versions 1.14.9 and 1.15.6 are now available.
- MetaKube now supports creating Kubernetes clusters on AWS regions eu-west-2, eu-west-3 and eu-north-1
- In the Add-On Vault Consul has been updated to version 1.6.2
- Updated Helm to 2.16.1
- The kubelet has been patched to address an issue where a Pod can be stuck in NotReady. We advise rebuilding your worker nodes so that the patched version gets installed.
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.4.4
- The Add-On velero has been updated to version 1.2.0
- CoreDNS has been updated to version 1.6.5
- New Add-on Linkerd has been added to enable service mesh in MetaKube clusters.
- New Add-on Vault has been added to allow you to secure, store and control access to tokens, passwords, certificates and encryption keys. With Vault you can protect your secrets and other sensitive data.
- Updated Helm to 2.15.2
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.4.3
- In the Add-On Cluster Monitoring Stack Prometheus has been updated to version 2.13.1
- In the Add-On Cluster Monitoring Stack prometheus-operator has been updated to version 0.32.0
- Kubernetes versions 1.13.12, 1.14.8 and 1.15.5 are now available. This fixes CVE-2019-11253, which renders the Kubernetes API server vulnerable to a denial-of-service attack (see the Security Advisory)
- The Add-On cert-manager has been updated to version 0.10.1
- The Add-On external-dns now has configuration options for a domain filter and the Cloudflare proxy feature
- The Add-On Nginx Ingress Controller now has a configuration option for a default TLS certificate
- etcd is updated to 3.4.1
- The Add-On external-dns has been updated to version 0.5.17
- CoreDNS has been updated to version 1.6.4
- Kubernetes versions 1.13.11, 1.14.7 and 1.15.4 are now available
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.3.6
- The Add-On external-dns now supports RcodeZero AnyCast DNS
- Beta test for authentication with SysEleven Login started
- Node deployments on OpenStack can now be created without floating IPs. For the implications have a look at our cluster-api documentation.
- The stable channel of the Add-On velero has been updated to version 1.1.0
- Added Add-On external-dns
- Added Add-On cert-manager
- CentOS is now removed as a supported OS for new clusters and NodeDeployments. We recommend you also change your existing NodeDeployments to either Ubuntu or CoreOS to improve performance and stability thanks to the newer Linux kernels in these images.
- The beta channel of the Add-On velero has been updated to version 1.1.0
- On OpenStack clusters the root disk size is now configurable for network storage flavors
- The Calico CNI Plugin received an update to version 3.8.2
- CoreDNS has been updated to version 1.6.2
- Kubernetes versions 1.13.10, 1.14.6 and 1.15.3 are now available. Because of a denial-of-service vulnerability in the HTTP/2 stack, all clusters have been updated to the next patch version automatically. For details on the vulnerability see Security Advisory NFLX-2019-002
- The Add-On Nodelocal DNS Cache has been updated to version 1.15.3
- Pod Security Policies can now be activated in a MetaKube cluster
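A minimal sketch of what a (since-deprecated) PodSecurityPolicy of that era looked like; the name and rules below are illustrative, not MetaKube defaults:

```yaml
# Illustrative PSP that disallows privileged containers.
# seLinux/runAsUser/supplementalGroups/fsGroup rules are mandatory fields.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example   # hypothetical name
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - "*"
```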
- The Add-On Nginx Ingress Controller was updated to version 0.25.1 with fixes for CVE-2018-16843, CVE-2018-16844, CVE-2019-9511, CVE-2019-9513, and CVE-2019-9516
- The Calico CNI Plugin received an update to version 3.8.0
- Added experimental Node Optimizations Add-On. This Add-On configures system settings to optimize your virtual machine for running Docker containers under load. This should improve your general system and network performance.
- Added MetaKube Monitoring Add-On (Prometheus Operator). The Prometheus Operator deploys a full monitoring stack, including Grafana, Prometheus and the Alertmanager. You can configure several settings like the retention period of your data. You can also configure a mail address or a Slack channel to receive your alerts.
- Kubernetes versions 1.13.9, 1.14.5 and 1.15.2 are now available. Because of two security vulnerabilities in previous releases, clusters should be updated to the newest patch release. See #80983 and #80984 for details
- The name of a cluster can now be changed from the dashboard
- Added default Add-On Nodelocal DNS Cache
- Previously Velero was installed in every cluster, now it can be installed manually as an Add-On
- CoreDNS has been updated to version 1.6.1
- Previously the Web Terminal was installed in every cluster, now it can be installed manually as an Add-On
- Updated Helm to 2.14.3
- Add-Ons can now directly be installed during cluster creation
- MetaKube Add-Ons are now in beta. Add-Ons are fully managed applications that can be installed into your cluster
- Added Add-On Nginx Ingress Controller
- Added Add-On Weave Scope
- Added Add-On Kubernetes Dashboard
- Previously the Kubernetes Dashboard was installed in every cluster, now it can be installed manually as an Add-On
- Stability and performance improvements when deleting persistent volumes during cluster deletion
- Kubernetes versions 1.12.10, 1.13.8, 1.14.4, 1.15.0 and 1.15.1 are now available
- Updated Helm to 2.14.2
- On OpenStack, the scheduler is now aware that only 25 additional volumes may be attached to a node. It makes sure that pods with mounted volumes are only scheduled on nodes that can support them.
- There are now API Accounts that can be used to authenticate with the MetaKube API to automate management of clusters
- More details are shown when using kubectl get machine/machineset/machinedeployment
- ICMP traffic to clusters is now always permitted to allow MTU discovery
- Cluster events are now visible on the Cluster Detail Page
- Various design and usability improvements
- MetaKube kubeconfig files now use a unique context and username to simplify their usage.
- MetaKube dashboard now shows more prominent indicators if a node deployment is outdated.
- MetaKube now supports creating Kubernetes clusters on AWS (regions eu-central-1 and eu-west-1 for now), with full feature parity between clusters on SysEleven Stack and AWS.
- On OpenStack MetaKube now also supports VMs with CPU:memory ratios of 1:2 and 1:8 in addition to the existing 1:4 flavors (Details)
- The node deployment editor now automatically detects when a newer OpenStack image is available and allows the user to perform a rolling update
- Kubernetes versions 1.14.2 and 1.13.6 were removed because of a security vulnerability
- Major network performance and reliability improvements
- CoreDNS no longer logs every DNS query
- Kubernetes version 1.12.9 is now available
- Updated Helm to 2.14.0
- When creating a NodeDeployment you can define the taints that should be attached to a node
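Taints defined here follow the standard Kubernetes taint structure; the exact NodeDeployment field layout may differ, and the key/value below are only examples:

```yaml
# Standard Kubernetes taint fields (key, value, effect);
# key and value here are illustrative.
taints:
  - key: dedicated
    value: database
    effect: NoSchedule
```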
- Kubernetes version 1.14.2 is now available
- Kubernetes versions 1.14.1, 1.14.0, 1.13.6 and 1.12.8 are now available
- When deleting a MetaKube cluster, you can now choose if you want to delete its OpenStack LoadBalancers or not
- When creating a NodeDeployment you can define the labels that should be attached to a node
- When upgrading a MetaKube cluster, you can also automatically update all NodeDeployments to do a rolling update of all nodes
- The Kubernetes Software Defined Network canal has been updated: calico to 2.6.12 and flannel to 0.11.0. This brings several bugfixes and network performance improvements.
- Every node in a MetaKube cluster now reserves 200m CPU for system components to not overload the node and increase stability
- The Kubernetes cluster DNS service has been switched to CoreDNS version 1.3.1
- Stability improvements when creating the initial NodeDeployment after the creation of a MetaKube cluster
- Updated metrics-server to 0.3.2
- When interacting with MachineDeployments, MachineSets and Machines over the kubectl CLI, you can now use the short names
- You can now use kubectl scale machinedeployment to easily scale a MachineDeployment over the kubectl CLI
- An upgrade of a MetaKube cluster is now only available if the versions of the existing nodes would work together with the new control plane component versions
- etcd is updated to 3.3.12
- The NodeDeployment detail page in the MetaKube dashboard now also contains Node related events
- The name of a project can now be edited
- Kubernetes control plane components resource requests, limits and pod affinities have been updated to increase performance and stability
- The Web Terminal access is now restricted to owners and editors
- The kubeconfig download is now restricted to owners and editors
- Kubernetes versions 1.12.7 and 1.13.5 are now available
- We now provide up to date default images for Ubuntu, CoreOS and CentOS, see Supported Operating Systems for details
- The Web Terminal now contains the alpine coreutils package
- MetaKube clusters can now be created in the second SysEleven OpenStack region
- OpenStack CLI tools are available in Web Terminal
- Kubernetes versions 1.12.6 and 1.13.4 are now available
- Disabled all Kubernetes versions before 1.12.6 and 1.13.4 due to a Kubernetes security incident, all existing clusters have been upgraded automatically
- Updated the backup utility ark to 0.10.1
- Updated Helm to 2.13.0
- The dashboard now shows the Kernel and Container Runtime versions of nodes, which makes it easier to see if a node is affected by a security vulnerability
- Kubernetes versions 1.10.x are no longer available, corresponding to our Supported Versions Matrix
- You can now add tags to nodes when adding them
- For new clusters and new nodes in existing clusters, nodes are now wrapped inside of NodeDeployments which makes it possible to trigger rolling updates of nodes within a cluster from the dashboard
- New nodes now receive Docker 18.9.2 which fixes CVE-2019-5736 in Runc for all new nodes
- Multiple owners can now be added to one MetaKube project
- Detailed events for the lifecycle of a node are shown in the dashboard
- Every MetaKube cluster now comes with a web terminal that is accessible from the MetaKube dashboard
- Kubernetes version 1.13.3 is now available
- Kubernetes version 1.12.5 is now available
- The control plane service health indicators on the cluster details page now contains all services that are fully managed
- Kubernetes versions 1.13.1 and 1.13.2 are now available
- Updated Helm to 2.12.2
- The cluster details page now comes with a dashboard showing the number of pods in the cluster and the CPU and memory usage of the worker nodes
- Kubernetes versions 1.10.12 and 1.12.4 are now available
- Updated Helm to 2.12.1
- Updated kubernetes-dashboard to 1.10.1
- MetaKube now supports projects to group, maintain and access clusters with multiple users
- Kubernetes versions 1.10.9, 1.12.1 and 1.12.2 are now available
- etcd is updated to 3.3.9
- Updated the backup utility ark to 0.9.9
- New Ubuntu worker nodes are now using Ubuntu 18.04
- Kubernetes 1.11 versions are now disabled due to a bug in Kubernetes where nodes will lose their IP addresses
- Kubernetes versions 1.9.11 and 1.10.8 are now available
- Updated the backup utility ark to 0.9.6
- Clusters now come with Helm already installed and configured.
- Kubernetes versions 1.10.7 and 1.11.3 are now available
- kubernetes-dashboard is updated to version 1.10.0
- The default namespace now comes with a LimitRange that assigns a default CPU and memory request to pods that do not define one explicitly.
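Such a LimitRange can be sketched as follows; the name and the request values are illustrative, not necessarily what MetaKube installed:

```yaml
# Illustrative LimitRange: containers that omit resource requests
# receive these defaults; values are examples only.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-requests   # hypothetical name
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
        memory: 128Mi
```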
- Kubernetes version 1.11.1 and 1.11.2 are now available
- Clusters now come with managed cross-region cluster backup solution, see Backups
- Security groups in LoadBalancer services are managed automatically
- Clusters now come with managed metrics-server which enables Horizontal Pod AutoScaling
- Cluster creation: Drop-downs instead of free-text input boxes to choose from available floating IP pools, security groups, networks and subnets
- Kubernetes versions 1.10.6 and 1.9.10 are now available
- MetaKube is now generally available