- Kubernetes versions 1.15.12, 1.16.10 and 1.17.6 are now available.
- We replaced the current CoreOS image with the new Flatcar image. CoreOS Container Linux is no longer maintained by Red Hat. The new Flatcar image is a friendly fork of CoreOS Container Linux and as such compatible with it.
- Changes to SSH keys no longer require a rebuild of existing nodes; instead, they are applied in real time.
- The MetaKube dashboard offers an easy way to manage user/group permissions.
- MetaKube supports additional Azure regions in the US and Canada.
- For new OpenStack clusters MetaKube uses the external cloud controller manager and CSI.
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc7
- The Kubernetes Software Defined Network canal has been updated: flannel to 0.12.0. This brings several bugfixes and network performance improvements.
- MetaKube now supports creating Kubernetes clusters on Azure regions westeurope, northeurope, francecentral, uksouth, ukwest, germanywestcentral, switzerlandnorth and norwayeast
- You can now configure authentication for MetaKube clusters with SysEleven Login
- The Add-On external-dns has been updated to version 0.7.0
- The Add-On Nginx Ingress Controller was updated to version 0.30.0
- The Add-On velero has been updated to version 1.3.1
- The Kubernetes Software Defined Network canal has been updated: calico to 3.13.0. This brings several bugfixes and network performance improvements.
- Kubernetes versions 1.15.11, 1.16.8 and 1.17.4 are now available.
- Kubernetes versions 1.15.10, 1.16.7 and 1.17.3 are now available.
- When creating a cluster you can now change the IP ranges for Pods, Services and Nodes
- Each cluster now runs node-problem-detector with a customization that sets an initial taint on each newly created node. The taint is only removed once pod networking and DNS are up and running on the node. This prevents pods, especially those from Jobs or CronJobs, from being scheduled and started on a node that has no working DNS or network.
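Workloads that must start before the node is fully initialized (for example a monitoring DaemonSet) can opt out of this protection with a matching toleration. A minimal sketch; the taint key below is a placeholder, since the exact key MetaKube sets is not listed here:

```yaml
# Sketch only: "node.example.com/initializing" is a placeholder,
# not the actual taint key used by MetaKube.
tolerations:
  - key: "node.example.com/initializing"
    operator: "Exists"        # tolerate the taint regardless of its value
    effect: "NoSchedule"
```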
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc3
- The Kubernetes Software Defined Network canal has been updated: calico to 3.12.0. This brings several bugfixes and network performance improvements.
- Kubernetes versions 1.15.9, 1.16.6 and 1.17.2 are now available.
- We limited the Docker max log size to 100 MB per container.
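MetaKube applies this limit for you; for reference, the equivalent setting in Docker's daemon.json looks like this:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
```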
- Kubernetes versions 1.15.7, 1.16.4 and 1.17.0 are now available.
- In the Add-On Linkerd, Flagger has been updated to version 0.21.0
- We extended our Cluster Monitoring Stack Add-On. The Add-On now creates a persistent volume to store your Grafana users and plugins. You can still continue to use ConfigMaps if you prefer to.
- In the Add-On Cluster Monitoring Stack, Prometheus has been updated to version 2.15.2
- For Kubernetes 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-rc1
- The Kubernetes Software Defined Network canal has been updated: calico to 3.11.1. This brings several bugfixes and network performance improvements.
- The CoreDNS configuration in a MetaKube cluster can now be extended with a ConfigMap
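A hypothetical sketch of such an extension; the ConfigMap name and the data key shown here are placeholders, so check the MetaKube documentation for the actual ConfigMap the cluster merges:

```yaml
# Hypothetical example: name and data key are placeholders,
# not the actual ConfigMap MetaKube expects.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom        # placeholder name
  namespace: kube-system
data:
  example.server: |
    # forward queries for example.org to an internal resolver
    example.org {
      forward . 10.0.0.53
    }
```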
- When creating a new Node, taints from the NodeDeployment are now added synchronously during the registration of the Node instead of asynchronously shortly after
- In the Add-On Linkerd, Linkerd has been updated to version 2.6.1
- In the Add-On Linkerd, Flagger has been updated to version 0.20.4
- Linkerd can now also be installed in a non-highly-available mode
- On OpenStack every Node now gets an additional label machine-controller/host-id that contains the ID of the physical server the Node is scheduled on.
- Kubernetes version 1.16.3 is now available.
- In the Add-On Cluster Monitoring Stack, Grafana has been updated to version 6.5.1
- In the Add-On Cluster Monitoring Stack, Prometheus has been updated to version 2.14.0
- The Calico CNI Plugin received an update to the version 3.10.1
- For Kubernetes >= 1.16 clusters the kubernetes-dashboard has been updated to 2.0.0-beta6
- It is now possible to activate audit logging in a MetaKube cluster
- AWS clusters can now have NodeDeployments in different availability zones of the chosen region
- Labels can now be added to MetaKube projects and clusters. Nodes inherit these labels automatically
- There is now an indicator that warns you while editing a NodeDeployment if the edit would result in recreating the nodes
- Kubernetes versions 1.14.9 and 1.15.6 are now available.
- MetaKube now supports creating Kubernetes clusters on AWS regions eu-west-2, eu-west-3 and eu-north-1
- In the Add-On Vault, Consul has been updated to version 1.6.2
- Updated Helm to 2.16.1
- The kubelet has been patched to address an issue where a Pod can be stuck in NotReady. We advise rebuilding your worker nodes so that the patched version gets installed.
- New Add-On Linkerd has been added to enable service mesh in MetaKube clusters.
- New Add-On Vault has been added to allow you to secure, store and control access to tokens, passwords, certificates and encryption keys. With Vault you can protect your secrets and other sensitive data.
- Updated Helm to 2.15.2
- The Add-On cert-manager has been updated to version 0.10.1
- The Add-On external-dns now has configuration options for a domain filter and the Cloudflare proxy feature
- The Add-On Nginx Ingress Controller now has a configuration option for a default TLS certificate
- etcd is updated to 3.4.1
- Kubernetes versions 1.13.11, 1.14.7 and 1.15.4 are now available
- In the Add-On Cluster Monitoring Stack Grafana has been updated to version 6.3.6
- Node deployments on OpenStack can now be created without floating IPs. For the implications, see our cluster-api documentation.
- The stable channel of the Add-On velero has been updated to version 1.1.0
- CentOS has been removed as a supported OS for new clusters and NodeDeployments. We recommend changing your existing NodeDeployments to either Ubuntu or CoreOS as well, to improve performance and stability thanks to the newer Linux kernels in these images.
- The beta channel of the Add-On velero has been updated to version 1.1.0
- On OpenStack clusters the root disk size is now configurable for network storage flavors
- Kubernetes versions 1.13.10, 1.14.6 and 1.15.3 are now available. Because of a Denial-of-Service vulnerability in the HTTP/2 stack, all clusters have been automatically updated to the next patch version. For details on the vulnerability, see Security Advisory NFLX-2019-002
- The Add-On Nodelocal DNS Cache has been updated to version 1.15.3
- The Calico CNI Plugin received an update to the version 3.8.0
- Added experimental Node Optimizations Add-On. This Add-On configures system settings to optimize your virtual machine for using docker containers under load. This should improve your general system and network performance.
- Added MetaKube Monitoring Add-On (Prometheus Operator). The Prometheus Operator deploys a full monitoring stack, including Grafana, Prometheus and the Alertmanager. You can configure several settings, like the retention period of your data. You can also configure a mail address or a Slack channel to receive your alerts.
- Kubernetes versions 1.13.9, 1.14.5 and 1.15.2 are now available. Because of two security vulnerabilities in previous releases, clusters should be updated to the newest patch release. See #80983 and #80984 for details
- The name of a cluster can now be changed from the dashboard
- Added default Add-On Nodelocal DNS Cache
- Previously, Velero was installed in every cluster; now it can be installed manually as an Add-On
- Previously, the Web Terminal was installed in every cluster; now it can be installed manually as an Add-On
- Updated Helm to 2.14.3
- Add-Ons can now directly be installed during cluster creation
- MetaKube Add-Ons are now in beta. Add-Ons are fully managed applications that can be installed into your cluster
- Added Add-On Nginx Ingress Controller
- Added Add-On Weave Scope
- Added Add-On Kubernetes Dashboard
- Previously, the Kubernetes Dashboard was installed in every cluster; now it can be installed manually as an Add-On
- Stability and performance improvements when deleting persistent volumes during cluster deletion
- Kubernetes versions 1.12.10, 1.13.8, 1.14.4, 1.15.0 and 1.15.1 are now available
- Updated Helm to 2.14.2
- On OpenStack, the scheduler is now aware that only 25 additional volumes may be attached to a node. It will make sure that pods with mounted volumes are only scheduled on nodes that can support them.
- There are now API accounts that can be used to authenticate with the MetaKube API to automate the management of clusters
- More details are shown when using kubectl get machine/machineset/machinedeployment
- ICMP traffic to clusters is now always permitted to allow MTU discovery
- Cluster events are now visible on the Cluster Detail Page
- Various design and usability improvements
- MetaKube kubeconfig files now use a unique context and username to simplify their usage.
- MetaKube dashboard now shows more prominent indicators if a node deployment is outdated.
- MetaKube now supports creating Kubernetes clusters on AWS (regions eu-central-1 and eu-west-1 for now), with full feature parity between clusters on SysEleven Stack and AWS.
- On OpenStack MetaKube now also supports VMs with CPU:memory ratios of 1:2 and 1:8 in addition to the existing 1:4 flavors (Details)
- The node deployment editor now automatically detects when a newer OpenStack image is available and allows the user to perform a rolling update
- Kubernetes versions 1.14.2 and 1.13.6 were removed because of a security vulnerability
- Major network performance and reliability improvements
- CoreDNS no longer logs every DNS query
- Kubernetes version 1.12.9 is now available
- Updated Helm to 2.14.0
- When creating a NodeDeployment you can define the taints that should be attached to a node
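A sketch of what such a taint definition could look like in a NodeDeployment spec; the field paths and values are illustrative assumptions, so verify them against the actual MetaKube API reference:

```yaml
# Illustrative sketch; field paths may differ from the actual schema.
spec:
  template:
    spec:
      taints:
        - key: "dedicated"      # example key/value, not a MetaKube default
          value: "gpu"
          effect: "NoSchedule"
```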
- Kubernetes version 1.14.2 is now available
- Kubernetes versions 1.14.1, 1.14.0, 1.13.6 and 1.12.8 are now available
- When deleting a MetaKube cluster, you can now choose if you want to delete its OpenStack LoadBalancers or not
- When creating a NodeDeployment you can define the labels that should be attached to a node
- When upgrading a MetaKube cluster, you can also automatically update all NodeDeployments to do a rolling update of all nodes
- The Kubernetes Software Defined Network canal has been updated: calico to 2.6.12 and flannel to 0.11.0. This brings several bugfixes and network performance improvements.
- Every node in a MetaKube cluster now reserves 200m CPU for system components to avoid overloading the node and to increase stability
- The Kubernetes cluster DNS service has been switched to CoreDNS version 1.3.1
- Stability improvements when creating the initial NodeDeployment after the creation of a MetaKube cluster
- Updated metrics-server to 0.3.2
- When interacting with MachineDeployments, MachineSets and Machines over the kubectl CLI, you can now use the short names
- You can now use kubectl scale machinedeployment to easily scale a MachineDeployment over the kubectl CLI
- An upgrade of a MetaKube cluster is now only available if the versions of the existing nodes are compatible with the new version of the master components
- etcd is updated to 3.3.12
- The NodeDeployment detail page in the MetaKube dashboard now also contains Node related events
- The name of a project can now be edited
- Kubernetes master components resource requests, limits and pod affinities have been updated to increase performance and stability
- The Web Terminal access is now restricted to owners and editors
- The kubeconfig download is now restricted to owners and editors
- Kubernetes versions 1.12.7 and 1.13.5 are now available
- We now provide up to date default images for Ubuntu, CoreOS and CentOS, see Supported Operating Systems for details
- The Web Terminal now contains the alpine coreutils package
- MetaKube clusters can now be created in the second SysEleven OpenStack region
- Kubernetes versions 1.12.6 and 1.13.4 are now available
- Disabled all Kubernetes versions before 1.12.6 and 1.13.4 due to a Kubernetes security incident, all existing clusters have been upgraded automatically
- Updated the backup utility ark to 0.10.1
- Updated Helm to 2.13.0
- The dashboard now shows the Kernel and Container Runtime versions of nodes, which makes it easier to see whether a node is affected by a security vulnerability
- Kubernetes versions 1.10.x are no longer available, in accordance with our Supported Versions Matrix
- You can now add tags to nodes when adding them
- For new clusters and new nodes in existing clusters, nodes are now wrapped inside of NodeDeployments which makes it possible to trigger rolling updates of nodes within a cluster from the dashboard
- New nodes now receive Docker 18.9.2 which fixes CVE-2019-5736 in Runc for all new nodes
- Multiple owners can now be added to one MetaKube project
- Detailed events for the lifecycle of a node are shown in the dashboard
- Kubernetes version 1.13.3 is now available
- Kubernetes version 1.12.5 is now available
- The master service health indicators on the cluster details page now contain all services that are fully managed
- Kubernetes versions 1.13.1 and 1.13.2 are now available
- Updated Helm to 2.12.2
- The cluster details page now comes with a dashboard showing the amount of pods in the cluster and the cpu and memory usage of the worker nodes
- MetaKube now supports projects to group, maintain and access clusters with multiple users
- Kubernetes versions 1.10.9, 1.12.1 and 1.12.2 are now available
- etcd is updated to 3.3.9
- Updated the backup utility ark to 0.9.9
- New Ubuntu worker nodes are now using Ubuntu 18.04
- Kubernetes versions 1.11 are now disabled due to a bug in Kubernetes where nodes lose their IP addresses
- Kubernetes versions 1.9.11 and 1.10.8 are now available
- Updated the backup utility ark to 0.9.6
- Clusters now come with Helm already installed and configured. See also Using Helm.
- Kubernetes versions 1.10.7 and 1.11.3 are now available
- The default namespace now comes with a LimitRange that assigns a default CPU and memory request to pods that do not define one explicitly. See also LimitRange.
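A LimitRange of this kind generally looks like the following; the concrete default values MetaKube applies may differ from the illustrative ones shown here:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-requests       # placeholder name
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:          # applied to containers that set no request
        cpu: 100m              # illustrative values, not MetaKube's actual defaults
        memory: 128Mi
```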
- Kubernetes version 1.11.1 and 1.11.2 are now available
- Clusters now come with managed cross-region cluster backup solution, see Backups
- Security groups in LoadBalancer services are managed automatically
- Clusters now come with managed metrics-server which enables Horizontal Pod AutoScaling
- Cluster creation: Drop-downs instead of free-text input boxes to choose from available floating IP pools, security groups, networks and subnets
- Kubernetes versions 1.10.6 and 1.9.10 are now available
- MetaKube is now generally available