SysEleven Stack offers two flavors of ephemeral storage. The default storage type is distributed storage based on Quobyte; it offers high performance, redundancy, and scalability at the cost of latency. For application workloads where latency is a bottleneck, we offer local SSD storage.
We provide a tabular overview of our flavor types.
For a quick start to evaluate or just play with local SSD storage, please see our Tutorials.
Traditional single-server setups often suffer a performance penalty when run on distributed storage. While it may be tempting to build them on local storage to gain speed, we discourage using it this way because of the lower redundancy and availability. Put simply: you put your service and your data at risk when running a single-server setup on local storage.
Depending on your application, there are many proven ways to design either your setup or your application so that it compensates for the missing redundancy. Application servers can be multiplied and put behind a load balancer; databases can be set up in replication topologies or clusters. We try to provide some inspiration among our Heat templates; please contact us if you need further assistance.
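As one sketch of such a setup, a Heat template can place two application servers behind a load balancer, using the upstream Octavia Heat resource types. All image, flavor, network, and subnet names below are placeholder assumptions, not actual SysEleven identifiers:

```yaml
heat_template_version: 2016-10-14
# Sketch only: image, flavor, network and subnet names are placeholders.
resources:
  app_server_1:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04            # assumed image name
      flavor: m1.small.ssd           # assumed local SSD flavor name
      networks: [{ network: private }]

  app_server_2:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04
      flavor: m1.small.ssd
      networks: [{ network: private }]

  loadbalancer:
    type: OS::Octavia::LoadBalancer
    properties:
      vip_subnet: private-subnet     # assumed subnet name

  listener:
    type: OS::Octavia::Listener
    properties:
      loadbalancer: { get_resource: loadbalancer }
      protocol: HTTP
      protocol_port: 80

  pool:
    type: OS::Octavia::Pool
    properties:
      listener: { get_resource: listener }
      protocol: HTTP
      lb_algorithm: ROUND_ROBIN

  member_1:
    type: OS::Octavia::PoolMember
    properties:
      pool: { get_resource: pool }
      address: { get_attr: [app_server_1, first_address] }
      protocol_port: 80
      subnet: private-subnet

  member_2:
    type: OS::Octavia::PoolMember
    properties:
      pool: { get_resource: pool }
      address: { get_attr: [app_server_2, first_address] }
      protocol_port: 80
      subnet: private-subnet
```

With this topology, the loss of either app server leaves the service reachable through the load balancer while you rebuild the failed node.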
No, local SSD storage is only available as ephemeral storage.
Yes. You can attach volumes with distributed storage to your instances. This can be very handy for seeding them with data or creating backups, where latency isn't as much of an issue as it is for the running service.
Yes, but this rarely makes sense, as it then isn't a local-volume instance anymore. It may make sense in certain cases to set up or repair things; see our rescue tutorial.
Local SSD storage is exclusively available in the sizes defined in our flavors.
No, local storage instances cannot be resized; you have to recreate them with a different size and migrate your data.
You can, however, attach a volume, move your data onto it, then detach it, attach it to the new system, and move the data onto its local SSD storage there.
Depending on your application, there are likely smarter ways to bring new nodes into the cluster and seed them, e.g. promoting a replication slave.
Local SSD storage instances cannot be moved between hypervisors, so in case of a hardware failure, data loss is inevitable.
You need to replace the affected instance with a new system built from scratch, restoring the data from one of your backups or from surviving members of the cluster.
Local SSD storage instances cannot be (live) migrated. We need to regularly reboot our compute nodes for maintenance, and this inevitably affects all virtual machines with local storage hosted on them.
To keep the impact predictable, we hereby announce a regular maintenance window:
Every week, in the night from Wednesday, 11pm (23:00), to Thursday, 4am,
we will restart about 25% of our local SSD storage compute nodes. Statistically, every instance
will be shut down once a month.
Affected instances will receive an ACPI shutdown event that gives the operating system a one-minute grace period to shut down in an orderly fashion. After the maintenance, the instance will be brought up again. Expect up to half an hour (30 min) of downtime. There will be no further announcement aside from the ACPI shutdown event.
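If your guest runs systemd, one way to stay within that grace period is a per-unit drop-in that caps how long a slow service may block shutdown. This is a minimal sketch; "myservice" is a hypothetical unit name:

```ini
# /etc/systemd/system/myservice.service.d/override.conf
# "myservice" is a hypothetical unit name; pick a stop timeout
# comfortably below the one-minute ACPI grace period.
[Service]
TimeoutStopSec=45s
```

After adding the drop-in, run `systemctl daemon-reload` so systemd picks up the new timeout.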
Planned maintenance will only affect one compute node at a time, and between two maintenances there will be half an hour of recovery time to allow the affected systems to re-join their clusters. It will, however, affect all local SSD storage instances on the same compute node. To ensure that redundant systems are not affected simultaneously, you must put them into anti-affinity groups.
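A minimal Heat sketch of such an anti-affinity group might look like this; the image and flavor names are assumptions:

```yaml
heat_template_version: 2016-10-14
# Sketch: schedule two redundant instances on different compute nodes.
resources:
  anti_affinity_group:
    type: OS::Nova::ServerGroup
    properties:
      name: db-group
      policies: [anti-affinity]

  db_node_1:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04          # assumed image name
      flavor: m1.small.ssd         # assumed local SSD flavor name
      scheduler_hints: { group: { get_resource: anti_affinity_group } }

  db_node_2:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04
      flavor: m1.small.ssd
      scheduler_hints: { group: { get_resource: anti_affinity_group } }
```

The `anti-affinity` policy makes Nova place the two servers on different compute nodes, so a single maintenance reboot never takes down both.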