Compute Service


SysEleven Stacks Compute Service is built on the OpenStack Nova project.
It manages the life-cycle of compute instances in your environment. Its responsibilities include spawning, scheduling and decommissioning of virtual machines on demand.

You can manage your compute instances both via our public OpenStack API endpoints and via the Dashboard.
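
As a starting point, here is a minimal sketch of a heat template that spawns a compute instance via the Orchestration Service. The image name, key pair and network name are assumptions and need to be adapted to your project:

    heat_template_version: 2016-10-14

    resources:
      example_server:                   # hypothetical resource name
        type: OS::Nova::Server
        properties:
          name: example-server
          image: "Ubuntu 22.04"         # assumed image name, adapt to your project
          flavor: m1.small              # one of the standard instance types below
          key_name: my-keypair          # assumed existing key pair
          networks:
            - network: my-network       # assumed existing network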


Standard instance types (M1)

Standard instances generally offer you good performance, availability and storage durability.
Disk data will be distributed across multiple servers (SysEleven Distributed Storage).

We recommend these instance types for most workloads and applications.


| Name | API Name | Memory | vCPUs | Storage* |
| ---- | -------- | ------ | ----- | -------- |
| M1 Tiny | m1.tiny | 4GB | 1 | 50GB |
| M1 Small | m1.small | 8GB | 2 | 50GB |
| M1 Medium | m1.medium | 16GB | 4 | 50GB |
| M1 Large | m1.large | 32GB | 8 | 50GB |
| (M1 XLarge)** | (m1.xlarge)** | 64GB | 16 | 50GB |
| (M1 XXLarge)** | (m1.xxlarge)** | 128GB | 32 | 50GB |

CPU optimized

| Name | API Name | Memory | vCPUs | Storage* |
| ---- | -------- | ------ | ----- | -------- |
| M1 CPU Tiny | m1c.tiny | 2GB | 1 | 50GB |
| M1 CPU Small | m1c.small | 4GB | 2 | 50GB |
| M1 CPU Medium | m1c.medium | 8GB | 4 | 50GB |
| M1 CPU Large | m1c.large | 16GB | 8 | 50GB |
| M1 CPU XLarge | m1c.xlarge | 32GB | 16 | 50GB |
| (M1 CPU XXLarge)** | (m1c.xxlarge)** | 64GB | 32 | 50GB |

RAM optimized

| Name | API Name | Memory | vCPUs | Storage* |
| ---- | -------- | ------ | ----- | -------- |
| M1 RAM Tiny | m1r.tiny | 8GB | 1 | 50GB |
| M1 RAM Small | m1r.small | 16GB | 2 | 50GB |
| M1 RAM Medium | m1r.medium | 32GB | 4 | 50GB |
| (M1 RAM Large)** | (m1r.large)** | 64GB | 8 | 50GB |
| (M1 RAM XLarge)** | (m1r.xlarge)** | 128GB | 16 | 50GB |

\* You can extend ephemeral storage using our durable Block Storage Service.

\*\* Only available upon request.
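
As a sketch, ephemeral storage can be extended with a volume from the Block Storage Service, for example in a heat template. The volume size and the server resource name are assumptions:

    example_volume:
      type: OS::Cinder::Volume
      properties:
        size: 100                       # size in GB, adapt as needed

    example_volume_attachment:
      type: OS::Cinder::VolumeAttachment
      properties:
        volume_id: { get_resource: example_volume }
        instance_uuid: { get_resource: example_server }   # hypothetical server resource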

Local SSD storage instance types (L1)

Local SSD storage instances offer low latency SSD storage directly on the local host.
These can be useful for special workloads like replicated databases.

Availability and data durability are reduced, because data is only stored locally on one server.

For more information, see the local storage documentation.


| Name | API Name | Memory | vCPUs | Storage* |
| ---- | -------- | ------ | ----- | -------- |
| L1 Tiny | l1.tiny | 4GB | 1 | 25GB |
| L1 Small | l1.small | 8GB | 2 | 50GB |
| L1 Medium | l1.medium | 16GB | 4 | 100GB |
| L1 Large | l1.large | 32GB | 8 | 200GB |
| L1 XLarge | l1.xlarge | 64GB | 16 | 400GB |
| L1 2XLarge | l1.2xlarge | 128GB | 32 | 800GB |
| L1 4XLarge | l1.4xlarge | 256GB | 64 | 1600GB |

CPU optimized

| Name | API Name | Memory | vCPUs | Storage* |
| ---- | -------- | ------ | ----- | -------- |
| L1 CPU Tiny | l1c.tiny | 2GB | 1 | 25GB |
| L1 CPU Small | l1c.small | 4GB | 2 | 50GB |
| L1 CPU Medium | l1c.medium | 8GB | 4 | 100GB |
| L1 CPU Large | l1c.large | 16GB | 8 | 200GB |
| L1 CPU XLarge | l1c.xlarge | 32GB | 16 | 400GB |
| L1 CPU 2XLarge | l1c.2xlarge | 64GB | 32 | 800GB |
| L1 CPU 4XLarge | l1c.4xlarge | 128GB | 64 | 1600GB |

RAM optimized

| Name | API Name | Memory | vCPUs | Storage* |
| ---- | -------- | ------ | ----- | -------- |
| L1 RAM Tiny | l1r.tiny | 8GB | 1 | 25GB |
| L1 RAM Small | l1r.small | 16GB | 2 | 50GB |
| L1 RAM Medium | l1r.medium | 32GB | 4 | 100GB |
| L1 RAM Large | l1r.large | 64GB | 8 | 200GB |
| L1 RAM XLarge | l1r.xlarge | 128GB | 16 | 400GB |
| L1 RAM 2XLarge | l1r.2xlarge | 256GB | 32 | 800GB |

\* You can extend local ephemeral storage using our distributed Block Storage Service to place less latency-critical data on it.

Flavor change (resizing)

When resizing via the GUI or CLI, the initial resize request must be confirmed in a second step before the system actually resizes the instance.

M1 flavors

It is possible to resize between all M1 flavors, since they use the same base storage backend.
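
If an instance is managed with a heat template, one way to trigger such a resize is to change the flavor property and perform a stack update. A minimal sketch; the resource and image names are assumptions:

    example_server:                     # hypothetical resource name
      type: OS::Nova::Server
      properties:
        image: "Ubuntu 22.04"           # assumed image name
        flavor: m1.medium               # changed e.g. from m1.small; updating the
                                        # stack then triggers the resize
        flavor_update_policy: RESIZE    # resize the existing instance in place
                                        # instead of replacing it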

L1 flavors

Resizing local storage flavors is currently not possible.

Flavor change to different storage type

If more resources are required for an instance, the fastest solution is to build a new instance and migrate the required data (if any) via network or an attached volume.

If a conversion of an existing instance seems inevitable, a similar result can be achieved by creating an image from that instance and using it as a boot source for a new instance with another flavor. Please keep in mind that hardware specifications and CPU flags may change when the flavor changes.
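
Sketched in heat terms, the new instance would boot from the image created from the old one. The image, network and resource names are assumptions; the image itself must be created beforehand, e.g. via the Dashboard or the API:

    converted_server:                   # hypothetical resource name
      type: OS::Nova::Server
      properties:
        image: snapshot-of-old-instance # assumed name of the previously created image
        flavor: l1.large                # new flavor with a different storage type
        networks:
          - network: my-network         # assumed existing network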

Questions & Answers

What is the difference between local ssd storage and distributed storage?

SysEleven Stack distributed storage distributes several copies of segments of your data over many physical SSD devices attached to different physical compute nodes connected via network. This allows for high overall performance, because several devices can work simultaneously, but introduces latency for single operations, because data has to be transmitted via network.

SysEleven Stack local SSD storage stores your data on a local RAID-mirrored SSD storage directly attached to the compute node. This reduces the latency, because no network is involved, but also reduces redundancy, because only two devices and one compute node are involved.

Which storage flavor fits my needs best?

In general, workloads that transmit large volumes of data or handle many small chunks of data in parallel benefit from the overall performance of distributed storage, and of course from its redundancy and availability. Workloads with tiny requests that must be executed serially benefit from the lower latency of local SSD storage.

Why are instances migrated?

Software Updates

SysEleven regularly updates the software on the hypervisor host machines.
Sometimes those updates require restarts of services or even a reboot of the whole hypervisor.
In such cases we will live-migrate all running instances to another hypervisor host prior to applying the update.

Hardware Maintenance

All hardware nodes require maintenance at some point.
Sometimes the required maintenance work cannot be done while the machine is online.
In such cases we will live-migrate all running instances to another hypervisor host prior to performing the maintenance.

Hardware failure

Unfortunately live migrations are not possible in case of a hardware failure.
In such a situation running instances will be automatically restarted on another hardware node.
Stopped instances will also be assigned to another hypervisor but remain stopped.

How long does a migration take?

A live migration usually takes about 500 ms. In some situations migrations may take longer.

Why are instances disconnected while migrating?

To transfer the active state of an instance (including RAM), it needs to be 'frozen' briefly prior to the migration. During the transfer, network packets can get lost. Whether connections can be re-established depends on the operating system and the applications in use.

Can I allocate a fixed IP to a compute instance?

Normally a fixed IP shouldn't play a big role in a cloud setup, since the infrastructure may change a lot.
If you do need a fixed IP, you can assign a port from our Networking Service with a fixed IP to your compute instance. Here is an example which shows how to use the Orchestration Service to assign a fixed IP address in a template:

    fixed_ip_port:                      # hypothetical resource name
      type: OS::Neutron::Port
      properties:
        network_id: { get_resource: management_net }
        fixed_ips:
          - ip_address: 192.0.2.10      # example address, choose one from your subnet

My compute instance was created, but is e.g. not accessible via SSH/HTTP

By default, all compute instances use the "default" security group. Its settings do not allow any incoming packets except ICMP, so that you can ping your compute instance. Any other port a given instance needs (e.g. for SSH or HTTPS) must be opened by adding a rule to the security group your instance uses.
Here is an example that shows how you can use a heat template to allow incoming HTTP/HTTPS traffic via your security group:

    allow_webtraffic:
      type: OS::Neutron::SecurityGroup
      properties:
        description: allow incoming webtraffic from anywhere.
        name: allow webtraffic
        rules:
          - { direction: ingress, remote_ip_prefix: 0.0.0.0/0, port_range_min: 80, port_range_max: 80, protocol: tcp }
          - { direction: ingress, remote_ip_prefix: 0.0.0.0/0, port_range_min: 443, port_range_max: 443, protocol: tcp }

This security group can now be connected to a port of your network:

    example_port:                       # hypothetical resource name
      type: OS::Neutron::Port
      properties:
        network_id: { get_resource: example_net }
        security_groups: [ { get_resource: allow_webtraffic }, default ]

The security group "default" is added in this example, since this group takes care of allowing outbound traffic.
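
To complete the picture, such a port can then be attached to a server resource. The port resource name, image and flavor here are assumptions:

    example_server:                     # hypothetical resource name
      type: OS::Nova::Server
      properties:
        image: "Ubuntu 22.04"           # assumed image name
        flavor: m1.small
        networks:
          - port: { get_resource: example_port }   # hypothetical port resource name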