Investigating - A few hours after closing the previous incident, continuous monitoring and customer feedback indicated that read/write performance of the RAID subsystem on epyc01 (FC cluster) was again below expected levels.

We are currently conducting a more detailed analysis to identify and permanently resolve the underlying cause of the I/O latency.

Further updates will be provided as soon as new information becomes available.
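
While we investigate server-side, customers who want a rough sanity check from inside their own VM can time small synchronous writes. The sketch below is not our diagnostic tooling, just a minimal standard-library approximation of fsync'd write latency; sample count and block size are arbitrary choices:

```python
import os
import statistics
import tempfile
import time

def write_latency_ms(samples: int = 50, size: int = 4096) -> float:
    """Median latency (ms) of small fsync'd writes to a temporary file.

    A rough client-side proxy for synchronous write latency; real
    storage diagnostics would use dedicated tools instead.
    """
    data = os.urandom(size)
    latencies = []
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(samples):
            start = time.perf_counter()
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the write through to the device
            latencies.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(latencies)

print(f"median 4 KiB fsync write latency: {write_latency_ms():.2f} ms")
```

Consistently high medians compared to a known-good host can help support narrow down whether a specific VM is affected.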

Jan 29, 2026 - 19:59 CET
General Services: Operational (99.93 % uptime, last 90 days)
  Index-Hosting Web & CP: Operational (99.68 % uptime)
  dataflair Web & CP: Operational (100.0 % uptime)
  Dedicated Control Panel: Operational (100.0 % uptime)
  Hotline: Operational (100.0 % uptime)
  IPMI VPN: Operational (100.0 % uptime)

maincubes FRA01: Operational (100.0 % uptime, last 90 days)
  Core Network: Operational (100.0 % uptime)
  DDoS Protection: Operational (100.0 % uptime)
  Facilities: Operational (100.0 % uptime)
  Dedicated Servers: Operational (100.0 % uptime)

firstcolo FRA4: Degraded Performance (99.93 % uptime, last 90 days)
  Core Network: Operational (100.0 % uptime)
  DDoS Protection: Operational (100.0 % uptime)
  Facilities: Operational (100.0 % uptime)
  Dedicated Servers: Operational (100.0 % uptime)
  Xeon Gold KVM Server Cluster: Operational (100.0 % uptime)
  EPYC KVM Server Cluster: Degraded Performance (100.0 % uptime)
  Ryzen KVM Server Cluster: Operational (99.52 % uptime)

SkyLink: Operational (100.0 % uptime, last 90 days)
  Core Network: Operational (100.0 % uptime)
  DDoS Protection: Operational (100.0 % uptime)
  Facilities: Operational (100.0 % uptime)
  Dedicated Servers: Operational (100.0 % uptime)
  Xeon KVM Server Cluster: Operational (100.0 % uptime)

Scheduled Maintenance

Planned Shutdown of Temporary Subnet 5.175.237.0/24 Feb 4, 2026 21:30 - Feb 5, 2026 01:30 CET

As announced several weeks ago, we are now beginning the shutdown of the temporary subnet 5.175.237.0/24. This subnet was used exclusively as a temporary solution during the migration of active servers from maincubes to firstcolo.

The subnet was provided by a partner and will now be replaced with a clean, contractually assigned subnet from our own IP portfolio.

Customers who wish to migrate immediately may open a support ticket today; otherwise, the migration will begin automatically during the scheduled maintenance window.

Further details will be communicated as the migration progresses.
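
Customers unsure whether a server still holds an address in the temporary range can check their configured addresses against it. A minimal sketch using Python's standard ipaddress module; the sample addresses below are placeholders, not real assignments:

```python
import ipaddress

# The temporary range announced for shutdown.
TEMP_SUBNET = ipaddress.ip_network("5.175.237.0/24")

def in_temp_subnet(addresses):
    """Return the addresses that fall inside the temporary subnet."""
    return [a for a in addresses if ipaddress.ip_address(a) in TEMP_SUBNET]

# Compare your server's configured IPv4 addresses (e.g. as listed by
# `ip -4 addr`) against the range. Placeholder values shown here:
configured = ["5.175.237.42", "192.0.2.10"]
print(in_temp_subnet(configured))
```

Any address the check reports should be migrated before the shutdown window.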

Posted on Feb 03, 2026 - 14:35 CET
Feb 4, 2026

No incidents reported today.

Feb 3, 2026

No incidents reported.

Feb 2, 2026

No incidents reported.

Feb 1, 2026

No incidents reported.

Jan 31, 2026

No incidents reported.

Jan 30, 2026

No incidents reported.

Jan 29, 2026

Unresolved incident: I/O Latency on epyc01 (FC Cluster).

Jan 28, 2026

No incidents reported.

Jan 27, 2026
Completed - The scheduled maintenance has been completed.
Jan 27, 15:30 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 27, 15:00 CET
Scheduled - During the cluster join of ryzen04, the cluster synchronization process did not complete correctly. As a result, the nodes are experiencing significant difficulties re-establishing proper inter-node synchronization.

To resolve this, we are initiating a full cluster restart. Downtime of up to 10 minutes per node is expected during this process.

We will provide an update once the cluster is fully operational again.

Jan 27, 13:43 CET
Resolved - Since the disk replacement, I/O performance has stabilized significantly. We are therefore closing this incident.

Thank you for your patience.

Jan 27, 13:41 CET
Update - We are continuing to monitor for any further issues.
Jan 24, 10:42 CET
Monitoring - The NVMe drives were replaced yesterday at 18:36 CET. Since then, system performance has been stable and no further anomalies have been observed.

Please contact support if you notice any remaining issues.

Jan 24, 10:41 CET
Update - We will restart node epyc01 in approximately 10 minutes in order to replace a potentially defective NVMe drive.

Further updates will follow once the maintenance has been completed.

Jan 23, 18:04 CET
Identified - We currently suspect that one of the installed NVMe drives may be the source of the I/O issues. This is under active investigation, and we will provide an update here as soon as further findings are available.
Jan 22, 13:49 CET
Investigating - We are currently investigating I/O-related performance issues on node epyc01. Our engineering team is analyzing disk and storage performance metrics to identify the root cause.

Further updates will be provided as soon as more information becomes available.

Jan 22, 13:49 CET
Jan 26, 2026

No incidents reported.

Jan 25, 2026

No incidents reported.

Jan 24, 2026
Jan 23, 2026
Jan 22, 2026
Resolved - Affected customers are kindly requested to open a support ticket so we can review each case individually.
Jan 22, 13:48 CET
Investigating - We are currently investigating a global outage affecting the host ryzen7950x3d-2. This system is impacted by the ongoing migration from the legacy cluster to the new cluster.

As part of this process, the affected host will be replaced as soon as possible. Further information and timelines will be shared once available.

Jan 17, 11:06 CET
Jan 21, 2026
Completed - The scheduled maintenance has been completed.
Jan 21, 10:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 21, 06:00 CET
Scheduled - Please ensure that you secure a copy of all critical data as soon as possible. Due to a significant overload of our backup infrastructure, we are required to rebuild the entire backup environment.

As part of this process, we will transition to multiple redundant backup systems to improve reliability and scalability going forward.

No customer-facing products are affected by this change. However, all existing historical backups will be permanently deleted in order to properly reinitialize the infrastructure. Please take this into account and make sure any required data is backed up externally beforehand.

Thank you for your understanding.
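
Since all historical backups will be deleted, customers should keep an external copy of critical data themselves before the window. A minimal sketch of a local archiving step using Python's standard tarfile module; the paths are placeholders, and this is only one of many valid approaches (rsync or similar tools work just as well):

```python
import datetime
import pathlib
import tarfile

def archive_dir(src: str, dest_dir: str) -> pathlib.Path:
    """Create a timestamped .tar.gz of the directory src inside dest_dir."""
    src_path = pathlib.Path(src)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    out = pathlib.Path(dest_dir) / f"{src_path.name}-{stamp}.tar.gz"
    with tarfile.open(out, "w:gz") as tar:
        # Store the directory under its own name inside the archive.
        tar.add(src_path, arcname=src_path.name)
    return out

# Placeholder paths; point these at your own data and external storage.
# archive_dir("/var/www", "/mnt/external-backup")
```

The resulting archive should be copied off the server (external disk, object storage, etc.) so it survives the reinitialization.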

Jan 20, 08:12 CET