On the 21st of July at 11:25 UTC, our team received an internal alert from our monitoring dashboard about degraded performance in one of our European datacenters. At 11:51, our Cloud Hosting Provider notified us of a problem causing network latency and connection loss. This caused issues for customers trying to access our Dashboard until our provider fixed the problem at 16:28 UTC. Although we initially assessed the impact as spanning several APIs, we later confirmed that the scope was limited to the Dashboard.
Our Cloud Hosting Provider detected anomalous traffic coming from the public network and worked to shut down its source, mitigating the packet loss and latency on the public network.
We are currently working on adding more redundancy to our infrastructure to minimise the disruption that similar events may cause our customers in the future.
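As a rough illustration of the kind of redundancy this involves (the endpoints, hostnames, and timeouts below are hypothetical placeholders, not our actual configuration), a minimal sketch of client-side failover might try a secondary region when the primary does not respond to a health check:

```python
import requests

# Hypothetical regional endpoints; our real topology and hostnames differ.
ENDPOINTS = [
    "https://eu-west.dashboard.example.com/health",
    "https://eu-central.dashboard.example.com/health",
]

def first_healthy_endpoint(timeout_s: float = 2.0) -> str:
    """Return the first endpoint that answers its health check.

    Tries each region in order, falling back to the next one when a
    region times out or returns a non-2xx status, so a single-region
    network incident does not make the Dashboard unreachable.
    """
    for url in ENDPOINTS:
        try:
            resp = requests.get(url, timeout=timeout_s)
            if resp.ok:
                return url
        except requests.RequestException:
            continue  # region unreachable; try the next one
    raise RuntimeError("no healthy region available")
```

In practice this failover would typically live in the load-balancing or DNS layer rather than in application code, but the principle is the same: route around the affected region automatically instead of waiting for the provider to resolve the incident.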