The issue started at 12:40 UTC and was resolved at 14:35 UTC. During that time, connections routed to our Washington, DC datacenter experienced intermittent packet loss and high latency, in flaps lasting 5 to 10 minutes. The root cause was a recently introduced change to our load balancer that forced all traffic onto one of our links, saturating some of our servers.
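The incident report does not include the actual load balancer configuration. Purely as an illustration, and assuming an nginx-style load balancer (hypothetical backend names and weights), a change of this kind could look like the following, where a lopsided weight pins virtually all traffic to a single backend link:

```nginx
# Hypothetical sketch only -- not the actual configuration involved.
upstream dc_backends {
    # A weight this skewed sends almost all requests over one link,
    # which can saturate that link and the servers behind it.
    server link-a.internal:443 weight=100;
    server link-b.internal:443 weight=1;
    server link-c.internal:443 weight=1;
}
```

A misweighting like this can sit dormant for days and only surface once aggregate traffic crosses the single link's capacity, which is consistent with the change having been deployed a week before the incident.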
12:45 - We started receiving alerts from our monitoring that some servers in DC were failing. At this point we were unable to reproduce the issue, as it affected only some customers and only briefly.
13:00 - The frequency of packet loss increased and high latency began. Engineers were already working on the issue.
13:40 - Traffic stabilized.
14:30 - Packet loss and high latency recurred.
14:40 - Traffic stabilized.
Resolution and recovery
We rolled back the changes applied to the load balancer. Those changes had been deployed last week and had not caused any issues until this incident.
We have been monitoring the network closely since traffic stabilized, and no further errors have been detected. If you are experiencing any issues with your site, please contact our support team for further troubleshooting.
Posted Feb 27, 2018 - 17:43 UTC
We will be monitoring this issue to ensure connectivity remains stable. Additional details regarding the issue will be provided in a post-mortem.
Posted Feb 26, 2018 - 16:21 UTC
We are aware of connectivity issues affecting the servers in our DC data center and are currently investigating. We will continue to provide additional updates as this incident develops.