00:40 UTC | 17:40 PT
Our teams have resolved all connectivity issues affecting our customers on all pods.
23:58 UTC | 16:58 PT
We are seeing improvements for the performance issues that were affecting all pods.
23:41 UTC | 16:41 PT
Systems are still down across all pods. We’ve located a likely root cause and are still investigating the issue.
23:25 UTC | 16:25 PT
We’re continuing to investigate the issue, which is currently affecting all pods. Stay tuned!
23:01 UTC | 16:01 PT
We are investigating connectivity issues for some customers.
Our View Service needed to be reloaded as part of a routine database password rotation. When the reload command was issued, some View workers did not reload properly and became stuck. This caused a service disruption in which the View service became unavailable and requests began hitting the database directly. We resolved the issue by disabling the stuck nodes, which allowed the View service to restart, and then clearing the firewall devices to complete the recovery. To help prevent this type of issue from recurring, we are investigating safer ways to restart our services.
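One safer restart pattern the summary hints at is a rolling reload: restart workers one at a time and confirm each comes back healthy before touching the next, so a reload that leaves workers stuck halts early instead of taking the whole service down. The sketch below is purely illustrative and does not reflect Zendesk's actual tooling; `reload` and `healthy` are hypothetical injected callables standing in for the real service-reload and health-check operations.

```python
def rolling_reload(workers, reload, healthy, retries=3):
    """Reload each worker in turn; return the list of workers reloaded.

    Raises if a worker fails to come back healthy, leaving the remaining
    workers untouched so the blast radius stays at one worker.
    """
    done = []
    for worker in workers:
        reload(worker)  # e.g. pick up rotated database credentials
        # Probe up to `retries` times; a worker that never reports
        # healthy is treated as stuck and aborts the rollout.
        if not any(healthy(worker) for _ in range(retries)):
            raise RuntimeError(f"{worker} stuck after reload; aborting rollout")
        done.append(worker)
    return done
```

In a fleet-wide reload, one bad worker can strand every node at once; with this pattern the first stuck worker stops the rollout while the rest keep serving traffic.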
FOR MORE INFORMATION
For current system status information about your Zendesk, check out our system status page. During an incident, you can also receive status updates by following @ZendeskOps on Twitter. The summary of our post-mortem investigation is usually posted here a few days after the incident has ended. If you have additional questions about this incident, please log a ticket with us.