Resolved -
This incident has been resolved.
Jul 11, 14:36 UTC
Update -
We've downgraded this incident and marked ingestion as operational now that we have duplicate ingestion infrastructure.
Replay is working normally and we are continuing to process the delayed recordings.
Jul 10, 13:23 UTC
Update -
We've duplicated our ingestion infrastructure so that we can protect current recordings from the delay.
You should no longer see any delay on ingestion of current recordings.
We'll continue to ingest the delayed recordings in the background.
Jul 10, 11:19 UTC
Update -
We're continuing to work on increasing ingestion throughput.
Sorry for the continued interruption.
Jul 10, 09:25 UTC
Update -
We're continuing to slowly catch up with ingestion. We're being a little cautious as we don't want to overwhelm Kafka while we're making solid progress.
We appreciate that delays like this are super frustrating, and we're really grateful for your patience 🙏
Jul 9, 14:00 UTC
Update -
We've continued to monitor ingestion overnight. Some Kafka partitions are completely caught up, so some people won't experience any delay.
Unfortunately, others are still lagging, so you may still see delayed availability of recordings.
Really sorry for the continued interruption!
Jul 9, 05:56 UTC
Update -
We're continuing to monitor recovery. Apologies for the delay!
Jul 8, 18:13 UTC
Monitoring -
We've confirmed that the config rollback has resolved the problem, but we've kept ingestion throttled to ensure systems can recover.
We're slowly increasing the ingestion rate to allow recovery and will keep monitoring.
Sorry for the interruption.
Jul 8, 14:05 UTC
Identified -
A recent config change unexpectedly impacted processing speed during ingestion of recordings.
The change has been rolled back, and we're monitoring for recovery.
Jul 8, 11:31 UTC