We have updated our automation to apply the fix upon new instance creation, eliminating the impact of this incident.
Feb 18, 21:45 UTC
We have implemented a fix for this issue, and all Cx stacks have been repaired. The remaining Single Instance stacks are expected to complete by 2/17/24 PDT. We are monitoring the results to confirm the resolution and will continue to provide additional updates as they become available.
Feb 17, 06:50 UTC
We have implemented a fix for this issue. We are monitoring the results to confirm the resolution and will continue to provide additional updates as they become available.
Feb 16, 19:58 UTC
Our technical teams have identified the issue, and a hot-fix has been applied across the fleet. Once the issue is confirmed as resolved, root cause messaging will be shared accordingly.
Feb 16, 08:14 UTC
Splunk has identified the issue causing memory growth on the indexers and is taking steps to remediate it over the next 3 hours.
Feb 16, 04:30 UTC
We are investigating a potential issue in which Splunk instances are experiencing out-of-memory events, causing searches to fail or take longer to complete on multiple indexers and search heads. This may impact several Splunk Cloud Platform customers.
Our teams are working to resolve this issue. We appreciate your patience and will provide further updates upon resolution.
Feb 16, 03:50 UTC