Logs indicate the issue has resurfaced, so we are updating the impact level; the team continues to investigate.
Posted Jul 16, 2019 - 04:54 UTC
Monitoring
A fix has been implemented and we are monitoring the results.
Posted Jul 16, 2019 - 03:14 UTC
Identified
The team has identified the root cause and is deploying a fix.
Posted Jul 16, 2019 - 02:59 UTC
Update
End of day update: the team continues to investigate the root cause. The workaround provided earlier (see the update below) should mitigate customer impact. Investigation and updates will continue tomorrow morning.
Posted Jul 15, 2019 - 22:59 UTC
Update
Investigation of the issue continues; another update will be provided in approximately one hour.
Posted Jul 15, 2019 - 21:48 UTC
Update
We are continuing to investigate this issue.
Posted Jul 15, 2019 - 20:52 UTC
Update
Workaround for the current Splunk Forwarder service delays: customers should increase maxFailuresPerInterval from its default of 2 to 10 in the forwarder's outputs.conf file. See https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf for documentation. Investigation is ongoing.
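For reference, a minimal outputs.conf sketch of this change; the stanza name and server addresses below are illustrative placeholders, not values specific to this incident:

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997
    useACK = true
    # Raise maxFailuresPerInterval from its default of 2 to 10 so that
    # the roughly once-a-minute connection drops do not trigger backoff
    # and IP quarantine as quickly:
    maxFailuresPerInterval = 10

Restart the forwarder after making the change so the new setting takes effect.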
Posted Jul 15, 2019 - 19:51 UTC
Update
Updating the status to a partial outage due to severely degraded ingest performance of the forwarding service when forwarders have acknowledgments enabled.
Posted Jul 15, 2019 - 18:50 UTC
Investigating
We are experiencing an issue where connections from Splunk Forwarders to SCP are being dropped approximately every minute. This causes the forwarders to back off and quarantine the IP addresses of the service, which results in severe delays in data ingest. Only forwarders with acknowledgments enabled are affected, but disabling acknowledgments can lead to data loss. We are actively investigating the root cause and do not currently have any workarounds.
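For context, indexer acknowledgment is enabled per output group in the forwarder's outputs.conf via the useACK setting (the stanza name below is an illustrative placeholder). Setting useACK = false would avoid the backoff behavior but, as noted above, can lead to data loss, so we do not recommend it:

    [tcpout:primary_indexers]
    # Indexer acknowledgment; the drop/backoff/quarantine behavior
    # described above affects only forwarders where this is true.
    useACK = true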