
Splunk | uberAgent:Process:NetworkTargetPerformance | Date parser error

Hello :)

Since last week we have been receiving the error message pasted below. It comes from our Windows machines, to which we do not have direct access. My idea was to get an event sample, match a new regex to it, and update the conf file; however, we do not know where this data is coming from (which file was it extracted from?) or which conf stores the stanzas for it.

Do you have a timestamp sample or a ready-to-use regex?

What would your recommendation be for solving this issue?

Thank you in advance,

ilkay

*****error message:

Accepted time format has changed ((?i)(?<![\d\.])([01]\d|2[0-3])\.(?i)([0-6]\d)(?!\d)(?:\.?(?i)([0-6]\d)(?!\d)(?:[:,]\d+)?(?:\.(\d\d\d\d+))?) {0,2}(?i)((?:(?:UT|UTC|GMT(?![+-])|CET|CEST|CETDST|MET|MEST|METDST|MEZ|MESZ|EET|EEST|EETDST|WET|WEST|WETDST|MSK|MSD|IST|JST|KST|HKT|AST|ADT|EST|EDT|CST|CDT|MST|MDT|PST|PDT|CAST|CADT|EAST|EADT|WAST|WADT|Z)|(?:GMT)?[+-]\d\d?:?(?:\d\d)?)(?!\w))?(?![0-9\.])), possibly indicating a problem in extracting timestamps. Context: source=uberAgent|host=*|uberAgent:Process:NetworkTargetPerformance|

3 comments

  • Dominik Britz

    Hi,

    Please only change the .conf files of the uberAgent app when instructed to by our support. We can't support installations whose settings deviate from the defaults.

    This is not the first time we have seen truncated events in Splunk. Every time we have seen this issue, uberAgent was not the cause. The cause was either an overloaded Splunk server (such as a heavy forwarder sitting between the endpoint and the indexer) or something in the network that manipulated the TCP packets. While we do not have a ready-made solution, we do have a suggestion for how you can narrow down the issue. The goal is to isolate a (possibly) corrupted link in the chain.

    I understand that you don't have direct access to the endpoints. However, our suggestion requires that access, so please ask a client engineer on your team for help.

    Enable tracing for POQ logs

    uberAgent uses a local SQLite database to buffer events before sending them to Splunk, so that events don't get lost if the machine temporarily has no network connection or similar. This buffering can be logged, which lets you see the raw events that go into the SQLite database. Whatever is written to the database is also what is sent to Splunk. In other words, if an event is written to the database correctly, uberAgent is not the component causing the problem.

    In your uberAgent configuration, enable trace logging for POQ events. Restart the agent service afterward to apply the config change.

    [Miscellaneous]
    DebugMode = true
    ConfigFlags = TraceLogFilterExpression:.*POQ.*

    Once configured, you will see messages like the one below in the log file.

    2023-04-20 08:58:50.091 +0200,TRACE,DOMAIN,HOSTNAME$,9008,LogEvent,Event POQ/queue send: Source: Queue - Success: 1 - HttpStatusCode: 0 - # rows: 1 - Event: ***SPLUNK*** host=HOSTNAME index=uberagent source=uberAgent sourcetype=uberAgent:System:SmbClient

    Now compare your corrupted events in Splunk with the events in the log file. 
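
    For example, a small helper script along these lines could pull the raw payloads out of the trace log for a side-by-side check. This is only a sketch: the "***SPLUNK***" marker and the general line layout are taken from the sample above, and the log file path is supplied as an argument; both may look different in your environment.

    #!/usr/bin/env python3
    """Extract raw event payloads from uberAgent POQ trace log lines so they
    can be compared against the events indexed in Splunk."""
    import re
    import sys

    # The payload follows this marker in a 'POQ/queue send' trace line (see the sample above).
    MARKER = "***SPLUNK***"
    SOURCETYPE_RE = re.compile(r"sourcetype=(\S+)")

    def main(log_path: str) -> None:
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                if "POQ/queue send" not in line or MARKER not in line:
                    continue
                payload = line.split(MARKER, 1)[1].strip()
                match = SOURCETYPE_RE.search(payload)
                sourcetype = match.group(1) if match else "unknown"
                # A payload that is complete here but truncated in Splunk
                # points away from the agent as the cause.
                print(f"{sourcetype}\tlen={len(payload)}\t{payload}")

    if __name__ == "__main__":
        main(sys.argv[1])

    Run it with the path to the uberAgent log file as the only argument and filter the output for the sourcetype you are interested in.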

    If that does not help, there are further steps we can try, but the above is the easiest to configure.

    Best regards

    Dominik

  • ilkay amirova

    Hello Dominik,

    I checked the logs we are currently receiving and

    sourcetype=uberAgent:System:SmbClient 

    is among them. This makes me assume POQ is already enabled. 
    To be honest, these logs are not telling me much. Could you elaborate on what to compare between corrupted events and POQ logs?
    Sample timestamp of one of our events:
    2024-07-05T13:23:50.972+02:00
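
    In case it helps, this is roughly the kind of props.conf stanza I had in mind, based only on the sample timestamp above. The strptime fields and the lookahead value are my assumptions, and we would of course not apply anything without your guidance, given your note about not changing the app's .conf files:

    [uberAgent:Process:NetworkTargetPerformance]
    # Sketch only - intended to match 2024-07-05T13:23:50.972+02:00
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
    MAX_TIMESTAMP_LOOKAHEAD = 32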

    all the best,

    ilkay

  • Dominik Britz

    I've created a ticket to analyze this further. Any findings will be posted here.
