
Upgrades from 5.3.1 to 6.2.x

As the sourcetypes changed between v5.3.1 and v6.2.x, upgrading the agents and the uberAgent UXM app for Splunk leaves the new dashboards without the historical data, because events indexed by the previous version carry a different sourcetype than the current one. Is there an easier way to get these stats into the dashboards after the upgrade, other than running the queries manually against the older sourcetypes?
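For example, the kind of manual query we have in mind would look roughly like this (the index name uberagent is only a placeholder, and the old Logon sourcetypes are matched with a wildcard):

    index=uberagent sourcetype=uberAgent:Logon:* earliest=-90d
    | timechart span=1d count by sourcetype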

3 comments

  • Martin Kretzschmar (Official comment)

    Hi Andy,

    We always aim to provide backward compatibility with existing data during uberAgent product updates.

    However, a sourcetype change may be technically necessary in rare cases, and some limitations may not always be avoidable.

    Is there a specific sourcetype change you are concerned about?

    Kind regards, Martin

  • Druva Kumar

    Hi Martin,

    I'm working alongside Andy. The sourcetype uberAgent:Logon:ADLogonScriptTimeMs that was present in uberAgent version 5.3.1 is no longer available in version 6.2.3 and appears to have been replaced by the sourcetype uberAgent:Logon:LogonDetail.

    This is one of the sourcetypes we have identified as changed in the newer version, and it would be difficult for us to go through all the sourcetypes manually; a query like the sketch below could help enumerate them.
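    To at least list which Logon sourcetypes actually exist in the indexed data before and after the upgrade, something along these lines should work (the index name uberagent is again only a placeholder):

        | tstats count where index=uberagent sourcetype=uberAgent:Logon:* by sourcetype
        | sort - count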

    Also, we have another concern with the data models in the uberAgent UXM app for Splunk. The app's dashboards rely on data model acceleration, but the app on Splunkbase ships with acceleration disabled by default. When we enabled acceleration for these data models, our indexers went down multiple times with out-of-memory issues caused by the data model acceleration of the uberAgent UXM app, so we had to disable the acceleration again.

    Could you please suggest a way to enable the acceleration without impacting the Splunk services? Please note that we have kept the default data model configuration, such as backfill days and cron scheduling.

    PS: We have 10 indexers with 64 GB of memory each, and normal memory usage averages around 10% on each indexer.

     

    Thanks,
    Druva

     

  • Martin Kretzschmar
    Hi Druva,

    Thank you for your feedback. Your example is one of the very few cases where updated uberAgent dashboards do not visualize historical data.
     
    Prior to uberAgent 6, there were 10 different KV sourcetypes, all starting with uberAgent:Logon:* (e.g. uberAgent:Logon:ADLogonScriptTimeMs or uberAgent:Logon:SessionLogonTime), which were then joined using a transaction-based search in the Splunk data model.
     
    Starting with uberAgent version 6, we merged those into the new CSV sourcetype uberAgent:Logon:LogonDetail, which brings many benefits such as lower data volume and overall better data quality.
     
    Another example, also from version 6, where a sourcetype adjustment does not lead to non-visualized data: we replaced the KV sourcetype uberAgent:Logon:GroupPolicyCSEDetail with the CSV sourcetype uberAgent:Logon:GroupPolicyCSEDetail2. Although the sourcetype name changes here, we took care of it by simply adjusting the search constraint of the data model, as shown below.
     
    index=`uberAgent_index` sourcetype=uberAgent:Logon:GroupPolicyCSEDetail2 OR sourcetype=uberAgent:Logon:GroupPolicyCSEDetail

    One option to continue accessing historical login data for a certain period of time would be a dedicated search head or cluster that continues to run the uberAgent search head app in version 5.3.1.
    It may also be worth considering disabling data model acceleration and saved searches on these systems to reduce the load; a configuration sketch follows below.
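    As a rough sketch only, not the actual uberAgent configuration: the stanza names below are placeholders, but the settings acceleration (datamodels.conf) and enableSched (savedsearches.conf) are standard Splunk options for turning off acceleration and scheduling in a local configuration layer.

        # local/datamodels.conf -- placeholder data model name
        [Example_uberAgent_DataModel]
        acceleration = 0

        # local/savedsearches.conf -- placeholder saved search name
        [Example uberAgent Scheduled Search]
        enableSched = 0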

    It is, among other things, a requirement by Splunk that the apps in Splunkbase must not have data model acceleration enabled.
    The apps available for download on our website come with acceleration enabled by default.

    In our experience, resource problems in combination with accelerated data models have mostly been due to bugs in the deployed Splunk version or configuration issues.

    In cases where the initial creation of the data model summaries led to problems, we have had positive experiences with modifying the setting acceleration.backfill_time (datamodels.conf) from -7d (our default) to, e.g., -2d; a sketch follows below.
    Please see the Splunk documentation on datamodels.conf for details.
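    For illustration, such an override could look like this in a local datamodels.conf; the stanza name is a placeholder for the affected uberAgent data model, and only the backfill window changes:

        # local/datamodels.conf -- placeholder data model name
        [Example_uberAgent_DataModel]
        # build acceleration summaries for the last 2 days instead of the default -7d
        acceleration.backfill_time = -2d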

    We highly recommend involving Splunk Support in severe cases like the one you described.
     
    Thanks, Martin