This search command is packaged with the following external libraries:
+ Splunk SDK for Python (http://dev.splunk.com/python)
+ Python Croniter Library (https://github.com/taichino/croniter)
+ Python dateutil Library (https://github.com/dateutil/dateutil)
+ Python six Library (https://pypi.org/project/six/)
+ Python natsort Library (https://github.com/SethMMorton/natsort)
Nothing further is required for this add-on to function.
Follow standard Splunk installation procedures to install this app.
Reference: https://docs.splunk.com/Documentation/AddOns/released/Overview/Singleserverinstall
Reference: https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall
The purpose of this command is to help visualize cron schedules and produce timestamps for expected runs based on a cron schedule. It was created largely to answer the question, "How many searches will be running at time block X based on the current search schedules?" While it may be used in other contexts, the command was built for that single purpose.
| croniter iterations=25 input=cron_schedule start_epoch=timestamp_field
Or
| croniter input=cron_schedule start_epoch=timestamp_field end_epoch=timestamp_field
Note that if both "iterations" and "end_epoch" are specified, end_epoch takes precedence.
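The stopping behavior can be sketched in plain Python. This is a hypothetical stand-in (the actual command uses the bundled croniter library to compute fire times); it only illustrates how iterations and end_epoch bound the generated timestamps, with end_epoch winning when both are given:

```python
def expected_runs(next_run, start_epoch, iterations=None, end_epoch=None):
    """Generate expected run epochs for one schedule.

    next_run(t) returns the first fire time strictly after epoch t.
    When both limits are given, end_epoch takes precedence.
    """
    runs, t = [], start_epoch
    if end_epoch is not None:
        while True:
            t = next_run(t)
            if t > end_epoch:
                break
            runs.append(t)
    elif iterations is not None:
        for _ in range(iterations):
            t = next_run(t)
            runs.append(t)
    return runs

# Stand-in schedule: "every 5 minutes" (*/5 * * * *), seconds since epoch
every_5m = lambda t: (int(t) // 300 + 1) * 300

print(expected_runs(every_5m, 0, iterations=3))                 # [300, 600, 900]
print(expected_runs(every_5m, 0, iterations=3, end_epoch=700))  # end_epoch wins: [300, 600]
```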
Starting from now, show the next 25 expected runs for each scheduled search, then combine them to show which times have the highest number of searches scheduled:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| where disabled=0 and is_scheduled=1
| table cron_schedule,title,disabled,is_scheduled
| croniter iterations=25 input=cron_schedule
| stats values(title) as searches,dc(title) as dc_searches by croniter_return
| convert ctime(croniter_return) timeformat="%Y-%m-%d %H:%M:%S"
| sort 0 - dc_searches
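Conceptually, the stats and sort steps above tally how many distinct searches are expected to fire at each timestamp. A minimal stand-alone sketch of that tally, using made-up search names and epochs (not output from the command), looks like:

```python
from collections import Counter

# Hypothetical sample data: search title -> expected run epochs
expected = {
    "search_a": [300, 600, 900],
    "search_b": [600, 1200],
    "search_c": [600, 900],
}

# Count distinct searches per expected run time
# (the role of `stats dc(title) by croniter_return`)
concurrency = Counter()
for title, epochs in expected.items():
    for epoch in set(epochs):
        concurrency[epoch] += 1

# Busiest time blocks first, like `sort 0 - dc_searches`
for epoch, dc_searches in concurrency.most_common():
    print(epoch, dc_searches)
```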
Same as the previous example, but generate five iterations starting from a timestamp two days in the past:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| where disabled=0 and is_scheduled=1
| table cron_schedule,title,disabled,is_scheduled
| eval start_epoch=relative_time(now(),"-2d@d")
| croniter iterations=5 input=cron_schedule start_epoch=start_epoch
| stats values(title) as searches,dc(title) as dc_searches by croniter_return
| convert ctime(croniter_return) timeformat="%Y-%m-%d %H:%M:%S"
| sort 0 - dc_searches
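For reference, Splunk's relative_time(now(), "-2d@d") applies the modifiers left to right: subtract two days, then snap to the start of the day. In plain Python (with a fixed stand-in for now()) that is roughly:

```python
from datetime import datetime, timedelta

now = datetime(2024, 3, 15, 13, 45, 27)  # stand-in for now()

# "-2d@d": go back two days, then snap to midnight
start = (now - timedelta(days=2)).replace(hour=0, minute=0, second=0, microsecond=0)
print(start)  # 2024-03-13 00:00:00
```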
Search using an end epoch instead of an iteration count as the stopping point for generation:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| where disabled=0 and is_scheduled=1
| table cron_schedule,title,disabled,is_scheduled
| eval myendepoch=relative_time(now(),"+3d@d")
| croniter end_epoch=myendepoch input=cron_schedule
If you require support or would like to contribute to this project, please see: https://gitlab.com/johnfromthefuture/TA-croniter. This app is supported by the developer as time allows.
1.0.4 - Confirmed compatibility with Splunk 8/py3
Initial release.