




Downloading LLM Command Scoring
SHA256 checksums:

  • llm-command-scoring_210.tgz — 3c116553e66b89b18419db4429121712387ad2c8e710b1cdbfce244382c52670
  • llm-command-scoring_201.tgz — 49107a53a03a5e97e91785646d503fcc861ef8a32841d7753380f32b2234ebcb
  • llm-command-scoring_200.tgz — 8323dddeb9e8539948e53bd2ae4ee7c8a7522bb3a8ff115357b7dce5e09de981
  • llm-command-scoring_100.tgz — ac3926198e57db7045b11077b4d99f3997b72a2d8bfd80b53de369cecce63311
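A downloaded package can be verified against the checksums above using Python's standard hashlib; the filename and hash below are taken from the v2.1.0 entry in the list:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published checksum for v2.1.0:
# sha256_of("llm-command-scoring_210.tgz") should equal
# "3c116553e66b89b18419db4429121712387ad2c8e710b1cdbfce244382c52670"
```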



LLM Command Scoring

Splunk Cloud
TA-llm-command-scoring is a Splunk Technology Add-on that provides a custom streaming command designed specifically for evaluating command-line arguments (CLAs) from process events. It leverages large language models to assess the likelihood that a given CLA is malicious, assigning a simple, interpretable score.

This add-on isn’t a general-purpose AI chatbot or prompt interface. It doesn’t aim to replace Splunk’s `| ai prompt=` command from MLTK v5.6. Instead, it's a purpose-built, lightweight assistant focused solely on scrutinizing CLAs—a specialized tool to help SOC analysts cut through noise and surface risky executions fast.

The custom command accepts a field that contains a valid Command Line Argument, e.g.: `powershell.exe -nop -w hidden -enc aAB0AHQAcAA6AC8ALwAxADAAMAAuADEAMAAwAC4AMQAwADAALwBtAGEAbAB3AGEAcgBlAC4AZQB4AGUA`

It will ask the chosen AI model to scrutinize the command and will respond with a Likert-type score:

[5] Definitely Malicious
[4] Possibly Malicious
[3] Unclear
[2] Likely Benign
[1] Definitely Benign
[0] Invalid Process Command

and a short explanation of why it chose that score. It integrates directly into Splunk searches via a custom streaming command and leverages LLMs' ability to read between the lines — at scale, without fatigue.
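Downstream of the command, the bracketed score in this format is easy to extract. A minimal sketch (illustrative only, not the add-on's actual parsing code):

```python
import re

def parse_likert_score(response: str):
    """Extract the bracketed [0-5] Likert score from a model response, or None."""
    match = re.search(r"\[([0-5])\]", response)
    return int(match.group(1)) if match else None

# e.g. "[4] Possibly Malicious - encoded payload" -> 4
```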

🧠 TA-llm-command-scoring



⚙️ Features

  • 🧠 Supports LLM models from OpenAI, Google, and locally run models via Ollama to evaluate command-line arguments in real time
  • 🔐 Secure API key handling via Splunk's native credential storage
  • ⚡ Fast, streaming-compatible custom search command
  • 🔎 Customizable model, temperature, and output fields
  • 🧩 Modular GPT client with pre-prompt integrity check to block tampering

🧪 Usage

This app ships with an authorize.conf that defines a role called "can_run_claaiscore". Only users with the admin role plus this role can view the app and run the command. However, the custom command can be run from any app, as it is exported globally.

| your_search_here 
| claaiscore textfield=process api_name=my-openai-key

Other optional params

  • api_url — the provider's API endpoint (defaults to the OpenAI or Google Gemini endpoint current as of July 2025)
  • temperature — a number from 0.0 to 1.9 that controls the randomness or creativity of the model's responses (defaults to 0.0)
  • output_field — a field name for the AI's response (defaults to ai_mal_score__by<llm provider, i.e.: openai | google>__<name of the input textfield>)
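The default output field naming described above can be illustrated with a small helper (hypothetical; the add-on's internal code may differ):

```python
def default_output_field(provider: str, textfield: str) -> str:
    """Build the default output field name: ai_mal_score__by<provider>__<textfield>."""
    return f"ai_mal_score__by{provider}__{textfield}"

# e.g. scoring the "process" field via OpenAI yields the
# field name "ai_mal_score__byopenai__process"
```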

Example search

| tstats max(_time) as _time from datamodel=Endpoint.Processes where Processes.process_name="lsass.exe" by Processes.user Processes.process
| rename Processes.* as *
| claaiscore textfield=process api_name=chatgpt-expires-aug2025 output_field=decision
| fields _time user process decision__byopenai__process
| where match(decision__byopenai__process, "\[[45]\].+Malicious")
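The final `where` clause keeps only responses scored [4] or [5]. The same regex behaves like this in plain Python (the sample responses below are made up for illustration):

```python
import re

# The regex from the where clause above: score 4 or 5 followed by "Malicious"
pattern = re.compile(r"\[[45]\].+Malicious")

responses = [
    "[5] Definitely Malicious - base64-encoded download cradle",
    "[2] Likely Benign - routine scheduled task",
    "[4] Possibly Malicious - hidden window with encoded command",
]
flagged = [r for r in responses if pattern.search(r)]
# flagged keeps only the [5] and [4] entries
```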

Release Notes

Version 2.1.0
July 22, 2025

Version 2.1 TA-llm-command-scoring - now with Ollama!

  • Now supports the Ollama engine, which runs LLMs locally on your machines
  • If you use Ollama as the LLM provider, enter any placeholder value in the API Key field during setup; it is not used
  • An API URL is required when using Ollama; if you omit the port, the default 11434 is used
  • Works on Splunk Enterprise 10
  • Refactored code for readability
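The port-defaulting behavior for Ollama described above could look like this (a sketch, not the add-on's actual code; it assumes an http(s) URL):

```python
from urllib.parse import urlparse

OLLAMA_DEFAULT_PORT = 11434

def normalize_ollama_url(api_url: str) -> str:
    """Append Ollama's default port 11434 when the URL does not specify one."""
    parsed = urlparse(api_url)
    if parsed.port is None:
        # Rebuild the netloc with the default port, preserving any path
        parsed = parsed._replace(netloc=f"{parsed.netloc}:{OLLAMA_DEFAULT_PORT}")
    return parsed.geturl()
```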
Version 2.0.1
July 18, 2025

LLM Command Scoring Version 2

Now includes a Google Gemini LLM provider and a prettier setup page.

This project started as a way to sharpen my Python and JavaScript skills while riding the LLM/AI wave — and it's been a wild, rewarding ride so far.

To be clear: this isn't meant to replace Splunk MLTK's | ai prompt=<your prompt> command. If you're looking for a general-purpose LLM interface inside Splunk, MLTK 5.6's | ai is still the gold standard.

What I built is purpose-driven and opinionated:

🛡️ A custom command focused solely on evaluating command-line arguments — and scoring them from 1 (benign) to 5 (malicious).

Version 2 highlights:

  • ✅ Choose between OpenAI or Google Gemini as your LLM backend
  • ✅ Brand-new UI for API credential management
  • ✅ Better error handling
  • ✅ Bug fixes
  • ⚠️ Note: this version introduces breaking changes from v1
Version 2.0.0
July 22, 2025
  • Better setup page
Version 1.0.0
July 14, 2025

Houses a custom Splunk command that queries OpenAI's GPT to assess whether a process' command-line argument (CLA) appears malicious.


