Splunk App for Hyperledger Fabric
The Splunk App for Hyperledger Fabric contains a set of dashboards and analytics to give you full visibility into the system metrics, application data and ledger so that you can maintain security, stability and performance for your Hyperledger Fabric deployment.
These dashboards are meant to be a starting point for building analytics around your environment whether your infrastructure is virtual or physical, on-premise or in the cloud.
To take full advantage of the dashboards provided, there are four types of data sources you should configure:
- Hyperledger Fabric Distributed Ledger - These logs contain transaction information from the ledger itself and provide insight into operations and actions on-chain. We’ve open sourced our Splunk Connect for Hyperledger Fabric to help you easily ingest Hyperledger Fabric ledgers in Splunk.
- Hyperledger Fabric Application Logs - Application logs provide information about specific Hyperledger components such as the Orderers, Peer Nodes and other services (CouchDB and Kafka) useful for troubleshooting, debugging and monitoring application performance.
- Hyperledger Fabric Metrics (v1.4 and above) - These are metrics specific to Hyperledger Fabric components and performance. A reference for these metrics is available in the Hyperledger Fabric operations documentation.
- Infrastructure/System Level Metrics and Logs - System metrics such as CPU, MEM, DISK and NETWORK activity provide insight into the underlying infrastructure Hyperledger Fabric nodes are running on. These metrics/logs could come from physical machines, Docker, Kubernetes, IBM IKS, Microsoft Azure, Google’s GCP and AWS Cloudwatch to name a few. Splunk has different Add-ons and connectors for each.
App Features
Dashboards
There are a few dashboards provided to get you started with analyzing your Hyperledger Fabric deployment. These include:
- Data Setup - A dashboard to ensure that your Splunk environment is receiving all the data the application requires.
- Network Architecture and Channels - See at a glance the number of orderers, peers, and channels in your Hyperledger Fabric network.
- Infrastructure Health and Monitoring - An overview of system health built from system metrics such as CPU usage and uptime status, as well as transaction latency. You can see in real time when transactions are starting to back up or a peer is falling behind on blocks.
- Transaction Analytics - Real time visibility into the transactions being written on each ledger. In this dashboard, we’re blending ledger data sent from the peers with logs and metrics to give a holistic view of the network’s health.
Field Extractions and Aliases
The app provides a number of field extractions and aliases that make searching and investigating Hyperledger Fabric data easier. These include parsing CouchDB logs for actions (GET, PUT, POST, etc.) and documents, parsing chaincode logs for channel and latency metadata, and field aliases for accessing various parts of ledger transactions. To see the full list, look at the props.conf file or go to Settings > Fields in Splunk.
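As an illustration, a search along these lines could use the CouchDB action extraction to count operations by type. The index and field names here are hypothetical and depend on the extractions defined in props.conf:

```
index=hyperledger_logs sourcetype=couchdb | stats count by action
```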
Getting Started
- Install the App on a Splunk Enterprise Search Head that will have access to the data.
- Open the App and navigate to the “Data Setup” dashboard from the Introduction Page.
- Follow the instructions for each of the four data sources on the “Data Setup” page to populate the graphs and validate that data is coming in correctly.
- Hyperledger Fabric Ledger Logs - The Splunk Connect for Hyperledger Fabric is an open source agent that connects to a peer on the Hyperledger Fabric network. See the README on Github here for deployment instructions. Docker, Kubernetes, and native deployments are all options.
- Hyperledger Fabric Application Logs - There are several options for getting data in from your Hyperledger Fabric environment, depending on where and how the nodes are hosted. You will need to create an index in Splunk as well as an input mechanism to receive the data. We usually create indexes named “hyperledger_logs” and “hyperledger_metrics” and enable the Splunk HTTP Event Collector (HEC) to receive data. You can use the examples provided in the app: rename “indexes.conf.example” to “indexes.conf” to enable the indexes, and rename “inputs.conf.example” to “inputs.conf” to enable the HEC endpoints. You will also need to enable HEC if it is not enabled already.
$ cd $SPLUNK_HOME/etc/apps/splunk-hyperledger-fabric/default
$ sudo mv inputs.conf.example inputs.conf
$ sudo mv indexes.conf.example indexes.conf
$ cd $SPLUNK_HOME/bin
$ sudo ./splunk restart
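For reference, a minimal indexes.conf for the two indexes might look like the following. This is a sketch, not the app's exact shipped configuration; note that metrics data must land in a metrics-type index:

```ini
[hyperledger_logs]
homePath   = $SPLUNK_DB/hyperledger_logs/db
coldPath   = $SPLUNK_DB/hyperledger_logs/colddb
thawedPath = $SPLUNK_DB/hyperledger_logs/thaweddb

# StatsD metrics require a metrics-type index.
[hyperledger_metrics]
homePath   = $SPLUNK_DB/hyperledger_metrics/db
coldPath   = $SPLUNK_DB/hyperledger_metrics/colddb
thawedPath = $SPLUNK_DB/hyperledger_metrics/thaweddb
datatype   = metric
```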
Supported Log Ingestion Methods
Also make sure to set the following environment variable in your Hyperledger Fabric environments:
FABRIC_LOGGING_FORMAT=json
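To sanity-check the logging pipeline, you can hand-craft an event in the HEC envelope and post it to your collector. The host, port, token, and log fields below are placeholders, not values shipped with the app:

```shell
# Hypothetical HEC endpoint and token -- replace with your own values.
HEC_URL="https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# A sample JSON-formatted peer log line, the shape FABRIC_LOGGING_FORMAT=json produces.
EVENT='{"level":"info","name":"gossip.comm","msg":"Exiting"}'

# Wrap it in the standard HEC event envelope, targeting the hyperledger_logs index.
PAYLOAD="{\"event\": $EVENT, \"sourcetype\": \"fabric_logs\", \"index\": \"hyperledger_logs\"}"

# Validate the envelope is well-formed JSON before sending.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Post it to HEC (uncomment once HEC_URL and HEC_TOKEN are real):
# curl -k "$HEC_URL" -H "Authorization: Splunk $HEC_TOKEN" -d "$PAYLOAD"
```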
Hyperledger Fabric Metrics (v1.4 and above)
Hyperledger Fabric 1.4 exposes metrics for ingestion using StatsD. You can set up Splunk to ingest from StatsD.
- Create a UDP data input following the Splunk documentation, OR use the example “inputs.conf.example” provided in the app. Simply rename the file from “inputs.conf.example” to “inputs.conf” and restart Splunk.
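For example, a UDP input stanza along these lines would listen for StatsD traffic and route it to a metrics index. The port number is an illustrative choice, not one mandated by the app:

```ini
# Listen for StatsD metrics on UDP 8125 (hypothetical port).
[udp://8125]
sourcetype = statsd
index = hyperledger_metrics
no_appending_timestamp = true
```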
- Now set the following environment variables in your Hyperledger Fabric environment:
CORE_METRICS_PROVIDER: statsd
CORE_METRICS_STATSD_NETWORK: udp
CORE_METRICS_STATSD_ADDRESS: [SPLUNK-HOST]:[PORT]
ORDERER_METRICS_PROVIDER: statsd
ORDERER_METRICS_STATSD_NETWORK: udp
ORDERER_METRICS_STATSD_ADDRESS: [SPLUNK-HOST]:[PORT]
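If your nodes run under Docker Compose, these variables go in each container's environment block. The service name, image tag, and address below are illustrative only:

```yaml
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer:1.4
    environment:
      - FABRIC_LOGGING_FORMAT=json
      - CORE_METRICS_PROVIDER=statsd
      - CORE_METRICS_STATSD_NETWORK=udp
      - CORE_METRICS_STATSD_ADDRESS=splunk.example.com:8125
```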
- Open the Metrics Workspace to explore and analyze your metrics.
- System Logs/Metrics - Depending on how you’ve deployed your Hyperledger Fabric network, there is likely a well-supported option for getting your system logs and metrics in for end-to-end visibility. On the Data Setup dashboard, we’ve provided a list of common options for getting this data into Splunk.
You are now ready to use the Splunk App for Hyperledger Fabric!