Create indexes for Splunk automatically.

Our Splunk environment uses nearly a thousand indexes per region or cluster, with more being added daily. Why so many indexes? It’s all about administration. Say you have 5 separate websites or apps, each managed by a separate team. Each of these sites has a test and a production instance, so each site gets an index for test and an index for production. That’s 10 indexes for 5 sites, and each index can be assigned specifically to the team that owns it. Now imagine our environment with hundreds of sites, multiple environments, and most of them managed by different teams. That is how we ended up with such a large number of indexes, with more being added every day. The standard manual process of adding indexes just doesn’t scale for our needs, so I created some bash scripts, run by cron jobs, to automate the process.

Whenever a new site is added to our environment, we use Puppet to automagically install and configure the Splunk forwarder. Using some predefined arguments, the forwarder is configured to send data to a new index named according to our own naming scheme.
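To give an idea of what that boils down to on the forwarder, here is a minimal sketch. The site name, environment, naming scheme, and paths below are made-up examples, not our real scheme:

```shell
#!/usr/bin/env bash
# Sketch of the forwarder config Puppet templates out for a new site.
# SITE, STAGE, and the "<site>_<stage>" naming scheme are example values only.

SITE="${SITE:-mysite}"
STAGE="${STAGE:-prod}"
INDEX_NAME="${SITE}_${STAGE}"   # e.g. mysite_prod

# inputs.conf on the forwarder points the site's logs at that index
cat <<EOF
[monitor:///var/log/${SITE}/]
index = ${INDEX_NAME}
sourcetype = ${SITE}_logs
EOF
```

The key point is that the index name is derived deterministically from the site and environment, so nothing on the indexer side has to be created by hand first.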

Deciding when a new index needs to be added is obviously the most important part. When data is sent to the indexers to be consumed, it’s sent to a specified index. If the index doesn’t exist, the indexer complains about it in splunkd.log with something like “05-23-2014 23:31:41.550 +0000 WARN  IndexProcessor – received event for unconfigured/disabled/deleted index=’INDEX_NAME’ …”. So I created a script that grabs these events, builds a list of the missing index names, and sends it over to a script on the cluster master (CM).
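A minimal sketch of that indexer-side collector looks something like this. The log path and the destination host are placeholders, and the sed pattern simply pulls the name out of the quoted index='...' field in the WARN line:

```shell
#!/usr/bin/env bash
# Sketch: scan splunkd.log for events sent to indexes that don't exist yet,
# and build a deduplicated list of the missing index names.

extract_missing_indexes() {
    # $1 = path to splunkd.log; prints one index name per line, deduplicated
    grep "received event for unconfigured/disabled/deleted index=" "$1" \
      | sed -n "s/.*index='\([^']*\)'.*/\1/p" \
      | sort -u
}

# Cron would run something like this, then ship the list to the CM
# (log path and CM host are placeholders):
# extract_missing_indexes /opt/splunk/var/log/splunk/splunkd.log > /tmp/missing_indexes.txt
# scp /tmp/missing_indexes.txt splunk@CM_HOST:/tmp/incoming/
```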

The CM receives these index names and runs them through a script that makes sure each index doesn’t already exist in the CM’s indexes.conf, then adds the new ones to a temp file.
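That dedup check can be sketched like this. The indexes.conf path is a placeholder, and the stanza match assumes index names are plain alphanumeric/underscore strings:

```shell
#!/usr/bin/env bash
# Sketch of the CM-side filter: given a list of candidate index names, keep
# only the ones with no existing [stanza] in indexes.conf.

filter_new_indexes() {
    # $1 = candidate list (one index name per line), $2 = path to indexes.conf
    while IFS= read -r idx; do
        [ -n "$idx" ] || continue
        # An index stanza is a line like: [index_name]
        grep -q "^\[$idx\]" "$2" || printf '%s\n' "$idx"
    done < "$1"
}

# Cron on the CM might do (paths are placeholders):
# filter_new_indexes /tmp/incoming/missing_indexes.txt \
#     /opt/splunk/etc/master-apps/_cluster/local/indexes.conf >> /tmp/indexes_to_add.txt
```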

Now that we know there are new indexes to be added, the CM needs to add them. I created another script that reads these lists, appends the new index stanzas to indexes.conf, and issues the ‘<SPLUNK_HOME>/bin/splunk apply cluster-bundle’ command to push the new indexes out to all of the peers in the cluster.
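A sketch of that final step is below. The stanza settings are examples of what a clustered index stanza might contain, not our exact configuration, and the paths are placeholders:

```shell
#!/usr/bin/env bash
# Sketch: append a stanza for each new index to indexes.conf, then push the
# bundle to the peers. Settings and paths are examples only.

append_index_stanzas() {
    # $1 = file listing new index names, $2 = indexes.conf to append to
    while IFS= read -r idx; do
        [ -n "$idx" ] || continue
        cat >> "$2" <<EOF

[$idx]
homePath   = volume:primary/$idx/db
coldPath   = volume:primary/$idx/colddb
thawedPath = \$SPLUNK_DB/$idx/thaweddb
repFactor  = auto
EOF
    done < "$1"
}

# append_index_stanzas /tmp/indexes_to_add.txt \
#     /opt/splunk/etc/master-apps/_cluster/local/indexes.conf
# /opt/splunk/bin/splunk apply cluster-bundle --answer-yes
```

Note that repFactor = auto is what makes an index participate in cluster replication; without it the peers would each keep a non-replicated copy.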


Ultimately, whenever a new site is launched, Puppet installs the forwarder and generates the index name. Data is sent to that index name on the peers, the scripts find the related errors on the indexers, and the needed indexes are auto-created. This solution has worked very well for us and has created thousands of indexes across several regions and datacenters.

Note: I have cut most of the hostnames and environment specifics out of these scripts, so if you find them useful, they will not work by just copying and pasting. You will need to configure them for your environment.