Data ingestion with HTTP Event Collector in Splunk Web vs data ingestion using Splunk REST API

I am new to Splunk and have been exploring its features. I have tried to ingest some dummy data into Splunk Web using the HTTP Event Collector (HEC). I wanted to know if there is any other REST API available in Splunk for data input. If so, what is the difference between HEC and the other REST API provided by Splunk? Thanks in advance for any insight. :)

Splunk's REST API is more for managing Splunk than for getting data into Splunk. You should continue to use HEC for ingesting data.
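For reference, here is a minimal sketch of sending one event to an HEC endpoint from Python; the host, token, and index below are placeholders for your own deployment:

```python
# Hedged sketch: post a single event to a Splunk HTTP Event Collector.
# Replace the URL, token, and index with values from your deployment.
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # your HEC token

event = {
    "event": {"message": "hello from HEC", "severity": "info"},
    "sourcetype": "_json",
    "index": "main",
}
resp = requests.post(HEC_URL, json=event,
                     headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                     verify=False)  # only for dev boxes with self-signed certs
resp.raise_for_status()
print(resp.json())  # {'text': 'Success', 'code': 0} on success
```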

Related

Send Google Home data to Splunk

I'm trying to send data generated by a Google Home mesh network to a Splunk instance. I'd especially like to capture which devices are connected to which points throughout the day. This information is available in the app, but there doesn't seem to be a way to stream it to a centralized logging platform. Is there a way I'm not seeing?
There are two methods of ingesting Google Cloud data supported by Splunk:
- Push-based method: data is sent to a Splunk HTTP Event Collector (HEC) through a Pub/Sub to Splunk Dataflow job.
- Pull-based method: data is fetched from the Google Cloud APIs through the Splunk Add-on for Google Cloud Platform.
It is generally recommended that you use the push-based method to ingest Google Cloud data into Splunk. The pull-based method is only recommended in certain circumstances, namely:
- Your Splunk deployment does not offer a Splunk HEC endpoint.
- Your log volume is low.
- You want to pull Cloud Monitoring metrics, Cloud Storage objects, or low-volume logs.
- You are already managing one or more Splunk heavy forwarders or are using a hosted Inputs Data Manager for Splunk Cloud.
More information on how to set this up and work through this problem can be found at the following link:
https://cloud.google.com/architecture/exporting-stackdriver-logging-for-splunk
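As a small illustration of the push-based path, here is a hedged sketch of publishing a test log message to the Pub/Sub topic that the Pub/Sub-to-Splunk Dataflow job reads from (the project and topic names are assumptions):

```python
# Hedged sketch: publish a test log message to the Pub/Sub topic consumed
# by the Pub/Sub-to-Splunk Dataflow job. Project and topic are placeholders.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "splunk-logs")  # assumed names

payload = json.dumps({"severity": "INFO", "textPayload": "hello from GCP"})
future = publisher.publish(topic_path, payload.encode("utf-8"))
print("Published message id:", future.result())
```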

Need a REST API which should provide server monitoring information?

I need a server monitoring REST API which should provide the points below. Can anyone suggest which one is best? I have found some tools like Nagios, Zabbix, and Grafana, but I'm not sure whether they provide a REST API.
1) Server response time monitoring
2) Ping monitoring
3) Port monitoring
4) Graph event presentation & logs APIs
5) CPU, hard disk, memory, Apache monitoring, etc.
Purpose of the required API
This API will integrate with application A, gathering information from application C, so that we can consolidate and present custom graphs in application A based on the JSON results.
Any suggestions would be great.
Both Nagios and Zabbix do actual data collection, but Grafana only visualizes the data, so you'll be looking at one of the first two for this API. Both have a JSON API:
https://www.nagios.org/ncpa/help/2.2/api.html
https://www.zabbix.com/documentation/current/manual/api
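As an example of the latter, a minimal sketch of calling the Zabbix JSON-RPC API from Python; the URL, credentials, and item ID are placeholders, and the login parameter name varies across Zabbix versions:

```python
# Hedged sketch: query the Zabbix JSON-RPC API. URL, credentials, and item
# ID are placeholders; user.login takes "username" in Zabbix >= 6.0 and
# "user" in older releases.
import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"

def zabbix_call(method, params, auth=None):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    if auth is not None:
        payload["auth"] = auth  # session token from user.login
    resp = requests.post(ZABBIX_URL, json=payload)
    resp.raise_for_status()
    return resp.json()["result"]

# Log in, then fetch recent history for one item (e.g. a CPU load item).
token = zabbix_call("user.login", {"username": "api-user", "password": "secret"})
history = zabbix_call(
    "history.get",
    {"itemids": ["12345"], "limit": 10, "sortfield": "clock", "sortorder": "DESC"},
    auth=token,
)
print(history)
```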

How to build a Google Analytics 'collect'-like API using Google Cloud services

I'm trying to build a data collection web endpoint. The use case is similar to the Google Analytics collect API. I want to add this endpoint (GET method) to all pages on the website and, on page load, collect page info through this API.
I'm thinking of doing this using Google Cloud services like Endpoints and BigQuery (for storing the data). I don't want to host it on any dedicated servers; otherwise, I will end up doing a lot of work managing/monitoring the service.
Please suggest how I can achieve this with Google Cloud services, or point me in the right direction if my idea is wrong.
I suggest focusing on deciding where you want your code to run. There are several GCP options that don't require dedicated servers:
Google App Engine
Cloud Functions/Firebase Functions
Cloud Run (new!)
Look here to see which support Cloud Endpoints.
All of these products can support running code that takes the data from the request and sends it to the BigQuery API.
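For instance, a minimal sketch of such a handler as a Python Cloud Function writing to BigQuery (the table name and fields are assumptions, and error handling is minimal):

```python
# Hedged sketch of a Python Cloud Function serving a GET "collect" endpoint
# and streaming one row per page view into BigQuery. The table name and
# fields are assumptions; error handling is minimal.
from google.cloud import bigquery

client = bigquery.Client()
TABLE_ID = "my-project.analytics.page_views"  # assumed project.dataset.table

def collect(request):
    """HTTP Cloud Function, e.g. GET /collect?page=/home"""
    row = {
        "page": request.args.get("page", ""),
        "user_agent": request.headers.get("User-Agent", ""),
        "referrer": request.headers.get("Referer", ""),
    }
    errors = client.insert_rows_json(TABLE_ID, [row])  # streaming insert
    return ("error", 500) if errors else ("ok", 200)
```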
There are various ways of achieving what you want. David's answer is absolutely valid, but I would like to introduce Stackdriver Custom Metrics to the discussion.
Custom metrics are similar to regular Stackdriver Monitoring metrics, but you create your own time series (Stackdriver lingo, described here) to keep track of whatever you want, and clients can send in their data through an API.
You could achieve the same thing with a compute solution (Google Cloud Functions, for example), a database (Google Bigtable, for example), and your own logic, but Custom Metrics is an already-built solution that includes dashboards and alerting policies while being more managed.
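A minimal sketch of writing one data point to a custom metric with the Python client library (the project ID and metric type below are placeholders):

```python
# Hedged sketch: write one data point to a Stackdriver/Cloud Monitoring
# custom metric. The project ID and metric type are placeholders.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # assumed project ID

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/page_views"  # custom metric type
series.resource.type = "global"

now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"int64_value": 1}})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])
```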

Update keyword list for streaming API on-the-fly using HBC

I'm working on a project accessing Twitter's Streaming API with HBC.
I'm storing keywords for Twitter's Streaming API (filter) in a file, and now I'm looking for a way to close and reconnect to Twitter each time the file changes.
I've googled with no useful results.
Any idea how I could manage this task?
Don't do this. Twitter doesn't like reconnects to the Streaming API; they will ban your application.
If you have to change the filter parameters often, you're better off using the REST API's search tweets endpoint.
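For example, a rough sketch (assuming the historical v1.1 search/tweets.json endpoint and placeholder OAuth credentials) that re-reads the keyword file on each poll instead of tearing down a stream:

```python
# Hedged sketch (historical v1.1 endpoint): poll the search API with the
# current keyword list instead of reconnecting a stream. Credentials and
# the keyword file name are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def search(keywords):
    # Re-reading the keywords on each poll means file changes take effect
    # without dropping any long-lived connection.
    query = " OR ".join(keywords)
    resp = requests.get(SEARCH_URL, params={"q": query, "count": 100}, auth=auth)
    resp.raise_for_status()
    return resp.json()["statuses"]

with open("keywords.txt") as f:
    keywords = [line.strip() for line in f if line.strip()]
print(len(search(keywords)), "tweets fetched")
```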

Splunk Graphite Integration

I want to know if Graphite can pull log data from Splunk to draw graphs. I know Graphite can read data from Nagios, but I want to know if it can pull from Splunk also.
You can also pull data via one of the Splunk SDKs - http://dev.splunk.com/view/sdks/SP-CAAADP7
There is an example on the developer site that shows pulling data from Splunk and pushing it to Leftronic - http://dev.splunk.com/view/SP-CAAADSR
There also are a number of visual examples in the JavaScript SDK showing how to pull data from Splunk and visualize with other libraries - http://dev.splunk.com/view/javascript-sdk/SP-CAAAECM
Here's an app I wrote for Splunk that does exactly this: https://github.com/OnBeep/splunk_graphite
If the goal is to chart the data in Splunk, you can use the chart or timechart command in Splunk.
If the goal is to chart the Splunk data in Carbon/Graphite, then depending on the data you wish to pull out of Splunk, you should be able to do something like the sketch below:
- Create a saved search in Splunk.
- Use the CLI or REST API to execute and gather the results of the saved search.
- Parse the results, then push them into Carbon.
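A minimal sketch of that flow using the Splunk Python SDK and Carbon's plaintext port; the search name, result field, metric path, and hosts are all assumptions:

```python
# Hedged sketch: run a Splunk saved search with the Splunk Python SDK and
# push the results to Carbon's plaintext port. The search name, result
# field, metric path, and hosts are all placeholders.
import socket
import time

import splunklib.client as client
import splunklib.results as results

service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

job = service.saved_searches["web_errors_per_minute"].dispatch()
while not job.is_done():
    time.sleep(1)

sock = socket.create_connection(("graphite.example.com", 2003))
for row in results.ResultsReader(job.results()):
    if isinstance(row, dict):  # skip diagnostic messages
        line = f"splunk.web.errors {row['count']} {int(time.time())}\n"
        sock.sendall(line.encode("ascii"))
sock.close()
```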
This is how it works:
Carbon listens for incoming data.
Carbon receives data and stores it in Whisper.
Graphite reads from Whisper and the Carbon cache and shows graphs.
There's no pull at all. Submitting data to Carbon is dead easy. It has two ports: one accepts a simple TCP connection with one metric per line (metric.name metric.value metric.timestamp), and there is a pickle port too.
Usually you will use Logstash or Logster to parse application logs with regular expressions, and either one will take care of submitting the resulting metrics to Carbon.
Also, if you have software that can submit real-time metrics over UDP, you can use StatsD, which listens on UDP and, on a configured interval, sums or averages the values and submits them to Carbon, with a lot of nice options (like computing the 95th percentile, etc.).
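If you go the StatsD route, a tiny sketch with the community `statsd` Python package (host, port, and metric names are placeholders):

```python
# Hedged sketch with the community `statsd` package: fire UDP metrics at a
# StatsD daemon, which aggregates them and forwards to Carbon on each flush
# interval. Host, port, and metric names are placeholders.
import statsd

client = statsd.StatsClient("statsd.example.com", 8125)
client.incr("web.requests")           # counter: +1 per request
client.timing("web.response_ms", 42)  # timer: StatsD derives percentiles
```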
In summary, whatever logs Splunk leaves you with, I bet you will be able to submit the data to Graphite.