Configure HEC to Use a User-Provided Certificate - Splunk Cloud

I am forwarding events from a DBaaS to a Splunk instance hosted on Cloud.
Provisioned a Splunk Cloud instance (free trial license).
Created a new index in the instance I just created.
Created a new HTTP Endpoint Collector (HEC) forwarding metrics to the new index created.
I verified that the HEC was online with the following:
curl -k "https://***-*-*****.splunkcloud.com:8088/services/collector" \
-H "Authorization: Splunk ${splunk_token}" \
-d '{"event": "Hello, world!", "sourcetype": "manual"}'
All good.
The HEC includes a self-signed certificate by default. That just won't work for me. I can't find a way to configure it to use a custom cert. Can someone point me in the right direction?

You should be able to do that by creating a custom app and including the HEC info in an inputs.conf file along with your certificate chain. See https://community.splunk.com/t5/All-Apps-and-Add-ons/How-do-I-secure-the-event-collector-port-8088-with-an-ssl/m-p/243885 for more info. The thread is old, but most of the information still applies.
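As a rough sketch of what that custom app might contain: HEC's SSL settings live in the [http] stanza of inputs.conf. The app name and certificate path below are placeholders, exact setting names can vary by Splunk version, and on Splunk Cloud you may need to go through Splunk support to get a custom app deployed:

```ini
# $SPLUNK_HOME/etc/apps/my_hec_app/local/inputs.conf  (hypothetical app name)
[http]
disabled = 0
enableSSL = 1
# PEM file containing the server certificate, private key, and chain
serverCert = $SPLUNK_HOME/etc/apps/my_hec_app/auth/mycert.pem
sslPassword = <private key password>
```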

Related

How to connect to Vault with a GitHub token?

Our Vault is configured to use GitHub tokens. How can one use spring-cloud-vault with GitHub tokens? I have looked all over the documentation and forums.
Thanks in advance.
Assuming "spring-cloud-vault" talks to the same HashiCorp Vault (and according to https://cloud.spring.io/spring-cloud-vault/reference/html/ this looks to be the case!), you first need to make sure the "github" auth method is enabled.
Our Vault is configured to use github tokens
So this seems to be the case already.
Next you need to create a GitHub personal token on https://github.com/settings/tokens. Click on "Generate new token" and in the "admin:org" scope, select "read:org", then generate the token and copy it.
See this GitHub guide for additional help: https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line
You will get a token code. With this you can log in to your Vault. In the Vault UI select "GitHub" as Method, then paste the copied token.
If you are using the Vault API directly, e.g. with curl, you send the token in the JSON request body:
$ curl -X POST \
    --data '{"token": "YOURSECRETANDPERSONALGITHUBTOKEN"}' \
    https://vault.example.com/v1/auth/github/login
Note that in this example Vault is behind a reverse proxy, so the default port 8200 does not appear in the URL.
You should get an HTTP 200 and a JSON response when you have logged in successfully.
See https://www.vaultproject.io/docs/auth/github.html for more details.
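To tie this back to spring-cloud-vault: the reference documentation linked above describes a GITHUB authentication mode, so a minimal bootstrap configuration sketch (host and token are placeholders) would look something like this, with Spring Cloud Vault then performing the /v1/auth/github/login call shown above on your behalf:

```yaml
# bootstrap.yml (sketch; verify property names against your spring-cloud-vault version)
spring:
  cloud:
    vault:
      uri: https://vault.example.com
      authentication: GITHUB
      github:
        token: YOURSECRETANDPERSONALGITHUBTOKEN
```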

NiFi: Configuring SSLContext, Truststore or Keystore Certification

I am trying to get data from the DC Metro's API tool. I am using the "Train Position" link.
I need to configure my SSLContextService, but I only have a Primary and Secondary key provided by the website. For example, the keys are in this form:
5bcf1f7d091f4f618f1eefbefe23a56e
f15633bd2dd44a1f944c96361c0ab26f
How do I configure this in the SSLContext part of NiFi? I am using GetHTTP. (Screenshots of the GetHTTP and SSL context configurations were attached here.)
I have no idea how to use those keys above as the Truststore or Keystore, or if that's even what I'm supposed to do. I have my Keystore filename pointed at cacerts, but I know those keys are not in there. I've tried to convert them to .pem, and that was a mistake, especially when I put them in keystore.jks.
When I try, I get an error (screenshot omitted). How do I get access?
Note that API keys as you've given are considered sensitive information just like a password.
These keys are used to access the API, and are unrelated to NiFi keystores/truststores, which are used for SSL negotiation. Using the Java cacerts in this case is correct, but you do not need to add an API key to a truststore.
This page describes the form the request needs to take: https://developer.wmata.com/docs/services/5763fa6ff91823096cac1057/operations/5763fb35f91823096cac1058#TrainPosition
I suggest you read up on how to use web APIs and make web requests, so you understand how your API keys are used. They give an example curl command at the bottom:
curl -v -X GET "https://api.wmata.com/TrainPositions/TrainPositions?contentType={contentType}" \
  -H "api_key: {subscription key}" \
  --data-ascii "{body}"
The {subscription key} is your API key, the {contentType} is the HTTP response content type. If you're unfamiliar with these terms you may need to look into them. I recommend getting the above curl command to work first, then carry that across to NiFi.
curl -v -X GET "api.wmata.com/TrainPositions/TrainPositions?contentType=json" -H "api_key: e13626d03d8e4c03ac07f95541b3091b" works for me. (This is a test API key from wmata website).
In InvokeHTTP, you would add a dynamic processor property (hit the plus symbol, top right) called 'api_key' with the value set to your subscription key (I don't know if this is the primary key), and set the "Attributes to Send" property value to "api_key". This will send the api_key attribute (the key) as a header called api_key, just as we did above in curl with -H "api_key: e136...".

Splunk Enterprise HEC not sending data

I've installed the Splunk Enterprise trial. I've enabled the HTTP Event Collector feature as described here, which enables sending machine data from my app into Splunk.
I tried to send a POST request using Postman to Splunk and got no response.
method: POST
url : http://localhost:8088/services/collector
Authorization : my generated token
Why is there no response if I have already enabled the HEC feature? It seems that no server is listening on that port at all.
What I don't understand about Splunk is: where is my data stored? Is data for Splunk Enterprise stored only locally, meant to be used inside a company's LAN? Or does Splunk store all my data on its own cloud servers? Are Splunk Enterprise and Splunk Cloud different in that respect?
You have two choices. Either send JSON data using the HEC event data format specification, for example:
curl -k "https://mysplunkserver.example.com:8088/services/collector" \
-H "Authorization: Splunk CF179AE4-3C99-45F5-A7CC-3284AA91CF67" \
-d '{"event": "Hello, world!", "sourcetype": "manual"}'
Or send raw data, for which you need to use the raw endpoint, for example:
curl -k "https://mysplunkserver.example.com:8088/services/collector/raw?channel=00872DC6-AC83-4EDE-..." \
  -H "Authorization: Splunk CF179AE4-3C99-45F5-A7CC-3284AA91CF67" \
  -d '1, 2, 3... Hello, world!'
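Before sending events, it can help to confirm that something is actually listening on port 8088 at all. One way is to hit the HEC health endpoint; this sketch assumes a default local install, so adjust the host, scheme, and -k flag for your SSL setup:

```shell
# Returns a small JSON status body when the collector is up;
# a connection error here means HEC is disabled or listening elsewhere.
curl -k https://localhost:8088/services/collector/health
```

Also note that HEC uses HTTPS by default, so a plain http:// URL like the one in the question will typically get no usable response.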

URLFetchApp with certificate: Google scripts with Apple ads reporting API

Hi, I'm attempting to pull data from the Apple Ads API into a Google Sheet, and I'm getting completely stuck on providing the security certificates. I've been able to pull my data successfully using Postman, so I'm confident that I can structure the request properly.
I'm trying to use URLFetchApp, but I can't see any means of including the PEM and KEY files, or even of following the curl example provided by Apple, which combines them into a P12 file. Am I missing something here, or is URLFetchApp unable to do this?
It doesn't appear to me that this would fit into any of the existing headers for URLFetchApp https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app#fetchurl-params
curl \
  --cert ./<FILENAME>.p12 \
  --pass <PASSWORD> \
  -H "Authorization: orgId=<ORG_ID>" \
  -H "Content-Type: application/json" \
  -d @<CAMPAIGN_DATA_FILE>.json \
  -X POST "https://api.searchads.apple.com/api/v1/campaigns"
You're right in that Google Apps Script (GAS) does not support client-side SSL certificates in their UrlFetchApp class, which appears to be their only way to make outbound HTTP(S) requests.
Your best bet is probably to build a small service on Google App Engine (GAE) in a language of your choice and expose an endpoint there which, when called from GAS, makes a new request to your destination and provides the needed certificates. However, GAE is not free like GAS (since Google changed their cloud terms of service a couple of years back), so that's something to keep in mind.

XACML Authentication in Network Proxy Server

I am trying to implement access control policies on a network proxy server. Presently, I am at a stage where I have modeled it like this (architecture diagram omitted):
The problem I am facing is how to send the resource url, username and password from PEP to PDP. I am presently using WSO2 for implementing PDP policies.
Related to this, I also saw a command at this link, which is as follows:
curl -X POST -H 'Content-Type: text/xml' \
  -T soap-xacml-request.xml \
  --cacert pdp.b64.cer --user pep:password \
  https://localhost:8443/asm-pdp/pdp
I also don't know what URL I should be using instead of https://localhost:8443/asm-pdp/pdp (as I am using WSO2).
Can somebody please help me regarding all these issues?
Did you look at the WSO2 documentation and blog? E.g. http://xacmlinfo.com/2012/06/14/pep-client-for-wso2is-pdp/.