Free device logs for analysis - Splunk

I am setting up a home SIEM lab using Splunk. I am looking for sources that can provide logs for various devices, including but not limited to the ones below.
Windows Logs
IIS Logs
IDS/IPS Logs
Based on these logs I am planning to build search queries for various events and then use them to build detection rules.

It is not clear why you need to find logs when you can generate them yourself. For example, you can set up a VM with Windows Server and install an agent like NXLog (or any log collection agent that can forward logs via TCP, UDP, TLS, or HTTP) to get logs into Splunk.
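As a rough sketch (assuming NXLog Community Edition on the Windows VM and a plain TCP data input that you create in Splunk on port 5140; hostname and port are placeholders), the nxlog.conf could look roughly like this:
# nxlog.conf (sketch) - forward the Windows Event Log to a Splunk TCP data input
<Input eventlog>
    Module  im_msvistalog
</Input>

<Output splunk>
    Module  om_tcp
    Host    splunk.example.local
    Port    5140
</Output>

<Route eventlog_to_splunk>
    Path    eventlog => splunk
</Route>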

Check out the Montgomery County Data Portal. It's free:
https://data.montgomerycountymd.gov/
You could also connect to a crypto exchange API and stream in lots of data in real time.


How to check logs between OPC Publisher and IoT Hub to confirm the data transfer

I have set up IoT Edge on one of our machines, installed OPC Publisher, and connected it to one of our OPC UA servers, which sends data to OPC Publisher and then on to IoT Hub. We had not received any data in our IoT Hub for the last 10 days, and suddenly today we received data. How can we troubleshoot why the data was missing for those 10 days?
You can generate a support bundle on your edge device that will collect the logs of all deployed modules as well as the edge runtime logs.
sudo iotedge support-bundle --since 11d
More details on troubleshooting IoT Edge here
You can first look into the logs of the publisher and validate if the connection to the OPC UA Server was/is active. If this is fine then have a look into the edgeHub and validate if the upstream connectivity to IoT Hub was affected.
One of the most powerful tools to monitor your edge deployments is the integration with Azure Monitor. It will collect metrics from the edgeHub and edgeAgent, which combined will give you an overview of where your messages are going. It can show you how many messages are sent to your upstream endpoint and when.
For a full overview of the capabilities, you can check out this blog post. Installation steps are here
Edit:
OPC Publisher also supports diagnostic logging, which will give you more information about the connections to OPC UA servers. To do this, you need to set the diagnostic interval. You can do this by specifying the --di command argument in your createOptions:
"OPCPublisher":{
"settings":{
"image":"<image>",
"createOptions":{
"Cmd":["di=60"]
}
},
"type":"docker",
"version":"1.0",
"status":"running",
"restartPolicy":"always"
}
The example above will log diagnostic metrics every 60 seconds. You can then upload the logs using the support bundle command from Cristian's answer, or use the UploadSupportBundle direct method to do the same without needing access to the device.
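For reference, a rough sketch of calling that direct method with the Azure CLI (hub name, device id, and SAS URL are placeholders; check the current IoT Edge documentation for the exact payload fields):
# Ask the edgeAgent to collect a support bundle and upload it to a blob you provide via SAS URL
az iot hub invoke-module-method \
  --hub-name my-hub \
  --device-id my-edge-device \
  --module-id '$edgeAgent' \
  --method-name 'UploadSupportBundle' \
  --method-payload '{"schemaVersion": "1.0", "sasUrl": "<blob-container-SAS-URL>", "since": "11d"}'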

GridGain Web Console load balancing

I have a three-node GridGain cluster and I am also running the GridGain Web Console agent and Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a cluster of Web Consoles for high availability.
Can someone please guide me on how I can form a cluster for the Web Console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that serves as the agent's connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list for the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
Hope this helps.

How can I use RabbitMQ user access management in iAPC?

I'm setting up a new RabbitMQ service in iAPC (Swisscom app cloud) and I need to control the user access of the different producer/consumer applications.
My access control requirement:
Application A can only write to queue X.
Application B can only read from queue X.
RabbitMQ usually provides user management functionality. However, the whole user management part of the admin section in the RabbitMQ management GUI is not available.
What solutions exist in iAPC to manage read/write permissions for the different applications that have an app binding?
Is it even possible to set up different users?
I believe there is no way to add additional users in these managed RabbitMQ service deployments provided by Swisscom. This is quite similar across all of the available shared services (e.g. ElasticSearch or MariaDB) which come with a preset of defined users. I assume that this is true because those are actually shared services (as opposed to dedicated ones), where there may be authentication / security concerns if you are allowed to administer existing users.
For anyone who is interested, this is how to access your RabbitMQ Cloud Foundry service admin interface via the provided environment parameters to see what is possible:
bind your RabbitMQ service to a running app instance (e.g. MY-APP)
look at the environment of that app with cf env MY-APP
tunnel the RabbitMQ management port to your localhost:
cf ssh -N -T -L 15000:rabbitmq.service.consul:15672 MY-APP
open a web browser and look at http://localhost:15000
use the username and password you found in step (2) under rabbitmqent > credentials > management to log in

How can I access a local API using Amazon Alexa

I intend to build a set of skills for Amazon Alexa that will integrate with a custom software suite that runs on a RaspberryPi in my home.
I am struggling to figure out how I can make the Echo / Dot itself make an API call to the Raspberry Pi directly, without going through the internet, as the target device will have nothing more than an intranet connection: it will be able to receive commands from devices on the local network, but is not accessible from the outside world.
From what I have read, the typical workflow is as follows
Echo -> Alexa Service -> Lambda
where a Lambda function will return a blob of data to the smart home device.
Using this return value, is it possible (and how) to make the Alexa device itself send an API request to a device on the local network after receiving the response from Lambda?
I have the same problem and my solution is to use SQS as the message bus so that my RaspberryPi doesn't need to be accessible from the internet.
Echo <-> Alexa Service <-> Lambda -> SQS -> RaspberryPi
                              ^                  |
                              +------ SQS <------+
This works fine as long as:
you enable long polling (20 sec) on the SQS queue from the RaspberryPi and set the maximum number of messages per request to 1 (see the polling sketch after these lists)
you don't have concurrent messages going back and forth between Alexa and the RaspberryPi
This gives the benefits of:
with a maximum of 1 message per request, the SQS call returns as soon as one message is available in the queue, even before the long-poll timeout is reached
with only one long poll to SQS in flight at a time, a whole month of polling fits under the SQS free tier of 1 million requests
no special firewall permissions are needed to reach your RaspberryPi from the internet, so the connection from the Lambda to the RaspberryPi always "just works"
more secure than exposing your RaspberryPi to the internet since there are no open ports exposed for malicious programs to attack
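For what it's worth, here is a minimal sketch of the RaspberryPi side in Python with boto3 (the queue URL, region, and the command handling are placeholders you would replace):
# poll_commands.py - long-poll the command queue and act on one message at a time
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/alexa-commands"  # placeholder
sqs = boto3.client("sqs", region_name="us-east-1")

def handle_command(body):
    # placeholder: act on the command, e.g. toggle a GPIO pin, then publish a reply to the response queue
    print("received command:", body)

while True:
    # Long poll: returns as soon as a message arrives, or after 20 s if the queue stays empty
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        handle_command(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])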
You could try using AWS IoT:
Echo <-> Alexa Service <-> Lambda <-> IoT <-> RaspberryPi
I thought about using this for my Alexa RaspberryPi project but abandoned the idea since AWS IoT doesn't offer a permanent free tier. But the free tier is no longer a concern since Amazon now offers Alexa AWS promotional credits.
https://developer.amazon.com/alexa-skills-kit/alexa-aws-credits
One possibility is to install Node-RED on your rPi. Node-RED has plugins (https://flows.nodered.org/node/node-red-contrib-alexa-local) to simulate a Philips Hue bridge and make Alexa talk to it directly. The response is instant. The downside is that it only works for three commands: on, off, and set to x %. It works great for software/devices that control lights, shades and air-con.
It was answered in this forum a while ago, and I'm afraid to tell you that the situation hasn't changed since:
Alexa is cloud based and requires access to the internet / Amazon servers to function, so you cannot use it only within the intranet without external access.
There are a couple workaround methods I've seen used.
The first method is one that I've used:
I set up If This Then That (IFTTT) to listen for a specific phrase from Alexa, then transmit commands through the Telegram secure chat/messaging service, where a "chat bot" running on my Raspberry Pi reads and acts on those messages.
The second method I most recently saw uses IFTTT to add rows to a Google spreadsheet, which the Raspberry Pi can monitor and act on.
I wasn't particularly happy with the performance/latency of either of these methods but if I wrote a custom Alexa service using a similar methodology it might at least eliminate the IFTTT delay.
Just open a secure tunnel into your rPi with a service like https://ngrok.com/ and then communicate with that as your endpoint, either directly or from the Lambda.
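For example, assuming your rPi API listens on port 8080 (a placeholder), running this on the rPi gives you a public HTTPS URL that forwards to the local port, which your Lambda can then call:
ngrok http 8080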
You can achieve this by using a proxy. BST has a tool for that; it is the one I currently use: http://docs.bespoken.tools/en/latest/commands/proxy/
So rather than using a Lambda, you can use your local machine.
Essentially it becomes Echo -> Alexa Service -> Local Machine
Install the bespoken-tools npm package on your local machine (https://www.npmjs.com/package/bespoken-tools):
npm install bespoken-tools --save
Go to your project's index.js folder and run the proxy command:
bst proxy lambda index.js
This will give you a URL like the following:
https://proxy.bespoken.tools?node-id=xxx-xxx-xxx-xxx-xxxxxxxx
Now go to your Alexa skill on developer.amazon.com and click to configure your skill.
Choose HTTPS as your service endpoint type and enter the URL printed out by BST.
Then click save, and boom, your local machine becomes the final endpoint.

Transferring real-time network logs from one EC2 instance to another EC2 instance

Explanation:
I have one executable JAR deployed on one EC2 instance (A) which can be run manually to listen on port 80 for proxy traffic.
I have one Spring application on another EC2 instance (B) which hits a website on a third-party server.
Connection between these two machines:
The Spring application on B tells the third-party server to open a website and use A as a proxy; this leads to network call logs being generated on A.
What I want to do is: for every request I send from B to the third-party server, I want the network logs generated on A to be transferred to B.
What I tried:
One way is to rotate the logs on A, write them to S3, and then have the application pick them up from S3 and process them.
SSH into A and grep the log file, but this stops the JAR from listening to new traffic and it gets stuck.
What I am looking for:
A real-time solution: as soon as logs show up on A, I want them shipped to B without stopping A from doing its listening job.
I am not sure what OS you are running, but if you are running a *nix variant, you can install syslog-ng instead of syslog, or rsyslog, which is capable of logging local and remote events. In this case I would set up a central logging server that listens for logs from server A and server B.
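As a rough sketch with rsyslog (hostname and port are placeholders), the central server would enable a TCP listener and each sender would forward its logs to it:
# /etc/rsyslog.conf on the central logging server: accept logs over TCP port 514
module(load="imtcp")
input(type="imtcp" port="514")

# /etc/rsyslog.d/forward.conf on server A and server B: ship everything to the central host
*.* @@central-logger.example.com:514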
Alternatively, if syslog-ng is not what you're looking for, you could install Splunk on a server and have it pick up the logs from Splunk forwarders on each server you want to centrally log.
Hope this helps.
As Kevin mentioned, you could set up a Splunk Indexer on an EC2 instance and use it to aggregate the logs from A and B and any other sources, then use the Splunk search language to search over this log data in "near real time", correlate events across your various systems, create custom dashboards, set up proactive alerting, etc.
http://www.splunk.com/
As far as the mechanisms for getting this data from your systems to the Splunk Indexer:
1) Use a Splunk Universal Forwarder to monitor log output and forward it to your Splunk Indexer (see the config sketch after this list), http://www.splunk.com/download/universalforwarder
2) As your systems are Java-based, SplunkJavaLogging has log4j/logback/jdk appenders that you can seamlessly wire into your logging config to forward log events to your Splunk Indexer: https://github.com/damiendallimore/SplunkJavaLogging
3) Use the Splunk Java SDK, http://dev.splunk.com/view/java-sdk/SP-CAAAECN, to input log events into your Splunk Indexer via HTTP REST or raw TCP
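For option 1, a minimal Universal Forwarder setup is roughly two small config files on the machine producing the logs (the log path, sourcetype, index, and indexer address below are placeholders):
# $SPLUNK_HOME/etc/system/local/outputs.conf - where the forwarder sends events
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997

# $SPLUNK_HOME/etc/system/local/inputs.conf - which files to monitor
[monitor:///var/log/myapp/network.log]
sourcetype = myapp_network
index = main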