Can Splunk read inside a file and filter based on a word inside? - splunk

I want to create an alert for hosts file modification.
Found the built-in one here on the forums, but I would like to add a filter that reads inside the file so that when the file is modified by Docker, the change is ignored and the alert is not triggered.
Appreciate the assistance!

Unless it's in the data, Splunk has no way of knowing what process updated a file. All it knows is the data itself.
If there is something in the events that says the change was made by Docker, then you can key on that and send the event to the null queue.
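For example, here is a minimal sketch of that null-queue routing, assuming the file-change events arrive under a hypothetical sourcetype called hosts_changes and contain the word "docker" whenever Docker made the edit (both files go on the indexer or heavy forwarder):

    # props.conf
    [hosts_changes]
    TRANSFORMS-drop_docker = drop_docker_hosts_edits

    # transforms.conf
    [drop_docker_hosts_edits]
    REGEX = (?i)docker
    DEST_KEY = queue
    FORMAT = nullQueue

Events matching the regex are discarded at index time, so they never reach the alert search.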

Related

Automatically add database entry after ftp upload

Sorry if this seems stupid, but I wonder if it's possible to add a database entry after an FTP upload.
To be clearer: thanks to WinSCP, I have several folders that automatically send everything I put in them to my server.
However, I would like to create a MySQL entry for each uploaded file, again automatically. Is it possible to do that? How?
To give the full details of what I need to do, you can read the following.
I have several folders with pictures, and each folder is uploaded automatically.
Each of those folders belongs to one user, and the goal is to give each user an account and allow them to see and download those files through a web interface. Since one account = one folder, that's fairly easy.
And I think a simple .htaccess can secure things so one user can only see and download the files in his own directory, no?
However, if I want them to be able to see what's new (= something they haven't downloaded or simply marked as read), I think I need a table to manage those files.
Something like id | file (string) | read (bool).
If you think this way of proceeding is bad, then I'm open to changing how things are done, but to be clear, uploading the files needs to work this way, not through any kind of form.
Thanks for reading; sorry for my English.
Your problem contains three steps:
Folders/files are automatically uploaded to your server directory; as you say, this is already handled efficiently by WinSCP.
You need to update your database with all the files and folders present in your server directory.
You need to track whether or not each file has been read/downloaded by the user.
Since your first step is in place, we don't need anything there. For the second step, you should write a script and schedule it to run at a fixed time interval using cron (on Linux/UNIX; on Windows, a scheduled task). The script is responsible for creating a list of the file(s) present in the directory and simply inserting the information for any file(s) that are not yet in your database.
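For example, a crontab entry along these lines (the script path and interval are just placeholders) would run such a script every 15 minutes:

    */15 * * * * /usr/bin/python3 /home/user/sync_files.py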
EDIT:
This edit describes how your script file should work. As I explained, the cron job simply helps you run your script file at a fixed interval (which can be every minute, every hour, every day, and so on). Let's say your database table has the following columns (a possible MySQL definition is sketched after this list):
fileid (varchar[20])
filepath (varchar[20])
status (boolean)
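As a rough illustration only (the names and column sizes are placeholders, not a recommendation), such a table could be created in MySQL like this:

    CREATE TABLE files (
        fileid   VARCHAR(20) PRIMARY KEY,
        filepath VARCHAR(255) NOT NULL,  -- paths usually need more than 20 characters
        status   BOOLEAN NOT NULL DEFAULT FALSE
    );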
Your script file should do the following things (a sketch in Python follows below):
Create a list of the existing filepaths in your server directory.
Run a select query and create a list of the filepaths already present in the database table.
Compare list 1 with list 2 and find the paths that don't exist in list 2 (this gives you the list of filepaths that need to be inserted into the table).
Insert the list of file paths you got above and set their status to false (which means the file has not been read/downloaded yet).
NOTE: Please keep in mind that I am not prescribing how your database table should look. It can be what you have proposed, or it can differ depending on your needs or requirements.
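Here is a minimal sketch of such a script in Python, purely as an illustration of the four steps above; the directory, table, column names, and connection details are assumptions, and it uses the mysql-connector-python package:

    # sync_files.py - hypothetical sketch: record newly uploaded files in the table
    import os
    import mysql.connector

    UPLOAD_DIR = "/var/uploads/user1"  # assumed upload directory

    conn = mysql.connector.connect(
        host="localhost", user="app", password="secret", database="uploads"
    )
    cur = conn.cursor()

    # 1. list of file paths currently on disk
    disk_files = {
        os.path.join(UPLOAD_DIR, name)
        for name in os.listdir(UPLOAD_DIR)
        if os.path.isfile(os.path.join(UPLOAD_DIR, name))
    }

    # 2. list of file paths already recorded in the database
    cur.execute("SELECT filepath FROM files")
    known_files = {row[0] for row in cur.fetchall()}

    # 3. paths on disk that are not yet in the table
    new_files = disk_files - known_files

    # 4. insert them with status = FALSE (not read/downloaded yet)
    for path in new_files:
        cur.execute(
            "INSERT INTO files (fileid, filepath, status) VALUES (%s, %s, %s)",
            (os.path.basename(path), path, False),
        )
    conn.commit()
    cur.close()
    conn.close()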
For the third step, simply set the status of each file to unread when creating entries in your table during the second step; then, when the user clicks a file link in your application to view or download it, send a POST request to your server that updates the file's status to read.
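A minimal sketch of that server-side update, written here as a Flask endpoint in Python purely for illustration (your web application may well be PHP or something else, and the route, table, and column names are assumptions):

    # mark_read.py - hypothetical endpoint that marks a file as read
    from flask import Flask, request
    import mysql.connector

    app = Flask(__name__)

    @app.route("/mark-read", methods=["POST"])
    def mark_read():
        file_id = request.form["fileid"]  # sent by the client when the link is clicked
        conn = mysql.connector.connect(
            host="localhost", user="app", password="secret", database="uploads"
        )
        cur = conn.cursor()
        cur.execute("UPDATE files SET status = TRUE WHERE fileid = %s", (file_id,))
        conn.commit()
        cur.close()
        conn.close()
        return "", 204

    if __name__ == "__main__":
        app.run()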
Let me know if this helps!

Jedox - How to export SOAP Log into Excel file

I'm pretty new to Jedox, and for internal use I'm trying to export some specific "warning logs" (e.g. "(Mapping missing)") into an Excel/WSS file.
I don't know how to do that...
Can you help me, please?
Regards,
Usik
The easiest way to get this information is to use the Integrator in Jedox.
There you can use a File Extract and then filter for the information you are searching for.
After that it's possible to load the filtered information into a file.
The minimum steps you'll need are Connection -> Extract -> Transform -> Load.
Please take a look at the sample projects that are delivered with the Jedox software. In the example "sampleBiker", there are also file connections, extracts etc.
You can find more samples in:
<Install_path>\tomcat\webapps\etlserver\data\samples
I also recommend checking the Jedox Knowledgebase.
The other (and maybe more flexible) way would be to use, for example, a PHP macro inside a Jedox Web report and read the log file you're trying to display.
If you've got a more specific idea what you'd like to do, please let me know and I'll try to give you an example how to do so.

Is there a way to see the source file in a Splunk search?

As part of our work, we need to index our client's configuration files into Splunk and prepare reports on them. The reports in Splunk need to be similar to their existing reporting framework: we need to allow users to view specific configuration files, and they might compare two different files, perform a diff, etc.
I would like to know if there is a way to view the whole file in a pop-up window from a Splunk search. If it is not already available, could you please tell me how to achieve it?
Depending on the version you use, there is a down arrow by each log entry. Click that and then click "Show Source"; you can view the source that way.
Or you can change the query to include source="path/to/source.log" to see all the log entries from that source file.
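For example (the index and path are placeholders for your own environment):

    index=main source="/opt/client/configs/server.conf"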

How to separate the latest file from Multiple files in Mule

I have 5000 files in a folder, and new files keep getting loaded into the same folder on a daily basis. I need to pick up only the latest file among all the files each day.
Is it possible to achieve this scenario in Mule out of the box?
I tried keeping the file component inside a Poll component (to make use of watermark), but it is not working.
Is there any way we can achieve this? If not, please suggest the best way (any relevant links).
Mule Studio 5.3, Runtime 3.7.2.
Thanks in advance
Short answer: there isn't really an extremely quick out-of-the-box solution, but there are other ways. I'm not saying this is the right or only way of solving it, but I've previously implemented a similar scenario in this way:
A normal file inbound endpoint with a database table as a file log. Each time a new file is processed, a component checks whether its name appears in the table. Using a choice router or filter, I only continue if it isn't in there already, and after processing I add the filename to the table.
This is quite a "heavy" solution, though. A simpler approach would be to use an idempotent filter with an object store, for example a Redis server: https://github.com/mulesoft/redis-connector/blob/master/src/test/resources/redis-objectstore-tests-config.xml
It is actually very simple if your incoming file name contains a timestamp: you can configure the file inbound connector with a file:filename-regex-filter, e.g. pattern="myfilename_#[function:timestamp].csv". I hope this helps.
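As a rough sketch of what that could look like in Mule 3 XML (the path, polling frequency, and regex are assumptions; here the filter matches a literal yyyyMMdd date stamp in the file name rather than an expression):

    <file:inbound-endpoint path="/data/incoming" pollingFrequency="60000">
        <file:filename-regex-filter pattern="myfilename_\d{8}\.csv" caseSensitive="true"/>
    </file:inbound-endpoint>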
Maybe you can use a Quartz scheduler (specify the time in a cron expression), followed by a Groovy script in which you can start the file connector. Keep the file connector in another flow.

Accurev - How to update the file content in a stream after a promote

We have different streams for different environments. It is a Grails project, so there is a properties file called application.properties which has a property called app.version. I want that property to be updated automatically after every promote done on the stream. Each stream will have a different version number. The server_post_promote_trig trigger will be used to handle the post-promote operation, but I am not sure how to access the files in the stream through the script. I tried giving the path as /Folder1/file, as reflected in the XML trigger input file, but I cannot update the file, as the trigger Perl script complains that it cannot find the file.
Any help is much appreciated.
If I understand your question correctly, you want to increment the version in a file under source control whenever a promotion occurs in the stream. If this is correct, you need to create a workspace off said stream which will edit/keep/promote the new version of this file. I would create a separate script that gets called by the server_post_promote trigger whenever a promotion occurs in this stream. This script would be placed under source control so that it is accessible in the workspace you created above.
In AccuRev, files can only be modified via a workspace. That being the case, it may be better to implement a pre-promote trigger that updates the version information in the file when the user performs the workspace-to-stream promote.
This would be similar to the existing Addheader script that can be found in the examples directory on the AccuRev server.
Also, within the script, you will probably want to build in logic to detect the promotion of the version file itself, to avoid updating the file again and triggering a loop.
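As a very rough sketch, the version-bumping part of such a script could look like the following (Python is used here instead of Perl purely for brevity; the workspace path, file path, property format, and loop-prevention check are assumptions you would need to adapt to your trigger's XML input):

    # bump_version.py - hypothetical helper invoked after a stream promote
    import re
    import subprocess

    WORKSPACE = "/opt/accurev/ws/version_bump_ws"  # dedicated workspace off the stream
    PROP_FILE = "Folder1/application.properties"   # file path inside the workspace

    def bump_version():
        # bring the workspace up to date with the stream
        subprocess.run(["accurev", "update"], cwd=WORKSPACE, check=True)

        path = f"{WORKSPACE}/{PROP_FILE}"
        with open(path) as f:
            text = f.read()

        # increment the last numeric component of app.version, e.g. 1.0.3 -> 1.0.4
        def inc(match):
            return f"app.version={match.group(1)}{int(match.group(2)) + 1}"

        new_text, count = re.subn(r"^app\.version=([\d.]*?)(\d+)\s*$", inc, text,
                                  count=1, flags=re.MULTILINE)
        if count == 0:
            return  # property not found, nothing to do

        with open(path, "w") as f:
            f.write(new_text)

        # keep and promote the modified file back to the stream
        subprocess.run(["accurev", "keep", "-c", "auto version bump", PROP_FILE],
                       cwd=WORKSPACE, check=True)
        subprocess.run(["accurev", "promote", "-c", "auto version bump", PROP_FILE],
                       cwd=WORKSPACE, check=True)

    if __name__ == "__main__":
        # the calling trigger script should first check the promoted elements and skip
        # this step when application.properties itself was just promoted, otherwise
        # this promote would fire the trigger again
        bump_version()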