As part of our work, we need to index our client's configuration files into Splunk and prepare reports on them. The reports in Splunk should be similar to their existing reporting framework: users need to be able to view specific configuration files, and they might compare two different files, perform a diff, etc.
I would like to know whether there is a way to view the whole file in a pop-up window from a Splunk search. If this is not available out of the box, could you please suggest a way to achieve it?
Depending on the version you use, there is a down arrow next to each log entry. Click it and then click "Show Source"; you can view the source that way.
Alternatively, you can change the query to include source="path/to/source.log" to see all the log entries from that source file.
Sorry if this seems stupid, but I wonder whether it's possible to add a database entry after an FTP upload.
To be clearer: thanks to WinSCP, I have several folders that automatically send everything I put in them to my server.
However, I would like to create a MySQL entry for each uploaded file, again automatically. Is it possible to do that? How?
To give the full details of what I need to do, you can read the following.
I have several folders with pictures, and each folder is uploaded automatically.
Each of those folders belongs to one user, and the goal is to give each user an account and allow them to see and download those files through a web interface. Since one account = one folder, that's fairly easy.
And I think a simple .htaccess can secure things so that each user can only see and download the files in their own directory, no?
However, if I want them to be able to see what's new (i.e. something they haven't downloaded or marked as read), I think I need a table to manage those files.
Something like id | file (string) | read (bool).
If you think this approach is bad, then I'm open to changing how things are done, but to be clear, uploading the files needs to work this way, not through any kind of form.
Thanks for reading, and sorry for my English.
Your problem involves three steps:
Folders/files are automatically uploaded to your server directory; as you say, this is already handled efficiently by WinSCP.
You need to update your database with all the files and folders present in your server directory.
You need to track whether or not each file has been read/downloaded by the user.
Since your first step is already in place, nothing is needed there. For the second step, you should write a script and schedule it to run at a fixed interval, using cron on Linux/Unix (or the equivalent scheduler on Windows). The script would be responsible for building a list of the files present in the directory and inserting the information for any files that are not yet in your database.
EDIT:
This edit describes how your script file should work. As I explained, the cron job simply runs your script at a fixed interval (every minute, every hour, every day, and so on). Let's say your database table has the following columns:
fileid (varchar[20])
filepath (varchar[20])
status (boolean)
Your script file should do the following (a sketch in Python follows after the note below):
Create a list of existing filepaths in your server directory
Run a SELECT query to build a list of the file paths already present in the database table.
Compare list 1 with list 2 and find the entries that don't exist in list 2 (this gives you the list of file paths that need to be inserted into the table).
Insert the file paths you found above, setting their status to false (meaning the file has not been read/downloaded yet).
NOTE: Keep in mind that I am not prescribing how your database table should look. It can be what you proposed, or it can differ depending on your requirements.
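Here is a minimal sketch of such a script in Python, assuming a MySQL table named files with the fileid/filepath/status columns described above and the pymysql package; the directory path, credentials, and table/column names are placeholders you would adapt to your setup:

#!/usr/bin/env python3
# Example cron line (Linux): */15 * * * * /usr/bin/python3 /path/to/sync_files.py
# Placeholder paths, credentials, and table/column names - adapt to your setup.
import os
import pymysql

UPLOAD_DIR = "/var/uploads"   # directory WinSCP uploads into (placeholder)

conn = pymysql.connect(host="localhost", user="dbuser",
                       password="dbpass", database="mydb")
try:
    with conn.cursor() as cur:
        # List 1: file paths currently on disk (relative to the upload dir)
        on_disk = set()
        for root, _dirs, files in os.walk(UPLOAD_DIR):
            for name in files:
                on_disk.add(os.path.relpath(os.path.join(root, name), UPLOAD_DIR))

        # List 2: file paths already recorded in the database
        cur.execute("SELECT filepath FROM files")
        in_db = {row[0] for row in cur.fetchall()}

        # Insert anything on disk that is not yet in the table, status = unread.
        # Assumes fileid is generated by the table; adapt if you assign ids yourself.
        for path in on_disk - in_db:
            cur.execute("INSERT INTO files (filepath, status) VALUES (%s, %s)",
                        (path, False))
    conn.commit()
finally:
    conn.close()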
For the third step, simply leave the status of each file as unread when creating entries in your table in the second step. Then, when the user clicks a file link in your application, whether to view or download it, send a POST request to your server that updates the file's status to read.
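The handler for that POST request only needs a single UPDATE; a rough sketch, again assuming the hypothetical files table, column names, and pymysql connection from the script above:

# Rough sketch: mark a file as read once the user has viewed/downloaded it.
# Call this from whatever endpoint handles the POST request; 'conn' is an
# open pymysql connection, and 'files'/'status'/'fileid' are placeholder names.
def mark_file_read(conn, file_id):
    with conn.cursor() as cur:
        cur.execute("UPDATE files SET status = %s WHERE fileid = %s",
                    (True, file_id))
    conn.commit()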
Let me know if this helps!
I'm pretty new to Jedox, and for internal use I'm trying to export some specific "warning logs" (e.g. "(Mapping missing)") into an Excel/WSS file.
And I don't know how to do that...
Can you help me, please?
Regards,
Usik
The easiest way to get this information is to use the Integrator in Jedox.
There you can use a File Extract and then filter for the information you are searching for.
After that, you can load the filtered information into a File.
The minimum steps you'll need are Connection -> Extract -> Transform -> Load.
Please take a look at the sample projects that are delivered with the Jedox software. The "sampleBiker" example also contains file connections, extracts, etc.
You can find more samples in:
<Install_path>\tomcat\webapps\etlserver\data\samples
I also recommend checking the Jedox Knowledgebase.
The other (and maybe more flexible) way would be to use, for example, a PHP macro inside a Jedox Web report and read the log file you're trying to display.
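The core of such a macro is just reading the log and keeping the lines that contain the warning text. As a rough illustration of that filtering logic (shown here in Python rather than PHP, with a hypothetical log path and the "(Mapping missing)" marker from the question):

# Illustration only: filter warning lines out of a log file (hypothetical path).
LOG_PATH = "/path/to/integrator.log"   # placeholder - point this at your actual log
MARKER = "(Mapping missing)"

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    warnings = [line.rstrip("\n") for line in log if MARKER in line]

# 'warnings' now holds only the lines you would export or display.
for line in warnings:
    print(line)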
If you have a more specific idea of what you'd like to do, please let me know and I'll try to give you an example of how to do it.
I want to know who (by Windows username) created and who modified a particular QlikView file. I have checked the XML files created from the QVW file using the -prj method, but there is nothing related to that. There is also a file with a .dat extension which might be storing this information. Can anyone help me read that .dat file?
You are out of luck with QlikView, as it does not capture this information. A better way of handling this in the future is to implement a source control system.
The author and/or last changed user name is not part of the default metadata inserted into the document. So unless something similar to the suggestions above was already in the document, there will be no way of determining who was responsible.
Source
A better place to look is your Windows environment, to see whether the information is held in a backup.
I am trying to use Java / C# or any other programming language to modify a .pbix file generated by Microsoft Power BI. Is there any DLL provided by Power BI, or how can I read the content programmatically? I just want to get and update the data source directory. Please help.
Thanks.
I don't think it's possible, and even if it is, the solution is likely inelegant.
Even if you managed to do this, you would need to open your PBIX file in Power BI Desktop to refresh your data.
Are you doing this because you have many queries and it's inconvenient to change the data source string (folder name) in all of them? There is a way to keep your connection string in a single variable, as described here.
I don't know your exact setup, but looking at your question, let's say you have sets of files in different folders and you want to change the folder in one step.
To use the approach from the link above but with file input, you need to do the following:
If it's a new report, import your files as usual
Create a new query: "New Source" -> "Blank Query".
You will see "Query1" and an empty text box; enter the folder name, for example "C:\". Rename this query to "Folder".
Go to your imported file in the query editor ("test1" in my example). In Query Settings on the right, select the Source step.
Change the filename by substituting the folder with your "Folder" query, for example:
...File.Contents("C:\test1.csv"),...
...File.Contents(Folder & "test1.csv"),...
Repeat for all imported files, then "Close & Apply".
Now whenever you need to change the folder with your files, edit your "Folder" value and "Refresh".
Is there a way, using the AccuRev client or the command line, to query all the issues (in a specific state) that have file changes associated with them?
If you use the AccuRev client and open an issue, the Changes tab lists the changed files. However, when creating an issue query expression, I don't see any fields that indicate the changed files (or even whether there are changed files).
You will need to use the "cpkdescribe" query to pull that information. A "queryIssue" query retrieves all the other fields, but not the change package data.
See the AccuRev_User_CLI.pdf manual that is included with your installation of AccuRev.