Unable to capture file events with osquery process_file_events

I was successful in capturing file activity with file_events, but I could not make process_file_events work properly. After some file activity in the monitored directory, I see no events coming to my plugin. With the same plugin and the same conf file, file_events are fetched successfully.
The conf file was modified to contain process_file_events instead of file_events.
Following is the flags file:
--disable_extensions=false
--disable_events=false
--disable_audit=false
--enable_file_events=true
--audit_allow_config=true
--audit_allow_process_events=true
--audit_allow_fim_events=true
--logger_plugin=LogrPlugin
--extensions_timeout=10
--extensions_interval=5
--extensions_require=ExtensnMgr
Is the flags file correct?
And is there any other difference to be made between file_events (which was working) and process_file_events?
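For reference, a minimal conf sketch that could pair with these flags; the directory, query name, and interval are placeholders, and it assumes process_file_events is fed by the same file_paths section that file_events uses:

{
  "file_paths": {
    "monitored": [
      "/home/user/monitored/%%"
    ]
  },
  "schedule": {
    "proc_file_events": {
      "query": "SELECT * FROM process_file_events;",
      "interval": 10
    }
  }
}

One difference worth checking (an assumption, not a confirmed diagnosis): process_file_events is backed by the Linux audit subsystem rather than inotify, so another consumer of the audit socket, such as a running auditd daemon, can starve osquery of the events that file_events would still receive through inotify.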

Related

Show `.map` files under their respective files in WebStorm/IntelliJ

Using File Watchers it's possible to show a generated file under its respective source file:
The problem I am having is that only the generated .js file is 'watched' and grouped, whilst the .map file still shows up separately. Is there a way to set it up so that both files are shown under their respective source file?
You need to modify your file watcher settings accordingly. Please make sure to set 'Output paths to refresh' to '$FileNameWithoutExtension$.js:$FileNameWithoutExtension$.map'.

Need help on Pig

I am executing a Pig script which reads files from a directory, performs some operations, and stores to an output directory. In the output directory I'm getting one or more "part" files, one _SUCCESS file, and one _logs directory. My questions are:
Is there any way to control the names of the files generated (upon execution of the STORE command) in the output directory? To be specific, I don't want the names to be "part-.......". I want Pig to generate files according to a file name pattern I specify.
Is there any way to suppress the _SUCCESS file and the _logs directory? Basically I don't want _SUCCESS and _logs to be generated in the output directory.
See this post.
To remove _SUCCESS, use SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;. I'm not 100% sure how to remove _logs, but you could try SET pig.streaming.log.persist false;.
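For context, a sketch of where those SET statements would sit in a script; the paths and the pass-through LOAD/STORE are placeholders:

SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;
SET pig.streaming.log.persist false;

-- hypothetical input/output paths; replace with your own
data = LOAD '/input/dir' USING PigStorage(',');
-- ... your transformations here ...
STORE data INTO '/output/dir' USING PigStorage(',');

The part-* names come from Hadoop's output committer, so fully controlling them generally means renaming the files after the STORE or using a custom storage function.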

Load a script from another file extension?

Is it possible to load a module from a file with an extension other than .lua?
require("grid.txt") results in:
module 'grid.txt' not found:
no field package.preload['grid.txt']
no file './grid/txt.lua'
no file '/usr/local/share/lua/5.1/grid/txt.lua'
no file '/usr/local/share/lua/5.1/grid/txt/init.lua'
no file '/usr/local/lib/lua/5.1/grid/txt.lua'
no file '/usr/local/lib/lua/5.1/grid/txt/init.lua'
no file './grid/txt.so'
no file '/usr/local/lib/lua/5.1/grid/txt.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './grid.so'
no file '/usr/local/lib/lua/5.1/grid.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
I suspect that it's somehow possible to load the script into package.preload['grid.txt'] (whatever that is) before calling require?
It depends on what you mean by load.
If you want to execute the code in a file named grid.txt in the current directory, then just do dofile"grid.txt". If grid.txt is in a different directory, give a path to it.
If you want to use the path search that require performs, then add a template for .txt in package.path, with the correct path and then do require"grid". Note the absence of suffix: require loads modules identified by names, not by paths.
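A one-line sketch of that approach, assuming grid.txt sits in the current directory:

package.path = package.path .. ";./?.txt"
local grid = require("grid")  -- the new ./?.txt template matches ./grid.txt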
If you want require("grid.txt") to work should someone try that then yes, you'll need to manually loadfile and run the script and put whatever it returns (or whatever require is documented to return when the module doesn't return anything) into package.loaded["grid.txt"].
Alternatively, you could write your own loader just for entries like this which you set into package.preload["grid.txt"] which finds and loads/runs the file or, more generically, you could write yourself a loader function, insert it into package.loaders, and then let it do its job whenever it sees a "*.txt" module come its way.
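A sketch of that generic loader for Lua 5.1 (searching only the current directory is an assumption; extend the lookup as needed):

-- custom searcher for module names ending in .txt
local function txt_loader(name)
  if name:match("%.txt$") then
    local path = "./" .. name
    local chunk, err = loadfile(path)
    if chunk then return chunk end   -- require will call chunk(name)
    return "\n\tno file '" .. path .. "': " .. tostring(err)
  end
end
table.insert(package.loaders, txt_loader)  -- package.searchers in Lua 5.2+

local grid = require("grid.txt")  -- now resolved by txt_loader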

WinSCP Session::RemoveFiles - Delete specified files in sub directories

[Question] Does Session::RemoveFiles() remove files in subdirectories of the source directory? If not, how can I implement this ability?
(Please do not ask me why I have the remote directory as /C/testTransfer/. The code is just for testing purposes.)
I have an SFTP program using the WinSCP .NET assembly. The programming language is C++/CLI. It opens a work file which contains many lines of FTP instructions.
One type of instruction I have to handle is to transfer *.txt from a source directory. The source directory may contain subdirectories which may contain .txt files as well. Once the transfer is successful, the source files are to be deleted.
I use Session::GetFiles() for the transfer. It correctly transfers all .txt files (/C/testTransfer/*.txt), even those in subdirectories (/C/testTransfer/sub/*.txt), from the source to the destination.
transferOptions->FileMask = "*.txt";
session->GetFiles("/C/testTransfer", "C:\\temp\\win", false, transferOptions);
Now to remove, I use session->RemoveFiles("/C/testTransfer/*.txt"). I only see the *.txt files in the source root (/C/testTransfer/*.txt) being removed, but not those in the subdirectory (/C/testTransfer/sub/*.txt).
Session::RemoveFiles can remove files in subdirectories in general, but not this way with a wildcard, because WinSCP will not descend into subdirectories that do not match the wildcard (*.txt). Also note that even if you did not need the wildcard, Session::RemoveFiles would remove the subdirectories themselves, which I'm not sure you want.
Though you have other (and safer) options:
Use the remove parameter of the Session::GetFiles method to instruct it to remove each source file after a successful transfer.
If you need to delete the source files transactionally (only after the download of all files succeeds), iterate the TransferOperationResult::Transfers returned by Session::GetFiles and call Session::RemoveFiles for each transfer whose TransferEventArgs::Error is null.
Use the TransferEventArgs::FileName to get the file path to pass to Session::RemoveFiles, and use RemotePath::EscapeFileMask to escape the file name before passing it (see the sketch below).
There's a similar full example available for Moving local files to different location after successful upload.
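A sketch of the transactional variant in C++/CLI (untested, and assuming the session and transferOptions from the question):

// download everything first; remove sources only if every transfer succeeded
TransferOperationResult^ result =
    session->GetFiles("/C/testTransfer", "C:\\temp\\win", false, transferOptions);

if (result->IsSuccess)
{
    for each (TransferEventArgs^ transfer in result->Transfers)
    {
        // FileName is the source (remote) path; escape it before reuse as a mask
        session->RemoveFiles(
            RemotePath::EscapeFileMask(transfer->FileName))->Check();
    }
}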
To recursively delete files matching a wildcard in a standalone operation (not after downloading the same files), use Session::EnumerateRemoteFiles. Pass your wildcard to its mask argument and use the EnumerationOptions.AllDirectories option for recursion.
Call Session::RemoveFiles for each returned file, again using RemotePath::EscapeFileMask to escape the file name before passing it.
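A corresponding standalone sketch, again in C++/CLI and untested:

// enumerate *.txt under /C/testTransfer and all subdirectories, then delete each
System::Collections::Generic::IEnumerable<RemoteFileInfo^>^ files =
    session->EnumerateRemoteFiles(
        "/C/testTransfer", "*.txt", EnumerationOptions::AllDirectories);

for each (RemoteFileInfo^ fileInfo in files)
{
    session->RemoveFiles(RemotePath::EscapeFileMask(fileInfo->FullName))->Check();
}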

.tex files not found when placed in a folder declared in TEXINPUTS

I have a set of documents based on a LaTeX template. Every document has its own folder, as following:
docs-folder
|-doc #1
|-doc #2
...
|-doc #n
|-texmf
  |-tex
  |-bibtex
  |-fonts
  |-docs
  |-misc
    |-logo.jpg
    |-acronyms.tex
I wrote the template on my own and, for every document (from #1 to #n), it loads the files logo.jpg and acronyms.tex with \includegraphics{logo.jpg} and \input{acronyms.tex}.
The path ..\docs-folder\texmf is set as a project root in MiKTeX, and the local texmf tree is recognized properly, excluding the misc folder.
So the path ..\docs-folder\texmf\misc is set as the value of the TEXINPUTS environment variable (under Windows). This is done in order to avoid an unwanted replication of the two files.
What happens is that, when I compile one of the documents, the file acronyms.tex is not found, while logo.jpg is correctly found by PDFLaTeX.
I have no idea why the acronyms.tex file is not loaded.
On Unix systems the solution is to run texhash or mktexlsr. According to this page, the equivalent solution for MiKTeX is to open MiKTeX Settings and click the "Refresh FNDB" button.
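For completeness, a command-line sketch of the same refresh; initexmf is MiKTeX's maintenance tool, though the exact option names can vary between versions:

initexmf --update-fndb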