Splunk log data optimization

I am new to Splunk and I want to optimize the log data files (apply a lossless compression) that I will add to Splunk. Since the data has to be textual (not binary or any other format), I cannot go for Huffman coding etc., and I don't know where to start.
Any help or ideas would be great.

According to Monitor files and directories:
Splunk Enterprise decompresses archive files before it indexes them. It can handle these common archive file types: tar, gz, bz2, tar.gz, tgz, tbz, tbz2, zip, and z.
I suggest using any of the above compression methods and then configuring Splunk to monitor the files by filename or directory spec, using the UI or inputs.conf. If for some reason you need to use a different compression algorithm, you can do so and then instruct Splunk to use a special unarchive_cmd during the index pipeline. You can read more about that by looking at props.conf.spec; a sketch follows the spec excerpt below. Here is the relevant portion:
unarchive_cmd = <string>
* Only called if invalid_cause is set to "archive".
* This field is only valid on [source::<source>] stanzas.
* <string> specifies the shell command to run to extract an archived source.
* Must be a shell command that takes input on stdin and produces output on stdout.
* Use _auto for Splunk's automatic handling of archive files (tar, tar.gz, tgz, tbz, tbz2, zip)
* This setting applies at input time, when data is first read by Splunk.
The setting is used on a Splunk system that has configured inputs acquiring the data.
* Defaults to empty.
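For instance, to monitor gzipped logs (handled natively) and to route a hypothetical .xz file through a custom decompressor, the configuration might look like the following. This is a minimal sketch: the monitored path, the sourcetype name, the .xz extension, and the xz command are illustrative assumptions, not from the question; check props.conf.spec for your version before relying on it.
# inputs.conf - monitor compressed logs; gz is decompressed natively at index time
[monitor:///var/log/myapp/*.log.gz]
sourcetype = myapp_logs

# props.conf - hand a format Splunk does not unpack natively (xz, assumed here)
# to a custom unarchiver that reads stdin and writes stdout
[source::...xz]
invalid_cause = archive
unarchive_cmd = xz --decompress --stdout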

Related

OpenVMS: Extracting RMS indexed file to Windows as a sequential flat file

I haven't used OpenVMS for 20+ years. It was my first OS. I've been asked if it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No one has experience with or knowledge of the record structures, etc.
The files are xyz.DAT and are relative files. I'm hoping the .DAT files are fixed length.
My first attempt was to use DATATRIEVE (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT/FDL = nnnn.FDL and changing the organization from Relative to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS indexed file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but had lots of libraries to help.
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla.
Another plan is writing a C application to read a file and output it as strings, or DCL too; it doesn't have to be quick.
Any ideas?
As mentioned before, the simple solution MIGHT be to just use: $ TYPE /OUT=test.TXT test.DAT
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT /FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to windows.
You can also use an 'inline' FDL file to generate a 'unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in binary or ASCII mode with FTP.
MOSTLY - because this really only works well for a TEXT-ONLY source .DAT file.
There should be no CR, LF, FF or NUL characters in the source or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP then you will need to find a record definition somewhere which maps bytes to integers or floating points or dates as needed.
These definitions can be COBOL LIB files or BASIC MAPs, and are often stored IN the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC DICTIONARIES.
To use such a definition you likely need a program that just reads along the 'map' and writes/prints as text. Normally that's not too hard - notably not when you can find an example program on the server to tweak; a rough sketch follows below.
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write, and use F$EXTRACT to select the chunks you like.
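If you would rather decode off the VMS box, a small program on the Windows side will do, after a binary FTP transfer. A minimal Python sketch, assuming the file was first flattened to fixed-length sequential records, and with a made-up record layout (the record length and field types here are illustrative only; substitute the real COBOL/BASIC map):
import struct

RECLEN = 64                             # hypothetical fixed record length
with open('test.DAT', 'rb') as src, open('test.TXT', 'w') as dst:
    while True:
        rec = src.read(RECLEN)
        if len(rec) < RECLEN:           # end of file (or a short tail)
            break
        # hypothetical layout: 20 bytes of text then a 32-bit little-endian int;
        # dates, packed decimals, etc. would each need their own decoding
        name, count = struct.unpack_from('<20si', rec)
        dst.write(f"{name.decode('ascii', 'replace').rstrip()} {count}\n")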
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.

Not able to filter files using pathGlobFilter

We are trying to read files from a directory in Azure Blob Storage based on a pattern. We are using the pathGlobFilter option to select files. The directory contains the following files:
Sales_51820_14529409_T_7a3cc7d1d17261fd17e7e1fabd3.csv
Sales_51820_14529409_7a3cc7d1d17261fd17e7e1fabd3.csv
Sales_61820_17529409_7a3cc7d1d17261fd17e7e1fabd3.csv
Sales_61820_17529409_T_7a3cc7d1d17261fd17e7e1fabd3.csv
We need to process only those files which do not have "T" in the file name, i.e. only these two files:
Sales_51820_14529409_7a3cc7d1d17261fd17e7e1fabd3.csv
Sales_61820_17529409_7a3cc7d1d17261fd17e7e1fabd3.csv
But we are not able to read only these two files.
Here is the code:
df = spark.read.format("csv").schema(structSchema).options(header=False, inferSchema=True, sep='|', pathGlobFilter="Sales_\d{5}_\d{8}_[a-z0-9]+.csv$").load("wasbs://abc#xxxxx.blob.core.windows.net/abc/2022/02/11/")
Regards,
Rajib
Glob patterns are not standard regular expressions; there are differences between them. For example, glob has no quantifier to express a number of repetitions.
For details, see here.
Back to this question: a relatively crude workaround (a more elegant solution is welcome):
pathGlobFilter="Sales_[0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[a-z0-9]*.csv"

Split CSV file into records and save in CSV file format - Apache NiFi

What I want to do is the following:
I want to divide the input file into records, convert each record into a
file, and leave all the files in a directory.
My .csv file has the following structure:
ERP,J,JACKSON,8388 SOUTH CALIFORNIA ST.,TUCSON,AZ,85708,267-3352,,ALLENTON,MI,48002,810,710-0470,369-98-6555,462-11-4610,1953-05-00,F,
ERP,FRANK,DIETSCH,5064 E METAIRIE AVE.,BRANDSVILLA,MO,65687,252-5592,1176 E THAYER ST.,COLUMBIA,MO,65215,557,291-9571,217-38-5525,129-10-0407,1/13/35,M,
As you can see, it doesn't have a header row.
Here is my flow.
My problem is that when the split processor divides my CSV into flowfiles of 400 lines, they aren't saved in my output directory.
It's my first time using NiFi, sorry.
Make sure your RecordReader controller service is configured correctly (delimiter, etc.) to read the incoming flowfile.
Set the Records Per Split value to 1 so each record becomes its own flowfile.
You need to use an UpdateAttribute processor before the PutFile processor to change the filename to a unique value (like a UUID), unless you have configured the PutFile processor's Conflict Resolution Strategy as Ignore; see the example below.
The reason for changing the filename is that the SplitRecord processor assigns the same filename to all of the split flowfiles.
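In UpdateAttribute, that just means adding one user-defined property. The NiFi Expression Language UUID() function generates a fresh UUID per flowfile (the .csv suffix here is my assumption, to preserve the extension):
filename = ${UUID()}.csv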
Flow:
I tried your case and the flow worked as expected. Use this template for reference, upload it to your NiFi instance, and make changes as per your requirements.

How does numpy handle mmaps over npz files?

I have a case where I would like to open a compressed numpy file using mmap mode, but can't seem to find any documentation about how it will work under the covers. For example, will it decompress the archive in memory and then mmap it? Will it decompress on the fly?
The documentation is absent for that configuration.
The short answer, based on looking at the code, is that archiving and compression, whether using np.savez or gzip, is not compatible with accessing files in mmap_mode. It's not just a matter of how it is done, but whether it can be done at all.
Relevant bits in the np.load function:
elif isinstance(file, gzip.GzipFile):
    fid = seek_gzip_factory(file)
...
if magic.startswith(_ZIP_PREFIX):
    # zip-file (assume .npz)
    # Transfer file ownership to NpzFile
    tmp = own_fid
    own_fid = False
    return NpzFile(fid, own_fid=tmp)
...
if mmap_mode:
    return format.open_memmap(file, mode=mmap_mode)
Look at np.lib.npyio.NpzFile. An npz file is a ZIP archive of .npy files. It loads a dictionary-like object, and only loads the individual variables (arrays) when you access them (e.g. obj[key]). There's no provision in its code for opening those individual files in mmap_mode.
It's pretty obvious that a file created with np.savez cannot be accessed as a mmap. The ZIP archiving and compression is not the same as the gzip compression addressed earlier in np.load.
But what of a single array saved with np.save and then gzipped? Note that format.open_memmap is called with file, not fid (which might be a gzip file).
There are more details on open_memmap in np.lib.npyio.format. Its first test is that file must be a string (a filename), not an open file fid. It ends up delegating the work to np.memmap, and I don't see any provision in that function for gzip.
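To see the difference directly, a small experiment (a minimal sketch; the file names are arbitrary). The mmap_mode argument only has an effect for a plain .npy file; with an .npz, np.load returns an NpzFile and each member is decompressed into an ordinary in-memory array on access:
import numpy as np

a = np.arange(10)

np.save('a.npy', a)                     # plain .npy file
m = np.load('a.npy', mmap_mode='r')
print(type(m))                          # <class 'numpy.memmap'>

np.savez_compressed('a.npz', a=a)
z = np.load('a.npz', mmap_mode='r')     # mmap_mode is quietly bypassed here
print(type(z['a']))                     # <class 'numpy.ndarray'>, fully in memory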

What does f+++++++++ mean in rsync logs?

I'm using rsync to make a backup of my server files, and I have two questions:
In the middle of the process I need to stop and start rsync again.
Will rsync start from the point where it stopped or it will restart from the beginning?
In the log files I see "f+++++++++". What does it mean?
e.g.:
2010/12/21 08:28:37 [4537] >f.st...... iddd/logs/website-production-access_log
2010/12/21 08:29:11 [4537] >f.st...... iddd/web/website/production/shared/log/production.log
2010/12/21 08:29:14 [4537] .d..t...... iddd/web/website/production/shared/sessions/
2010/12/21 08:29:14 [4537] >f+++++++++ iddd/web/website/production/shared/sessions/ruby_sess.017a771cc19b18cd
2010/12/21 08:29:14 [4537] >f+++++++++ iddd/web/website/production/shared/sessions/ruby_sess.01eade9d317ca79a
Let's take a look at how rsync works and better understand the cryptic result lines:
1 - A huge advantage of rsync is that after an interruption, the next run continues smoothly.
The next rsync invocation will not transfer the files that it had already transferred, if they were not changed in the meantime. But it will start checking all the files again from the beginning, as it is not aware that it had been interrupted.
2 - Each character is a code that can be translated if you read the section for -i, --itemize-changes in man rsync
Decoding your example log file from the question:
>f.st......
> - the item is received
f - it is a regular file
s - the file size is different
t - the time stamp is different
.d..t......
. - the item is not being updated (though it might have attributes
that are being modified)
d - it is a directory
t - the time stamp is different
>f+++++++++
> - the item is received
f - a regular file
+++++++++ - this is a newly created item
The relevant part of the rsync man page:
-i, --itemize-changes
Requests a simple itemized list of the changes that are being made to
each file, including attribute changes. This is exactly the same as
specifying --out-format='%i %n%L'. If you repeat the option, unchanged
files will also be output, but only if the receiving rsync is at least
version 2.6.7 (you can use -vv with older versions of rsync, but that
also turns on the output of other verbose messages).
The "%i" escape has a cryptic output that is 11 letters long. The
general format is like the string YXcstpoguax, where Y is replaced by
the type of update being done, X is replaced by the file-type, and the
other letters represent attributes that may be output if they are
being modified.
The update types that replace the Y are as follows:
A < means that a file is being transferred to the remote host (sent).
A > means that a file is being transferred to the local host (received).
A c means that a local change/creation is occurring for the item (such as the creation of a directory or the changing of a symlink,
etc.).
A h means that the item is a hard link to another item (requires --hard-links).
A . means that the item is not being updated (though it might have attributes that are being modified).
A * means that the rest of the itemized-output area contains a message (e.g. "deleting").
The file-types that replace the X are: f for a file, a d for a
directory, an L for a symlink, a D for a device, and a S for a
special file (e.g. named sockets and fifos).
The other letters in the string above are the actual letters that will
be output if the associated attribute for the item is being updated or
a "." for no change. Three exceptions to this are: (1) a newly created
item replaces each letter with a "+", (2) an identical item replaces
the dots with spaces, and (3) an unknown attribute replaces each
letter with a "?" (this can happen when talking to an older rsync).
The attribute that is associated with each letter is as follows:
A c means either that a regular file has a different checksum (requires --checksum) or that a symlink, device, or special file has a
changed value. Note that if you are sending files to an rsync prior to
3.0.1, this change flag will be present only for checksum-differing regular files.
A s means the size of a regular file is different and will be updated by the file transfer.
A t means the modification time is different and is being updated to the sender’s value (requires --times). An alternate value of T
means that the modification time will be set to the transfer time,
which happens when a file/symlink/device is updated without --times
and when a symlink is changed and the receiver can’t set its time.
(Note: when using an rsync 3.0.0 client, you might see the s flag
combined with t instead of the proper T flag for this time-setting
failure.)
A p means the permissions are different and are being updated to the sender’s value (requires --perms).
An o means the owner is different and is being updated to the sender’s value (requires --owner and super-user privileges).
A g means the group is different and is being updated to the sender’s value (requires --group and the authority to set the group).
The u slot is reserved for future use.
The a means that the ACL information changed.
The x means that the extended attribute information changed.
One other output is possible: when deleting files, the "%i" will
output the string "*deleting" for each item that is being removed
(assuming that you are talking to a recent enough rsync that it logs
deletions instead of outputting them as a verbose message).
Some time back, I needed to understand rsync's output for a script I was writing. During the process of writing that script, I googled around and came to what @mit had written above. I used that information, as well as documentation from other sources, to create my own primer on the bit flags and how to get rsync to output bit flags for all actions (it does not do this by default).
I am posting that information here in hopes that it helps others who (like me) stumble upon this page via search and need a better explanation of rsync.
With the combination of the --itemize-changes flag and the -vvv flag, rsync gives us detailed output of all file system changes that were identified in the source directory when compared to the target directory. The bit flags produced by rsync can then be decoded to determine what changed. To decode each bit's meaning, use the following table.
Explanation of each bit position and value in rsync's output:
YXcstpoguax path/to/file
|||||||||||
||||||||||╰- x: The extended attribute information changed
|||||||||╰-- a: The ACL information changed
||||||||╰--- u: The u slot is reserved for future use
|||||||╰---- g: Group is different
||||||╰----- o: Owner is different
|||||╰------ p: Permissions are different
||||╰------- t: Modification time is different
|||╰-------- s: Size is different
||╰--------- c: Different checksum (for regular files), or
|| changed value (for symlinks, devices, and special files)
|╰---------- the file type:
| f: for a file,
| d: for a directory,
| L: for a symlink,
| D: for a device,
| S: for a special file (e.g. named sockets and fifos)
╰----------- the type of update being done:
<: file is being transferred to the remote host (sent)
>: file is being transferred to the local host (received)
c: local change/creation for the item, such as:
- the creation of a directory
- the changing of a symlink,
- etc.
h: the item is a hard link to another item (requires
--hard-links).
.: the item is not being updated (though it might have
attributes that are being modified)
*: means that the rest of the itemized-output area contains
a message (e.g. "deleting")
Some example output from rsync for various scenarios:
>f+++++++++ some/dir/new-file.txt
.f....og..x some/dir/existing-file-with-changed-owner-and-group.txt
.f........x some/dir/existing-file-with-changed-unnamed-attribute.txt
>f...p....x some/dir/existing-file-with-changed-permissions.txt
>f..t..g..x some/dir/existing-file-with-changed-time-and-group.txt
>f.s......x some/dir/existing-file-with-changed-size.txt
>f.st.....x some/dir/existing-file-with-changed-size-and-time-stamp.txt
cd+++++++++ some/dir/new-directory/
.d....og... some/dir/existing-directory-with-changed-owner-and-group/
.d..t...... some/dir/existing-directory-with-different-time-stamp/
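If you need to act on these flags in a script rather than read them by eye, a small decoder helps. A minimal sketch in Python (the attribute names simply paraphrase the table above; rsync's own man page is authoritative):
# decode one rsync --itemize-changes line, e.g. ">f.st...... path/to/file"
UPDATE = {'<': 'sent', '>': 'received', 'c': 'local change/creation',
          'h': 'hard link', '.': 'not updated'}
FTYPE = {'f': 'file', 'd': 'directory', 'L': 'symlink',
         'D': 'device', 'S': 'special file'}
ATTRS = ['checksum', 'size', 'mtime', 'perms', 'owner',
         'group', 'reserved', 'acl', 'xattr']

def decode(line):
    flags, path = line.split(None, 1)
    if flags.startswith('*'):                 # e.g. "*deleting"
        return f'{path.strip()}: {flags[1:]}'
    if flags[2] == '+':                       # newly created item
        detail = 'newly created'
    else:
        changed = [n for n, f in zip(ATTRS, flags[2:]) if f not in '. ?']
        detail = ', '.join(changed) or 'no attribute changes'
    return f'{path.strip()}: {UPDATE[flags[0]]} {FTYPE[flags[1]]}, {detail}'

print(decode('>f+++++++++ some/dir/new-file.txt'))
print(decode('.d..t...... some/dir/existing-directory-with-different-time-stamp/'))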
Capturing rsync's output (focused on the bit flags):
In my experimentation, both the --itemize-changes flag and the -vvv flag are needed to get rsync to output an entry for all file system changes. Without the triple verbose (-vvv) flag, I was not seeing directory, link and device changes listed. It is worth experimenting with your version of rsync to make sure that it is observing and noting all that you expected.
One handy use of this technique is to add the --dry-run flag to the command and collect the change list, as determined by rsync, into a variable (without making any changes) so you can do some processing on the list yourself. Something like the following would capture the output in a variable:
file_system_changes=$(rsync --archive --acls --xattrs \
--checksum --dry-run \
--itemize-changes -vvv \
"/some/source-path/" \
"/some/destination-path/" \
| grep -E '^(\.|>|<|c|h|\*).......... .')
In the example above, the (stdout) output from rsync is redirected to grep (via stdin) so we can isolate only the lines that contain bit flags.
Processing the captured output:
The contents of the variable can then be logged for later use or immediately iterated over for items of interest. I use this exact tactic in the script I wrote during researching more about rsync. You can look at the script (https://github.com/jmmitchell/movestough) for examples of post-processing the captured output to isolate new files, duplicate files (same name, same contents), file collisions (same name, different contents), as well as the changes in subdirectory structures.
1.) It will "restart the sync", but it will not transfer files that are the same size, timestamp, etc. It first builds up a list of files to transfer, and during this stage it will see that it has already transferred some files and will skip them. You should tell rsync to preserve the timestamps, etc. (e.g. using rsync -a ...).
While rsync is transferring a file, it will call it something like .filename.XYZABC instead of filename. Then when it has finished transferring that file it will rename it. So, if you kill rsync while it is transferring a large file, you will have to use the --partial option to continue the transfer instead of starting from scratch.
2.) I don't know what that is. Can you paste some examples?
EDIT: As per http://ubuntuforums.org/showthread.php?t=1342171, those codes are defined in the rsync man page in the section for the -i, --itemize-changes option.
Fixed part of my answer based on Joao's.