Issue with Universal Forwarder forwarding logs to indexer - Splunk

I have installed the Splunk UF on Windows. I have one static log file (JSON) on the system that needs to be monitored, and I have configured it in the inputs.conf file.
I see only the System/Application and Security event logs being sent to the indexer, whereas the static log file is not.
I ran "splunk list inputstatus" and checked:
C:\Users\Administrator\Downloads\test\test.json
file position = 75256
file size = 75256
percent = 100.00
type = finished reading
So, this means the file is being read properly.
What could be the issue that I don't see the test.json logs on the Splunk side? I tried checking index=_internal on the indexer but was not able to figure out what is causing the issue, and I checked a few blogs on the Internet as well. Can anyone please help with this?
inputs.conf stanza:
[monitor://C:\Users\Administrator\Downloads\data test\test.json]
disabled = 0
index = test_index
sourcetype = test_data
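A couple of things worth checking (a sketch; the searches below assume the index and file names from the stanza above): make sure test_index actually exists on the indexer, and look in the internal logs for messages mentioning the monitored file or the target index, for example:

index=_internal sourcetype=splunkd "test.json"
index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "test_index"

If the index has not been created on the indexer, splunkd typically logs an error about events arriving for an unconfigured/disabled/deleted index and the data is dropped, which would explain a file that reads 100% on the forwarder but never shows up in searches.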

Related

OPEN_PIPE_NO_AUTHORITY upon opening non-existing file using OPEN DATASET FOR OUTPUT IN BINARY MODE without FILTER

I have a very strange problem.
I have a standard program with the following piece of code that tries to create a file in response to a previous attempt to open it with OPEN DATASET ... FOR INPUT IN BINARY MODE.
CATCH SYSTEM-EXCEPTIONS dataset_too_many_files = 6
open_dataset_no_authority = 7
open_pipe_no_authority = 8
dataset_no_pipe = 9.
OPEN DATASET filename FOR OUTPUT IN BINARY MODE
MESSAGE msg.
ENDCATCH.
Surprisingly, the response to that is sy-subrc = 8, which according to the SAP documentation can happen only when OPEN DATASET is used with FILTER.
The message in the msg variable says that the file could not be opened, which is irrelevant because we are trying to create this file.
Has anybody experienced something like that? I suppose it has something to do with the authority to create a file in the given directory at the operating-system level, but I cannot find any other log or trace for that. The error message and sy-subrc = 8 seem to be misleading in this case. Could more information be seen with tracing activated in ST01?
It turned out that the cause of the problem was the absence of the directory in which the file should be created. No wonder the system could not create the file in a non-existent folder. The error message is misleading in such a case anyway.
See the OPEN DATASET documentation and the OPEN DATASET OS additions.
Quoting the question: "Surprisingly, the response to that is sy-subrc = 8, which according to the SAP documentation can happen only when OPEN DATASET is used with FILTER."
That is not exactly what the documentation says; it is worth another look. I would add that, for the OPEN DATASET statement, sy-subrc = 8 means:
The operating system could not open the file.

Splunk Universal Forwarder not sending data to Indexer

I am reading different logs from the same source folder, but not all files are being read: one stanza works, another doesn't.
If I restart the UF, all stanzas work, but new data is not captured by one of the stanzas.
The files I am planning to monitor:
performance_data.log
performance_data.log.1
performance_data.log.2
performance_data.log.3
performance.log
performance.log.1
performance.log.2
SystemOut.log
My inputs.conf file:
[default]
host = LOCALHOST
[monitor://E:\Data\AppServer\A1\performance_data.lo*]
source=applogs
sourcetype=data_log
index=my_apps
[monitor://E:\Data\AppServer\A1\performance.lo*]
source=applogs
sourcetype=perf_log
index=my_apps
[monitor://E:\Data\logs\ImpaCT_A1\SystemOu*]
source=applogs
sourcetype=systemout_log
index=my_apps
The \performance_data.lo* and \SystemOu* stanzas are working fine, but the performance.lo* stanza is not: it only sends data when I restart the UF (Universal Forwarder), and changes are not sent automatically like they are for the other stanzas.
Am I doing anything wrong here?
It may be that the data throughput exceeded the forwarder's limit, so the forwarder was unable to send all the data to Splunk.
Try updating the stanza in inputs.conf as below, and create a limits.conf in the same local directory.
inputs.conf:
[monitor://E:\Data\AppServer\A1\performance.lo*]
source=applogs
sourcetype=perf_log
index=my_apps
crcSalt = <SOURCE>
limits.conf:
[thruput]
maxKBps = 0
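For context, maxKBps = 0 in the [thruput] stanza removes the forwarder's default throughput cap, and crcSalt = <SOURCE> adds the full path to the file's CRC so rotated copies that start with identical content are still tracked as separate files. One way to confirm whether the throughput ceiling is actually being hit is to search the forwarder's internal logs (a sketch; the exact wording of the message can vary by version):

index=_internal sourcetype=splunkd component=ThruputProcessor "current data throughput"

If that search returns warnings, the maxKBps change is the relevant part of the fix; if it returns nothing, the rotation/CRC side is the more likely cause.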

How to forward sub-json object to splunk indexer

My application writes log data to a file on disk. The log data is one-line JSON, as below. I use the Splunk forwarder to send the log to the Splunk indexer.
{"line":{"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"},"source": "std"}
I want to send only the sub-JSON object {"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"} to the Splunk indexer, not the whole JSON. How should I configure the Splunk forwarder or the Splunk indexer?
You can use SEDCMD to delete parts of the event before it gets written to disk by the indexer(s).
Add this to your props.conf:
[Yoursourcetype]
#...Other configurations...
# keep only the object captured from inside "line", dropping the wrapper and the trailing "source" field
SEDCMD-removejson = s/^\{"line":\s*(\{.+\}),\s*"source":\s*"[^"]*"\}$/\1/
This is an index-time setting, so you will need to restart splunkd for the changes to take effect.
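If you want to sanity-check the substitution before restarting anything, the same rewrite can be exercised locally (a quick sketch in Python; the pattern mirrors the SEDCMD above and the names are illustrative):

import re

# the sample event from the question
event = '{"line":{"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"},"source": "std"}'

# same idea as the SEDCMD: keep only the inner object captured in group 1
pattern = r'^\{"line":\s*(\{.+\}),\s*"source":\s*"[^"]*"\}$'
print(re.sub(pattern, r'\1', event))
# prints: {"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"}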

Using Leigh's version of S3Wrapper.cfc - can't get past init

I am new to S3 and need to use it for image storage. I found a half dozen versions of an S3 wrapper for CF, but it appears that the only one set up for signature v4 is the one modified by Leigh:
https://gist.github.com/Leigh-/26993ed79c956c9309a9dfe40f1fce29
I dropped it in the com directory and created a "test" page that contains the following code:
s3 = createObject('component','com.S3Wrapper').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
but got the following error:
So I changed line 37 from
variables.Sv4Util = createObject('component', 'Sv4').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
to
variables.Sv4Util = createObject('component', 'Sv4Util').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
Now I am getting:
I feel like going through Leigh's code and changing things is a bad idea, since I have lurked here for years and know Leigh's code is solid.
Does anyone know of any examples of how to use this? If not, what am I doing wrong? If it makes a difference, I am using Lucee 5 and not Adobe's CF engine.
UPDATE:
I followed Leigh's directions and the error is now gone. I added some more code to my test page, which now looks like this:
<cfscript>
s3 = createObject('component','com.S3v4').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
bucket = "imgbkt.domain.com";
obj = "fake.ping";
region = "s3-us-west-1";
test = s3.getObject(bucket,obj,region);
writeDump(test);
test2 = s3.getObjectLink(bucket,obj,region);
writeDump(test2);
writeDump(s3);
</cfscript>
Regardless of what I put in for bucket, obj, or region, I get:
Just in case, I did go to AWS and get new keys:
Leigh, if you are still around, or anyone who has used one of the S3 wrappers: any suggestions or guidance?
UPDATE #2:
Even after Alex's help I am not able to get this to work. The link I receive from getObjectLink is not valid, and getObject never downloads an object. I thought I would try the putObject method
test3 = s3.putObject(bucketName=bucket,regionName=region,keyName="favicon.ico");
writeDump(test3);
to see if there is any additional information, and I received this:
I did find this article, https://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html, but it is pretty old, and since S3 specifically suggests using dots in bucket names I don't think it is relevant any longer. There is obviously something I am doing wrong, but I have spent hours trying to resolve this and I can't seem to figure out what it might be.
I will give you a rundown of what the code does:
getObjectLink returns an HTTP URL for the file fake.ping that is found by looking in the bucket imgbkt.domain.com in region s3-us-west-1. This link is temporary and expires after 60 seconds by default.
getObject invokes getObjectLink and immediately requests the URL using HTTP GET. The response is then saved to the directory of the S3v4.cfc, with the filename fake.ping by default. Finally, the function returns the full path of the downloaded file: E:\wwwDevRoot\taa\fake.ping
To save the file in a different location, you would invoke:
downloadPath = 'E:\';
test = s3.getObject(bucket,obj,region,downloadPath);
writeDump(test);
The HTTP request is synchronous, meaning the file will have been downloaded completely when the function returns the file path.
If you want to access the actual content of the file, you can do this:
test = s3.getObject(bucket,obj,region);
contentAsString = fileRead(test); // returns the file content as string
// or
contentAsBinary = fileReadBinary(test); // returns the content as binary (byte array)
writeDump(contentAsString);
writeDump(contentAsBinary);
(You might want to stream the content if the file is large, since fileRead/fileReadBinary reads the whole file into a buffer; use fileOpen to stream the content instead.)
Does that help you?

Locally calculate Dropbox hash of files

The Dropbox REST API's metadata function has a parameter named "hash": https://www.dropbox.com/developers/reference/api#metadata
Can I calculate this hash locally, without calling any remote REST API function?
I need to know this value to reduce upload bandwidth.
https://www.dropbox.com/developers/reference/content-hash explains how Dropbox computes their file hashes. A Python implementation of this is below:
import hashlib

# Dropbox hashes files in 4 MB blocks
DROPBOX_HASH_CHUNK_SIZE = 4 * 1024 * 1024

def compute_dropbox_hash(filename):
    # SHA-256 each block, concatenate the raw digests, then SHA-256 the result
    with open(filename, 'rb') as f:
        block_hashes = b''
        while True:
            chunk = f.read(DROPBOX_HASH_CHUNK_SIZE)
            if not chunk:
                break
            block_hashes += hashlib.sha256(chunk).digest()
        return hashlib.sha256(block_hashes).hexdigest()
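For example, to compare the locally computed value with what Dropbox reports (a sketch assuming the official Dropbox Python SDK v2 and a file that exists both locally and in the Dropbox account; the token and paths are placeholders):

import dropbox

dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder access token
local_hash = compute_dropbox_hash("hello.txt")
remote_md = dbx.files_get_metadata("/hello.txt")  # FileMetadata for files includes content_hash
print(local_hash == remote_md.content_hash)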
The "hash" parameter on the metadata call isn't actually the hash of the file, but a hash of the metadata. It's purpose is to save you having to re-download the metadata in your request if it hasn't changed by supplying it during the metadata request. It is not intended to be used as a file hash.
Unfortunately I don't see any way via the Dropbox API to get a hash of the file itself. I think your best bet for reducing your upload bandwidth would be to keep track of the hash's of your files locally and detect if they have changed when determining whether to upload them. Depending on your system you also likely want to keep track of the "rev" (revision) value returned on the metadata request so you can tell whether the version on Dropbox itself has changed.
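A minimal sketch of that bookkeeping (illustrative only: it reuses the compute_dropbox_hash helper from the answer above purely as a local change detector, and keeps state in a hypothetical JSON file):

import json
import os

STATE_FILE = "sync_state.json"  # hypothetical local cache of {path: {"hash": ..., "rev": ...}}

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f, indent=2)

def needs_upload(path, remote_rev, state):
    # upload if we've never seen the file, its content changed locally,
    # or the remote revision moved since the last sync
    entry = state.get(path)
    current_hash = compute_dropbox_hash(path)
    return (entry is None
            or entry["hash"] != current_hash
            or entry["rev"] != remote_rev)

def record_synced(path, remote_rev, state):
    # call after a successful upload/download so later runs can skip unchanged files
    state[path] = {"hash": compute_dropbox_hash(path), "rev": remote_rev}
    save_state(state)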
This won't directly answer your question, but is meant more as a workaround: the Dropbox SDK ships a simple updown.py example that uses file size and modification time to check whether a file is current.
An abbreviated example taken from updown.py:
dbx = dropbox.Dropbox(api_token)
...
# returns a dictionary of name: FileMetadata
listing = list_folder(dbx, folder, subfolder)
# name is the name of the file
md = listing[name]
# fullname is the path of the local file
mtime = os.path.getmtime(fullname)
mtime_dt = datetime.datetime(*time.gmtime(mtime)[:6])
size = os.path.getsize(fullname)
if (isinstance(md, dropbox.files.FileMetadata) and
        mtime_dt == md.client_modified and size == md.size):
    print(name, 'is already synced [stats match]')
As far as I am concerned, no, you can't.
The only way is using the Dropbox API, which is explained here.
The rclone Go program from https://rclone.org has exactly what you want:
rclone hashsum dropbox localfile
rclone hashsum dropbox localdir
It can't take more than one path argument but I suspect that's something you can work with...
t0|todd#tlaptop/p8 ~/tmp|295$ echo "Hello, World!" > dropbox-hash-demo/hello.txt
t0|todd#tlaptop/p8 ~/tmp|296$ rclone copy dropbox-hash-demo/hello.txt dropbox-ttf:demo
t0|todd#tlaptop/p8 ~/tmp|297$ rclone hashsum dropbox dropbox-hash-demo
aa4aeabf82d0f32ed81807b2ddbb48e6d3bf58c7598a835651895e5ecb282e77 hello.txt
t0|todd#tlaptop/p8 ~/tmp|298$ rclone hashsum dropbox dropbox-ttf:demo
aa4aeabf82d0f32ed81807b2ddbb48e6d3bf58c7598a835651895e5ecb282e77 hello.txt