I am reading different logs from the same source folder, but not all files are getting read; one stanza works, the other doesn't.
If I restart the UF, all stanzas work, but changed data is not captured by one stanza.
I am planning to monitor the files below:
performance_data.log
performance_data.log.1
performance_data.log.2
performance_data.log.3
performance.log
performance.log.1
performance.log.2
SystemOut.log
my inputs.conf file:
[default]
host = LOCALHOST
[monitor://E:\Data\AppServer\A1\performance_data.lo*]
source=applogs
sourcetype=data_log
index=my_apps
[monitor://E:\Data\AppServer\A1\performance.lo*]
source=applogs
sourcetype=perf_log
index=my_apps
[monitor://E:\Data\logs\ImpaCT_A1\SystemOu*]
source=applogs
sourcetype=systemout_log
index=my_apps
The \performance_data.lo* and \SystemOu* stanzas are working fine, but the performance.lo* stanza is not. It only sends data when I restart the UF (universal forwarder); changes are not sent automatically the way they are by the other stanzas.
Am I doing anything wrong here?
It may be that the throughput limit was exceeded, so the forwarder was unable to send data to Splunk.
Try adding crcSalt to the stanza in inputs.conf as shown below,
and create a limits.conf in the local path.
inputs.conf
[monitor://E:\Data\AppServer\A1\performance.lo*]
source=applogs
sourcetype=perf_log
index=my_apps
crcSalt = <SOURCE>
limits.conf
[thruput]
maxKBps = 0
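After making the change, it is worth confirming that the forwarder has actually picked up the stanza and is tailing the file. These are standard Splunk UF CLI commands, run from the forwarder's bin directory (the exact install path will vary):
splunk list monitor
splunk list inputstatus
splunk btool inputs list --debug
list monitor shows every file the UF is currently watching, list inputstatus shows the read position per file, and btool prints the effective inputs.conf after all layering, which quickly exposes a typo in a stanza path.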
I have installed the Splunk UF on Windows. I have one static log file (JSON) on the system that needs to be monitored, and I have configured this in the inputs.conf file.
I see only the System/Application and Security logs being sent to the indexer, whereas the static log file is not seen.
I ran "splunk list inputstatus" and checked:
C:\Users\Administrator\Downloads\test\test.json
file position = 75256
file size = 75256
percent = 100.00
type = finished reading
So this means the file is being read properly.
What can be the issue that I don't see the test.json logs on the Splunk side? I tried checking index=_internal on the indexer but was not able to figure out what is causing the issue, and I checked a few blogs on the Internet as well. Can anyone please help with this?
inputs.conf stanza:
[monitor://C:\Users\Administrator\Downloads\data test\test.json]
disabled = 0
index = test_index
sourcetype = test_data
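For reference, two searches that are often used to debug this kind of gap (the index and sourcetype names match the stanza above; <forwarder_host> is a placeholder):
index=test_index sourcetype=test_data earliest=0 | head 10
index=_internal source=*splunkd.log* host=<forwarder_host> test.json
The first confirms whether any events for that sourcetype have ever reached the index, regardless of the time picker; the second looks for tailing and forwarding messages about the file in the forwarder's own internal logs.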
I want to read data from Amazon S3 into Kafka. I found the camel-aws-s3-kafka-connector source and tried it; it works, but... I want to read data from S3 without deleting the files, yet exactly once for each consumer, without duplicates. Is it possible to do this using only the configuration file? I've already created a file which looks like this:
name=CamelSourceConnector
connector.class=org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
camel.source.maxPollDuration=10000
topics=ReadTopic
#prefix=WriteTopic
camel.source.endpoint.prefix=full/path/to/WriteTopic2
camel.source.path.bucketNameOrArn=BucketName
camel.source.endpoint.autocloseBody=false
camel.source.endpoint.deleteAfterRead=false
camel.sink.endpoint.region=xxxx
camel.component.aws-s3.accessKey=xxxx
camel.component.aws-s3.secretKey=xxxx
Additionally, with the configuration above I am not able to read only from "WriteTopic"; I read from all folders in S3. Is that also possible to configure?
[Screenshot: S3 bucket folders with files]
I found a workaround for the duplicates problem. I'm not completely sure it is the best possible way, but it may help somebody. My approach is described here: https://camel.apache.org/blog/2020/12/CKC-idempotency-070/ . I used camel.idempotency.repository.type=memory, and my configuration file looks like this:
name=CamelAWS2S3SourceConnector
connector.class=org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
camel.source.maxPollDuration=10000
topics=ReadTopic
# the path we read the data from
camel.source.endpoint.prefix=full/path/to/topic/prefix
camel.source.path.bucketNameOrArn="Bucket name"
camel.source.endpoint.deleteAfterRead=false
camel.component.aws2-s3.access-key=****
camel.component.aws2-s3.secret-key=****
camel.component.aws2-s3.region=****
#remove duplicates from messages#
camel.idempotency.enabled=true
camel.idempotency.repository.type=memory
camel.idempotency.expression.type=body
It is also important that I changed the Camel connector library. Initially I used the camel-aws-s3-kafka-connector source; to use the Idempotent Consumer I had to change the connector to the camel-aws2-s3-kafka-connector source.
My application writes log data to a disk file. The log data is one-line JSON, as below. I use the Splunk forwarder to send the log to the Splunk indexer:
{"line":{"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"},"source": "std"}
I want to send only the sub-JSON object {"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"} to the Splunk indexer, not the whole JSON. How should I configure the Splunk forwarder or the Splunk indexer?
You can use SEDCMD to delete data before it gets written to disk by the indexer(s).
Add this to your props.conf:
[Yoursourcetype]
#...Other configurations...
SEDCMD-removejson = s/^\{"line":(\{.+\}),\s*"source":\s*"std"\}$/\1/g
This is an index-time setting, so you will need to restart splunkd for the changes to take effect.
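With the sample event from the question, the intent is that the raw event is rewritten before indexing roughly like this (illustrative only; the regex above assumes the wrapper always looks exactly like the sample):
before: {"line":{"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"},"source": "std"}
after: {"level": "info","message": "data is correct","timestamp": "2017-08-01T11:35:30.375Z"}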
Can anyone help me implement moving an uploaded file from one server to another?
I am not talking about the move_uploaded_file() function.
For example:
If the image is uploaded to http://example.com,
how can I move it to http://image.example.com?
It is possible, right? Not by sending another POST or PUT request?
Take the uploaded file, move it to a temporary location, and then push it to any FTP account you like.
$tempName = tempnam(sys_get_temp_dir(), 'upload');
move_uploaded_file($_FILES["file"]["tmp_name"], $tempName);
$handle = fopen("ftp://user:password@example.com/somefile.txt", "w");
fwrite($handle, file_get_contents($tempName));
fclose($handle);
unlink($tempName);
Actually, you don't even need the part with move_uploaded_file. It is totally sufficient to take the uploaded file and write its content to the file opened with fopen. For more information on opening URLs with fopen, have a look at the PHP documentation; for more information on uploading files, have a look at the File Upload section of the PHP manual.
[Edit] Added file_get_contents to the code example
[Edit] Shorter example
$handle = fopen("ftp://user:password@example.com/somefile.txt", "w");
fwrite($handle, file_get_contents($_FILES["file"]["tmp_name"]));
fclose($handle);
// As the uploaded file has not been moved from the temporary folder
// it will be deleted from the server the moment the script is finished.
// So no cleaning up is required here.
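If the ftp:// stream wrapper is not enabled on the host, a sketch using PHP's FTP extension does the same job. The hostname, credentials, and remote filename below are placeholders, just like in the examples above:
<?php
// Connect and log in (placeholder credentials).
$conn = ftp_connect("example.com");
ftp_login($conn, "user", "password");
ftp_pasv($conn, true); // passive mode is usually friendlier to firewalls/NAT

// Upload straight from PHP's temporary upload location.
ftp_put($conn, "somefile.txt", $_FILES["file"]["tmp_name"], FTP_BINARY);

ftp_close($conn);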
I have a program that runs a query and returns results in Report Viewer. The issue is that we have 10 locations, each with its own local database. What I'd like to do is have each location use the program and have the App.config file specify which database to connect to, depending on the location. This will prevent me from having to create 10 individual programs with separate database connections. I was thinking I could have three values in the app.config file: "Database", "Login", and "Password". Generally speaking, the databases are on the .30 address, so it would be nice to be able to have them set the config file to the database server IP.
For example:
Location: 1
DatabaseIP: 10.0.1.30
Login: sa
Password: databasepassword
Is it possible to set something like this up using the app.config file?
You should take a look at resource files.
Originally they are intended for localization, but they should work for you as well.
Go to your project Properties and set up an Application Setting of type Connection String from the drop-down. This will result in an XML config file in your output directory, in which you can modify the connection string post-compile.
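As a rough sketch of what that ends up looking like (the name "LocationDb" and the catalog name are placeholders, not values from the question), the generated .config file carries a connectionStrings section that each location can edit after deployment:
<configuration>
  <connectionStrings>
    <add name="LocationDb"
         connectionString="Data Source=10.0.1.30;Initial Catalog=Reports;User ID=sa;Password=databasepassword"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
Reading it in code requires a reference to System.Configuration:
using System.Configuration;

string connStr = ConfigurationManager.ConnectionStrings["LocationDb"].ConnectionString;
Each site then edits only the deployed .exe.config to point at its own database server, so a single build serves all 10 locations.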
I ended up using a simple XML file to do this. I used this site to accomplish it: I first wrote the XML on form load, then switched it to reading it.