How to forward all syslog-ng logs to a remote facility

I have a host running syslog-ng. It does all its stuff locally fine (creating log files etc.). However, I would like to forward ALL of its logs to a remote machine, specifically to one facility on the remote machine (local4). I tried playing around with rewrite (set-facility) and templates within the destination (syntax errors), but to no avail.
destination remote_server {
    udp("172.18.192.8" port(514));
    udp("172.18.192.9" port(514));
};
rewrite r_local4 {
    set-facility(local4);
};
filter f_alllogs {
    level(debug..emerg);
};
log {
    source(local);
    filter(f_alllogs);
    rewrite(r_local4);
    destination(remote_server);
};

AFAIK, currently it is not possible to modify the facility of a message in syslog-ng.
Is there a special reason you want to do it?
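Two possible directions, offered as sketches rather than verified fixes. First, newer syslog-ng releases (3.24+, if memory serves) do ship a set-facility() rewrite, so if upgrading is an option the configuration above should work as written. Second, on versions without it, the facility can be hardcoded into the PRI of a destination template; local4 is facility 20, so for example local4.notice gives PRI 20*8+5 = 165:

destination remote_server {
    # Hardcoded PRI: facility local4 (20) * 8 + severity notice (5) = 165
    udp("172.18.192.8" port(514)
        template("<165>$DATE $HOST $MSGHDR$MSG\n"));
    udp("172.18.192.9" port(514)
        template("<165>$DATE $HOST $MSGHDR$MSG\n"));
};

The trade-off of the fixed <165> is that every message arrives as local4.notice, i.e. the original severity is lost.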

Related

Connection refused when connecting to localhost's port 9999

I am trying to understand some example Kotlin code that connects to http://127.0.0.1 using sockets, and I have IIS enabled and running on 127.0.0.1. However, when I run the code, I get:
"C:\Program Files\Microsoft\jdk-11.0.12.7-hotspot\bin\java.exe" -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:52769,suspend=y,server=n -javaagent:C:/Users/ivlat/.gradle/caches/modules-2/files-2.1/org.jetbrains.kotlinx/kotlinx-coroutines-core-jvm/1.5.0/d8cebccdcddd029022aa8646a5a953ff88b13ac8/kotlinx-coroutines-core-jvm-1.5.0.jar -javaagent:C:\Users\ivlat\AppData\Local\JetBrains\IdeaIC2022.2\captureAgent\debugger-agent.jar=file:/C:/Users/ivlat/AppData/Local/Temp/capture.props -Dfile.encoding=UTF-8 -classpath "C:\Dev\KotlinProjects\7_08_Echo\client\build\classes\kotlin\main;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\org.jetbrains.kotlin\kotlin-stdlib-jdk8\1.5.21\6b3de2a43405a65502728047db37a98a0c7e72f0\kotlin-stdlib-jdk8-1.5.21.jar;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\ch.qos.logback\logback-classic\1.2.3\7c4f3c474fb2c041d8028740440937705ebb473a\logback-classic-1.2.3.jar;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\ch.qos.logback\logback-core\1.2.3\864344400c3d4d92dfeb0a305dc87d953677c03c\logback-core-1.2.3.jar;C:\Dev\KotlinProjects\7_08_Echo\shared\build\classes\kotlin\main;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\org.jetbrains.kotlinx\kotlinx-coroutines-core-jvm\1.5.0\d8cebccdcddd029022aa8646a5a953ff88b13ac8\kotlinx-coroutines-core-jvm-1.5.0.jar;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\org.jetbrains.kotlin\kotlin-stdlib-jdk7\1.5.21\f059658740a4b3a3461aba9681457615332bae1c\kotlin-stdlib-jdk7-1.5.21.jar;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\org.jetbrains.kotlin\kotlin-stdlib\1.5.21\2f537cad7e9eeb9da73738c8812e1e4cf9b62e4e\kotlin-stdlib-1.5.21.jar;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\org.slf4j\slf4j-api\1.7.25\da76ca59f6a57ee3102f8f9bd9cee742973efa8a\slf4j-api-1.7.25.jar;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\org.jetbrains.kotlin\kotlin-stdlib-common\1.5.21\cc8bf3586fd2ebcf234058b9440bb406e62dfacb\kotlin-stdlib-common-1.5.21.jar;C:\Users\ivlat\.gradle\caches\modules-2\files-2.1\org.jetbrains\annotations\13.0\919f0dfe192fb4e063e7dacadee7f8bb9a2672a9\annotations-13.0.jar;C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.2.3\lib\idea_rt.jar" com.knowledgespike.client.ApplicationKt
Connected to the target VM, address: '127.0.0.1:52769', transport: 'socket'
Exception in thread "main" java.io.IOException: The remote computer refused the network connection.
at java.base/sun.nio.ch.Iocp.translateErrorToIOException(Iocp.java:299)
at java.base/sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:389)
at java.base/sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Disconnected from the target VM, address: '127.0.0.1:52769', transport: 'socket'
Process finished with exit code 1
Apparently, my IIS isn't really set up for port 9999, as I am getting the error below when I browse to localhost:9999 (or 127.0.0.1:9999):
HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
The connection string the Kotlin code uses is also localhost:9999 (127.0.0.1:9999).
How would I go about getting IIS to recognize this request / let it through? It really doesn't even have to be IIS; I just want to understand this sample code better, and any sort of sandbox/test web server that accepts this request would more than suffice at this point.
This is the Kotlin code that's throwing the exception when trying to connect:
fun main() = runBlocking {
    val client: AsynchronousSocketChannel = AsynchronousSocketChannel.open()
    val hostAddress = InetSocketAddress("localhost", PORT)
    val tcpSocket = TcpSocket(client)
    tcpSocket.connect(hostAddress)
    val br = BufferedReader(InputStreamReader(System.`in`))
    var line: String
    println("Main name is: \t\t\t\t\t${Thread.currentThread().name}")
    println("Message to server:")
    while (br.readLine().also { line = it } != null) {
        val result = async {
            println("while:async name is: \t\t\t${Thread.currentThread().name}")
            sendMessage(tcpSocket, line)
        }
        println("while: name is: \t\t\t\t${Thread.currentThread().name}")
        if (line == "bye") {
            println("End")
            break
        }
        val response: String = result.await()
        withContext(Dispatchers.Default) {
            println("withContext[Default] name is: \t${Thread.currentThread().name}")
            println("response from server: $response")
            println("Message to server:")
        }
    }
}
A webserver listens on a particular port; the default for HTTP is port 80. Unless you go out of your way to configure it otherwise, I expect that's where IIS will be listening.
I would start by taking Kotlin out of the question and establishing that you have a webserver listening on the expected port. Just use a regular web browser to check. Beware that browsers handle comms failures in nice friendly ways, hiding the real issues; better to use a more technical tool like curl or Postman.
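For example, a quick verbose check with curl (port 9999 is taken from the question; adjust as needed):

curl -v http://127.0.0.1:9999/

A "connection refused" here means nothing is listening on that port at all, whereas an HTTP error response means a server is listening but rejecting the request; a useful distinction before going back to the Kotlin side.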
If IIS is suspect, then what IDE are you using? If you are using IntelliJ:
find yourself an HTML file,
open it up, then notice that several hover icons show up in the top right,
pick the IntelliJ one; this will start an internal web server and then launch a web browser to connect to it,
try this web server for your tests.
I was able to circumvent the error by going to bindings for the default web site in the IIS Manager UI and then adding the port 9999. The Kotlin code connects fine now. I'm pretty sure there are better ways of solving this IRL, but I think it's good enough, as I am just learning Kotlin at this point.
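Since the sample client is a raw TCP echo client rather than an HTTP client, another option is to skip web servers entirely and run a minimal echo server for it to talk to. A sketch (assuming the client's PORT constant is 9999; everything here is illustrative, not part of the original sample):

import java.net.ServerSocket

// Minimal line-based echo server for exercising the sample client.
// Each connection gets its own thread; every received line is written back.
fun main() {
    ServerSocket(9999).use { server ->
        println("Echo server listening on port 9999")
        while (true) {
            val socket = server.accept()
            Thread {
                socket.use { s ->
                    val reader = s.getInputStream().bufferedReader()
                    val writer = s.getOutputStream().bufferedWriter()
                    reader.forEachLine { line ->
                        writer.write(line)
                        writer.newLine()
                        writer.flush()
                    }
                }
            }.start()
        }
    }
}

Binding port 9999 in IIS (as in the workaround above) only makes the TCP connect succeed; IIS will still answer with HTTP rather than echoing lines back, so a dedicated echo server is closer to what the sample expects.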

Syslog-ng logs not processing certain logs possibly due to journal cursor issue

I'm using syslog-ng 3.37.1 on a VMware Photon 3.0 virtual appliance (preconfigured VM). The appliance is configured to write logs into certain files under /var/log folder as well as to remote syslog servers (optional).
Logs from the 'auth' and 'authpriv' facilities are configured to be written to /var/log/auth.log, as well as sent to a remote syslog server when enabled.
In addition, there are other messages from the kernel, systemd services and other processes, configured to be processed via syslog-ng.
The issue is that logs from a few facilities (such as auth, authpriv, cron etc.) are not processed (received?) by syslog-ng initially. So any SSH events or TTY login events are not logged to the file or the remote server. However, many other events from the kernel, systemd and other processes are logged fine.
Below is the configuration for auth.log, which does not log on first boot.
filter f_auth { facility(auth) or facility(authpriv); };
destination authlog { file("/var/log/auth.log" perm(0600)); };
log { source(s_local); filter(f_auth); destination(authlog); };
I updated the filter as below, without any success:
filter f_auth {
    facility(auth) or facility(authpriv) or
    match('sshd' value('PROGRAM')) or match('systemd-logind' value('PROGRAM'));
};
In the journal I can observe the relevant logs; for example, the below command shows the SSH logs:
journalctl -f -u sshd
Additional syslog-ng service restarts or config reloads during appliance startup do not fix this.
The log file /var/log/auth.log (and also the cron log etc.) shows zero size during this time. The syslog-ng log looks fine too.
However, if I generate some auth facility event (say, an SSH/TTY login) and manually restart syslog-ng, all the log entries (including old events) are immediately written to the filesystem log (/var/log/auth.log) and also sent to the remote syslog server.
In syslog-ng.log I find the below entry when it starts working that way:
syslog-ng[481]: [date] Failed to seek journal to the saved cursor position; cursor='', error='Invalid argument (22)'
It makes me wonder if it is due to some bad cursor position. However, I can still see other systemd and kernel logs being logged fine, so I'm not sure.
What could be causing such behaviour? How can I ensure that syslog-ng is able to receive and process these logs without manual intervention?
Below is more detailed configuration for reference:
@version: 3.37
@include "scl.conf"
source s_local {
    system();
    internal();
    udp();
};
destination d_local {
    file("/var/log/messages");
    file("/var/log/messages-kv.log" template("$ISODATE $HOST $(format-welf --scope all-nv-pairs)\n") frac-digits(3));
};
log {
    source(s_local);
    # uncomment this line to open port 514 to receive messages
    #source(s_network);
    destination(d_local);
};
filter f_auth {
    facility(auth) or facility(authpriv); # Also tried facility(auth, authpriv)
};
destination authlog { file("/var/log/auth.log" perm(0600)); };
log { source(s_local); filter(f_auth); destination(authlog); };
destination d_kern { file("/dev/console" perm(0600)); };
filter f_kern { facility(kern); };
log { source(s_local); filter(f_kern); destination(d_kern); };
destination d_cron { file("/var/log/cron" perm(0600)); };
filter f_cron { facility(cron); };
log { source(s_local); filter(f_cron); destination(d_cron); };
destination d_syslogng { file("/var/log/syslog-ng.log" perm(0600)); };
filter f_syslogng { program(syslog-ng); };
log { source(s_local); filter(f_syslogng); destination(d_syslogng); };
# A few more of the above kind of configuration follows here.
# Add configuration files that have remote destination, filter and log configuration for remote servers
@include "remote/*.conf"
As can be seen, /var/log/auth.log should hold logs from the auth facility, but the log remains blank until a subsequent restart of syslog-ng after a syslog config change (via API) or a manual login into the system. However, triggering an automated restart of syslog-ng using cron (without an additional syslog config change) does not help.
Any thoughts or suggestions?
This is probably caused by your real-time clock going backwards. The notification mechanism in libsystemd does not work in this case.
There's a proof-of-concept patch in this syslog-ng issue:
https://github.com/syslog-ng/syslog-ng/issues/2836
But I've increased the priority of tackling that problem and fixing it, as it is causing issues more often than I anticipated.
As a workaround you should synchronize the time for your VM, preferably so that during boot it waits for a sync, and then keep the time synchronized by NTP.
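A sketch of that workaround on a systemd-based system (treat it as illustrative; the units named here are standard systemd ones, but Photon OS specifics may differ): enable systemd-time-wait-sync so that time-sync.target is only reached once the clock is actually synchronized, then order syslog-ng after that target.

# Enable the wait-for-sync helper (ships with systemd-timesyncd):
#   systemctl enable systemd-time-wait-sync.service
#
# /etc/systemd/system/syslog-ng.service.d/wait-for-time.conf
[Unit]
Wants=time-sync.target
After=time-sync.target

With this in place, syslog-ng does not start until the clock is in sync, so it should no longer observe the clock jumping backwards after it has started.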

logstash and centralised redis problems

I'm trying to get logstash working in a centralised setup using the docs as an example:
http://logstash.net/docs/1.2.2/tutorials/getting-started-centralized
I've got logstash (as indexer), redis, elasticsearch and standalone kibana3 running on my web server. I then need to run logstash as an agent on another server to collect apache logs and send them to the web server via redis. The number of agents will increase and the logs will vary, but for now I just want to get this working!
I need everything to run as a service so that all is well after reboots etc. All servers are running Ubuntu.
For all logstash instances (indexer and agent), I'm using the following init script (Ubuntu version, second gist):
https://gist.github.com/shadabahmed/5486949#file-logstash-ubuntu
For running redis as a service, I followed the instructions here:
http://redis.io/topics/quickstart (Installing redis more properly)
Elasticsearch is also running as a service.
On the web server, running redis-cli returns PONG correctly. Navigating to the correct Elasticsearch URL returns the correct JSON response. Navigating to the Kibana3 url gives me the dashboard, but no data. UFW is set to allow the redis port (at the moment from everywhere).
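A quick end-to-end check worth doing from the agent machine as well (assuming redis-tools is installed there; the xx.xx.xx.xx placeholder matches the agent config below):

redis-cli -h xx.xx.xx.xx ping

A PONG here proves redis is reachable through UFW from the agent, not just locally on the web server.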
On the web server, my logstash.conf is:
input {
    file {
        path => "/var/log/apache2/access.log"
        type => "apache-access"
        sincedb_path => "/etc/logstash/.sincedb"
    }
    redis {
        host => "127.0.0.1"
        data_type => "list"
        key => "logstash"
        codec => json
    }
}
filter {
    grok {
        type => "apache-access"
        pattern => "%{COMBINEDAPACHELOG}"
    }
}
output {
    elasticsearch {
        embedded => true
    }
    statsd {
        # Count one hit every event by response
        increment => "apache.response.%{response}"
    }
}
From the agent server, I can telnet successfully to the web server IP and redis port. Logstash is running. The logstash.conf file is:
input {
    file {
        path => "/var/log/apache2/shift.access.log"
        type => "apache"
        sincedb_path => "/etc/logstash/since_db"
    }
    stdin {
        type => "example"
    }
}
filter {
    if [type] == "apache" {
        grok {
            pattern => "%{COMBINEDAPACHELOG}"
        }
    }
}
output {
    stdout { codec => rubydebug }
    redis { host => ["xx.xx.xx.xx"] data_type => "list" key => "logstash" }
}
If I comment out the stdin and stdout lines, I still don't get a result. The logstash logs do not give me any connection errors - only warnings about the deprecated grok settings format.
I have also tried running logstash from the command line (making sure to stop the daemonized service first). The apache log file is correctly output in the terminal, so I know that logstash is accessing the log correctly. And I can write random strings and they are output in the correct logstash format.
The redis logs on the web server show no sign of trouble.
The frustrating thing is that this has worked once. One message from stdin made it all the way through to Elasticsearch. That was this morning, just after getting everything set up. Since then, I have had no luck and I have no idea why!
Any tips/pointers gratefully received... Solving my problem will stop me tearing out more of my hair, which will also make my wife happy.
UPDATE
Rather than filling the comments....
Thanks to @Vor and @rutter, I've confirmed that the user running logstash can read/write to the logstash.log file.
I've run the agent with -vv and the logs are populated with e.g.:
{:timestamp=>"2013-12-12T06:27:59.754000+0100", :message=>"config LogStash::Outputs::Redis/#host = [\"XX.XX.XX.XX\"]", :level=>:debug, :file=>"/opt/logstash/logstash.jar!/logstash/config/mixin.rb", :line=>"104"}
I then input random text into the terminal and get stdout results. However, I do not see anything in the logs until AFTER terminating the logstash agent. After the agent is terminated, I get lines like these in the logstash.log:
{:timestamp=>"2013-12-12T06:27:59.835000+0100", :message=>"Pipeline started", :level=>:info, :file=>"/opt/logstash/logstash.jar!/logstash/pipeline.rb", :line=>"69"}
{:timestamp=>"2013-12-12T06:29:22.429000+0100", :message=>"output received", :event=>#<LogStash::Event:0x77962b4d #cancelled=false, #data={"message"=>"test", "#timestamp"=>"2013-12-12T05:29:22.420Z", "#version"=>"1", "type"=>"example", "host"=>"Ubuntu-1204-precise-64-minimal"}>, :level=>:info, :file=>"(eval)", :line=>"16"}
{:timestamp=>"2013-12-12T06:29:22.461000+0100", :level=>:debug, :host=>"XX.XX.XX.XX", :port=>6379, :timeout=>5, :db=>0, :file=>"/opt/logstash/logstash.jar!/logstash/outputs/redis.rb", :line=>"230"}
But while I do get messages in stdout, I get nothing in redis on the other server. I can, however, telnet to the correct port on the other server, and I get "ping/PONG" in telnet, so redis on the other server is working. And there are no errors etc. in the redis logs.
It looks to me very much like the redis plugin on the logstash shipper agent is not working as expected, but for the life of me I can't see where the breakdown is coming from.
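One way to narrow this down (a suggestion, not something from the thread itself): watch the redis list directly on the web server while the agent is running, which tells you whether events are failing to arrive from the agent or failing to be drained by the indexer. The key name logstash matches the configs above.

redis-cli llen logstash    # queue length: grows as the agent ships events, shrinks as the indexer drains them
redis-cli monitor          # live view of every command: pushes from the agent, pops from the indexer

If llen stays at zero while the agent's stdout shows events, the shipper's redis output is the problem; if it grows but Kibana stays empty, look at the indexer instead.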

Logstash initially reads but then stops reading log files from CIFS network share

I've set up logstash on a CentOS server to read our production web servers' IIS logs via a CIFS mount.
input {
    file {
        path => "/mnt/remote/server*/W3SVC1/ex*.log"
        type => "w3c"
    }
}
filter {
    grok {
        type => "w3c"
        match => [ "message", "%{HOST:hostname} %{IP:hostip} %{WORD:method} %{URIPATH:request} (?:%{NOTSPACE:param}|-) %{NUMBER:port} (?:%{USER:username}|-) %{IPORHOST:clientip} %{NOTSPACE:httpver} (?:%{NOTSPACE:agent}|-) %{NOTSPACE:cookies} %{NOTSPACE:referer} %{IPORHOST:webhostname} %{NUMBER:status} %{NUMBER:time-taken}" ]
    }
}
But after reading an initial burst of logs, it just dies.
(The elevated data afterwards is from a different data source.)
I tried a hack from Jordan from this thread, but it doesn't seem to work:
tail -f /mnt/remote/server1/W3SVC1/ex130913.log | java -jar logstash.jar
We are purposely avoiding installing Java/Logstash on our front-end web servers because of security issues. So, can you think of a way to make this work?
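For what it's worth (a sketch, not from the thread): the tail pipe only does something if logstash is configured to read stdin, e.g. with a minimal config reusing the w3c type so the existing grok filter still applies:

input {
    stdin {
        type => "w3c"
    }
}

tail -F /mnt/remote/server1/W3SVC1/ex130913.log | java -jar logstash.jar agent -f stdin.conf

tail -F (capital F) keeps retrying if the file is rotated or briefly unreadable. As for the original stall, one plausible culprit is CIFS attribute caching making the file's size and mtime look unchanged to the file input's polling; mounting with a shorter actimeo value may help.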

couldn't setup local SOCKS5 proxy on port 7777: Address already in use: JVM_Bind

While sending a message from an agent (in Spark) to a client in the client application,
I'm getting the following error:
couldn't setup local SOCKS5 proxy on port 7777: Address already in use: JVM_Bind
The code I wrote for sending the message to the client is below.
I wrote it in the class that implements org.jivesoftware.smackx.workgroup.agent.OfferListener:
Message message1 = new Message();
message1.setBody(message);
try {
    for (MultiUserChat muc : GlobalUtils.getMultiuserchat()) {
        if (muc.getRoom().equals(conf)) {
            muc.sendMessage(message1);
            System.out.println("message sent ############# agent to client..");
        }
    }
} catch (Exception ex) {
    System.out.println("exception while sending message in sendMessage()");
    ex.printStackTrace();
}
Help me, thanks.
rajesh.v
It was because you were running your server and your client on the same machine.
You know... I assume you use Openfire for the server.
Openfire uses port 7777 by default for its file transfer proxy service, which is enabled by default.
And your client does the same, using port 7777 for its default file transfer.
Look at the Openfire setting under Server Settings > File Transfer Settings.
You can disable it there,
or just run your client and your server on different machines.
I think you are in a development state, so your server and your client are on the same machine.
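Another option is to move or disable the proxy on the client side instead of touching Openfire. A sketch, assuming a Smack 4.x-era API (the static Socks5Proxy configuration; check it against the Smack version actually in use):

import org.jivesoftware.smackx.bytestreams.socks5.Socks5Proxy;

// Assumed Smack 4.x API: move the client's local SOCKS5 file-transfer proxy
// off port 7777 so it no longer collides with Openfire's proxy on the same machine.
Socks5Proxy.setLocalSocks5ProxyPort(7778);

// Or, if file transfer isn't needed at all, disable the local proxy:
Socks5Proxy.setLocalSocks5ProxyEnabled(false);

These are static settings, so run them early, before the connection and file-transfer machinery start.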
What is the payload of your message? Are there any '&' characters in it? Not sure why, but this seems to trip up Smack.