Fluent Bit: pump local file to Splunk

I am trying to pump a local file to Splunk using Fluent Bit. The Splunk instance is currently HTTPS and secured.
I keep encountering an "unexpected EOF" error message, and I am not sure what I have done wrong in the fluent-bit.config file.
This is the general setup of the Splunk instance.
Below is the fluent-bit.config that I used with fluent-bit.exe:
[INPUT]
    Name            tail
    Tag             taglog
    Path            C:\*.json

[OUTPUT]
    Name            splunk
    Match           *
    Host            localhost
    Port            443
    Splunk_Token    <The HTTP Event Collector token generated in Splunk Web>
    TLS             On
    TLS.Verify      On
    http_user       <The username used to log in to Splunk Web>
    http_passwd     <The password used to log in to Splunk Web>
    splunk_send_raw On
When I set TLS.Verify to Off, I get a 303 HTTP status code instead.

You are connecting to the web port. Instead, you must send it to the management port. (I think it's 8089 by default, but you should confirm in your Splunk settings.)
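To isolate the problem, you can test the HEC endpoint directly with curl before touching the Fluent Bit config. A minimal sketch, assuming HEC is listening on its common default port 8088 (substitute whatever port your Splunk settings actually show, plus your real token):
curl -k https://localhost:8088/services/collector/event \
  -H "Authorization: Splunk <The HTTP Event Collector token>" \
  -d '{"event": "fluent-bit connectivity test"}'
If that returns {"text":"Success","code":0}, point the Port field of the [OUTPUT] section at the same port.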


Syslog-ng to Syslog-ng over TLS - destination not writing to disk

Trying to configure a syslog-ng server to send all of the logs that it receives to another syslog-ng server over TLS, both running RHEL 7. Everything seems to be working from an encryption and cert perspective: I'm not seeing any error messages in the logs, an openssl s_client test connection works successfully, and I can see the packets coming in over the port that I'm using for TLS, but nothing is being written to disk on the second syslog-ng server. Here's a summary of the config on the syslog server that I'm trying to send the logs to:
source:
source s_encrypted_syslog {
    syslog(ip(0.0.0.0) port(1470) transport("tls")
        tls(key-file("/etc/syslog-ng/key.d/privkey.pem")
            cert-file("/etc/syslog-ng/cert.d/servercert.pem")
            peer-verify(optional-untrusted)  # changing to trusted once issue is fixed
        )
    );
};
destination:
destination d_syslog_facility_f {
    file("/mnt/syslog/$LOGHOST/log/$R_YEAR-$R_MONTH-$R_DAY/$HOST_FROM/$HOST/$FACILITY.log" dir-owner("syslogng") dir-group("syslogng") owner("syslogng") group("syslogng"));
};
log setting:
log { source(s_encrypted_syslog); destination(d_syslog_facility_f); };
syslog-ng is currently running as root to rule out permission issues, and SELinux is currently set to permissive. I tried increasing the verbosity of the syslog-ng logs and turned on debugging, but nothing jumps out at me as far as errors or issues go. The odd thing is that I have a very similar config on the first syslog-ng server, and it's receiving and storing logs just fine.
Also, I should note that there could be some small typos in the config above, as I'm not able to copy and paste it. syslog-ng lets me start the service with no errors with the config that's currently loaded. It's simply not writing the data it receives to the destination that I have specified.
It happens quite often that the packet filter prevents a connection to the syslog port, in your case port 1470. In that case the server starts up successfully, and you might even be able to connect using openssl s_client on the same host, but the client will not be able to establish a connection to the server.
Please check that you can actually connect to the server from the client computer (e.g. via openssl s_client, or at least with something like netcat or telnet).
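A concrete form of that check, run from the client machine (the hostname is a placeholder for your second syslog-ng server):
openssl s_client -connect syslog-server.example.com:1470
A completed TLS handshake here rules out basic connectivity and certificate problems.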
If the connection works, another issue might be that the client is not routing messages to this encrypted destination. syslog-ng only performs the SSL handshake as messages are being sent, so having no messages would result in the connection being open but not really exchanging packets on the TCP level.
A couple of troubleshooting tips (sketched as concrete commands after this list):
You can check if there is a connection between the client and the server with "netstat -antp | grep syslog-ng" on the server or the client. You should see connections in the ESTABLISHED state on both sides of the connection (with local/remote addresses switched of course).
Check that your packet filter lets port 1470 connections through. You are most likely using iptables, try reviewing your ruleset and see if port 1470 on TCP is allowed to pass in the INPUT chain. You could try adding a "LOG" rule right before the default rule to see if the packets are dropped at that level. If you already have LOG rules, you might check the kernel logs of the server to see if that LOG rule produced any messages.
You can also confirm if there's traffic with tcpdump on the server (e.g. tcpdump -pen port 1470). If you write the traffic dump to a file (e.g. the -w argument to tcpdump, along with -s 0 to avoid truncation), then this dump file can be analyzed with wireshark to see if the negotiation takes place. You should at the very least see a "Client Hello" and a "Server Hello" packet which are not encrypted at the beginning of the handshake.
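The tips above as commands; the LOG rule position <n> is a placeholder you'd take from the numbered listing, and the capture path is an assumption to adapt:
# on the server: look for ESTABLISHED syslog-ng connections
netstat -antp | grep syslog-ng
# review the INPUT chain, then insert a LOG rule just before your default drop/reject rule
iptables -L INPUT -n --line-numbers
iptables -I INPUT <n> -p tcp --dport 1470 -j LOG --log-prefix "syslog-1470: "
# capture the handshake for wireshark; -s 0 avoids truncating packets
tcpdump -pen -s 0 -w /tmp/port1470.pcap port 1470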

How to change the "cn" value to an IP address instead of localhost in WebSphere Application Server

I am trying to set up a client-server EJB using two different machines on my network. While installing the WAS server, it prompted me to add admin credentials, so LTPA is enabled (I hope that enabled SSL). Now, on the WAS server where the client EJB is deployed, I have to configure the outbound IIOP SSL certificate (correct me if I am wrong on this). But unfortunately, in the server WAS admin console I can see the SSL signer certificate's cn (host/domain) parameter as localhost, and the same "localhost" arrives when I try to "retrieve from port" on the client EJB's WAS server.
I have attached the "Retrieve from port" screenshot (client WAS "retrieve from port" action).
I have even tried changing the hostname on the server WAS under Server -> Communications -> Port to the IP address instead of localhost.
I expect it to bring domainname.ipaddress, but the "Retrieve from port" action always brings "localhost" from the remote server.
As per the comment by @Gas, I am following the link below:
https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/tsec_sslreplacecell.html
Will update shortly.

Connect a Jaeger agent to a collector running in Openshift

I am having problems pointing a Jaeger agent to a collector running in OpenShift.
I am able to browse to my OCP collector endpoint at:
https://mycollectoropenshift.com:443
My Jaeger agent Dockerfile currently looks like this:
FROM centos:latest
EXPOSE 5775/udp 6831/udp 6832/udp 5778
COPY agent-linux /go/bin/
#CMD ["--collector.host-port=localhost:14267"]
#CMD ["--collector.host-port=https://mycollectoropenshift.com:443"]
CMD ["--collector.host-port=mycollectoropenshift.com:443"]
ENTRYPOINT ["/go/bin/agent-linux"]
I get the expected result when I point my agent at a collector running locally, per the first commented-out CMD line.
I get the following error when using the second commented-out CMD line (the one with the https:// scheme):
error":"dial tcp: address https://mycollectoropenshift.com:443: too many colons in address"
When I point the agent at the collector running on OpenShift with the uncommented CMD, I get the error below:
Failed to run the agent: listen tcp 10.100.120.221:443: bind: cannot assign requested address
I am able to successfully curl the collector endpoint like this:
curl https://mycollectoropenshift.com:443
I get the following error when I attempt to curl the endpoint this way:
curl mycollectoropenshift.com:443
curl: (52) Empty reply from server
I need help setting up a proper --collector.host-port flag that will connect to a collector running remotely behind HTTPS.
I don't think it's possible at the moment, and you should definitely ask for this feature on the mailing list, Gitter, or in a GitHub issue. The current assumption is that a cleartext TChannel connection can be made between the agent and collector(s), all being part of the same trusted network.
If you are using the Java, Node, or C# client, my recommendation in your situation is to have your Jaeger client talk directly to the collector. Look for the env var JAEGER_ENDPOINT in the Client Features documentation page.
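For illustration, that direct client-to-collector setup usually comes down to exporting the endpoint before starting the instrumented app. A sketch, assuming the collector's HTTP trace intake is exposed at /api/traces behind your route:
# bypass the agent entirely; the Java/Node/C# clients read this env var
export JAEGER_ENDPOINT=https://mycollectoropenshift.com:443/api/traces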

Analyse number of possible keepalive_requests from the client side

I'd like to figure out the value of keepalive_requests for a given nginx or Apache server from the client side. The default for nginx is 100 (http://nginx.org/en/docs/http/ngx_http_core_module.html), but I'd like to analyse this for www.example.com, where I don't have access to the config.
Obviously I could start Wireshark and do it manually. I was hoping for some sort of easy shell command (e.g. wget-like).
From the client side, I use ab, Wireshark, and an editor like Notepad++ to count the number of HTTP requests in a socket.
First I use Apache's ab to send keep-alive requests (the -k flag; many requests over few connections, so each socket can actually reach the server's limit), something like this:
ab -k -n 10000 -c 10 http://www.example.com/index.php
Before executing it, start Wireshark and set the display filter (fill in your server's IP address):
ip.dst == <server-ip> && tcp.port == 80 && !http && tcp.flags.fin==1
After the ab run finishes, the Wireshark result list shows the sockets used during the requests. Right-click one packet and choose Follow TCP Stream; the window that opens shows all the messages sent and received on that socket.
If the last FIN was sent by the remote nginx server, the server closed the connection, which is what keepalive_requests controls. Copy all the requests in that TCP stream into Notepad++, search for a keyword such as GET, and count the requests in the socket; that number is the value of the keepalive_requests setting on the remote nginx server.
BTW, I'd welcome a better solution; mine is not so good.
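A rougher sketch that skips Wireshark entirely: curl reuses a single connection across globbed URLs and prints a new "Connected to" line each time the server closes the previous connection, so the request count divided by the connection count approximates keepalive_requests (the URL is a placeholder):
# 300 requests over as few TCP connections as the server allows;
# verbose output goes to the pipe, response bodies are discarded
curl -sv "http://www.example.com/index.php?[1-300]" 2>&1 >/dev/null | grep -c "Connected to"
With the nginx default of 100, this would print roughly 3.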

Setting up passive FTP (IIS6) on Windows Server 2003

I am having trouble setting up passive FTP on IIS 6. I followed these instructions: http://www.velikan.net/iis-passive-ftp/
When I tried to upload a file through the FTP, I got the error:
425 Can't open data connection. : /index.html
The interesting thing is that from the server, I can see the index.html file has been created, but its file size is 0.
I am using the FireFTP client. I opened FTP passive ports 1024-1048.
Any ideas? Thanks!
Have you set the passive port range and opened the ports on the server and any intermediate firewall (allowing connections on those ports from client to server)?
Have you allowed the ports/application in your local firewall (allowing connections outwards)?
In the FTP client log, does it say PASV at some point?
The command to create the file is sent on the port 21 control connection; the additional port is the one for data. So a 0 KB file being created just shows that the data connection is not working.
A few things to check:
Make sure the client is making PASV connections. Check the FTP client logs to see if it is sending the PASV command before retrieving any data.
FTP passive ports are NOT 1024-1048; as far as I know, the server randomly picks ports above 1024 unless a passive port range has been configured.
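For reference, the IIS 6 way to pin the passive port range is the adsutil.vbs metabase script; a sketch, assuming Windows Server 2003 SP1+ and the default AdminScripts path, reusing the 1024-1048 range from the question (restart the FTP service afterwards):
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set /MSFTPSVC/PassivePortRange "1024-1048"
net stop msftpsvc && net start msftpsvc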