WebLogic server breach: how do I find signs of what data, if any, was accessed?

A WebLogic server got hacked; the problem has since been removed.
I am now looking through the infected VMs in a sandbox and want to see what data, if any, was accessed on the application servers.
The app servers were getting hammered with SSH requests, which is how we identified the infected VMs as the WebLogic VMs; we did not have HTTP logging turned on. Is there any way to identify whether any PII was compromised?
I have looked through the secure logs on WebLogic as well as the PIA logs, but I am not sure how to identify what data, if any, was accessed.
I would like to find out what information or data went out of our network.
What should I be looking for?
Is there anything I can learn from looking at the WebLogic servers running on Red Hat?

I would suspect that SSH was not the only service being hammered, and that the SSH noise was a large-scale attempt to keep eyes on the auth logging while an attempt was made on other services.
Do you have a time frame that you are working with?
Have the OS logs been checked for that time frame?
Has .bash_history been checked? Environment variables? /etc/pass* for added users? Aliases? Reverse shells open among the network connections? New users created on services running on that particular host?
Was WebLogic the only service running on this publicly available host?
What other services and ports were available?
Was this due to an older version of WebLogic, or to another service, application, or plugin?
Create yourself an Excel spreadsheet and start a timeline.
Look at all the OS-level logging possible, make note of anything that looks suspicious, and then follow each breadcrumb to exhaustion.
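To make that OS-level sweep concrete, here is a minimal triage sketch in Python. It gathers only a few of the artifacts listed above (shell histories, the account list, open sockets, recent logins) into one report file; the paths and commands are standard on RHEL but verify them on your build, and run it as root from trusted media, since binaries on a compromised box may have been replaced.

    #!/usr/bin/env python3
    """Minimal post-compromise triage sketch for a Red Hat host."""
    import glob
    import subprocess
    from datetime import datetime, timezone

    def run(cmd):
        """Run a command, returning its output or a note on failure."""
        try:
            return subprocess.run(cmd, capture_output=True, text=True,
                                  timeout=60).stdout
        except (OSError, subprocess.SubprocessError) as exc:
            return f"[failed: {exc}]"

    report = [f"Triage started {datetime.now(timezone.utc).isoformat()}"]

    # Shell history for root and every account under /home.
    for hist in glob.glob("/root/.bash_history") + glob.glob("/home/*/.bash_history"):
        report.append(f"--- {hist} ---\n" + open(hist, errors="replace").read())

    # Account list: diff against a known-good copy to spot added users.
    report.append("--- /etc/passwd ---\n" + open("/etc/passwd").read())

    # Listening sockets and live connections (look for reverse shells).
    report.append("--- ss -tunap ---\n" + run(["ss", "-tunap"]))

    # Recent logins, including source hosts.
    report.append("--- last -Fa ---\n" + run(["last", "-Fa"]))

    with open("triage-report.txt", "w") as out:
        out.write("\n\n".join(report))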

Related

Verifying individual servers in a load balancing configuration

Here is my situation. Recently, my production environment was burned by a few Windows updates that caused some production servers to stop responding. While we have since resolved the issue of both servers (which are in a load-balancing configuration) getting updates on the same day, the question arose: how do we check that the application running on each server is still working? If we call the load-balancing IP, we may or may not hit a server that is working. So if an update takes out the application on one server, how do we know that this has happened?
The only idea I have is to purchase two more SSL certificates, allocate two IP addresses, and assign one to each server. That way I would be guaranteed to know each server is up (we have a third-party service pinging our servers). But I have to believe there is a better way to do this.
Please note that I am a .NET developer by trade with only an extremely small smattering of networking and IIS experience, but I'm what my small company has. So please assume I don't know where a lot of stuff is, and dumb down the answer.
The load balancer maintains a live status of the servers (based on timeouts or HTTP health checks) and uses this status to route traffic only to active servers.
Generally, LBs have a dashboard through which you can check this status. If not, you can check its logs.
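You can also probe each backend directly, bypassing the balancer entirely. Below is a minimal sketch; the server addresses, hostname, and health path are placeholders for your own, and sending the site's Host header lets IIS route the request to the correct site binding even though you connect by IP.

    #!/usr/bin/env python3
    """Probe each backend directly, bypassing the load balancer."""
    import http.client

    BACKENDS = ["10.0.0.11", "10.0.0.12"]  # hypothetical per-server addresses
    SITE_HOST = "www.example.com"          # public hostname of the site
    HEALTH_PATH = "/"                      # a page that exercises the app

    for ip in BACKENDS:
        try:
            conn = http.client.HTTPConnection(ip, 80, timeout=5)
            # The Host header makes IIS serve the right site even
            # though we connect by IP address.
            conn.request("GET", HEALTH_PATH, headers={"Host": SITE_HOST})
            status = conn.getresponse().status
            flag = "" if status == 200 else "  <-- check this server"
            print(f"{ip}: HTTP {status}{flag}")
        except OSError as exc:
            print(f"{ip}: DOWN ({exc})")

Your third-party monitoring service could hit the same per-server URLs, which avoids buying extra certificates as long as the probe runs over plain HTTP inside your network.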

Where is guacamole-client?

I've followed the instructions here: http://guac-dev.org/doc/gug/installing-guacamole.html
This says:
Guacamole is separated into two pieces: guacamole-server, which provides the guacd proxy and related libraries, and guacamole-client, which provides the client to be served by your servlet container, usually Tomcat.
guacamole-client is available in binary form, but guacamole-server must be built from source. Don't be discouraged: building the components of Guacamole from source is not as difficult as it sounds, and the build process is automated. You just need to be sure you have the necessary tools installed ahead of time. With the necessary dependencies in place, building Guacamole only takes a few minutes.
It then proceeds to describe how to install guacamole-server and use it. I can now go to http://localhost:8080/guacamole/ and access the server and see which clients have connected.
How do I connect a client, though? I see no documentation of where the remote desktop needs to browse in order to run guacamole-client.
Or have I totally misunderstood this?
The key phrase in the quoted documentation is:
... guacamole-client, which provides the client to be served by your servlet container, usually Tomcat.
"guacamole-client" is the web application and the client. When a user visits the URL for your Guacamole server, logs in, and clicks on a connection, they are connected to the corresponding remote desktop via Guacamole's JavaScript client which is served to their browser like any other web application.
I can now go to http://localhost:8080/guacamole/ and access the server and see which clients have connected.
The list you see when you first log in to your Guacamole server is not the list of clients that have connected; it is the list of connections to remote desktops which are available. If you click on one of those connections, you will be connected using Guacamole's own built-in JavaScript client.
How do I connect a client, though? I see no documentation of where the remote desktop needs to browse in order to run guacamole-client.
The remote desktop does not need to do anything - Guacamole will simply connect to it. You can see a video of the overall user experience on the Guacamole website which may hopefully clear things up for you:
https://vimeo.com/116207678
Overall:
You deploy guacamole-client (the web application) and install guacamole-server (the remote desktop proxy that the web application uses in the backend). The combination of these two pieces of software makes up a typical Guacamole server.
You and your users can log in through the web application and connect to remote desktops using a web browser.
You do not need to explicitly run a client.
It looks like this:
Internet -> Guacamole server (on the local network) -> desktop PC
I installed Guacamole in a VMware environment on Ubuntu.
There is a file in /etc/guacamole called user-mapping.xml.
In that file you add or edit the connections available to a given user.
A connection for that user must be set between the <connection> tags.
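A minimal sketch of such a file, with a made-up login and a hypothetical desktop at 192.168.1.100 reached over RDP (adjust the protocol and params to your environment):

    <user-mapping>
        <!-- One login; plain-text password shown for brevity only. -->
        <authorize username="demo" password="changeme">
            <!-- Everything between the connection tags defines one
                 remote desktop this user may reach. -->
            <connection name="office-pc">
                <protocol>rdp</protocol>
                <param name="hostname">192.168.1.100</param>
                <param name="port">3389</param>
            </connection>
        </authorize>
    </user-mapping>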

Hosting a continuously running console application

Azure VM, Cloud Service, or WebJob?
I have a configurable console application which runs continuously. Currently it is running on a VM and consumes a lot of memory (it is basically doing data mining).
The current requirement is to have multiple instances of this application, each with a different set of configuration that can be changed by specific users.
So where should I host this application such that the configuration can be modified through some front end that provides access management (like SharePoint) and the ability to stop/restart it (like a WCF service) without logging on to the VM?
I am open to any suggestions/ideas. Thanks.
I don't think there's any solid answer to this question, as preference is a variable, but for what it's worth: if it were up to me, I would deploy it to individual Azure VMs for each specific set of users. That way, if resource usage went up because of config changes one user group made, the impact would be isolated to that group, and with Azure it can scale automatically to meet the resource demand. Then just build a little .NET web app to let users authenticate and change configuration settings.
You could expose an "admin" endpoint for your service (obviously you need authentication here!) that:
1. returns the current configuration,
2. accepts new configuration, and
3. restarts the service (if needed). Stopping the service will be harder, since that leaves the question of how to start it again.
Then you need to write your own application (or use a third-party one, like SharePoint or a CMS) that handles your users and, under the hood, consumes your "admin" endpoint (a minimal sketch of such an endpoint closes this answer).
Edit, regarding the hosting part: if I understand you correctly, your app is just a console application today and you don't know how to host it? There are many answers to that question. If you have an operations department, go talk to them; if you are on your own, play around and see what fits you and your environment best.
My tip: go for an HTTP/HTTPS protocol/interface, simply because there are many web hosts out there and you can easily find tools for that protocol. If you are on the .NET platform, check out ASP.NET Web API or OWIN self-hosting.
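A minimal sketch of that admin endpoint, in Python for brevity since the pattern is language-agnostic (the routes, port, and X-Admin-Token check are illustrative assumptions; real code needs proper authentication, not a shared token):

    #!/usr/bin/env python3
    """Sketch of the "admin" endpoint; the same shape maps onto an
    ASP.NET Web API controller."""
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CONFIG = {"threads": 4, "source": "default"}  # the running configuration
    ADMIN_TOKEN = "replace-me"                    # placeholder credential

    class AdminHandler(BaseHTTPRequestHandler):
        def _authorized(self):
            return self.headers.get("X-Admin-Token") == ADMIN_TOKEN

        def do_GET(self):
            # 1. Return the current configuration.
            if not self._authorized():
                self.send_error(401)
                return
            body = json.dumps(CONFIG).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def do_PUT(self):
            # 2. Accept new configuration; the mining loop would pick
            #    it up on its next cycle (or restart itself, point 3).
            if not self._authorized():
                self.send_error(401)
                return
            length = int(self.headers.get("Content-Length", 0))
            CONFIG.update(json.loads(self.rfile.read(length)))
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), AdminHandler).serve_forever()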
Azure now has Machine Learning for data-mining workloads.
You should check whether it suits you.
Otherwise, you can use a WebJob, which:
allows you to have multiple instances of your long-running job (WebJob scaling out);
lets AppSettings be changed from the Azure Portal or via the Azure Management API.

Cocoa server with user friendly automatic port forwarding or external ip lookup

I am coding a Mac app which will be a server that serves files to each user's mobile device.
The issue with this, of course, is getting the actual IP/port of the server host, as it will usually be inside a home network. If the IP/port changes, it's no big deal, as I plan to send that info to a middle-man server first and have my mobile app get the info from there.
I have tried UPnP with https://code.google.com/p/tcmportmapper/ but even though I know my router supports UPnP, the library does not work as intended.
I even tried running a TURN server on my Amazon EC2 instance, but I had a very hard time figuring out what message to send it to get the info I need.
Since last night I've been experimenting with Google's libjingle, but am having a hard time even getting the provided iOS example to run.
Any advice on getting this seemingly difficult task accomplished?
The port of your app will not change. The IP change can be handled by posting your server's IP to a web service every hour or whatever time period you want.
The server should request a URL like http://your-web-service.com/serverip.php?ip=your-updated-ip and have serverip.php handle the rest (put it into a MySQL DB or something).
When your client starts, it should ask your site for the IP and then connect to your server using that.
This is a pretty common way of handling this type of thing.
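A minimal sketch of the reporting loop, reusing the placeholder serverip.php URL from above; the public-IP echo service used here (api.ipify.org) is an assumption, and any "what is my IP" endpoint would do:

    #!/usr/bin/env python3
    """Hourly loop that posts this host's public IP to the middle-man."""
    import time
    import urllib.request

    REPORT_URL = "http://your-web-service.com/serverip.php?ip={ip}"

    def public_ip():
        # Ask an external echo service for our WAN-facing address.
        with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
            return resp.read().decode().strip()

    while True:
        try:
            urllib.request.urlopen(REPORT_URL.format(ip=public_ip()), timeout=10)
        except OSError as exc:
            print(f"report failed: {exc}")
        time.sleep(3600)  # re-post every hour, per the scheme above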

Tracking internet activity of RDP sessions

Our product (a network forensics and analytics tool) has a requirement to dissect RDP sessions on Windows 200x servers and:
Map each session to a logged-in account.
Track all TCP/UDP sessions that are going to the internet.
URLs visited
External Servers and ports connected to, etc.
I have designed code that achieves this by installing an NT service on each of the terminal servers. This service mines the data on that server and pushes it to my Linux-based appliance. Alternatively, it can log the information to the local Event Log, and I can then use simple WMI calls to retrieve it.
However, I would like to know if there is a way to retrieve all TCP/UDP connections by polling the terminal servers externally (via WMI or otherwise) and gather the same information. Basically, I am trying to check whether there is a way to avoid installing anything on the Windows terminal servers (a sketch of such a poll is below).
Thanks,
-Chandra
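For reference, a sketch of what that agentless poll could look like, using the third-party Python wmi package from a Windows polling host. One caveat: the MSFT_NetTCPConnection class lives in root\StandardCimv2 and only exists on Windows 8 / Server 2012 and later, so the Windows 200x servers in question do not expose their connection table over WMI; that gap is exactly why an on-box agent may be unavoidable. The hostname and credentials are placeholders.

    #!/usr/bin/env python3
    """Agentless poll of a terminal server's TCP connection table over
    WMI, using the third-party `wmi` package (pip install wmi); requires
    Server 2012+ on the target, as noted above for root\\StandardCimv2."""
    import wmi

    conn = wmi.WMI(computer="terminal-server-01",
                   user=r"DOMAIN\poller", password="secret",
                   namespace=r"root\StandardCimv2")

    # One row per TCP connection: endpoints plus the owning process id,
    # which can then be mapped back to the session's logged-in account.
    for c in conn.MSFT_NetTCPConnection():
        print(f"{c.LocalAddress}:{c.LocalPort} -> "
              f"{c.RemoteAddress}:{c.RemotePort} (pid {c.OwningProcess})")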