WebLogic server - large file transfers

I may be giving entirely the wrong information here, because at the moment we're not sure where to look for the issue. We have a server running on WebLogic; I'm not sure which version.
Our site hosts an installer that clients need, which runs around 15 MB. Normally it downloads perfectly fine, but we've recently been seeing downloads where the browser reports completion but the installer can't be opened - the file size isn't what it's expected to be either, as if the download was simply cut off.
The issues are limited to cases where the user is on a spotty connection, such as a 3G card in a laptop.
It seems to happen mostly on Macs, but that appears to be because the Mac .dmg file is much larger than the Windows executable. Still, from my knowledge of network protocols, a spotty network shouldn't cause the specific issue we're seeing.
At the moment we're debugging the various layers of the transfer, such as our firewalls, but with my meager knowledge of WebLogic I'm curious whether there is something we could be missing in the server's configuration itself.
Unfortunately, I'm not sure whether I'm able to post the configuration files here - I'm fairly sure there are currently no servlet rules created specifically for the installer's directory - but I was hoping someone here might at least recognize this type of issue and be able to point me in the right direction.

Check whether you have a MaxPostSize limit set.
For the responses that failed, check whether any socket timeout errors appear in the log file.
If you are using a proxy, check for errors there too, mainly socket-related ones.
Such issues can occur when a TCP socket times out at the firewall end, at the WLS end, or at a front-end proxy such as Apache.
There are a few other settings in WLS, such as the HTTP connection timeout.
Check them from the admin console under the server's Protocols settings, on the General or HTTP tab.
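
To confirm that the downloads really are being truncated rather than corrupted some other way, it helps to compare the Content-Length header with the bytes actually received. A minimal sketch (Node/TypeScript; the installer URL is a placeholder for your own):

```typescript
// download-check.ts - compare Content-Length with bytes actually received.
// The URL below is a placeholder; point it at your real installer.
const url = "https://example.com/downloads/installer.dmg";

async function checkDownload(): Promise<void> {
  const res = await fetch(url);
  const expected = Number(res.headers.get("content-length") ?? -1);

  // Count the bytes as they stream in rather than buffering the whole file.
  let received = 0;
  const reader = res.body!.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.length;
  }

  console.log(`expected ${expected} bytes, received ${received}`);
  if (expected >= 0 && received !== expected) {
    console.log("Download truncated - look for timeouts between client and server.");
  }
}

checkDownload().catch((err) => console.error("request failed:", err));
```

If received regularly falls short of expected on flaky links, that points at a timeout dropping the connection mid-transfer rather than a bad file on the server.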

Related

WebLogic server breach - help! How do I find signs of what data, if any, was accessed?

A WebLogic server got hacked, and the compromise has now been removed.
I am looking through the infected VMs now in a sandbox and want to see what data, if any, was accessed on the application servers.
The app servers were getting hammered with SSH requests, which is how we identified the infected VMs as the WebLogic VMs; we did not have HTTP logging on. Is there any way to identify whether any PII was compromised?
I have looked through the secure logs on the WebLogic hosts as well as the PIA logs.
I am not sure how to identify what data, if any, was accessed.
I would like to find out what information or data went out of our network.
What should I be looking for?
Is there anything I can learn from looking at the WebLogic servers running on Red Hat?
I would be inclined to believe that SSH was not the only service being hammered, and that this was a large-scale attempt to keep eyes on the auth logging while an attempt on other services was made.
Do you have a time frame that you are working with?
Have the OS logs been checked for that time frame?
Has .bash_history been checked? Environment variables? /etc/pass* for added users? Aliases? Reverse shells open among the network connections? New users created on services running on that particular host?
Was WebLogic the only service running on this publicly available host?
What other services and ports were available?
Was this due to an older version of WebLogic, or to another service, application, or plugin?
Create yourself an Excel spreadsheet and start a timeline (a sketch for bootstrapping one is below).
Look at all the OS-level logging possible, start making note of anything that looks suspicious, and then follow each breadcrumb to exhaustion.
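
Since the attack surfaced as SSH hammering, the secure log is a reasonable first feed for that timeline. A minimal sketch (Node/TypeScript) that pulls SSH auth events out of the log into CSV rows you can paste into the spreadsheet - the path and the message patterns are assumptions, so adjust them for your hosts:

```typescript
// timeline.ts - extract SSH auth events from /var/log/secure into CSV rows.
// The log path and message patterns are assumptions; adjust for your hosts.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

const LOG = "/var/log/secure";
// Events worth flagging first: failed/accepted logins and invalid users.
const PATTERNS = [/Failed password/, /Accepted \w+/, /Invalid user/];

async function main(): Promise<void> {
  const rl = createInterface({ input: createReadStream(LOG) });
  console.log("timestamp,host,message");
  for await (const line of rl) {
    if (!PATTERNS.some((p) => p.test(line))) continue;
    // Classic syslog layout: "Mon DD HH:MM:SS host process[pid]: message"
    const m = line.match(/^(\w{3}\s+\d+\s[\d:]+)\s(\S+)\s(.*)$/);
    if (m) {
      const [, ts, host, msg] = m;
      console.log(`"${ts}","${host}","${msg.replace(/"/g, '""')}"`);
    }
  }
}

main().catch((err) => console.error(err));
```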

NSURLSession - The request timed out

I'm posting data from my app to my server using NSURLSession when a button is pressed. I can successfully send the data to my server and insert it into a database the first two times, but on every attempt after that, the request times out.
I've tried changing the session configuration (connections per host, timeoutInterval, etc.), switching session configuration types, and changing the way the data is posted.
Has anyone seen this sort of behaviour before and knows how I can fix it?
Or is it a server issue? Initially I thought my server was down: I couldn't connect to it or load certain pages. However, it was only down for me; after rebooting my modem, I could connect to the server again. I didn't have any issues connecting to phpMyAdmin.
If the problem was reproducible after a reboot of the router, then I would look into whether Apple's captive portal test servers were down at the time.
Otherwise, my suspicion is that it is a network problem rather than anything specific to your app.
It is quite possible that the pages you were loading successfully were coming from cache.
Because you said that rebooting your modem fixed the problem, that likely means that your modem stopped responding to either DHCP requests or DNS lookups (either for your domain or for one of the captive portal test domains).
It is also possible that you have a packet loss problem, and that it gets worse the longer your router has been up and running. This could cause some requests to complete and others to fail.
Occasionally, I've seen weird behavior vaguely similar to this when ICMP is getting blocked too aggressively.
I've also seen this when a stateful firewall loses its mind and forgets the state.
This can also be caused by keeping HTTP/HTTPS connections alive past the point at which the server gives up and drops the connection, if your firewall is blocking the packet that tells you that the connection was closed by the remote end.
But without a packet trace, there's no way to be certain. To get one:
If your network code is running on OS X, you can just do this with tcpdump on your network interface.
If you are doing this on iOS, you can do this by connecting your computer via wired Ethernet, enabling network sharing over Wi-Fi, pointing tcpdump at the bridge interface, and pointing your iPhone at that Wi-Fi network.
Either way, that will tell you if there are requests going out that never come back, and more importantly, what type of requests they are, and who is responsible for replying to them. Once you have that information, if the source of the problem isn't obvious, post a link to the packet trace and we'll add more suggestions.
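
In the meantime, one cheap way to separate "app problem" from "network problem" is to replay the same POST pattern from another machine on the same network and see whether the third and later requests still stall. A minimal sketch (Node/TypeScript; the endpoint URL is a placeholder for your server):

```typescript
// probe.ts - fire sequential POSTs and log which attempt starts timing out.
// The endpoint is a placeholder for your own server's URL.
const endpoint = "https://example.com/api/submit";

async function probe(attempt: number): Promise<void> {
  const started = Date.now();
  try {
    const res = await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ attempt }),
      signal: AbortSignal.timeout(10_000), // mirror a 10-second client timeout
    });
    console.log(`#${attempt}: HTTP ${res.status} in ${Date.now() - started} ms`);
  } catch (err) {
    console.log(`#${attempt}: failed after ${Date.now() - started} ms (${err})`);
  }
}

async function main(): Promise<void> {
  for (let i = 1; i <= 10; i++) {
    await probe(i);
  }
}

main();
```

If this probe shows the same two-successes-then-timeouts pattern, the packet trace above is the next step; if it doesn't, the problem is more likely in the app's session handling.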

Cocoa server with user-friendly automatic port forwarding or external IP lookup

I am writing a Mac app that will be a server serving files to each user's mobile device.
The issue with this, of course, is getting the actual IP/port of the server host, as it will usually be inside a home network. If the IP/port changes, that's no big deal, as I plan to send that info to a middleman server first and have my mobile app get the info from there.
I have tried UPnP with https://code.google.com/p/tcmportmapper/, but even though I know my router supports UPnP, the library does not work as intended.
I even tried running a TURN server on my Amazon EC2 instance, but I had a very hard time figuring out what messages to send it to get the info I need.
Since last night I've been experimenting with Google's libjingle, but I'm having a hard time even getting the provided iOS example to run.
Any advice on getting this seemingly difficult task accomplished?
The port of your app will not change. The IP changes can be handled by posting your server's IP to a web service every hour, or at whatever interval you want.
Your server should request a URL like http://your-web-service.com/serverip.php?ip=your-updated-ip and have serverip.php handle the rest (put it into a MySQL DB or something).
When your client starts, it should ask your site for the IP and then connect to your server with that.
This is a pretty common way of handling this kind of thing; a sketch of the updater side is below.
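
For the updater, something like the following is enough. A minimal sketch (Node/TypeScript): the serverip.php URL comes from the answer above, while the use of api.ipify.org to discover the public (post-NAT) IP is an assumption - substitute any lookup service you trust:

```typescript
// ip-updater.ts - periodically report this host's public IP to the middleman.
const REPORT_URL = "http://your-web-service.com/serverip.php";
const HOUR_MS = 60 * 60 * 1000;

async function reportIp(): Promise<void> {
  // Ask an external service for our public (post-NAT) address; api.ipify.org
  // returns it as plain text. This lookup choice is an assumption.
  const ip = (await (await fetch("https://api.ipify.org")).text()).trim();
  const res = await fetch(`${REPORT_URL}?ip=${encodeURIComponent(ip)}`);
  console.log(`reported ${ip}: HTTP ${res.status}`);
}

reportIp().catch(console.error);                              // once at startup
setInterval(() => reportIp().catch(console.error), HOUR_MS);  // then hourly
```

Alternatively, serverip.php can just record the caller's source address, in which case the updater doesn't need the external lookup at all.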

Apache with node.js

I am having some problems writing Jade templates, so I am wondering if I could host my web pages on Apache instead. Once the page opens, it will open a socket to port 3000, and the node server will start pushing notifications.
Will this produce any warnings in the browser? Are there any other concerns with this approach?
So Apache will run on 80, and node on 3000.
I have done this once for testing purposes; it works well, and no warnings were given.
One concern is that splitting the website's backend across two servers adds extra complexity to the project. If you load any parts from node initially, it will lengthen load times, because you can only establish a connection to node once the main page from Apache has finished loading. There is also the possibility of duplicated code doing the same thing on different servers.
In conclusion, if there is no special need for other servers, I would try my best to keep everything on node with Jade; even if that requires some learning, it will definitely pay off later.
If the only reason you're using Apache is to serve the initial page, I would suggest nginx instead, because it is more lightweight - though I haven't tested this myself.
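
To illustrate the split the question describes, here is a minimal push channel on port 3000. This sketch uses Server-Sent Events over plain node:http just to stay dependency-free - the real setup may well use socket.io or raw WebSockets instead, and the CORS header is needed because the Apache-served page lives on a different origin:

```typescript
// notify-server.ts - minimal push channel on port 3000 via Server-Sent Events.
// SSE is one option; socket.io or WebSockets would fill the same role.
import { createServer, ServerResponse } from "node:http";

const clients = new Set<ServerResponse>();

createServer((req, res) => {
  if (req.url === "/events") {
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      // The page is served by Apache on port 80, a different origin.
      "Access-Control-Allow-Origin": "*",
    });
    clients.add(res);
    req.on("close", () => clients.delete(res));
  } else {
    res.writeHead(404).end();
  }
}).listen(3000);

// Push a demo notification to every connected page once per second.
setInterval(() => {
  const msg = `data: ${JSON.stringify({ time: new Date().toISOString() })}\n\n`;
  for (const res of clients) res.write(msg);
}, 1000);
```

On the Apache-served page, `new EventSource("http://your-host:3000/events")` subscribes to the stream; no browser warning appears for the cross-port connection as long as both sides use plain HTTP.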

Avoiding bandwidth throttling by the ISP on an Apache HTTP web server: can encryption methods do it?

I have a working Apache HTTP web server on my computer so that a friend (and only him, no one else), who has no computer at home, can get my files directly, as from a website, from an Internet café.
I did some speed tests on my computer at home and on my computer at my workplace and found that, in both cases, I get almost full bandwidth (~7 MB/s) when using the protocol encryption methods in some P2P software (BitTorrent, eMule). This leads me to believe it happens because the data is hidden from the ISPs.
Yet at the very same moment, downloading from my web server at home to my workplace is sluggish as hell (~90 KB/s)...
Is there a protocol encryption method, like the one in P2P, that would prevent my Apache web server from being slowed down by the ISP? Or at least some alternative solution to achieve better speed in this situation? I tried HTTPS, but it didn't seem to help.
Download != upload. Your upload at home will most likely be 1 Mbit/s (do you have an ADSL connection?): 1 Mbit/s is 125 KB/s in theory (1,000,000 bits / 8), and after protocol overhead that comes down to roughly the ~90 KB/s you are seeing. The ~7 MB/s P2P tests were measuring the download direction, so they never hit your home upstream limit.
But this doesn't belong on SO. :-)