Browser cannot get response from Fuseki if a long-running query is executed - apache

Recently I configured a reverse proxy (Apache2 on Ubuntu 16) in front of Fuseki2, which runs as a web application under Tomcat7. However, one problem still confuses me: I only get results for simple queries that don't need a long processing time on the backend (less than 10 minutes in Chrome, to be exact). Otherwise I get the message "unable to get a response from endpoint". The cutoff is strictly 10 minutes. I have increased the timeout for Fuseki2 as well as for the reverse proxy, but that didn't help.
I have traced error.log in Apache2 and catalina.out in Tomcat and found no errors. Besides, Tomcat keeps running when the "no response" error appears in Chrome.
Does anyone have any idea about this? Any help or hint would be appreciated.
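For reference, these are the places where such a cutoff is usually configured; this is only a sketch, with the 3600-second / 3600000-ms values as assumptions rather than a confirmed fix:
# Apache vhost (mod_proxy): raise both the general and the proxy timeouts
Timeout 3600
ProxyTimeout 3600
ProxyPass /fuseki http://localhost:8080/fuseki timeout=3600
ProxyPassReverse /fuseki http://localhost:8080/fuseki
# Fuseki side (fragment of an assembler-based service description in config.ttl):
# raise the query timeout, given in milliseconds
ja:context [ ja:cxtName "arq:queryTimeout" ; ja:cxtValue "3600000" ] ;
If both layers allow more than 10 minutes and the cutoff persists, the browser itself or another intermediary is the next suspect.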

Related

Enabling TLS in Elasticsearch

I'm having problems enabling TLS in Elasticsearch 7.1.1 running on Windows 7.
I have a single node with certificates created with
elasticsearch-certutil ca
elasticsearch-certutil cert --ca elastic-stack-ca.p12
The elasticsearch.yml file has the following settings
node.name: node1
discovery.type: single-node
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: 'C:\elasticsearch-7.1.1\config\certs\elastic-certificates.p12'
xpack.security.transport.ssl.truststore.path: 'C:\elasticsearch-7.1.1\config\certs\elastic-certificates.p12'
This works fine, but when I add the settings below
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: 'C:\elasticsearch-7.1.1\config\certs\elastic-certificates.p12'
xpack.security.http.ssl.truststore.path: 'C:\elasticsearch-7.1.1\config\certs\elastic-certificates.p12'
and start up Elasticsearch, I see the following error:
[2019-06-25T07:34:19,659][WARN ][o.e.h.AbstractHttpServerTransport]
[node1] caught exception while handling client http traffic, closing
connection Netty4HttpChannel{localAddress=0.0.0.0/0.0.0.0:9200,
remoteAddress=/127.0.0.1:6757}
io.netty.handler.codec.DecoderException:
io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:
This is repeated every 10-15 seconds.
HTTPS is enabled, though, and I can access the node using https://localhost:9200.
I don't know why I receive the above error, as nothing else is running and accessing Elasticsearch.
Any help would be much appreciated.
Thanks heaps
It was pointed out to me on the Elastic forum that the above is a warning, not an error. I still couldn't understand what was causing it, as I wasn't running any service that should be responsible, but I eventually found that something called Heartbeat was running. It had obviously been set up in an earlier version/previous installation of Kibana and was still running, making calls over plain HTTP and thus triggering the warning above (it is used for creating dummy data to demo Kibana).
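For what it's worth, any client that still speaks plain HTTP to the now-TLS-enabled port reproduces that warning, which is an easy way to confirm the diagnosis; a quick sketch, assuming the default port and the built-in elastic user:
# Plain-HTTP request against the HTTPS port: the node logs the
# "not an SSL/TLS record" warning and the client gets no usable reply
curl http://localhost:9200
# The same request over HTTPS works (-k accepts the self-signed certificate)
curl -k -u elastic https://localhost:9200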
I came across this problem too. If you have previously installed Elasticsearch, there is a high chance you have some residual indices with "red" status, which makes enabling TLS unsuccessful.
Try this command to verify your indices and their statuses:
curl -XGET https://localhost:9200/_cat/indices
Then delete the ones with red status.
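A sketch of what that looks like; the -k and -u flags are assumptions for a self-signed certificate with basic auth, and my_old_index is a placeholder name:
# List indices with their health and status
curl -k -u elastic -XGET "https://localhost:9200/_cat/indices?v"
# Delete a residual index that shows up as red
curl -k -u elastic -XDELETE "https://localhost:9200/my_old_index"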

URL works fine from browser but from Google Plus we get an internal server error

The website URL works fine. The link from Google Maps on the phone works fine. For some reason, when you google it in a browser (mobile or desktop), you get an Internal Server Error. Google said there was nothing to be done on their end.
Has anyone encountered this, or does anyone have an idea of what's causing it? The error received is below.
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator and inform them of the time the error occurred, and the actions you performed just before this error.
More information about this error may be available in the server error log.
Apache Server at www.thetechbuyer.com Port 80
As the name suggests, an Internal Server Error is internal to the web app or web service on your host. It usually means that your program, whatever it is doing, has crashed. This could be because of misconfiguration, a bug in your code, or something else; without knowing more (possibly a lot more) about your server environment, it is impossible to say which.
Check your server logs - they should have more information.
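On a stock Apache install the logs usually live here; the paths are assumptions and vary by distribution and hosting setup:
# Debian/Ubuntu layout (RHEL/CentOS uses /var/log/httpd/ instead)
tail -f /var/log/apache2/error.log
tail -f /var/log/apache2/access.log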
(It also isn't clear what this has to do with Google+.)

LDAP commands are still working without mod_ldap enabled

I've been having a problem with my (development) site hanging when I try to log in using LDAP credentials. Using Xdebug, I was able to pinpoint the hang to a specific line of code: a call to ldap_bind(...). After days of trying to understand why it hangs, one of my debugging techniques was to disable mod_ldap and then just try to get any error to show up in the Apache log (since this started, the Apache error log hasn't recorded any errors while this HTTP request is performed; it 'hangs' too).
What I did
I disabled the module (sudo a2dismod mod_ldap), restarted the server (sudo service apache2 restart), and confirmed that the module isn't enabled (apache2ctl -M does not show mod_ldap)
The Problem
It still hangs when it reaches ldap_bind(). However, it shouldn't even reach that point, because without mod_ldap my code shouldn't be able to call ldap_connect() successfully (right?), yet that call returns (resource) resource id='5' type='ldap link' (meaning it succeeded). I'm expecting a no-method or undefined-function error.
Why can PHP call a module that isn't enabled, and how do I stop that behavior?
Version
Ubuntu 14.04
Apache 2.4.7
PHP 5.5.9-1ubuntu4.5
Well, it turns out I had a novice understanding of what was happening. It never occurred to me that ldap_connect(), as a PHP call, has nothing to do with mod_ldap or any Apache module. Thus, disabling/enabling Apache mods wasn't doing anything because... it had nothing to do with Apache mods.
So... I was able to disable the PHP ldap extension with
sudo php5dismod ldap
and was able to check that it was disabled with
php -m
And sure enough, after restarting the server, running my script produced an undefined-function error for ldap_connect() (just as I was expecting/hoping). Now I just need to figure out why ldap_bind() hangs...
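On the remaining ldap_bind() question: ldap_connect() in PHP only prepares the connection parameters and does not touch the network, so the first real network I/O happens inside ldap_bind(). A sketch of making that step fail fast instead of hanging; the host, DN and timeout values are assumptions:
<?php
// Hypothetical server and credentials; the timeout options are the point here.
$ldap = ldap_connect('ldap://ldap.example.com');      // no network traffic yet

ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($ldap, LDAP_OPT_NETWORK_TIMEOUT, 5);  // give up connecting after 5 s
ldap_set_option($ldap, LDAP_OPT_TIMELIMIT, 5);        // cap server-side operation time

if (@ldap_bind($ldap, 'cn=binduser,dc=example,dc=com', 'secret') === false) {
    // An unreachable or firewalled LDAP server now surfaces as an error
    // within a few seconds instead of hanging the request indefinitely.
    echo 'Bind failed: ' . ldap_error($ldap) . PHP_EOL;
}
If the bind then fails quickly with a "Can't contact LDAP server" style error, the hang is a network/firewall issue between the web server and the LDAP host rather than a PHP problem.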

ORA-29273: HTTP request failed intermittent error using the utl_http package

I'm using the utl_http package to make HTTP GET requests to an IIS site on the same server (local) as Oracle. Sometimes it works and I get the response, but more often than not it hangs for about 15 seconds and then I get this error:
ORA-29273: HTTP request failed ORA-06512: at "SYS.UTL_HTTP", line 1722 ORA-29263: HTTP protocol error
As a test, I've got a small static text file in the IIS site, so this is how I'm testing it:
select utl_http.request('http://domain.com/test.txt') from dual
I get the same problem if I run it in Oracle APEX instead of directly on the DB.
The other thing I've tried is creating a package of my own that makes the HTTP request using the long utl_http.begin_request() method instead of the utl_http.request() shortcut. This gives exactly the same problem (works sometimes but mostly errors; same error).
The pattern I'm seeing is that if I wait a while and then try, it works for the first 2-10 times and then begins erroring. When it works, I get the response instantly; when it errors, there is always a delay before the error.
If I request the text file URL (or any other resource in the site) using a remote web browser then I get the correct response every time.
I have tried setting a timeout as below, but it doesn't have any effect. For example, instead of timing out after 3 seconds, it continues for 10 or 15 seconds before the error is shown.
UTL_HTTP.set_transfer_timeout(3);
I think I can rule out ACL because it works sometimes.
Does anyone know what might cause this behaviour?
Possible reasons
-> You may have a problem with your TNS listener.
From a command prompt window, try running TNSPING service_name. Try running it quickly several times and check whether some of the attempts fail.
I once had a similar problem. Try reconfiguring your TNS listener.
There should also be an option that lets you specify an IP address in the TNS listener definition; this sometimes solves these kinds of problems too.
-> IIS problem.
Read about the SET_PERSISTENT_CONN_SUPPORT procedure:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_http.htm#i1027673
Using: utl_http.set_persistent_conn_support(true, 30);
Could you be exceeding the limit of concurrent HTTP connections? I vaguely remember running into a similar problem when I forgot to close the HTTP connection.
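If that is the cause, the thing to verify is that every request/response pair is closed, including on the error path; a minimal sketch using the long form, with the URL and timeout taken from the question and everything else as assumptions:
DECLARE
  l_req  UTL_HTTP.req;
  l_resp UTL_HTTP.resp;
  l_line VARCHAR2(32767);
BEGIN
  UTL_HTTP.set_transfer_timeout(3);
  l_req  := UTL_HTTP.begin_request('http://domain.com/test.txt');
  l_resp := UTL_HTTP.get_response(l_req);
  BEGIN
    LOOP
      UTL_HTTP.read_line(l_resp, l_line, TRUE);
      DBMS_OUTPUT.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.end_of_body THEN
      NULL;  -- finished reading the body
  END;
  UTL_HTTP.end_response(l_resp);  -- always release the connection
EXCEPTION
  WHEN OTHERS THEN
    -- Close whatever is still open so a failed call does not leak a connection
    BEGIN
      UTL_HTTP.end_response(l_resp);
    EXCEPTION WHEN OTHERS THEN NULL; END;
    RAISE;
END;
/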

Apache upload fails when file size is over 100k

Below is some information about my problem.
Our Apache 2.2 is on Windows Server 2008.
Basically, the problem is that users fail to upload files bigger than 100k to our server.
The error in the Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
There were a few times (not always) when I could upload larger files (100k-800k; it failed for 20M) in Chrome. In FF4 it always fails when uploading a file over 100k. IE8 behaves similarly to FF4.
It seems that Apache fails to get the request from the client, so I reset TimeOut in the Apache settings to the default value (300), which did not help at all.
I do not have the LimitRequestBody option set up and I am not using PHP. Has anyone seen a similar error before? Now I am not sure what to try next. Any advice would be appreciated!
Edit:
I just tried using Remote Desktop to upload files on the server, and it worked fine. My first thought was the firewall, which however is off all the time; an HTTP proxy is applied, though.
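For reference, the server-side knobs mentioned above sit in httpd.conf; a sketch showing the stock Apache 2.2 defaults, meant as a checklist rather than a fix (the HTTP proxy in the path has its own limits as well):
# Maximum request body size in bytes; 0 (the default) means unlimited
LimitRequestBody 0
# Seconds Apache waits for network reads/writes from the client (default 300)
TimeOut 300
# The "connection attempt failed ... failed to respond" text in the log looks like
# the Windows socket timeout error, which would mean the upload is being cut off
# at the TCP level before Apache ever receives the full request body.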