AKS API load testing error: "Premature end of Content-Length delimited message body"

While load testing, after some successful responses from the API, JMeter records errors:
'Premature end of Content-Length delimited message body'.
From the logs inside the code, the responses seem to complete normally.
The app is deployed on AKS behind nginx/1.15.10 ingress controllers. It consists of 4 separate APIs (one master calling the other 3). The APIs are built with Flask and Connexion and run in a WSGIContainer on a Tornado HTTPServer.
Another confusing factor is that the app is deployed twice on the same AKS cluster; one deployment does not return errors and the other does.
What could be causing the error?
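For reference, a minimal sketch of how such an API is served (the spec file name and port here are placeholders, not the real app's values):

import connexion
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer

# Connexion wraps a Flask app; 'openapi.yaml' is a placeholder spec name.
app = connexion.FlaskApp(__name__, specification_dir=".")
app.add_api("openapi.yaml")

# WSGIContainer runs the synchronous Flask app on Tornado's single-threaded
# event loop, so one slow handler blocks every other request on this process.
http_server = HTTPServer(WSGIContainer(app.app))
http_server.listen(8080)
IOLoop.current().start()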

I would suggest narrowing your testing scope:
1) Target the application pods directly (bypassing the Kubernetes Service and the ingress controller), making sure you hit the app on each of the two nodes. Do you still see the issue?
2) Target the Service directly (bypassing the ingress controller), again hitting the app on each of the two nodes. Do you still see the issue?
3) Target the app through its ingress, again hitting the app on each of the two nodes. Do you still see the issue?
Based on those results, we should be able to pinpoint the source of your issue more precisely; a probing sketch follows below.
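To make those three steps concrete, here is a minimal probing sketch in Python with the requests library; every address below is a placeholder for your own pod IPs, Service DNS name, and ingress host:

import requests

# Placeholders: substitute your pod IP:port pairs, Service name, and ingress host.
targets = {
    "pod on node 1 (direct)": "http://10.244.1.12:8080/health",
    "pod on node 2 (direct)": "http://10.244.2.7:8080/health",
    "service (bypasses ingress)": "http://my-api-svc.default.svc.cluster.local/health",
    "ingress": "http://api.example.com/health",
}

for name, url in targets.items():
    try:
        r = requests.get(url, timeout=10)
        # If len(r.content) is smaller than the Content-Length header,
        # you have reproduced the JMeter error outside JMeter.
        print(name, r.status_code, len(r.content), r.headers.get("Content-Length"))
    except requests.RequestException as exc:
        print(name, "failed:", exc)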

Related

Intermittent problems starting Azure App Services: "500.37 ANCM Failed to Start Within Startup Time Limit"

Our app services are experiencing the problem that they can't be restarted by the hosting environment (ANCM).
The user gets the following screen in that case:
HTTP Error 500.37
Our production subscription consists of up to 8 different app services, and the problem can randomly hit one of them or several of them.
The problem can occur several times a week, or just once a month.
The bootstrapping procedure of our app services is not time-consuming.
The last occurrence of the problem left these entries in the event log:
Failed to gracefully shutdown application 'MACHINE/WEBROOT/APPHOST/XXXXXXXXX'.
followed by:
Application '/LM/W3SVC/815681839/ROOT' with physical root 'D:\home\site\wwwroot' failed to load coreclr. Exception message: Managed server didn't initialize after 120000 ms
In most cases the problem can be resolved by manually stopping and starting the app service. In some cases we had to do that twice.
We are not able to reproduce that behavior locally.
The App Service Plan is S2 and we actually use just one instance.
The documentation of the Http error 500.37 recommends:
"You may need to stagger the startup process of multiple apps."
But there is no hint of how to do that.
How can we ensure that our app services are restarted without errors?
You can try the following approaches:
Approach 1: If possible, move one app into a new App Service under a separate App Service plan, then check whether it starts as expected.
Please note that creating and using a separate App Service plan will be charged.
Approach 2: Increase the startupTimeLimit attribute of the aspNetCore element.
For more information about the startupTimeLimit attribute, please check: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module?view=aspnetcore-3.1#attributes-of-the-aspnetcore-element
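For illustration, a minimal web.config fragment with the attribute raised above its default of 120 seconds (which is exactly the "120000 ms" in the event-log entry above); processPath and arguments are placeholders:

<configuration>
  <system.webServer>
    <!-- startupTimeLimit is in seconds; the ANCM default is 120 -->
    <aspNetCore processPath="dotnet"
                arguments=".\MyApp.dll"
                startupTimeLimit="240"
                stdoutLogEnabled="false" />
  </system.webServer>
</configuration>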

WSO2 APIM 2.0 Gateway-Worker-Node: "the requested resource XXX is not available"

I have a gateway manager (GWM) with 2 worker nodes. When I deploy an API, it is pushed to the GWM and is available there --> the API call works fine.
I decided to synchronize the APIs from the GWM to the worker nodes via rsync. The filesystems under ~wso2/repository/deployment/server on the worker nodes are synced and match the GWM node (a quick way to verify that is sketched below).
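To double-check that the rsync really covers the API artifacts, one can compare the synapse configuration directories on both nodes; a minimal Python sketch, with the WSO2 home paths assumed:

import os

# Deployed API artifacts live here under each node's home (default APIM 2.0 layout).
API_DIR = "repository/deployment/server/synapse-configs/default/api"

def api_files(home):
    path = os.path.join(home, API_DIR)
    return set(os.listdir(path)) if os.path.isdir(path) else set()

manager = api_files("/opt/wso2am-gwm")      # placeholder GWM home
worker = api_files("/opt/wso2am-worker")    # placeholder worker home

# Artifacts present on the manager but absent on a worker point to an
# incomplete rsync; identical listings point elsewhere (e.g. caching).
print("missing on worker:", sorted(manager - worker))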
But when I call the API on a worker node, I get this message:
<am:fault xmlns:am="http://wso2.org/apimanager"><am:code>404</am:code>
<am:type>Status report</am:type><am:message>Not Found</am:message>
<am:description>The requested resource (/XXX/1/foo) is not available.
</am:description>
</am:fault>
I also restarted the workers, but with the same result.
Did I miss something, or is there a trigger to load the APIs on the workers into some cache, or something like that?
I faced the same issue when the contents of the mediation files were changed.
**Solution which worked for me:**
Demote your API to the CREATED state
Ensure the gateway is checked
Redeploy it

Unable to delete osb_server1 in OSB 10.3.6.0

There are scripts that build the admin server and then create clusters, managed servers, machines, etc. When this domain is built, an additional phantom server, osb_server1 with port 8011, gets created that isn't attached to any cluster or any machine.
It is created when wlsb.jar is referenced by one of the scripts.
Once the admin server is up and running and the other managed servers exist as well, I tried to remove osb_server1, and this error creeps up:
weblogic.management.configuration.AppDeploymentMBeanImpl.isCacheInAppDirectorySet()
Errors must be corrected before proceeding
There are about 120 default deployments on OSB that are targeted to osb_server1. I tried to retarget them to another server, but that also throws an error.
Any ideas?
That's due to a weird behaviour/bug of the standard OSB template. There is a discussion here: http://theheat.dk/blog/?p=1255.
I didn't follow the steps given by Oracle (as in the URL). What I did instead:
I keep the default osb_server1 and make it part of the cluster during domain creation (i.e., it is the first server). Once the domain is created, I rename osb_server1 to the desired value. That way the singleton services are still deployed to the first server and the others to the cluster. Using WLST:
readDomain(domain_name)                 # offline edit; domain_name is the path to the domain directory
cd('/Servers/osb_server1')
set('ListenPort', osb1_listen_port)     # change the default 8011 to the desired port
set('Name', osb1_name)                  # rename the default server
cd('/Servers/' + osb1_name + '/ServerDiagnosticConfig/osb_server1')
set('Name', osb1_name)                  # the nested diagnostic config keeps the old name and must be renamed too
updateDomain()                          # persist the changes
closeDomain()

nginx + multiple instance of fastcgi-mono-server = WebResource.axd error

I'm running nginx, which load-balances over several upstream instances of fastcgi-mono-server4.
Apparently, when a WebResource.axd link is handled by a different fastcgi-mono-server than the one that originally produced the link, it returns a 404 error.
I have set a persistent machineKey, as recommended for web farms (a sketch of such an entry follows below), but the problem remains.
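For reference, a sketch of the kind of entry meant here, with dummy key values; the same explicit machineKey has to be present in every instance's web.config so that a WebResource.axd URL minted by one backend can be decrypted by any other:

<system.web>
  <!-- Keys below are placeholders; generate real ones and share them
       verbatim across all fastcgi-mono-server instances. -->
  <machineKey validationKey="0123456789ABCDEF...(dummy)"
              decryptionKey="FEDCBA9876543210...(dummy)"
              validation="SHA1"
              decryption="AES" />
</system.web>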
Any idea what could be wrong?
If it makes any difference: the application is written with F#/WebSharper and we disabled the session state and the forms authentication.
Thanks

GlassFish load balancer principle of operation

I have configured a cluster with two instances on GlassFish 3.1.1, with iPlanet Web Server as a load balancer (on the same machine). The test application provided with GlassFish works fine (that application has session replication enabled).
But when I try to make my own application work, the following happens: it responds when I send requests to the ports of the particular instances (28080 and 28081), but when I send a request through the load balancer (port 81) I get a 404 error. My application does not have session replication enabled yet, but it can at least accept a connection and create two separate sessions, one per instance. I would like to get the same behaviour through the load balancer.
So I would like to determine:
Is session replication strictly required for the load balancer to work?
Does anyone know any other reason for this error?
Message from iPlanet log:
[23/Aug/2012:05:44:16] failure ( 4120) myHost: for host 127.0.0.1 trying to GET /myApp/login.jsp, service-j2ee reports: PWC6117: File "c:/webserver7/https-myHost/docs/myApp/login.jsp" not found
Additional observations (81 is the http-listener port on iPlanet):
When I send GET http://localhost:81/testApp, the load balancer passes it to GlassFish and the correct page is returned. But when I try the same with my own application, GET http://localhost:81/myApp, iPlanet looks for the resource in its own docroot (the docs directory, as in the log above).
A fragment of myHost-obj.conf:
<Object name="default">
AuthTrans fn="match-browser" browser="*MSIE*" ssl-unclean-shutdown="true"
NameTrans fn="name-trans-passthrough" name="lbplugin" config-file="C:/WebServer7/https-myHost/config/loadbalancer.xml"
NameTrans fn="assign-name" name="perf" from="/.perf"
NameTrans fn="ntrans-j2ee" name="j2ee"
NameTrans fn="pfx2dir" from="/mc-icons" dir="C:/WebServer7/lib/icons" name="es-internal"
PathCheck fn="uri-clean"
PathCheck fn="check-acl" acl="default"
PathCheck fn="find-pathinfo"
PathCheck fn="find-index-j2ee"
PathCheck fn="find-index" index-names="index.html,home.html,index.jsp"
ObjectType fn="type-j2ee"
ObjectType fn="type-by-extension"
ObjectType fn="force-type" type="text/plain"
Service method="(GET|HEAD)" type="magnus-internal/directory" fn="index-common"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
Service method="TRACE" fn="service-trace"
Error fn="error-j2ee"
AddLog fn="flex-log"
</Object>
First, if you are running the Load Balancer plugin, then you may have a support contract (a GlassFish license is required before you put the plugin into production). If so, calling support is a good option.
To answer your first question, session replication is not required for the Load Balancer to work.
As a shameless plug, I have a 5-part YouTube series on setting this up. You can skip the videos on downloading and installing and go straight to setup/configuration/testing. Based on what you describe, I suspect the issue isn't the plugin itself but the loadbalancer.xml configuration. Look at loadbalancer.xml and see whether myApp is configured; a sketch of such an entry follows.
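For orientation, a hedged sketch of what a matching loadbalancer.xml entry could look like; the cluster and instance names are placeholders, and the listener ports are taken from the question:

<loadbalancer>
  <cluster name="cluster1">
    <instance name="instance1" enabled="true" disable-timeout-in-minutes="60"
              listeners="http://localhost:28080"/>
    <instance name="instance2" enabled="true" disable-timeout-in-minutes="60"
              listeners="http://localhost:28081"/>
    <!-- Without a web-module entry for myApp, the plugin will not forward
         /myApp requests and iPlanet falls back to its own docroot. -->
    <web-module context-root="myApp" enabled="true"
                disable-timeout-in-minutes="60" error-url=""/>
    <health-checker url="/" interval-in-seconds="10" timeout-in-seconds="30"/>
  </cluster>
</loadbalancer>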
Hope this helps.