Restlet Java SE 2.1.2 JVM memory usage - restlet

We use Restlet to make many concurrent HTTP requests to a web server. A couple of days ago we upgraded the Restlet libraries from 2.0.14 to 2.1.2 and we noticed much higher heap memory consumption compared to the old version.
Here are some data:
- 2.0.14 -> 100 threads ~115MB
- 2.1.2 -> 30 threads ~90MB
Has anyone else noticed this behaviour?
Here is an example of a request:
...
clientResource = new ClientResource(URI);
representation = clientResource.get(MediaType.APPLICATION_JSON);
...
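For context, here is a more complete, self-contained version of that pattern (class, variable names and the URI are only illustrative, not our real code):

import org.restlet.data.MediaType;
import org.restlet.representation.Representation;
import org.restlet.resource.ClientResource;

public class RestletGetExample {

    public static void main(String[] args) throws Exception {
        // Illustrative URI; in our application many threads do this concurrently.
        String uri = "http://example.com/api/resource";

        ClientResource clientResource = new ClientResource(uri);
        Representation representation = clientResource.get(MediaType.APPLICATION_JSON);
        try {
            // Read the JSON payload returned by the server.
            String json = representation.getText();
            System.out.println(json);
        } finally {
            // Release the response entity so its buffers can be reclaimed.
            representation.release();
            clientResource.release();
        }
    }
}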
Thx

Related

How to switch off TLSv1.3 in gatling?

I recently migrated from Gatling 3.3.1 to Gatling 3.4.0.
After the migration everything works fine on my local machine, but it crashes in k8s because of the following error:
Couldn't execute warm up request https://gatling.io
java.lang.IllegalArgumentException: TLSv1.3
at sun.security.ssl.ProtocolVersion.valueOf(ProtocolVersion.java:187)
at sun.security.ssl.ProtocolList.convert(ProtocolList.java:84)
at sun.security.ssl.ProtocolList.<init>(ProtocolList.java:52)
at sun.security.ssl.SSLEngineImpl.setEnabledProtocols(SSLEngineImpl.java:2081)
...
I migrated back to the working version.
I assumed from here that TLSv1.3 is switched on by default.
I searched for the appropriate setting in gatling-defaults.conf, but did not find one.
I use Java 1.8 both locally and on the remote k8s cluster.
Please help me to resolve this issue!
Thanks in advance!
In order to support TLSv1.3, Gatling needs:
either to be able to load netty-tcnative (basically BoringSSL)
or to run on Java 11+, where TLSv1.3 is available
We can see in the logs that the former fails. We can also see that netty_transport_native_epoll_x86_64 can't be loaded while netty_transport_native_epoll_x86 can, which means you're running on 32-bit Linux. netty-tcnative/BoringSSL is only available on 64-bit.
The latter fails because, as you stated, you are running on Java 8.
We can probably improve things on our side, but you should switch to a 64-bit host.
Otherwise, you can enforce the list of supported protocols in gatling.conf, see https://github.com/gatling/gatling/blob/master/gatling-core/src/main/resources/gatling-defaults.conf#L57
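For example, something along the following lines in gatling.conf restricts the protocols offered during the handshake (the setting is the one at the linked line in gatling-defaults.conf; double-check the exact key path against your Gatling version):

gatling {
  ssl {
    # Empty means "use Netty's defaults"; listing the protocols explicitly
    # keeps TLSv1.3 out of the handshake on a JVM that does not support it.
    enabledProtocols = ["TLSv1.2", "TLSv1.1", "TLSv1"]
  }
}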

Symfony Messenger: retry delay not working with Redis transport

I have a Symfony 4 application using the Symfony Messenger component (version 4.3.2) to dispatch messages.
For asynchronous message handling some Redis transports are configured and they work fine. But then I decided that one of them should retry a few times when message handling fails. I configured a retry strategy and the transport actually started retrying on failure, but it seems to ignore the delay configuration (the delay, multiplier and max_delay keys): all retry attempts are made without any delay, within one second or a similarly short timespan, which is really undesirable in this use case.
My Messenger configuration (config/packages/messenger.yaml) looks like this
framework:
    messenger:
        default_bus: messenger.bus.default
        transports:
            transport_without_retry:
                dsn: '%env(REDIS_DSN)%/without_retry'
                retry_strategy:
                    max_retries: 0
            transport_with_retry:
                dsn: '%env(REDIS_DSN)%/with_retry'
                retry_strategy:
                    max_retries: 5
                    delay: 10000 # 10 seconds
                    multiplier: 3
                    max_delay: 3600000
        routing:
            'App\Message\RetryWorthMessage': transport_with_retry
I tried replacing Redis with Doctrine (as the implementation of the retrying transport) and voilà, the delays started to work as expected. I therefore suspect that the Redis transport implementation doesn't support delayed retry. But I read the docs carefully, searched related GitHub issues, and still didn't find a definite answer.
So my question is: does Redis transport support delayed retry? If it does, how do I make it work?
It turned out that the Redis transport does support delayed retry, but only since Messenger version 4.4.
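So the practical fix, assuming the rest of your dependencies allow it, is to upgrade the Messenger component (usually together with the rest of Symfony) to 4.4 or later, for example:

composer require symfony/messenger "^4.4"

After the upgrade, the same retry_strategy configuration should start honouring the delay, multiplier and max_delay keys.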

Get Jaeger agent error when distributed trace spans Node.js and Java services

In our application a Node.js front end talks to a Java Spring backend. Everything is containerized and running in Kubernetes. Some time ago we added support for Jaeger distributed tracing across the front end and back end services. Jaeger had been running fine until recently.
Our Elasticsearch cluster was out of date, so we upgraded. That mandated an upgrade of Jaeger as well; we ended up with the following versions:
Jaeger Helm Chart: 0.13.3 from https://github.com/helm/charts/tree/master/incubator/jaeger
Jaeger Client for Node: 3.17.1
Jaeger Client for Java:
opentracing-spring-jaeger-cloud-starter 2.0.3
opentracing-spring-jaeger-web-starter 2.0.3
Both of the opentracing libraries depend on version 0.35.1 of the Jaeger Java client.
Since upgrading, traces that are created on one side or the other seem to be fine. But traces that span the boundary (i.e. start on the Node.js front end and complete on the Java backend) generate errors in the jaeger-agent pod like this:
{"level":"error","ts":1574224941.7531824,"caller":"processors/thrift_processor.go:119",
"msg":"Processor failed","error":"*jaeger.Batch error reading struct: *jaeger.Span error
reading struct: *jaeger.Log error reading struct: *jaeger.Tag error reading struct:
error reading field 3: Invalid data length","stacktrace":"github.com/jaegertracing/jaeger/cmd/agent/app/processors.
(*ThriftProcessor).processBuffer\n\t/home/travis/gopath/src/github.com/jaegertracing/jaeger/cmd/
agent/app/processors/thrift_processor.go:119\ngithub.com/jaegertracing/jaeger/cmd/agent/app/proc
essors.NewThriftProcessor.func2\n\t/home/travis/gopath/src/github.com/jaegertracing/jaeger/cmd/a
gent/app/processors/thrift_processor.go:83"}
For these traces, the Jaeger UI shows us the spans that were created by the front end before invoking the backend API, but the child backend spans do not show up as you would expect.
What might cause this sort of processor error?
It looks like you have mismatched versions of OpenTracing. The opentracing-spring-jaeger starters in version 2.x upgraded the OpenTracing version they depend on, so you might have introduced this breaking change when you upgraded the dependency.
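One way to verify this is to check which OpenTracing/Jaeger versions actually end up on the Java service's classpath, for example with Maven:

mvn dependency:tree | grep -i -E "opentracing|jaeger"

and make sure the versions reported there are consistent with what your Node.js client and jaeger-agent expect.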

How do I get a Twitter feed using Pharo?

Since Twitter changed their website design, I cannot get tweets from any account using the built-in Zinc classes. It throws an error that says: ConnectionClosed: Connection closed while waiting for data
I am using Pharo 5, and I don't know how to tweak the ZnClient settings to keep the connection open, or whatever else is needed to get the data.
testTwitter
| client |
self ensureSocketStreamFactory.
self isNativeSSLPluginPresent ifFalse: [ ^ self ].
(client := ZnClient new)
get: 'https://www.twitter.com/pharoproject'.
self assert: client isSuccess.
self assert: (client contents includesSubstring: 'Twitter').
client close
That's the test I have in place; it never passes and throws the error mentioned above. What's missing here? I wrote a Ruby script using open-uri, openssl and Nokogiri and it fetched the tweets just fine. Perhaps it's a problem with the SSL connection itself?
The issue here is quite easy to answer, but you won't like it. It is connected to the fact that Twitter deprecated support for TLS 1.0 and TLS 1.1 on July 15, 2019. Your Pharo is using a deprecated TLS version to connect, and that is why you are getting the connection error.
The solution?
You have to compile the new SSL/TLS support yourself, which is not an easy task. You need at least TLS 1.2 compiled in to be able to connect again, and there is little Pharo documentation on how to compile support for new libraries. My guess is that you are using TLS 1.0 (see the note below): Pharo 6.1 (so your Pharo 5.x will have the same or older libraries) ships libgit2.so compiled against libssl.so.1.0.0 (which has a dependency on libcurl-gnutls.so.4). If you update those libraries, you can see that they support TLS 1.2 and newer.
Note:
This is connected to an issue I posted some time ago. Nobody upvoted or answered it, so it was automatically deleted; you can vote to undelete it: https://stackoverflow.com/questions/51399321/getting-error-when-adding-ossubprocess-to-my-pharo-6-1-on-centos-7-4x (see the bottom of the post for the question). I don't have an answer for it, as I have dedicated my time to my Smalltalk/X project.
Or just switch to a newer Pharo. Adding your method to ZnHTTPSTest in Pharo 8 just works (tested on Pharo 8 build 686, Ubuntu 18.04.02 LTS, with the stable VM in PharoLauncher).

Glassfish 3.1 Maximum URI Length

I'm having this error when using a long GET request:
SEVERE: GRIZZLY0039: Request URI is too large.
java.nio.BufferOverflowException
What is the configuration I have to change for Glassfish 3.1?
I tried changing these parameters but had no success:
- header-buffer-length-bytes (through admin console)
- request-body-buffer-size-bytes (in domain.xml)
Thanks.
We had the same problem while deploying a large application into a cluster (standalone deployment was working). A solution that worked for us was to increase the TCP buffer size.
In the administration console change
'Configurations -> cluster-config -> Network Config -> Transports -> tcp -> Buffer Size' to 131072 or more.
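If you would rather set it in domain.xml (as you did for request-body-buffer-size-bytes), the same value goes on the tcp transport element; I believe the attribute is buffer-size-bytes, but verify against your own domain.xml:

<transports>
  <transport name="tcp" buffer-size-bytes="131072"></transport>
</transports>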
Hope that helps. -- Wintermute