I have to create a RedisTemplate for each request (write/read) on demand.
The connection factory is a JedisConnectionFactory:
JedisConnectionFactory factory = new JedisConnectionFactory(sentinelConfig, poolConfig);
Once I have performed operations with RedisTemplate.opsForHash()/opsForValue(), how do I dispose of the template safely, so that the connection is returned to the JedisPool?
As of now, I do this with:
template.getConnectionFactory().getConnection().close();
Is this the right way?
RedisTemplate fetches the connection from the RedisConnectionFactory and ensures that it is returned to the pool, or closed properly, after command execution, depending on the configuration provided (see JedisConnection#close()).
Closing the connection manually via getConnectionFactory().getConnection().close() fetches a fresh connection and closes it right away; it does not release the connection the template just used.
So if you want a bit more control, you can fetch the connection yourself, perform some operations, and close it later:
RedisConnection connection = template.getConnectionFactory().getConnection();
try {
    // call ops on the connection as required
} finally {
    connection.close(); // returns the connection to the pool
}
Or use RedisTemplate.execute(...) along with a RedisCallback, so that you do not have to worry about fetching and returning the connection:
template.execute(new RedisCallback<Void>() {

    @Override
    public Void doInRedis(RedisConnection connection) throws DataAccessException {
        // call ops on the connection as required
        return null;
    }
});
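Since RedisCallback has a single method, the same thing can be written as a lambda on Java 8+ (a minimal sketch; the cast is needed to disambiguate the overloaded execute(...)):
template.execute((RedisCallback<Void>) connection -> {
    // call ops on the connection as required
    return null; // the connection is returned to the pool automatically
});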
Problem Statement
Context
I'm a Software Engineer in Test running order permutations of Restaurant Menu Items to confirm that they succeed at order placement w/ the POS
In short, this POSTs a JSON payload to an endpoint, which then validates the order w/ a POS to determine success/fail/other
Where the POS, and therefore Transactions per Second (TPS), may vary, but each Back End uses the same core handling
This can be as high as ~22,000 permutations per item, each of easily manageable JSON size, that need to be handled as quickly as possible
The Network can vary wildly depending upon the Restaurant, and/or Region, one is testing
E.g. where some have a much higher latency than others
Therefore, the HTTPClient should be able to intelligently negotiate the same content & endpoint regardless of this
Direct Problem
I'm using Apache's HTTP Client 5 w/ PoolingAsyncClientConnectionManager to execute both the GET for the Menu contents, and the POST to check if the order succeeds
This works out of the box, but sometimes loses connections w/ Stream Refused, specifically:
org.apache.hc.core5.http2.H2StreamResetException: Stream refused
No individual tuning that I can find seems to work across all network contexts w/ variable latency
Following the stacktrace seems to indicate that the stream had already closed, so a way is needed to keep it open, or to avoid executing on an already-closed connection:
if (connState == ConnectionHandshake.GRACEFUL_SHUTDOWN) {
    throw new H2StreamResetException(H2Error.PROTOCOL_ERROR, "Stream refused");
}
Some Attempts to Fix the Problem
Tried to use Search Engines to find answers, but there are few hits for HTTPClient5
Tried to use the official documentation, but this is sparse
Changed max connections per route to a reduced number, shifted inactivity validations, and adjusted connection time-to-live
Where the inactivity checks may fix the POST, but stall the GET for some transactions
And tuning that works for one region/restaurant may break for another, w/ only the Network as the variable
PoolingAsyncClientConnectionManagerBuilder builder = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setTlsStrategy(getTlsStrategy())
        .setMaxConnPerRoute(12)
        .setMaxConnTotal(12)
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
        .setConnectionTimeToLive(TimeValue.ofMinutes(2))
        .build();
Shifted to a custom RequestConfig w/ different timeouts:
private HttpClientContext getHttpClientContext() {
    RequestConfig requestConfig = RequestConfig.custom()
            .setConnectTimeout(Timeout.of(10, TimeUnit.SECONDS))
            .setResponseTimeout(Timeout.of(10, TimeUnit.SECONDS))
            .build();
    HttpClientContext httpContext = HttpClientContext.create();
    httpContext.setRequestConfig(requestConfig);
    return httpContext;
}
Initial Code Segments for Analysis
(In addition to the above segments w/ change attempts)
Wrapper handling to init and get the response:
public SimpleHttpResponse getFullResponse(String url, PoolingAsyncClientConnectionManager manager, SimpleHttpRequest req) {
    try (CloseableHttpAsyncClient httpclient = getHTTPClientInstance(manager)) {
        httpclient.start();
        CountDownLatch latch = new CountDownLatch(1);
        long startTime = System.currentTimeMillis();
        Future<SimpleHttpResponse> future = getHTTPResponse(url, httpclient, latch, startTime, req);
        latch.await();
        return future.get();
    } catch (IOException | InterruptedException | ExecutionException e) {
        e.printStackTrace();
        return new SimpleHttpResponse(999, CommonUtils.getExceptionAsMap(e).toString());
    }
}
With the actual handler and probing code:
private Future<SimpleHttpResponse> getHTTPResponse(String url, CloseableHttpAsyncClient httpclient, CountDownLatch latch, long startTime, SimpleHttpRequest req) {
    return httpclient.execute(req, getHttpContext(), new FutureCallback<SimpleHttpResponse>() {

        @Override
        public void completed(SimpleHttpResponse response) {
            latch.countDown();
            logger.info("[{}][{}ms] - {}", response.getCode(), getTotalTime(startTime), url);
        }

        @Override
        public void failed(Exception e) {
            latch.countDown();
            logger.error("[{}ms] - {} - {}", getTotalTime(startTime), url, e);
        }

        @Override
        public void cancelled() {
            latch.countDown();
            logger.error("[{}ms] - request cancelled for {}", getTotalTime(startTime), url);
        }
    });
}
Direct Question
Is there a way to configure the client such that it can handle these variances on its own, without explicitly modifying the configuration for each endpoint context?
Fixed w/ a Combination of the Below to Assure the Connection is Live/Ready
(Or at least it is stable)
Forcing HTTP/1.1
CloseableHttpAsyncClient client = HttpAsyncClients.custom()
        .setConnectionManager(manager)
        .setRetryStrategy(getRetryStrategy())
        .setVersionPolicy(HttpVersionPolicy.FORCE_HTTP_1)
        .setConnectionManagerShared(true)
        .build();
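The getRetryStrategy() used above is not shown in the original post; a minimal possibility, using the stock DefaultHttpRequestRetryStrategy that ships with HttpClient 5, might look like this:
private HttpRequestRetryStrategy getRetryStrategy() {
    // retry failed requests up to 3 times, waiting 1 second between attempts;
    // by default this retries idempotent requests and safe I/O failures only
    return new DefaultHttpRequestRetryStrategy(3, TimeValue.ofSeconds(1));
}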
Setting Effective Headers for POST
Specifically the close header:
req.setHeader("Connection", "close, TE");
Note: the inactivity check helps, but refusals still sometimes occur w/o this
Setting Inactivity Checks by Type
Set POSTs to validate immediately after inactivity:
Note: Using 1000 for both caused a high drop rate on some systems
PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(0))
Set GETs to validate after 1s:
PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
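Read together, the two snippets above imply a separate connection manager (and client) per request type; wiring them up might look like this (a sketch, variable names are assumptions):
PoolingAsyncClientConnectionManager postManager = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(0))
        .build(); // used by the POST client: validate on every acquisition

PoolingAsyncClientConnectionManager getManager = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
        .build(); // used by the GET client: validate after 1s of inactivity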
Given the Error Context
Tracing the connection problem in the stacktrace to AbstractH2StreamMultiplexer
shows ConnectionHandshake.GRACEFUL_SHUTDOWN as triggering the stream refusal:
if (connState == ConnectionHandshake.GRACEFUL_SHUTDOWN) {
    throw new H2StreamResetException(H2Error.PROTOCOL_ERROR, "Stream refused");
}
Which corresponds to:
connState = streamMap.isEmpty() ? ConnectionHandshake.SHUTDOWN : ConnectionHandshake.GRACEFUL_SHUTDOWN;
Reasoning
If I'm understanding correctly:
The connections were being closed, intentionally or not
However, they were not being confirmed ready before executing again
Which caused failures because the stream was no longer viable
Therefore the fix works because (it seems):
Forcing HTTP/1.1 allows a single protocol context to manage
Where HttpVersionPolicy NEGOTIATE/FORCE_HTTP_2 had greater or equivalent failures across the spectrum of regions/menus
And it assures that all connections are valid before use
And POSTs are always closed due to the close header, which is unavailable in HTTP/2
Therefore
GET is checked for validity w/ reasonable periodicity
POST is checked every time and, since it is forcibly closed, is re-acquired before execution
Which leaves no room for unexpected closures
And it otherwise removes the potential that the client was incorrectly switching to HTTP/2
Will accept this until a better answer comes along, as this is stable but sub-optimal.
I have configured TcpClient as described in the examples here. I am trying to make the following code resilient in situations where the server unexpectedly closes the connection:
TcpClient tcpClient = getTcpClient();

public Mono<String> sendMessage(Mono<byte[]> request) {
    Connection connection = getConnectionFromPool(tcpClient);
    return connection
            .outbound()
            .sendByteArray(request)
            .then()
            .then(connection.inbound().receive().asString().as(Mono::from));
}
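For context, the pooled client configuration referenced above might look roughly like this (a sketch with assumed names; reactor-netty's ConnectionProvider backs the pool that .connect() acquires from):
ConnectionProvider provider = ConnectionProvider.builder("tcp-pool")
        .maxConnections(16) // pool size is an assumption
        .build();

TcpClient tcpClient = TcpClient.create(provider)
        .host("localhost") // host/port are placeholders
        .port(7000);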
In such an event, I expect the method getConnectionFromPool to retrieve a connection from the pool, or open a new one if none are available.
After noticing that .connect() eventually defers to ConnectionProvider.acquire(), I tried to use tcpClient.connect(), but it then becomes necessary to change the method's return type as follows:
public Mono<Mono<String>> sendMessage1(Mono<String> request) {
    return this.tcpClient
            .connect()
            .map(connection ->
                    connection
                            .outbound()
                            .sendByteArray(request.map(String::getBytes))
                            .then()
                            .then(connection.inbound().receive().asString().as(Mono::from))
            );
}
Clearly this is undesirable. How do I acquire a Connection instance directly from the pool? Is there a simple Mono operator I am missing, or am I using the TcpClient API incorrectly?
Thanks very much for any help.
What about using flatMap instead of map? flatMap subscribes to the inner Mono and flattens it, so the method can return Mono<String> directly:
public Mono<String> sendMessage1(Mono<String> request) {
    return this.tcpClient
            .connect()
            .flatMap(connection ->
                    connection
                            .outbound()
                            .sendByteArray(request.map(String::getBytes))
                            .then()
                            .then(connection.inbound().receive().asString().as(Mono::from))
            );
}
I have a job with the following config:
@Autowired
private ConnectionFactory connectionFactory;

@Bean
Step step() {
    return steps.get("step")
            .<Person, Person>chunk(chunkSize)
            .reader(reader())
            .processor(processor())
            .writer(writer())
            .build();
}

@Bean
ItemReader<Person> reader() {
    return new AmqpItemReader<>(amqpTemplate());
}

@Bean
AmqpTemplate amqpTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setChannelTransacted(true);
    return rabbitTemplate;
}
Is it possible to change the behavior of RabbitResourceHolder so that it does not requeue the message in case of a transaction rollback? Does that make sense in Spring Batch?
Not when using an external transaction manager; the whole point of rolling back a transaction is to put things back the way they were before the transaction started.
If you don't use transactions (or use only a local transaction, via setChannelTransacted(true) and no transaction manager), you (or an ErrorHandler) can throw an AmqpRejectAndDontRequeueException (or set defaultRequeueRejected to false on the container) and the message will go to the DLQ, as sketched below.
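For illustration, both options might look like this (a sketch; the queue name and the isValid helper are assumptions, not from the original post):
// reject without requeue from a listener: the broker dead-letters the message
@RabbitListener(queues = "orders")
public void handle(Person person) {
    if (!isValid(person)) {
        throw new AmqpRejectAndDontRequeueException("invalid payload");
    }
}

// or make rejection-without-requeue the container default
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
container.setDefaultRequeueRejected(false);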
I can see that this is inconsistent; the RabbitMQ documentation says:
On the consuming side, the acknowledgements are transactional, not the consuming of the messages themselves.
So Rabbit itself does not requeue the delivery but, as you point out, the resource holder does (the container, however, will reject the delivery when there is no transaction manager and one of the two conditions I described is true).
I think we need to provide at least an option for the behavior you want.
I opened AMQP-711.
We have more than a dozen WCF services, called using TCP binding. There are lots of calls to the same WCF service at various places in the code.
AdminServiceClient client = FactoryS.AdminServiceClient(); // this takes significant time
client.GetSomeThing(param1);
client.Close();
I want to cache the client, or produce it from a singleton, so that I can save some time. Is that possible?
Thanks
Yes, this is possible. You can make the proxy object visible to the entire application, or wrap it in a singleton class for neatness (my preferred option). However, if you are going to reuse a proxy for a service, you will have to handle channel faults.
First create your singleton class / cache / global variable that holds an instance of the proxy (or proxies) that you want to reuse.
When you create the proxy, you need to subscribe to the Faulted event on the inner channel:
proxyInstance.InnerChannel.Faulted += new EventHandler(ProxyFaulted);
and then put some reconnect code inside the ProxyFaulted event handler. The Faulted event will fire if the service drops, or if the connection times out because it was idle. The Faulted event will only fire if you have reliableSession enabled on your binding in the config file (if unspecified, this defaults to enabled on netTcpBinding).
Edit: If you don't want to keep your proxy channel open all the time, you will have to test the state of the channel every time before you use it, and recreate the proxy if it is faulted, as sketched below. Once the channel has faulted there is no option but to create a new one.
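That per-call check might look something like this (a sketch, using the proxy type from the class further down):
// recreate the proxy if its channel has faulted since the last call
if (proxyInstance == null || proxyInstance.State == CommunicationState.Faulted)
{
    if (proxyInstance != null)
    {
        proxyInstance.Abort(); // a faulted channel can only be aborted, not closed
    }
    proxyInstance = new CupidClientServiceClient();
    proxyInstance.Open();
}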
Edit2: The only real difference in load between keeping the channel open and closing it every time is a keep-alive packet being sent to the service and acknowledged every so often (which is what is behind your channel fault event). With 100 users I don't think this will be a problem.
The other option is to put your proxy creation inside a using block, where it will be closed/disposed at the end of the block (which is considered bad practice for WCF proxies, since Dispose can itself throw on a faulted channel). Closing the channel after a call may also result in your application hanging because the service is not yet finished processing. In fact, even if your call to the service was async, or the service contract for the method was one-way, the channel close code will block until the service is finished.
Here is a simple singleton class that should have the bare bones of what you need:
public static class SingletonProxy
{
    // dedicated lock object; locking on proxyInstance itself would fail while it is null
    private static readonly object syncRoot = new object();
    private static CupidClientServiceClient proxyInstance = null;

    public static CupidClientServiceClient ProxyInstance
    {
        get
        {
            if (proxyInstance == null)
            {
                AttemptToConnect();
            }
            return proxyInstance;
        }
    }

    private static void ProxyChannelFaulted(object sender, EventArgs e)
    {
        bool connected = false;
        while (!connected)
        {
            // you may want to put timer code around this, or
            // other code to limit the number of retries if
            // the connection keeps failing
            connected = AttemptToConnect();
        }
    }

    public static bool AttemptToConnect()
    {
        // this whole process needs to be thread safe
        lock (syncRoot)
        {
            try
            {
                if (proxyInstance != null)
                {
                    // deregister the event handler from the old instance
                    proxyInstance.InnerChannel.Faulted -= new EventHandler(ProxyChannelFaulted);
                }
                // (re)create the instance
                proxyInstance = new CupidClientServiceClient();
                // always open the connection
                proxyInstance.Open();
                // the Faulted handler is attached here (after the Open)
                // because we don't want the new instance to keep raising Faulted
                // as soon as Open is called
                proxyInstance.InnerChannel.Faulted += new EventHandler(ProxyChannelFaulted);
                return true;
            }
            catch (EndpointNotFoundException)
            {
                // do something here (log, show user message etc.)
                return false;
            }
            catch (TimeoutException)
            {
                // do something here (log, show user message etc.)
                return false;
            }
        }
    }
}
I hope that helps :)
In my experience, creating/closing the channel on a per-call basis adds very little overhead. Take a look at this Stack Overflow question. It's not a singleton question per se, but it is related to your issue. Typically you don't want to leave the channel open once you're finished with it.
I would encourage you to use a reusable ChannelFactory implementation if you're not already, and see if you still have performance problems.
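A reusable factory with short-lived channels might look roughly like this (a sketch; the endpoint configuration name and IAdminService contract are assumptions):
// create the (expensive) factory once and reuse it; channels are cheap to create
ChannelFactory<IAdminService> factory =
    new ChannelFactory<IAdminService>("AdminServiceEndpoint");

IAdminService channel = factory.CreateChannel();
try
{
    channel.GetSomeThing(param1);
    ((IClientChannel)channel).Close();
}
catch
{
    ((IClientChannel)channel).Abort(); // never Close() a faulted channel
    throw;
}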
I'm trying to implement reconnect logic for a WCF client. I'm aware that you have to create a new channel after the current channel has entered the faulted state. I did this in a channel Faulted event handler:
internal class ServiceClient : DuplexClientBase<IService>, IServiceClient // IService: the duplex service contract
{
    public ServiceClient(ICallback callback, EndpointAddress serviceAddress)
        : base(callback, MyUtility.GetServiceBinding("NetTcpBinding"), serviceAddress)
    {
        // Open the connection.
        Open();
    }

    public void Register(string clientName)
    {
        // register to service
    }

    public void DoSomething()
    {
        // some code
    }
}

public class ClientApp
{
    private IServiceClient mServiceClient;
    private ICallback mCallback;

    public ClientApp()
    {
        mServiceClient = new ServiceClient(mCallback, new EndpointAddress("someAddress"));
        mServiceClient.Register("clientName");
        // register faulted event for the service client
        ((ICommunicationObject)mServiceClient).Faulted += new EventHandler(ServiceClient_Faulted);
    }

    void ServiceClient_Faulted(object sender, EventArgs e)
    {
        // Create new Service Client.
        mServiceClient = new ServiceClient(mCallback, new EndpointAddress("someAddress"));
        // Register the EI at Cell Controller
        mServiceClient.Register("clientName");
    }

    public void DoSomething()
    {
        mServiceClient.DoSomething();
    }
}
But in my unit test I still get a "The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state" exception.
Is it possible that the callback channel is still faulted, and if so, how can I replace the callback channel?
So far I have found that a WCF connection needs to be recreated on fault; there doesn't seem to be a way to recover it otherwise. As for when a fault occurs, the handler seems to fire fine, but it often fires and cleans up the WCF connection (establishing a new one, etc.) while the current request is still going through, causing that request to fail. This is especially true on timeouts.
A couple of suggestions:
- If it is timeout-related, keep track of the last time a call was made and a constant containing the timeout value. If the WCF connection will have been dropped due to inactivity, drop it and recreate it before you send the request over the wire (see the sketch after this list).
- The other thing: it looks like you are not re-adding the fault handler in ServiceClient_Faulted, which means the first fault will get handled, but the second time it faults it will fall over because no new handler has been attached.
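The first suggestion might be sketched like this (illustrative only; the IdleLimit value is an assumption and should match the binding's idle/receive timeout):
// recreate the proxy if it has sat idle longer than the binding allows
private DateTime mLastCallUtc = DateTime.UtcNow;
private static readonly TimeSpan IdleLimit = TimeSpan.FromMinutes(1);

private void EnsureConnectionFresh()
{
    if (DateTime.UtcNow - mLastCallUtc > IdleLimit)
    {
        ((ICommunicationObject)mServiceClient).Abort();
        mServiceClient = new ServiceClient(mCallback, new EndpointAddress("someAddress"));
        mServiceClient.Register("clientName");
    }
    mLastCallUtc = DateTime.UtcNow;
}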
Hope this helps
Have you tried to reset the communication channel by calling mServiceClient.Abort() in the Faulted event handler?
Edit:
I see that you do not reinitialize the mCallback object in your recovery code. You may need to assign it to a new instance.