ServiceStack.Redis: Unable to Connect: sPort: - redis

I regularly get
ServiceStack.Redis: Unable to Connect: sPort: 0 or ServiceStack.Redis: Unable to Connect: sPort: 50071 (or another port number).
This appears to happen when our site is busier. Redis itself appears fine, with no real increase in CPU or memory usage.
I'm using connection pooling and have tried changing the timeout values without success.
public sealed class RedisConnection
{
    // parameter values are:
    // Config.Settings.RedisPoolSize = 10000
    // Config.Settings.RedisPoolTimeoutSeconds = 2
    // Config.Settings.RemoteCacheServerName = 192.168.10.12
    private static readonly PooledRedisClientManager instance
        = new PooledRedisClientManager(Config.Settings.RedisPoolSize,
            Config.Settings.RedisPoolTimeoutSeconds,
            new string[] { Config.Settings.RemoteCacheServerName })
        {
            ConnectTimeout = 1500
        };

    static RedisConnection()
    {
    }

    public static PooledRedisClientManager Instance
    {
        get
        {
            return instance;
        }
    }
}
Usage is like this:
public sealed class Caching
{
    public static T GetCacheSingle<T>(string key)
    {
        using (var redisClient = RedisConnection.Instance.GetReadOnlyClient())
        {
            var value = redisClient.Get<byte[]>(key);
            ....
        }
    }
}

This was caused by a latency issue due to Redis running on Ubuntu hosted as a virtual machine on Hyper-V.
We reduced latency by 45% by moving to a physical Linux box, which fixed the problem.

Related

Apache Ignite data distribution issue

I am trying to load data into an Apache Ignite cluster using a Java thin client, but the data is not getting loaded evenly; I can see that only one machine's memory usage increases.
I followed this and added a custom affinity mapper. The version is 2.14.0:
https://ignite.apache.org/docs/2.14.0/thin-clients/java-thin-client#partition-awareness
Update1: Adding code
ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses("host1:10800", "host2:10800", ......)
    .setTimeout(10 * 60 * 1000)
    .setPartitionAwarenessEnabled(true)
    .setPartitionAwarenessMapperFactory(new ClientPartitionAwarenessMapperFactory() {
        @Override
        public ClientPartitionAwarenessMapper create(String cacheName, int partitions) {
            final AffinityFunction aff = new MyAffinityFunction(partitions);
            return new ClientPartitionAwarenessMapper() {
                @Override
                public int partition(Object key) {
                    return aff.partition(key);
                }
            };
        }
    });

ignite = Ignition.startClient(cfg)
kmvpCache = ignite.cache(s"cacheName")

val kmvpBatch = new java.util.HashMap[java.lang.Long, Array[Byte]](params.batchSize)
for (....) {
    for (....) {
        kmvpBatch.put(<Key>, <value>)
    }
    kmvpCache.putAll(kmvpBatch)
    kmvpBatch.clear()
}
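The MyAffinityFunction class itself isn't shown above. As an illustration only (an assumption, not the poster's actual class), a minimal mapper that reuses Ignite's default rendezvous hashing could look like the sketch below; the key point is that the client-side mapper is expected to compute the same key-to-partition mapping that the server-side affinity function uses.

// Hypothetical sketch only -- not the poster's MyAffinityFunction.
// Reusing Ignite's default rendezvous hashing keeps the client-side mapping
// consistent with server caches that also use RendezvousAffinityFunction.
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;

public class MyAffinityFunction extends RendezvousAffinityFunction {
    public MyAffinityFunction(int partitions) {
        super(false, partitions);   // no neighbor exclusion, explicit partition count
    }
    // partition(Object key) is inherited: abs(key.hashCode() % partitions)
}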

Gemfire Pdx Serialization Put All

Is it normal that a client application takes longer to insert data into a GemFire cluster the first time? For example, my client application took around 4 seconds to insert into the GemFire cluster successfully, whereas subsequent inserts only took around 1 second. May I know the reason behind this?
@Configuration
@EnablePool(name = "sgpool", socketBufferSize = 1000000, prSingleHopEnabled = true, idleTimeout = 10000)
public class RegionConfiguration {

    @Bean("People")
    public ClientRegionFactoryBean<String, Person> customersRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean<String, Person> customersRegion = new ClientRegionFactoryBean<>();
        customersRegion.setCache(gemfireCache);
        customersRegion.setClose(false);
        customersRegion.setPoolName("sgpool");
        customersRegion.setShortcut(ClientRegionShortcut.CACHING_PROXY);
        return customersRegion;
    }
}

@ClientCacheApplication
@EnablePdx
@Service
@EnableGemfireRepositories(basePackageClasses = PersonRepository.class)
@Import(RegionConfiguration.class)
public class PersonDataAccess {

    @Autowired
    @Qualifier("People")
    private Region<String, Person> peopleRegion;

    @Autowired
    private PersonRepository personRepository;

    @PostConstruct
    public void init() {
        peopleRegion.registerInterestForAllKeys();
    }

    public void saveAll(Iterable<Person> iteratorList) {
        personRepository.saveAll(iteratorList);
    }
}
@Service
@EnableScheduling
@Log4j2
public class PersonService {

    @Autowired
    public PersonDataAccess personDataAccess;

    private Person createPerson(String ic, int age) {
        return new Person(ic, "Jack - " + age, "Kay - " + age, LocalDate.of(2000, 12, 10), age, 2,
                new Address("Jack Wonderful Land 12", "Wonderful Land", "Wonderful 101221"), "11111", "22222", "33333",
                "Dream", "Space");
    }

    @Scheduled(fixedDelay = 1000, initialDelay = 5000)
    public void testSaveRecord() {
        ArrayList<Person> personList = new ArrayList<>();
        for (int counter = 0; counter < 30000; counter++) {
            personList.add(createPerson("S2011" + counter, counter));
        }
        log.info("start saving person");
        personDataAccess.saveAll(personList);
        log.info("Complete saving all the message");
    }
}
GemFire configuration:
Using PDX serialization
1 locator and cache server (GemFire 9.10.5)
Partitioned region
Client application (using Spring Data GemFire 2.3.4)
Thank you so much.
Clearly, on the first insert there is a PDX type registration process going on between the client and the server, somewhat like client/server event distribution.
That registration does not need to happen on the second insert because it is already set up.
The PDX registration process is explained more here, where it says: "Geode maintains a central registry of the PDX domain object metadata."
There must be a way to get the region and PDX setup done before your first insert, so that every insert costs the same.
Maybe this could help.
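One way to front-load that setup, as a minimal sketch against the PersonDataAccess shown above (the "WARM-UP" key and the dummy Person values are illustrative assumptions, and whether a single put absorbs the whole first-insert cost would need to be verified):

// Hedged sketch: a variant of PersonDataAccess.init() that warms up PDX type
// registration and single-hop metadata before the first scheduled bulk insert.
// The "WARM-UP" key and the dummy Person values are illustrative only.
@PostConstruct
public void init() {
    peopleRegion.registerInterestForAllKeys();
    Person dummy = new Person("WARM-UP", "first", "last", LocalDate.of(2000, 1, 1), 0, 0,
            new Address("street", "city", "zip"), "1", "2", "3", "a", "b");
    peopleRegion.put("WARM-UP", dummy);    // first put pays the PDX registration cost
    peopleRegion.remove("WARM-UP");        // discard the throwaway entry
}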

stop polling files when rabbitmq is down: spring integration

I'm working on a project where we poll files from an SFTP server and stream them out as objects onto a RabbitMQ queue. When RabbitMQ is down, the flow still polls and deletes the file from the server, so the file is lost because the send to the queue fails. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
            null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
            "streaming"));
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to be polled and removed from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is up it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
#Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
CustomTriggerAdvice customTriggerAdvice) {
PollerMetadata pollerMetadata = new PollerMetadata();
pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
pollerMetadata.setTrigger(startStopTrigger);
pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));
ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
new ExpressionEvaluatingTransactionSynchronizationProcessor();
syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
syncProcessor.setBeforeCommitChannel(
applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
syncProcessor
.setAfterCommitChannel(
applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
"#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));
DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
new DefaultTransactionSynchronizationFactory(syncProcessor);
pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired
    private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}
public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService, to determine how it is possible for the onSuccessExpression to run when there should be an exception in the case of a down broker.
You also should not only throw an exception and skip the delete, but also consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
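A minimal sketch of such a retry advice bean, using Spring Retry (the bean name, attempt count, and back-off period here are illustrative assumptions, not values from the answer):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));      // up to 3 attempts
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(5000);                              // 5 seconds between attempts
    retryTemplate.setBackOffPolicy(backOff);
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

It would then be referenced from the adviceChain of whichever endpoint actually performs the failing send; as noted below, an advice on the transformer only wraps the transformer itself, not the downstream binder send.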
UPDATE
So, since Gary guessed that you use Spring Cloud Stream to send the message to the Rabbit binder after your internal process (it is a pity you didn't share that information originally), you need to take a look at the binder's error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
It is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only around the SFTPTransformerService and nothing more. A downstream error (in the binder) is not included in that process.
UPDATE 2
Yeah... I think Gary is right: we have no choice but to configure a TransactionSynchronizationFactory at the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with an ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice but works at the transaction level, so it covers your whole flow, starting with the SFTP channel adapter and ending at the Rabbit binder level with the attempts to send to AMQP.
See the Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization.
The point with the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it has a boundary only around the handleRequestMessage() method, and therefore only applies to the component on which it is declared.

Exception thrown for large number of Vertx connecting to Redis

I'm trying to simulate a heavy-load scenario with Redis (default config only).
To keep it simple, when multi is issued I immediately exec and then close the connection.
import io.vertx.core.*;
import io.vertx.core.json.Json;
import io.vertx.redis.RedisClient;
import io.vertx.redis.RedisOptions;
import io.vertx.redis.RedisTransaction;

class MyVerticle extends AbstractVerticle {
    private int index;

    public MyVerticle(int index) {
        this.index = index;
    }

    private void run2() {
        RedisClient client = RedisClient.create(vertx, new RedisOptions().setHost("127.0.0.1"));
        RedisTransaction tr = client.transaction();
        tr.multi(ev2 -> {
            if (ev2.succeeded()) {
                tr.exec(ev3 -> {
                    if (ev3.succeeded()) {
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    } else {
                        System.out.println("FAIL EXEC");
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    }
                });
            } else {
                System.out.println("FAIL MULTI");
                tr.close(i -> {
                    if (i.failed()) {
                        client.close(j -> {
                            if (j.failed()) {
                                System.out.println("FAIL CLOSE");
                            }
                        });
                    }
                });
            }
        });
    }

    @Override
    public void start(Future<Void> startFuture) {
        long timerID = vertx.setPeriodic(1, new Handler<Long>() {
            public void handle(Long aLong) {
                run2();
            }
        });
    }

    @Override
    public void stop(Future stopFuture) throws Exception {
        System.out.println("MyVerticle stopped!");
    }
}

public class Periodic {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        for (int i = 0; i < 8000; i++) {
            vertx.deployVerticle(new MyVerticle(i));
        }
    }
}
Although connections are closed properly, I still get warning errors.
All of them are thrown even before I put any more logic inside multi.
2017-06-20 16:29:49 WARNING io.netty.util.concurrent.DefaultPromise notifyListener0 An exception was thrown by io.vertx.core.net.impl.ChannelProvider$$Lambda$61/1899599620.operationComplete()
java.lang.IllegalStateException: Uh oh! Event loop context executing with wrong thread! Expected null got Thread[globalEventExecutor-1-2,5,main]
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:316)
at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:193)
at io.vertx.core.net.impl.NetClientImpl.failed(NetClientImpl.java:258)
at io.vertx.core.net.impl.NetClientImpl.lambda$connect$5(NetClientImpl.java:233)
at io.vertx.core.net.impl.ChannelProvider.lambda$connect$0(ChannelProvider.java:42)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:233)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
Is there a reason for this error ?
You'll continue to get errors, because you're testing the wrong things.
First of all, verticles are not fat coroutines; they are thin actors. Creating 500 of them won't speed things up, and will probably slow everything down, because the event loop still needs to switch between them.
Second, if you want to prepare for 2K concurrent requests, put your Vert.x application on one machine and run wrk or a similar tool over the network.
Third, your Redis is also on the same machine. I hope that won't be the case in production, since Redis will compete with Vert.x for CPU.
Once everything is set up correctly, I believe you'll be able to handle 10K requests quite easily. I've seen Vert.x handle 8K requests on modest machines with PostgreSQL.
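To illustrate the point about keeping the number of verticles small, a rough sketch of the deployment side (the instance count, deployment by class name, and the implied no-arg constructor for MyVerticle are assumptions, not part of the answer):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Periodic {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy a bounded number of verticle instances, roughly one per CPU core,
        // instead of 8000 individually constructed verticles. Deploying by class
        // name assumes MyVerticle gets a public no-arg constructor.
        int instances = Runtime.getRuntime().availableProcessors();
        vertx.deployVerticle(MyVerticle.class.getName(),
                new DeploymentOptions().setInstances(instances));
    }
}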

MassTransit with RabbitMq Request/Response wrong reply address because of network segments

I have a web app that uses a request/response message in Masstransit.
This works in our test environment, no problem.
However, on the customer deployment we face a problem. At the customer site there are two network segments, A and B. The component doing the database call is in segment A; the web app and the RabbitMQ server are in segment B.
Due to security restrictions, the component in segment A has to go through a load balancer with a given address. The component itself can connect to RabbitMQ with MassTransit. So far so good.
The web component in segment B, however, uses the direct address of the RabbitMQ server. When the web component starts the request/response call, I can see that the message arrives at the component in segment A.
However, I see that the consumer tries to call the RabbitMQ server on the "wrong" address: it uses the address the web component used to issue the request, whereas the component in segment A should reply on the "load balancer" address.
Is there a way to configure or tell the RespondAsync call to use the connection address configured for that component?
Of course the easiest thing would be to have the web component also connect through the load balancer, but due to the network segment/security setup the load balancer is only reachable from segment A.
Any input/help is appreciated.
I had a similar problem with rabbitmq federation. Here's what I did.
ResponseAddressSendObserver
class ResponseAddressSendObserver : ISendObserver
{
    private readonly string _hostUriString;

    public ResponseAddressSendObserver(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task PreSend<T>(SendContext<T> context)
        where T : class
    {
        if (context.ResponseAddress != null)
        {
            // Send relative response address alongside the message
            context.Headers.Set("RelativeResponseAddress",
                context.ResponseAddress.AbsoluteUri.Substring(_hostUriString.Length));
        }
        return Task.CompletedTask;
    }
    ...
}
ResponseAddressConsumeFilter
class ResponseAddressConsumeFilter : IFilter<ConsumeContext>
{
    private readonly string _hostUriString;

    public ResponseAddressConsumeFilter(string hostUriString)
    {
        _hostUriString = hostUriString;
    }

    public Task Send(ConsumeContext context, IPipe<ConsumeContext> next)
    {
        var responseAddressOverride = GetResponseAddress(_hostUriString, context);
        return next.Send(new ResponseAddressConsumeContext(responseAddressOverride, context));
    }

    public void Probe(ProbeContext context) { }

    private static Uri GetResponseAddress(string host, ConsumeContext context)
    {
        if (context.ResponseAddress == null)
            return context.ResponseAddress;
        object relativeResponseAddress;
        if (!context.Headers.TryGetHeader("RelativeResponseAddress", out relativeResponseAddress) || !(relativeResponseAddress is string))
            throw new InvalidOperationException("Message has ResponseAddress but doesn't have RelativeResponseAddress header");
        return new Uri(host + relativeResponseAddress);
    }
}
ResponseAddressConsumeContext
class ResponseAddressConsumeContext : BaseConsumeContext
{
    private readonly ConsumeContext _context;

    public ResponseAddressConsumeContext(Uri responseAddressOverride, ConsumeContext context)
        : base(context.ReceiveContext)
    {
        _context = context;
        ResponseAddress = responseAddressOverride;
    }

    public override Uri ResponseAddress { get; }

    public override bool TryGetMessage<T>(out ConsumeContext<T> consumeContext)
    {
        ConsumeContext<T> context;
        if (_context.TryGetMessage(out context))
        {
            // the most hackish part in the whole arrangement
            consumeContext = new MessageConsumeContext<T>(this, context.Message);
            return true;
        }
        else
        {
            consumeContext = null;
            return false;
        }
    }

    // all other members just delegate to _context
}
And when configuring the bus
var result = MassTransit.Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(hostAddress), h =>
    {
        h.Username(...);
        h.Password(...);
    });
    cfg.UseFilter(new ResponseAddressConsumeFilter(hostAddress));
    ...
});
result.ConnectSendObserver(new ResponseAddressSendObserver(hostAddress));
So now relative response addresses are sent with the messages and used on the receiving side.
Using observers to modify anything is not recommended by the documentation, but it should be fine in this case.
Maybe there is a better solution, but I haven't found one. HTH