How to use Quarkus to read data from a Redis Stream?

I am trying to read data from a Redis Stream using Quarkus, but I have not been able to get it working. According to the Quarkus guide, streams are not supported yet. Is there any other way to read data from a Redis Stream using Quarkus?
Using the Redis API:
@Startup
void onStart(@Observes StartupEvent ev) {
    System.out.println("Stream");
    Redis.createClient(vertx)
        .connect()
        .onSuccess(connection -> {
            // use the connection
            System.out.println("Successfully connected = " + connection + " " + Thread.currentThread().getName());
            connection.handler(message -> {
                // do whatever you need to do with your message
                System.out.println("Message = " + message + " " + Thread.currentThread().getName());
            });
            connection.send(Request.cmd(Command.XRANGE).arg("test").arg("-").arg("+"))
                .onSuccess(res -> {
                    System.out.println("Subscribed");
                    System.out.println(res);
                });
        });
}
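Note that XRANGE only returns the entries that already exist in the stream. To keep consuming new entries as they are appended, one option is to issue a blocking XREAD in a loop over the same low-level Vert.x Redis client. A rough sketch under that assumption (the stream name "test", the COUNT/BLOCK values, and the readLoop helper are illustrative only):

// Sketch only: uses io.vertx.redis.client.{RedisConnection, Request, Command}.
private void readLoop(RedisConnection connection, String lastId) {
    connection.send(Request.cmd(Command.XREAD)
            .arg("COUNT").arg("10")
            .arg("BLOCK").arg("0")                    // block until at least one entry arrives
            .arg("STREAMS").arg("test").arg(lastId))  // "$" on the first call means "only new entries"
        .onSuccess(res -> {
            System.out.println("New entries = " + res);
            // In a real consumer, parse the reply and pass the ID of the last
            // entry seen into the next iteration instead of reusing lastId.
            readLoop(connection, lastId);
        })
        .onFailure(Throwable::printStackTrace);
}

Starting it with readLoop(connection, "$") from the connection handler above would print each new entry appended to the stream; the same pattern applies with XREADGROUP and XACK when using consumer groups.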

Related

How to configure the key-expired event listener in the Redisson reactive API (Spring Boot project)

I am using Spring Boot WebFlux with Redisson. I want to listen to all key-expired events in my application. I tried it this way, but it doesn't work.
this.client.getTopic("__keyevent@*__:expired", StringCodec.INSTANCE)
    .addListener(String.class, new MessageListener<String>() {
        @Override
        public void onMessage(CharSequence channel, String msg) {
            //
        }
    });
I would appreciate help resolving this problem.
The first issue is that you haven't subscribed to the listener. The second is that you can't use getTopic for pub/sub events when you use a pattern in Redisson; you should use the getPatternTopic method instead, as shown below. Make sure to subscribe at the end, and the listener should implement the PatternMessageListener interface.
this.client
    .getPatternTopic("__keyevent@*__:expired", StringCodec.INSTANCE)
    .addListener(String.class, new PatternMessageListener<String>() {
        @Override
        public void onMessage(CharSequence pattern, CharSequence channel, String msg) {
            System.out.println("pattern = " + pattern + ", channel = " + channel + ", msg = " + msg);
        }
    })
    .subscribe();
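For completeness, a rough sketch of how the reactive client behind this.client could be obtained (the address is illustrative, and depending on the Redisson version the reactive client may instead be created with Redisson.createReactive(config)):

// Illustrative setup only.
Config config = new Config();
config.useSingleServer().setAddress("redis://127.0.0.1:6379");
RedissonReactiveClient client = Redisson.create(config).reactive();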

Too many open processes when using AWS S3 TransferManager with MultipartUpload and S3ProgressListener for ResumableTransfer

We have implemented the AWS TransferManager with MultipartUpload and ResumableTransfer for file uploads.
We implemented the solution as per the guides below:
https://aws.amazon.com/blogs/developer/pausing-and-resuming-transfers-using-transfer-manager/
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-transfermanager.html
https://aws.amazon.com/blogs/mobile/pause-and-resume-amazon-s3-transfers-using-the-aws-mobile-sdk-for-android/
The process count was well under control when uploading files without MultipartUpload and ResumableTransfer, but it started increasing exponentially once we implemented the approach described above.
Sample code below:
try {
    AmazonS3 s3client = s3ClientFactory.createClient();
    xferManager = TransferManagerBuilder.standard()
        .withS3Client(s3client)
        .withMinimumUploadPartSize(6291456L)      // 6 * 1024 * 1024 (long), i.e. 6 MB
        .withMultipartUploadThreshold(6291456L)   // 6 * 1024 * 1024 (long), i.e. 6 MB
        .withExecutorFactory(() -> Executors.newFixedThreadPool(3))
        .build();
    String resumableTargetFile = "/path/to/resumableTargetFile";
    Upload upload = xferManager.upload(putRequest, new S3ProgressListener() {
        ExecutorService executor = Executors.newFixedThreadPool(1);

        @Override
        public void progressChanged(ProgressEvent progressEvent) {
            double pct = progressEvent.getBytesTransferred() * 100.0 / progressEvent.getBytes();
            LOGGER.info("Upload status for file - " + fileName + " is: " + Double.toString(pct) + "%");
            switch (progressEvent.getEventType()) {
                case TRANSFER_STARTED_EVENT:
                    LOGGER.info("Started uploading file {} to S3", fileName);
                    break;
                case TRANSFER_COMPLETED_EVENT:
                    LOGGER.info("Completed uploading file {} to S3", fileName);
                    break;
                case TRANSFER_CANCELED_EVENT:
                    LOGGER.warn("Upload of file {} to S3 was aborted", fileName);
                    break;
                case TRANSFER_FAILED_EVENT:
                    LOGGER.error("Failed uploading file {} to S3", fileName);
                    break;
                default:
                    break;
            }
        }

        @Override
        public void onPersistableTransfer(final PersistableTransfer persistableTransfer) {
            executor.submit(() -> {
                saveTransferState(persistableTransfer, resumableTargetFile);
            });
        }
    });
    UploadResult uploadResult = upload.waitForUploadResult();
    streamMD5 = uploadResult.getETag();
    if (upload.isDone()) {
        LOGGER.info("File {} uploaded successfully to S3 bucket {}", fileNameKey, bucketName);
    }
} catch (AmazonServiceException ase) {
    // The call was transmitted successfully, but Amazon S3 couldn't process
    // it, so it returned an error response.
    LOGGER.error("AmazonServiceException occurred: " + ase.getMessage());
} catch (SdkClientException sdce) {
    // Amazon S3 couldn't be contacted for a response, or the client
    // couldn't parse the response from Amazon S3.
    LOGGER.error("SdkClientException occurred: " + sdce.getMessage());
} catch (AmazonClientException ace) {
    LOGGER.error("AWS Exception occurred: " + ace.getMessage());
} catch (Exception e) {
    LOGGER.error("Exception occurred during files processing: " + e.getMessage());
} finally {
    xferManager.shutdownNow(true);
    return streamMD5;
}
Has anyone faced a similar issue? Any input regarding it would be appreciated.
Although, per the AWS documentation, closing the TransferManager with TransferManager.shutdownNow(true) should also close its related child objects, we found that the ExecutorService spawned within the S3ProgressListener used for the resumable transfer was never closed when the TransferManager was shut down.
Once we closed that executor explicitly by calling executor.shutdown(), the issue with the number of open processes growing exponentially was resolved.
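In other words, the listener's executor has to be shut down alongside the TransferManager. A minimal sketch of the fix, with the progress-event switch and exception handling omitted for brevity (the field name progressExecutor is illustrative):

// Create the executor outside the anonymous listener so it can be shut down later.
ExecutorService progressExecutor = Executors.newFixedThreadPool(1);
try {
    Upload upload = xferManager.upload(putRequest, new S3ProgressListener() {
        @Override
        public void progressChanged(ProgressEvent progressEvent) {
            // progress logging as in the original code
        }

        @Override
        public void onPersistableTransfer(final PersistableTransfer persistableTransfer) {
            progressExecutor.submit(() -> saveTransferState(persistableTransfer, resumableTargetFile));
        }
    });
    upload.waitForUploadResult();
} finally {
    xferManager.shutdownNow(true);
    progressExecutor.shutdown();   // explicitly release the listener's worker thread
}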

ChronicleQueue - Unable to cleanup older CQ4 files

I am using ChronicleQueue and I have only one writer/reader. I want to clean up each CQ4 file as soon as the reader is done with it. The following code wasn't able to remove the file; is the file reference still held by Chronicle Queue during the onReleased() event?
public class ChronicleFactory {
    public SingleChronicleQueue createChronicle(String instance, String persistenceDir, RollCycles rollCycles) {
        SingleChronicleQueue chronicle = null;
        String thisInstance = instance;
        try {
            chronicle = SingleChronicleQueueBuilder.binary(persistenceDir).rollCycle(rollCycles).storeFileListener(new StoreFileListener() {
                @Override
                public void onReleased(int i, File file) {
                    String currentTime = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"));
                    System.out.println(instance + "> " + currentTime + ": " + Thread.currentThread().getName() + " onReleased called for file: " + file.getAbsolutePath() + " for cycle: " + i);
                    if (instance.equals("Reader")) {
                        System.out.println("Removing previous CQ file: " + file.getName() + ", deleted? " + file.delete()); // ==> Not able to delete the file !!!
                    }
                }
                .....
I don't know whether this is an optimal solution (please let me know if there is a cleaner way), but the following works, except that it has to loop a few times before deleting: the file seems to actually be released a few milliseconds later, and I see "file is closed" printed a few times before the delete succeeds.
@Override
public void onAcquired(int cycle, File file) {
    String currentTime = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"));
    System.out.println(instance + "> " + currentTime + ": " + Thread.currentThread().getName() + " onAcquired called for file: " + file.getAbsolutePath() + " for cycle: " + cycle);
    if (prevCycleFile != null && instance.equals("Reader")) {
        System.out.println(">>> Trying to remove: " + prevCycleFile);
        boolean bDelete;
        while ((bDelete = prevCycleFile.delete()) == false) {
            System.out.println(">>> file is closed");
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
        System.out.println(">>> Removing previous CQ file: " + prevCycleFile.getAbsolutePath() + ", deleted? " + bDelete);
    }
}
It's reasonable to expect a small delay between the Queue reporting that a file is released, and the OS actually detecting/acting upon it.
The StoreFileListener is invoked on the same thread that is reading or writing to the queue, so you will incur latency while the while loop is executing. You'll get better results by handing off the work of closing files to another thread (e.g. via a j.u.c.ConcurrentLinkedQueue).
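A rough sketch of that handoff (the pendingDeletes queue, the cleaner executor, and the one-second retry interval are illustrative choices, not Chronicle API):

// Files released by the reader are queued here and deleted by a separate thread,
// so the thread reading the queue never sleeps in a retry loop.
private final Queue<File> pendingDeletes = new ConcurrentLinkedQueue<>();
private final ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();

// In the StoreFileListener (reader instance only): just enqueue the released file.
@Override
public void onReleased(int cycle, File file) {
    pendingDeletes.add(file);
}

// Started once, e.g. when the factory creates the reader queue:
cleaner.scheduleWithFixedDelay(() -> {
    File f;
    while ((f = pendingDeletes.peek()) != null) {
        if (f.delete() || !f.exists()) {
            pendingDeletes.poll();   // deleted (or already gone), drop it from the queue
        } else {
            break;                   // still held open; retry on the next tick
        }
    }
}, 1, 1, TimeUnit.SECONDS);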

Spring data redis - listen to expiration event

I would like to listen to expiration events with KeyExpirationEventMessageListener, but I can't find an example.
Does anyone know how to do it using Spring Boot 1.4.3 and Spring Data Redis?
I am currently doing this:
JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost");
this.jedis = pool.getResource();
this.jedis.psubscribe(new JedisPubSub() {
    @Override
    public void onPMessage(String pattern, String channel, String message) {
        System.out.println("onPMessage pattern " + pattern + " " + channel + " " + message);
        List<Object> txResults = redisTemplate.execute(new SessionCallback<List<Object>>() {
            public List<Object> execute(RedisOperations operations) throws DataAccessException {
                operations.multi();
                operations.opsForValue().get("val:" + message);
                operations.delete("val:" + message);
                return operations.exec();
            }
        });
        System.out.println(txResults.get(0));
    }
}, "__keyevent@0__:expired");
And I would like to use Spring instead of Jedis directly.
Regards
Don't use KeyExpirationEventMessageListener as it triggers RedisKeyExpiredEvent which then leads to a failure in RedisKeyValueAdapter.onApplicationEvent.
Rather use RedisMessageListenerContainer:
@Bean
RedisMessageListenerContainer keyExpirationListenerContainer(RedisConnectionFactory connectionFactory) {
    RedisMessageListenerContainer listenerContainer = new RedisMessageListenerContainer();
    listenerContainer.setConnectionFactory(connectionFactory);
    listenerContainer.addMessageListener((message, pattern) -> {
        // event handling comes here
    }, new PatternTopic("__keyevent@*__:expired"));
    return listenerContainer;
}
RedisMessageListenerContainer runs all notifications on its own thread.
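One prerequisite the snippet above relies on: Redis only publishes __keyevent@*__:expired messages when keyspace notifications are enabled, e.g. notify-keyspace-events Ex in redis.conf or via CONFIG SET. A hedged sketch of setting it from the application at startup, assuming the server allows CONFIG SET (some managed Redis services do not):

// Illustrative only: enable expired-key notifications programmatically.
RedisConnection connection = connectionFactory.getConnection();
try {
    connection.setConfig("notify-keyspace-events", "Ex");
} finally {
    connection.close();
}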

Apache CXF | JAX RS LoggingOutInterceptor - Access HttpServletRequest object

Folks,
I'm using Apache CXF (JAX-RS)'s LoggingInInterceptor and LoggingOutInterceptor to log the request and response objects to my web service and also to log the response time.
For this, I have extended both of these classes and done the relevant configuration in the appropriate XML files. With this, I was able to log the request and response objects.
However, I also want to log the request URL in both these interceptors. I was able to get the HttpServletRequest object (Inside the LoggingInInterceptor) using the following:
HttpServletRequest request = (HttpServletRequest)message.get(AbstractHTTPDestination.HTTP_REQUEST);
Then, from the request object, I was able to get the request URL (a REST URL in my case). However, I was not able to get the request object in the LoggingOutInterceptor using this code (or by any other means).
Here is a summary of the issue:
I need to access the request URI inside the LoggingOutInterceptor (using the HttpServletRequest object, perhaps?).
Would appreciate any help on this.
Update: Adding the interceptor code.
public class StorefrontRestInboundInterceptor extends LoggingInInterceptor {

    /**
     * constructor.
     */
    public StorefrontRestInboundInterceptor() {
        super();
    }

    @Override
    public void handleMessage(final Message message) throws Fault {
        HttpServletRequest httpRequest = (HttpServletRequest) message.get(AbstractHTTPDestination.HTTP_REQUEST);
        if (isLoggingRequired()) {
            String requestUrl = (String) message.getExchange().get("requestUrl");
            Date requestTime = timeService.getCurrentTime();
            LOG.info("Performance Monitor started for session id:" + customerSession.getGuid());
            LOG.info(httpRequest.getRequestURI() + " Start time for SessionID " + customerSession.getGuid() + ": "
                + requestTime.toString());
        }
        try {
            InputStream inputStream = message.getContent(InputStream.class);
            CachedOutputStream outputStream = new CachedOutputStream();
            IOUtils.copy(inputStream, outputStream);
            outputStream.flush();
            message.setContent(InputStream.class, outputStream.getInputStream());
            LOG.info("Request object for " + httpRequest.getRequestURI() + " :" + outputStream.getInputStream());
            inputStream.close();
            outputStream.close();
        } catch (Exception ex) {
            LOG.info("Error occurred reading the input stream for " + httpRequest.getRequestURI());
        }
    }
}
public class StorefrontRestOutboundInterceptor extends LoggingOutInterceptor {

    /**
     * logger implementation.
     */
    protected static final Logger LOG = Logger.getLogger(StorefrontRestOutboundInterceptor.class);

    /**
     * constructor.
     */
    public StorefrontRestOutboundInterceptor() {
        super(Phase.PRE_STREAM);
    }

    @Override
    public void handleMessage(final Message message) throws Fault {
        if (isLoggingRequired()) {
            LOG.info(requestUrl + " End time for SessionID " + customerGuid + ": " + (timeService.getCurrentTime().getTime() - requestTime)
                + " milliseconds taken.");
            LOG.info("Performance Monitor ends for session id:" + customerGuid);
        }
        OutputStream out = message.getContent(OutputStream.class);
        final CacheAndWriteOutputStream newOut = new CacheAndWriteOutputStream(out);
        message.setContent(OutputStream.class, newOut);
        newOut.registerCallback(new LoggingCallback(requestUrl));
    }

    public class LoggingCallback implements CachedOutputStreamCallback {

        private final String requestUrl;

        /**
         * @param requestUrl requestUrl.
         */
        public LoggingCallback(final String requestUrl) {
            this.requestUrl = requestUrl;
        }

        /**
         * @param cos CachedOutputStream.
         */
        public void onFlush(final CachedOutputStream cos) { //NOPMD
        }

        /**
         * @param cos CachedOutputStream.
         */
        public void onClose(final CachedOutputStream cos) {
            try {
                StringBuilder builder = new StringBuilder();
                cos.writeCacheTo(builder, limit);
                LOG.info("Request object for " + requestUrl + " :" + builder.toString());
            } catch (Exception e) {
                LOG.info("Error occurred writing the response object for " + requestUrl);
            }
        }
    }
}
Update: Since you are in the out chain, you may need to get the in message, from which you can obtain the request URI, because the request URI may be null on the outgoing response message.
You can try something like this to get the incoming message:
Message incoming = message.getExchange().getInMessage();
Then I think you should be able to get the Request URI using:
String requestURI = (String) incoming.get(Message.REQUEST_URI);
or
String endpointURI = (String) incoming.get(Message.ENDPOINT_ADDRESS);
If this is still not working, try running the interceptor in the PRE_STREAM phase by passing Phase.PRE_STREAM in your constructor.
You can also try to get the message from Interceptor Chain like this:
PhaseInterceptorChain chain = message.getInterceptorChain();
Message currentMessage = chain.getCurrentMessage();
HttpServletRequest req = (HttpServletRequest) currentMessage.get("HTTP.REQUEST");
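Putting the pieces together, a minimal sketch of what the outbound interceptor's handleMessage could do to log the request URI (assuming the exchange still carries the inbound message at that point):

@Override
public void handleMessage(final Message message) throws Fault {
    Message inMessage = message.getExchange().getInMessage();
    if (inMessage != null) {
        String requestUri = (String) inMessage.get(Message.REQUEST_URI);
        LOG.info("Response for request URI: " + requestUri);
    }
    // ... existing response logging / stream caching logic ...
}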