releasing Jedis pool - redis

In most examples, including this Jedis example, the Jedis resource is created in the try-with-resources parentheses, which I believe disposes of it:
try (Jedis jedis = jedisPool.getResource()) {
    // some code
}
For me this is not possible because I need to throw errors while still using the Jedis object, depending on the result from Redis. So for me it is just:
Jedis jedis = jedisPool.getResource();
The question is how best to dispose of the object. Is it jedis.disconnect()? jedis.quit()? jedis = null?
There are some similar questions, but none exactly the same: one about an error it gives, another about increasing the pool size, another about threading.

Simply jedis.close(). See Jedis - When to use returnBrokenResource()
You used to be expected to use jedisPool.returnBrokenResource(jedis) or jedisPool.returnResource(jedis), but jedis.close() takes care of it.
See Jedis.java.
Jedis jedis = null;
try {
    jedis = jedisPool.getResource();
    ...
} catch (JedisConnectionException e) {
    ...
} catch (Exception e) {
    ...
} finally {
    if (jedis != null) {
        jedis.close();
    }
}
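As a side note, try-with-resources does not prevent you from throwing your own exceptions based on what Redis returns; jedis.close() is still called on the way out, since Jedis implements Closeable in recent versions. A minimal sketch (the key name and the exception type are just illustrations):

try (Jedis jedis = jedisPool.getResource()) {
    String value = jedis.get("some-key");
    if (value == null) {
        // throwing here is fine: close() still runs and returns the resource to the pool
        throw new IllegalStateException("expected key is missing");
    }
    // ... use value ...
}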

Related

Merging Mono and Flux in Spring WebFlux

Let's say I have a method store(Flux<DataBuffer> bufferFlux) which receives some data as a flux of DataBuffers, calculates an identifier, creates an AsynchronousFileChannel and then uses DataBufferUtils to write the data to the channel.
I started like this. Please note that the following code will not work; it should just illustrate how I create a FileChannel and how I would like to write the data, while releasing the used buffers and closing the channel afterwards.
public Mono<Void> store(Flux<DataBuffer> bufferFlux) {
    var channelMono = Mono.defer(() -> {
        try {
            log.info("opening file {}", filePath);
            return Mono.just(AsynchronousFileChannel
                    .open(filePath, StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE));
        } catch (IOException ex) {
            log.error("error opening file", ex);
            return Mono.error(ex);
        }
    });

    // calculate identifier
    // store buffers to AsynchronousFileChannel
    return DataBufferUtils
            .write(bufferFlux, fileChannel)
            .doOnNext(DataBufferUtils.releaseConsumer())
            .doFinally(f -> {
                try {
                    fileChannel.close();
                } catch (IOException ioException) {
                    log.error("error closing file channel", ioException);
                }
            })
            .then();
}
The problem is that I just started with reactive programming and have no clue how I could bring these two building blocks together, so that
the data is written to the channel
all buffers are gracefully released
the channel is closed after writing the data
the whole operation just signals complete or error (I guess this is what Mono<Void> is used for)
Can anyone help me choose the right operators or point me to a conceptual problem (perhaps there is a good reason why I cannot find a suitable operator)? :)
Thank you!
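One way to tie these together is Reactor's Mono.using, which couples the channel's lifecycle to the write. A minimal sketch under the question's assumptions (filePath and log come from the surrounding class), not a verified answer:

public Mono<Void> store(Flux<DataBuffer> bufferFlux) {
    return Mono.using(
            // resource supplier: open the channel; a thrown IOException becomes an error signal
            () -> AsynchronousFileChannel.open(filePath,
                    StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE),
            // resource user: write all buffers to the channel, releasing each one as it is written
            channel -> DataBufferUtils.write(bufferFlux, channel)
                    .doOnNext(DataBufferUtils.releaseConsumer())
                    .then(),
            // cleanup: close the channel on complete, error or cancellation
            channel -> {
                try {
                    channel.close();
                } catch (IOException ex) {
                    log.error("error closing file channel", ex);
                }
            });
}

The cleanup consumer runs however the write terminates, which covers releasing and closing, and .then() reduces the chain to the complete-or-error signal that Mono<Void> expresses. Newer Spring versions also have a DataBufferUtils.write(Publisher, Path, OpenOption...) overload that performs the whole open/write/close sequence and returns Mono<Void> directly.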

Disable JMX/MBeans in JVM

I have two Debezium SQL Server connectors that have to connect to one database and publish two different tables. Their name and database.history.kafka.topic values are unique. Still, when adding the second one (using a POST request) I get the exceptions below. I don't want to use a unique value for database.server.name, which counterintuitively is used as the metric name.
java.lang.RuntimeException: Unable to register the MBean 'debezium.sql_server:type=connector-metrics,context=schema-history,server=mydatabase'
Caused by: javax.management.InstanceAlreadyExistsException: debezium.sql_server:type=connector-metrics,context=schema-history,server=mydatabase
We won't be using JMX/MBeans, so it's okay to disable it, but the question is how. If there is a common way to do it for the JVM, please advise.
I can even see the code below in Debezium where it registers an MBean. Looking just at the first two lines, it seems one way to bypass this issue is to force ManagementFactory.getPlatformMBeanServer() to return null. So another way of asking the same question may be: how can I force ManagementFactory.getPlatformMBeanServer() to return null?
public synchronized void register() {
    try {
        final MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
        if (mBeanServer == null) {
            LOGGER.info("JMX not supported, bean '{}' not registered", name);
            return;
        }
        // During connector restarts it is possible that Kafka Connect does not manage
        // the lifecycle perfectly. In that case it is possible the old metric MBean is still present.
        // There will be multiple attempts executed to register new MBean.
        for (int attempt = 1; attempt <= REGISTRATION_RETRIES; attempt++) {
            try {
                mBeanServer.registerMBean(this, name);
                break;
            }
            catch (InstanceAlreadyExistsException e) {
                if (attempt < REGISTRATION_RETRIES) {
                    LOGGER.warn(
                            "Unable to register metrics as an old set with the same name exists, retrying in {} (attempt {} out of {})",
                            REGISTRATION_RETRY_DELAY, attempt, REGISTRATION_RETRIES);
                    final Metronome metronome = Metronome.sleeper(REGISTRATION_RETRY_DELAY, Clock.system());
                    metronome.pause();
                }
                else {
                    LOGGER.error("Failed to register metrics MBean, metrics will not be available");
                }
            }
        }
        // If the old metrics MBean is present then the connector will try to unregister it
        // upon shutdown.
        registered = true;
    }
    catch (JMException | InterruptedException e) {
        throw new RuntimeException("Unable to register the MBean '" + name + "'", e);
    }
}
You should use a single Debezium SQL Server connector for this, and use the table.include.list property on the connector to list the two tables you want to capture.
https://debezium.io/documentation/reference/stable/connectors/sqlserver.html#sqlserver-property-table-include-list
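A sketch of what the single connector's registration payload might look like; the connector name, host, credentials and table names below are placeholders, and the property names follow the pre-2.0 Debezium naming used in the question:

{
  "name": "sqlserver-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "sqlserver",
    "database.port": "1433",
    "database.user": "debezium",
    "database.password": "...",
    "database.dbname": "mydatabase",
    "database.server.name": "mydatabase",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.mydatabase",
    "table.include.list": "dbo.Orders,dbo.Customers"
  }
}

With both tables captured by one connector there is only one set of MBeans per server name, so the InstanceAlreadyExistsException never arises.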

Ignoring offers to coroutine channels after closing

Is there a good way to have channels ignore offers once closed without throwing an exception?
Currently, it seems like only a try/catch would work, as isClosedForSend isn't atomic.
Alternatively, is there a problem if I just never close a channel at all?
For my specific use case, I'm using channels as an alternative to Android livedata (as I don't need any of the benefits beyond sending values from any thread and listening from the main thread). In that case, I could listen to the channel through a producer that only sends values when I want to, and simply ignore all other inputs.
Ideally, I'd have a solution where the ReceiveChannel can still finish listening, but where SendChannel will never crash when offered a new value.
Channels throw this exception by design, as a means of correct communication.
If you absolutely must have something like this, you can use an extension function of this sort:
private suspend fun <E> Channel<E>.sendOrNothing(e: E) {
    try {
        this.send(e)
    } catch (closedException: ClosedSendChannelException) {
        println("It's fine")
    }
}
You can test it with the following piece of code:
val channel = Channel<Int>(capacity = 3)

launch {
    try {
        for (i in 1..10) {
            channel.sendOrNothing(i)
            delay(50)
            if (i == 5) {
                channel.close()
            }
        }
        println("Done")
    } catch (e: Exception) {
        e.printStackTrace()
    } finally {
        println("Finally")
    }
}

launch {
    for (c in channel) {
        println(c)
        delay(300)
    }
}
As you'll notice, the producer will start printing "It's fine" once the channel is closed, but the consumer will still be able to read the first 5 values.
Regarding your second question: it depends.
Channels don't have such a big overhead, and neither do suspended coroutines. But a leak is a leak, you know.
I ended up posting an issue to the repo, and the solution was to use BroadcastChannel. You can create a new ReceiveChannel through openSubscription, where closing it will not close the SendChannel.
This more accurately reflects RxJava's PublishSubject.

How to tell the Session to throw the error query? [NHibernate]

I made a test class against the repository methods shown below:
public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
{
    try
    {
        _session.Save(FileToAdd);
        _session.Flush();
    }
    catch (Exception e)
    {
        if (e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
            throw new ArgumentException("Unique Name must be unique");
        else
            throw e;
    }
}

public void RemoveFile(File FileToRemove)
{
    _session.Delete(FileToRemove);
    _session.Flush();
}
And the test class:
try
{
    Data.File crashFile = new Data.File();
    crashFile.UniqueName = "NonUniqueFileNameTest";
    crashFile.Extension = ".abc";
    repo.AddFile(crashFile);
    Assert.Fail();
}
catch (Exception e)
{
    Assert.IsInstanceOfType(e, typeof(ArgumentException));
}

// Clean up the file
Data.File removeFile = repo.GetFiles().Where(f => f.UniqueName == "NonUniqueFileNameTest").FirstOrDefault();
repo.RemoveFile(removeFile);
The test fails. When I stepped in to trace the problem, I found that when I call _session.Flush() right after _session.Delete(), it throws the exception, and if I look at the SQL it executes, it is actually submitting an "INSERT INTO" statement, which is exactly the SQL that causes the UNIQUE CONSTRAINT error. I tried to encapsulate both in a transaction, but the same problem still happens. Does anyone know the reason?
Edit
The other methods stay the same; I only added Evict as suggested:
public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
{
    try
    {
        _session.Save(FileToAdd);
        _session.Flush();
    }
    catch (Exception e)
    {
        _session.Evict(FileToAdd);
        if (e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
            throw new ArgumentException("Unique Name must be unique");
        else
            throw e;
    }
}
No difference to the result.
Call _session.Evict(FileToAdd) in the catch block. Although the save fails, FileToAdd is still a transient object in the session and NH will attempt to persist (insert) it the next time the session is flushed.
NHibernate Manual "Best practices" Chapter 22:
This is more of a necessary practice than a "best" practice. When an exception occurs, roll back the ITransaction and close the ISession. If you don't, NHibernate can't guarantee that in-memory state accurately represents persistent state. As a special case of this, do not use ISession.Load() to determine if an instance with the given identifier exists on the database; use Get() or a query instead.

Why is it necessary to create only a single instance of SessionFactory?

My code is:
static {
    try {
        sessionFactory = new AnnotationConfiguration().configure().buildSessionFactory();
    } catch (Throwable ex) {
        System.err.println("Initial SessionFactory creation failed." + ex);
        throw new ExceptionInInitializerError(ex);
    }
}
Here I created only a single instance of SessionFactory. The above code works correctly, but why do we create only a single instance?
The process of creating a session factory is expensive, performance-wise. The performance gain from using a single static session factory is at least an order of magnitude. You can certainly create a new factory on each request if you'd like, but it would be incredibly wasteful to do so.
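A minimal sketch of the usual pattern (the HibernateUtil class and its method names are illustrative; AnnotationConfiguration matches the version used in the question): build the factory once, then open cheap, short-lived Sessions from it.

public class HibernateUtil {

    // Built once per application: expensive to create, safe to share across threads.
    private static final SessionFactory SESSION_FACTORY = buildSessionFactory();

    private static SessionFactory buildSessionFactory() {
        try {
            return new AnnotationConfiguration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    // Sessions, by contrast, are cheap: open one per unit of work and close it when done.
    public static Session openSession() {
        return SESSION_FACTORY.openSession();
    }
}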