How to create a circuit breaker for RabbitMQ using Vert.x

I want to implement a circuit breaker for RabbitMQ using Vert.x: if RabbitMQ is down, the circuit should open based on the configuration.
I have done a separate POC for the Vert.x circuit breaker and am also able to connect to RabbitMQ using the Vert.x client.
Now I want to build the circuit breaker so that while the circuit is open the client does not push messages to the queue but stores them in a database, and once the circuit closes again it starts pushing the stored messages to MQ. Please help.
I used the links below to create a working POC:
https://vertx.io/docs/vertx-circuit-breaker/java/
https://vertx.io/docs/vertx-rabbitmq-client/java/
Snippet Used for Circuit Breaker
CircuitBreakerOptions options = new CircuitBreakerOptions()
        .setMaxFailures(0)
        .setTimeout(5000)
        .setMaxRetries(3)
        .setFallbackOnFailure(true);

CircuitBreaker breaker = CircuitBreaker.create("my-circuit-breaker", vertx, options)
        .openHandler(v -> {
            System.out.println("Circuit opened");
        }).closeHandler(v -> {
            System.out.println("Circuit closed");
        }).retryPolicy(retryCount -> retryCount * 100L);

breaker.executeWithFallback(promise -> {
    vertx.createHttpClient().getNow(8080, "localhost", "/", response -> {
        if (response.statusCode() != 200) {
            promise.fail("HTTP error");
        } else {
            response.exceptionHandler(promise::fail).bodyHandler(buffer -> {
                promise.complete(buffer.toString());
            });
        }
    });
}, v -> {
    // Executed when the circuit is open
    return "Hello (fallback)";
}, ar -> {
    // Do something with the result
    System.out.println("Result: " + ar.result());
});
Snippet Used for RabbitMQ Client
public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    RabbitMQOptions config = new RabbitMQOptions();
    // Each parameter is optional; the default is used if a parameter is not set
    config.setUser("guest");
    config.setPassword("guest");
    config.setHost("localhost");
    config.setPort(5672);
    // config.setVirtualHost("vhost1");
    // config.setConnectionTimeout(6000); // in milliseconds
    config.setRequestedHeartbeat(60); // in seconds
    // config.setHandshakeTimeout(6000); // in milliseconds
    // config.setRequestedChannelMax(5);
    // config.setNetworkRecoveryInterval(500); // in milliseconds
    // config.setAutomaticRecoveryEnabled(true);

    RabbitMQClient client = RabbitMQClient.create(vertx, config);

    CircuitBreakerOptions options = new CircuitBreakerOptions()
            .setMaxFailures(0)
            .setTimeout(5000)
            .setMaxRetries(0)
            .setFallbackOnFailure(true);
    CircuitBreaker breaker = CircuitBreaker.create("my-circuit-breaker", vertx, options)
            .openHandler(v -> {
                System.out.println("Circuit opened");
            }).closeHandler(v -> {
                System.out.println("Circuit closed");
            }).retryPolicy(retryCount -> retryCount * 100L);

    breaker.executeWithFallback(promise -> {
        vertx.createHttpClient().getNow(8080, "localhost", "/", response -> {
            if (response.statusCode() != 200) {
                promise.fail("HTTP error");
            } else {
                response.exceptionHandler(promise::fail).bodyHandler(buffer -> {
                    promise.complete(buffer.toString());
                });
            }
        });
    }, v -> {
        // Executed when the circuit is open
        return "Hello (fallback)";
    }, ar -> {
        // Do something with the result
        System.out.println("Result: " + ar.result());
    });

    client.start(rh -> {
        if (rh.failed()) {
            System.out.println("failed");
        } else {
            for (int i = 0; i < 5; i++) { // was "i > 5", which never entered the loop
                System.out.println(rh.succeeded());
                System.out.println("client : " + client.isConnected());
                breaker.executeWithFallback(promise -> {
                    if (!client.isConnected()) {
                        promise.fail("MQ client is not connected");
                    } else {
                        promise.complete(); // must complete on success, or the breaker times out
                    }
                }, v -> {
                    // Executed when the circuit is open
                    return "Hello (fallback)";
                }, ar -> {
                    // Do something with the result
                    System.out.println("Result: " + ar.result());
                });

                JsonObject message = new JsonObject().put("body", "Hello RabbitMQ, from Vert.x !");
                client.basicPublish("eventExchange", "order.created", message, pubResult -> {
                    if (pubResult.succeeded()) {
                        System.out.println("Message published !");
                        // avoid Thread.sleep here: it blocks the Vert.x event loop
                    } else {
                        pubResult.cause().printStackTrace();
                    }
                });
            }
        }
    });
}
This works, and the circuit does open when RabbitMQ is down, but it does not look like the standard way of doing it with Vert.x.
Can you please suggest or give input on how I can implement it?
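One common approach matches what you describe: wrap every publish in the breaker, persist the message in the fallback, and drain the persisted messages once the circuit closes again. Here is a minimal, dependency-free sketch of that store-and-replay pattern; the class and method names are made up for illustration, and an in-memory deque stands in for the database table:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

// Sketch of "buffer while open, replay on close". The deque stands in for
// the database; `transport` stands in for basicPublish and returns false
// when the broker is unreachable.
class BufferingPublisher {
    private final Deque<String> store = new ArrayDeque<>(); // stand-in for the DB
    private final Predicate<String> transport;              // stand-in for basicPublish
    private boolean circuitOpen = false;
    private int failures = 0;
    private final int maxFailures;

    BufferingPublisher(Predicate<String> transport, int maxFailures) {
        this.transport = transport;
        this.maxFailures = maxFailures;
    }

    /** Publish through the breaker; buffer the message if the circuit is open. */
    void publish(String message) {
        if (circuitOpen) {           // fallback path: keep the message in the store
            store.add(message);
            return;
        }
        if (transport.test(message)) {
            failures = 0;            // successful publish resets the failure count
        } else {
            store.add(message);      // a failed publish is buffered too
            if (++failures > maxFailures) {
                circuitOpen = true;  // breaker opens
            }
        }
    }

    /** Call when the breaker reports the circuit closed again: drain the store. */
    void onCircuitClosed() {
        circuitOpen = false;
        failures = 0;
        while (!store.isEmpty() && transport.test(store.peek())) {
            store.poll();            // remove only after a successful replay
        }
    }

    int buffered() {
        return store.size();
    }
}
```

In the Vert.x version, publish would be the command passed to breaker.executeWithFallback, the fallback would insert the message into the database, and the breaker's closeHandler would trigger the replay of the stored rows.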

Related

How to get the value of another function synchronously

I am using the Ktor websocket module. When I send data to the client, how do I get data back from the client after this send?
val result = serverSession.send(json)
Here result is actually of type Unit, but I want to get a String back.
There are great examples on the official Ktor site.
If you are on the server side, check this link (https://ktor.io/docs/websocket.html#handle-single-session) and the example below.
webSocket("/echo") {
    send("Please enter your name")
    for (frame in incoming) {
        when (frame) {
            is Frame.Text -> {
                val receivedText = frame.readText()
                if (receivedText.equals("bye", ignoreCase = true)) {
                    close(CloseReason(CloseReason.Codes.NORMAL, "Client said BYE"))
                } else {
                    send(Frame.Text("Hi, $receivedText!"))
                }
            }
        }
    }
}
If you are on the client side, check this link (https://ktor.io/docs/websocket-client.html#example) and the example below.
client.webSocket(method = HttpMethod.Get, host = "127.0.0.1", port = 8080, path = "/echo") {
    while (true) {
        val othersMessage = incoming.receive() as? Frame.Text
        println(othersMessage?.readText())
        val myMessage = Scanner(System.`in`).next()
        if (myMessage != null) {
            send(myMessage)
        }
    }
}

Ktor responding outputStream causing io.netty.handler.timeout.WriteTimeoutException

I have a Ktor application that returns multiple files as a stream, but when the client has a slow internet connection the stream apparently fills up and blows up with an io.netty.handler.timeout.WriteTimeoutException.
kotlinx.coroutines.JobCancellationException: Parent job is Cancelling
Caused by: io.netty.handler.timeout.WriteTimeoutException: null
Is there a way to prevent that?
Here's my method:
suspend fun ApplicationCall.returnMultiFiles(files: List<StreamFile>) {
    log.debug("Returning Multiple Files: ${files.size}")
    val call = this // Just to make it more readable
    val bufferSize = 16 * 1024
    call.response.headers.append(
        HttpHeaders.ContentDisposition,
        ContentDisposition.Attachment
            .withParameter(ContentDisposition.Parameters.FileName, "${UUID.randomUUID()}.zip")
            .toString())
    call.respondOutputStream(ContentType.parse("application/octet-stream")) {
        val returnOutputStream = this.buffered(bufferSize) // Just to make it more readable
        ZipOutputStream(returnOutputStream).use { zipout ->
            files.forEach { record ->
                try {
                    zipout.putNextEntry(ZipEntry(record.getZipEntry()))
                    log.debug("Adding: ${record.getZipEntry()}")
                    record.getInputStream().use { fileInput ->
                        fileInput.copyTo(zipout, bufferSize)
                    }
                } catch (e: Exception) {
                    log.error("Failed to add ${record.getZipEntry()}", e)
                }
            }
        }
    }
}

Rabbitmq's priority queue

I have tested RabbitMQ's priority queue mechanism, and it only takes effect when the producer is started and publishes the messages before the consumer is started. How can I solve this problem?
Code snippet
consumer:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-priority", 10);
channel.queueDeclare(TEST_PRIORITY_QUEUE, true, false, false, args);
// omit ...
DeliverCallback deliverCallback = (consumerTag, delivery) -> {
    try {
        String message = new String(delivery.getBody(), "UTF-8");
        System.out.println("message=" + message);
        Thread.sleep(20 * 1000);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
    }
};
// ...
producer:
for (int i = 0; i < 20; i++) {
    String messagelow = "lowLevelMsg";
    channel.basicPublish(TEST_EXCHANGE_direct,
            "prikey",
            new BasicProperties.Builder().priority(1).build(),
            messagelow.getBytes());
}
String messagehigh = "HigherLevelMsg";
channel.basicPublish(TEST_EXCHANGE_direct,
        "prikey",
        new BasicProperties.Builder().priority(9).build(),
        messagehigh.getBytes());
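A priority queue can only reorder messages that are actually sitting in the queue. If the consumer is already running and the broker pushes each message out as soon as it arrives (no prefetch limit), there is never a backlog to reorder, so messages come out in publish order; a common fix is to limit the prefetch (for example channel.basicQos(1)) so messages accumulate in the queue, where the priority ordering applies. The effect can be illustrated without a broker, using java.util.PriorityQueue as a stand-in (the class below is purely illustrative):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;

// Stand-in for the broker: messages drain from the backlog in priority order,
// but only messages still sitting in the backlog can be reordered.
class PriorityDemo {
    static final class Msg {
        final String body;
        final int priority;
        Msg(String body, int priority) { this.body = body; this.priority = priority; }
    }

    // Consumer keeps up with the producer: each message is taken as soon as it
    // arrives, so the backlog never holds more than one message and priorities
    // have no effect.
    static List<String> fastConsumer(List<Msg> published) {
        Queue<Msg> backlog = newBacklog();
        List<String> delivered = new ArrayList<>();
        for (Msg m : published) {
            backlog.add(m);
            delivered.add(backlog.poll().body); // delivered immediately
        }
        return delivered;
    }

    // Consumer starts after everything was published: the backlog is full,
    // so the priority ordering actually applies.
    static List<String> lateConsumer(List<Msg> published) {
        Queue<Msg> backlog = newBacklog();
        backlog.addAll(published);
        List<String> delivered = new ArrayList<>();
        while (!backlog.isEmpty()) {
            delivered.add(backlog.poll().body);
        }
        return delivered;
    }

    private static Queue<Msg> newBacklog() {
        // higher numeric priority is delivered first
        return new PriorityQueue<>(Comparator.comparingInt((Msg m) -> m.priority).reversed());
    }
}
```

The same reasoning applies to the snippet above: because the consumer sleeps per message but has no prefetch limit, all twenty low-priority messages are pushed to it before the high-priority one is even published.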

How to render a StreamObserver in WebUI (ktor + freemarker)

How do I handle the output of a StreamObserver in FreeMarker?
I have the following controller code to subscribe to a stream channel.
else -> {
    try {
        //jsonResponse = gnhc.getRequestJsonOutput(pathlist, pretty = true)
        jsonResponseRaw = gnhc.subscribev1(pathlist, subId, writer).toString()
        jsonResponse = jsonResponseRaw
        application.log.debug("SDN_JSON_PROCESSOR: ${jsonResponse}")
    } catch (e: Exception) {
        jsonResponse = e.toString()
        application.log.error("Failed to set channel", e)
    } finally {
        gnhc.shutdownNow()
    }
}
}
call.respond(FreeMarkerContent("subscribe.ftl", mapOf("hostname" to hostname, "port" to port, "cmd" to cmd, "result" to jsonResponse, "rawresult" to jsonResponseRaw, "pathlist" to pathlist, "error" to error), etag = "e"))
}
The Observer is declared here:
try {
    // simple observer without writer and subId
    val sr: StreamObserver<Gnmi.SubscribeRequest> = stub.subscribe(GnmiStreamObserver(this))
    // Writer + Id
    //val sr: StreamObserver<Gnmi.SubscribeRequest> = stub.subscribe(StreamResponseWriter(_path, _id, _writer))
    sr.onNext(subRequest)
    waitCompleted()
    sr.onCompleted()
}

Completion port thread is leaking when the client terminates the connection

I have an ASP.NET Web API self-hosted application. The HttpSelfHostConfiguration is configured as below:
HttpSelfHostConfiguration config = new HttpSelfHostConfiguration("http://0.0.0.0:54781")
{
    TransferMode = TransferMode.StreamedResponse,
    MaxConcurrentRequests = 1000,
    SendTimeout = TimeSpan.FromMinutes(60),
};
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "{controller}",
    defaults: new { controller = "Default" }
);
HttpSelfHostServer server = new HttpSelfHostServer(config);
server.OpenAsync().Wait();
for (;;)
{
    int workerThreads, completionPortThreads;
    ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
    Console.WriteLine("Available Completion Port Threads = {0};", completionPortThreads);
    Console.Out.Flush();
    Thread.Sleep(2000);
}
There is an action that accepts HTTP GET requests, as below:
public class DefaultController : ApiController
{
    public HttpResponseMessage GET()
    {
        Console.WriteLine("Receive HTTP GET request");
        Func<Stream, HttpContent, TransportContext, Task> func = (stream, httpContent, transportContext) =>
        {
            return Monitor(httpContent, stream);
        };
        HttpResponseMessage response = Request.CreateResponse();
        response.StatusCode = HttpStatusCode.OK;
        response.Content = new PushStreamContent(func, new MediaTypeHeaderValue("text/plain"));
        return response;
    }

    async Task Monitor(HttpContent httpContent, Stream stream)
    {
        try
        {
            using (StreamWriter sw = new StreamWriter(stream, new UTF8Encoding(false)))
            {
                for (;;)
                {
                    sw.WriteLine(Guid.NewGuid().ToString());
                    sw.Flush();
                    await Task.Delay(1000);
                }
            }
        }
        catch (CommunicationException ce)
        {
            HttpListenerException ex = ce.InnerException as HttpListenerException;
            if (ex != null)
            {
                Console.WriteLine("HttpListenerException occurs, error code = {0}", ex.ErrorCode);
            }
            else
            {
                Console.WriteLine("{0} occurs : {1}", ce.GetType().Name, ce.Message);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("{0} occurs : {1}", ex.GetType().Name, ex.Message);
        }
        finally
        {
            stream.Close();
            stream.Dispose();
            httpContent.Dispose();
            Console.WriteLine("Dispose");
        }
    }
}
When I open the URL http://127.0.0.1:54781/, I see the progressive response coming.
But if the client terminates the connection while the server is sending the response, the completion port thread is taken up and never released.
This can bring down the application by exhausting the completion port thread pool.
After changing to OWIN self-hosting, the problem disappears. It seems to be a bug in System.Web.Http.SelfHost.
Here is the updated code:
class Program
{
    static void Main(string[] args)
    {
        var server = WebApp.Start<Startup>(url: "http://*:54781");
        for (;;)
        {
            int workerThreads, completionPortThreads;
            ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
            Console.WriteLine("Worker Threads = {0}; Available Completion Port Threads = {1};", workerThreads, completionPortThreads);
            Console.Out.Flush();
            Thread.Sleep(2000);
        }
    }
}
public class Startup
{
    // This code configures Web API. The Startup class is specified as a type
    // parameter in the WebApp.Start method.
    public void Configuration(IAppBuilder appBuilder)
    {
        // Configure Web API for self-host.
        HttpConfiguration config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "{controller}",
            defaults: new { controller = "Default" }
        );
        appBuilder.UseWebApi(config);
    }
}
For others coming to this post: there is a similar issue in System.Web.Http.SelfHost, which by default uses CPU cores * 100 threads, the same as the default ThreadPool limit, ending in a deadlock because Mono pre-creates all the threads.
The easiest way to get around this is to set the MONO_THREADS_PER_CPU environment variable to a value greater than 1, or to use the --server flag when running the Mono application.