UDP server crashes when receiving datagram - udp

I'm discovering the Dart language. To practice, I'm trying to write a simple UDP server that logs every datagram it receives.
Here's my code so far:
import 'dart:async';
import 'dart:io';

class UDPServer {
  static final UDPServer _instance = new UDPServer._internal();

  // Socket used by the server.
  RawDatagramSocket _udpSocket;

  factory UDPServer() {
    return _instance;
  }

  UDPServer._internal();

  /// Starts the server.
  Future start() async {
    await RawDatagramSocket
        .bind(InternetAddress.ANY_IP_V4, Protocol.udpPort, reuseAddress: true)
        .then((RawDatagramSocket udpSocket) {
      _udpSocket = udpSocket;
      _udpSocket.listen((RawSocketEvent event) {
        switch (event) {
          case RawSocketEvent.READ:
            _readDatagram();
            break;
          case RawSocketEvent.CLOSED:
            print("Connection closed.");
            break;
        }
      });
    });
  }

  void _readDatagram() {
    Datagram datagram = _udpSocket.receive();
    if (datagram != null) {
      String content = new String.fromCharCodes(datagram.data).trim();
      String address = datagram.address.address;
      print('Received "$content" from $address');
    }
  }
}
It works great... partially. After logging a few datagrams, it just crashes. The number varies between 1 and ~5, and IntelliJ IDEA just logs "Lost connection to device.", nothing more. I tried to debug it but didn't find anything.
Does anyone have an idea why it crashes? Many thanks in advance.
EDIT: I forgot to mention that I was using this code in a Flutter application, and the crash seems to come from that. See this GitHub issue for more info.

Related

WxSocket (Was not declared in this scope)

Hello, if I try to build this code, I get an error and don't know what to do.
void wxsocket_test_finalFrame::OnServerStart(wxCommandEvent& WXUNUSED(event))
{
    // Create the address - defaults to localhost:0 initially
    wxIPV4address addr;
    addr.Service(3000);

    // Create the socket. We maintain a class pointer so we can
    // shut it down
    m_server = new wxSocketServer(addr);

    // We use Ok() here to see if the server is really listening
    if (! m_server->Ok())
    {
        return;
    }

    // Set up the event handler and subscribe to connection events
    m_server->SetEventHandler(*this, SERVER_ID);
    m_server->SetNotify(wxSOCKET_CONNECTION_FLAG);
    m_server->Notify(true);
}

void wxsocket_test_finalFrame::OnServerEvent(wxSocketEvent& WXUNUSED(event))
{
    // Accept the new connection and get the socket pointer
    wxSocketBase* sock = m_server->Accept(false);

    // Tell the new socket how and where to process its events
    sock->SetEventHandler(*this, SOCKET_ID);
    sock->SetNotify(wxSOCKET_INPUT_FLAG | wxSOCKET_LOST_FLAG);
    sock->Notify(true);
}

void wxsocket_test_finalFrame::OnSocketEvent(wxSocketEvent& event)
{
    wxSocketBase *sock = event.GetSocket();

    // Process the event
    switch(event.GetSocketEvent())
    {
        case wxSOCKET_INPUT:
        {
            char buf[10];

            // Read the data
            sock->Read(buf, sizeof(buf));

            // Write it back
            sock->Write(buf, sizeof(buf));

            // We are done with the socket, destroy it
            sock->Destroy();
            break;
        }
        case wxSOCKET_LOST:
        {
            sock->Destroy();
            break;
        }
    }
}
\wxsocket_test_finalMain.cpp|99|error: 'm_server' was not declared in this scope|
OS: Windows
Compiler: gcc version 8.1.0 (x86_64-posix-seh-rev0, Built by MinGW-W64 project)
I'm a bloody newbie and can't figure out what is happening here. Does someone have a clue?

AKKA.NET randomly sends messages to deadletters

I have an actor system that randomly fails because of messages being delivered to dead letters. By "fails" I mean the operation just does not complete:
Message [UploadFileFromDropboxSuccessMessage] from
akka://MySystem-Actor-System/user/...../DropboxToBlobSourceSubmissionUploaderActor/DropboxToBlobSourceFileUploaderActor--1
to
akka://MySystem-Actor-System/user/.../DropboxToBlobSourceSubmissionUploaderActor
was not delivered. [5] dead letters encountered. This logging can be
turned off or adjusted with configuration settings
'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
private void InitialState()
{
    Receive<UploadFileFromDropboxMessage>(msg =>
    {
        var sender = Sender;
        var self = Self;
        var parent = Parent;
        var logger = Logger;

        UploadFromDropboxToBlobStorageAsync(msg.File, msg.RelativeSourceRootDirectory, msg.BlobStorageDestinationRootPath).ContinueWith(o =>
        {
            if (!o.IsFaulted)
            {
                parent.Tell(new UploadFileFromDropboxSuccessMessage(msg.File.Path, o.Result), self);
            }
            else
            {
                parent.Tell(new UploadFileFromDropboxFailureMessage(msg.File.Path), self);
            }
        }, TaskContinuationOptions.ExecuteSynchronously);
    });
}
I also tried
private void InitialState()
{
    Receive<UploadFileFromDropboxMessage>(msg =>
    {
        try
        {
            var result = UploadFromDropboxToBlobStorageAsync(msg.File, msg.RelativeSourceRootDirectory, msg.BlobStorageDestinationRootPath).Result;
            Parent.Tell(new UploadFileFromDropboxSuccessMessage(msg.File.Path, result));
        }
        catch (Exception ex)
        {
            Parent.Tell(new UploadFileFromDropboxFailureMessage(msg.File.Path, ex));
        }
    });
}
This happens randomly, for both the success and the failure message. I have checked parent.IsNobody()... and it returns false. The documentation says that delivery to a local actor can fail:
if the mailbox does not accept the message (e.g. full BoundedMailbox)
if the receiving actor fails while processing the message or is
already terminated
I can't imagine a use case where either of these is true, but I also don't really know how to check them from the context of my current actor (even if it's just for logging purposes).
EDIT: Does Akka.NET have a limit on the total number of messages in the entire system?
EDIT: This happens, I would say, 10% of the time.
EDIT: I eventually discovered it was an actor way higher in the tree being killed. I am still confused why IsNobody() returned false if it is indeed dead.

How to start an API app in IntelliJ without Tomcat or any other type of server?

I have an API app created using the Vert.x framework. I am able to build the application, but not able to run it. When I try to run the app, I automatically get redirected to the "cucumber.api.cli.main not found" error. I delete the automatically generated run configuration, but the next time I try to run the app it gets generated again. What configuration should I run it with?
I have tried to research this, but most of the questions and answers ask me to set up a Tomcat or GlassFish server, which I don't want to do.
Here is my hello world application using IntelliJ IDEA and Vert.x.
Verticle:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.http.HttpServerResponse;
import io.vertx.ext.web.Router;

import java.util.logging.Logger;

public class MyVerticle extends AbstractVerticle {

    // Plain JDK logger instead of the internal com.sun.* import from the original.
    private static final Logger LOGGER = Logger.getLogger(MyVerticle.class.getName());

    @Override
    public void start(Future<Void> startFuture) throws Exception {
        Router router = Router.router(vertx);

        router.route("/").handler(routingContext -> {
            HttpServerResponse response = routingContext.response();
            response.putHeader("content-type", "text/html")
                    .end("<h1> Hello Vert.x </h1>");
        });

        router.route("/test").handler(routingContext -> {
            HttpServerResponse response = routingContext.response();
            response.putHeader("content-type", "text/html")
                    .end("<h2> This is another end point with same port </h2>");
        });

        // A single server on port 8070 serves both router endpoints ("/" and "/test");
        // binding a second server to the same port would fail at runtime.
        vertx.createHttpServer().requestHandler(router::accept)
                .listen(8070, http -> {
                    if (http.succeeded()) {
                        LOGGER.info("Started server at port 8070");
                    } else {
                        startFuture.fail(http.cause());
                    }
                });

        // A second, plain server on port 8888 completes the start future.
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
                    .putHeader("content-type", "text/plain")
                    .end("Hello from Vert.x!");
        }).listen(8888, http -> {
            if (http.succeeded()) {
                startFuture.complete();
                System.out.println("HTTP server started on port 8888");
            } else {
                startFuture.fail(http.cause());
            }
        });
    }

    @Override
    public void stop() {
        LOGGER.info("Shutting down application");
    }
}
Main method to deploy the verticle:
import com.testproject.starter.verticles.MyVerticle;
import io.vertx.core.Vertx;

public class MyVerticleTest {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        MyVerticle myVerticle = new MyVerticle();
        vertx.deployVerticle(myVerticle);
    }
}
Now you can open the following URLs:
1. http://localhost:8888
2. http://localhost:8070/test
The application doesn't require Tomcat to run.
Reference: https://vertx.io/docs/
Useful links: https://github.com/vert-x3/vertx-awesome

Second level retries in Rebus with RabbitMQ

I have a scenario where I call an API in one of my handlers, and that API can go down for up to 6 hours per month. Therefore, I designed retry logic with a 1-second retry, a 1-minute retry and a 6-hour retry. This all works fine, but then I found that long-delay retries are not a good option. Could you please share your experience with this?
Thank you!
If I were you, I would use Rebus' ability to defer messages to the future to implement this functionality.
You will need to track the number of failed delivery attempts manually though, by attaching and updating headers on the deferred message.
Something like this should do the trick:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Bus;
using Rebus.Handlers;
using Rebus.Pipeline;

public class YourHandler : IHandleMessages<MakeExternalApiCall>
{
    const string DeliveryAttemptHeaderKey = "delivery-attempt";

    readonly IMessageContext _context;
    readonly IBus _bus;

    public YourHandler(IMessageContext context, IBus bus)
    {
        _context = context;
        _bus = bus;
    }

    public async Task Handle(MakeExternalApiCall message)
    {
        try
        {
            await MakeCallToExternalWebApi();
        }
        catch (Exception exception)
        {
            var deliveryAttempt = GetDeliveryAttempt();

            if (deliveryAttempt > 5)
            {
                // Give up and move the message to the error queue
                await _bus.Advanced.TransportMessage.Forward("error");
            }
            else
            {
                var delay = GetNextDelay(deliveryAttempt);

                // Carry the attempt counter along in a custom header
                var headers = new Dictionary<string, string> {
                    {DeliveryAttemptHeaderKey, (deliveryAttempt + 1).ToString()}
                };

                await _bus.Defer(delay, message, headers);
            }
        }
    }

    int GetDeliveryAttempt() => _context.Headers.TryGetValue(DeliveryAttemptHeaderKey, out var deliveryAttempt)
        ? int.Parse(deliveryAttempt)
        : 0;

    TimeSpan GetNextDelay(int deliveryAttempt) => ...
}
When running in production, please remember to configure some kind of persistent timeout storage – e.g. SQL Server – otherwise your deferred messages will be lost in the event of a restart.
You can configure it like this (after having installed the Rebus.SqlServer package):
Configure.With(...)
    .(...)
    .Timeouts(t => t.StoreInSqlServer(...))
    .Start();

Exception thrown for large number of Vert.x verticles connecting to Redis

Trying to simulate a heavy-load scenario with Redis (default config only).
To keep it simple, when MULTI is issued, it is immediately executed and then the connection is closed.
import io.vertx.core.*;
import io.vertx.redis.RedisClient;
import io.vertx.redis.RedisOptions;
import io.vertx.redis.RedisTransaction;

class MyVerticle extends AbstractVerticle {
    private int index;

    public MyVerticle(int index) {
        this.index = index;
    }

    private void run2() {
        RedisClient client = RedisClient.create(vertx, new RedisOptions().setHost("127.0.0.1"));
        RedisTransaction tr = client.transaction();

        tr.multi(ev2 -> {
            if (ev2.succeeded()) {
                tr.exec(ev3 -> {
                    if (ev3.succeeded()) {
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    } else {
                        System.out.println("FAIL EXEC");
                        tr.close(i -> {
                            if (i.failed()) {
                                System.out.println("FAIL TR CLOSE");
                                client.close(j -> {
                                    if (j.failed()) {
                                        System.out.println("FAIL CLOSE");
                                    }
                                });
                            }
                        });
                    }
                });
            } else {
                System.out.println("FAIL MULTI");
                tr.close(i -> {
                    if (i.failed()) {
                        client.close(j -> {
                            if (j.failed()) {
                                System.out.println("FAIL CLOSE");
                            }
                        });
                    }
                });
            }
        });
    }

    @Override
    public void start(Future<Void> startFuture) {
        long timerID = vertx.setPeriodic(1, new Handler<Long>() {
            public void handle(Long aLong) {
                run2();
            }
        });
        // Signal that the verticle deployed successfully.
        startFuture.complete();
    }

    @Override
    public void stop(Future<Void> stopFuture) throws Exception {
        System.out.println("MyVerticle stopped!");
        stopFuture.complete();
    }
}

public class Periodic {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        for (int i = 0; i < 8000; i++) {
            vertx.deployVerticle(new MyVerticle(i));
        }
    }
}
Although the connections are closed properly, I still get warning errors.
All of them are thrown even before I put any more logic inside the MULTI.
2017-06-20 16:29:49 WARNING io.netty.util.concurrent.DefaultPromise notifyListener0 An exception was thrown by io.vertx.core.net.impl.ChannelProvider$$Lambda$61/1899599620.operationComplete()
java.lang.IllegalStateException: Uh oh! Event loop context executing with wrong thread! Expected null got Thread[globalEventExecutor-1-2,5,main]
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:316)
at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:193)
at io.vertx.core.net.impl.NetClientImpl.failed(NetClientImpl.java:258)
at io.vertx.core.net.impl.NetClientImpl.lambda$connect$5(NetClientImpl.java:233)
at io.vertx.core.net.impl.ChannelProvider.lambda$connect$0(ChannelProvider.java:42)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:233)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
Is there a reason for this error?
You'll continue to get errors, because you're testing the wrong things.
First of all, verticles are not fat coroutines. They are thin actors, which means that creating 500 of them won't speed things up, but will probably slow everything down, because the event loop still needs to switch between them.
Second, if you want to prepare for 2K concurrent requests, put your Vert.x application on one machine, and run wrk or a similar tool against it over the network.
Third, your Redis is also on the same machine. I hope that won't be the case in production, since Redis will compete with Vert.x for CPU.
Once everything is set up correctly, I believe you'll be able to handle 10K requests quite easily. I've seen Vert.x handle 8K requests on modest machines with PostgreSQL.
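To make the first point concrete, here is a minimal sketch of deploying a small, fixed number of verticle instances through DeploymentOptions instead of constructing 8000 of them by hand. It assumes Vert.x 3.x, and it assumes MyVerticle is given a public no-argument constructor so it can be deployed by class name (the original takes an index); the Deploy class name and the instance count are just placeholders for illustration.

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Deploy {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Each verticle instance is pinned to an event loop, so one instance
        // per available core already keeps every core busy.
        int instances = Runtime.getRuntime().availableProcessors();

        // Deploying by class name requires a no-argument constructor on
        // MyVerticle (an assumption here, see the lead-in above).
        vertx.deployVerticle("MyVerticle",
                new DeploymentOptions().setInstances(instances),
                res -> {
                    if (res.succeeded()) {
                        System.out.println("Deployed " + instances + " instances");
                    } else {
                        res.cause().printStackTrace();
                    }
                });
    }
}

Combined with the second point (running wrk or a similar load generator from a separate machine), a setup like this gives a much more realistic picture of how much concurrency the application can actually handle.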