I am running some tests that request data from a remote database. I have a gRPC client that calls a method on a gRPC server; the server uses EF to get the data and sends the result back to the client.
In my case I get about 3 MB of data, which is larger than the default maximum message size allowed on the channel.
I know I can work around this when creating the channel on the client, for example by raising the limits to 60 MB:
var channel = GrpcChannel.ForAddress("http://localhost:5223",
    new GrpcChannelOptions
    {
        MaxReceiveMessageSize = 62914560, // 60 MB
        MaxSendMessageSize = 62914560     // 60 MB
    });
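Note that the server enforces its own limits independently of the client's channel. In an ASP.NET Core gRPC service they can be raised when registering the service; a minimal sketch mirroring the 60 MB value above:
builder.Services.AddGrpc(options =>
{
    options.MaxReceiveMessageSize = 62914560; // 60 MB
    options.MaxSendMessageSize = 62914560;    // 60 MB
});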
But even if I increase the limit when creating the channel, I can't guarantee that no query will ever return more data than the maximum allowed.
So I would like to know how I can handle this.
In this case the method is unary, not a stream.
Thanks.
I thought the gun instance on the server was also one of the peers.
But when I put data on the server, the client peer can't get it.
Here is my simple test code.
global.gun.get('servertest').put('yes'); // at server side
gun.get('servertest').once(console.log); // at client side
And it prints undefined.
Please let me know how to use a gun instance on the server side.
On the server, run this to actually accept remote connections:
var server = require('http').createServer().listen(8080);
var gun = Gun({web: server});
On the client, run this to connect to your server:
var gun = Gun({peers: ["http://server-ip-or-hostname:8080/gun"]})
As a side note, even if you establish a peer connection to get your data, you still need to handle undefined, as once() might fire several times as data is coming in.
Relevant links:
https://gun.eco/docs/Installation#server
https://github.com/amark/gun/tree/master/examples
https://github.com/skiqh/gun-cli
EDIT:
To be more explicit about my side note above -- the once callback on your client getting undefined for non-local data is actually by design. It means the client does not have the requested data available yet. It will however request it from its peers, which will try to answer with what they themselves can resolve (locally or from their respective peers). These answers will trigger the callback again (if they got through the CRDT algorithm I think).
Getting undefined on the client could also mean the server's response timed out and GUN considered the request unanswered. You can prolong the waiting time with .once(callback_function, {wait: time_in_milliseconds}).
As per Hadar's answer, try using on() instead of once() to mitigate race conditions, i.e. your client requesting the data from the server before you actually wrote it. Once you have your data and don't want any more updates, you can unsubscribe with gun.get('servertest').off().
Also, it might be noteworthy that GUN instances are not magically linked; having two of them connected does not mean they are one and the same in any way. Conceptually, they are peers in a distributed system, which in GUN's case gives you eventual consistency with all the limits and tradeoffs associated with that.
@skiqh
Hello, thanks for your answer.
I initialized the gun instance properly on both server and client.
server
let server = https.createServer(options, app);
server.listen( port );
let gun = Gun({ file: 'data', web: server });
global.gun = gun; // <-- my gun instance on server side
global.gun.get('servertest').put('yes'); // <-- I tried to put data
// listening~~~~~
client
window.G = G;
let opt = {};
opt.store = RindexedDB(opt);
opt.localStorage = false;
opt.peers = ['https://my.link/gun'];
G.gun = Gun(opt); // <-- my gun instance on client
gun.get('servertest').once(console.log); // <-- it prints "undefined" even though I put the data on the server!
I really want to know how to use methods like .put(), .get(), .on(), etc. on the server side using a gun instance.
I tried this, but it failed, as shown by the result attached to my post.
Please let me know what I'm doing wrong and the correct way to do it.
Thank you
Try gun.on() instead of once(); on() will subscribe to all changes.
Your example should work if you run .once() only after you write something to the server.
Using gun.on() on the client should work regardless, and will trigger the moment the server writes something.
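A minimal sketch of that suggestion on the client side (same key as in the question):
// subscribe instead of doing a one-shot read; the callback fires on every
// update, including writes the server makes after the client connects
gun.get('servertest').on((data, key) => {
  console.log(key, data);
});
// later, once no more updates are wanted:
gun.get('servertest').off();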
We are using MassTransit with RabbitMQ for making RPCs from one component of our system to others.
Recently we hit a throughput limit on the client side, measured at about 80 completed responses per second.
While trying to investigate where the problem was, I found that requests were processed quickly by the RPC server and responses were put into the callback queue, but the queue was consumed at only 80 messages per second.
This limit exists only on the client side. Starting another process of the same client app on the same machine doubles the request throughput on the server side, but then I see two callback queues filled with messages, each being consumed at the same 80 messages per second.
We are using a single instance of IBus:
builder.Register(c =>
{
var busSettings = c.Resolve<RabbitSettings>();
var busControl = MassTransitBus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(new Uri(busSettings.Host), h =>
{
h.Username(busSettings.Username);
h.Password(busSettings.Password);
});
cfg.UseSerilog();
cfg.Send<IProcessorContext>(x =>
{
x.UseCorrelationId(context => context.Scope.CommandContext.CommandId);
});
}
);
return busControl;
})
.As<IBusControl>()
.As<IBus>()
.SingleInstance();
The send logic looks like this:
var busResponse = await _bus.Request<TRequest, TResult>(
destinationAddress: _settings.Host.GetServiceUrl<TCommand>(queueType),
message: commandContext,
cancellationToken: default(CancellationToken),
timeout: TimeSpan.FromSeconds(_settings.Timeout),
callback: p => { p.WithPriority(priority); });
Has anyone faced the problem of that kind?
My guess is that there is some programmatic limit in the response dispatch logic. It might be the max thread pool size, the size of a buffer, or the prefetch count of the response queue.
I tried playing with the .NET thread pool size, but nothing helped.
I'm fairly new to MassTransit and would appreciate any help with this problem.
I hope it can be fixed through configuration.
There are a few things you can try to optimize the performance. I'd also suggest checking out the MassTransit-Benchmark and running it in your environment - this will give you an idea of the possible throughput of your broker. It allows you to adjust settings like prefetch count, concurrency, etc. to see how they affect your results.
Also, I would suggest using one of the request clients to reduce the setup for each request/response. For example, create the request client once, and then use that same client for each request.
var serviceUrl = yourMethodToGetIt<TRequest>(...);
var client = Bus.CreateRequestClient<TRequest>(serviceUrl);
Then, use that IRequestClient<TRequest> instance whenever you need to perform a request.
Response<TResponse> response = await client.GetResponse<TResponse>(new TRequest());
Since you are just doing RPC, I'd highly recommend setting the receive endpoint queue to non-durable, to avoid writing RPC requests to disk. Also, adjust the bus prefetch count to a higher value (at least 2x the maximum number of concurrent requests you may have) to ensure that responses are always delivered directly to your awaiting response consumer (this is internal to how RabbitMQ delivers messages).
var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.PrefetchCount = 1000;
});
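And a sketch of what the non-durable receive endpoint could look like on the server side (the queue name and consumer type are placeholders, not from the original post):
var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost"), h => { });

    cfg.PrefetchCount = 1000; // keep well above the peak number of concurrent requests

    cfg.ReceiveEndpoint(host, "rpc-requests", e =>
    {
        e.Durable = false;    // don't persist RPC requests to disk
        e.AutoDelete = true;  // drop the queue when the service disconnects
        e.Consumer<ProcessorContextConsumer>(); // hypothetical consumer type
    });
});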
What is the advantage of using Source Streaming vs the regular way of handling requests? My understanding is that in both cases:
The TCP connection will be reused
Back-pressure will be applied between the client and the server
The only advantage of Source Streaming I can see is if there is a very large response and the client prefers to consume it in smaller chunks.
My use case is that I have a very long list of users (millions), and I need to call a service that performs some filtering on the users, and returns a subset.
Currently, on the server side I expose a batch API, and on the client, I just split the users into chunks of 1000, and make X batch calls in parallel using Akka HTTP Host API.
I am considering switching to HTTP streaming, but cannot quite figure out what the value would be.
You are missing one other huge benefit: memory efficiency. With a streamed pipeline (client → server → client), all parties safely process data without the risk of blowing up memory. This is particularly useful on the server side, where you always have to assume the clients may do something malicious...
Client Request Creation
Suppose the ultimate source of your millions of users is a file. You can create a stream source from this file:
val userFilePath : java.nio.file.Path = ???
val userFileSource = akka.stream.scaladsl.FileIO.fromPath(userFilePath)
This source can then be used to create your HTTP request, which will stream the users to the service:
import akka.http.scaladsl.model.HttpEntity.{Chunked, ChunkStreamPart}
import akka.http.scaladsl.model.{RequestEntity, ContentTypes, HttpRequest}
val httpRequest : HttpRequest =
  HttpRequest(uri = "http://filterService.io",
              entity = Chunked.fromData(ContentTypes.`text/plain(UTF-8)`, userFileSource))
This request will now stream the users to the service without reading the entire file into memory. Only chunks of data are buffered at a time; therefore, you can send a request with a potentially infinite number of users and your client will be fine.
Server Request Processing
Similarly, your server can be designed to accept a request with an entity that can potentially be of infinite length.
Your question says the service will filter the users; assume we have a filtering function:
val isValidUser : (String) => Boolean = ???
This can be used to filter the incoming request entity and create a response entity which will feed the response:
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.model.{ContentTypes, HttpResponse}
import akka.http.scaladsl.model.HttpEntity.Chunked
import akka.stream.scaladsl.Source
import akka.util.ByteString

val route = extractDataBytes { userSource =>
  // note: this assumes each incoming chunk holds exactly one user record
  val responseSource : Source[ByteString, _] =
    userSource
      .map(_.utf8String)
      .filter(isValidUser)
      .map(ByteString.apply)

  complete(HttpResponse(entity = Chunked.fromData(ContentTypes.`text/plain(UTF-8)`,
                                                  responseSource)))
}
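One caveat worth noting (an addition, not part of the original answer): chunk boundaries on the wire don't necessarily align with user records. If the users are newline-delimited, a framing stage can restore record boundaries before filtering; a sketch:
import akka.stream.scaladsl.Framing
import akka.util.ByteString

// re-frame the raw byte stream into one element per line before filtering
val userLines =
  userSource
    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true))
    .map(_.utf8String)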
Client Response Processing
The client can similarly process the filtered users without reading them all into memory. We can, for example, dispatch the request and send all of the valid users to the console:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher // ExecutionContext for the Future's foreach

Http()
  .singleRequest(httpRequest)
  .foreach { response =>
    response
      .entity
      .dataBytes
      .map(_.utf8String)
      .runForeach(println) // print each filtered user as it arrives
  }
I'm trying to use mediasoup to forward RTP streams with room.createRtpStreamer.
My problem is that the payload type (for OPUS) I get from producer.rtpParameters.codecs[i].payloadType is 111,
while the one I see on the actual RTP packets is 100 (observed in Wireshark).
I tried to set preferredPayloadType in my server's config, but it seems to make no difference.
Note:
if I hardcode 100 as the payload type for the OPUS stream, I can view/hear the stream using ffplay.
I'm using Chrome 55 (latest) and mediasoup 2.0.5 (latest).
Any help will be appreciated.
The Producer has the RTP parameters decided by the client (browser), so the PT of OPUS is 111 (the default value generated by Chrome).
But, once in the mediasoup server, the Consumers associated with that Producer use the RTP parameters given during room creation. So, if the codecs given to room = new server.Room(codecs) [1] have a preferredPayloadType field, it will be used within the Consumers (otherwise it will be randomly chosen by the server).
So, when you call room.createRtpStreamer() you provide a Producer, and the generated RtpStreamer [2] has an associated Consumer and PlainRtpTransport. You should therefore read rtpStreamer.consumer.rtpParameters rather than the producer's parameters.
[1] https://mediasoup.org/documentation/mediasoup/api/#server-Room
[2] https://mediasoup.org/documentation/mediasoup/api/#RtpStreamer
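A sketch of reading the Consumer's parameters (assuming the promise-based room.createRtpStreamer() shown in the v2 docs; variable names are illustrative):
// given the producer and options you already pass to createRtpStreamer:
room.createRtpStreamer(producer, options)
  .then((rtpStreamer) => {
    const opus = rtpStreamer.consumer.rtpParameters.codecs
      .find((c) => c.name === 'opus');
    console.log(opus.payloadType); // the PT actually used on the wire
  });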
You should have a look at the SDP of the call setup message and check whether you get 111 or 100 for the OPUS payload.
From there you can decide which part has the bug (Chrome or mediasoup).
In the call setup message (initiating the call), check the payload number of the OPUS codec.
The called party should respond with the same payload number if it accepts OPUS and then both parties should use the same payload number in the RTP packets.
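For reference, the OPUS payload mapping appears in the SDP offer/answer roughly like this (111 is the number Chrome typically assigns):
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
If both sides agree on 111 here but the RTP packets carry 100, the remapping is happening after negotiation.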
So I found that the payload type I get from producer.rtpParameters.codecs[i].payloadType was the original one, and that room.createRtpStreamer changes the payload type.
Ended up doing the following to resolve the issue:
// get the payload type from room.rtpCapabilities.codecs' preferredPayloadType for the specific codec
let payload = this.room.rtpCapabilities.codecs.find((c) => {
  return c.name === producer.rtpParameters.codecs[i].name;
}).preferredPayloadType;
For reasons outlined here I need to review a set of values from the querystring or formdata before each request (so I can perform some authentication). The keys are the same each time and should be present in each request; however, they will be located in the querystring for GET requests, and in the formdata for POST and others.
As this is for authentication purposes, it needs to run before the request is handled; at the moment I am using a MessageHandler.
I can work out whether I should be reading the querystring or formdata based on the method, and when it's a GET I can read the querystring OK using Request.GetQueryNameValuePairs(); however the problem is reading the formdata when it's a POST.
I can get the formdata using Request.Content.ReadAsFormDataAsync(), however formdata can only be read once, and when I read it here it is no longer available for the request (i.e. my controller actions get null models)
What is the most appropriate way to consistently and non-intrusively read querystring and/or formdata from a request before it gets to the request logic?
Regarding your question of which place would be better: in this case I believe an AuthorizationFilter would be better than a message handler, but either way the problem is related to reading the body multiple times.
After doing Request.Content.ReadAsFormDataAsync() in your message handler, can you try doing the following?
Stream requestBufferedStream = Request.Content.ReadAsStreamAsync().Result;

// Reset to 0: ReadAsFormDataAsync may have read the entire stream, leaving the
// position at the end, so parameter binding would read no bytes and you would
// see null models.
requestBufferedStream.Position = 0;
Note: whether a request's content can be read only a single time or multiple times depends on the host's buffer policy. By default the host's buffer policy is Buffered, in which case you can reset the position back to 0. If you explicitly make the policy Streamed, however, you cannot seek back to 0.
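For context, a minimal sketch of how this could look inside a DelegatingHandler (the handler name and the "apiKey" key are placeholders, not from the original question):
using System.Collections.Specialized;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class AuthKeysHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        string apiKey; // placeholder for whatever keys you need to inspect
        if (request.Method == HttpMethod.Get)
        {
            // GET: the values live in the querystring
            apiKey = request.GetQueryNameValuePairs()
                .FirstOrDefault(kvp => kvp.Key == "apiKey").Value;
        }
        else
        {
            // POST and others: the values live in the formdata
            NameValueCollection form = await request.Content.ReadAsFormDataAsync();
            apiKey = form["apiKey"];

            // rewind the buffered body so model binding can read it again
            var body = await request.Content.ReadAsStreamAsync();
            body.Position = 0;
        }

        // ... validate apiKey here and short-circuit with an error response if it fails ...
        return await base.SendAsync(request, cancellationToken);
    }
}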
What about using ActionFilterAttributes?
This code worked well for me:
public HttpResponseMessage AddEditCheck(Check check)
{
    // reach into the underlying HttpContext to read the form without consuming
    // the Web API request body
    var request = ((System.Web.HttpContextWrapper)Request.Properties
        .ToList<KeyValuePair<string, object>>().First().Value).Request;
    var i = request.Form["txtCheckDate"];

    return Request.CreateResponse(HttpStatusCode.OK);
}