What's the max size of an NSQ message?

I'm evaluating using NSQ, http://nsq.io/, for a specific project. The idea is to set up a data pipeline where each step is a job, and where the state ideally lives in the message body.
That got me thinking about the maximum message size. I can't find any documentation on the subject. Can it be any size? I guess performance will suffer if messages are big enough not to fit in memory.

As Aldo mentioned, the default maximum message size is 1 MB (1,048,576 bytes).
This is easy enough to change with the -max-msg-size switch for the NSQ daemon. Here is an example of how to use it:
nsqd --lookupd-tcp-address=127.0.0.1:4160 -max-msg-size=2097152
This tells the NSQ daemon (nsqd) that the maximum message size should be 2 MB (2,097,152 bytes) instead of the default.
Using this setting, you could indeed have a message that would fill up your memory, as you mentioned in a comment.
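If you want to sanity-check the raised limit end to end, one option is to publish through nsqd's HTTP API. Here is a minimal C# sketch; the topic name is a placeholder and the default HTTP port 4151 is an assumption about your setup:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class NsqLimitCheck
{
    static async Task Main()
    {
        // A 2 MB body should now be accepted; anything over -max-msg-size
        // is rejected by nsqd (the exact error text may vary by version).
        var payload = new byte[2 * 1024 * 1024];
        using var http = new HttpClient();
        var response = await http.PostAsync(
            "http://127.0.0.1:4151/pub?topic=pipeline", // hypothetical topic
            new ByteArrayContent(payload));
        Console.WriteLine((int)response.StatusCode + ": " +
                          await response.Content.ReadAsStringAsync());
    }
}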

As far as I can tell from the documentation, it's 1 MB by default.
I know it sounds like a lot, but I managed to hit this limit.
What I'm doing now is sending an NSQ message with the minimum amount of data about the event, and fetching the full data on the other end when I handle it.
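That pattern might look roughly like the sketch below; the topic name, the JSON shape, and the idea that the consumer resolves the ID against your own datastore are all assumptions for illustration:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class MinimalEventPublisher
{
    static async Task Main()
    {
        // Publish only a small reference; the handler on the other end
        // looks up the full record (DB, object store, etc.) by this ID.
        var reference = Encoding.UTF8.GetBytes("{\"event_id\":\"42\"}");
        using var http = new HttpClient();
        await http.PostAsync("http://127.0.0.1:4151/pub?topic=events",
                             new ByteArrayContent(reference));
    }
}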

Related

What is the maximum size of a dictionary I can send through WatchConnectivity?

I was sending a 0.5 MB dictionary from an iPhone to watchOS 2, but every time it gives a "message reply failed" error. It works correctly in watchOS 1. There are 700 objects in that dictionary.
Please help.
The payload size limit depends on how you send the content from the phone to the watch. You may want to consider breaking your dictionary into smaller chunks (smaller than 65 KB), or writing the dictionary to a file you can then send using WCSession's transferFile:metadata: method.
For more info, see this question and answer, and take a look at the docs for WCSession. The file size limits aren't documented and may change in the future.
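WatchConnectivity itself is Objective-C/Swift, but the chunking idea is language-agnostic. Here is a rough sketch in C# for illustration only; the 64 KB chunk size reflects the ~65 KB threshold mentioned above:

using System;
using System.Collections.Generic;

static class PayloadChunker
{
    // Split a serialized payload into chunks under the ~65 KB threshold,
    // to be sent one message at a time and reassembled on the receiver.
    public static IEnumerable<byte[]> Split(byte[] payload, int chunkSize = 64 * 1024)
    {
        for (int offset = 0; offset < payload.Length; offset += chunkSize)
        {
            int length = Math.Min(chunkSize, payload.Length - offset);
            var chunk = new byte[length];
            Array.Copy(payload, offset, chunk, 0, length);
            yield return chunk;
        }
    }
}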

Increasing timeout for LBAPI request

I've been working on an app that uses the LBAPI to gather all the leaf work items within our workspace when the app is first run. This is expected to take some time, seeing as there are over 25,000 items and I'm pulling several fields for each one. However, recently the requests seem to be timing out at around the 30-second mark. I would assume this is a setting within the SDK; however, I found no way to change the timeout anywhere in the documentation. To make matters worse, rather than returning an "unsuccessful" response to the callback function, there is no response at all, which makes exception handling much more difficult on my end.
I was wondering: is there in fact a way to increase this timeout? And if not, is there a more elegant way to catch that event, rather than simply setting a timer on my end as well and assuming that if it reaches zero without a response, there was an error?
Thanks!
The 30-second default is probably low for a 20k page size. Changing the pageSize to 10k with the limit set to Infinity may help. Also, given a Rally.data.WsapiDataStore or Rally.data.lookback.SnapshotStore, try:
store.getProxy().timeout = 60000; // timeout in milliseconds, i.e. 60 seconds

What are the limits of messages, queues and exchanges?

What are the allowed types of messages (strings, bytes, integers, etc.)?
What is the maximum size of a message?
What is the maximum number of queues and exchanges?
Theoretically anything can be stored/sent as a message. You actually don't want to store anything on the queues. The system works most efficiently if the queues are empty most of the time. You can send anything you want to the queue with two preconditions:
The thing you are sending can be converted to and from a bytestring
The consumer knows exactly what it is getting and how to convert it to the original object
Strings are pretty easy; they have a built-in method for converting to and from bytes. If you know it is a string, then you know how to convert it back. The best option is to use a markup format like XML, JSON, or YAML. That way you can convert objects to strings and back again to the original objects, and these formats work across programming languages, so your consumer can be written in a different language from your producer as long as it knows how to understand the object.
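For instance, the "serialize to a string, publish the bytes" approach might look like this with the RabbitMQ .NET client's classic IModel API; the queue name and payload here are placeholders:

using System.Text;
using RabbitMQ.Client;

class JsonPublisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        channel.QueueDeclare(queue: "orders", durable: true, exclusive: false,
                             autoDelete: false, arguments: null);

        // Any consumer that can parse JSON can reconstruct the object,
        // regardless of the language it is written in.
        var body = Encoding.UTF8.GetBytes("{\"orderId\": 42, \"total\": 9.99}");
        channel.BasicPublish(exchange: "", routingKey: "orders",
                             basicProperties: null, body: body);
    }
}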
I work in Java. I want to send complex messages with sub objects in the fields. I use my own message object. The message object has two additional methods toBytes and fromBytes that convert to and from the bytestream. I use routing keys that leave no doubt as to what type of message the consumer is receiving. The message is Serializable. This works fine, but is limiting as I can only use it with other Java programs.
The size of a message is limited by the memory on the server and, if it is persistent, by the free disk space as well. You probably do not want to send messages that are too big; it might be better to send a reference to a file or DB record.
You might also want to read up on their performance measures:
http://www.rabbitmq.com/blog/2012/04/17/rabbitmq-performance-measurements-part-1/
http://www.rabbitmq.com/blog/2012/04/25/rabbitmq-performance-measurements-part-2/
Queues are pretty lightweight; you will most likely be limited by the number of connections you have. This will depend on the server. Here is some info on a similar question:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2009-February/003042.html
What is the maximum size of a message?
It used to be 2 GiB before version 3.8.0:
%% Trying to send a term across a cluster larger than 2^31 bytes will
%% cause the VM to exit with "Absurdly large distribution output data
%% buffer". So we limit the max message size to 2^31 - 10^6 bytes (1MB
%% to allow plenty of leeway for the #basic_message{} and #content{}
%% wrapping the message body).
-define(MAX_MSG_SIZE, 2147383648).
Reference: https://github.com/rabbitmq/rabbitmq-common/blob/v3.7.21/include/rabbit.hrl#L279
It has been 512 MiB since version 3.8.0:
%% Max message size is hard limited to 512 MiB.
%% If user configures a greater rabbit.max_message_size,
%% this value is used instead.
-define(MAX_MSG_SIZE, 536870912).
Reference: https://github.com/rabbitmq/rabbitmq-common/blob/v3.8.0/include/rabbit.hrl#L238
1. See robthewolf's answer.
2. The max message size is 2 GB; however, performance tuning for messages of this size is not effective (see Max Message Size).
3. There is no hard limit imposed by the RabbitMQ server software on the number of queues; however, the hardware the server is running on may well limit this in practice.
3a. There is no queue length limit imposed by the server by default. You can, however, limit this through server-side policy (configuration) or client-side arguments; a client-side sketch follows below (see Max Queue Length).
There is more information and links in a related post.
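For the client-side cap mentioned in 3a, one option is a queue argument set at declaration time. A sketch with the RabbitMQ .NET client; the queue name and the 10,000-message cap are placeholders:

using System.Collections.Generic;
using RabbitMQ.Client;

class BoundedQueueDeclare
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // The x-max-length argument caps the queue at 10,000 messages;
        // by default the oldest messages are dropped once the cap is hit.
        channel.QueueDeclare(queue: "bounded", durable: true, exclusive: false,
            autoDelete: false,
            arguments: new Dictionary<string, object> { ["x-max-length"] = 10000 });
    }
}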

How should I decide the quotas for the Tridion core service binding?

I am connecting to the Tridion core service using a System.ServiceModel.WsHttpBinding. The service will only be used by authenticated users, and probably only by code which I control. I have to choose values for the following
MaxBufferPoolSize (default 524,288 bytes)
MaxReceivedMessageSize (default 65,536 bytes)
ReaderQuotas.MaxArrayLength (default 16,384)
ReaderQuotas.MaxBytesPerRead (default 4,096 bytes)
ReaderQuotas.MaxNameTableCharCount (default 16,384 characters)
ReaderQuotas.MaxStringContentLength (default 8,192 characters)
The code examples I have seen for using the Core Service invariably set at least some of these to values larger than the defaults, for example 4 MB. Is this because of known problems when other values, such as the defaults, are used?
MaxBufferPoolSize is there to allow you to prevent excessive garbage collections. Is this simply a matter of monitoring GCs and tuning based on that?
MaxReceivedMessageSize, MaxArrayLength and MaxBytesPerRead are there to defend against DoS attacks, so in my scenario, perhaps I can improve throughput by increasing these. Would a really large number help?
MaxNameTableCharCount seems to be there to prevent uncontrolled growth of something you might not want to grow uncontrolledly, so perhaps leaving the default would be a good thing.
The documentation on MaxStringContentLength doesn't specify what happens if you exceed the quota. Presumably ReadContentAsString will fail in some way, so perhaps this value should be large.
So - should I leave these values at their defaults? Will that cause me problems? Should I increase them to large values? Will that help with throughput etc., or is it more likely to cause other problems?
The general rule is to keep these values as small as possible, just enough for your code to work. If you take a look at the default config shipped with CoreService.dll, you'll see it has some of the values increased.
For example, if you expect to get large XML lists (or search results), you should increase MaxReceivedMessageSize. Keep in mind that you have control over the size of the list you will get by using the BaseColumns property of a filter.
If you prefer to use the GetList, GetSystemWideList and GetSearchResults methods over their XML counterparts, you will probably have to increase ReaderQuotas.MaxArrayLength together with MaxReceivedMessageSize. But please note that large arrays will be stored in memory.
I'm not sure you want to increase any of these values before you hit the limit. WCF is quite good at pointing you to the parameter you have to adjust.
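Programmatically, raising just those two quotas might look like the following sketch; the 4 MB figure echoes the question, and the rest of the binding (security mode, endpoint) is left to whatever your environment requires:

using System.ServiceModel;

class CoreServiceBindingFactory
{
    static WSHttpBinding CreateBinding()
    {
        // Raise only the quotas you actually hit; leave the rest at defaults.
        var binding = new WSHttpBinding
        {
            MaxReceivedMessageSize = 4 * 1024 * 1024
        };
        binding.ReaderQuotas.MaxStringContentLength = 4 * 1024 * 1024;
        binding.ReaderQuotas.MaxArrayLength = 4 * 1024 * 1024;
        return binding;
    }
}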
I'm afraid this is not really an answer to your questions... But, from my experience, I increased the values to more than the suggested defaults. I used 4 MB, as you already suggested. This was mainly because I was experiencing errors while communicating with the Core Service. They were related to the request/response sizes exceeding the allotted limits.
Moreover, in the case of Core Service transactionality, I saw more of these exceptions. It seems that the sizes of requests/responses increase quite a bit when using transactions. In my case, I was creating a batch of Components in one big transaction. If one Component failed to create, I would roll back the whole transaction.
Hope this helps.
I've been experimenting with the Core Service recently and have seen an XmlReader exception occur when trying to open a large TBB (C# fragment) using the following code:
using (var client = new CoreService.CoreService2010Client())
{
    var item = client.Read(tcmId, new ReadOptions());
    // More code
}
System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
As it says in the message, I had to up ReaderQuotas.MaxStringContentLength to fix this. So if you're working with any Building Blocks that have content bigger than 8 KB, expect this error.

How do I increase the reader quota on WCF service?

I don't expect anyone to be able to help out with this but let's give it a go.
I have a WinForms app that uses a WCF service to pull down a rather large JSON document serialised into a string. I have changed the client's reader quota on strings to 8192000 (arbitrary, but suitable for most cases) and put the service onto a custom binding with an explicit reader quota of 8192000.
Checking the service reference in Notepad by eye, the quota in the .svcinfo files is set to 8192, although this could be a red herring.
I'm at the end of my tether, I've followed every piece of advice I can find on Google:
http://www.haveyougotwoods.com/archive/2008/03/29/wcf-and-large-messages.aspx
http://msdn.microsoft.com/en-us/magazine/cc163394.aspx
to name but two, and all the suggested answers I could find on here, i.e.:
WCF service The maximum array length quota (16384) has been exceeded
WCF Maximum Message Size Quota
Maximum array length quota
and I looked at this:
http://wildermuth.com/2009/09/10/Using_Large_Message_Requests_in_Silverlight_with_WCF
which was a response to one of the above, or to one of the many other things I have looked at but not retrieved from my "Recently Closed Tabs" list.
Basically, I can't think of anything else to do to increase this limit, and yet upon first encountering a string longer than 64k the service still insists that the limits have not been altered at all.
So could anyone just give me a really basic step-by-step guide to altering this one setting for a WinForms app serialising and then deserialising JSON data as a string on either end of the transaction? A lot of the other advice has been about Silverlight or some other scenario, and for whatever reason it just fails to affect this case.
I tried the solution shown in the last article I linked to again, just to go over my previous work. This time, instead of preventing the WCF services from working at all (which is what had happened previously), it started to work and upped the limits.
I don't know what I was doing wrong the first time or what I did right this time... one of those things I guess.
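For anyone hitting the same wall, the shape of that fix is roughly the following sketch (not the article's exact code): the 8192000 figure is the value from the question, and both the client and the service need matching settings.

using System.ServiceModel.Channels;

class LargeStringBindingFactory
{
    static Binding Create()
    {
        // The encoder's reader quotas and the transport's message size limit
        // are separate knobs; leaving either at its default reproduces the
        // 8 KB / 64 KB quota errors.
        var encoding = new TextMessageEncodingBindingElement();
        encoding.ReaderQuotas.MaxStringContentLength = 8192000;

        var transport = new HttpTransportBindingElement
        {
            MaxReceivedMessageSize = 8192000
        };
        return new CustomBinding(encoding, transport);
    }
}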