RTI DDS - not enough available space in CDR buffer

I am playing back DDS data from a recorded database, and have written a Java program to listen for the data. I am able to receive most of the messages fine, but I am getting some consistent exceptions that look like the following:
PRESCstReaderCollator_storeSampleData:!deserialize
java.lang.IllegalStateException: not enough available space in CDR buffer
at com.rti.dds.cdr.CdrBuffer.checkSize(Unknown Source)
at com.rti.dds.cdr.CdrInputStream.readShortFromBigEndian(Unknown Source)
at com.rti.dds.cdr.CdrInputStream.deserializeAndSetCdrEncapsulation(Unknown Source)
at <my type>.deserialize_key_sample(<my type>TypeSupport.java:456)
at com.rti.dds.topic.TypeSupportImpl.deserialize_key(Unknown Source)
at com.rti.dds.topic.TypeSupportImpl.deserialize_keyI(Unknown Source)
Has anyone seen this or know what might cause this?
EDIT: I should also add that I am currently receiving DDS data via a replayed database, using rtireplay. I started receiving this error after dropping in a new replay configuration that I was given to use. So maybe the question is: what replay configuration settings could affect something like this? I am also posting the obfuscated #key fields in IDL, as requested:
struct MyType {
    Key1 key1; //#key
    Key2 key2; //#key
    ...
};

struct Key1 {
    long long m; //#key
    long long l; //#key
    ...
};

//key members only
struct Key2 {
    Key1 a; //#key
    ...
};

Although the stack trace is slightly different, I was able to reproduce a similar case with the following output:
Exception in thread "Thread-5" java.lang.IllegalArgumentException: string length (200)
exceeds maximum (10)
at com.rti.dds.cdr.CdrInputStream.readString(CdrInputStream.java:364)
at stringStructTypeSupport.deserialize_key_sample(stringStructTypeSupport.java:411)
at com.rti.dds.topic.TypeSupportImpl.deserialize_key(TypeSupportImpl.java:1027)
at com.rti.dds.topic.TypeSupportImpl.deserialize_keyI(TypeSupportImpl.java:965)
PRESCstReaderCollator_storeSampleData:!deserialize
Note that I am using 5.1.0 which is a bit more verbose in its error messages.
The conditions under which this occurred were the following:
The DataReader expected a type that had a different key-definition, or to be more precise, a key definition of a different size than what the DataWriter was producing. In my case, the DataWriter produced a string key attribute of 200 characters whereas the DataReader expected a string no longer than 10 characters.
The DataWriter had a QoS setting of protocol.serialize_key_with_dispose set to true, meaning that this inconsistent key is actually used (as opposed to its key hash) for determining the instance in case of a dispose().
In case you are using 5.1.0 or later: the publishing Participant had the QoS settings resource_limits.type_code_max_serialized_length and resource_limits.type_object_max_serialized_length both set to 0. This avoids communication of type information and therefore prevents detection of the inconsistency in the definitions. Older versions did not check for type consistency in the first place, even if these resource_limits were set to non-zero.
The error occurred at the moment that an instance was dispose()d.
In particular, protocol.serialize_key_with_dispose is not commonly changed, and it seems to be the only reason why this deserialize_key function would show up in your stack trace. If you check your rtireplay configuration and find that this particular setting is set to true, then it is highly likely that the scenario described here applies to your case.
The serialize_key_with_dispose setting allows for the case where the first sample ever received for a key value happens to be a dispose. This means that the instance is not yet known. Normally, the actual key values are not propagated with a dispose, just a hashed key. That might not be good enough to identify which instance the dispose is intended for. Setting this policy to true results in the full key value being propagated with a dispose. It is related to propagate_dispose_of_unregistered_instances. For more details, see section 6.5.3.5, Propagating Serialized Keys with Disposed-Instance Notifications, of the Connext User's Manual.
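If you want to inspect or change this in an XML QoS profile, the relevant element looks roughly like the sketch below. The element names follow the RTI Connext QoS XML conventions; exactly where the DataWriter QoS lives inside your rtireplay configuration depends on how that file is organized, so treat this as an illustration rather than a drop-in snippet.
<datawriter_qos>
    <protocol>
        <!-- When true, the full key (not just its hash) is serialized with a
             dispose, so a key-definition mismatch surfaces as a
             deserialize_key error on the subscribing side. -->
        <serialize_key_with_dispose>true</serialize_key_with_dispose>
    </protocol>
</datawriter_qos>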

Is VkDescriptorPoolSize struct really needed when creating descriptor pools?

I'm creating a descriptor pool with poolSizeCount == 0 and pPoolSizes == nullptr, and I can still allocate various numbers of descriptors of any type. There are no validation errors on Linux, only on Windows (but the code works).
Another case: I'm providing a VkDescriptorPoolSize with only one VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, yet I can allocate more VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER descriptors or even descriptors of other types (in this case no errors occur on either Linux or Windows).
Why is this happening?
It is not technically invalid usage to exceed the pool limits in general:
If a call to vkAllocateDescriptorSets would cause the total number of descriptor sets allocated from the pool to exceed the value of VkDescriptorPoolCreateInfo::maxSets used to create pAllocateInfo->descriptorPool, then the allocation may fail due to lack of space in the descriptor pool. Similarly, the allocation may fail due to lack of space if the call to vkAllocateDescriptorSets would cause the number of any given descriptor type to exceed the sum of all the descriptorCount members of each element of VkDescriptorPoolCreateInfo::pPoolSizes with a type member equal to that type.
Note the use of the word "may", which allows implementations to fail but doesn't require them to do so. This means that you're supposed to stay within those limits, but nobody's going to stop you if you exceed them and get away with it.
Now, it is a violation of valid usage to pass no sizes at all:
poolSizeCount must be greater than 0
And the appropriate validation layer should catch that. But outside of the layers, you just get undefined behavior, which can be "appears to work".
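For reference, here is a minimal C sketch of declaring the pool sizes up front so the allocation stays within what the specification guarantees. The counts are arbitrary, and device is assumed to be an already-created VkDevice.
#include <vulkan/vulkan.h>

VkDescriptorPool create_descriptor_pool(VkDevice device)
{
    /* Budget for 16 uniform-buffer descriptors spread across at most 16 sets. */
    VkDescriptorPoolSize pool_size = {
        .type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
        .descriptorCount = 16,
    };

    VkDescriptorPoolCreateInfo pool_info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
        .maxSets = 16,
        .poolSizeCount = 1,        /* must be greater than 0 per the valid usage rule above */
        .pPoolSizes = &pool_size,
    };

    VkDescriptorPool pool = VK_NULL_HANDLE;
    if (vkCreateDescriptorPool(device, &pool_info, NULL, &pool) != VK_SUCCESS) {
        /* handle the error */
    }
    return pool;
}
Staying within maxSets and the per-type descriptorCount budgets when calling vkAllocateDescriptorSets is then your responsibility, as the quoted paragraph explains.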

nvmlDeviceGetPowerManagementMode() always returning NVML_ERROR_INVALID_ARGUMENT?

I am writing a code to measure the power usage of an NVIDIA Tesla K20 GPU (Kepler architecture) periodically using the NVML API.
Variables:
nvmlReturn_t result;
nvmlEnableState_t pmmode;
nvmlDevice_t nvmlDeviceID;
unsigned int powerInt;
Basic code:
result = nvmlDeviceGetPowerManagementMode(nvmlDeviceID, &pmmode);
if (pmmode == NVML_FEATURE_ENABLED) {
    result = nvmlDeviceGetPowerUsage(nvmlDeviceID, &powerInt);
}
My issue is that nvmlDeviceGetPowerManagementMode is always returning NVML_ERROR_INVALID_ARGUMENT. I checked this.
The NVML API Documentation says that NVML_ERROR_INVALID_ARGUMENT is returned when either nvmlDeviceID is invalid or pmmode is NULL.
nvmlDeviceID is definitely valid because I am able to query its properties which match with my GPU. But I don't see why I should set the value of pmmode to anything, because the documentation says that it is a Reference in which to return the current power management mode. For the record, I tried assigning an enable value to it, but the result was still the same.
I am clearly doing something wrong because other users of the system have written their own libraries using this function, and they face no problem. I am unable to contact them. What should I fix to get this function to work correctly?
The problem here was not directly in the API call - it was in the rest of the code - but the answer might be useful to others. Before attempting this solution, one must know for a fact that Power Management mode is enabled (check with nvidia-smi -q -d POWER).
In case of the invalid argument error, it is very likely that the problem lies with the nvmlDeviceID. I said I was able to query the device properties and at the time I was sure it was right, but be aware of any API calls that modify the nvmlDeviceID value later on.
For example, in this case, the following API call had some_variable as an invalid index, so nvmlDeviceID became invalid.
nvmlDeviceGetHandleByIndex(some_variable, &nvmlDeviceID);
It had to be changed to:
nvmlDeviceGetHandleByIndex(0, &nvmlDeviceID);
So the solution is to either remove all API calls that change or invalidate the value of nvmlDeviceID, or at least to ensure that any existing API call in the code does not modify the value.
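To illustrate, here is a minimal sketch (not the original code) in which every NVML call's return value is checked, so an invalid handle is caught at the point where it is created instead of surfacing later as NVML_ERROR_INVALID_ARGUMENT. The device index 0 is an assumption for a single-GPU system.
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t result;
    nvmlDevice_t nvmlDeviceID;
    nvmlEnableState_t pmmode;
    unsigned int powerInt;   /* power usage in milliwatts */

    result = nvmlInit();
    if (result != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(result));
        return 1;
    }

    /* Check this call instead of assuming the handle is valid afterwards. */
    result = nvmlDeviceGetHandleByIndex(0, &nvmlDeviceID);
    if (result != NVML_SUCCESS) {
        fprintf(stderr, "nvmlDeviceGetHandleByIndex failed: %s\n", nvmlErrorString(result));
        nvmlShutdown();
        return 1;
    }

    result = nvmlDeviceGetPowerManagementMode(nvmlDeviceID, &pmmode);
    if (result == NVML_SUCCESS && pmmode == NVML_FEATURE_ENABLED) {
        result = nvmlDeviceGetPowerUsage(nvmlDeviceID, &powerInt);
        if (result == NVML_SUCCESS)
            printf("Power usage: %u mW\n", powerInt);
    }

    nvmlShutdown();
    return 0;
}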

Erlang binary protocol serialization

I'm currently using Erlang for a big project, but I have a question regarding the proper way to proceed.
I receive bytes over a TCP socket. The bytes follow a fixed protocol; the sender is a Python client. The Python client uses class inheritance to create the bytes from its objects.
Now I would like to take the bytes (in Erlang) and convert them to their equivalent messages; they all have a common message header.
How can I do this as generically as possible in Erlang?
Kind Regards,
Me
Use pattern matching/binary header consumption with Erlang's binary syntax. But you will need to know either exactly what bytes or bits you are expecting to receive, or the field sizes in bytes or bits.
For example, let's say that you are expecting a string of bytes that will either begin with the equivalent of the ASCII strings "PUSH" or "PULL", followed by some other data you will place somewhere. You can create a function head that matches those, and captures the rest to pass on to a function that does "push()" or "pull()" based on the byte header:
operation_type(<<"PUSH", Rest/binary>>) -> push(Rest);
operation_type(<<"PULL", Rest/binary>>) -> pull(Rest).
The bytes after the first four will now be in Rest, leaving you free to interpret whatever subsequent headers or data remain in turn. You could also match on the whole binary:
operation_type(Bin = <<"PUSH", _/binary>>) -> push(Bin);
operation_type(Bin = <<"PULL", _/binary>>) -> pull(Bin).
In this case the "_" variable works like it always does -- you're just checking for the lead, essentially peeking the buffer and passing the whole thing on based on the initial contents.
You could also skip around in it. Say you knew you were going to receive a binary with 4 bytes of fluff at the front, 6 bytes of type data, and then the rest you want to pass on:
filter_thingy(<<_:4/binary, Type:6/binary, Rest/binary>>) ->
% Do stuff with Rest based on Type...
It becomes very natural to split binaries in function headers (whether the data equates to character strings or not), letting the "Rest" fall through to appropriate functions as you go along. If you are receiving Python pickle data or something similar, you would want to write the parsing routine in a recursive way, so that the conclusion of each data type returns you to the top to determine the next type, with an accumulated tree that represents the data read so far.
I only covered 8-bit bytes above, but there is also a pure bitstring syntax, which lets you go as far into the weeds with bits and bytes as you need with the same ease of syntax. Matching is a real lifesaver here.
Hopefully this informed more than confused. Binary syntax in Erlang makes this the most pleasant binary parsing environment in a general programming language I've yet encountered.
http://www.erlang.org/doc/programming_examples/bit_syntax.html

How to handle GSM buffer on the Microcontroller?

I have a GSM module hooked up to a PIC18F87J11 and they communicate just fine. I can send an AT command from the microcontroller and read the response back. However, I have to know how many characters are in the response so I can have the PIC wait for that many characters. But if an error occurs, the response length might change. What is the best way to handle such a scenario?
For Example:
AT+CMGF=1
Will result in the following response.
\r\nOK\r\n
So I have to tell the PIC to wait for 6 characters. However, if the response was an error message, it would be something like this.
\r\nERROR\r\n
And if I have already told the PIC to wait for only 6 characters, it will miss the rest of the characters; as a result, they might appear the next time I tell the PIC to read the response to a new AT command.
What is the best way to find the end of the line automatically and handle any error messages?
Thanks!
In a single line
There is no single best way, only trade-offs.
In detail
The problem can be divided in two related subproblems.
1. Receiving messages of arbitrary finite length
The trade-offs:
available memory vs implementation complexity;
bandwidth overhead vs implementation complexity.
In the simplest case, the amount of available RAM is not restricted. We just use a buffer wide enough to hold the longest possible message and keep receiving the messages bytewise. Then, we have to determine somehow that a complete message has been received and can be passed to further processing. That essentially means analyzing the received data.
2. Parsing the received messages
Analyzing the data in search of its syntactic structure is parsing by definition. And that is where the subtasks are related. Parsing in general is a very complex topic, and dealing with it is expensive, both computationally and in implementation effort. It is often possible to reduce the costs if we limit the genericity of the data: the simpler the data structure, the easier it is to parse. And that limitation is called a "transport layer protocol".
Thus, we have to read the data to parse it, and parse the data to read it. This kind of interlocked problem is generally solved with coroutines.
In your case we have to deal with the AT protocol. It is old and it is human-oriented by design. That's bad news, because parsing it correctly can be challenging despite how simple it can look sometimes. It has some terribly inconvenient features, such as '+++' escape timing!
Things become worse when you're short of memory. In such a situation we can't defer parsing until the end of the message, because it very well might not even fit in the available RAM -- we have to parse it chunkwise.
...And we are not even close to opening TCP connections or making calls! You'll meet some unexpected troubles there as well, such as the dreaded "unsolicited result codes". The matter is wide enough for a whole book. Please have a look at least here:
http://en.wikibooks.org/wiki/Serial_Programming/Modems_and_AT_Commands. The wikibook discloses many more problems with the Hayes protocol, and describes some approaches to solve them.
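To make the framing subproblem concrete, here is a minimal C sketch that just accumulates bytes and reports when a complete final result code has arrived. It assumes a hypothetical blocking uart_getc() that returns one byte from the GSM UART, and it deliberately ignores the harder cases discussed above (unsolicited result codes, "+CME ERROR", '+++' timing), so treat it as a starting point only.
#include <stdbool.h>
#include <string.h>

extern char uart_getc(void);   /* hypothetical byte-wise read from the GSM UART */

/* Accumulates bytes until a final result code arrives or the buffer is full.
 * Returns true if "OK" terminated the response, false for "ERROR" or overflow. */
bool read_at_response(char *buf, size_t len)
{
    size_t n = 0;
    buf[0] = '\0';
    while (n + 1 < len) {
        buf[n++] = uart_getc();
        buf[n] = '\0';
        if (strstr(buf, "\r\nOK\r\n") != NULL)
            return true;
        if (strstr(buf, "\r\nERROR\r\n") != NULL)
            return false;
    }
    return false;   /* buffer full without a recognized result code */
}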
Let's break the problem down into some layers of abstraction.
At the top layer is your application. The application layer deals with the response message as a whole and understands the meaning of a message. It shouldn't be mired down with details such as how many characters it should expect to receive.
The next layer is responsible for framing a message from the stream of characters. Framing is extracting the message from a stream by identifying the beginning and end of a message.
The bottom layer is responsible for reading individual characters from the port.
Your application could call a function such as GetResponse(), which implements the framing layer. And GetResponse() could call GetChar(), which implements the bottom layer. It sounds like you've got the bottom layer under control and your question is about the framing layer.
A good pattern for framing a stream of characters into a message is to use a state machine. In your case the state machine includes states such as BEGIN_DELIM, MESSAGE_BODY, and END_DELIM. For more complex serial protocols other states might include MESSAGE_HEADER and MESSAGE_CHECKSUM, for example.
Here is some very basic code to give you an idea of how to implement the state machine in GetResponse(). You should add various types of error checking to prevent a buffer overflow and to handle dropped characters and such.
#include <stdbool.h>

enum { BEGIN_DELIM1, BEGIN_DELIM2, MESSAGE_BODY, END_DELIM };

void GetResponse(char *message_buffer)
{
    unsigned int state = BEGIN_DELIM1;
    bool is_message_complete = false;
    while (!is_message_complete)
    {
        char c = GetChar();
        switch (state)
        {
        case BEGIN_DELIM1:
            if (c == '\r')
                state = BEGIN_DELIM2;
            break;
        case BEGIN_DELIM2:
            if (c == '\n')
                state = MESSAGE_BODY;
            break;
        case MESSAGE_BODY:
            if (c == '\r')
                state = END_DELIM;
            else
                *message_buffer++ = c;
            break;
        case END_DELIM:
            if (c == '\n')
                is_message_complete = true;
            break;
        }
    }
    *message_buffer = '\0';   /* terminate the body, e.g. "OK" or "ERROR" */
}
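The application layer can then look at the whole message instead of counting characters. A hedged usage example (assuming the response buffer is large enough for the longest expected body and that string.h is included):
char response[32];

GetResponse(response);               /* blocks until a "\r\n...\r\n" frame arrives */
if (strcmp(response, "OK") == 0) {
    /* command accepted */
} else if (strcmp(response, "ERROR") == 0) {
    /* command failed; handle it here rather than desynchronizing the next read */
}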

Change a Wireshark preference in a dissector?

I'm creating a dissector for Wireshark in C, for a protocol on top of UDP. Since I'm using heuristic dissection, and another protocol with a standard dissector exists for the same port as mine, my packets are being dissected as that other protocol. For my dissector to work, I need to enable the "try heuristic dissectors first" UDP preference, but I want to set that preference when my plugin is registered (in code), so the user does not need to change it manually.
I noticed that the function prefs_set_pref exists in epan/prefs.h! But when I used it in my plugin, Wireshark crashed on startup with a Bus Error 10.
Is what I want to do possible/correct?
So I've tried this:
G_MODULE_EXPORT void plugin_register(void) {
    prefs_set_pref("udp.try_heuristic_first:true");
    // My proto_register goes here
}
Since epan/prefs.h has:
/*
* Given a string of the form "<pref name>:<pref value>", as might appear
* as an argument to a "-o" option, parse it and set the preference in
* question. Return an indication of whether it succeeded or failed
* in some fashion.
*
* XXX - should supply, for syntax errors, a detailed explanation of
* the syntax error.
*/
WS_DLL_PUBLIC prefs_set_pref_e prefs_set_pref(char *prefarg);
Thanks
Calling prefs_set_pref("udp.try_heuristic_first:true"); works for me in a test Wireshark plugin.
OK: Assuming no other issues, I expect the problem is that prefs_set_pref() modifies the string passed to it.
If (the address of) a string literal is passed, the code will attempt to modify the literal which, in general, is not allowed. I suspect this is the cause of your Bus Error 10.
(I'd have to dig deeper to see why my test on Windows actually worked.)
So: I suggest trying something like:
char foo[] = "udp.try_heuristic_first:true";
...
prefs_set_pref(foo);
to see if that works;
Or: do a strcpy of the literal to a local array.
==============
(Earlier original comments)
Some comments/questions:
What's the G_MODULE_EXPORT about?
None of the existing Wireshark plugin dissectors use this.
(See any of the dissectors under plugins in your Wireshark source tree.)
The plugin register function needs to be named proto_register_???, where ??? is the name of your plugin dissector.
So: I don't understand the whole G_MODULE_EXPORT void plugin_register(void){ ... } etc.
The call to prefs_set_pref() should be in the proto_reg_handoff_???() function (and not in the proto_register_???() function).
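Putting those pieces together, a sketch of how the handoff function might look. The name proto_reg_handoff_myproto is a placeholder for your dissector's actual handoff function, the preference string is kept in a writable array because prefs_set_pref() may modify its argument, and the prototype assumed is the single-argument one quoted above.
#include <epan/prefs.h>

void proto_reg_handoff_myproto(void)
{
    /* Writable copy of the preference string; passing a string literal here
       is what can lead to the Bus Error 10 described above. */
    char pref[] = "udp.try_heuristic_first:true";

    prefs_set_pref(pref);

    /* ... register the heuristic/normal dissector handles here ... */
}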