I want to know if there is an easy way to get the total length in bytes of an NSStream object. For example, in C# I can read the Stream.Length property, and that's the answer. In Objective-C, so far, I haven't found anything like that. The simplest solution I can imagine would be to "read bytes into a buffer and count them":
uint8_t buffer[BUFFER_SIZE];
NSInteger result;
long totalLength = 0;
while ((result = [sInput read:buffer maxLength:BUFFER_SIZE]) != 0) {
    if (result > 0) {
        totalLength += result;
    }
}
As stated in the docs, the return value of the read method is:
A positive number indicates the number of bytes read;
0 indicates that the end of the buffer was reached;
A negative number means that the operation failed.
Finally, totalLength would contain the total length.
Is this a correct way to solve the issue, or is there a simpler way? Btw, is my code correct? (I'm not confident in my obj-c skills yet)
The .NET SDK documentation for TableBatchOperation says that
A batch operation may contain up to 100 individual table operations, with the requirement that each operation entity must have same partition key. A batch with a retrieve operation cannot contain any other operations. Note that the total payload of a batch operation is limited to 4MB.
It's easy to ensure that I don't add more than 100 individual table operations to the batch: in the worst case, I can check the Count property. But is there any way to check the payload size other than manually serialising the operations (at which point I've lost most of the benefit of using the SDK)?
As you add entities, you can track the size of the names plus the data. Assuming you're using a newer library where the default payload format is JSON, the additional characters added should be relatively small (compared to the data, if you're close to 4MB) and estimable. This isn't a perfect route, but it would get you close.
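For example, a rough per-entity estimate could look like the sketch below (this assumes DynamicTableEntity from the storage SDK; EstimateEntityBytes is a hypothetical helper, and the JSON framing overhead is deliberately ignored):

// Hypothetical helper: a rough, not exact, per-entity payload estimate.
static int EstimateEntityBytes(DynamicTableEntity entity)
{
    int size = entity.PartitionKey.Length + entity.RowKey.Length;
    foreach (var prop in entity.Properties)
    {
        size += prop.Key.Length; // property name
        // crude size for the JSON-encoded value
        object value = prop.Value.PropertyAsObject;
        size += value == null ? 4 : value.ToString().Length;
    }
    return size;
}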
Serializing as you go, especially if you're frequently getting close to the 100-entity or 4MB limits, is going to cost you a lot of perf, aside from any convenience lost. Rather than trying to track as you go, either by estimating size or by serializing, you might be best off sending the batch request as-is; if you get a 413 indicating the request body is too large, catch the error, divide the batch in two, and continue.
I followed Emily Gerner's suggestion of optimistic inserts and error handling, but used StorageException.RequestInformation.EgressBytes to estimate the number of operations which fit in the limit. Unless the size of the operations varies wildly, this should be more efficient. There is a case to be made for not raising len back up every time, but here's an implementation which goes back to being optimistic each time.
int off = 0;
while (off < ops.Count)
{
    // Batch size.
    int len = Math.Min(100, ops.Count - off);
    while (true)
    {
        var batch = new TableBatchOperation();
        for (int i = 0; i < len; i++) batch.Add(ops[off + i]);
        try
        {
            _Tbl.ExecuteBatch(batch);
            break;
        }
        catch (Microsoft.WindowsAzure.Storage.StorageException se)
        {
            var we = se.InnerException as WebException;
            var resp = we != null ? (we.Response as HttpWebResponse) : null;
            if (resp != null && resp.StatusCode == HttpStatusCode.RequestEntityTooLarge)
            {
                // Assume roughly equal sizes, and base updated length on the size of the previous request.
                // We assume that no individual operation is too big!
                len = len * 4000000 / (int)se.RequestInformation.EgressBytes;
            }
            else throw;
        }
    }
    off += len;
}
I am using MsgSendv() and the server sends its reply with MsgReply() like this:
char desc_buf_out[MAX_CHARS_IN_A_LINE];
MsgReply(rcvid, EOK, desc_buf_out, sizeof(desc_buf_out));
My client looks like this:
iov_t *iovrcv = calloc(1, sizeof(iov_t));
char rcv[1024] = {0};

if (MsgSendv(server_coid, iovin_render, 3, iovrcv, 1) == -1)
{
    printf("error sending message to server\n");
    fprintf(stderr,
            "%s: %s\n",
            __func__,
            strerror(errno));
    return EXIT_FAILURE;
}

SETIOV(iovrcv + 0, rcv, sizeof(rcv));
printf("iovrcv=%s\n", rcv);
But I get nothing in my rcv buffer.
Can you tell me why, and what is the correct way of doing this so that I receive my data correctly? I expect to receive a string.
You are using iovrcv uninitialized (well, OK, it's initialized with zeros via calloc, but it's not initialized to point to anything).
An iov_t is a pair of values, a pointer and a length.
It's given to the MsgSendv() function to tell it where the data should go. By leaving it uninitialized, you're telling MsgSendv() that the pointer is zero and the length is zero -- not a whole lot of data! :-)
Move your SETIOV to above the MsgSendv() function.
Also, be sure to initialize iovin_render (which you show as having three parts, that is, three pairs of pointer/length values).
I have a question about Marc Gravell's BookSleeve library.
I tried to understand how BookSleeve deals with Int64 values (I actually have billions of long values in Redis).
I used reflection to understand the overrides that set a long value.
// BookSleeve.RedisMessage
protected static void WriteUnified(Stream stream, long value)
{
    if (value >= 0L && value <= 99L)
    {
        int i = (int)value;
        if (i <= 9)
        {
            stream.Write(RedisMessage.oneByteIntegerPrefix, 0, RedisMessage.oneByteIntegerPrefix.Length);
            stream.WriteByte((byte)(48 + i));
        }
        else
        {
            stream.Write(RedisMessage.twoByteIntegerPrefix, 0, RedisMessage.twoByteIntegerPrefix.Length);
            stream.WriteByte((byte)(48 + i / 10));
            stream.WriteByte((byte)(48 + i % 10));
        }
    }
    else
    {
        byte[] bytes = Encoding.ASCII.GetBytes(value.ToString());
        stream.WriteByte(36);
        RedisMessage.WriteRaw(stream, (long)bytes.Length);
        stream.Write(bytes, 0, bytes.Length);
    }
    stream.Write(RedisMessage.Crlf, 0, 2);
}
I don't understand why, with more than two digits, the Int64 is encoded as ASCII.
Why not use byte[]? I know that I can use the byte[] overrides to do this, but I just want to understand this implementation so I can optimize mine. There may be a relationship with the way Redis stores values.
Thanks in advance, Marc :)
P.S.: I'm still very enthusiastic about your next major version, where I'll be able to use long keys instead of strings.
It writes it in ASCII because that is what the Redis protocol demands.
If you look carefully, it is always encoded as ASCII - but for the most common cases (0-9, 10-99) I've special-cased it, as these are very simple results:
x => $1\r\nX\r\n
xy => $2\r\nXY\r\n
where x and y are the first two digits of a number in the range 0-99, and X and Y are those digits (as numbers) offset by 48 ('0') - so decimal 17 becomes the byte sequence (in hex):
24-32-0D-0A-31-37-0D-0A
Of course, that can also be achieved simply by writing each digit sequentially, offsetting the digit value by 48 ('0'), and handling the negative sign - I guess the answer there is simply "because I coded it the simple but obviously correct way". Consider the value -123 - which is encoded as $4\r\n-123\r\n (hey, don't look at me - I didn't design the protocol). It is slightly awkward because it needs to calculate the buffer length first, then write that buffer length, then write the value - remembering to write in the order 100s, 10s, 1s (which is much harder than writing the other way around).
Perfectly willing to revisit it - simply: it works.
Of course, it becomes trivial if you have a scratch buffer available - you just write it in the simple order, then reverse the portion of the scratch buffer. I'll check to see if one is available (and if not, it wouldn't be unreasonable to add one).
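For what it's worth, a minimal sketch of that scratch-buffer approach (illustrative only, not BookSleeve's actual code) could look like:

// Write the digits least-significant first into a scratch buffer, then
// reverse them in place, avoiding the ToString()/GetBytes allocations.
static int WriteDigits(byte[] scratch, int offset, long value)
{
    int pos = offset;
    if (value < 0) scratch[pos++] = (byte)'-';
    int digitsStart = pos;
    do
    {
        // C#'s % keeps the dividend's sign, so Abs() handles negatives
        scratch[pos++] = (byte)('0' + Math.Abs(value % 10));
        value /= 10; // truncates toward zero
    } while (value != 0);
    Array.Reverse(scratch, digitsStart, pos - digitsStart); // most-significant digit first
    return pos - offset; // bytes written, including any '-'
}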
I should also clarify: there is also the integer type, which would encode -123 as :-123\r\n - however, from memory there are a lot of places this simply does not work.
This question may already have been asked but nothing on SO actually gave me the answer I need.
I am trying to reverse engineer someone else's VB.NET code and I am stuck with what the Xor is doing here. Here is one line of the body of a SOAP request that gets parsed (some values have been obscured, so the checksum may not work in this case):
<HD>CHANGEDTHIS01,W-A,0,7753.2018E,1122.6674N, 0.00,1,CID_V_01*3B</HD>
and this is the snippet of VB code that checks it:
LastStar = strValues(CheckLoop).IndexOf("*")
StrLen = strValues(CheckLoop).Length
TransCheckSum = Val("&h" + strValues(CheckLoop).Substring(LastStar + 1, (StrLen - (LastStar + 1))))
CheckSum = 0
For CheckString = 0 To LastStar - 1
    CheckSum = CheckSum Xor Asc(strValues(CheckLoop)(CheckString))
Next
If CheckSum <> TransCheckSum Then
    'error with the checksum
...
OK, I get it up to the For loop. I just need an explanation of what the Xor is doing and how that is used for the checksum.
Thanks.
PS: As a bonus, if anyone can provide a c# translation I would be most grateful.
Using Xor is a simple algorithm to calculate a checksum. The idea is the same as when calculating a parity bit, but with eight parity bits calculated in parallel, one for each bit position across the bytes. More advanced algorithms like CRC and MD5 are often used to calculate checksums for more demanding applications.
The C# code would look like this:
string value = strValues[checkLoop];
int lastStar = value.IndexOf("*");
int transCheckSum = Convert.ToByte(value.Substring(lastStar + 1, 2), 16);
int checkSum = 0;
for (int checkString = 4; checkString < lastStar; checkString++) {
    checkSum ^= (int)value[checkString];
}
if (checkSum != transCheckSum) {
    // error with the checksum
}
I made some adjustments to the code to accommodate the translation to C#, plus some changes that simply make sense: I declared the variables used, and I used camel case rather than Pascal case for local variables. I also use a local variable for the string, instead of getting it from the collection each time.
The VB Val method stops parsing when it finds a character that it doesn't recognise, so to use the framework methods I assumed that the length of the checksum is two characters, so that it can parse the string "3B" rather than "3B</HD>".
The loop starts at the fourth character, to skip the first "<HD>", which should logically not be part of the data that the checksum should be calculated for.
In C# you don't need the Asc function to get the character code, you can just cast the char to an int.
The code is basically getting the character values and XORing them together in order to check the integrity. There is a very nice explanation of the operation on this page, in the Parity Check section: http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html
Possible Duplicate:
Recognizing when to use the mod operator
What are the practical uses of modulus? I know what modulo division is. The first scenarios that come to my mind are finding odd and even numbers, and clock arithmetic. But where else could I use it?
The most common use I've found is for "wrapping round" your array indices.
For example, if you just want to cycle through an array repeatedly, you could use:
int a[10];
for (int i = 0; true; i = (i + 1) % 10)
{
    // ... use a[i] ...
}
The modulo ensures that i stays in the [0, 10) range.
I usually use it in tight loops, when I have to do something every X iterations as opposed to on every iteration.
Example:
int i;
for (i = 1; i <= 1000000; i++)
{
    do_something(i);
    if (i % 1000 == 0)
        printf("%d processed\n", i);
}
One use for the modulus operation is when making a hash table. It's used to convert the value out of the hash function into an index into the array. (If the hash table size is a power of two, the modulus can be done with a bit-mask, but it's still a modulus operation.)
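As a quick illustration, here is a minimal sketch (the helper name is made up) of mapping a hash code to a bucket index:

static int BucketIndex(string key, int bucketCount)
{
    int hash = key.GetHashCode();
    // the hash may be negative, so clear the sign bit before taking the modulus
    return (hash & 0x7FFFFFFF) % bucketCount;
}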
To print a number as a string, you need the modulus to find the value of each digit.
string number_to_string(uint number) {
    if (number == 0) return "0"; // the loop below never runs for zero
    string result = "";
    while (number != 0) {
        // number % 10 extracts the least-significant digit
        result = cast(char)((number % 10) + '0') ~ result;
        number /= 10;
    }
    return result;
}
For the check digits of international bank account numbers (IBAN), the mod-97 technique is used.
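A small sketch of that check in C# (assuming the IBAN has already been upper-cased and stripped of spaces):

// ISO 7064 mod-97: move the first four characters to the end, expand
// letters to two digits (A=10 .. Z=35), and the resulting number taken
// mod 97 must leave a remainder of 1.
static bool IsValidIban(string iban)
{
    string rearranged = iban.Substring(4) + iban.Substring(0, 4);
    int remainder = 0;
    foreach (char c in rearranged)
    {
        int value = char.IsLetter(c) ? c - 'A' + 10 : c - '0';
        int scale = value < 10 ? 10 : 100;
        remainder = (remainder * scale + value) % 97; // running modulus keeps the number small
    }
    return remainder == 1;
}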
Also in large batches, to do something after every n iterations. Here is an example for NHibernate:
ISession session = sessionFactory.OpenSession();
ITransaction tx = session.BeginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(.....);
    session.Save(customer);
    if (i % 20 == 0) { // 20, same as the ADO batch size
        // Flush a batch of inserts and release memory:
        session.Flush();
        session.Clear();
    }
}
tx.Commit();
session.Close();
The usual implementation of buffered communications uses circular buffers, which you manage with modulus arithmetic.
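For instance, a minimal fixed-size ring buffer sketch (illustrative only) where every index wraps around via %:

class RingBuffer
{
    private readonly int[] _items = new int[8];
    private int _head;  // index of the oldest element
    private int _count; // number of stored elements

    public void Push(int value)
    {
        // (head + count) % capacity wraps the write position around
        _items[(_head + _count) % _items.Length] = value;
        if (_count < _items.Length) _count++;
        else _head = (_head + 1) % _items.Length; // full: overwrite the oldest
    }

    public int Pop() // assumes the buffer is not empty
    {
        int value = _items[_head];
        _head = (_head + 1) % _items.Length;
        _count--;
        return value;
    }
}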
For languages that don't have bitwise operators, modulus can be used to get the lowest n bits of a non-negative number. For example, to get the lowest 8 bits of a non-negative x:
x % 256
which is equivalent to:
x & 255
Cryptography. That alone would account for an obscene percentage of modulus use (I exaggerate, but you get the point).
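For instance, modular exponentiation, which RSA and Diffie-Hellman lean on, applies % at every step to keep the intermediate values bounded. A sketch (it assumes the modulus is small enough that the products fit in a long):

static long ModPow(long baseValue, long exponent, long modulus)
{
    long result = 1;
    baseValue %= modulus;
    while (exponent > 0)
    {
        if ((exponent & 1) == 1)
            result = result * baseValue % modulus; // fold in the current bit
        baseValue = baseValue * baseValue % modulus; // square for the next bit
        exponent >>= 1;
    }
    return result;
}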
Try the Wikipedia page too:
Modular arithmetic is referenced in number theory, group theory, ring theory, knot theory, abstract algebra, cryptography, computer science, chemistry and the visual and musical arts.
In my experience, any sufficiently advanced algorithm is probably going to touch on one or more of the above topics.
Well, there are many perspectives from which you can look at it. As a mathematical operation, it's just modulo division. Strictly speaking we don't even need it, since whatever % does can be achieved with repeated subtraction, but every programming language implements it in a very optimized way.
And modulo division is not limited to finding odd and even numbers or clock arithmetic. There are hundreds of algorithms which need the modulo operation, for example cryptographic algorithms. So it's a general mathematical operation like +, -, *, and /.
Apart from the mathematical perspective, languages also use the symbol for other purposes, such as built-in data structures: in Perl, %hash declares a hash. So it all varies with the design of the programming language.
So there are still a lot of other perspectives one could add to the list of uses of %.