How to insert floating point numbers in Aerospike KV store?

I am using Aerospike 3.40. A bin with a floating point value doesn't appear. I am using the Python client. Please help.

It is now supported as of Aerospike version 3.6.

The server does not natively support floats. It supports integers, strings, bytes, lists, and maps. Different clients handle the unsupported types in different ways. The PHP client, for example, will serialize the other types such as boolean and float and store them in a bytes field, then deserialize them on reads. The Python client will be doing that starting with the next release (>= 1.0.38).
However, this approach has the limitation of making it difficult for different clients (PHP and Python, for example) to read such serialized data, as it's not serialized using a common format.
One common way to get around this with floats is to turn them into integers. For example, if you have a bin called 'currency' you can multiply the float by 100, round to an integer (rounding, rather than truncating, avoids off-by-one-cent errors caused by the binary representation), and store that. On the way out you simply divide by 100.
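A minimal sketch of that fixed-point approach in Python (the value and the two-decimal scale are just illustrative):
# write: scale to cents and round to an integer
price = 12.34
stored = int(round(price * 100))   # 1234; plain int(price * 100) could yield 1233
# read: divide to recover two decimal places
recovered = stored / 100.0         # 12.34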
A similar method is to store the integer part in one bin and the fractional digits in another, and recombine them on the read (the example below keeps both as strings; if you store the fractional digits as a plain integer, leading zeros are lost and 0.05 comes back as 0.5). So 123.456789 gets stored as v_sig and v_mantissa.
# split into the integer part and the fractional digits (both strings)
(v_sig, v_mantissa) = str(123.456789).split('.')
on read you would combine the two
v = float(v_sig) + float("0." + v_mantissa)
FYI, floats are now supported natively as doubles on Aerospike server versions >= 3.6.0. Most clients, such as the Python and PHP ones, support the native double type (as_double).

A floating point number can be split into two parts, the part before the decimal point and the part after it; you store them in two bins and recombine them in application code.
However, creating more bins has a performance overhead in Aerospike, as a separate malloc is used per bin.
If you never need to read the data from a language other than Python, it is better to use a good serialization mechanism and save the value in a single bin, as sketched below. That way only one bin per floating point number is used, and the data size in Aerospike is reduced as well. Less data in Aerospike always helps speed in terms of network I/O, which is the main aim of caching.
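For instance, a minimal single-bin sketch using Python's standard struct module (the encoding choice is an assumption; any serializer both sides agree on will do):
import struct
# pack the float into 8 bytes (big-endian IEEE 754 double) for one bytes bin
packed = struct.pack('>d', 123.456789)
# ... write `packed` to the bin; on read, unpack it again:
value = struct.unpack('>d', packed)[0]   # 123.456789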

Related

Big numbers in Redis Sorted Set

I would like to store values as a score in a Redis sorted set that can be as big as 10^24 (and if possible even 2^256).
What are the integer size limits with ZRANGE?
For some context, I'm trying to implement a ranking of the top holders of a custom Ethereum token, e.g. https://etherscan.io/token/0xdac17f958d2ee523a2206206994597c13d831ec7#balances
I want to hold the balances in a Redis DB and access it through node.js. I can retrieve the actual balances using web3, in case the DB crashes or something. The point is I would like to have the data sorted and I would like to be able to access the data blazingly fast.
Quotation from the Redis documentation about sorted sets:
Range of integer scores that can be expressed precisely
Redis sorted sets use a double 64-bit floating point number to represent the score. In all the architectures we support, this is represented as an IEEE 754 floating point number, that is able to represent precisely integer numbers between -(2^53) and +(2^53) included. In more practical terms, all the integers between -9007199254740992 and 9007199254740992 are perfectly representable. Larger integers, or fractions, are internally represented in exponential form, so it is possible that you get only an approximation of the decimal number, or of the very big integer, that you set as score.
So if you leave the precise range and an approximation of the score is good enough for your use case, Wikipedia says that 2^1023 would be the highest exponent possible (the largest finite double is about 1.8 * 10^308).
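You can check both limits quickly in Python (just an illustration of double behaviour; Redis scores use the same IEEE 754 doubles):
import sys
# past 2**53 a double can no longer tell neighbouring integers apart
print(float(2**53) == float(2**53 + 1))   # True: both round to the same double
# the largest finite double, just below 2**1024
print(sys.float_info.max)                 # 1.7976931348623157e+308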

Why does the Java API use int instead of short or byte?

Why does the Java API use int, when short or even byte would be sufficient?
Example: The DAY_OF_WEEK field in class Calendar uses int.
If the difference is too minimal, then why do those datatypes (short, byte) exist at all?
Some of the reasons have already been pointed out. For example, the fact that "...(Almost) All operations on byte, short will promote these primitives to int". However, the obvious next question would be: WHY are these types promoted to int?
So to go one level deeper: The answer may simply be related to the Java Virtual Machine Instruction Set. As summarized in the Table in the Java Virtual Machine Specification, all integral arithmetic operations, like adding, dividing and others, are only available for the type int and the type long, and not for the smaller types.
(An aside: The smaller types (byte and short) are basically only intended for arrays. An array like new byte[1000] will take 1000 bytes, and an array like new int[1000] will take 4000 bytes)
Now, of course, one could say that "...the obvious next question would be: WHY are these instructions only offered for int (and long)?".
One reason is given in the JVM Specification mentioned above:
If each typed instruction supported all of the Java Virtual Machine's run-time data types, there would be more instructions than could be represented in a byte
Additionally, the Java Virtual Machine can be considered an abstraction of a real processor. And introducing dedicated Arithmetic Logic Units for smaller types would not be worth the effort: they would need additional transistors, yet could still only execute one addition per clock cycle. The dominant architecture when the JVM was designed was 32 bits, just right for a 32-bit int. (Operations that involve a 64-bit long value are implemented as a special case.)
(Note: The last paragraph is a bit oversimplified, considering possible vectorization etc., but should give the basic idea without diving too deep into processor design topics)
EDIT: A short addendum, focussing on the example from the question, but in a more general sense: one could also ask whether it would not be beneficial to store fields using the smaller types. For example, one might think that memory could be saved by storing Calendar.DAY_OF_WEEK as a byte. But here the Java Class File Format comes into play: all the fields in a class file occupy at least one "slot", which has the size of one int (32 bits). (The "wide" fields, double and long, occupy two slots.) So explicitly declaring a field as short or byte would not save any memory either.
(Almost) All operations on byte and short will promote them to int. For example, you cannot write:
short x = 1;
short y = 2;
short z = x + y; //error
Arithmetic is easier and more straightforward when using int; no casts are needed.
In terms of space it makes very little difference. byte and short would complicate things, and I don't think this micro-optimization is worth it, since we are talking about a fixed number of variables.
byte is relevant and useful when you program for embedded devices or deal with files/networks. Also, these primitives are limited; what if the calculations exceed their limits in the future? Think about an extension of the Calendar class that might need bigger numbers.
Also note that on a 64-bit processor, locals will be kept in registers and won't use any extra resources, so using int, short, and the other primitives makes no difference at all there. Moreover, many Java implementations align variables* (and objects).
* byte and short occupy the same space as int if they are local variables, class variables, or even instance variables. Why? Because in (most) computer systems, variable addresses are aligned, so for example if you use a single byte, you'll actually end up with two bytes: one for the variable itself and another for the padding.
On the other hand, in arrays, a byte takes 1 byte, a short takes 2 bytes, and an int takes 4 bytes, because in arrays only the start and maybe the end have to be aligned. This will make a difference if you want to use, for example, System.arraycopy(); then you'll really notice a performance difference.
Because arithmetic operations are easier when using integers compared to shorts. Assume that the constants were indeed modeled by short values. Then you would have to use the API in this manner:
short month = Calendar.JUNE;
month = (short) (month + 1); // is july; the cast back is mandatory
Notice the explicit casting: short values are implicitly promoted to int values when they are used in arithmetic operations, so the int result must be cast back before it can be assigned to a short. (On the operand stack, shorts are even expressed as ints.) This would be quite cumbersome to use, which is why int values are often preferred for constants.
Compared to that, the gain in storage efficiency is minimal, because there only exists a fixed number of such constants. We are talking about 40 constants; changing their storage from int to short would save you 40 * 16 bits = 80 bytes.
The design complexity of a virtual machine is a function of how many kinds of operations it can perform. It's easier to have four implementations of an instruction like "multiply"--one each for 32-bit integer, 64-bit integer, 32-bit floating-point, and 64-bit floating-point--than to have, in addition to the above, versions for the smaller numerical types as well. A more interesting design question is why there should be four types, rather than fewer (performing all integer computations with 64-bit integers and/or doing all floating-point computations with 64-bit floating-point values). The reason for using 32-bit integers is that Java was expected to run on many platforms where 32-bit types could be acted upon just as quickly as 16-bit or 8-bit types, but operations on 64-bit types would be noticeably slower. Even on platforms where 16-bit types would be faster to work with, the extra cost of working with 32-bit quantities would be offset by the simplicity afforded by only having 32-bit types.
As for performing floating-point computations on 32-bit values, the advantages are a bit less clear. There are some platforms where a computation like float a=b+c+d; could be performed most quickly by converting all operands to a higher-precision type, adding them, and then converting the result back to a 32-bit floating-point number for storage. There are other platforms where it would be more efficient to perform all computations using 32-bit floating-point values. The creators of Java decided that all platforms should be required to do things the same way, and that they should favor the hardware platforms for which 32-bit floating-point computations are faster than longer ones, even though this severely degraded both the speed and the precision of floating-point math on a typical PC, as well as on many machines without floating-point units. Note, btw, that depending upon the values of b, c, and d, using higher-precision intermediate computations when computing expressions like the aforementioned float a=b+c+d; will sometimes yield results which are significantly more accurate than would be achieved if all intermediate operands were computed at float precision, but will sometimes yield a value which is a tiny bit less accurate. In any case, Sun decided everything should be done the same way, and they opted for using minimal-precision float values.
Note that the primary advantages of smaller data types become apparent when large numbers of them are stored together in an array; even if there were no advantage to having individual variables of types smaller than 64 bits, it's worthwhile to have arrays which can store smaller values more compactly; having a local variable be a byte rather than a long saves seven bytes; having an array of 1,000,000 numbers hold each number as a byte rather than a long saves 7,000,000 bytes. Since each array type only needs to support a few operations (most notably read one item, store one item, copy a range of items within an array, or copy a range of items from one array to another), the added complexity of having more array types is not as severe as the complexity of having more types of directly-usable discrete numerical values.
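The array saving is easy to see in practice; here is a quick check with Python's array module (used purely as an illustration of the same principle, not of Java itself):
from array import array
# one million values stored as 1-byte vs. 8-byte elements
small = array('b', [0] * 1_000_000)
large = array('q', [0] * 1_000_000)
print(small.itemsize * len(small))  # 1000000 bytes of payload
print(large.itemsize * len(large))  # 8000000 bytes: seven million more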
If you used the philosophy where integral constants are stored in the smallest type that they fit in, then Java would have a serious problem: whenever programmers write code using integral constants, they have to pay careful attention to their code to check if the type of the constants matter, and if so look up the type in the documentation and/or do whatever type conversions are needed.
So now that we've outlined a serious problem, what benefits could you hope to achieve with that philosophy? I would be unsurprised if the only runtime-observable effect of that change would be what type you get when you look the constant up via reflection. (and, of course, whatever errors are introduced by lazy/unwitting programmers not correctly accounting for the types of the constants)
Weighing the pros and the cons is very easy: it's a bad philosophy.
Actually, there'd be a small advantage. If you have a
class MyTimeAndDayOfWeek {
    byte dayOfWeek;
    byte hour;
    byte minute;
    byte second;
}
then on a typical JVM it needs as much space as a class containing a single int. The memory consumption gets rounded up to the next multiple of 8 or 16 bytes (IIRC, that's configurable), so the cases where there are real savings are rather rare.
This class would be slightly easier to use if the corresponding Calendar methods returned a byte. But there are no such Calendar methods, only get(int), which must return an int because of other fields. Each operation on smaller types promotes to int, so you need a lot of casting.
Most probably, you'll either give up and switch to an int or write setters like
void setDayOfWeek(int dayOfWeek) {
    this.dayOfWeek = checkedCastToByte(dayOfWeek);
}
Then the type of DAY_OF_WEEK doesn't matter, anyway.
Using variables smaller than the bus size of the CPU means more cycles are necessary. For example when updating a single byte in memory, a 64-bit CPU needs to read a whole 64-bit word, modify only the changed part, then write back the result.
Also, using a smaller data type incurs overhead when the variable is stored in a register, since the behavior of the smaller data type has to be accounted for explicitly. Since the whole register is used anyway, there is nothing to be gained by using a smaller data type for method parameters and local variables.
Nevertheless, these data types might be useful for representing data structures that require specific widths, such as network packets, or for saving space in large arrays, sacrificing speed.

1.2 in SQLite3 Database Is Actually 1.199999998

I am attempting to store a float in my SQLite3 database using Java. When I go to store the number 1.2 in the database, it is actually stored as 1.199999998, and the same occurs for every even number (1.4, 1.6, etc.).
This makes it really difficult to delete rows, because I delete a row according to its version column (whose type is float). So this line won't work:
"DELETE FROM tbl WHERE version=1.2"
That's because there is no 1.2, only 1.19999998. How can I make sure that when I store a float in my SQLite3 DB, it is the exact number I input?
Don't use a float if you need precise accuracy. Try a decimal instead.
Remember that the 1.2 you put in your source code or that the user entered into a textbox and ultimately ended up in the database is actually stored as a binary value (usually in a format known as IEEE754). To understand why this is a problem, try converting 1.2 (1 1/5) to binary by hand (binary .1 is 1/2, .01 is 1/4) and see what you end up with:
1.001100110011001100110011001100110011
You can save time by using an online decimal-to-binary converter (ignore the last "1" that breaks the cycle; it's there because the converter had to round the last digit).
As you can see, it's a repeating pattern. This goes on pretty much forever. It would be like trying to represent 1/3 as a decimal. To get around this problem, most programming languages have a decimal type (as opposed to float or double) that keeps a base 10 representation. However, calculations done using this type are orders of magnitude slower, and so it's typically reserved for financial transactions and the like.
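Python's decimal module is one concrete example of such a base-10 type, and it shows the difference directly (illustrative only; the same idea applies to DECIMAL columns or Java's BigDecimal):
from decimal import Decimal
# binary floats accumulate representation error
print(0.1 + 0.2 == 0.3)                                   # False
# base-10 decimals behave the way the source text reads
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True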
This is the very nature of floating point numbers. They are not exact.
I'd suggest you either use an integer, or text field to store a version.
You should never rely on the accuracy of a float or a double. A float should never be used for keys in a data base or to represent money.
You should probably use decimal in this case.
Floats are not an accurate data type. They are designed to be fast, have a large range of values, and have a small memory footprint.
They are usually implemented using the IEEE standard
http://en.wikipedia.org/wiki/IEEE_754-2008
As Joel Coehoorn has pointed out, 1.2 is the recurring fraction 1.0011 0011 0011... in binary and can't be exactly represented in a finite number of bits.
The closest you can get with an IEEE 754 float is 1.2000000476837158203125. The closest you can get with a double is 1.1999999999999999555910790149937383830547332763671875. I don't know where you're getting 1.199999998 from.
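You can see the exact value of the double for yourself; for instance, Python's Decimal constructor prints a double's full expansion (any language with access to the raw bits would show the same thing):
from decimal import Decimal
# the exact value of the double nearest to 1.2
print(Decimal(1.2))
# 1.1999999999999999555910790149937383830547332763671875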
Floating-point was designed for representing approximate quantities: Physical measurements (a swimming pool is never exactly 1.2 meters deep), or irrational-valued functions like sqrt, log, or sin. If you need a value accurate to 15 significant digits, it works fine. If you truly need an exact value, not so much.
For a version number, a more appropriate representation would be a pair of integers: One for the major version and one for the minor version. This would also correctly handle the sequence 1.0, 1.1, ..., 1.9, 1.10, 1.11, which would sort incorrectly in a REAL column.
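A minimal sketch of that two-integer version scheme, here with Python's sqlite3 module (the table and column names are just placeholders):
import sqlite3
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE tbl (major INTEGER, minor INTEGER)')
for major, minor in [(1, 9), (1, 10), (1, 2)]:
    con.execute('INSERT INTO tbl VALUES (?, ?)', (major, minor))
# exact matches are now reliable, unlike WHERE version = 1.2 on a float
con.execute('DELETE FROM tbl WHERE major = ? AND minor = ?', (1, 2))
# and 1.10 sorts after 1.9, which a REAL column would get wrong
print(con.execute('SELECT major, minor FROM tbl ORDER BY major, minor').fetchall())
# [(1, 9), (1, 10)]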

Best way to store a file size in bytes?

What's the best way to store a file size in bytes in a database?
Considering that the size can be huge: MB, GB, TB...
I'm using bigint (max: 9.223.372.036.854.775.807), but is it the best way?
That's the type I would choose. It corresponds to the long type in C# (a 64-bit number), and it is the same type that is used by Windows to store file sizes.
A 64-bit integer is all you need.
If bigint has a maximum value of 9.223.372.036.854.775.807, then that suggests a signed 64-bit integer, which is perfectly adequate.
From the description, it does not look like 32-bit integers will do what you need, so unless you actually need to support larger sizes than 9.223.372.036.854.775.807, then bigint is the most efficient form you could possibly choose.
If you needed larger values (I can't imagine why), then you'd need to either store it as a string, or find a large-number library that will use as many bytes as necessary to store the number (i.e., has no maximum size).
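For a sense of scale, a quick check of how much headroom that signed 64-bit maximum actually gives (plain arithmetic, nothing database-specific):
# the quoted bigint maximum is 2**63 - 1
max_bigint = 2**63 - 1
print(max_bigint)          # 9223372036854775807
print(max_bigint / 2**60)  # ~8.0, i.e. roughly 8 EiB of file-size headroom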

How do you handle BLOB and numerical data efficiently in database communication?

SQL databases seem to be the cornerstone of most software. However, they seem optimized for textual data. In fact, when doing any queries involving numerical data, integers specifically, it seems inefficient that the numbers get converted to text and then back to native formats both ways between the application and the database. This same inefficiency seems to apply to BLOB data as well. My understanding is that even with something like LINQ to SQL, this two-way conversion is occurring in the background.
Are there general ways to bypass this overhead with SQL? Are there certain database management systems that handle this more efficiently than others (i.e., with non-standard extensions/APIs)?
Clarification: in the following select statement, the list of numbers after IN could be more easily passed as a raw array of ints, but there seems to be no way of achieving that optimization level.
SELECT foo FROM bar WHERE baz IN (23, 34, 45, 9854004, ...)
Don't suppose. Measure.
Format conversion is not likely to be a measurable cost for database work, unless you are misusing the database as an arithmetic engine.
The I/O cost for LOBs, especially for CLOBs with character conversion, can become significant; the remedy here, once you know that the simplest thing that might work actually has a noticeable performance impact, is to minimize the number of times you copy the LOB data. Use whatever SQL parameter binding style allows you to transfer the data directly between its point of creation or use and the database; often this is binding the LOB to a stream or I/O channel.
But don't do this until you have a way to measure the impact, and have measurements showing that this is your bottleneck.
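As a small illustration of binding binary data directly (Python's sqlite3 standing in for whichever binding API you actually use):
import sqlite3
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE blobs (id INTEGER PRIMARY KEY, data BLOB)')
payload = b'\x00\x01\x02' * 1000
# the bytes object is bound as a BLOB parameter; it is never rendered
# into the SQL text, so there is no encode/decode round trip
con.execute('INSERT INTO blobs (data) VALUES (?)', (payload,))
(stored,) = con.execute('SELECT data FROM blobs').fetchone()
print(stored == payload)  # True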
Numerical data in a database is not stored as text. I guess it depends on the database, but it certainly doesn't have to be and isn't.
BLOBs are stored exactly how you set them -- by definition, the DB has no way to interpret the information -- I guess it could compress if it found that to be useful. BLOBs are not translated into text.
Here's how Oracle stores numbers:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm#i16209
Internal Numeric Format
Oracle Database stores numeric data in variable-length format. Each value is stored in scientific notation, with 1 byte used to store the exponent and up to 20 bytes to store the mantissa. The resulting value is limited to 38 digits of precision. Oracle Database does not store leading and trailing zeros. For example, the number 412 is stored in a format similar to 4.12 x 10^2, with 1 byte used to store the exponent (2) and 2 bytes used to store the three significant digits of the mantissa (4, 1, 2). Negative numbers include the sign in their length.
MySQL info here:
http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
Look at the table: a TINYINT is represented in 1 byte (range -128 to 127), which would not be possible if it were stored as text.
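A one-liner to convince yourself of the size difference (Python's struct module as a neutral way to produce the binary form):
import struct
# a TINYINT-sized value is one byte in binary form...
print(len(struct.pack('b', -128)))  # 1
# ...but four bytes as the text "-128"
print(len(b'-128'))                 # 4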
EDIT: With the clarification, I would say: use the API in your language that looks something like this (pseudocode):
stmt = conn.Prepare("SELECT * FROM TABLE where x in (?, ?, ?)");
stmt.SetInt(0, x);
stmt.SetInt(1, y);
stmt.SetInt(2, z);
I don't believe that the underlying protocols use text for the transport of parameters.
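For example, with Python's sqlite3 module (an illustrative stand-in for whatever client library you use), the integers in the IN list travel as bound parameters rather than as digits spliced into the SQL string:
import sqlite3
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE bar (foo TEXT, baz INTEGER)')
con.execute("INSERT INTO bar VALUES ('hit', 34)")
values = [23, 34, 45, 9854004]
placeholders = ','.join('?' for _ in values)
# each integer is bound as a native value, not rendered into the SQL text
query = 'SELECT foo FROM bar WHERE baz IN (%s)' % placeholders
print(con.execute(query, values).fetchall())  # [('hit',)]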