I'm using SQLite to store some data. The primary database is on a NAS (Debian Lenny, 2.6.15, armv4l) since the NAS runs a script which updates the data every day. A typical "select * from tableX" looks like this:
2010-12-28|20|62.09|25170.0
2010-12-28|21|49.28|23305.7
2010-12-28|22|48.51|22051.1
2010-12-28|23|47.17|21809.9
When I copy the DB to my main computer (Mac OS X) and run the same SQL query, the output is:
2010-12-28|20|1.08115035175016e-160|25170.0
2010-12-28|21|2.39343503830763e-259|-9.25596535779558e+61
2010-12-28|22|-1.02951149572792e-86|1.90359837597183e+185
2010-12-28|23|-1.10707273937033e-234|-2.35343828462275e-185
The 3rd and 4th columns have the type REAL. Interesting fact: when the numbers are integers (i.e. they end with ".0"), there is no difference between the two databases. In all other cases, the differences are ... hm ... surprising? I can't seem to find a pattern.
If someone's got a clue - please share!
PS: sqlite3 -version output
Debian: 3.6.21 (lenny-backports)
Mac OS X: 3.6.12 (10.6)
In release 3.4.0 of SQLite, a compile-time option was added:
Added the SQLITE_MIXED_ENDIAN_64BIT_FLOAT compile-time option to support ARM7 processors with goofy endianness.
I was having this same problem with an Arm920Tid device and my x86-based VM. The ARM device was writing the data, and I was trying to read it on the x86 VM (or on my Mac).
After adding this compile-time flag to the makefile for my ARM build, I was able to get sane values when I queried the DB on either platform.
For reference, I am using SQLite 3.7.14.
It should be portable: the file format says that REAL is stored in big-endian byte order, which would be architecture-invariant if both builds serialized it correctly.
A value of 7 stored within the database record header indicates that the corresponding database value is an SQL real (floating point number). In this case the blob of data contains an 8-byte IEEE floating point number, stored in big-endian byte order.
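To see concretely how this kind of garbling happens, here is a minimal Python sketch (my own illustration, not the SQLite code): it encodes an IEEE-754 double in the canonical big-endian order and then swaps the two 4-byte words, which is effectively what an ARM build with the "goofy" FPA mixed-endian float layout produces when it is built without SQLITE_MIXED_ENDIAN_64BIT_FLOAT. Whole-number REALs likely survive because SQLite can store them as plain integers on disk, and the integer encoding is not affected by this word swap.

import struct

value = 62.09  # taken from the question's first row

be = struct.pack('>d', value)        # canonical big-endian encoding, 8 bytes
swapped = be[4:] + be[:4]            # swap the two 4-byte words (FPA-style layout)

print(struct.unpack('>d', be)[0])       # 62.09 -- read back on a matching build
print(struct.unpack('>d', swapped)[0])  # a garbage-looking double, much like the
                                        # e-160 / e+185 values in the question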
Trying to get familiar with the datatype 'geometry', I want to import a GPX file into a table and show it on an OSM map. I'm using MariaDB/phpmyadmin because that's what my hoster provides. I'm using the 'geometry' type because I like the ST_ functions (instead of putting the lat-lon in two columns and developing/copying the needed algorithms).
After googling and youtubing for some time now, I'm at the point that I'm wondering if I'm doing things wrong or if I'm encountering bugs. Because I don't know what to expect, I hope someone can get me on the right track.
I started on a local PC with XAMPP 7.3.4 (phpmyadmin 4.8.5/mariadb 10.1.38) installed. I started with a column of datatype POINT, was surprised that phpmyadmin has the option to show the contents of a record on a map, and was disappointed that I only saw blue water. When editing a record, phpmyadmin showed a map and the data to be presented, which made clear that the SRID is '0'. I couldn't get the SRID set to '4326' until some text somewhere hinted at using a column of datatype GEOMETRY. But then only a world map showed up.
After a lot of trying, I decided to use the hosted environment (phpmyadmin 4.9.5/mariadb 10.3.22). To my surprise the point was visible on the map, only in a different part of the world. Looking at the lat-lon I saw that they were interchanged. Putting them in lon-lat order, the point was visible at the place where I expected it.
Because the hoster provides newer versions, I installed a newer XAMPP 7.4.6 (phpmyadmin 5.0.2/mariadb 10.4.1). To my big surprise my point wasn't showing up, just the world map again. So is it some configuration of the OSM map on the local machine that needs attention? The lat-lon still have to be interchanged.
Mapping is ok, lat-lon interchanged
Mapping wrong, lat-lon ok
Mapping of a walk in Paris. The first image is the mapping of a GPX in Prune, the second an import of the tracked points in MariaDB. Exactly the same mapping, I just had to interchange the points' lat-lon. So nothing wrong with the used SRID and/or coordinates, I think; it's just phpmyadmin taking lat as y and lon as x, instead of the expected lat as x and lon as y, which puts the walk in the sea off the coast of Somalia:
Mapping of walk in Paris presented in Prune
Mapping of same gps-points in phpmyadmin, lat-lon interchanged
Apart from the presentation of the data, I have difficulties when using the insert option of phpmyadmin. I only get data into the table in one pass when writing the SQL myself. The insert option generates SQL which gives errors; I have to edit that SQL and remove stray ' and \ characters. Comparing the versions I used, I noticed differences in the number and position of the ' and \ characters to remove.
I looked at the phpmyadmin issues; nothing relevant seems to be open. I can find closed ones that point to the sort of issues I'm experiencing. A lot of documentation on geo is of course about postgresql, some about mysql, but less about mariadb and phpmyadmin, so it's hard to find good directions.
So, my three biggest questions: Is it intended that you store lon-lat instead of lat-lon (or do I have to use another SRID)? Second, what do I have to configure to get the map working locally like it does with my hoster (if that's what's causing the problem)? Third, can people use the insert option of phpmyadmin without editing the generated SQL?
Thanks in advance.
If you are going to use great-circle distances, be sure to have a version of MySQL/MariaDB that includes st_distance_sphere. (I think that limits you to MySQL 8.0.)
If you are going to have code to "find the nearest", you will find that challenging. Here's my discussion of techniques for such. http://mysql.rjweb.org/doc.php/find_nearest_in_mysql (Also included is a Stored Function to do great-circle calculations.) That discusses using SPATIAL and other techniques.
Part of your issue is the mapping from y and x to whatever projection of the world your map uses. The spherical long-lat values describe a sphere, not a projection.
phpmyadmin is just a UI tool. It may not be smart enough to deal with some of the SPATIAL issues. I suggest you switch to the MySQL command-line tool and/or your application code. BTW, what language will you be writing in?
Backticks are used around table and column names. Quotes (' or ") are used around strings. In some contexts, a backslash (\) may need to be doubled or even quadrupled.
POINT is a 25-byte binary format. It is best to construct the value dynamically rather than spelling it out, as you can with decimal literals for INT and FLOAT.
And, yes, longitude comes first in POINT() and other SPATIAL thingies.
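For what it's worth, a minimal Python sketch of "construct the value dynamically" with mysql-connector-python might look like this (the connection details and the track_points table are made up for illustration; note the lon-lat order and the SRID passed to ST_GeomFromText):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="gpx")
cur = conn.cursor()

lat, lon = 48.8584, 2.2945   # Eiffel Tower; longitude goes first in the WKT
cur.execute(
    "INSERT INTO track_points (pt) VALUES (ST_GeomFromText(%s, 4326))",
    ("POINT({} {})".format(lon, lat),)
)
conn.commit()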
I am using Aerospike 3.40. A bin with a floating point value doesn't appear. I am using the Python client. Please help.
It is now supported as of Aerospike version 3.6.
The server does not natively support floats. It supports integers, strings, bytes, lists, and maps. Different clients handle the unsupported types in different ways. The PHP client, for example, will serialize the other types such as boolean and float and store them in a bytes field, then deserialize them on reads. The Python client will be doing that starting with the next release (>= 1.0.38).
However, this approach has the limitation of making it difficult for different clients (PHP and Python, for example) to read such serialized data, as it's not serialized using a common format.
One common way to get around this with floats is to turn them into integers. For example, if you have a bin called 'currency' you can multiply the float by 100, drop the fractional part, and store it as an integer. On the way out you simply divide by 100.
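A minimal sketch of that scaling idea in Python (the bin name and the factor of 100 are just illustrative):

def to_cents(amount):
    # Encode a currency amount as an integer number of cents.
    return int(round(amount * 100))

def from_cents(cents):
    # Decode the integer back into a float on the way out.
    return cents / 100.0

stored = to_cents(123.45)    # 12345 -- safe to put in an integer bin
print(from_cents(stored))    # 123.45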
A similar method is to store the integer part in one bin and the fractional digits in another, both of them integer types, and recombine them on the read. So 123.456789 gets stored as v_sig and v_mantissa:
(v_sig, v_mantissa) = str(123.456789).split('.')
on read you would combine the two
v = float(v_sig)+float("0."+str(v_mantissa))
FYI, floats are now supported natively as doubles on Aerospike server versions >= 3.6.0. Most clients, such as the Python and PHP ones, support casting floats to as_double.
A floating point number can be divided into two parts, before and after the decimal point, which can be stored in two bins and recombined in the application code.
However, creating more bins has a performance overhead in Aerospike, as a new malloc is used per bin.
If switching from Python to another language is not a use case, it is better to use a good serialization mechanism and save the value in a single bin. That way only one bin per floating point number is used, and the data size in Aerospike is reduced. Less data in Aerospike always helps speed in terms of network I/O, which is the main aim of caching.
So, let me start by saying I have had a hard time finding any documentation about this online - hence I am asking here. I am having to manually calculate the size of a row in Microsoft SQL Server 2008 here at work (I know this can be done via a query; however, due to some hardware issues, it is not presently possible). Either way, I figured this question might help others in the long run:
Within the database that I am working in, there are a number of columns with data type NUMBER() - some of which have the precision and scale set. Now, I do know that precision affects size; however, here is the question: what is the range of disk sizes for data type NUMBER in SQL Server, in bytes (any measurement is fine, actually)?
Some documentation provides the possible value ranges and the corresponding disk size. If you know of any documentation for this data type, please feel free to post it.
OBSERVATION:
I have found documentation for type NUMERIC. Is that the same - or a different version of - NUMBER?
As Andrew has mentioned, this is a user-defined type called NUMBER, since there is no built-in data type named NUMBER in SQL Server. No one here can tell you what characteristics this data type has.
You can execute the query below to find out the characteristics of this user-defined data type:
SELECT *
FROM sys.types
WHERE is_user_defined = 1
AND name = 'NUMBER'
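If you would rather inspect it from application code, here is a rough Python sketch using pyodbc; the connection string is a placeholder, and the self-join on sys.types simply resolves the user-defined type back to its underlying system type so you can see its max_length, precision and scale:

import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;"
                      "DATABASE=mydb;Trusted_Connection=yes")
cur = conn.cursor()
cur.execute("""
    SELECT ut.name AS udt_name,
           st.name AS base_type,
           ut.max_length, ut.precision, ut.scale
    FROM sys.types AS ut
    JOIN sys.types AS st
      ON st.system_type_id = ut.system_type_id
     AND st.user_type_id = st.system_type_id   -- keep only the built-in row
    WHERE ut.is_user_defined = 1
      AND ut.name = 'NUMBER'
""")
for row in cur.fetchall():
    print(row.udt_name, row.base_type, row.max_length, row.precision, row.scale)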
I want to make an ISO message with a field 64 message authentication code (MAC). I want to know what to compute the MAC over: the binary of the ISO message without field 64, or the binary of the ISO message with nothing set in field 64 but with a 1 at the end of the bitmap showing that there is something in field 64?
You're supposed to determine the fields you wish to use in the MAC calculation. Select specific fields and apply your MAC-ing algorithm.
Generally, you can go by the following guidelines:
Do not use either of the MAC fields (F64/F128) in the calculation of the MAC. Those fields are supposed to contain the result of the calculation; including them in the calculation would guarantee that the MAC value is always inconsistent.
Try to use mandatory fields, i.e. fields that you (or ISO) have designated as mandatory for the message type you're looking to MAC. For some vendors (like ACI, Base24), the Message Header, Message Type Identifier (MTI) and primary bitmap are all available to be included in the MAC calculation.
Ultimately, you're supposed to just select a handful of guaranteed fields and apply your MAC-ing algorithm. What would be the point of flagging F64 as enabled without populating it?
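As a rough illustration only (the field numbers, key, and algorithm below are placeholders; real networks mandate a specific MAC scheme, typically an ISO 9797-1 / X9.19 retail MAC with keys held in an HSM, not HMAC-SHA-256), the flow in Python looks something like this:

import hmac, hashlib

MAC_KEY = b"0123456789ABCDEF"   # placeholder -- in practice the key lives in an HSM

# Hypothetical already-encoded message parts, keyed for readability.
# Field 64 is deliberately absent: the MAC result will be placed there,
# while the primary bitmap already has bit 64 set to announce its presence.
message_parts = {
    "MTI":    b"0200",
    "bitmap": bytes.fromhex("5020000000800001"),  # bits 2, 4, 11, 41, 64 set
    2:        b"4000001234567899",                # PAN
    4:        b"000000012345",                    # amount
    11:       b"123456",                          # STAN
    41:       b"TERM0001",                        # terminal ID
}

def calculate_mac(parts):
    # Concatenate the selected fields in a fixed, agreed order and MAC the result.
    data = b"".join(parts[k] for k in ("MTI", "bitmap", 2, 4, 11, 41))
    return hmac.new(MAC_KEY, data, hashlib.sha256).digest()[:8]  # 8-byte MAC for F64

field_64 = calculate_mac(message_parts)   # append as field 64 when serializing
print(field_64.hex().upper())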
So I shipped an iPad app to the App Store last March. It contains a Core Data model with a prepopulated store of binary tree data. I had written a tiny 64-bit OS X app to convert and extract data from an arbitrary 200 MB SQLite database into my custom Core Data model, resulting in a neat 20 MB SQLite persistent store. All dandy, put into version control and never looked back.
Now my client has an updated database and they want to ship an updated app too. I figured this wouldn't take more than running the converter again and re-publishing to iTunesConnect, but no. My converter wouldn't run, or it wouldn't output the desired store. I spent hours and hours trying to figure out what was wrong, before finally reverting everything back to when I submitted my app to the store originally, and guess what, the converter wouldn't even convert that. Same code, same input database, same everything!
When looking at some data that my app produced, I found a weird discrepancy in how some values get stored in the Core Data model. In the three lines below, the group object is a Core Data entity. The identifier property is typed as Integer 16, as are all my integer-based values in the model. Funnily enough, when looking at the values after these three lines:
int identifier = 39899;
NSNumber *numIdentifier = [NSNumber numberWithInt:identifier];
group.identifier = numIdentifier;
I get these three values:
identifier: 39899
numIdentifier: 39899
group.identifier: -25637
Err, what? The number would of course stem from the source database, but even when inserting it manually, the last property on the Core Data entity gets garbled. Why on earth is that last line different? Negative what? Surely the value doesn't even come close to INT_MAX, so why does it look like it's wrapping a signed int? And why is it different now, when back in March, with the same code and same input database, it used to work just fine? The only thing I can remember changing since then is an upgrade to OS X Lion. But surely that couldn't have affected this, right?
Would someone know what I'm doing wrong, what I've maybe been doing wrong last march already, and how I can fix this mess?
A normal int is 32 bits, not 16. While 32 bits is plenty large enough for that value, 16 is not: a signed 16-bit integer tops out at 32767, so 39899 wraps around to 39899 - 65536 = -25637, the overflowed value you're seeing. You either need smaller numbers, or bigger variables.
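You can reproduce the wrap-around outside of Core Data with a quick Python sketch (ctypes here just emulates storing into a signed 16-bit slot):

import ctypes

value = 39899
as_int16 = ctypes.c_int16(value).value   # force the value into a signed 16-bit type
print(as_int16)                          # -25637 -- the same garbled number
# 39899 - 65536 == -25637: the top bit ends up set, so it reads back negative.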
See also
"CoreData and Integer Width in iOS 5"
http://www.seattle-ipa.org/2011/09/11/coredata-and-integer-width-in-ios-5/
"In iOS 3 or 4 you could get away with storing a wider integer than your model specifies but in iOS5 the width of integers is now being enforced."
Related Stack Overflow question
Core Data change property from Integer 16 to Integer 32