SQLite: issues with storing high values?

I'm working on a project where:
A dataset is collected every 10 seconds and stored in an SQLite file on a server.
After being processed, the data is sent to an SQL database every 5 minutes.
Afterwards, the data in the SQLite file, which isn't needed anymore, is deleted.
Data collection continues, and at the moment the id doesn't get reset.
I couldn't work out from the documentation (https://www.sqlite.org/datatype3.html) how large a value an integer in SQLite can store.
In MySQL databases the maximum value of an integer column is 2,147,483,647. If my script ran for 10 years, the id would be 31,449,600. Although this would be much lower than the maximum, I wondered
whether there is any problem with storing high values in SQLite.
Could this affect performance?

That page mentions that integer numbers can be stored in up to 8 bytes, i.e., 64 bits.
As mentioned elsewhere, this means that the largest allowed integer is 9,223,372,036,854,775,807.

An integer in SQLite can store values up to 9,223,372,036,854,775,807 (8 bytes signed, so 63 bits of magnitude, as 1 bit is for the sign), which is the same as a MySQL BIGINT (a type MySQL offers in addition to the standard INT).
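As a quick sanity check, here is a minimal sketch (SQLite; the table and column names are hypothetical) showing that even the largest allowed value round-trips intact:

-- Hypothetical table; an INTEGER PRIMARY KEY in SQLite is a 64-bit rowid alias.
CREATE TABLE readings (id INTEGER PRIMARY KEY, payload TEXT);
-- Insert the largest value an SQLite integer can hold.
INSERT INTO readings (id) VALUES (9223372036854775807);
SELECT id FROM readings;  -- returns 9223372036854775807 unchanged

An id of 31,449,600 after 10 years is nowhere near that limit, and since SQLite stores integers in a variable-length encoding (up to 8 bytes, using fewer for smaller values), small ids stay small on disk; merely allowing large values costs nothing.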

Related

Should I define a column type from actual length or nth power of 2 (SQL Server)?

Should I define a column type from the actual length, or round up to the nth power of 2?
In the first case, I have a table column that stores no more than 7 characters.
Should I use NVARCHAR(8), since there may be an implicit conversion inside SQL Server that allocates 8 characters of space and truncates automatically (I heard this somewhere)?
If not, which should it be (assuming the fixed length is 7): NCHAR(7) or NCHAR(8)?
Is there any performance difference between these two cases?
You should use the actual length of the string. Now, if you know that the value will always be exactly 7 characters, then use CHAR(7) rather than VARCHAR(7).
The reason you see powers-of-2 is for columns that have an indeterminate length -- a name or description that may not be fixed. In most databases, you need to put in some maximum length for the varchar(). For historical reasons, powers-of-2 get used for such things, because of the binary nature of the underlying CPUs.
Although I almost always use powers of 2 in these situations, I can think of almost no real performance differences. There is one: in some databases the actual length of a varchar(255) is stored using 1 byte, whereas a varchar(256) uses 2 bytes. That is a pretty minor difference, even when multiplied over millions of rows.
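For concreteness, a minimal sketch (SQL Server; the table and column names are hypothetical) of the two situations discussed above:

-- Hypothetical table: size columns to the data, not to a power of 2.
CREATE TABLE codes (
    code  NCHAR(7),      -- value is always exactly 7 characters, so a fixed CHAR fits
    label NVARCHAR(50)   -- indeterminate length: declare the real maximum you expect
);

There is no implicit rounding up to NVARCHAR(8); SQL Server stores and enforces exactly the length you declare.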

How to execute query longer than 32767 characters on Firebird?

I'm developing a Java web application that deals with large amounts of text (HTML code strings encoded using base64), which I need to save in my database. I'm using Firebird 2.0, and every time I try to insert a new record with strings longer than 32767 characters, I receive the following error:
GDS Exception. 335544726. Error reading data from the connection.
I have done some research about it, and apparently this is the character limit for Firebird, both for query strings and records in the database. I have tried a couple of things, like splitting the string in the query and then concatenating the parts, but it didn't work. Does anyone know any workarounds for this issue?
If you need to save a large amount of text data in the database, just use BLOB fields; VARCHAR field size is limited to 32 KB.
For better performance you can use binary BLOBs and store zipped data in them.
Firebird query strings are limited to 64 kilobytes in Firebird 2.5 and earlier. The maximum length of a VARCHAR field is 32,765 bytes (which means it can only store 8,191 characters when using UTF-8!). The maximum size of a row (with blobs counting for 8 bytes each) is 64 kilobytes as well.
If you want to store values longer than 32 kilobytes, you need to use a BLOB SUB_TYPE TEXT, and you need to use a prepared statement to set the value.
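A minimal sketch of the BLOB approach both answers describe (Firebird DDL; the table and column names are hypothetical):

-- Hypothetical table: BLOB SUB_TYPE TEXT is not subject to the 32 KB VARCHAR limit.
CREATE TABLE page_snapshot (
    id   INTEGER NOT NULL PRIMARY KEY,
    html BLOB SUB_TYPE TEXT
);
-- Pass the value as a parameter of a prepared statement rather than inlining it,
-- so the query string itself stays far below the 64 KB limit:
INSERT INTO page_snapshot (id, html) VALUES (?, ?);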

SQL data type that automatically sets to field length of value

Does such a thing exist or are you always required to set the max field length manually?
We have a varchar field in our database where the length of the value can be anything from 0 all the way to 800 or greater. Is there a datatype that will automatically set itself to the length of the value being entered?
(this is specifically for SQL Server - but should apply to most other RDBMS as well)
You need to define the maximum length, but the varchar type will only ever store as much data as needed. So you have to define the max, but not the actual size; that's handled automatically.
So if you have varchar(800) and store 50 characters, it takes up 52 bytes; if you store 100 characters, 102 bytes are used (there are 2 bytes of overhead per entry).
At the same time, I would still argue you should NOT just define all columns as varchar(max) (2 GB max size), even though it might look tempting and convenient. Read What's the Point of Using VARCHAR(n) Anymore? for a great explanation of why not to do that.
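To make the storage behavior concrete, a minimal sketch (SQL Server; the table and column names are hypothetical):

-- Hypothetical table: the declared maximum caps input; storage tracks the actual value.
CREATE TABLE notes (
    body VARCHAR(800)
);
INSERT INTO notes (body) VALUES (REPLICATE('x', 50));
SELECT DATALENGTH(body) FROM notes;  -- 50 bytes of data, plus 2 bytes of row overhead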

Is varchar(128) better than varchar(100)?

Quick question. Does it matter, from the point of view of storing data, whether I use round decimal field limits or powers of two (say 16, 32, 64 instead of 10, 20, 50)?
I ask because I wonder if this has anything to do with clusters on the HDD?
Thanks!
VARCHAR(128) is better than VARCHAR(100) if you need to store strings longer than 100 bytes.
Otherwise, there is very little to choose between them; you should choose the one that better fits the maximum length of the data you might need to store. You won't be able to measure the performance difference between them. All else apart, the DBMS probably only stores the data you send, so if your average string is, say, 16 bytes, it will only use 16 (or, more likely, 17 - allowing 1 byte for storing the length) bytes on disk. The bigger size might affect the calculation of how many rows can fit on a page - detrimentally. So choosing the smallest size that is adequate makes sense - waste not, want not.
So, in summary, there is precious little difference between the two in terms of performance or disk usage, and aligning to convenient binary boundaries doesn't really make a difference.
If it were a C program, I'd spend some time thinking about that, too. But with a database I'd leave it to the DB engine.
DB programmers have spent a lot of time thinking about the best memory layout, so just tell the database what you need and it will store the data in a way that suits the DB engine best (usually).
If you want to align your data, you'll need exact knowledge of the internal data organization: How is the string stored? One, two or four bytes to store the length? Is it stored as a plain byte sequence or encoded in UTF-8, UTF-16, or UTF-32? Does the DB need extra bytes to identify NULL or > MAXINT values? Maybe the string is stored as a NUL-terminated byte sequence; then one more byte is needed internally.
Also, with VARCHAR it is not necessarily true that the DB will always allocate 100 (or 128) bytes for your string. Maybe it just stores a pointer to where the space for the actual data is.
So I'd strongly suggest using VARCHAR(100) if that is your requirement. If the DB decides to align it somehow, there's room for extra internal data, too.
The other way around: let's assume you use VARCHAR(128) and all things come together: the DB allocates 128 bytes for your data. Additionally it needs 2 more bytes to store the actual string length, which makes 130 bytes, and then it could be that the DB aligns the data to the next (let's say 32-byte) boundary: the actual space needed on disk is now 160 bytes 8-}
Yes, but it's not that simple. Sometimes 128 can be better than 100, and sometimes it's the other way around.
So what is going on? varchar only allocates space as necessary, so if you store 'hello world' in a varchar(100) it will take exactly the same amount of space as in a varchar(128).
The question is: If you fill up the rows, will you hit a "block" limit/boundary or not?
Databases store their data in blocks. These have a fixed size, for example 512 bytes (this value can be configured in some databases). So the question is: how many blocks does the DB have to read to fetch each row? Rows that span several blocks will need more I/O, so this will slow you down.
But again: this doesn't depend on the theoretical maximum size of the columns but on a) how many columns you have (each column needs a little bit of space even when it's empty or null), b) how many fixed-width columns you have (number/decimal, char), and finally c) how much data you have in variable columns.
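A minimal sketch (SQL Server syntax; the table and values are hypothetical) demonstrating the point all three answers agree on, that the declared maximum is a cap rather than a reservation:

-- Hypothetical table: identical values consume identical space in both columns.
CREATE TABLE samples (
    a VARCHAR(100),
    b VARCHAR(128)
);
INSERT INTO samples VALUES ('hello world', 'hello world');
SELECT DATALENGTH(a), DATALENGTH(b) FROM samples;  -- 11 and 11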

What's the size of an SQL INT(N)?

Simple question. I have tried searching on Google and after about 6 searches, I figured it would be faster here.
How big is an int in SQL?
-- table creation statement.
intcolumn INT(N) NOT NULL,
-- more table creation statement.
How big is that INT(N) element? What's its range? Is it 2^N, or is it N bytes long (a range of 2^(8N))? Or is it something else I have no idea about?
It depends on the database. MySQL has an extension where INT(N) means an INT with a display width of N decimal digits; this information is maintained in the metadata.
An INT(4), for example, is still 4 bytes, and values 10000 and greater can be stored (and probably displayed, but this depends on how the application uses the result set).
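A minimal sketch (MySQL; the table and column names are hypothetical) of the display-width behavior described above:

-- Hypothetical table: ZEROFILL makes the display width visible in query output.
CREATE TABLE widths (
    n INT(4) ZEROFILL   -- display width 4; storage is still 4 bytes and the range is unchanged
);
INSERT INTO widths VALUES (42), (10000);
SELECT n FROM widths;   -- 0042 and 10000: the width pads short values, never truncates long ones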