How to deal with silent mysql sum() integer overflow? - sql

I've got this table with an int(11) column and hundreds of millions of rows. When I run a query like
SELECT SUM(myIntColumn) as foo FROM myTable;
the return value does not make sense -- it is smaller than the largest single value in the column. The values in this column max out somewhere around 500 million, and a signed int should be able to handle ~2 billion, so I assume MySQL is experiencing an integer overflow and keeping mum about it.
What to do?
Miscellaneous details that might matter, but probably don't:
mysql Ver 14.12 Distrib 5.0.75, for debian-linux-gnu (x86_64) using readline 5.2
mysqld Ver 5.0.75-0ubuntu10 for debian-linux-gnu on x86_64 ((Ubuntu))
Linux kona 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:45:36 UTC 2009 x86_64 GNU/Linux

You can double the available range by casting the values to UNSIGNED:
SELECT SUM(CAST(myIntColumn AS UNSIGNED)) ...
There is a bigger data type, BIGINT, but unfortunately you cannot CAST() to it. If you want to make use of it, you must change your column to that type:
ALTER TABLE myTable CHANGE COLUMN myIntColumn myBigIntColumn BIGINT UNSIGNED ...
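If altering the column isn't an option, one way to confirm that overflow really is the culprit is to force the summation into DECIMAL arithmetic and compare the two results (a sketch; the 20-digit precision is an assumption that comfortably covers hundreds of millions of rows of ~500m values, and CAST ... AS DECIMAL requires MySQL 5.0.8 or later):

```sql
-- Compare the plain SUM against a DECIMAL-based sum; if the two
-- disagree, the plain SUM overflowed its integer accumulator.
SELECT
    SUM(myIntColumn)                        AS plain_sum,
    SUM(CAST(myIntColumn AS DECIMAL(20,0))) AS decimal_sum
FROM myTable;
```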

Related

ORA-00999: invalid view name - whats the problem?

CREATE VIEW ["Counties above average NUMBEROFINFECTIONS"] AS
SELECT NAME, TOTALNUMBEROFINFECTIONS
FROM COUNTRY
WHERE TOTALNUMBEROFINFECTIONS > (SELECT AVG(TOTALNUMBEROFINFECTIONS) FROM COUNTRY)
In Oracle (or any database following the SQL standard), remove the brackets and use double quotes:
CREATE VIEW "Counties above average NUMBEROFINFECTIONS" AS ...
Square brackets only work in SQL Server:
CREATE VIEW [Counties above average NUMBEROFINFECTIONS] AS ...
That said, choosing a view name that needs to be escaped at all is not good practice.
Also Bryan Dellinger brought to my attention:
In Oracle 12.2 and above the maximum object name length is 128 bytes.
In Oracle 12.1 and below the maximum object name length is 30 bytes.
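The simplest way to sidestep both the quoting problem and the length limits is to pick an identifier that needs no escaping at all. A sketch using the query from the question (the view name here is just an illustration):

```sql
-- An unquoted, escape-free name avoids ORA-00999 entirely.
CREATE VIEW counties_above_avg_infections AS
SELECT NAME, TOTALNUMBEROFINFECTIONS
FROM COUNTRY
WHERE TOTALNUMBEROFINFECTIONS >
      (SELECT AVG(TOTALNUMBEROFINFECTIONS) FROM COUNTRY);
```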

Why doesn't PostgreSQL use unsigned integers for IDs? Wouldn't that give twice as many possible records?

According to this documentation, PostgreSQL doesn't support unsigned integers.
While I get that it makes the type resolution system less complicated, I don't see how this is practical for, say, auto-incrementing IDs.
Wouldn't using an unsigned integer as an auto-increment ID allow twice as many possible records as a regular int (4294967296 instead of 2147483648)?
I know it's only a difference of one bit, but it's a bit that you will otherwise never use.
Thanks!
By convention, IDs are positive integers starting at 1 (to my limited knowledge this isn't enforced by any standard, but I would not be utterly surprised if it were). The PostgreSQL serial data type implements this convention, and the values are auto-generated as well.
If you really wish to implement your own approach, you can do so like this:
create sequence epictable_seq
  minvalue -2147483648
  start -2147483648
  increment 1
  no maxvalue
  cache 1;
create table epictable
(
  mytable_key int unique not null,
  moobars     varchar(40) not null,
  foobars     date
);
insert into epictable(mytable_key, moobars,foobars)
values
(nextval('epictable_seq'),'delicious moobars','2012-05-01')
, (nextval('epictable_seq'),'worldwide interblag','2012-05-02')
;
select *
from epictable
;
mytable_key | moobars | foobars
------------|---------------------|-----------
-2147483648 | delicious moobars | 2012-05-01
-2147483647 | worldwide interblag | 2012-05-02
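For comparison, the conventional approach described above -- IDs auto-generated starting at 1 -- needs no explicit sequence at all (a sketch using the same hypothetical table):

```sql
-- serial creates and wires up the backing sequence automatically,
-- starting at 1 by default.
create table epictable
(
  mytable_key serial primary key,
  moobars     varchar(40) not null,
  foobars     date
);

-- The key column is simply omitted from the insert.
insert into epictable(moobars, foobars)
values ('delicious moobars', '2012-05-01');
```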

Merge records and columns

Data from table A:
id desc
1 huba
1 blub
3 foo
4 bar
And I'd like to have
id desc
1 huba, blub
3 foo
4 bar
So records with the same id should be merged and the desc concatenated.
Unfortunately I can't use string() or concat(); I get an error if I try to use these functions.
Sybase Version:
Sybase version: Adaptive Server Enterprise/15.5/EBF 19902 SMP ESD#5.1/P/x86_64/Enterprise Linux/asear155/2594/64-bit/FBO/Wed Jun 6 01:20:27 2012
Sybase ASE does not have any functions similar to list(), group_concat(), etc.
While Sybase ASE does provide some XML support, it doesn't provide support for the 'for xml / path()' construct.
And while it's possible to create a funky workaround in ASE 16 (using a table #variable and a user-defined function) ... you're not running ASE 16.
The net result is that you'll need to write some sort of looping construct to accomplish what you want, e.g. a cursor that loops through the rows in the table.
NOTE: I'd have to think about the 'merge' idea but since you're running ASE 15.5 and 'merge' isn't available until ASE 15.7 ... I'll put that idea on the back burner for now.
ps - OK, there is a single-query solution but it involves using Application Context Functions (ACFs) (eg, get_appcontext(), set_appcontext()); but that's a very, Very, VERY messy solution ...
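A rough sketch of the cursor approach mentioned above (hedged: this is generic ASE-flavoured T-SQL, untested on 15.5; "tableA" stands for the question's table, and the question's column "desc" is renamed "descr" here because desc is a reserved word):

```sql
create table #merged (id int, descs varchar(255))

-- Walk the rows in id order, accumulating descriptions per id.
declare merge_cur cursor for
    select id, descr from tableA order by id

declare @id int, @d varchar(40), @prev int, @acc varchar(255)
open merge_cur
fetch merge_cur into @id, @d
while @@sqlstatus = 0          -- 0 = fetch succeeded in ASE
begin
    if @id = @prev
        select @acc = @acc + ', ' + @d   -- same id: append
    else
    begin
        if @prev is not null
            insert #merged values (@prev, @acc)  -- flush previous group
        select @prev = @id, @acc = @d            -- start a new group
    end
    fetch merge_cur into @id, @d
end
if @prev is not null
    insert #merged values (@prev, @acc)          -- flush the last group
close merge_cur
deallocate cursor merge_cur

select * from #merged
```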

how to change datatype of a column in sybase query?

One of my queries to the Sybase server is returning garbage data. After some investigation I found that one of the columns, with datatype double, is causing the issue. If I don't select that particular column, the query returns the correct result. The column in question is a double with a large number of decimal places. I tried to use the ROUND function up to 4 decimal places, but I still get corrupt data. How can I specify the column in my query so that I get correct data?
I am using windows 7 box and Sybase Adaptive server enterprise driver. (Sybase client 15.5). I am using 32 bit drivers.
Sample results:
Incorrect result using sybase ASE driver on windows 7 box
"select ric_code as ric, adjusted_weight as adjweight from v_temp_idx_comp where index_ric_code='.AXJO' and ric_code='AQG.AX'"
ric adjweight
1 AQG.AX NA
2 \020 NA
3 <NA> NA
Correct result on windows xp box using Merant driver
"select ric_code, adjusted_weight from v_temp_idx_comp where index_ric_code='.AXJO' and ric_code='AQG.AX'"
ric_code adjusted_weight
1 AQG.AX 0.3163873547
Regards,
Alok
You can try converting the value to NUMERIC, like this:
select ric_code as ric, weight, convert(numeric(16,4), adjusted_weight) as adjweight, currency as currency
from v_temp_idx_comp
where index_ric_code='.AXJO'
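If the CONVERT to NUMERIC still comes back corrupted through the driver, another thing to try is forcing the value to a string server-side with str(), so the float never crosses the wire as a binary double (a sketch; the width of 20 and scale of 10 are guesses to be adjusted to your data):

```sql
-- str() formats an approximate-numeric value as a fixed-width string.
select ric_code as ric,
       str(adjusted_weight, 20, 10) as adjweight
from v_temp_idx_comp
where index_ric_code='.AXJO' and ric_code='AQG.AX'
```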

how to link tables together using timestamp sql, mysql

Here is how my tables are currently set up:
Dataset
- Dataset_Id - int
- Timestamp - timestamp
Flowrate
- Flowrate_id - int
- Dataset_id - int (currently all NULL)
- TimeStamp - timestamp
- FlowRate - float
I want to update the Flowrate table's Dataset_id column so that its IDs correspond to the Dataset table's Dataset_Ids. The Dataset table has close to 400,000 rows. How can I do this so that it does not take forever? This data came from different data loggers, which is why I need to link the rows by their timestamps.
UPDATE
Flowrate JOIN Dataset ON (Flowrate.TimeStamp = Dataset.Timestamp)
SET Flowrate.Dataset_id = Dataset.Dataset_Id
This is completely independent of Python, of course (what a weird tag to put here -- as if MySQL cared what language you're using to send it fixed SQL statements?!). It will be fast if and only if the tables are properly indexed.
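Concretely, "properly indexed" here means an index on the timestamp column of each side of the join (a sketch; the index names are made up):

```sql
-- Without these, the UPDATE ... JOIN degenerates into a scan of
-- Dataset for every Flowrate row.
ALTER TABLE Dataset  ADD INDEX idx_dataset_ts  (`Timestamp`);
ALTER TABLE Flowrate ADD INDEX idx_flowrate_ts (`TimeStamp`);
```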
Absolutely weird capitalization irregularities in your schema, BTW -- it would drive me bonkers if anybody used lowercase vs. uppercase at random spots in column names that are so obviously meant to be identical! Nevertheless, I've tried to reproduce it exactly, though I hope you reconsider this absurd style choice.