Decimal(3,2) values in MySQL are always 9.99

I have a field, justsomenum, of type decimal(3,2) in MySQL that seems to always have values of 9.99 when I insert something like 78.3. Why?
This is what my table looks like:
mysql> describe testtable;
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| firstname | varchar(20) | YES | | NULL | |
| lastname | varchar(20) | YES | | NULL | |
| justsomenum | decimal(3,2) | YES | | NULL | |
+---------------+--------------+------+-----+---------+----------------+
When I insert something like this and then select:
mysql> insert into testtable (firstname, lastname, justsomenum) values ("Lloyd", "Christmas", 83.5);
I get 9.99 when I select.
mysql> select * from testtable;
+----+-----------+-----------+---------------+
| id | firstname | lastname | justsomenum |
+----+-----------+-----------+---------------+
| 1 | Shooter | McGavin | 9.99 |
| 2 | Lloyd | Christmas | 9.99 |
| 3 | Lloyd | Christmas | 9.99 |
| 4 | Lloyd | Christmas | 9.99 |
+----+-----------+-----------+---------------+
4 rows in set (0.00 sec)
This is MySQL 5.0.86 on Mac OS X 10.5.8.
Any ideas? Thanks.

The maximum value for decimal(3, 2) is 9.99, so when you try to insert something larger than that, it is capped to 9.99. Try decimal(5, 2) or something else if you want to store larger numbers.
The first argument is the total number of digits of precision, and the second argument is the number of digits after the decimal point.
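For example, a minimal sketch against the testtable from the question: widening the column lets a value like 83.5 be stored as entered.
ALTER TABLE testtable MODIFY justsomenum DECIMAL(5,2);  -- maximum value becomes 999.99
INSERT INTO testtable (firstname, lastname, justsomenum) VALUES ('Lloyd', 'Christmas', 83.5);
-- justsomenum is now stored as 83.50 instead of being capped at 9.99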

In older versions of MySQL, DECIMAL(3,2) meant 3 digits to the left of the decimal point and 2 to the right.
The MySQL devs have since changed it so that the first argument (in this case 3) is the total number of digits in the value (9.99 has three digits), while the second argument (in this case 2) stays the same: the number of decimal places.
It's a little confusing. Basically, for DECIMAL columns, the number of digits you want before the decimal point plus the number you want after it is what you set as the first argument.
Then, as has already been said, if you try to insert a number larger than the column's maximum, MySQL caps it for you. Whether it silently caps the value with a warning or rejects it with an error depends on your MySQL configuration (sql_mode). I go into more detail about this on my blog.
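As a rough illustration (a sketch, assuming MySQL 5.0-era defaults where non-strict mode truncates with a warning), the same INSERT behaves differently under strict mode:
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
INSERT INTO testtable (firstname, lastname, justsomenum) VALUES ('Lloyd', 'Christmas', 83.5);
-- Expect something like: ERROR 1264: Out of range value for column 'justsomenum'
-- In the default (non-strict) mode the value is stored as 9.99 and SHOW WARNINGS reports the adjustment.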

Related

SQL issue with specific timestamp

I am currently trying to optimize some workflows here. One of our workflows involves calculating a time offset in hours from a given date, and that involves selecting from a number of tables and applying some business logic. That part of the problem is fairly well solved. What I am trying to do is to calculate a final timestamp based upon a timestamp value and an offset (in hours).
My source table looks like:
MariaDB [ingest]> describe tmp_file_3;
+---------------+---------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+---------------------+------+-----+---------+-------+
| mci_idx | bigint(20) unsigned | YES | | NULL | |
| mcg_idx | bigint(20) unsigned | YES | | NULL | |
| ingested_time | timestamp | YES | | NULL | |
| hours_persist | int(11) | YES | | NULL | |
| active | tinyint(1) | YES | | NULL | |
+---------------+---------------------+------+-----+---------+-------+
And I am populating my new table with the following SQL:
MariaDB [ingest]> insert into master_expiration_index (select mci_idx, TIMESTAMPADD(HOUR, hours_persist, ingested_time) as expiration_time from tmp_file_3 where active=1);
ERROR 1292 (22007): Incorrect datetime value: '2023-03-12 02:20:15' for column `ingest`.`master_expiration_index`.`expiration_time` at row 347025
The SQL is correct to my understanding, since if I add a LIMIT 10 the query executes without any issues. The questions I have are:
What is wrong with that datetime value? It appears to be in the correct format
How do I figure out which row is causing the issue?
How do I fix this in the general case?
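Note: 2023-03-12 02:20:15 falls inside the hour skipped by the daylight-saving spring-forward in many time zones, which is a common cause of this error for TIMESTAMP columns. Under that assumption, a sketch using only the question's own table and columns would list the source rows whose computed expiration lands in the gap:
-- Sketch: find rows whose computed expiration falls in the non-existent hour on 2023-03-12.
SELECT mci_idx, ingested_time, hours_persist,
       TIMESTAMPADD(HOUR, hours_persist, ingested_time) AS expiration_time
FROM tmp_file_3
WHERE active = 1
  AND TIMESTAMPADD(HOUR, hours_persist, ingested_time) >= '2023-03-12 02:00:00'
  AND TIMESTAMPADD(HOUR, hours_persist, ingested_time) <  '2023-03-12 03:00:00';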

How To Check Numerical Format in SQL Server 2008

I am converting some existing Oracle queries to MSSQL Server (2008) and can't figure out how to replicate the following Regex check:
SELECT SomeField
FROM SomeTable
WHERE NOT REGEXP_LIKE(TO_CHAR(SomeField), '^[0-9]{2}[.][0-9]{7}$');
That finds all results where the format of the number starts with 2 positive digits, followed by a decimal point, and 7 decimal places of data: 12.3456789
I've tried using STR, CAST, CONVERT, but they all seem to truncate the decimal to 4 decimal places for some reason. The truncating has prevented me from getting reliable results using LEN and CHARINDEX. Manually adding size parameters to STR gets slightly closer, but I still don't know how to compare the original numerical representation to the converted value.
SELECT SomeField
, STR(SomeField, 10, 7)
, CAST(SomeField AS VARCHAR)
, LEN(SomeField )
, CHARINDEX(STR(SomeField ), '.')
FROM SomeTable
+------------------+------------+---------+-----+-----------+
| Orig | STR | Cast | LEN | CHARINDEX |
+------------------+------------+---------+-----+-----------+
| 31.44650944 | 31.4465094 | 31.4465 | 7 | 0 |
| 35.85609 | 35.8560900 | 35.8561 | 7 | 0 |
| 54.589623 | 54.5896230 | 54.5896 | 7 | 0 |
| 31.92653899 | 31.9265390 | 31.9265 | 7 | 0 |
| 31.4523333333333 | 31.4523333 | 31.4523 | 7 | 0 |
| 31.40208955 | 31.4020895 | 31.4021 | 7 | 0 |
| 51.3047869443893 | 51.3047869 | 51.3048 | 7 | 0 |
| 51 | 51.0000000 | 51 | 2 | 0 |
| 32.220633 | 32.2206330 | 32.2206 | 7 | 0 |
| 35.769247 | 35.7692470 | 35.7692 | 7 | 0 |
| 35.071022 | 35.0710220 | 35.071 | 6 | 0 |
+------------------+------------+---------+-----+-----------+
What you want to do does not make sense in SQL Server.
Oracle supports a number data type that has a variable precision:
if a precision is not specified, the column stores values as given.
There is no corresponding data type in SQL Server. You can have a variable-precision number (float/real) or a fixed-precision number (decimal/numeric). However, both apply to ALL values in a column, not to individual values within a row.
The closest you could do is:
where somefield >= 0 and somefield < 100
Or if you wanted to insist that there is a decimal component:
where somefield >= 0 and somefield < 100 and floor(somefield) <> somefield
However, you might have valid integer values that this would filter out.
This answer gave me an option that works in conjunction with checking the decimal position first.
SELECT SomeField
FROM SomeTable
WHERE SomeField IS NOT NULL
AND CHARINDEX('.', SomeField ) = 3
AND LEN(CAST(CAST(REVERSE(CONVERT(VARCHAR(50), SomeField , 128)) AS FLOAT) AS BIGINT)) = 7
While I understand this is terrible by nearly all metrics, it satisfies the requirements.
The basis of checking formatting on this data type is inherently flawed, as pointed out by several posters, but for this very isolated use case I wanted to document the workaround.
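For reference, T-SQL's LIKE does accept bracketed character ranges, so a pattern along these lines is a rough stand-in for the Oracle regex. It is only a sketch: it is no better than the float-to-string conversion feeding it, so the truncation problems described in the question still apply.
-- Approximate ^[0-9]{2}[.][0-9]{7}$ with T-SQL LIKE character ranges.
SELECT SomeField
FROM SomeTable
WHERE CONVERT(varchar(50), SomeField) NOT LIKE
      '[0-9][0-9].[0-9][0-9][0-9][0-9][0-9][0-9][0-9]';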

Why query is still so fast when I operate a non-indexing column?

I am learning about database indexing.
Here are the indexes on one of my tables; the table has 330k records.
mysql> show index from employee;
+----------+------------+-------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
+----------+------------+-------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| employee | 0 | PRIMARY | 1 | id | A | 297383 | NULL | NULL | | BTREE | | | YES | NULL |
| employee | 0 | ak_employee | 1 | personal_code | A | 297383 | NULL | NULL | | BTREE | | | YES | NULL |
| employee | 1 | idx_email | 1 | email | A | 297383 | NULL | NULL | | BTREE | | | YES | NULL |
+----------+------------+-------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
As you can see, there are only three indexes on this table.
Now I want to query with a WHERE on the birth_date column. I expected it to be very slow because there is no index on birth_date, but when I tried the query, it was very fast.
mysql> select sql_no_cache *
-> from employee
-> where birth_date > '1955-11-11'
-> limit 100
-> ;
100 rows in set, 1 warning (0.04 sec)
So I am confused:
Why is it still so fast without an index?
If it is already this fast, why do we still need indexes?
This is your query:
select sql_no_cache *
from employee
where birth_date > '1955-11-11'
limit 100
There is no index on birth_date, so the query starts reading rows directly from the data pages. For each row, it compares birth_date and, if the condition matches, returns the row. When it finds 100 rows (due to the limit), it stops.
Presumably, it finds 100 rows quite quickly. After all, the median age of the United States is about 38 -- which is (as I write this) a birth year of 1981. By far, most people were born after 1955.
The query would be much slower if you had an order by or group by. That would require reading all the data before returning anything.
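As a rough illustration (a sketch against the employee table from the question), you can see the difference once sorting is involved, and how an index on birth_date removes the need to scan everything:
-- With an ORDER BY, every matching row must be read and sorted before LIMIT applies,
-- so the full-table scan now costs what you originally expected.
EXPLAIN SELECT * FROM employee
WHERE birth_date > '1955-11-11'
ORDER BY birth_date
LIMIT 100;
-- An index on birth_date lets MySQL read rows already in order and stop after 100.
ALTER TABLE employee ADD INDEX idx_birth_date (birth_date);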

Ordering a varchar column in MySQL in an Excel-like manner

I have a varchar column with mixed data- strings, integers, decimals, blank strings, and null values. I'd like to sort the column the same way that Excel would, first sorting numbers and then sorting the strings. For example:
1
2
3
3.5
10
11
12
alan
bob
carl
(blank/null)
(blank/null)
I've tried doing something like 'ORDER BY my_column+0' which sorts the numbers correctly but not the strings. I was hoping someone might know of an efficient way to accomplish this.
MartinofD's suggestion works for the most part and if I expand on it a little bit I can get exactly what I want:
SELECT a FROM test
ORDER BY
a IS NULL OR a='',
a<>'0' AND a=0,
a+0,
a;
Pretty ugly though and I'm not sure if there are any better options.
That's because my_column+0 is equal for all strings (0).
Just use ORDER BY my_column+0, my_column
mysql> SELECT a FROM test ORDER BY a+0, a;
+-------+
| a |
+-------+
| NULL |
| alan |
| bob |
| carl |
| david |
| 1 |
| 2 |
| 3 |
| 3.5 |
| 10 |
| 11 |
| 12 |
+-------+
12 rows in set (0.00 sec)
If you strictly need the numbers to be above the strings, here's a solution (though I'm not sure how quick this will be on big tables):
mysql> SELECT a FROM test ORDER BY (a = CONCAT('', 0+a)) DESC, a+0, a;
+-------+
| a |
+-------+
| 1 |
| 2 |
| 3 |
| 3.5 |
| 10 |
| 11 |
| 12 |
| alan |
| bob |
| carl |
| david |
| NULL |
+-------+
12 rows in set (0.00 sec)
This works:
SELECT a FROM test ORDER BY a IS NULL OR a='', a<>'0' AND a=0, a+0, a;
Any more efficient/elegant solution would be welcome however.
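For readability, here is the same ORDER BY with each sort key annotated (behaviour unchanged):
SELECT a FROM test
ORDER BY
    a IS NULL OR a = '',  -- 1: push NULLs and blank strings to the end
    a <> '0' AND a = 0,   -- 2: non-numeric strings cast to 0, so this sorts them after the real numbers
    a + 0,                -- 3: numeric order for the numbers
    a;                    -- 4: alphabetical order for the strings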

SQL LIKE question

I was wondering if there's a drawback (other than bad practice) to using something like this
SELECT * FROM my_table WHERE id LIKE '1';
where id is an integer. I know you're supposed to use id=1, but I am writing a Java program, and if everything can use LIKE it will be a lot easier for me. Also, so far everything works fine; I get the correct query results, so if there is no drawback I will continue doing it like this.
Edit: I am using MySQL.
MySQL will allow it, but will ignore the index:
mysql> describe METADATA_44;
+---------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| AtextId | int(11) | NO | PRI | NULL | |
| num | varchar(128) | YES | | NULL | |
| title | varchar(128) | YES | | NULL | |
| file | varchar(128) | YES | | NULL | |
| context | varchar(128) | YES | | NULL | |
| source | varchar(128) | YES | | NULL | |
+---------+--------------+------+-----+---------+-------+
6 rows in set (0.00 sec)
mysql> explain select * from METADATA_44 where Atextid like '7';
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
| 1 | SIMPLE | METADATA_44 | ALL | PRIMARY | NULL | NULL | NULL | 591 | Using where |
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
mysql> explain select * from METADATA_44 where Atextid=7;
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
| 1 | SIMPLE | METADATA_44 | const | PRIMARY | PRIMARY | 4 | const | 1 | |
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)
You'd need to look at the query execution plan on your RDBMS to verify that LIKE with no wildcards is treated as efficiently as an = would be. A quick test in SQL Server shows that it gives you an index scan rather than a seek, so I guess it doesn't take that into account when generating the plan; for SQL Server, using = would be much more efficient. I don't have a MySQL install to test against.
Edit: Just to update this, SQL Server seems to handle it fine and does a seek when the data type is varchar. When it is run against an int column, though, you get the scan. This is because it does an implicit conversion of the int column to varchar, so it can't use the index.
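A quick way to check this yourself in SQL Server (the table and column names below are hypothetical) is to compare the plans for the two predicates:
-- Hypothetical example: IntId is an indexed int column.
SELECT * FROM dbo.SomeTable WHERE IntId = 7;       -- index seek expected
SELECT * FROM dbo.SomeTable WHERE IntId LIKE '7';  -- scan: IntId is implicitly converted to varchar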
You are better off writing your query as
SELECT * FROM my_table WHERE id = 1;
Otherwise MySQL will have to typecast '1' to int, which is the type of the id column, so there is a small performance penalty.
When you know the type of the column, supply the value in that type.
Speed. [15-char filler as there's not much more to say]
Without any wildcards, LIKE should be fine for your needs if speed/efficiency is not something you're worried about.