DECIMAL values being converted to INT + 1 on their own - SQL

I have a C# app where the user fills a form to store a product in a SQL Server database.
The problem is that every time a product is stored (through the user filling a form), the price (a decimal) is automatically converted to an int and has 1 added to it.
I initially thought it was an issue with the app. However, the registration process is pretty simple and I didn't find any error there, so I inserted a row directly from SQL Server and the issue presented itself. This tells me the issue is in SQL Server, not in the app.
Executing
insert into product (code, description, unit_price, stock, category_code)
values (7, 'Window Cleaner', 20.50, 20, 3)
results in the price being stored as 21.
This is the table definition:
CODE INT NOT NULL,
DESCRIPTION VARCHAR(30) NOT NULL,
UNIT_PRICE DECIMAL NOT NULL,
STOCK INT NOT NULL,
CATEGORY_CODE INT NOT NULL,
CONSTRAINT PK_PRODUCT PRIMARY KEY(CODE),
CONSTRAINT FK_CATEGORY_CODE FOREIGN KEY (CATEGORY_CODE) REFERENCES Category(CODE),
CONSTRAINT PRODUCT_POSITIVE_VALUES CHECK(UNIT_PRICE > 0 AND CODE >= 0 AND STOCK >= 0)

You have used the following to define your column:
UNIT_PRICE DECIMAL NOT NULL
This has no precision or scale, so it uses the defaults: precision 18 and scale 0. A scale of 0 keeps no digits after the decimal point, which is effectively an int, so whenever you insert or update a value it is rounded to a whole number: 20.50 rounds up to 21, which is where your "+ 1" comes from. To solve your problem, define the column with an appropriate precision and scale, e.g.
UNIT_PRICE DECIMAL(9,2) NOT NULL
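To see the default in action, here is a minimal T-SQL sketch (safe to run in any scratch database):

DECLARE @no_scale DECIMAL = 20.50;       -- DECIMAL with no arguments is DECIMAL(18,0)
DECLARE @with_scale DECIMAL(9,2) = 20.50;
SELECT @no_scale AS rounded_to_21, @with_scale AS kept_as_20_50;
-- @no_scale comes back as 21: scale 0 keeps no decimal places,
-- and SQL Server rounds (rather than truncates) on the conversion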

Related

Small number out of DOUBLE range

I created a table on a MariaDB server with the following definition; note the longitude and latitude fields.
Create Table geo_data (
geo_data_id int NOT NULL AUTO_INCREMENT,
place_id int NOT NULL,
longitude DOUBLE(18,18) SIGNED,
latitude DOUBLE(18,18) SIGNED,
Primary Key (geo_data_id),
Foreign Key (place_id) References place (place_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
When I try to insert data into the geo_data table using
Insert into geo_data (place_id, longitude, latitude) Values (1, 1.2, 3.4);
I receive the following error message:
Error: ER_WARN_DATA_OUT_OF_RANGE: Out of range value for column 'longitude' at row 1
I guess I am missing something here, since I don't believe 1.2 could in any way be out of range of a Double(18,18). So what on earth is going on here?
Your column is defined as DOUBLE(18,18). The first number is the precision (the total number of digits, including decimals); the second is the scale (the number of digits after the decimal point).
Giving the same value to both means that your value can never reach 1: all 18 digits sit after the decimal point, so the largest storable value is 0.999999999999999999.
You want to decrease the scale to leave room for integer digits, e.g. DOUBLE(18,6), which gives you 12 digits before the decimal point.
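As a minimal sketch of the fix, keeping the DOUBLE(M,D) style and column names from the question:

-- 18 total digits, 6 after the decimal point, leaving 12 integer digits
ALTER TABLE geo_data
MODIFY longitude DOUBLE(18,6) SIGNED,
MODIFY latitude DOUBLE(18,6) SIGNED;

-- The failing insert now succeeds: 1.2 and 3.4 fit comfortably
Insert into geo_data (place_id, longitude, latitude) Values (1, 1.2, 3.4);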

How do I validate these columns?

I'm working on a database and I need to validate some columns under the Payment schema.
For example, if a credit card is not used for payment, CreditCardNumber, CardHoldersName, and CreditCardExpDate should be NULL. If a credit card is used, the CreditCardExpDate value should be greater than the PaymentDate.
PaymentDue can allow NULL but should not be greater than PaymentAmount.
I've searched online but what I get are complex triggers and procedures which are not really helpful.
create table Payment.Payments(
Payment_ID int identity (200, 21),
Payment_Amount money constraint chk_Payment_Amount check (Payment_Amount > '0'),
Payment_Date date, -- is to be greater than the end date which is on another table
Credit_Card_Number int,
Card_Holders_Name char (50),
Credit_Card_Expiry_Date date,
Project_ID int Foreign Key references ProjectDetails.Projects(Project_ID),
Payment_Due money -- should not be greater than Payment_Amount but can still accept NULL
);
The comments show the validation problems I'm having.
I created a trigger for Payment_Date, but I can only get it to fire when the inserted date is greater than the current date. I need it to fire when it is less than the end date (the end date is in another table):
CREATE TRIGGER paymentdate
ON Payment.Payments
FOR INSERT
AS
DECLARE @ModifiedDate date
SELECT @ModifiedDate = Payment_Date FROM Inserted
IF (@ModifiedDate > getdate())
BEGIN
PRINT 'The modified date should be the current date. Hence, cannot insert.'
ROLLBACK TRANSACTION
END
I'm reading a lot between the lines here, but I think this is what you're after (Note I have used the dbo schema though):
USE Sandbox;
GO
CREATE TABLE dbo.Payments (
Payment_ID int identity(200, 21),
Payment_Amount money CONSTRAINT chk_Payment_Amount CHECK (Payment_Amount > '0'),
Payment_Date date,
Credit_Card_Number char(19), --note datatype change from int to char; see the comment below
Card_Holders_Name varchar (50), --note I've used varchar instead. Names aren't all 50 characters long
Credit_Card_Expiry_Date date,
--Project_ID int FOREIGN KEY REFERENCES ProjectDetails.Projects(Project_ID) --Commented out as I don't have this table
Payment_Due money CONSTRAINT chk_Payment_Due CHECK (Payment_Due > '0' OR Payment_Due IS NULL)
);
GO
--Credit Card format validation
ALTER TABLE dbo.Payments ADD CONSTRAINT ck_Credit_Card CHECK (Credit_Card_Number LIKE '[0-9][0-9][0-9][0-9] [0-9][0-9][0-9][0-9] [0-9][0-9][0-9][0-9] [0-9][0-9][0-9][0-9]' OR Credit_Card_Number IS NULL);
--Add card details must be there, or none.
ALTER TABLE dbo.Payments ADD CONSTRAINT ck_Card_Details CHECK ((Credit_Card_Number IS NULL AND Card_Holders_Name IS NULL AND Credit_Card_Expiry_Date IS NULL)
OR (Credit_Card_Number IS NOT NULL AND Card_Holders_Name IS NOT NULL AND Credit_Card_Expiry_Date IS NOT NULL))
GO
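Two more rules from the question (expiry date after payment date; Payment_Due not exceeding Payment_Amount) can be enforced the same way. A sketch against the same table, with constraint names of my own choosing:

ALTER TABLE dbo.Payments ADD CONSTRAINT ck_Expiry_After_Payment
CHECK (Credit_Card_Expiry_Date IS NULL OR Credit_Card_Expiry_Date > Payment_Date);
ALTER TABLE dbo.Payments ADD CONSTRAINT ck_Due_Not_Above_Amount
CHECK (Payment_Due IS NULL OR Payment_Due <= Payment_Amount);
GO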
DROP TABLE dbo.Payments;
Comment made on the Card Number's datatype:
The datatype int for a credit card number is a bit of an oxymoron. The maximum value for an int is 2,147,483,647 and a card number is made up of 4 sets of 4 digit numbers (i.e. 9999 9999 9999 9999). Even as a number, that's far higher than the max value of an int. I'd suggest using a char(19) and making a constraint on the format as well.
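The end-date rule from the question isn't covered above. As a minimal sketch, assuming ProjectDetails.Projects has an End_Date column (an assumed name, since that table isn't shown), a set-based trigger avoids the single-row limitation of the question's version:

CREATE TRIGGER trg_Payment_Date
ON dbo.Payments
FOR INSERT
AS
BEGIN
    -- Reject the whole statement if any inserted payment predates its project's end date.
    -- EXISTS over Inserted handles multi-row inserts, which a single scalar variable cannot.
    IF EXISTS (SELECT 1
               FROM Inserted AS i
               JOIN ProjectDetails.Projects AS p
                 ON p.Project_ID = i.Project_ID
               WHERE i.Payment_Date < p.End_Date) -- End_Date is an assumed column name
    BEGIN
        RAISERROR('Payment_Date must not be earlier than the project end date.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;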

How can I set a size limit for an "int" datatype in PostgreSQL 9.5

I am experimenting with PostgreSQL, coming from MySQL, and I simply wish to create a table with this piece of code, which is valid in MySQL:
CREATE TABLE flat_10
(
pk_flat_id INT(30) DEFAULT 1,
rooms INT(10) UNSIGNED NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (flat_id)
);
I get the error
ERROR: syntax error at or near "("
LINE 3: pk_flat_id integer(30) DEFAULT 1,
I have conducted searches on the web and found no answer, and I can't seem to find an answer in the PostgreSQL manual. What am I doing wrong?
I explicitly want to set a limit to the number of digits that can be inserted into the "pk_flat_id" field
Your current table definition does not impose a "size limit" in any way. In MySQL, the parameter for the int data type is only a hint for applications about the display width of the column.
You can store the value 2147483647 in an int(1) without any problems.
If you want to limit the values to be stored in an integer column you can use a check constraint:
CREATE TABLE flat_10
(
pk_flat_id bigint DEFAULT 1,
rooms integer NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (pk_flat_id),
constraint valid_number
check (pk_flat_id <= 999999999)
);
The answer is that you use numeric or decimal types, which are documented in the PostgreSQL manual.
Note that these types can take an optional scale argument as well, but you don't want that here; the precision alone limits the number of digits. So:
CREATE TABLE flat_10
(
pk_flat_id DECIMAL(30) DEFAULT 1,
rooms DECIMAL(10) NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (pk_flat_id)
);
I don't think that Postgres supports unsigned decimals. And, it seems like you really want serial types for your keys and the long number of digits is superfluous.
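As a minimal sketch of that suggestion, combining a serial key with the check-constraint approach from the first answer (the constraint name is mine):

CREATE TABLE flat_10
(
pk_flat_id serial PRIMARY KEY, -- auto-incrementing integer key
rooms integer NOT NULL,
room_label CHAR(1) NOT NULL,
CONSTRAINT valid_number CHECK (pk_flat_id <= 999999999) -- digit limit enforced by the constraint
);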
Changing integer to numeric works.
CREATE TABLE flat_10
(
pk_flat_id bigint DEFAULT 1,
rooms numeric NOT NULL,
room_label CHAR(1) NOT NULL
);

How do I set the precision and scale in SQL (Access)?

I am trying to create a table in Access.
I have the following code:
CREATE TABLE Class Enrollement (
OfferNo INTEGER PRIMARY KEY,
StdNo Text(9) NULL,
EnrGrade Decimal(2) Percision(8) scale(4) NULL
);
EnrGrade needs to be a decimal, Precision of 8, Scale of 4, and 2 decimal places.
The last line of code is not correct. How would I do this?
I believe you're looking for the following, where the first value is the precision (the total number of digits) and the second is the scale (the number of digits after the decimal point):
CREATE TABLE ClassEnrollment (
OfferNo INTEGER PRIMARY KEY,
StdNo Text(9) NULL,
EnrGrade Decimal(8, 2) NULL
);
You must enable ANSI-92 Query Mode. After that in your query you can write:
CREATE TABLE Offering (
OfferNo INTEGER PRIMARY KEY,
StdNo Text(9) NULL,
EnrGrade Decimal(8,4) NULL
);
Precision of 8, Scale of 4, and 2 decimal places
These requirements appear contradictory. A decimal column with a precision of 8 and a scale of 4 can store up to 4 decimal places.
Perhaps the intention of the spec is to display at least two decimal places? e.g.
SQL DDL:
CREATE TABLE ClassEnrollement (
OfferNo INTEGER PRIMARY KEY,
StdNo NVARCHAR(9),
EnrGrade DECIMAL(8, 4)
);
SQL DML:
SELECT OfferNo, StdNo,
FORMAT$(EnrGrade, '0.00##')
AS EnrGrade_formatted
FROM ClassEnrollement;
Or perhaps the extra numeric scale is to enable custom rounding? The Access DECIMAL type rounds by truncation, a quirk that is often missed because all other numeric types use bankers' rounding. A rule of thumb is to store an extra place of numeric scale so that the inherent rounding, whatever it is, does not affect the raw value being stored; custom rounding can then be applied later. Two extra places may simply be over-engineering ;)
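As a small sketch of that rule of thumb: store at scale 4 and round only on output (hypothetical data; Access's Round() uses bankers' rounding):

-- The raw value keeps all 4 decimal places; rounding happens only in the query
SELECT EnrGrade,
Round(EnrGrade, 2) AS EnrGrade_rounded
FROM ClassEnrollement;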