I have a project table:
CREATE TABLE DOC.BRAND
(
ID int PRIMARY KEY IDENTITY (1, 1),
project_id varchar(150),
project_name varchar(250)
)
For example, project_id should be PRJ001, PRJ002 based on identity column value as shown here:
+----+-------------+---------------+
| ID | project_id | project_name |
+----+-------------+---------------+
| 1 | PRJ001 | PROJECT1 |
| 2 | PRJ002 | PROJECT2 |
+----+-------------+---------------+
How can we achieve that using a stored procedure, or is there any table-level setting?
If you are using SQL Server (which seems likely based on the syntax), you can use a computed column:
CREATE TABLE DOC.BRAND (
ID int PRIMARY KEY IDENTITY (1, 1),
project_id as ('PRJ' + format(id, '000')),
project_name varchar(250)
);
Here is a db<>fiddle.
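For reference, a quick usage sketch against that definition (a sketch only; it assumes SQL Server 2012+, since FORMAT is used):

INSERT INTO DOC.BRAND (project_name) VALUES ('PROJECT1'), ('PROJECT2');

SELECT ID, project_id, project_name FROM DOC.BRAND;
-- ID 1 -> PRJ001, ID 2 -> PRJ002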
Related
I'm using Microsoft SQL Server Management Studio. I am trying to create this database, but I cannot figure out what I'm doing wrong with the timestamp; it has to default to the current date and time.
This is the Create Table query:
create table users(
id int identity primary key,
username varchar(255) unique,
created_at date, TIMESTAMP default)
For a date column, use DEFAULT GETDATE(); for a datetime column, use DEFAULT CURRENT_TIMESTAMP:
create table users(
id int identity primary key,
username varchar(255) unique,
created_date date DEFAULT GETDATE(),
created_date_time datetime default CURRENT_TIMESTAMP)
GO
insert into users ( username) values ('new user');
GO
1 rows affected
select * from users
GO
id | username | created_date | created_date_time
-: | :------- | :----------- | :----------------------
1 | new user | 2022-05-31 | 2022-05-31 13:19:41.830
db<>fiddle here
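If the table already exists, the same default can be attached afterwards. A sketch, assuming the column is called created_at and using a made-up constraint name:

ALTER TABLE users
    ADD CONSTRAINT DF_users_created_at DEFAULT GETDATE() FOR created_at;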
I have the following three tables:
CREATE TABLE "group" (
id SERIAL PRIMARY KEY,
name VARCHAR NOT NULL,
insert_date TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE TABLE customer (
id SERIAL PRIMARY KEY,
ext_id VARCHAR NOT NULL,
insert_date TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE TABLE customer_in_group (
id SERIAL PRIMARY KEY,
customer_id INT NOT NULL,
group_id INT NOT NULL,
insert_date TIMESTAMP WITH TIME ZONE NOT NULL,
CONSTRAINT customer_id_fk
FOREIGN KEY(customer_id)
REFERENCES customer(id),
CONSTRAINT group_id_fk
FOREIGN KEY(group_id)
REFERENCES "group"(id)
)
I need to find all of the groups that no customer_in_group row has referenced (via group_id) within the last two years. Once I have found them, I plan to delete the customer_in_group rows that reference them, and then delete the groups themselves.
So basically given the following two groups and the following 3 customer_in_groups
Group
| id | name | insert_date |
|----|--------|--------------------------|
| 1 | group1 | 2011-10-05T14:48:00.000Z |
| 2 | group2 | 2011-10-05T14:48:00.000Z |
Customer In Group
| id | group_id | customer_id | insert_date |
|----|----------|-------------|--------------------------|
| 1 | 1 | 1 | 2011-10-05T14:48:00.000Z |
| 2 | 1 | 1 | 2020-10-05T14:48:00.000Z |
| 3 | 2 | 1 | 2011-10-05T14:48:00.000Z |
I would expect to get back only group2, since group1 has a customer_in_group row referencing it that was inserted within the last two years.
I am not sure how I would write the query that would find all of these groups.
As a starting point, I would recommend adding ON DELETE CASCADE to the foreign keys of customer_in_group (a sketch of that change follows the query below).
Then you can simply delete the rows you want from "group", and the dependent rows in the child table are dropped automatically. To find the stale groups, you can use NOT EXISTS:
delete from "group" g
where not exists (
    select 1
    from customer_in_group cig
    where cig.group_id = g.id
      and cig.insert_date >= now() - interval '2 year'
);
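A sketch of the ON DELETE CASCADE change mentioned above, reusing the constraint name from the question (assuming PostgreSQL):

ALTER TABLE customer_in_group
    DROP CONSTRAINT group_id_fk,
    ADD CONSTRAINT group_id_fk
        FOREIGN KEY (group_id) REFERENCES "group"(id) ON DELETE CASCADE;

customer_id_fk could be rebuilt the same way if customers are ever deleted.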
I would like to create a table to house the following type of data
+--------+-----+----------+
| pk | ctr | name |
+--------+-----+----------+
| fish | 1 | herring |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+-----+----------+
pk is the primary key, string(100) not null.
ctr is a field I want to auto-increment by 1 within each distinct pk value.
I have tried the following:
create or replace table schema.animals (
pk string(100) not null primary key,
ctr integer not null default ( select NVL(max(ctr),0) + 1 from schema.animals ),
name string (1000) not null);
This produced the following error
SQL compilation error: error line 6 at position 52 aggregate functions
are not allowed as part of the specification of a default value
clause.
So I would have used the AUTOINCREMENT / IDENTITY property, like so:
AUTOINCREMENT | IDENTITY [ ( start_num , step_num ) | START num INCREMENT num ]
but it doesn't seem to support resetting the counter per unique pk.
I'm looking for any suggestions on how to solve this; thanks in advance for any help.
You cannot do this with an IDENTITY column. One option (shown here in SQL Server syntax) is an INSTEAD OF INSERT trigger that calculates the ctr value for every row of the inserted pseudo-table. For example:
CREATE TABLE dbo.animals (
pk nvarchar(100) NOT NULL,
ctr integer NOT NULL,
name nvarchar(1000) NOT NULL,
CONSTRAINT PK_animals PRIMARY KEY (pk, ctr)
)
GO
CREATE TRIGGER dbo.animals_before_insert ON dbo.animals INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO animals (pk, ctr, name)
SELECT
i.pk,
(ROW_NUMBER() OVER (PARTITION BY i.pk ORDER BY i.name) + ISNULL(a.max_ctr, 0)) AS ctr,
i.name
FROM inserted i
LEFT JOIN (SELECT pk, MAX(ctr) AS max_ctr FROM dbo.animals GROUP BY pk) a
ON i.pk = a.pk;
END
GO
INSERT INTO dbo.animals (pk, name) VALUES
('fish' , 'herring'),
('mammal' , 'dog'),
('mammal' , 'cat'),
('mammal' , 'whale'),
('bird' , 'penguin'),
('bird' , 'ostrich');
SELECT * FROM dbo.animals;
Result
pk      ctr   name
------- ----- ---------
bird    1     ostrich
bird    2     penguin
fish    1     herring
mammal  1     cat
mammal  2     dog
mammal  3     whale
Another method is to use a scalar user-defined function as the DEFAULT value, but it is slower: the trigger fires once for all rows, whereas the function is called once per row.
I have no idea why you would have a column called pk that is not the primary key. You cannot (easily) do what you want. I would recommend doing this as:
create or replace table schema.animals (
    animal_id int identity primary key,
    pk string(100) not null,
    name string(1000) not null
);
create view schema.v_animals as
    select a.*, row_number() over (partition by pk order by animal_id) as ctr
    from schema.animals a;
That is, calculate ctr when you need to use it, rather than storing it in the table.
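A hypothetical usage sketch of that approach (ctr only exists in the view and reflects insertion order within each pk):

insert into schema.animals (pk, name) values ('bird', 'penguin');
insert into schema.animals (pk, name) values ('bird', 'ostrich');

select pk, ctr, name
from schema.v_animals
order by pk, ctr;
-- bird | 1 | penguin
-- bird | 2 | ostrich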
I have the following:
1. A table "patients" where I store patient data.
2. A table "tests" where I store data about the tests done on each patient.
The problem is that I have two types of tests, "tests_1" and "tests_2".
So for each test done on a particular patient I store the type and the id of that type of test:
CREATE TABLE IF NOT EXISTS patients
(
id_patient INTEGER PRIMARY KEY,
name_patient VARCHAR(30) NOT NULL,
sex_patient VARCHAR(6) NOT NULL,
date_patient DATE
);
INSERT INTO patients values
(1,'Joe', 'Male' ,'2000-01-23');
INSERT INTO patients values
(2,'Marge','Female','1950-11-25');
INSERT INTO patients values
(3,'Diana','Female','1985-08-13');
INSERT INTO patients values
(4,'Laura','Female','1984-12-29');
CREATE TABLE IF NOT EXISTS tests
(
id_test INTEGER PRIMARY KEY,
id_patient INTEGER,
type_test VARCHAR(15) NOT NULL,
id_type_test INTEGER,
date_test DATE,
FOREIGN KEY (id_patient) REFERENCES patients(id_patient)
);
INSERT INTO tests values
(1,4,'test_1',10,'2004-05-29');
INSERT INTO tests values
(2,4,'test_2',45,'2005-01-29');
INSERT INTO tests values
(3,4,'test_2',55,'2006-04-12');
CREATE TABLE IF NOT EXISTS tests_1
(
id_test_1 INTEGER PRIMARY KEY,
id_patient INTEGER,
data1 REAL,
data2 REAL,
data3 REAL,
data4 REAL,
data5 REAL,
FOREIGN KEY (id_patient) REFERENCES patients(id_patient)
);
INSERT INTO tests_1 values
(10,4,100.7,1.8,10.89,20.04,5.29);
CREATE TABLE IF NOT EXISTS tests_2
(
id_test_2 INTEGER PRIMARY KEY,
id_patient INTEGER,
data1 REAL,
data2 REAL,
data3 REAL,
FOREIGN KEY (id_patient) REFERENCES patients(id_patient)
);
INSERT INTO tests_2 values
(45,4,10.07,18.9,1.8);
INSERT INTO tests_2 values
(55,4,17.6,1.8,18.89);
Now I think this approach is redundant, or at least not very good...
So I would like to improve queries like these:
select * from tests WHERE id_patient=4;
select * from tests_1 WHERE id_patient=4;
select * from tests_2 WHERE id_patient=4;
Is there a better approach?
In this example I have 1 test of type tests_1 and 2 tests of type tests_2 for patient with id=4.
Here is a fiddle
Add a table testtype (id_test, name_test) and use it as an FK for the id_type_test field in the tests table. Do not create separate tables for test_1 and test_2; see the sketch below.
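A minimal sketch of that suggestion, reusing the column names above (the ALTER TABLE ... ADD FOREIGN KEY syntax assumes an engine such as MySQL, and existing id_type_test values would need to be remapped to the new type ids):

CREATE TABLE IF NOT EXISTS testtype
(
    id_test   INTEGER PRIMARY KEY,
    name_test VARCHAR(15) NOT NULL
);
INSERT INTO testtype VALUES (1, 'test_1');
INSERT INTO testtype VALUES (2, 'test_2');
ALTER TABLE tests
    ADD FOREIGN KEY (id_type_test) REFERENCES testtype(id_test);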
It depends on the requirements.
For OLTP I would do something like the following (a rough DDL sketch follows the tables):
STAFF:
ID | FORENAME | SURNAME | DATE_OF_BIRTH | JOB_TITLE | ...
-------------------------------------------------------------
1 | harry | potter | 2001-01-01 | consultant | ...
2 | ron | weasley | 2001-02-01 | pathologist | ...
PATIENT:
ID | FORENAME | SURNAME | DATE_OF_BIRTH | ...
-----------------------------------------------
1 | hermiony | granger | 2013-01-01 | ...
TEST_TYPE:
ID | CATEGORY | NAME | DESCRIPTION | ...
--------------------------------------------------------
1 | haematology | abg | arterial blood gasses | ...
REQUEST:
ID | TEST_TYPE_ID | PATIENT_ID | DATE_REQUESTED | REQUESTED_BY | ...
----------------------------------------------------------------------
1 | 1 | 1 | 2013-01-02 | 1 | ...
RESULT_TYPE:
ID | TEST_TYPE_ID | NAME | UNIT | ...
---------------------------------------
1 | 1 | co2 | kPa | ...
2 | 1 | o2 | kPa | ...
RESULT:
ID | REQUEST_ID | RESULT_TYPE_ID | DATE_RESULTED | RESULTED_BY | RESULT | ...
-------------------------------------------------------------------------------
1 | 1 | 1 | 2013-01-02 | 2 | 5 | ...
2 | 1 | 2 | 2013-01-02 | 2 | 5 | ...
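A rough DDL sketch of this layout (a sketch only: names and types are assumptions, and PATIENT and STAFF are assumed to exist as patient(id) and staff(id)):

CREATE TABLE test_type (
    id          INTEGER PRIMARY KEY,
    category    VARCHAR(50)  NOT NULL,
    name        VARCHAR(50)  NOT NULL,
    description VARCHAR(200)
);
CREATE TABLE request (
    id             INTEGER PRIMARY KEY,
    test_type_id   INTEGER NOT NULL REFERENCES test_type(id),
    patient_id     INTEGER NOT NULL REFERENCES patient(id),
    date_requested DATE    NOT NULL,
    requested_by   INTEGER NOT NULL REFERENCES staff(id)
);
CREATE TABLE result_type (
    id           INTEGER PRIMARY KEY,
    test_type_id INTEGER NOT NULL REFERENCES test_type(id),
    name         VARCHAR(50) NOT NULL,
    unit         VARCHAR(20)
);
CREATE TABLE result (
    id             INTEGER PRIMARY KEY,
    request_id     INTEGER NOT NULL REFERENCES request(id),
    result_type_id INTEGER NOT NULL REFERENCES result_type(id),
    date_resulted  DATE,
    resulted_by    INTEGER REFERENCES staff(id),
    result         VARCHAR(100)
);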
A concern I have with the above is the unit of the test result; these can sometimes (not often) change. It may be better to place the unit in the RESULT table.
Also consider breaking these out by the major test categories, as my understanding is they can be handled quite differently, e.g. histopathology and x-rays are not resulted in the same way as haematology and microbiology.
For OLAP I would combine REQUEST and RESULT into one table, adding derived columns such as REQUEST_TO_RESULT_MINS, and make a single dimension from RESULT_TYPE and TEST_TYPE, etc.
You can do this in a few ways; without knowing all the different types of cases you need to deal with, the simplest would be five tables:
Patients (like you described it)
Tests (like you described it)
TestType (like Declan_K suggested)
TestResultCode
TestResults
TestResultCode describes each value that is stored for each test. TestResults is a pivoted table that can store any number of test results per test:
Create table TestResultCode
(
idTestResultCode int
, Code varchar(10)
, Description varchar(200)
, DataType int -- 1= Real, 2 = Varchar, 3 = int, etc.
);
Create Table TestResults
(
    idPatient int           -- FK to patients
    , idTest int            -- FK to tests
    , idTestType int        -- FK to the test type
    , idTestResultCode int  -- FK to TestResultCode
    , ResultsI real         -- value used when DataType = 1 (real)
    , ResultsV varchar(100) -- value used when DataType = 2 (varchar)
    , Resultsb int          -- value used when DataType = 3 (int)
    , Created datetime
);
So, basically, you can fit the results you wanted to put into the tables "tests_1" and "tests_2", and those of any other tests you can think of, into this structure.
The application reading this table can load each test and all its values. Of course the application needs to know how to deal with each case, but you can store any type of test in this structure; see the example query below.
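For example, a hypothetical query that loads every stored value for one test (column names as sketched above):

SELECT tr.idTest,
       trc.Code,
       trc.DataType,
       tr.ResultsI,
       tr.ResultsV,
       tr.Resultsb
FROM TestResults tr
JOIN TestResultCode trc
  ON trc.idTestResultCode = tr.idTestResultCode
WHERE tr.idTest = 1;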
In a database that contains many tables, I need to write a SQL script to insert data if it does not already exist.
Table currency
| id | Code | lastupdate | rate |
+--------+---------+------------+-----------+
| 1 | USD | 05-11-2012 | 2 |
| 2 | EUR | 05-11-2012 | 3 |
Table client
| id | name | createdate | currencyId|
+--------+---------+------------+-----------+
| 4 | tony | 11-24-2010 | 1 |
| 5 | john | 09-14-2010 | 2 |
Table: account
| id | number | createdate | clientId |
+--------+---------+------------+-----------+
| 7 | 1234 | 12-24-2010 | 4 |
| 8 | 5648 | 12-14-2010 | 5 |
I need to insert to:
currency (id=3, Code=JPY, lastupdate=today, rate=4)
client (id=6, name=Joe, createdate=today, currencyId=Currency with Code 'USD')
account (id=9, number=0910, createdate=today, clientId=Client with name 'Joe')
Problem:
The script must check whether the row exists before inserting new data.
The script must allow us to set a foreign key on the new row that refers to a row already in the database (such as currencyId in the client table).
The script must allow us to insert the current datetime into a column (such as createdate in the client table).
The script must allow us to set a foreign key on the new row that refers to a row inserted earlier in the same script (such as clientId in the account table).
Note: I tried the following SQL statement, but it solves only the first problem:
INSERT INTO Client (id, name, createdate, currencyId)
SELECT 6, 'Joe', '05-11-2012', 1
WHERE not exists (SELECT * FROM Client where id=6);
This query runs without any error, but as you can see I wrote createdate and currencyId manually; I need to take the currency id from a SELECT statement with a WHERE clause (I tried to substitute the 1 with a SELECT statement, but the query failed).
This is an example of what I need; in my database, this script has to insert more than 30 rows into more than 10 tables.
Any help is appreciated.
You wrote:
I tried to substitute the 1 with a SELECT statement, but the query failed
But I wonder: why did it fail? What did you try? This should work:
INSERT INTO Client (id, name, createdate, currencyId)
SELECT
6,
'Joe',
current_date,
(select c.id from currency as c where c.code = 'USD') as currencyId
WHERE not exists (SELECT * FROM Client where id=6);
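The same pattern handles the account row that must reference the client inserted just above; a sketch in the same style (the leading zero in 0910 suggests the number column is really a string, hence the quotes):

INSERT INTO account (id, number, createdate, clientId)
SELECT
    9,
    '0910',
    current_date,
    (select cl.id from Client as cl where cl.name = 'Joe') as clientId
WHERE not exists (SELECT * FROM account where id=9);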
It looks like you can already work out whether the data exists.
Here is a quick bit of code written for SQL Server / Sybase that I think answers your basic questions:
create table currency(
id numeric(16,0) identity primary key,
code varchar(3) not null,
lastupdated datetime not null,
rate smallint
);
create table client(
id numeric(16,0) identity primary key,
createddate datetime not null,
currencyid numeric(16,0) foreign key references currency(id)
);
insert into currency (code, lastupdated, rate)
values('EUR',GETDATE(),3)
--inserts the date and last allocated identity into client
insert into client(createddate, currencyid)
values(GETDATE(), @@IDENTITY)
go
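On SQL Server, note that @@IDENTITY returns the last identity generated in the session including any produced by triggers, so SCOPE_IDENTITY() is usually the safer choice there (Sybase only has @@identity); a sketch of that variant:

--SQL Server variant: only pick up the identity generated in this scope
insert into client(createddate, currencyid)
values(GETDATE(), SCOPE_IDENTITY())
go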