I'm using Microsoft SQL Server Management Studio. I am trying to create this database, but I cannot figure out what I'm doing wrong with the timestamp; it has to default to now.
This is the Create Table query:
create table users(
    id int identity primary key,
    username varchar(255) unique,
    created_at date, TIMESTAMP default)
For a date column, use default GETDATE(); for a datetime column, use default CURRENT_TIMESTAMP:
create table users(
    id int identity primary key,
    username varchar(255) unique,
    created_date date default GETDATE(),
    created_date_time datetime default CURRENT_TIMESTAMP)
GO
insert into users (username) values ('new user');
GO
1 rows affected
select * from users
GO
id | username | created_date | created_date_time
-: | :------- | :----------- | :----------------------
1 | new user | 2022-05-31 | 2022-05-31 13:19:41.830
db<>fiddle here
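If you are on SQL Server 2008 or newer, a datetime2 column with SYSDATETIME() is generally preferred over datetime for new tables; a minimal variant of the same table, assuming nothing else changes:
create table users(
    id int identity primary key,
    username varchar(255) unique,
    -- SYSDATETIME() returns datetime2, with higher precision than GETDATE()
    created_at datetime2 default SYSDATETIME()
)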
I have a table with logs which grew in size (~100M records) to the point where querying even the latest entries takes a considerable amount of time.
I am wondering whether there is a smart way to make access to the latest records (largest PK values) fast while also keeping inserts (appends) fast. I do not want to delete any data if possible; there is already a mechanism that monthly deletes logs older than N days.
Ideally, what I want is for the query
select * from t_logs order by log_id desc fetch first 50 rows only
to run in a split second (up to a reasonable row count, say 500, if that matters).
The table is defined as follows:
CREATE TABLE t_logs (
    log_id NUMBER NOT NULL,
    method_name VARCHAR2(128 CHAR) NOT NULL,
    msg VARCHAR2(4000 CHAR) NOT NULL,
    type VARCHAR2(1 CHAR) NOT NULL,
    time_stamp TIMESTAMP(6) NOT NULL,
    user_created VARCHAR2(50 CHAR) DEFAULT user NOT NULL
);
CREATE UNIQUE INDEX logs_pk ON t_logs ( log_id ) REVERSE;
ALTER TABLE t_logs ADD (
    CONSTRAINT logs_pk PRIMARY KEY ( log_id )
);
I am not really a DBA, so I am not familiar with all the performance tuning methods. I just use the logs a lot and I was wondering if I could do something non-invasive to the data to ease my pain. To the best of my knowledge, here is what I tried: re-computing statistics / re-analyzing the table (no effect), and looking at the query plan:
-------------------------------------------
| Id | Operation | Name |
-------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | VIEW | |
| 2 | WINDOW SORT PUSHED RANK| |
| 3 | TABLE ACCESS FULL | T_LOGS |
-------------------------------------------
I would expect the query to leverage the index to perform the lookup; why doesn't it? Maybe this is the reason it takes so long to find the results?
Version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Mr Cave, in the accepted answer, seems to be right:
alter table t_logs drop constraint logs_pk;
drop index logs_pk;
create unique index logs_pk on t_logs ( log_id );
alter table t_logs add (
    constraint logs_pk primary key ( log_id )
);
Queries run super fast now, plan looks as expected:
-------------------------------------------------
| Id | Operation | Name |
-------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | VIEW | |
| 2 | WINDOW NOSORT STOPKEY | |
| 3 | TABLE ACCESS BY INDEX ROWID| T_LOGS |
| 4 | INDEX FULL SCAN DESCENDING| LOGS_PK |
-------------------------------------------------
100 million rows isn't that large.
Why are you creating a reverse-key index for your primary key? Sure, that has the potential to reduce contention on inserts but were you really constrained by contention? That would be pretty unusual. Maybe you have an unusual environment. But my guess is that someone was trying to prematurely optimize the design for inserts without considering what that did to queries.
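If you want to confirm whether an existing index is reverse-key, the data dictionary shows it; a quick check, assuming you own the table:
select index_name, index_type
from user_indexes
where table_name = 'T_LOGS';
-- a reverse-key index reports INDEX_TYPE as 'NORMAL/REV'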
My wager would be that a nice, basic design would be more than sufficient for your needs:
CREATE TABLE t_logs (
    log_id NUMBER NOT NULL,
    method_name VARCHAR2(128 CHAR) NOT NULL,
    msg VARCHAR2(4000 CHAR) NOT NULL,
    type VARCHAR2(1 CHAR) NOT NULL,
    time_stamp TIMESTAMP(6) NOT NULL,
    user_created VARCHAR2(50 CHAR) DEFAULT user NOT NULL
);
CREATE UNIQUE INDEX logs_pk ON t_logs ( log_id );
ALTER TABLE t_logs ADD (
CONSTRAINT logs_pk PRIMARY KEY ( log_id )
);
If you can't recreate the primary key for some reason, create an index on time_stamp and change your queries to use that:
CREATE INDEX log_ts ON t_logs( time_stamp );
SELECT *
FROM t_logs
ORDER BY time_stamp DESC
FETCH FIRST 100 ROWS ONLY;
I created a table with 4 columns:
create table conversation(
    user_name varchar(200),
    employer_name varchar(200),
    message text,
    date_sent timestamp
)
Now I want to alter the date_sent column without removing it: I want to set a default value of current_timestamp for this column. What must I do?
You can use
ALTER TABLE conversation ALTER COLUMN
date_sent SET DEFAULT current_timestamp;
create table conversation(
    user_name varchar(200),
    employer_name varchar(200),
    message text,
    date_sent timestamp );
ALTER TABLE conversation ALTER COLUMN
date_sent SET DEFAULT current_timestamp;
insert into conversation
(user_name) values
('me');
select * from conversation;
user_name | employer_name | message | date_sent
:-------- | :------------ | :------ | :-------------------------
me | null | null | 2022-04-01 09:38:13.674547
db<>fiddle here
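One caveat: SET DEFAULT only affects rows inserted after the change; existing rows keep whatever date_sent value they already have (including NULL). If you also want to backfill those, a sketch, assuming NULL means the value was never set:
update conversation
set date_sent = current_timestamp
where date_sent is null;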
I have the following three tables:
CREATE TABLE "group" (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    insert_date TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE TABLE customer (
    id SERIAL PRIMARY KEY,
    ext_id VARCHAR NOT NULL,
    insert_date TIMESTAMP WITH TIME ZONE NOT NULL
);
CREATE TABLE customer_in_group (
    id SERIAL PRIMARY KEY,
    customer_id INT NOT NULL,
    group_id INT NOT NULL,
    insert_date TIMESTAMP WITH TIME ZONE NOT NULL,
    CONSTRAINT customer_id_fk
        FOREIGN KEY(customer_id)
        REFERENCES customer(id),
    CONSTRAINT group_id_fk
        FOREIGN KEY(group_id)
        REFERENCES "group"(id)
);
I need to find all of the groups that have not had any customer_in_group rows reference them within the last two years. I then plan to delete all of the customer_in_group rows that reference those groups, and finally delete the groups themselves.
So basically, given the following two groups and the following three customer_in_group rows:
Group
| id | name | insert_date |
|----|--------|--------------------------|
| 1 | group1 | 2011-10-05T14:48:00.000Z |
| 2 | group2 | 2011-10-05T14:48:00.000Z |
Customer In Group
| id | group_id | customer_id | insert_date |
|----|----------|-------------|--------------------------|
| 1 | 1 | 1 | 2011-10-05T14:48:00.000Z |
| 2 | 1 | 1 | 2020-10-05T14:48:00.000Z |
| 3 | 2 | 1 | 2011-10-05T14:48:00.000Z |
I would expect to get back just group2, since group1 has a customer_in_group row that was inserted within the last two years.
I am not sure how I would write the query that would find all of these groups.
As a starter, I would recommend enabling on delete cascade on the foreign keys of customer_in_group.
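PostgreSQL does not let you add cascading behavior to an existing foreign key in place; you drop and recreate the constraint. A sketch, reusing the constraint name from the DDL above:
alter table customer_in_group
    drop constraint group_id_fk,
    add constraint group_id_fk
        foreign key (group_id) references "group"(id)
        on delete cascade;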
Then you can just delete the rows you want from "group", and the delete will cascade to the dependent rows in the child table. For this, you can use not exists:
delete from "group" g
where not exists (
    select 1
    from customer_in_group cig
    where cig.group_id = g.id and cig.insert_date >= now() - interval '2 year'
);
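If you would rather inspect the groups before deleting anything, the same not exists test works in a plain select; with the sample data above this returns only group2:
select g.*
from "group" g
where not exists (
    select 1
    from customer_in_group cig
    where cig.group_id = g.id
      and cig.insert_date >= now() - interval '2 year'
);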
I have a project table:
CREATE TABLE DOC.BRAND
(
    ID int PRIMARY KEY IDENTITY (1, 1),
    project_id varchar(150),
    project_name varchar(250)
);
For example, project_id should be PRJ001, PRJ002, and so on, based on the identity column value, as shown here:
+----+-------------+---------------+
| ID | project_id | project_name |
+----+-------------+---------------+
| 1 | PRJ001 | PROJECT1 |
| 2 | PRJ002 | PROJECT2 |
+----+-------------+---------------+
How can we achieve that using a stored procedure, or is there any table-level setting?
If you are using SQL Server (which seems likely based on the syntax), you can use a computed column:
CREATE TABLE DOC.BRAND (
ID int PRIMARY KEY IDENTITY (1, 1),
project_id as ('PRJ' + format(id, '000')),
project_name varchar(250)
);
Here is a db<>fiddle.
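A quick check of how the computed column behaves, assuming the table above (FORMAT requires SQL Server 2012 or later):
INSERT INTO DOC.BRAND (project_name) VALUES ('PROJECT1'), ('PROJECT2');
SELECT ID, project_id, project_name FROM DOC.BRAND;
-- returns: 1, PRJ001, PROJECT1
--          2, PRJ002, PROJECT2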
In a database that contains many tables, I need to write a SQL script to insert data if it does not already exist.
Table currency
| id | Code | lastupdate | rate |
+--------+---------+------------+-----------+
| 1 | USD | 05-11-2012 | 2 |
| 2 | EUR | 05-11-2012 | 3 |
Table client
| id | name | createdate | currencyId|
+--------+---------+------------+-----------+
| 4 | tony | 11-24-2010 | 1 |
| 5 | john | 09-14-2010 | 2 |
Table: account
| id | number | createdate | clientId |
+--------+---------+------------+-----------+
| 7 | 1234 | 12-24-2010 | 4 |
| 8 | 5648 | 12-14-2010 | 5 |
I need to insert to:
currency (id=3, Code=JPY, lastupdate=today, rate=4)
client (id=6, name=Joe, createdate=today, currencyId=Currency with Code 'USD')
account (id=9, number=0910, createdate=today, clientId=Client with name 'Joe')
Problem:
- the script must check whether the row exists before inserting the new data
- the script must allow us to set a foreign key on the new row that refers to a row already in the database (such as currencyId in the client table)
- the script must allow us to insert the current datetime into a column (such as createdate in the client table)
- the script must allow us to set a foreign key on the new row that refers to a row inserted earlier in the same script (such as clientId in the account table)
Note: I tried the following SQL statement but it solved only the first problem
INSERT INTO Client (id, name, createdate, currencyId)
SELECT 6, 'Joe', '05-11-2012', 1
WHERE not exists (SELECT * FROM Client where id=6);
This query runs without any error, but as you can see I wrote createdate and currencyId manually. I need to take the currency id from a select statement with a where clause (I tried to substitute the 1 with a select statement, but the query failed).
This is just an example of what I need; in my database, this script must insert more than 30 rows into more than 10 tables.
Any help?
You wrote:
I tried to substitute 1 by select statement but query failed
But I wonder why it failed. What did you try? This should work:
INSERT INTO Client (id, name, createdate, currencyId)
SELECT
    6,
    'Joe',
    current_date,
    (select c.id from currency as c where c.code = 'USD') as currencyId
WHERE not exists (SELECT * FROM Client where id=6);
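The same pattern handles your fourth requirement, where the foreign key refers to a row inserted earlier in the same script; a sketch using the ids and names from your example data:
INSERT INTO account (id, number, createdate, clientId)
SELECT
    9,
    '0910',
    current_date,
    (select cl.id from client as cl where cl.name = 'Joe') as clientId
WHERE not exists (SELECT * FROM account where id=9);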
It looks like you can already work out whether the data exists.
Here is a quick bit of code written in SQL Server / Sybase that I think answers your basic questions:
create table currency(
    id numeric(16,0) identity primary key,
    code varchar(3) not null,
    lastupdated datetime not null,
    rate smallint
);
create table client(
    id numeric(16,0) identity primary key,
    createddate datetime not null,
    currencyid numeric(16,0) foreign key references currency(id)
);
insert into currency (code, lastupdated, rate)
values ('EUR', GETDATE(), 3)

-- inserts the date and the last allocated identity value into client
insert into client (createddate, currencyid)
values (GETDATE(), @@IDENTITY)
go
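One caution on the sketch above: @@IDENTITY can return an identity value generated by a trigger on another table, so in SQL Server SCOPE_IDENTITY() is usually the safer choice (Sybase lacks it); a variant that captures the new currency id explicitly:
insert into currency (code, lastupdated, rate)
values ('EUR', GETDATE(), 3)

-- SCOPE_IDENTITY() returns the last identity created in the current scope,
-- ignoring identities generated by triggers on other tables
declare @currency_id numeric(16,0) = SCOPE_IDENTITY()

insert into client (createddate, currencyid)
values (GETDATE(), @currency_id)
go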