copy table (create table like) - not keeping auto incrementing primary key - sql

I'm new to postgres (on 9.5) and I can't find this in the docs anywhere.
Basically, I create a table like this:
CREATE TABLE test (
id serial primary key,
field1 CHARACTER VARYING(50)
);
Then copy it:
create table test_copy (like test);
The table test has these columns:
COLUMN_NAME id field1
DATA_TYPE 4 12
TYPE_NAME serial varchar
COLUMN_SIZE 10 50
IS_NULLABLE NO YES
IS_AUTOINCREMENT YES NO
But test_copy has these:
COLUMN_NAME id field1
DATA_TYPE 4 12
TYPE_NAME int4 varchar
COLUMN_SIZE 10 50
IS_NULLABLE NO YES
IS_AUTOINCREMENT NO NO
Why am I losing serial and autoincrement? How can I make a copy of a table that preserves these?

This is because serial isn't really a data type. It gets "expanded" into an integer column plus a sequence plus a default value that pulls from the sequence.
See the manual for details.
To get the default definition you need to use create table test_copy (like test INCLUDING DEFAULTS).
However, that will then use the same sequence as the original table.
You can see the difference when you display the table definition in psql:
psql (9.5.3)
Type "help" for help.
postgres=> CREATE TABLE test (
postgres(> id serial primary key,
postgres(> field1 CHARACTER VARYING(50)
postgres(> );
CREATE TABLE
postgres=> create table test_copy_no_defaults (like test);
CREATE TABLE
postgres=> create table test_copy (like test including defaults);
CREATE TABLE
postgres=> \d test
Table "public.test"
Column | Type | Modifiers
--------+-----------------------+---------------------------------------------------
id | integer | not null default nextval('test_id_seq'::regclass)
field1 | character varying(50) |
Indexes:
"test_pkey" PRIMARY KEY, btree (id)
postgres=> \d test_copy
Table "public.test_copy"
Column | Type | Modifiers
--------+-----------------------+---------------------------------------------------
id | integer | not null default nextval('test_id_seq'::regclass)
field1 | character varying(50) |
postgres=> \d test_copy_no_defaults
Table "public.test_copy_no_defaults"
Column | Type | Modifiers
--------+-----------------------+-----------
id | integer | not null
field1 | character varying(50) |
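If you want the copy to use its own sequence instead of sharing test_id_seq, one way (a sketch; the sequence name here is my own choice, not anything required) is to create a new sequence owned by the copied column and repoint the default:
CREATE SEQUENCE test_copy_id_seq OWNED BY test_copy.id;
ALTER TABLE test_copy ALTER COLUMN id SET DEFAULT nextval('test_copy_id_seq');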

You can try:
create table test_inh () inherits (test);
and then:
alter table test_inh no inherit test;
This should leave the same sequence default value in place for you.

Related

How to declare "nextval('testing_thing_thing_id_seq'::regclass)" as default value for column "thing_id" in postgres table "testing_thing"?

In my Postgres DB there is a table called testing_thing which, as I can see by running \d testing_thing at my psql prompt, is defined as
Table "public.testing_thing"
Column | Type | Collation | Nullable | Default
--------------+-------------------+-----------+----------+-----------------------------------------------------
thing_id | integer | | not null | nextval('testing_thing_thing_id_seq'::regclass)
thing_num | smallint | | not null | 0
thing_desc | character varying | | not null |
Indexes:
"testing_thing_pk" PRIMARY KEY, btree (thing_num)
I want to drop it and re-create it exactly as it is, but I don't know how to reproduce the
nextval('testing_thing_thing_id_seq'::regclass)
part for column thing_id.
This is the query I put together to create the table:
CREATE TABLE testing_thing(
thing_id integer NOT NULL, --what else should I put here?
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
What is it missing?
Add a DEFAULT to the column you want to increment and call nextval():
CREATE SEQUENCE testing_thing_thing_id_seq START WITH 1;
CREATE TABLE testing_thing(
thing_id integer NOT NULL DEFAULT nextval('testing_thing_thing_id_seq'),
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
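Optionally, you can also tie the sequence's lifetime to the column, which is what serial does behind the scenes; dropping the table (or the column) then drops the sequence too:
ALTER SEQUENCE testing_thing_thing_id_seq OWNED BY testing_thing.thing_id;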
Side note: keep in mind that attaching a sequence to a column does not prevent users from manually filling it with arbitrary data, which can create really nasty problems with primary keys. If you want to prevent that and do not necessarily need a sequence, consider creating an identity column instead, e.g.
CREATE TABLE testing_thing(
thing_id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
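With GENERATED ALWAYS, a manually supplied thing_id is rejected unless you explicitly override the identity:
-- rejected: cannot insert a non-DEFAULT value into a GENERATED ALWAYS column
INSERT INTO testing_thing (thing_id, thing_num, thing_desc) VALUES (100, 1, 'x');
-- allowed: explicit opt-in to supply the value yourself
INSERT INTO testing_thing (thing_id, thing_num, thing_desc) OVERRIDING SYSTEM VALUE VALUES (100, 1, 'x');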

uniqueness in an array datatype in psql

I have a table with the following structure:
uniq_id | integer | not null default nextval('#########'::regclass)
user_id | integer | not null
team_type | text[] |
I have a unique index on user_id and team_type.
The column team_type will have values like 'R' and 'F'.
For instance, for user_id 1234, I have the team type {R,F} inserted. I do not want anyone to insert {R} for the same user.
One more example of my requirement:
If user_id 1234 has 2 entries as {R} AND {F}, I do not want anyone to insert a record as {R,F} for the same user.
I am in need of a constraint or unique index which can do this for me.
Thanks in advance!
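A plain unique index on (user_id, team_type) compares whole arrays, so {R} and {R,F} count as different values and will not conflict. One sketch that does enforce element-level uniqueness, assuming you can normalize the data (the table and column names below are made up): store one team element per row in a side table with an ordinary unique constraint.
CREATE TABLE user_team_member (
user_id integer NOT NULL,
team text NOT NULL,
UNIQUE (user_id, team)
);
-- {R,F} for user 1234 becomes two rows...
INSERT INTO user_team_member (user_id, team) VALUES (1234, 'R'), (1234, 'F');
-- ...and a later insert of R for the same user violates the unique constraint
INSERT INTO user_team_member (user_id, team) VALUES (1234, 'R'); -- fails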

Can't update or delete row from table (Postgres)

I have a table with a bytea field. When I try to delete a row from this table, I get this error:
[42704] ERROR: large object 0 does not exist
Can you help me in this situation?
Edit: information from the command \d photo:
Table "public.photo"
Column | Type | Modifiers
------------+------------------------+-----------
id | character varying(255) | not null
ldap_name | character varying(255) | not null
file_name | character varying(255) | not null
image_data | bytea |
Indexes:
"pk_photo" PRIMARY KEY, btree (id)
"photo_file_name_key" UNIQUE CONSTRAINT, btree (file_name)
"photo_ldap_name" btree (ldap_name)
Triggers:
remove_unused_large_objects BEFORE DELETE OR UPDATE ON photo FOR EACH ROW EXECUTE PROCEDURE lo_manage('image_data')
The lo_manage trigger is meant for oid columns that reference large objects; on a bytea column like image_data it ends up trying to unlink a large object that does not exist, which raises this error.
Drop the trigger:
drop trigger remove_unused_large_objects on photo;
then try the delete again:
delete from photo where id = '<id of the row you want to delete>';
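For reference, a minimal sketch of how lo_manage is meant to be used, assuming the contrib lo extension and a hypothetical photo_lo table whose image column is an oid reference rather than bytea:
CREATE EXTENSION IF NOT EXISTS lo;
CREATE TABLE photo_lo (
id character varying(255) PRIMARY KEY,
image_oid lo -- "lo" is a domain over oid provided by the extension
);
CREATE TRIGGER remove_unused_large_objects
BEFORE DELETE OR UPDATE ON photo_lo
FOR EACH ROW EXECUTE PROCEDURE lo_manage('image_oid');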

How to Insert Primary Value (PK) to Related Table (FK)

I'm having trouble inserting the primary key value (an auto-increment) of one TABLE into another TABLE as a foreign key.
Table 1 has Student Number as its primary key; when I enter last and first name values into TABLE 1, the student number gets its own value automatically because of the increment. When I insert into TABLE 2, I want it to carry the matching Student Number from TABLE 1 instead of NULL.
Table 1
(PK)Student_# | Last_Name | First_Name
            1 | a         | b
            2 | c         | b
Table 2
(FK)Student_# | Year_Level | Section
         NULL | 2nd Year   | C1
         NULL | 3rd Year   | D1
Needed
(FK)Student_# | Year_Level | Section
            1 | 2nd Year   | C1
            2 | 3rd Year   | D1
It sounds to me like you need a primary key with an identity seed on Table2, as well as a foreign key to the student table:
(PK/Identity) Table2ID | (FK)Student_# | Year_Level | Section
This way you can insert the Student_# when you insert the record into Table2, and each row in Table2 also gets a unique identifier.
CREATE TABLE Table2
(
Table2ID INT IDENTITY(1,1) PRIMARY KEY
,Student_# INT NOT NULL FOREIGN KEY REFERENCES Table1(Student_#)
,Year_Level NVARCHAR(255) --Use whatever data type you need
,Section NVARCHAR(255) --Use whatever data type you need
)
I have assumed you are using SQL Server, as you have not specified in your question. You may need to adjust this query for a different RDBMS.
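For example, linking a Table2 row to the student with Student_# 1 (Table2ID fills itself in through the identity):
INSERT INTO Table2 (Student_#, Year_Level, Section)
VALUES (1, '2nd Year', 'C1');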

MySQL: Which indexes to use for a simple range select?

I have a table with ~30 million rows (and growing!) and currently I have some problems with a simple range select.
The query looks like this:
SELECT SUM( CEIL( dlvSize / 100 ) ) as numItems
FROM log
WHERE timeLogged BETWEEN 1000000 AND 2000000
AND user = 'example'
It takes minutes to finish, and I think the solution lies in the indexes I'm using. Here is the result of EXPLAIN:
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
| 1 | SIMPLE | log | range | PRIMARY,timeLogged | PRIMARY | 4 | NULL | 11839754 | Using where |
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
My table structure is this one ( reduced to make it fit better on the problem ):
CREATE TABLE IF NOT EXISTS `log` (
`origDomain` varchar(64) NOT NULL default '0',
`timeLogged` int(11) NOT NULL default '0',
`orig` varchar(128) NOT NULL default '',
`rcpt` varchar(128) NOT NULL default '',
`dlvSize` varchar(255) default NULL,
`user` varchar(255) default NULL,
PRIMARY KEY (`timeLogged`,`orig`,`rcpt`),
KEY `timeLogged` (`timeLogged`),
KEY `orig` (`orig`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Any ideas on what I can do to optimize this query or the indexes on my table?
You may want to try adding a composite index on (user, timeLogged):
CREATE TABLE IF NOT EXISTS `log` (
...
KEY `user_timeLogged` (user, timeLogged),
...
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Related Stack Overflow post:
Database: When should I use a composite index?
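Since log already exists, the same index can be added in place (the index name is arbitrary):
ALTER TABLE `log` ADD INDEX `user_timeLogged` (`user`, `timeLogged`);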
In addition to the suggestions made by the other answers, I note that you have a user column in the table which is a varchar(255). If this refers to a column in a table of users, then 1) it would most likely be far more efficient to add an integer ID column to that table, and use that as the primary key and as the referencing column in other tables; 2) you are using InnoDB, so why not take advantage of the foreign key capabilities it offers?
Consider that if you index by a varchar(n) column, it is treated like a char(n) in the index, so each row of your current primary key takes up 4 + 128 + 128 = 260 bytes in the index.
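A sketch of that normalization (the users table and column names here are assumptions, not from your schema): give each user an integer surrogate key and have log reference it with a foreign key.
CREATE TABLE `users` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`name` VARCHAR(255) NOT NULL,
UNIQUE KEY (`name`)
) ENGINE=InnoDB;
ALTER TABLE `log`
ADD COLUMN `user_id` INT,
ADD CONSTRAINT `fk_log_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`);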
Add an index on user.