I'm working on a DB and would like to implement a system where a table's unique ID is generated by combining several other IDs/factors. Basically, I'd want an ID that looks like this:
1234 (A reference to a standard incrementing serial ID from another table)
10 (A reference to a standard incrementing serial ID from another table)
1234 (A number that increments from 1000-9999)
So the ID would look like:
1234101234
Additionally, each of those "entries" will have multiple time-sensitive instances that are stored in another table. For these IDs I want to take the above ID and append a timestamp, so it'll look like:
12341012341234567890123
I've looked a little bit at PostgreSQL sequences, but they seem to be mostly used for simply incrementing up or down by certain amounts. I'm not sure how to do this sort of concatenation when creating an ID string, or whether it's even possible.
Don't do it! Just use a serial primary key id and then have three different columns:
otherTableID
otherTable2ID
timestamp
You can uniquely identify each row using your serial id. You can look up the other information. And -- even better -- you can create foreign key constraints to represent the relationships among the tables.
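A minimal sketch of that layout, assuming hypothetical names other_table and other_table2 for the two referenced tables:
CREATE TABLE entries (
    id              serial PRIMARY KEY,          -- the surrogate key; nothing is encoded in it
    other_table_id  integer NOT NULL REFERENCES other_table (id),
    other_table2_id integer NOT NULL REFERENCES other_table2 (id),
    created_at      timestamptz NOT NULL DEFAULT now()
);
If the concatenated form is ever needed for display, it can be built on the fly in a SELECT rather than stored.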
I'm not sure what you want to achieve, but
SELECT col_1::text || col_2::text || col_3::text || now()::text
should work. You should also add a UNIQUE constraint on the column, i.e.
ALTER TABLE this_table ADD CONSTRAINT this_new_column_unique UNIQUE (this_new_column);
But the real question is: why do you want to do this? If you just want a unique, meaningless ID, you only need to create a column of type serial.
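For example, a minimal sketch of that simpler approach (the table name is illustrative):
CREATE TABLE some_table (
    id serial PRIMARY KEY,  -- auto-incrementing, unique, and meaningless
    payload text
);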
-- Oracle PL/SQL: builds the concatenated ID dynamically and returns it through v_seq
create procedure f_return_unq_id(
    conditional_params IN INTEGER,
    v_seq IN OUT VARCHAR2           -- pass the 1000-9999 counter in; the full ID comes back out
)
is
    query_1 VARCHAR2(200);
    resp VARCHAR2(200);             -- the concatenated result is a string, not an integer
BEGIN
    query_1 := 'SELECT TAB1.SL_ID||TAB2.SL_ID||:v_seq||SYSTIMESTAMP FROM TABLE1 TAB1,TABLE2 TAB2 WHERE TAB1.CONDITION=:V_PARAMS';
    BEGIN
        EXECUTE IMMEDIATE query_1 INTO resp USING v_seq, conditional_params;
    EXCEPTION
        WHEN OTHERS THEN
            DBMS_OUTPUT.PUT_LINE(SQLCODE);
    END;
    v_seq := resp;
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(SQLCODE);
END;
Pass v_seq to this procedure as your sequence number (1000-9999), along with the conditional parameters, if any.
I have a geospatial db with (among others) a table with locations, and a table with features. The primary key for the locations table is location_id. Location_id is also a foreign key in the features table. The features table also includes the fields "type" (in which a two-letter code is entered to denote particular types of features) and "N" (which differentiates the different features that may be linked to one location). I figured a combination of location_id, type, and N would make a decent primary key for the features table. Previously, I entered these IDs manually. However, I would like this to be done automatically when a "user" enters a location_id, N, and type. (Ideally I want to find a way to automatically generate the correct N, so that "users" need only enter location_id and type, but I think this should be posted as a separate question?)
I have been trying to achieve this via triggers (see code below), but when I test it by trying to add a new data row to my features table, I get the error message "duplicate key value violates unique constraint features_pkey". Could someone point me in the right direction with this issue?
CREATE OR REPLACE FUNCTION set_features_id()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
DECLARE
    compos_id text;
BEGIN
    SELECT loc_id || type || N FROM features INTO compos_id;
    NEW.id := compos_id;
    RETURN NEW;
END;
$$;
DROP TRIGGER IF EXISTS set_lf_id_trigger on public.landscape_features_point;
CREATE TRIGGER set_features_id_trigger
BEFORE INSERT
ON "features"
FOR EACH ROW
EXECUTE PROCEDURE set_features_id();
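For comparison, here is a minimal sketch of a trigger function that builds the composite value from the row being inserted (the NEW record) rather than from an unfiltered SELECT over features; the column names follow the question and may need adjusting:
CREATE OR REPLACE FUNCTION set_features_id()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
BEGIN
    -- compose the key from the incoming row's own values
    NEW.id := NEW.location_id || NEW.type || NEW.N;
    RETURN NEW;
END;
$$;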
I searched but could only find a partial answer to this question.
The goal here would be to create a new ID column on an existing table.
This new column would be the primary key for the table, and I simply want it to be filled with integer values from 1 to the number of rows.
What would be the query for that?
I know I first have to alter the table to create the new column:
ALTER TABLE <MYTABLE> ADD (ID INTEGER);
Then I could use the series generator:
INSERT INTO <MYTABLE.ID> SELECT SERIES_GENERATE_INTEGER(1,1,(number of rows));
Once the column is filled I could use this line:
ALTER TABLE <MYTABLE> ADD PRIMARY KEY ("ID");
I am sure there is an easier way to do this.
You wrote that you want to add a "new ID column to an existing table" and fill it with unique values.
That's not a "standard" operation in any DBMS, as the usual assumption is that records are created with a primary key and not retrofitted.
Thus, "ease" of operation for this is relative to what else you want to do.
For example, if you want to continue using this ID as a primary key for further operations, then using a once-off generator function like the SERIES_GENERATE_INTEGER or a query won't be very helpful since you have to avoid duplicates of already existing values.
Two, relatively easy, options come to mind:
Using a sequence:
create sequence myid;
update <table> set ID = myid.nextval;
And for succeeding inserts:
insert into <table> (id, ..., ...) VALUES (myid.nextval, ..., ...) ;
Note that this generates a value for every existing record and not a predefined set of size X.
Using a GUID
By using a GUID you generate a unique value every time you call the SYSUUID function in SAP HANA (see the SAP HANA documentation).
Something like
update <table> set ID = SYSUUID;
should do the trick here.
Subsequent inserts would simply call the function for values of ID.
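For example, a sketch of such an insert (in SAP HANA, SYSUUID is selected from DUMMY; the column names are illustrative):
INSERT INTO <table> (id, some_column)
SELECT SYSUUID, 'some value' FROM DUMMY;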
I'm trying to make a blog system of sorts and I ran into a slight problem.
Simply put, there are 3 columns in my article table:
id SERIAL,
category VARCHAR FK,
category_id INT
The id column is obviously the PK and it is used as a global identifier for all articles.
The category column is, well... the category.
category_id is used as a UNIQUE ID within a category, so currently there is a UNIQUE(category, category_id) constraint in place.
However, I also want for category_id to auto-increment.
I want it so that every time I execute a query like
INSERT INTO article(category) VALUES ('stackoverflow');
I want the category_id column to be automatically be filled according to the latest category_id of the 'stackoverflow' category.
Achieving this in my logic code is quite easy. I just select the latest category_id and insert that value plus one, but that involves two separate queries.
I am looking for a SQL solution that can do all this in one query.
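For reference, the select-latest-and-add-one idea can be written as a single statement; this is only a sketch, and it shares the concurrency caveats discussed in the answers below:
INSERT INTO article (category, category_id)
SELECT 'stackoverflow', COALESCE(MAX(category_id), 0) + 1
FROM article
WHERE category = 'stackoverflow';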
This has been asked many times and the general idea is bound to fail in a multi-user environment - and a blog system sounds like exactly such a case.
So the best answer is: Don't. Consider a different approach.
Drop the column category_id completely from your table - it does not store any information the other two columns (id, category) wouldn't store already.
Your id is a serial column and already auto-increments in a reliable fashion.
Auto increment SQL function
If you need some kind of category_id without gaps per category, generate it on the fly with row_number():
Serial numbers per group of rows for compound key
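A minimal sketch of that on-the-fly approach, using the article table from the question:
SELECT id, category,
       row_number() OVER (PARTITION BY category ORDER BY id) AS category_id
FROM article;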
Concept
There are several ways to approach this. The first one that comes to my mind:
Assign a value for category_id column inside a trigger executed for each row, by overwriting the input value from INSERT statement.
Action
Here's the SQL Fiddle to see the code in action
For a simple test, I'm creating an article table holding categories and their IDs, which should be unique for each category. I have omitted constraint creation - that's not needed to illustrate the point.
create table article ( id serial, category varchar, category_id int )
Inserting some values for two distinct categories, using the generate_series() function to have an auto-increment already in place.
insert into article(category, category_id)
select 'stackoverflow', i from generate_series(1,1) i
union all
select 'stackexchange', i from generate_series(1,3) i
Creating a trigger function that selects MAX(category_id) for the category of the row being inserted, increments it by 1, and overwrites the value right before moving on with the actual INSERT into the table (a BEFORE INSERT trigger takes care of that).
CREATE OR REPLACE FUNCTION category_increment()
RETURNS trigger
LANGUAGE plpgsql
AS
$$
DECLARE
    v_category_inc int := 0;
BEGIN
    SELECT MAX(category_id) + 1 INTO v_category_inc FROM article WHERE category = NEW.category;
    IF v_category_inc IS NULL THEN
        NEW.category_id := 1;
    ELSE
        NEW.category_id := v_category_inc;
    END IF;
    RETURN NEW;
END;
$$
Using the function as a trigger.
CREATE TRIGGER trg_category_increment
BEFORE INSERT ON article
FOR EACH ROW EXECUTE PROCEDURE category_increment()
Inserting some more values (now that the trigger is in place) for already existing categories and non-existing ones.
INSERT INTO article(category) VALUES
('stackoverflow'),
('stackexchange'),
('nonexisting');
Query used to select data:
select category, category_id From article order by 1,2
Result for initial inserts:
category category_id
stackexchange 1
stackexchange 2
stackexchange 3
stackoverflow 1
Result after final inserts:
category category_id
nonexisting 1
stackexchange 1
stackexchange 2
stackexchange 3
stackexchange 4
stackoverflow 1
stackoverflow 2
PostgreSQL uses sequences to achieve this; it's a different approach from what you are used to in MySQL. Take a look at http://www.postgresql.org/docs/current/static/sql-createsequence.html for the complete reference.
Basically you create a sequence (a database object) by:
CREATE SEQUENCE serials;
And then when you want to add to your table you will have:
INSERT INTO mytable (name, id) VALUES ('The Name', NEXTVAL('serials'));
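Alternatively, the sequence can be attached as the column's default so inserts don't have to call NEXTVAL explicitly; a small sketch using the same names:
ALTER TABLE mytable ALTER COLUMN id SET DEFAULT NEXTVAL('serials');
INSERT INTO mytable (name) VALUES ('Another Name');  -- id is filled in automatically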
I know that the question is very long and I understand if someone doesn't have the time to read it all, but I really hope there is a way to do this.
I am writing a program that will read the database schema from the database catalog tables and automatically build a basic application with the information extracted from the system catalogs.
Many tables in the database can be just a list of items of the form
CREATE TABLE tablename (id INTEGER PRIMARY KEY, description VARCHAR NOT NULL);
so when a table has a column that references the id of tablename, I just resolve the descriptions by querying the tablename table, and I display a list in a combo box with the available options.
There are some tables, however, that cannot directly have a description column, because their description would be a combination of other columns. Let's take as an example the most important of those tables in my first application:
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL
);
This, as many would know, is the full account number for a bank account; in my country it's composed as follows:
[XXXX][XXXX][XX][XXXXXXXXXX]
   ^     ^    ^       ^
   |     |    |       |_ account number
   |     |    |_ crc
   |     |_ bank office id
   |_ bank id
so that's the reason my bankaccount table is structured the way it is.
Now, I would like to have the complete bank account number in a description column so I can display it in the application without giving special treatment to this situation, since there are some other tables in a similar situation. Something like:
CREATE TABLE bankaccount (
bankid INTEGER NOT NULL REFERENCES bank,
officeid INTEGER NOT NULL REFERENCES bankoffice,
crc INTEGER NOT NULL,
number BIGINT NOT NULL,
description VARCHAR DEFAULT bankid || '-' || officeid || '-' || crc || '-' || number
);
Which of course doesn't work, since the following error is raised [1]:
ERROR: cannot use column references in default expression
If there is any different approach that someone can suggest, please feel free to suggest it as an answer.
[1] This is the error message given by PostgreSQL.
What you want is to create a view on your table. I'm more familiar with MySQL and SQLite, so excuse the differences. But basically, if you have a table 'AccountInfo' you can have a view 'AccountInfoView', which is sort of like a 'stored query' but can be used like a table. You would create it with something like:
CREATE VIEW AccountInfoView AS
SELECT *, CONCAT(bankid, officeid, crc, number) AS FullAccountNumber
FROM AccountInfo;
Another approach is to have an actual FullAccountNumber column in your original table, and create a trigger that sets it any time an insert or update is performed on your table. This is usually less efficient though, as it duplicates storage and takes the performance hit when data are written instead of retrieved. Sometimes that approach can make sense, though.
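A rough sketch of that trigger-based alternative in PostgreSQL, assuming a full_account_number column has been added to the bankaccount table (the column, function, and trigger names are illustrative):
CREATE OR REPLACE FUNCTION set_full_account_number() RETURNS trigger AS $$
BEGIN
    -- keep the concatenated form in sync with the individual parts
    NEW.full_account_number := NEW.bankid || '-' || NEW.officeid || '-' || NEW.crc || '-' || NEW.number;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bankaccount_full_account_number
BEFORE INSERT OR UPDATE ON bankaccount
FOR EACH ROW EXECUTE PROCEDURE set_full_account_number();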
What actually works, and I believe is a very elegant solution, is to use a function like this one:
CREATE FUNCTION description(bankaccount) RETURNS VARCHAR AS $$
SELECT
CONCAT(bankid, '-', officeid, '-', crc, '-', number)
FROM
bankaccount this
WHERE
$1.bankid = this.bankid AND
$1.officeid = this.officeid AND
$1.crc = this.crc AND
$1.number = this.number
$$ LANGUAGE SQL STABLE;
which would then be used like this
SELECT bankaccount.description FROM bankaccount;
and hence, my goal is achieved.
Note: this solution works with PostgreSQL only AFAIK.
I have a 'users' table with two columns, 'email' and 'new_email'. I need:
A case-insensitive uniqueness constraint covering both columns - i.e., if "Bob@Example.com" appears in one row's 'email' column, then inserting "bob@example.com" into another row's (or even the same row's) 'new_email' column should fail.
Fast case-insensitive searching for a given email address in either the 'email' or 'new_email' fields - i.e. find the row where the new_email OR email is "Bob@example.com", case-insensitive.
I know that I could do this more easily by creating a related 'emails' table, but I'm expecting to be looking up users in this table (by primary key) from several applications, and I'd like to avoid duplicating the join logic in various places to also retrieve their emails. So I think some kind of expression index would be best, if that's possible.
If this isn't possible, I suppose my next best option would be to create a view that the other applications could use to easily fetch a user's emails along with their other information, but I'm not sure how to do that either.
I'm using Postgres 8.4. Thank you!
I think you'll have to use a trigger to enforce your cross-column uniqueness constraint. Add unique indexes on each column and then add a trigger something like this (untested, off-the-top-of-my-head code):
CREATE FUNCTION no_dups_allowed() RETURNS trigger AS $$
DECLARE
    r RECORD;
BEGIN
    SELECT 1 INTO r
    FROM users
    WHERE LOWER(email) = LOWER(NEW.new_email)
       OR LOWER(new_email) = LOWER(NEW.email);
    IF FOUND THEN
        -- Found a duplicate so it is time for a hissy fit!
        RAISE 'Duplicate email address found' USING ERRCODE = 'unique_violation';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
You'd want something like that as a BEFORE INSERT and BEFORE UPDATE trigger. That trigger would take care of catching cross-column duplicates and the unique indexes would take care of in-column duplicates.
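For instance, the trigger might be declared like this, assuming the users table and the function above:
CREATE TRIGGER users_no_dups
BEFORE INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE no_dups_allowed();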
Some useful references:
FOUND
RAISE
Triggers
Trigger Procedures
You'll want the individual indexes for your queries anyway, and using the uniqueness half of the indexes simplifies your trigger by leaving it to deal only with the cross-column part; if you try to do it all in the trigger, then you'll have to watch out for updating a row without really changing the email or new_email columns.
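Those per-column unique indexes could look something like this (index names are illustrative):
CREATE UNIQUE INDEX users_email_lower_idx ON users (LOWER(email));
CREATE UNIQUE INDEX users_new_email_lower_idx ON users (LOWER(new_email));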
For the querying half, you could create a view that uses a UNION to combine the two columns. You could also create a function to merge the user's email addresses into one list. It's hard to say which would be best without knowing more details of these other queries, but I suspect that fixing all the other queries to know about email and new_email would be the best approach; you'll have to update all the other queries to use the view or function anyway, so why build a view or function at all?
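A minimal sketch of that UNION view, assuming the users table has an id primary key (the view and output column names are illustrative):
CREATE VIEW user_emails AS
    SELECT id, email AS address FROM users WHERE email IS NOT NULL
    UNION
    SELECT id, new_email FROM users WHERE new_email IS NOT NULL;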
No need for triggers. Try this:
create table et (email text, email2 text);
create unique index et_u on et (coalesce(lower(email),lower(email2)));
insert into et (email,email2) values ('scott@gmail.com',NULL);
insert into et (email,email2) values ('scott@gmail.com',NULL);
ERROR: duplicate key value violates unique constraint "et_u"
insert into et (email,email2) values (NULL,'scott@gmail.com');
ERROR: duplicate key value violates unique constraint "et_u"
insert into et (email,email2) values (NULL,'Scott@gmail.com');
ERROR: duplicate key value violates unique constraint "et_u"