Let's say we have a database table named Users and a column named CreatedAt. When a new user is inserted into Users, the value of column CreatedAt is set to the current timestamp.
Now let's say the value of CreatedAt should never be allowed to change. After all, it's the date the user joined; it's basically their anniversary date.
Does H2 Database support the ability to prevent a column from being modified? In this case, we want to prevent any modification of the CreatedAt column.
Seems MySQL supports this feature via Triggers, for example:
DELIMITER //
CREATE TRIGGER my_trig BEFORE UPDATE ON Users
FOR EACH ROW BEGIN
    SET NEW.CreatedAt = OLD.CreatedAt;
END//
DELIMITER ;
Thanks for your help.
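The reject-the-change trigger idea can also be sketched in a directly runnable form. The snippet below uses SQLite through Python's stdlib sqlite3 module purely for illustration (H2 implements triggers as Java classes and MySQL's syntax differs, so this is a sketch of the concept, not H2-specific code; the trigger and table names are made up):

```python
import sqlite3

# Sketch: freeze the CreatedAt column with a BEFORE UPDATE trigger.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Users (
        id        INTEGER PRIMARY KEY,
        name      TEXT,
        CreatedAt TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
# Abort any UPDATE that tries to change CreatedAt; other columns stay updatable.
conn.execute("""
    CREATE TRIGGER freeze_created_at
    BEFORE UPDATE OF CreatedAt ON Users
    WHEN NEW.CreatedAt IS NOT OLD.CreatedAt
    BEGIN
        SELECT RAISE(ABORT, 'CreatedAt is immutable');
    END
""")
conn.execute("INSERT INTO Users (name) VALUES ('alice')")
try:
    conn.execute("UPDATE Users SET CreatedAt = '1999-01-01' WHERE name = 'alice'")
    blocked = False
except sqlite3.DatabaseError:
    blocked = True
print(blocked)  # True: the trigger rejected the change
```

Note that unlike the MySQL trigger above, which silently restores the old value, this version raises an error; either behavior protects the column.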
I've got a database called SimpleCredentials in which there is a table called dbo.Properties, which has UserID as its primary key (PK) and some other attributes like Name, Date of Birth, etc. There is another key attribute called ExtendedCredentials, which is a string of the form dbo.UserID. So, for example, the user with UserID = S-1-5-21-2177 will have the string dbo.S15212177 in their ExtendedCredentials column.
I've got another database called ExtendedCredentials. For every User there is a unique table in that database. Using the previous example, there will be a table called dbo.S15212177.
So, if I have 100 users there will be 100 rows in the dbo.Properties table in the SimpleCredentials database, and there will be 100 unique tables incorporating their UserID in the ExtendedCredentials database.
I want to create an entity relationship diagram, and eventually a MS SQL schema, but how do I represent the multiplicity of dbo.UserIDs and their relationship to their dbo.UserID string attribute in dbo.Properties?
Am I getting something fundamentally wrong here?
You may ask why I don't have a single database called ExtendedProperties with a single table in which each row is the UserID PK and the various extended properties are contained in columns. The simple answer is that some properties are themselves tables. Not every user has the same attributes in those tables. And I can't know ahead of time (a priori) what the full set of user extended property attributes is. So each user gets a table of their own.
Is there a better way to do this?
Is it possible to find out if a row in a table has been created by the current transaction (and therefore is not yet visible for other transactions, because the current transaction is still active)?
My use case: I am adding event logging to the database. This is done in plpgsql triggers. A row in the event table looks like this: (event_id serial, event_action text, count integer default 1).
Now, the reasoning behind my question: If a certain row has been created by this transaction (most likely in another trigger), I could increment the count instead of creating a new row in the event table.
You could just look for logging entries like this:
SELECT ...
FROM tablename
WHERE xmin::text = (txid_current() % (2^32)::bigint)::text;
That will find all rows added or modified in the current transaction.
The downside is that this will force a sequential scan of the whole table, and you cannot avoid that since you cannot have an index on a system column.
So you could add an extra column xid to your table that is filled with txid_current()::bigint whenever a row is inserted or updated. Such a column can be indexed and efficiently used in a search:
SELECT ...
FROM tablename
WHERE xid = txid_current();
You might consider something like this:
create table ConnectionCurrentAction (
connectionID int primary key,
currentActionID uuid
)
then at the beginning of the transaction:
DELETE FROM ConnectionCurrentAction WHERE connectionID = pg_backend_pid();
INSERT INTO ConnectionCurrentAction (connectionID, currentActionID)
SELECT pg_backend_pid(), uuid_generate_v4();
You can wrap this in a procedure called, say, audit_action_begin.
Note: You may instead choose to enforce the requirement that an "action" be created explicitly by removing the delete here.
At the end of a transaction, do audit_action_end:
DELETE FROM ConnectionCurrentAction WHERE connectionID = pg_backend_pid();
Whenever you want to know the current transaction:
(SELECT currentActionID FROM ConnectionCurrentAction WHERE connectionID = pg_backend_pid())
You can wrap that in a function audit_action_current()
You can then put the currentActionID into your log which will enable you to identify whether a row was created in the current action or not. This will also allow you to identify where rows in different audit tables were created in the current logical action.
If you don't want to use a uuid a sequence would do just as well here. I like uuids.
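To make the begin/current/end lifecycle concrete, here is a small runnable sketch of the same pattern using SQLite through Python's sqlite3 module. The original answer targets Postgres; here a hard-coded connection id stands in for pg_backend_pid(), and the helper names are just the ones suggested above:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ConnectionCurrentAction (
        connectionID    INTEGER PRIMARY KEY,
        currentActionID TEXT
    )
""")

CONNECTION_ID = 4242  # stand-in for pg_backend_pid()

def audit_action_begin():
    # Clear any stale action for this connection, then register a fresh one.
    conn.execute("DELETE FROM ConnectionCurrentAction WHERE connectionID = ?",
                 (CONNECTION_ID,))
    conn.execute("INSERT INTO ConnectionCurrentAction VALUES (?, ?)",
                 (CONNECTION_ID, str(uuid.uuid4())))

def audit_action_current():
    # Return the current action id for this connection, or None.
    row = conn.execute(
        "SELECT currentActionID FROM ConnectionCurrentAction WHERE connectionID = ?",
        (CONNECTION_ID,)).fetchone()
    return row[0] if row else None

def audit_action_end():
    conn.execute("DELETE FROM ConnectionCurrentAction WHERE connectionID = ?",
                 (CONNECTION_ID,))

audit_action_begin()
action = audit_action_current()   # tag your log rows with this id
audit_action_end()
```

Every log row written between begin and end carries the same action id, which is what lets you group rows from one logical action across several audit tables.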
I have a table that I created with a unique key and each other column representing one day of December 2014 (e.g. named D20141226 for data from 26/12/2014). So the table consists of 32 columns (key + 31 days). These daily columns indicate with a 1 that a customer had a transaction on that specific day, or with a 0 that they had none.
Now I want to execute the same query on a daily basis, producing a list of unique keys that had a transaction on that specific day. I used this easy script:
CREATE TABLE C01012015 AS
SELECT DISTINCT CALLING_ISDN AS A_PARTY
FROM CDRICC_012015
WHERE CALL_STA_TIME::date = '2015-01-01';
Now my question is, how can I add the content of the new daily table to the existing table with the 31 days, making it effectively a table with 32 days of data (and then continue to do so on a daily basis to store up to 360 days of data)?
Please note that new customers are doing transactions every day, hence there will be unique keys in the daily table that aren't in the big table holding all the previous days.
It would be ideal if those new rows would automatically get a 0 instead of a NULL but I can work around it if it gets a NULL value (not sure how to make sure it gets a 0 instead).
I thought that a FULL OUTER JOIN would be the solution but that would mean that I have to list all variables in the select statement, which becomes quite large as I add one more column each day. Is there a more elegant way to do this?
Or is SQL just not suited to this and a programming language like eg R would be much better at this?
If you have the option to change your schema completely, you should unpivot your table so that your columns are something like CUSTOMER_ID INTEGER, D DATE, DID_TRANSACTION BOOLEAN. There's a post on the Enzee Community website that suggests using a user-defined table function (UDTF) to do this. If you change your schema in this way, a simple insert will work just fine and there will be no need to add columns dynamically.
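A minimal sketch of that unpivoted shape, runnable via Python's sqlite3 (the table and column names are illustrative, and an integer 0/1 stands in for BOOLEAN, which SQLite lacks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions_by_day (
        customer_id     INTEGER,
        d               DATE,
        did_transaction INTEGER,   -- 0/1; stands in for BOOLEAN
        PRIMARY KEY (customer_id, d)
    )
""")
# Each new day is just an INSERT; no ALTER TABLE needed.
conn.executemany(
    "INSERT INTO transactions_by_day VALUES (?, ?, ?)",
    [(1, "2014-12-26", 1), (1, "2015-01-01", 1), (2, "2015-01-01", 0)])
# "Who transacted on 2015-01-01?" becomes a plain filter:
rows = conn.execute("""
    SELECT customer_id FROM transactions_by_day
    WHERE d = '2015-01-01' AND did_transaction = 1
""").fetchall()
print(rows)  # [(1,)]
```

With this shape, 360 days of data is just more rows, and missing days can be treated as "no transaction" in queries rather than needing a stored 0.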
If you can't change your schema that much but you're still able to add columns, you could add a column for every day of the year up front with a default value of FALSE (assuming it's a boolean column representing whether the customer had a transaction or not on that day). You probably want to script this.
ALTER TABLE table_with_daily_columns MODIFY COLUMN (D20140101 BOOLEAN DEFAULT FALSE);
ALTER TABLE table_with_daily_columns MODIFY COLUMN (D20140102 BOOLEAN DEFAULT FALSE);
-- etc
ALTER TABLE table_with_daily_columns ADD COLUMN (D20150101 BOOLEAN DEFAULT FALSE);
GROOM TABLE table_with_daily_columns;
When you alter a table like this, Netezza creates a new table and an internal view that does a UNION of the new table and the old. You need to GROOM the table to merge the tables back into a single one for improved performance.
If you really must keep one column per day, then you'll have to use the method you described to pivot the data from your daily transaction table. Set the default value for each of your columns to 0 or FALSE as described above, then:
INSERT INTO table_with_daily_columns (cust_id, D20150101)
SELECT
    cust_id,
    TRUE
FROM C01012015;
I have a SQL Server 2012 table with a bunch of columns; none of these columns is a date/time stamp. Is there any way to get the date the rows were entered? Is there a hidden sys column somewhere that I can latch onto, just as a temporary measure?
No hidden sys column, but if the transaction log is still available you can try a log reader to view the INSERT statements on the table.
To be able to use the date information going forward, you should add a new column such as time_stamp and define an AFTER INSERT trigger to populate it. Good luck.
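As a runnable sketch of that approach (SQLite via Python's sqlite3 here for convenience; in SQL Server you would instead add a DATETIME2 column with a SYSUTCDATETIME() default, or populate it from a trigger; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# New column whose default records when each row was inserted.
# Rows inserted before the column existed still have no real insert date;
# only rows added from now on benefit.
conn.execute("""
    CREATE TABLE t (
        id         INTEGER PRIMARY KEY,
        payload    TEXT,
        time_stamp TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute("INSERT INTO t (payload) VALUES ('hello')")
ts = conn.execute("SELECT time_stamp FROM t").fetchone()[0]
print(ts)  # e.g. '2015-06-01 12:34:56'
```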
I have a database with a column for which I want to query the number of times it has changed over a period of time. For example, I have the username, the user's level, and a date. How do I query this database to see the number of times the user's level has changed over x number of years?
(I've looked at other posts on Stack Overflow, and they're telling me to use triggers. But in my situation, I want to query the database for the number of changes that have been made. If my question can't be answered, please tell me what other columns I might need to look into to figure this out. Am I supposed to use LAG for this?)
A database will not inherently capture this information for you. Two suggestions would be to either store your data as a time series so instead of updating the value you add a new row to a table as the new current value and expire the old value. The other alternative would be to just add a new column for tracking the number of updates to the column you care about. This could be done in code or in a trigger.
Have you ever heard of the term "log"? You have to create a new table in which you will store the changes you want to track.
I can imagine this structure for the table:
id - int, primary key, auto increment
table_name - the name of the table where the info has been changed
table_id - the unique id of the row, in that table, where changes have been made
year - integer
month - integer
day - integer
Knowing this, you can count everything.
In case you are already keeping track of the level history by adding a new row with a different level and date every time a user changes level:
SELECT username, COUNT(date) - 1 AS changes
FROM table_name
WHERE date >= '2011-01-01'
GROUP BY username
That will give you the number of changes since Jan 1, 2011. Note that I'm subtracting 1 from the COUNT. That's because a user with a single row on your table has never changed levels, that row represents the user's initial level.
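To make that concrete, here is a small runnable demonstration using Python's sqlite3, with an illustrative level_history table standing in for table_name (the ORDER BY is added only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE level_history (username TEXT, level INTEGER, date TEXT)")
conn.executemany("INSERT INTO level_history VALUES (?, ?, ?)", [
    ("alice", 1, "2011-02-01"),  # initial level
    ("alice", 2, "2012-05-10"),  # first change
    ("alice", 3, "2013-07-15"),  # second change
    ("bob",   1, "2011-03-03"),  # initial level, never changed
])
# COUNT(date) - 1: the first row per user is the initial level, not a change.
rows = conn.execute("""
    SELECT username, COUNT(date) - 1 AS changes
    FROM level_history
    WHERE date >= '2011-01-01'
    GROUP BY username
    ORDER BY username
""").fetchall()
print(rows)  # [('alice', 2), ('bob', 0)]
```

One caveat: the subtraction assumes every user's initial-level row falls inside the date filter; a user whose first in-range row is itself a change would be undercounted by one.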