We added a new column in a migration and now need to modify it so it is no longer nullable. The existing entries are null, however.
Is there a way to apply a default value, as part of the migration, to entries where the value is null?
Or is the best solution simply to run a stored procedure, or otherwise manually edit the fields to be something valid?
PS: I did the latter, because I'm a student, needed it working now, and it was just 8 entries, but I'm still curious.
DavidG's response is pretty much what I did in a testbed, and it works perfectly well.
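For the curious, a minimal sketch of the backfill-then-tighten approach in plain SQL (SQL Server syntax; dbo.MyTable, SomeColumn and the default of 0 are invented names for illustration, not necessarily what DavidG suggested):

UPDATE dbo.MyTable
SET SomeColumn = 0   -- whatever the sensible default is
WHERE SomeColumn IS NULL;

ALTER TABLE dbo.MyTable
ADD CONSTRAINT DF_MyTable_SomeColumn DEFAULT (0) FOR SomeColumn;

ALTER TABLE dbo.MyTable
ALTER COLUMN SomeColumn int NOT NULL;

The backfill has to run before the ALTER COLUMN, otherwise the NOT NULL change fails on the existing NULL rows.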
I have a table with a GUID identifier and one field that is a 5-character string that can be specified by the user, but is optional, and it should be unique per user. I'm looking for a way to always have this field populated, even if the user doesn't specify it. The easiest approach is to fall back to values like "00001", "00002", etc. when the user doesn't specify one. I'm using SQL and Entity Framework Core. What is the best way to achieve this?
EDIT: maybe a trigger that checks after insert whether the field was not specified, and then just takes the current row number and converts it to a string? Does this make sense?
Cheers
Setting a default value such as '00001' can be done by defining the field with:
NOT NULL DEFAULT right('0000' || to_char(SomeSequence.nextval),5) (pseudo-code to be adapted to the DBMS you are connected to).
Compared to the solution in your EDIT, this will at least guarantee that 2 inserts at the same time from 2 different users get assigned different values.
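As a hedged illustration only (SQL Server syntax is assumed, since the question mentions Entity Framework Core; dbo.MyTable, ShortCode and dbo.ShortCodeSeq are invented names), the pseudo-code above could look like:

CREATE SEQUENCE dbo.ShortCodeSeq START WITH 1 INCREMENT BY 1;

ALTER TABLE dbo.MyTable
ADD CONSTRAINT DF_MyTable_ShortCode
DEFAULT (RIGHT('0000' + CAST(NEXT VALUE FOR dbo.ShortCodeSeq AS varchar(5)), 5))
FOR ShortCode;

Rows inserted without an explicit value then get '00001', '00002', and so on from the default.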
The real problem comes with the unique constraint on the column. This does not work nicely when mixing manual input with calculated values.
If, as a user, I manually input 00005, then an insertion will fail once SomeSequence reaches 5.
I think this problem will exist regardless of how you implement the generation of values (sequence, trigger, external code, ...)
Even if you are fine with coding some additional (and probably complicated) logic to manage that, it will probably decrease concurrency.
I have a sample of a stored procedure like this (from my previous working experience):
Select * from table where (id=@id or id='-999')
Based on my understanding of this query, the '-999' is used to avoid an exception when no value is transferred from the user. So far in my research, I have not found this usage on the internet or in other companies' implementations.
@id is transferred from the user.
Any help in providing some links related to this will be appreciated.
I'd like to add my two guesses on this, although please note that, to my disadvantage, I'm one of the very youngest in the field, so this is not coming from much history or experience.
Also, please note that whatever reason anybody provides you, you might not be able to confirm it 100%. Your oven might just not have any leftover evidence in and of itself.
Now, per another question I read before, extreme integers were used in some systems to denote missing values, since text and NULL weren't options in those systems. Say I'm looking for ID #84, and I cannot find it in the table:
Not Found Is Unlikely:
Perhaps in some systems it's far more likely that a record exists with a missing/incorrect ID than that it doesn't exist at all? Hence, when no match is found, the designers preferred that all records without valid IDs be returned?
This, however, has a few problems. First, depending on the design, the user might not recognize that the results are a set of records with missing IDs, especially if only one is returned. Second, the current query poses a problem, as it will always return the missing-ID records in addition to the normal matches. Perhaps they relied on ORDERing to ease readability?
Exception Above SQL:
AFAIK, SQL is fine with a zero-row result, but maybe whatever thing that calls/used to call it wasn't as robust, and something went wrong (hard exception, soft UI bug, etc.) when zero rows were returned? Perhaps, then, this ID represented a dummy row (e.g. blanks and zeroes) to keep things running.
Then again, this also suffers from the same arguments above regarding "the record is always output" and ORDER, with the added possibility that the SQL caller might have had dedicated logic for when the -999 record is the only record returned, which I doubt was the most practical approach even in whatever era this was done.
... the more I type, the more I think this is the oven, and only the great grandmother can explain this to us.
If you want to avoid an exception when no value is transferred from the user, declare the parameter with a default of null in your stored procedure, like @id int = null.
For instance:
CREATE PROCEDURE [dbo].[TableCheck]
    @id int = null
AS
BEGIN
    Select * from [table] where (id = @id)
END
Now you can execute it either way:
exec [dbo].[TableCheck] 2 or exec [dbo].[TableCheck]
Remember, it's a separate matter if you want to return the whole table when your input parameter is null.
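If the intent behind the -999 magic value really was "return everything when no id is supplied", a hedged sketch of the usual alternative is to make the NULL case explicit in the WHERE clause, using the same optional parameter as above:

Select * from [table] where (@id is null or id = @id)

That way no dummy row has to exist in the table at all.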
As for your id = -999 condition: I tried it your way, and it doesn't prevent any exception.
I am trying to get my head around jsonb in Postgres. There are quite a few issues here. What I wanted to do was something like:
SELECT table.column->>'key_1' as a FROM "table"
I tried with -> and some combinations of brackets as well, but I was always getting nil in a.
So I tried to get all keys first to see if it is even recognizing jsonb or not.
SELECT jsonb_object_keys(table.column) as a FROM "table"
This threw an error:
cannot call jsonb_object_keys on a scalar
So, to check the column type (which I created, so I know it IS jsonb, but anyway):
SELECT pg_typeof(column) as a FROM "table" ORDER BY "table"."id" ASC LIMIT 1
This correctly gave me "jsonb" in the result.
Values in the column are similar to {"key_1":"New York","key_2":"Value of key","key_3":"United States"}.
So, I am really confused about what is actually going on here and why it is calling my JSON data a scalar. What does that actually mean, and how do I solve this problem?
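One diagnostic worth running (my suggestion, not something from the original setup) is to ask Postgres what type it sees in each row; rows that report 'string' or 'null' instead of 'object' are the ones jsonb_object_keys chokes on:

SELECT "table"."id", jsonb_typeof("table"."column") AS json_type FROM "table" ORDER BY "table"."id" ASC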
Any help in this regard will be greatly helpful.
PS: I am using Rails; I posted this as a general question about the problem. Any Rails-specific solution would also work.
So the issue turned out to be something other than just SQL.
As I mentioned, I am using Rails (5.1). I had used the default value '{}' for the jsonb column, and I was using a two-way serializer for the column by defining it in my model for the table.
Removing this serializer and adjusting the default value to {} actually solved the problem.
I think my serializer was doing something to the values, but still, in the database it had the correct value, like I mentioned in the question.
It is still not 100% clear to me what the problem was, but it is solved anyway. If anyone can shed some light on what exactly the problem was, that would be great.
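One guess (mine, not confirmed): if a serializer JSON-encodes a value that is already JSON text, Postgres ends up storing a quoted string, which jsonb treats as a scalar. The difference is easy to see directly in Postgres:

SELECT jsonb_typeof('"{}"'::jsonb) AS double_encoded,  -- 'string', i.e. a scalar
       jsonb_typeof('{}'::jsonb)   AS empty_object;    -- 'object'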
Hope this might help someone.
In my case the ORM layer somehow managed to write a null string into the JSON column, and Postgres was happy with it. Trying to execute json_object_keys on such a value resulted in the error from the OP.
I managed to track down the place that allowed such null strings and, after fixing the code, I also fixed the data with the following query:
UPDATE tbl SET col = '{}'::jsonb WHERE jsonb_typeof(col) <> 'object';
If you intentionally mix the types stored in the column (e.g. sometimes it is an object, sometimes an array, etc.), you might want to filter out all rows that don't contain objects with a simple WHERE:
SELECT jsonb_object_keys(tbl.col) as a FROM tbl WHERE jsonb_typeof(col) = 'object';
Currently I have a database that stores boolean fields as VARCHAR(1) ('T' or 'F'). I want to replace these with BIT. The problem is that this would require a ton of changes in the program that uses the database. So I thought the logical step would be to add a BIT field and replace the existing VARCHAR(1) field with a computed column, which the program would keep accessing instead of the new BIT field (thus the program can continue to work as is without changes, and can be migrated to the BIT field over time).
I know this won't work (UPDATE and INSERT don't work on computed columns). I know one option is to rename the existing table and add a view through which to access it, but I don't see that as a viable solution, as adding and removing columns, changing dependent views, etc. would be prone to errors (and it's not a neat solution in my opinion).
My question is - what are my options to achieve the above behaviour (such that the program can continue working as is)?
An example:
User (Active VARCHAR(1), ...)
Changed to use computed columns: (won't work)
User (Active_B BIT, Active AS CASE Active_B WHEN 1 THEN 'T' ELSE 'F' END, ...)
UPDATE: Fixed error in example.
It would have to be:
ALTER TABLE dbo.[User]
ADD Active AS CASE Active_B WHEN 1 THEN 'T' ELSE 'F' END PERSISTED
You need to use the column name (not the datatype) in the CASE. And I'd recommend making the computed column persisted, too - so that the value gets actually stored on disk (and not recomputed every time you access it).
An option is to have both a VARCHAR and a BIT field and use triggers to update between them.
I'll just have to figure out how to prevent infinite recursion. One idea is to have a field whose only purpose is to flag that the update came from within the other trigger (check whether that field is being updated, and include it in the update issued by the trigger). The updates need to go both ways to allow for easy backward compatibility; a sketch of one way to do this follows below.
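A rough sketch of what such a sync trigger could look like (SQL Server; the trigger name and the key column Id are invented for illustration, and nested-trigger behaviour should be checked against your server settings):

CREATE TRIGGER dbo.trg_User_SyncActive
ON dbo.[User]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- If we are already inside a nested invocation of this trigger, stop here
    -- instead of ping-ponging between the two columns forever.
    IF TRIGGER_NESTLEVEL() > 1 RETURN;

    IF UPDATE(Active)
    BEGIN
        UPDATE u
        SET Active_B = CASE i.Active WHEN 'T' THEN 1 ELSE 0 END
        FROM dbo.[User] u
        JOIN inserted i ON i.Id = u.Id;
    END
    ELSE IF UPDATE(Active_B)
    BEGIN
        UPDATE u
        SET Active = CASE i.Active_B WHEN 1 THEN 'T' ELSE 'F' END
        FROM dbo.[User] u
        JOIN inserted i ON i.Id = u.Id;
    END
END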
I want to know which columns were updated during an update operation, inside a trigger. On first scanning Books Online, it looks like COLUMNS_UPDATED is the perfect solution, but this function doesn't actually check whether values have changed; it only checks which columns were listed in the UPDATE clause. Does anyone have other suggestions?
The only way you can check if the values have changed is to compare the values in the DELETED and INSERTED virtual tables within the trigger. SQL doesn't check the existing value before updating to the new one; it will happily write a new identical value over the top. In other words, it takes your word for the update and tracks the update rather than actual changes.
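For a single column, a hedged sketch of that comparison inside an AFTER UPDATE trigger (the key column Id and column SomeColumn are invented names):

SELECT i.Id
FROM inserted i
JOIN deleted d ON d.Id = i.Id
WHERE i.SomeColumn <> d.SomeColumn
   OR (i.SomeColumn IS NULL AND d.SomeColumn IS NOT NULL)
   OR (i.SomeColumn IS NOT NULL AND d.SomeColumn IS NULL);

The extra IS NULL checks are there because a plain <> comparison silently skips rows where either side is NULL.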
We can use the UPDATE() function to find out whether a particular column was updated:
IF UPDATE(ColumnName)
Refer to this link for details: http://msdn.microsoft.com/en-us/library/ms187326.aspx
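For context, a minimal hypothetical skeleton showing where UPDATE() fits inside a trigger (table and column names invented); keep in mind it only reports that the column appeared in the SET list, not that the value actually changed:

CREATE TRIGGER dbo.trg_MyTable_CheckColumn
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(SomeColumn)
        PRINT 'SomeColumn appeared in the UPDATE statement (its value may or may not have changed)';
END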
As the others have posted, you'll need to interrogate INSERTED and DELETED. The only other useful bit of advice might be that you can get only the rows that have changed values (and discard the rows that didn't change) by using the EXCEPT operator - like this:
SELECT * FROM Inserted
EXCEPT
SELECT * FROM Deleted
The only way I can think of is that you can compare the values in DELETED and INSERTED to see which columns have changed.
Doesn't seem a particularly elegant solution though.
I asked this same question!
The previous posters are correct -- without directly comparing the values, you can't tell for sure whether the data has actually changed or not. However, there are several ways to do this type of checking, depending on what else you're trying to do in the trigger. My question has some good advice in the answers about those different mechanisms and their tradeoffs.