MySQL: column default value taken from the related row of another table

(Table)File has many (Table)Words
FK Words.file_id related to a single File.id
Default value of Words.frame is equal to File.frame for that PK/FK
Does this type of default relationship have a name? Any examples of how to set this up? (MySQL)
Edit
The reason for this is that words may have the same frame as the file and if they do, we want to use that default, however some may not and need to be set manually. Is this really bad practice to handle it this way as described in one of the answers? Any improvement suggestions?

You may want to use a trigger. With one, you should be able to mimic a "default value" for Words.frame based on the value of a field in the File table.

It doesn't have a name, but feels like denormalization / data duplication to me.
@Daniel Vassallo suggests an INSERT trigger for this, and I think that would be the best approach as well, if this is really what you need.
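A BEFORE INSERT trigger along these lines could supply the default. This is only a sketch against the File/Words tables described in the question; the trigger name and the assumption that an explicit frame is passed as NULL are mine:

```sql
-- Sketch: default Words.frame from the parent File row when none is given.
-- Assumes File(id, frame) and Words(id, file_id, frame) as in the question.
DELIMITER //
CREATE TRIGGER words_default_frame
BEFORE INSERT ON Words
FOR EACH ROW
BEGIN
    -- Only fill in the default when the caller did not supply a frame.
    IF NEW.frame IS NULL THEN
        SET NEW.frame = (SELECT frame FROM File WHERE id = NEW.file_id);
    END IF;
END//
DELIMITER ;
```

Rows that need a manual frame simply insert an explicit value, which the trigger leaves untouched.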

Related

Using auto assigned primary key or setting it on INSERT?

I just answered this question: Can I get the ID of the data that I just inserted? I explained that, in order to know the ID of the last record inserted in a table, what I would do is insert it manually instead of using some sequence or serial field.
What I like to do is to run a Max(id) query before INSERT, add 1 to that result, and use that number as ID for the record I'm about to insert.
Now, what I would like to ask: is this a good idea? Can it give some trouble? What are the reasons to use automatically set field on IDs fields?
Note: this is not exactly a question, but looking at the help center it seems like a good question to ask. If you find it to be off-topic, please tell me and I'll remove it.
This is a bad idea, and it will fail in a multi-threaded (or multi-user) environment.
Please note that the surrogate-key vs. natural-key debate is still far from a definitive resolution - but putting that aside for a minute - even if you do go with a surrogate key, you should never try to auto-increment columns manually. Let the database do that for you and avoid all the problems that can occur if you do it yourself - primary key constraint violations in the best case, or duplicate values in the worst case.
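To illustrate, in MySQL the unsafe pattern and its safe replacement look roughly like this (the table and column names are made up):

```sql
-- Unsafe (race condition): two concurrent sessions can both read the
-- same MAX(id), compute the same "next" ID, and collide on insert:
--   SELECT MAX(id) + 1 FROM t;  -- session A and session B both get 43
--   INSERT INTO t (id, payload) VALUES (43, '...');  -- one of them fails

-- Safe: let the database assign the key, then read it back.
CREATE TABLE t (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(100)
);

INSERT INTO t (payload) VALUES ('hello');
SELECT LAST_INSERT_ID();  -- connection-local, safe under concurrency
```

LAST_INSERT_ID() is scoped to the current connection, so concurrent inserts by other sessions cannot change the value it returns to you.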
If an entity uses an ID as its primary key, it is in general a good idea to let the DB auto-create it, so you don't need to determine an unused one while creating the entity in your code. Furthermore, a DataAccessObject (DAO) does not need to operate on the ID.
Depending on which DB you use, you might not even be allowed to retrieve all IDs of that table.
I guess there might be other good reasons to let the DB manage this part.

Cast 'new' and 'old' dynamically [duplicate]

I'm interested in using the following audit mechanism in an existing PostgreSQL database.
http://wiki.postgresql.org/wiki/Audit_trigger
but, would like (if possible) to make one modification. I would also like to log the primary key's value where it could be queried later. So, I would like to add a field named something like "record_id" to the "logged_actions" table. The problem is that every table in the existing database has a different primary key field name. The good news is that the database has a very consistent naming convention: it's always <table name>_id. So, if a table is named "employee", the primary key is "employee_id".
Is there any way to do this? Basically, I need something like OLD.FieldByName(x) or OLD[x] to get the value out of the id field and put it into the record_id field in the new audit record.
I do understand that I could just create a separate, custom trigger for each table that I want to keep track of, but it would be nice to have it be generic.
edit: I also understand that the key value does get logged in the old/new data fields. But I would like to make querying the history easier and more efficient. In other words:
select * from audit.logged_actions where table_name = 'xxxx' and record_id = 12345;
another edit: I'm using PostgreSQL 9.1
Thanks!
You didn't mention your version of PostgreSQL, which is very important when writing answers to questions like this.
If you're running PostgreSQL 9.0 or newer (or able to upgrade) you can use this approach as documented by Pavel:
http://okbob.blogspot.com/2009/10/dynamic-access-to-record-fields-in.html
In general, what you want is to reference a dynamically named field in a record-typed PL/PgSQL variable like 'NEW' or 'OLD'. This has historically been annoyingly hard, and is still awkward but is at least possible in 9.0.
Your other alternative - which may be simpler - is to write your audit triggers in plperlu, where dynamic field references are trivial.
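For the PL/pgSQL route on 9.0+, the core trick is an EXECUTE ... USING that dereferences a dynamically named field of the OLD record. A rough sketch, assuming the question's <table name>_id naming convention, an integer primary key, and a hypothetical record_id column on audit.logged_actions:

```sql
-- Sketch (PostgreSQL 9.0+): extract a dynamically named column from OLD.
CREATE OR REPLACE FUNCTION audit_with_pk() RETURNS trigger AS $$
DECLARE
    pk_col text := TG_TABLE_NAME || '_id';  -- e.g. 'employee_id'
    pk_val bigint;
BEGIN
    -- ($1).<column> dereferences a field of the record passed via USING.
    EXECUTE 'SELECT ($1).' || quote_ident(pk_col)
        INTO pk_val
        USING OLD;
    -- ... insert pk_val into audit.logged_actions.record_id here ...
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
```

One generic trigger function can then be attached to every audited table, since TG_TABLE_NAME supplies the per-table key name at run time.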

Interview: How to handle SQL NOT NULL constraint on the code end

I was recently asked this question in an interview, and I'm having trouble formulating the question well enough to find an answer via a search engine.
If my SQL database has a NOT NULL constraint placed on the "name" column, how would I be able to create that row, filling it with other data, without tripping the "name" NOT NULL constraint, assuming that you don't have the proper data to insert into the "name" field?
My off-the-cuff response was to insert an empty string into the "name" field, but I feel like that's too hacky. Does anyone know the proper response?
It's usually best practice to insert a dummy value, such as -1, that you can easily replace later; a blank string can be more problematic in some cases. To do this you would use either a CASE WHEN expression or, ideally, an ISNULL() function, which looks like this: ISNULL([ColName], -1). ISNULL is probably the answer they were looking for: it inserts the data when you have it, and falls back to -1 when the value is NULL.
As Gordon commented, you could also use a DEFAULT value when creating the table. In my answer above, I am assuming you're working with a table that has already been created - meaning you couldn't do that without altering the table.
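For example, assuming hypothetical Employees and StagingEmployees tables (SQL Server syntax), the ISNULL fallback at insert time might look like:

```sql
-- Sketch: insert rows even when name is missing, without tripping NOT NULL.
-- '-1' is the string form of the answer's suggested sentinel value.
INSERT INTO Employees (name, department)
SELECT ISNULL(src.name, '-1'),
       src.department
FROM StagingEmployees AS src;
```

Rows whose sentinel name is '-1' can then be found and corrected later with a simple WHERE name = '-1' query.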
There are two ways that I can think of for not having to insert name if it is NULL. By far the simpler is to define a default value:
alter table t alter column name varchar(255) not null;
alter table t add constraint df_t_name default '<no name>' for name;
The alternative is to use a trigger, but that is much more cumbersome.
If I were asking a similar question, this is the answer that I would want.
Why bypass the constraint?
In my opinion, either your data is wrong or the constraint is.
If you bypass constraints, you can't assure any data quality inside the DB.
So I would say that a scenario where a table cannot be changed even though its constraint is wrong is a huge piece of technical debt, which should be solved instead.

Trying to make my database more dynamic

I am trying to figure out what the best way to design this database would be. Currently what I have works, but it requires me to hard-code values where I would like it to be dynamic in the future.
Here is my current database design:
As you can see, for both the Qualities and the PressSettingsSet tables, there are many columns that are hard-coded such as BlownInsert, Blowout, Temperature1, Temperature2, etc.
What I am trying to accomplish is to have these be dynamic. Each job will have these same settings, but I would like to allow the users to define these settings. Would it be best to create a table with just a name field and have a one-to-one relationship to another table with a value for the field and a relation to the Job Number?
I hope this makes sense, any help is appreciated. I can make a database diagram of how I think it should work if that is more helpful to what I am trying to convey. I think that what I have in mind will work, but it just seems like it will be creating a lot of extra rows in the database, so I wanted to see if there is possibly a better way.
Would it be best to create a table with just a name field and have a one-to-one relationship to another table with a value for the field and a relation to the Job Number?
That would be the simplest - you could expand it by adding date-effective fields, or de-normalize it by putting it all in one table (with just a name and a value field).
Are the values for the settings different per job? If so, then yes, a "key" table with the name and a one-to-many relationship to the value per job would be best.
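One possible shape for such a key table, with hypothetical names and types (a varchar value column is an alternative if the setting types vary):

```sql
-- User-definable settings: a definition table plus a per-job value table.
CREATE TABLE SettingDefinitions (
    setting_id INT PRIMARY KEY,
    name       VARCHAR(50) NOT NULL UNIQUE  -- e.g. 'Temperature1', 'Blowout'
);

CREATE TABLE JobSettingValues (
    job_number INT NOT NULL,                -- FK to the Jobs table
    setting_id INT NOT NULL,                -- FK to SettingDefinitions
    value      DECIMAL(10,2),
    PRIMARY KEY (job_number, setting_id)
);
```

Adding a new setting then becomes an INSERT into SettingDefinitions rather than an ALTER TABLE, which is the "dynamic" behavior the question asks for; the trade-off is more rows and a pivot when you need all settings for a job on one line.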

Is this schema design good?

I inherited a system that stores default values for some fields in some tables in the database. These default values are used in the application to prepopulate control values. So, essentially, every field in every table in the database can potentially have a default value. The previous developer decided to store these values in a single table that held key/value pairs. The key is the source table + field name (as a varchar), and the default value is a varchar field as well. The business layer then casts the varchar field to the appropriate data type.
Somehow, I feel this is brittle. Though the application works as expected, there appears to be a flaw in the design.
Any suggestions on how this requirement could have been handled earlier? Is there anything that can be done now to make it more robust?
EDIT: I should have defined what the term "default" meant. This is NOT related to the default value of a field in the table. Instead, it's a default value that will be used by the application in the front end.
That schema design is fine. I've seen it used in commercial apps and I've also used it in a few apps of my own where the users needed to be able to change the defaults or other parameters around fields in the application (limits, allowable characters etc.) or the application allowed the users to add new fields for use in the app.
Having it in a single table (not separate default tables for each table) protects it from schema changes in the tables it supports. Those schema changes become simple configuration changes in this model.
The single table makes it easy to encapsulate in a Class to serve as the "defaults" configuration object.
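The inherited design presumably looks something like this (the names here are guesses, not the actual schema):

```sql
-- One row per (table, column) pair that has an application-level default.
CREATE TABLE FieldDefaults (
    table_name  VARCHAR(128) NOT NULL,
    column_name VARCHAR(128) NOT NULL,
    default_val VARCHAR(255),            -- cast to the real type in the BL
    PRIMARY KEY (table_name, column_name)
);
```

The composite primary key is what lets a single lookup ("give me the default for Orders.Freight") stay fast and unambiguous, and the table survives schema changes in the tables it describes.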
Some general advice:
When you inherit a working system and don't understand why something was designed the way it is - the problem is most likely your understanding, not the system. If it isn't broken, do not fix it.
Specific advice on the only improvements I would recommend (if they become necessary):
You can use the sql_variant type for the value column rather than a varchar - it can hold any of the regular data types - though you will need to add support for casting values to the correct data type when using them.
Refactoring the schema now would be risky and disruptive so I would not recommend it (unless you absolutely need to do that to fix some pressing issue, but from what you say it doesn't look like you do).
Were you doing the design from scratch, I'd recommend one defaults-table per real-table, with a single row recording the defaults with their real column names and types. Having several tiny tables scares some DBAs, but it's not really any substantial performance hit in my experience, and it sure does make the system sounder and more robust as you desire.
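A sketch of that per-table idea, using a hypothetical OrdersDefaults table whose columns mirror the real ones, so the database itself type-checks every default:

```sql
-- One defaults table per real table: a single row holds the defaults,
-- declared with the real column names and types (SQL Server syntax).
CREATE TABLE OrdersDefaults (
    Freight     money        DEFAULT (0),
    ShipAddress nvarchar(60) DEFAULT ('NO SHIPPING ADDRESS')
);

-- The one row of defaults; every column takes its declared DEFAULT.
INSERT INTO OrdersDefaults DEFAULT VALUES;
```

A default that doesn't fit the column type is rejected at write time instead of blowing up later in the business layer's cast.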
If you want to use SQL's own DEFAULT clauses as other answers recommend, be sure to name those explicitly, otherwise altering them when a default changes can be a doozy. Personally, I like to keep the default values separate from the schema's metadata, especially in a system where updating or tweaking a default value is a much more common and should-be-innocuous operation than the momentous undertaking of metadata/schema changes!
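In SQL Server, for instance, an explicitly named default constraint can be dropped and re-created by name when the default changes (the names here are illustrative):

```sql
-- Naming the constraint makes later changes painless and scriptable.
ALTER TABLE Orders
    ADD CONSTRAINT DF_Orders_Freight DEFAULT (0) FOR Freight;

-- Later, when the default changes:
ALTER TABLE Orders DROP CONSTRAINT DF_Orders_Freight;
ALTER TABLE Orders
    ADD CONSTRAINT DF_Orders_Freight DEFAULT (10) FOR Freight;
```

Without the explicit name, the server generates one (e.g. DF__Orders__Freig__2B3F6F97) that you'd first have to dig out of the system catalogs before you could drop it.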
A better way to go would be using SQL Server's built-in DEFAULT constraint.
e.g.
CREATE TABLE Orders
(
OrderID int IDENTITY NOT NULL,
OrderDate datetime NULL CONSTRAINT DF_Orders_OrderDate DEFAULT(GETDATE()),
Freight money NULL CONSTRAINT DF_Orders_Freight DEFAULT (0) CHECK(Freight >= 0),
ShipAddress nvarchar (60) NULL CONSTRAINT DF_Orders_ShipAddress DEFAULT('NO SHIPPING ADDRESS'),
EnteredBy nvarchar (60) NOT NULL CONSTRAINT DF_Orders_EnteredBy DEFAULT(SUSER_SNAME())
)
If the requirement was that the default selection of a given control be configurable and the "application works as expected" then I don't see a problem. You didn't elaborate on the "flaw" in the design.
If you want to (and you should!) use default values in the database, I would strongly urge you to use the built-in DEFAULT constraint that's available on any field. Only that is really guaranteed to work properly - anything else is a hack solution at best.
CREATE TABLE MyTable
(
ID INT IDENTITY(1,1),
NumericField INT CONSTRAINT DF_MyTable_Numeric DEFAULT(42),
StringID VARCHAR(20) CONSTRAINT DF_MyTable_StringID DEFAULT('rubbish'),
.......
)
and so on - you get the idea.
Just learn this mantra: DRY - DON'T REPEAT YOURSELF - don't go out re-inventing stuff that's already there and has been heavily tested and used - just use it.
Marc
I think the real answer here depends heavily on how often these default values change. If default values are set once when the database is designed, then DEFAULT constraints make sense. If some non-technical person needs to change them every couple of months, I really like the design presented.
Where it becomes brittle is when you have a mismatch between the column names or data types and the default values in the Defaults table. If you code a careful interface to manage the Defaults table values, this shouldn't be a problem.
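One way to build that careful interface is a validation query that flags defaults whose table or column no longer exists. This sketch assumes a hypothetical FieldDefaults table keyed by table and column name:

```sql
-- Find rows in the defaults table that no longer match the live schema.
SELECT d.table_name, d.column_name
FROM FieldDefaults AS d
LEFT JOIN INFORMATION_SCHEMA.COLUMNS AS c
       ON c.TABLE_NAME  = d.table_name
      AND c.COLUMN_NAME = d.column_name
WHERE c.COLUMN_NAME IS NULL;   -- orphaned default: column was renamed/dropped
```

Running this as part of deployment catches the name-mismatch brittleness before the application ever reads a stale default.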
If it's a case of UI defaults, the following questions come up:
How 'dynamic' or generic is your schema? Does the same schema support multiple front-ends - i.e., does the same column in the DB table support two front-ends, each with its own defaults?
Do multiple apps use your DB? In that case, having the default defined in the DB could still help.
It's possible to query the data dictionary to get default info for each column.
If a UI field does not have a corresponding DB column, then your current implementation is justified in such cases.
One downside is that more code is needed to handle and use this table.
If it was a one-off application and this default 'intelligence' was not leveraged across multiple apps, that's a consideration.
It's more of a 'frameworky' kind of thing to do - though I'd say it's quite non-standard, and would normally be done in the web layer.
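Querying the data dictionary for the declared defaults is straightforward via INFORMATION_SCHEMA (available in SQL Server and MySQL, and PostgreSQL exposes it too):

```sql
-- List every column that has a DEFAULT declared in the schema itself.
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_DEFAULT
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_DEFAULT IS NOT NULL
ORDER BY TABLE_NAME, COLUMN_NAME;
```

An application could read these at startup instead of (or as a fallback to) the custom defaults table, at least for UI fields that map directly to DB columns.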
If the table of default values is what irks you, here's some food for thought:
Rather than sticking to dogma about varchar(max), casting strings, or key/value tables, a good approach is to ask: what would a better solution look like?
From your description, it seems like this table contains few rows, and has only two columns: key and value.
I should ask - is the data in this table controlled from an administrative UI? Perhaps this is the reason behind the original design decision to make it a table.
If type-safety is an issue, you could consider the existence of a "type" column and analyze how the code would need to be changed.
I wouldn't jump to conclusions about "good" or "bad" until you really analyze WHY the system is implemented this way.
The idea (not necessarily the implementation) makes sense if you want to keep the application defaults separate from the data, allowing different apps to have different defaults.
This is generally a good thing, because many databases inevitably spawn secondary applications (import jobs, if not anything else), where you do NOT want the same defaults (or any defaults at all); and in principle, a defaults table can support this.
What I think makes this implementation less than ideal is that while the defaults are mostly data-driven, the calling application either needs its own set of defaults for when the defaults are not specified in the table, or it has to terminate.
If the former is employed, this could introduce a number of headaches when you're trying to track down bugs, especially if you don't have good audit tables keeping track of which user/application inserted/updated which rows on which tables.
Disclaimer: I'm generally of the opinion that columns ought to be NULLable and without defaults, except where it absolutely makes sense from a data point of view (id/primary key, custom timestamp, etc.). If a column should never be NULL, introduce a constraint forbidding NULLs, not a concrete default.