Every programmer knows what CRUD is, but what does Replace actually mean when it comes to RDBMSs? With an object database, Replace makes more sense, i.e., you might have to replace an object with another object. But at runtime against an RDBMS, what are you replacing with what?
Or does CRUD actually stand for Create, Read, Update, Delete? (That makes more sense.)
The latter: create, read, update, delete.
See: http://en.wikipedia.org/wiki/Create,_read,_update_and_delete
In the Group Policy Preferences context the source below describes, the four actions work like this for an item N (e.g. a drive mapping):
Create – If N doesn't exist, create it. If it already exists, do nothing. Nothing you already have gets overwritten, which could be an issue if you decide to update the settings later.
Replace – If N already exists, remove it and create a new N with these settings; if it doesn't exist, just create it. In short, whether N existed or not, you end up with exactly the settings you specified. If you have settings that you wish to keep and just want to add to them, then Update is what you need to select, but that can make it an issue later to know exactly what settings a user is actually getting, as they might not match the GPO exactly.
Update – If N exists, it is updated with the new settings; any other settings associated with the drive mapping that aren't specified here are maintained. If N doesn't exist, it is created. Nothing gets removed as with the Replace setting, but there is still a chance that you'll overwrite something.
Delete – If N exists, it is removed. If N doesn't exist, nothing happens.
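To tie this back to the original RDBMS question: MySQL happens to ship statements with exactly these four semantics, so a rough sketch might look like this (the drive_mappings table and its columns are made up purely for illustration):

    -- Hypothetical table; "letter" is the unique key identifying N.
    CREATE TABLE drive_mappings (
        letter   CHAR(1) PRIMARY KEY,
        unc_path VARCHAR(255),
        label    VARCHAR(64) DEFAULT NULL
    );

    -- Create: insert only if N doesn't exist; an existing row is left untouched.
    INSERT IGNORE INTO drive_mappings (letter, unc_path)
    VALUES ('H', '//server/home');

    -- Replace: delete any existing N and insert the new one. Columns not
    -- specified here (label) are reset to their defaults.
    REPLACE INTO drive_mappings (letter, unc_path)
    VALUES ('H', '//server/home2');

    -- Update: insert if missing, otherwise update only the listed columns;
    -- other columns (label) keep their current values.
    INSERT INTO drive_mappings (letter, unc_path)
    VALUES ('H', '//server/home3')
    ON DUPLICATE KEY UPDATE unc_path = VALUES(unc_path);

    -- Delete: remove N if it exists; do nothing otherwise.
    DELETE FROM drive_mappings WHERE letter = 'H';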
Personally I would almost always use Replace. The settings I apply are the ones I want the user to have; I don't want to be debugging a computer because I used Update and legacy settings are still being applied.
Hope this helps someone.
Source: http://wicher.co.uk/gpo-c-r-u-d-create-replace-update-delete/
For some reason I only recently found out about unique constraints for Core Data. It looks way cleaner than the alternative (doing a fetch first, then inserting the missing entities in the designated context) so I decided to refactor all my existing persistence code.
If I got it right, the gist of it is to always insert a new entity and, as long as I have a proper merge policy, saving the context will take care of uniqueness, and in a more efficient way. The problem is that every time I save a context with the inserted entity I get an NSCoreDataConstraintViolationException (no error, though). When I then do a fetch to make sure that
1. there is indeed only one instance with a given unique field, and
2. other changes to this entity were applied,
everything seems to be okay. But I'm still concerned about this exception, since I do saves, and therefore get it, quite often, a few times per second in some cases.
My project is in Objective-C, and I know exceptions are expensive there, so I'm having doubts about whether I'm missing something.
Here is a sample project with this issue (just a few lines of code; be sure to add an exception breakpoint).
NSMergeByPropertyObjectTrumpMergePolicy and constraints are not useful tools, and they should never be used. The correct way to manage uniqueness is with a fetch before the insert, as you appear to have been doing already.
Let's start with why the only correct merge policy is NSErrorMergePolicy. You should only be writing to Core Data in one synchronous way (performBackgroundTask is not enough; you also need an operation queue). If you have two performBackgroundTask blocks running at the same time and they contradict each other, you will lose data. A merge policy answers the question "which data would you like to lose?"; the correct answer is "don't lose my data!", which is NSErrorMergePolicy.
The same issue happens when you have a constraint. Let's say you have an entity with a unique constraint on the phone number, and you try to insert another entity with the same phone number. What would you like to happen? It depends on what exactly the data is. It might be two different people, and the phone numbers should be made different (perhaps one was lacking an area code); or it might be one person, and the data should be merged. Or you might have a constraint on a uniqueID, and the number should just be incremented. At the database level there is no way to know; it always just does a merge, and it will silently lose data.
You can create a custom NSMergePolicy and inspect NSConstraintConflict to decide what to do. But in practice you'd have to think, every time you write to the database, about what each change means, which can be very hard outside of the context of writing that change. In other words, the problem with constraints and merge policies is that they run at the wrong level of your application to deal with the problem effectively.
Using constraints with a merge policy of error is OK, as it is a way to find problems with your app (as long as you are monitoring crashes and fixing them). But you still need to do the fetch before the insert to make sure the error doesn't happen.
If you want to clean up the code, then just have one place where you create your objects, something like objectWithId:createIfNeed:inContext:, which does the fetch and create.
I have defined a flat file schema which works fine. However, I have now received a new requirement for it: it has to support potential additional fields at the end of the records in the future.
The solution I used is quite "ugly": I added an additional filler field at the end of the record, configured it with minOccurs = 0, and set "Allow early termination of optional fields" to true.
This works but I don't like it.
It seems to me that there must be a property for ignoring any additional fields after the last defined one, so that I wouldn't need this filler field.
Is anyone familiar with such an option/property?
Thanks all.
Nope, what you've done is the correct way to handle this situation. Beauty is in the eye of the beholder.
The Flat File Parser requires all possible content be defined in the schema so it doesn't ever have to 'guess' what's next.
When a flat file changes, the schema must change as well. That is part of the job for a BizTalk developer.
You can't anticipate changes to the flat file inside your schema. With the filler field you have now, what are you going to do if 2 extra fields appear and have to be used? How will you get the data in, say, a mapping?
This is the way the flat file parser works: everything has to be well defined, and if the specs change you must update your schemas. There is no magic here to make it all completely dynamic. You could write a custom flat file disassembler from scratch that supports it, but good luck with that.
Let me explain this question a bit :)
I'm writing a bunch of stored procedures for a new product.
They will only ever be called by the C# application, written by developers who are following the same tech spec I've been given.
I can't go into the real tech spec, so I'll give a close-enough example:
In the tech spec, we're having to store file data in a couple of proprietary zip files, with a database storing the names and locations of each file within a zip (e.g. one database for each zip file).
Now, let's say that this tech spec states that, to perform "Operation A", the following steps must be done:
1: Calculate the space requirements of the file to be added.
2: Get a list of zip files and their database connection strings (call stored proc "GetZips").
3: Find a suitable location within the zip file to store the file (call stored proc "GetSuitableFileLocation" against each database connection until a suitable one is found). This step provides you with a start/end point within the zip for your file.
4: Call the "AllocateLocationToFile" stored proc, passing in these values, then add your file to the zip.
OK, so the question is: should "AllocateLocationToFile" re-check that the specified start/end points are still "free" and, if not, raise an exception?
There was a bit of a discussion about this in the office, and whilst I believe it should check and raise, others believe that it should not, as there is no need given that the developer calls "GetSuitableFileLocation" immediately beforehand.
Can I ask for some valued opinions?
Generally, it is better to be as safe as possible. Calling code should never rely on external code (the SPs are kind of external). The idea is that you cannot predict what will happen in the future: new people join the company, the SPs get handed over to another team, and so on...
Personally, I think the fact that B() is called right after A() doesn't guarantee anything. Changing this, for whatever reason, is not something to be considered impossible.
A team should never make decisions based on "we are going to maintain this, no problem at all", because they might get fired, the company may sell the product, and so on...
My suggestion is to do the checking, profile the code and, only if it really is a bottleneck, remove it, but write down somewhere that THIS CAN BREAK!
Given that you're manipulating files, with all the potential havoc this can create, I'd say in this scenario the risk (damage component) is high enough to be cautious.
And Svetlozar's right: what if great success does cause re-use, or other added-on applications? Not everyone may be as well-behaved as your team is right now.
One reason why it might be a good idea involves race conditions: is it possible that two users could call the process at the same time and get the same values? Please at least test this scenario with the currently designed process.
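To make the re-check concrete: if the check and the insert happen atomically inside the procedure, the race disappears. A minimal T-SQL sketch, assuming a hypothetical Allocations table (only the procedure name comes from the question):

    CREATE PROCEDURE AllocateLocationToFile
        @StartPoint BIGINT,
        @EndPoint   BIGINT,
        @FileName   NVARCHAR(260)
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;

        -- UPDLOCK/HOLDLOCK keep the checked range stable until commit,
        -- so a concurrent caller can't sneak in between check and insert.
        IF EXISTS (
            SELECT 1
            FROM Allocations WITH (UPDLOCK, HOLDLOCK)
            WHERE StartPoint <= @EndPoint
              AND EndPoint >= @StartPoint  -- i.e. the two ranges overlap
        )
        BEGIN
            ROLLBACK TRANSACTION;
            RAISERROR('Requested location is no longer free.', 16, 1);
            RETURN;
        END

        INSERT INTO Allocations (StartPoint, EndPoint, FileName)
        VALUES (@StartPoint, @EndPoint, @FileName);

        COMMIT TRANSACTION;
    END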
Example case:
We're building a renting service, using SQL Server. Information about items that can be rented is stored in a table. Each item has a state that can be either "Available", "Rented" or "Broken". The different states reside in a lookup table.
ItemState table:
id name
1 'Available'
2 'Rented'
3 'Broken'
Adding to this, we have a business rule which states that whenever an item is returned, its state is changed from "Rented" to "Available".
This could be done with an update statement like "update Items set state=1 where id=#itemid". In application code we might have an enum that maps to the ItemState ids. However, these are hard-coded values that could lead to maintenance issues later on, say, if a developer were to change the set of states but forget to fix the related business logic layer...
What good methods or alternate designs are there for dealing with this type of design issues?
Links to related articles are also appreciated in addition to direct answers.
In my experience this is a case where you actually have to hard-code, preferably by using an enum whose integer values match the ids of your lookup tables. I can't see anything wrong with saying that "1" is always "Available", and so forth.
Most systems that I've seen hard code the lookup table values and live with it. That's because, in practice, code tables rarely change as much as you think they might. And if they ever do change, you generally need to re-compile any programs that rely on that DDL anyway.
That said, if you want to make the code maintainable (a laudable goal), the best approach would be to externalize the values into a properties file. Then you can edit this file later without having to re-code your entire app.
The limiting factor here is that your app depends for its own internal state on the value you get from the lookup table, so that implies a certain amount of coupling.
For lookups where the app doesn't rely on that code, (for instance, if your code table stores a list of two-letter state codes for use in an address drop-down), then you can lazily load the codes into an object and access them only when needed. But that won't work for what you're doing.
When you have your lookup tables as well as enums defined in the code, then you always have an issue with keeping them in sync. There is not much that can be done here. Both live effectively in two different worlds and are generally unaware of each other.
You may wish to reject using lookup tables and let only your business logic operate on these values. In that case you lose the option of relying on referential integrity to back you up on data integrity.
The other option is to build your application in such a way that you never need these values in your code. That means moving part of your business logic to the database layer, i.e. putting it in stored procedures and triggers. This also has the benefit of being client-agnostic: anyone can invoke the SPs and be assured the data will be kept in a consistent state, consistent with your business-logic rules as well.
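For example, here is a sketch of the "return an item" rule from the question as a stored procedure (T-SQL, since the question mentions SQL Server; the procedure name is made up). Callers never see the state ids at all:

    CREATE PROCEDURE ReturnItem
        @ItemId INT
    AS
    BEGIN
        -- Look the ids up by name, so nothing here hard-codes 1 or 2.
        UPDATE Items
        SET state = (SELECT id FROM ItemState WHERE name = 'Available')
        WHERE id = @ItemId
          AND state = (SELECT id FROM ItemState WHERE name = 'Rented');

        -- Enforce the business rule: only rented items can be returned.
        IF @@ROWCOUNT = 0
            RAISERROR('Item was not in the Rented state.', 16, 1);
    END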
You'll need to have some predefined value that never changes, be it an integer, a string or something else.
In your case, the numerical value of the state is the state's surrogate PRIMARY KEY which should never change in a well-designed database.
If you're concerned about the consistency, use a CHAR code: A, R or B.
However, you should stick to it just as you would to a numerical code, so that A always means Available, etc.
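A minimal sketch of that idea (T-SQL):

    -- The readable code is the key; the name is just a description.
    CREATE TABLE ItemState (
        code CHAR(1)     PRIMARY KEY,
        name VARCHAR(20) NOT NULL
    );

    INSERT INTO ItemState (code, name)
    VALUES ('A', 'Available'), ('R', 'Rented'), ('B', 'Broken');

Now "update Items set state='A' where id=#itemid" reads almost like the business rule itself.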
Your database structure should be documented as well as the code is.
The answer depends entirely on the language you're using: solutions for this are not the same in Java, PHP, Smalltalk or even Assembler...
But let me tell you something: while it's true that hard-coded values are not a great thing, there are times when you do need them. And this is pretty much one of them: you need to declare in your code your current knowledge of the business logic, which includes these hard-coded states.
So, in this particular case, I would hard code those values.
Don't overdesign it. Before trying to come up with a solution to this problem, you need to figure out if it's even a problem. Can you think of any legit hypothetical scenario where you would change the values in the itemState table? Not just "What if someone changes this table?" but "Someone wants to change this table in X way for Y reason, what effect would that have?". You need to stay realistic.
New state? You add a row, but it doesn't affect the existing ones.
Removing a state? You have to remove the references to it in code anyway.
Changing the id of a state? There is no legit reason to do that.
Changing the name of a state? There is no legit reason to do that.
So there really should be no reason to worry about this. But if you must have this cleanly maintainable, in case irrational people randomly decide to change Available to 2 because it fits their feng shui better, make sure all tables are generated via a script that reads these values from a configuration file, and then make sure all code reads its constants from that same configuration file. Then you have one definition location; any time you want to change a value, you modify that configuration file instead of the DB/code.
I think this is a common problem and a valid concern, that's why I googled and found this article in the first place.
What about creating a public static class to hold all the lookup values, but instead of hard-coding them, initializing these values when the application is loaded and using names to refer to them?
In my application we tried this, and it worked. You can also do some checking, e.g. that the number of distinct possible values of a lookup in code is the same as in the db, and if it's not, log/email/etc. But I don't want to code this by hand for the statuses of 40+ business entities.
Moreover, this can be seen as part of the bigger problem of O/R mapping: we're exposed to too many details of the persistence layer and thus have to take care of them. With technologies like Entity Framework, we shouldn't need to worry about the "sync" part because it's automated, am I right?
Thanks!
I've used a method similar to what you're describing: a table in the database with values and descriptions (useful for reporting, etc.) and an enum in code. I've handled the synchronization with a comment in code saying something like "these values are taken from table X in database ABC", so the programmer knows the database needs to be updated too. To prevent changes on the database side without the corresponding changes in code, I set permissions on the table so that only certain people (who hopefully remember they need to change the code as well) have access.
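The locking-down part might look like this (T-SQL; the principal name app_user is made up):

    -- The application account can read the lookup table but never change it;
    -- only the people who also maintain the enum in code may alter it.
    GRANT SELECT ON ItemState TO app_user;
    DENY INSERT, UPDATE, DELETE ON ItemState TO app_user;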
The values have to be hard-coded, which effectively means that they can't be changed in the database, which means that storing them in the database is redundant.
Therefore, hard-code them and don't have a lookup table in the database. Instead, store the item's state directly in the Items table.
You can structure your database so that your application doesn't actually have to care about the codes themselves, but rather the business rules behind them.
I have done both of the following:
Do one or more of your codes have a certain characteristic, such as IsAvailable, that the application cares about? If so, add it as a flag column to the code table, where those that match are set to true (or your DB's equivalent), and those that don't are set to false.
Do you need to use a specific, single code under a certain condition? You can create a singleton table, named something like EnvironmentSettings, with a column such as ItemStateIdOnReturn that's a foreign key to the ItemState table.
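A minimal sketch of both options (T-SQL; the Items and ItemState tables are from the question, the other names follow the two options above):

    -- #1: a flag column, so the app asks "is this state available?"
    -- rather than "is this state id 1?".
    ALTER TABLE ItemState ADD IsAvailable BIT NOT NULL DEFAULT 0;
    UPDATE ItemState SET IsAvailable = 1 WHERE name = 'Available';

    -- #2: a singleton settings table pointing at the state to use on return.
    CREATE TABLE EnvironmentSettings (
        ItemStateIdOnReturn INT NOT NULL REFERENCES ItemState (id)
    );
    INSERT INTO EnvironmentSettings (ItemStateIdOnReturn)
    SELECT id FROM ItemState WHERE name = 'Available';

    -- The "return an item" update no longer mentions any state id:
    UPDATE Items
    SET state = (SELECT ItemStateIdOnReturn FROM EnvironmentSettings)
    WHERE id = @itemid;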
If I wanted to avoid declaring an enum in the application, I would use #2 to address the example in the question.
Whether you take this approach depends on your application's priorities. This type of structure comes at the cost of additional development and lookup overhead. Plus, if every individual code comes with its own business rules, then it's not practical to create one new column per required code.
But, it may be worthwhile if you don't want to worry about synchronizing your application with the contents of a code table.
After reading through many of the questions here about DB schema migration and versions, I've come up with a scheme to safely update DB schema during our update process. The basic idea is that during an update, we export the database to file, drop and re-create all tables, and then re-import everything. Nothing too fancy or risky there.
The problem is that this system is somewhat "viral", meaning that it is only safe to add columns or tables, since removing them would cause problems when re-importing the data. Normally, I would be fine just ignoring these columns, but the problem is that many of the removed items have actually been refactored, and the presence of the old ones in the code fools other programmers into thinking that they can use them.
So, I would like to find a way to be able to mark columns or tables as deprecated. In the ideal case, the deprecated objects would be marked while updating the schema, but then during the next update our backup script would simply not SELECT the objects which have been marked in this way, allowing us to eventually phase out these parts of the schema.
I have found that MySQL (and probably other DB platforms too, but this is the one we are using) supports a COMMENT attribute on both columns and tables. This would be perfect, except that I can't figure out how to actually use it in a meaningful manner. How would I go about writing an SQL query to get all column names whose comment does not contain the word "deprecated"? Or am I looking at this problem all wrong, and missing a much better way to do this?
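For concreteness, here is a sketch of how that could look in MySQL (the orders table and legacy_total column are placeholder names):

    -- Mark a column as deprecated; MySQL makes you restate the full
    -- column definition when changing its comment.
    ALTER TABLE orders
        MODIFY COLUMN legacy_total DECIMAL(10,2) COMMENT 'deprecated';

    -- Backup-script side: list only the columns NOT marked as deprecated.
    SELECT TABLE_NAME, COLUMN_NAME
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'your_database'
      AND COLUMN_COMMENT NOT LIKE '%deprecated%';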
Maybe you should refactor to use views over your tables, where the views never include the deprecated columns.
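Something like this (MySQL; the customers table and legacy_phone column are made-up names):

    -- The view exposes everything except the deprecated column, so code
    -- written against the view can't accidentally depend on it.
    CREATE VIEW customers_current AS
        SELECT id, name, email   -- legacy_phone deliberately omitted
        FROM customers;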
"Deprecate" usually means (to me at least) that something is marked for removal at some future date, should not used by new functionality and will be removed/changed in existing code.
I don't know of a good way to "mark" a deprecated column, other than to rename it, which is likely to break things! Even if such a facility existed, how much use would it really be?
So do you really want to deprecate or remove? From the content of your question, I'm guessing the latter.
I have the nasty feeling that you may be in one of those "if I wanted to get to there I wouldn't start from here" situations. However, here are some ideas that spring to mind:
Read Recipes for Continuous Database Integration, which seems to address much of your problem area.
Drop the column explicitly. In MySQL 5.0 (and even earlier?) the facility exists as part of DDL: see the ALTER TABLE syntax (a one-line example follows this list).
Look at how ActiveRecord::Migration works in Ruby. A migration can include the "remove_column" directive, which will deal with the problem in a platform-appropriate way. It definitely works with MySQL, from personal experience.
Run a script against your export to remove the column from the INSERT statements, both column and values lists. Probably quite viable if your DB is fairly small, which I'm guessing it must be if you export and re-import it as described.
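For the "drop the column explicitly" idea above, the DDL is a one-liner (MySQL; names made up):

    -- Removes the deprecated column outright; the next export/import
    -- cycle then has nothing stale to carry forward.
    ALTER TABLE orders DROP COLUMN legacy_total;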