DOORS creates Object IDs even when they are not saved - requirements

I am adding some 2000 new objects to a DOORS module by importing a spreadsheet with blank IDs; DOORS is supposed to create IDs for those blank rows.
The problem is that while I import the spreadsheet, DOORS hangs. When I then kill the DOORS process, it creates the IDs anyway, and the next time I add a new object, the ID numbering continues from IDs that were already allocated but no longer exist. For some reason I need to continue from my last saved ID. Is there any way I can do this?

Several remarks here:
This works as designed. As soon as an object is created in any DOORS session, the new absolute number is centrally marked as "used". I think the main reason for this behaviour is to support working in shared mode: with any other design, you would get into trouble as soon as two developers worked on the module at the same time.
Are you sure that DOORS really hangs? Perhaps it is just not finished yet; at least you can see that the objects are really being created. Note that, depending on how the import script is written, the number of objects imported per second can drop significantly for bigger files.
You should NEVER give the absolute number any meaning other than uniqueness (perhaps QSS should have used timestamps or UUIDs instead of integers for their absolute numbers when they designed DOORS; that would make the situation clearer). You will have to rework your “some reason”: perhaps use a separate mechanism to assign your own IDs, or evaluate whether the requirement “generate consecutive numbers without gaps” is really necessary.

Should I ignore _NSCoreDataConstraintViolationException?

For some reason I only recently found out about unique constraints for Core Data. It looks way cleaner than the alternative (doing a fetch first, then inserting the missing entities in the designated context) so I decided to refactor all my existing persistence code.
If I got it right, the gist of it is to always insert a new entity and, as long as I have a proper merge policy, saving the context will take care of uniqueness, and in a more efficient way. The problem is that every time I save a context with the inserted entity I get an NSCoreDataConstraintViolationException, though no error. When I do the fetch to make sure
there is indeed only one instance with a unique field
other changes to this entity were applied
everything seems to be okay, but I’m still concerned about this exception, since I do saves and therefore get it quite often, a few times per second in some cases.
My project is in Objective-C, and I know exceptions are expensive there, so I'm wondering whether I'm missing something.
Here is a sample project with this issue (just a few lines of code, be sure to add an exception breakpoint)
NSMergeByPropertyObjectTrumpMergePolicy and constraints are not useful tools and should never be used. The correct way to manage uniqueness is with a fetch before the insert, as it appears you have already been doing.
Let's start with why the only correct merge policy is NSErrorMergePolicy. You should only be writing to Core Data in a synchronous way (performBackgroundTask is not enough; you also need an operation queue). If you have two performBackgroundTask blocks running at the same time and they contradict each other, you will lose data. The merge policy answers the question "Which data would you like to lose?"; the correct answer is "Don't lose my data!", which is NSErrorMergePolicy.
The same issue happens when you have a constraint. Let's say you have an entity with a unique constraint on the phone number, and you try to insert another entity with the same phone number. What would you like to happen? It depends on what exactly the data is. It might be two different people, and the phone number should be made different (perhaps one was missing the area code), or it might be one person and the data should be merged. Or you might have a constraint on a uniqueID and the number should just be incremented. But at the database level it doesn't know; it always just does a merge. It will silently lose data.
You can create a custom NSMergePolicy and inspect NSConstraintConflict to decide what to do. But in practice you'd have to think about every edit to the database and what each change means, which can be very hard outside of the context of writing that change. In other words, the problem with constraints and merge policies is that they run at the wrong level of your application to deal with the problem effectively.
Using constraints with a merge policy of error is OK, as it is a way to find problems with your app (as long as you are monitoring crashes and fixing them). But you still need to do the fetch before the insert to make sure the error doesn't happen.
If you want to clean up code, then just have one place where you create your objects, something like objectWithId:createIfNeed:inContext: which does the fetch and create.

Handling multiple moves in chained planning variable

I'm trying to implement a variation of the vehicle routing example where instead of customers I have "pick ups" and "drop offs". My hard constraints are:
Each associated pick up/drop off pair must be carried out by the same vehicle (e.g. if vehicle A picks up an item, that item cannot be dropped off by vehicle B).
A pick up must be carried out before its associated drop off (e.g. you can't drop something off unless you've already picked it up).
A vehicle cannot exceed its maximum capacity.
Other than these hard constraints my solution works much like the vehicle routing example, where each vehicle has a chain of locations (either a PickUp or a DropOff).
The problem I'm having is that, using the default moves, the solver cannot easily move both a PickUp and a DropOff to a different vehicle. For example, the following change move results in an invalid state and so will be rejected:
To finish the move properly, I would need to do an additional move so that the drop off belongs to the same chain as the pickup:
It feels like the right thing to do would be to implement some kind of composite move which carries out both moves simultaneously; however, I'm not sure of the best way to approach this. Has anyone come across a similar issue before?
I've seen users do this before. The optaplanner-examples project itself doesn't have a VRPPD example yet (PD stands for Pick Up and Delivery), so you can't just copy-paste.
Reuse CompositeMove; see its static methods to build one.
What usually works: build a custom MoveListFactory (and later refactor it to a MoveIteratorFactory to scale out) and have it produce CompositeMoves of ChainedChangeMove and/or ChainedSwapMove.
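For illustration, here is a rough sketch of the shape of such a factory, written against the OptaPlanner 7.x-style generic API. RoutingSolution, PickUp, DropOff, Standstill and the helper ChainMoves.relocate(...) are assumed names, not part of the answer: the helper stands for however you construct a chain-aware change move in your version (e.g. ChainedChangeMove, whose constructor arguments differ between releases), so treat this as a shape to follow rather than drop-in code.

```java
import java.util.ArrayList;
import java.util.List;

import org.optaplanner.core.impl.heuristic.move.CompositeMove;
import org.optaplanner.core.impl.heuristic.move.Move;
import org.optaplanner.core.impl.heuristic.selector.move.factory.MoveListFactory;

// Enumerates composite moves that relocate a PickUp together with its DropOff,
// so the "same vehicle" and "pick up before drop off" hard constraints can be
// satisfied by one move instead of two partial moves that each get rejected.
public class PickUpDeliveryMoveFactory implements MoveListFactory<RoutingSolution> {

    @Override
    public List<Move<RoutingSolution>> createMoveList(RoutingSolution solution) {
        List<Move<RoutingSolution>> moveList = new ArrayList<>();
        for (PickUp pickUp : solution.getPickUpList()) {
            DropOff dropOff = pickUp.getDropOff();
            for (Standstill newPreviousForPickUp : solution.getStandstillList()) {
                for (Standstill newPreviousForDropOff : solution.getStandstillList()) {
                    // Both sub-moves are applied (and undone) together, so the
                    // solver never sees the half-moved, invalid intermediate state.
                    moveList.add(CompositeMove.buildMove(
                            ChainMoves.relocate(pickUp, newPreviousForPickUp),
                            ChainMoves.relocate(dropOff, newPreviousForDropOff)));
                }
            }
        }
        return moveList;
    }
}
```

Once the number of pairs grows, the same enumeration can be moved into a MoveIteratorFactory so moves are generated lazily instead of all up front.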

Are there VB.NET UI Templates for Managing a DataSet?

Is there a quick and easy way to make a VB.NET user interface for managing the data in a normalized DataSet?
I know that is a very subjective question, so let me explain. For a brief period early in my career, I used to create user interfaces in Microsoft Access. I developed a simple, but very effective approach to user interface design. Here are some details of that approach:
Create one form per table. Put on each form all controls necessary to completely manage one row in the table.
Use combo boxes for foreign-key columns.
Give the user a standard way to add rows and delete rows.
Use Apply and Undo buttons.
Let the user navigate from one row to another with a list box.
Provide a search box and filter options for more efficient navigation.
Let the user double-click on controls representing foreign-key columns to quickly navigate from one form to another.
Make the state of each form persistent (so the user always returns to the last navigation point).
etc.
Simple, right? I found that Access encouraged this approach. It has many built-in features that make this kind of UI easy. For instance, creating a combo box to represent a foreign key relationship takes about 10 seconds.
Well, I haven't worked in Access for a while. A couple of years ago, however, I was hired to write an application in VB.NET on the .NET 2.0 framework. To get a data management user interface up and running quickly, I used my Access experience to write a quick & easy prototype in Access -- that took me about one week. Then I hired a programmer to implement that same UI in VB.NET. What a nightmare! We've been working on that implementation for a year, and I'm still very unsatisfied with the results. Some of the problems we are having:
Apply and Undo buttons don't work quite right. We can't find an event that tells us when the form is "dirty" (thus making Apply and Undo relevant).
Navigation from row to row and from form to form requires surprisingly complicated code. I get the impression that we are fighting against .NET's binding features, not working with them the way they were intended to be used.
The .NET controls seem buggy. For instance, when the user types a value into a combo box (as opposed to choosing it from the drop-down), it doesn't trigger the SelectedValueChanged event.
We seem to be repeating a lot of information. For instance, the DataSet knows there is a relationship between the columns in two tables, but we must nevertheless effectively repeat the details of that relationship when we program the combo boxes, binding, navigation features, etc.
We still don't have good solutions for the filter and search features. There are lots of little details to work out. (For instance, what if you choose a filter that doesn't include the currently displayed row?)
We are writing many helper functions and classes to simplify the work, and I can't figure out why that effort hasn't already been done by others -- I'm certain we are reinventing the wheel.
etc.
By themselves, none of the above are a big deal -- there are effective solutions to each one. Taken together, however, these problems are making my UI development go much slower than expected.
In an ideal world, I should be able to create a small amount of code relevant to my specific data model (for instance, one user control per table establishing the layout and logic relevant to the rows in that table) then integrate that code into a template which interprets the data model and handles everything else -- navigation, adding and deleting, apply and undo, search and filter, etc.
Thus, my question: Is there anything out there which makes this type of UI development easier?
I've searched the web for various combinations of "generic forms", "UI templates", "data management forms", etc., but I haven't found anything on topic. Perhaps I just don't know the buzzwords. Is there a specific name for this type of UI development task?
Create a user control (UC) for each table. Drop a grid control onto the UC and bind it to the table's DataSet using VS's wizard. Select the options that allow insert, update, and delete. Each row in the grid will have those buttons/actions added for you automatically.

Finding unused columns

I'm working with a legacy database which, due to poor management and design, has had a wild growth of columns that have never been, or are no longer, being used.
Is it possible to somehow query for column usage? As in how often a column is being selected (either specifically or with *, or joined on)?
It seems to me like this is something we should be able to retrieve somehow, but I have been unable to find anything like this.
Greetings,
F.B. ten Kate
Unfortunately, this analysis on the DB side isn't really going to be a full answer. I've seen a LOT of instances where application code only needed 3 columns of a 10+ column table, but selected them all anyway.
Your column would still show up on a usage report in any sort of trace or profiling you did, but it still may not ACTUALLY be in use.
You might have to either a) analyze the entire collection of apps that use this website, or b) start drafting a return-on-investment-style doc on whether it's worth rebuilding.
This article will give you a good idea of how to search all fixed code (procedures, views, functions and triggers) for the columns that are used. The code in the article searches for a specific table/column combination. You could easily adapt it to run for all columns. For anything dynamically executed, you'd probably have to set up a profiler trace.
Even if you could determine whether a column had been used in the past X period of time, would that be good enough? There may be some obscure program out there that populates a column once a week, a month, a year; or once every time they click the mystery button that no one ever clicks, or to log the report that only Fred in accounting ever runs (he quit two years ago), or that gets logged to if that one rare bug happens (during daylight savings time, perhaps?)
My point is, the only way you can truly be certain that a column is absolutely not used by anything is to review everything -- every call, every line of code, every ad hoc Excel data dump, every possible contingency -- everything that references the database. As this may be all but unachievable, try to get a formally defined group of programs and procedures that must be supported, bend over backwards to make sure they are supported, and be prepared to fix things when some overlooked or forgotten piece of functionality turns up.

Getting rid of hard coded values when dealing with lookup tables and related business logic

Example case:
We're building a renting service, using SQL Server. Information about items that can be rented is stored in a table. Each item has a state that can be either "Available", "Rented" or "Broken". The different states reside in a lookup table.
ItemState table:
id name
1 'Available'
2 'Rented'
3 'Broken'
Adding to this, we have a business rule which states that whenever an item is returned, its state is changed from "Rented" to "Available".
This could be done with an update statement like "update Items set state=1 where id=#itemid". In application code we might have an enum that maps to the ItemState ids. However, these contain hard-coded values that could lead to maintenance issues later on, say if a developer were to change the set of states but forgot to fix the related business logic layer...
What good methods or alternate designs are there for dealing with this type of design issue?
Links to related articles are also appreciated in addition to direct answers.
In my experience this is a case where you actually have to hard-code, preferably by using an enum whose integer values match the ids of your lookup tables. I can't see anything wrong with saying that "1" is always "Available" and so forth.
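As a minimal sketch (in Java here, since the question doesn't name a client language), such an enum simply mirrors the three rows of the ItemState table from the question:

```java
// The enum's integer values mirror the ItemState lookup rows from the
// question, so the rest of the code never repeats the raw numbers.
public enum ItemState {
    AVAILABLE(1),
    RENTED(2),
    BROKEN(3);

    private final int id;

    ItemState(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}
```

The update from the question then becomes "update Items set state=? where id=?" with ItemState.AVAILABLE.getId() bound as the parameter, so the literal 1 lives in exactly one place.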
Most systems that I've seen hard code the lookup table values and live with it. That's because, in practice, code tables rarely change as much as you think they might. And if they ever do change, you generally need to re-compile any programs that rely on that DDL anyway.
That said, if you want to make the code maintainable (a laudable goal), the best approach would be to externalize the values into a properties file. Then you can edit this file later without having to re-code your entire app.
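For illustration, a rough sketch of that externalization in Java; the file name, the keys (e.g. "Available=1", "Rented=2", "Broken=3") and this small wrapper class are assumptions, not part of the answer:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

// Loads the lookup ids from a properties file so they can be changed
// later without recompiling the application.
public final class ItemStateConfig {

    private final Properties states = new Properties();

    public ItemStateConfig(String path) throws IOException {
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
            states.load(in);
        }
    }

    // e.g. config.idOf("Available") instead of a hard-coded 1
    public int idOf(String stateName) {
        return Integer.parseInt(states.getProperty(stateName));
    }
}
```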
The limiting factor here is that your app depends for its own internal state on the value you get from the lookup table, so that implies a certain amount of coupling.
For lookups where the app doesn't rely on that code (for instance, if your code table stores a list of two-letter state codes for use in an address drop-down), you can lazily load the codes into an object and access them only when needed. But that won't work for what you're doing.
When you have your lookup tables as well as enums defined in the code, then you always have an issue with keeping them in sync. There is not much that can be done here. Both live effectively in two different worlds and are generally unaware of each other.
You may wish to reject using lookup tables and only let your business logic operate on these values. In that case you lose the option of relying on referential integrity to back you up on data integrity.
The other option is to build up your application in such a way that you never need these values in your code. That means moving part of your business logic to the database layer, i.e. putting it in stored procedures and triggers. This also has the benefit of being agnostic to the client: anyone can invoke the SPs and be assured the data will be kept in a consistent state, consistent with your business logic rules as well.
You'll need to have some predefined value that never changes, be it an integer, a string or something else.
In your case, the numerical value of the state is the state's surrogate PRIMARY KEY which should never change in a well-designed database.
If you're concerned about the consistency, use a CHAR code: A, R or B.
However, you should stick to it, just as you would to a numerical code, so that A always means Available, etc.
Your database structure should be documented as well as the code is.
The answer depends entirely on the language you're using: solutions for this are not the same in Java, PHP, Smalltalk or even Assembler...
But let me tell you something: while it's true that hard-coded values are not a great thing, there are times when you do need them. And this is pretty much one of them: you need to declare in your code your current knowledge of the business logic, which includes these hard-coded states.
So, in this particular case, I would hard code those values.
Don't overdesign it. Before trying to come up with a solution to this problem, you need to figure out if it's even a problem. Can you think of any legit hypothetical scenario where you would change the values in the itemState table? Not just "What if someone changes this table?" but "Someone wants to change this table in X way for Y reason, what effect would that have?". You need to stay realistic.
New state? You add a row, but it doesn't affect the existing ones.
Removing a state? You have to remove the references to it in code anyway.
Changing the id of a state? There is no legit reason to do that.
Changing the name of a state? There is no legit reason to do that.
So there really should be no reason to worry about this. But if you must have this cleanly maintainable in the case of irrational people who randomly decide to change Available to 2 because it just fits their Feng Shui better, make sure all tables are generated via a script which reads these values from a configuration file, and then make sure all code reads constants from that same configuration file. Then you have one definition location and any time you want to change the value you modify that configuration file instead of the DB/code.
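To make that concrete, here is a rough sketch (in Java; the file name and generator class are illustrative assumptions) of generating the lookup table's seed script from the same configuration file the code reads its constants from:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

// Reads the single definition location (a properties file such as
// "Available=1", "Rented=2", "Broken=3") and emits the INSERT statements
// used to (re)generate the ItemState lookup table.
public final class ItemStateSeedGenerator {

    public static void main(String[] args) throws Exception {
        Properties states = new Properties();
        try (InputStream in = new FileInputStream("item-states.properties")) {
            states.load(in);
        }
        for (String name : states.stringPropertyNames()) {
            int id = Integer.parseInt(states.getProperty(name));
            System.out.printf("INSERT INTO ItemState (id, name) VALUES (%d, '%s');%n", id, name);
        }
    }
}
```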
I think this is a common problem and a valid concern, that's why I googled and found this article in the first place.
What about creating a public static class to hold all the lookup values, but instead of hard-coding them, we initialize these values when the application is loaded and use names to refer to them?
We tried this in my application and it worked. You can also do some checking, e.g. the number of different possible values of a lookup in code should be the same as in the DB; if it's not, log/email/etc. But I don't want to manually code this for the status of 40+ business entities.
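A rough sketch of that idea in Java; the class, the JDBC plumbing and the expected-names check are illustrative assumptions, while the table and state names follow the question:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Holds the lookup ids by name, populated once when the application loads.
public final class ItemStates {

    private static final List<String> EXPECTED = Arrays.asList("Available", "Rented", "Broken");
    private static final Map<String, Integer> ID_BY_NAME = new HashMap<>();

    private ItemStates() {
    }

    // Call once at application start-up.
    public static void load(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM ItemState")) {
            while (rs.next()) {
                ID_BY_NAME.put(rs.getString("name"), rs.getInt("id"));
            }
        }
        // The consistency check mentioned above: the states the code knows
        // about must match what the database actually contains.
        if (ID_BY_NAME.size() != EXPECTED.size() || !ID_BY_NAME.keySet().containsAll(EXPECTED)) {
            throw new IllegalStateException("ItemState table out of sync with code: " + ID_BY_NAME.keySet());
        }
    }

    // e.g. ItemStates.idOf("Available") instead of a hard-coded 1
    public static int idOf(String name) {
        Integer id = ID_BY_NAME.get(name);
        if (id == null) {
            throw new IllegalArgumentException("Unknown item state: " + name);
        }
        return id;
    }
}
```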
Moreover, this can be part of the bigger problem of OR mapping. We're exposed to too many details of the persistence layer, and thus we have to take care of them. With technologies like Entity Framework, we don't need to worry about the "sync" part because it's automated, am I right?
Thanks!
I've used a similar method to what you're describing - a table in the database with values and descriptions (useful for reporting, etc.) and an enum in code. I've handled the synchronization with a comment in code saying something like "these values are taken from table X in database ABC" so that the programmer knows the database needs to be updated. To prevent changes from the database side without the corresponding changes in code I set permissions on the table so that only certain people (who hopefully remember they need to change the code as well) have access.
The values have to be hard-coded, which effectively means that they can't be changed in the database, which means that storing them in the database is redundant.
Therefore, hard-code them and don't have a lookup table in the database. Instead, store the item's state directly in the Items table.
You can structure your database so that your application doesn't actually have to care about the codes themselves, but rather the business rules behind them.
I have done both of the following:
Do one or more of your codes have a certain characteristic, such as IsAvailable, that the application cares about? If so, add it as a flag column to the code table, where those that match are set to true (or your DB's equivalent), and those that don't are set to false.
Do you need to use a specific, single code under a certain condition? You can create a singleton table, named something like EnvironmentSettings, with a column such as ItemStateIdOnReturn that's a foreign key to the ItemState table.
If I wanted to avoid declaring an enum in the application, I would use #2 to address the example in the question, as in the sketch below.
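A hedged sketch of approach #2 in Java: the id to assign on return is read from the EnvironmentSettings singleton table instead of being hard-coded. The table and column names come from this answer; the JDBC plumbing is illustrative.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Flips a returned item back to whatever state EnvironmentSettings says
// should be used on return, without the application knowing the id itself.
public final class ReturnProcessor {

    public static void markReturned(Connection connection, int itemId) throws SQLException {
        int stateIdOnReturn;
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ItemStateIdOnReturn FROM EnvironmentSettings")) {
            if (!rs.next()) {
                throw new IllegalStateException("EnvironmentSettings row is missing");
            }
            stateIdOnReturn = rs.getInt("ItemStateIdOnReturn");
        }
        try (PreparedStatement update =
                     connection.prepareStatement("UPDATE Items SET state = ? WHERE id = ?")) {
            update.setInt(1, stateIdOnReturn);
            update.setInt(2, itemId);
            update.executeUpdate();
        }
    }
}
```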
Whether you take this approach depends on your application's priorities. This type of structure comes at the cost of additional development and lookup overhead. Plus, if every individual code comes with its own business rules, then it's not practical to create one new column per required code.
But, it may be worthwhile if you don't want to worry about synchronizing your application with the contents of a code table.