Disable removal of records from a standard table through a maintenance view - abap

There is a standard table T513 and a customer master data table T7SK13, which is modified by the customer using maintenance view V_T7SK13; the view joins the two tables.
Is there a way to disable removal of records from the international table T513, while still allowing new records to be added there?
I can remove the delete button from the maintenance view using the excl_cua_funct parameter of function module view_maintenance_call, but this would also disable removal of records from the customer table T7SK13, which still needs to work.

You may find this useful: you can add code that the generated maintenance views call at particular events, such as before delete (event 03).
See the view V_TVIMF.
You can use the modification assistant to add a form routine, or add it using implicit enhancements, if that makes people feel better. ;)
In SM30, maintain V_TVIMF and add an entry for V_T7SK13.

Expert suggestion required in T-SQL

I have a scenario where Table A has a reference to Table B, Table B has a reference to a Table C column, and so on.
To implement an update task in my project I have to implement it with two-phase logic, i.e. delete the row first and then add the latest version again.
Unfortunately, when I try to delete a row in Table A, it has a reference which in turn has references to other tables, and so on. Hence my delete-and-add logic does not always work properly. Even when the row is deleted and added again, it is added at the end, i.e. as a new record, so I lose the original order of the earlier references.
I would therefore like to delete a row from a table without affecting the references, i.e. temporarily ignore the references, and once I have added the row again (i.e. updated the record) re-enforce/re-enable the references.
Is it possible to do this? Is there any other logic that works in a similar fashion or achieves the original intention? Could anyone please provide expert advice on this?
Also, how does the general logic of a Windows service pack work? Can anyone elaborate on that, or share some info, docs, or blogs on the subject?
Thank you so much.
Regards,
Shyam
What you want to do is bad practice; I would rethink your design. The database doesn't let you delete the parent record because there are child records. That is what the database is supposed to do, and trying to circumvent it is a 100% guarantee of bad data.
If what you are trying to accomplish is to move the child records to a new parent, that can be done, but you add the new record first and then make the updates. It is best if you have some field that identifies which old record it used to be associated with, or a mapping table, to help make many changes. Then you would need to run updates for every child table. This kind of thing should be a one-time change, not a regular practice. It should virtually never happen from the application and should only be done by a qualified database developer.
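A rough T-SQL sketch of that add-first-then-update idea; the table and column names (ParentTable, ChildTableA, ChildTableB, ParentID, OldParentID) are made up for illustration and assume an identity primary key on the parent table:

    BEGIN TRANSACTION;

    -- 1. Create the replacement parent first, keeping a pointer to the record it replaces.
    INSERT INTO dbo.ParentTable (ParentName, OldParentID)
    VALUES ('New parent', 42);

    DECLARE @NewParentID int = SCOPE_IDENTITY();

    -- 2. Re-point every child table at the new parent.
    UPDATE dbo.ChildTableA SET ParentID = @NewParentID WHERE ParentID = 42;
    UPDATE dbo.ChildTableB SET ParentID = @NewParentID WHERE ParentID = 42;

    COMMIT TRANSACTION;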
If what you are trying to accomplish is to inactivate the parent so it can no longer be used for some purposes (such as creating new orders) while leaving the details in place for reporting (you wouldn't want to lose the financials for old orders), then you should put an active flag on the table and use that to filter records instead. Often this means creating a view of only the active records and pointing the code to the view instead of directly to the table.
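A minimal sketch of that active-flag-plus-view approach in T-SQL (dbo.Customer and its columns are illustrative):

    ALTER TABLE dbo.Customer ADD IsActive bit NOT NULL DEFAULT 1;

    -- Deactivate instead of delete.
    UPDATE dbo.Customer SET IsActive = 0 WHERE CustomerID = 42;
    GO

    -- Point application code at a view of active rows only.
    CREATE VIEW dbo.ActiveCustomer
    AS
    SELECT CustomerID, CustomerName
    FROM dbo.Customer
    WHERE IsActive = 1;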

What is the best method of logging data changes and user activity in an SQL database?

I'm starting a new application and was wondering what the best method of logging is. Some tables in the database will need to have every change recorded, and the user that made the change. Other tables may just need to have the last modified time recorded.
In previous applications I've used different methods to do this but want to hear what others have done.
I've tried the following:
Add a "modified" date-time field to the table to record the last time it was edited.
Add a secondary table just for recording changes in a primary table. Each row in the secondary table represents a changed field in the primary table. So one record update in the primary could create several records in the secondary table.
Add a table similar to no.2 but it records edits across three or fours tables, reference the table it relates to in an additional field.
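A rough sketch of what the secondary table in option 2 might look like (T-SQL flavoured; CustomerChange and its columns are illustrative names, not from the question):

    CREATE TABLE CustomerChange (
        ChangeID   int IDENTITY(1,1) PRIMARY KEY,
        CustomerID int NOT NULL,            -- points back at the row in the primary table
        FieldName  nvarchar(128) NOT NULL,  -- one row per changed field
        OldValue   nvarchar(max) NULL,
        NewValue   nvarchar(max) NULL,
        ChangedBy  nvarchar(128) NOT NULL,
        ChangedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );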
What methods do you use and would recommend?
Also, what is the best way to record deleted data? I never like the idea that a user can permanently delete a record from the DB, so usually I have a boolean 'deleted' field which is changed to true when the record is deleted, and the record is then filtered out of all queries at the model level. Any other suggestions on this?
Last one: what is the best method for recording user activity? At the moment I have a table which records logins/logouts/password changes etc., and depending on what the action is, gives it a code of 1, 2, 3, etc.
Hope I haven't crammed too much into this question. Thanks.
I know it's a very old question, but I wanted to add a more detailed answer, as this is the first link I got when googling about DB logging.
There are basically two ways to log data changes:
on the application server layer
on the database layer.
If you can, just use logging on the server side. It is much clearer and more flexible.
If you need to log on the database layer you can use triggers, as @StanislavL said. But triggers can slow down your database performance and limit you to storing the change log in the same database.
Also, you can look at transaction log monitoring.
For example, in PostgreSQL you can use the logical replication mechanism to stream changes in JSON format from your database to anywhere.
In a separate service you can receive, handle and log the changes in any form and in any database (for example, just put the JSON you get into Mongo).
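A minimal PostgreSQL sketch of that idea, assuming the wal2json logical-decoding plugin is installed and wal_level is set to logical (the slot name audit_slot is made up):

    -- Create a logical replication slot that decodes changes to JSON.
    SELECT * FROM pg_create_logical_replication_slot('audit_slot', 'wal2json');

    -- Each call returns the pending INSERT/UPDATE/DELETE changes as JSON documents,
    -- which a separate service could consume and store elsewhere (e.g. in Mongo).
    SELECT data FROM pg_logical_slot_peek_changes('audit_slot', NULL, NULL);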
You can add triggers to any tracked table to listen for insert/update/delete. In the triggers, just check the NEW and OLD values and write them to a special table with the columns:
table_name
entity_id
modification_time
previous_value
new_value
user
It's hard to figure out which user made the change, but it is possible if you add a changed_by column to the table you listen to.
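A minimal sketch of that trigger idea in PostgreSQL; the customer table, its id column, and the changed_by column are assumptions for illustration:

    CREATE TABLE change_log (
        table_name        text,
        entity_id         text,
        modification_time timestamptz DEFAULT now(),
        previous_value    jsonb,
        new_value         jsonb,
        changed_by        text
    );

    CREATE FUNCTION log_customer_changes() RETURNS trigger AS $$
    BEGIN
        INSERT INTO change_log (table_name, entity_id, previous_value, new_value, changed_by)
        VALUES (TG_TABLE_NAME, OLD.id::text, to_jsonb(OLD), to_jsonb(NEW), NEW.changed_by);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_customer_audit
    AFTER UPDATE ON customer
    FOR EACH ROW EXECUTE FUNCTION log_customer_changes();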

Access 2003 - Create and Delete Many-To-Many associations

I need to develop a front end to an MSSQL database just to modify a few tables. I decided to use Access 2003 simply because of time constraints.
I used Linked Tables over ODBC to get them into Access. I'm designing the forms, but I'm having problems creating an interface that allows users to create and delete associations between entities.
My Database structure is:
product
# productcode
- name
product_part
* productcode
* partnumber
- position
part
# partnumber
- comment
There is a many-to-many relationship between product and part (a product can have many parts and a part can belong to many products), except I can't find any easy way to let a user simply associate a new part with a product; I can only view the existing associations.
I've defined the relationships in Access, except the options for cardinality and referential integrity are greyed out; I'm assuming this is because they're linked tables? Not sure if this affects anything.
I created a form for product with an embedded subform which lists all the associated parts and their position (position is an attribute of the relationship since it's contextual, but I can spin this out into its own table if it'll make things easier).
Basically I need a user interface mechanism that will associate a selected part from a list with the shown product, or any other way to create new and delete existing associations flexibly. I would have thought Access would have something in a wizard somewhere to do this, but if it does I can't find it.
Any help would be appreciated.
Judging by what you've noted so far, this should be a simple matter of basing the main form on your topmost table (product). The continuous subform should then be based ONLY on the product_part table.
If you think about it, the third table is really only a lookup table, there for your convenience so you don't have to type in the part number manually.
So base the child subform as a continuous form, and make the part number column a combo box that looks up the part numbers from the third table (part). This combo box can search and display by description, but will in fact automatically store the part number in that column for you.
So there's no need for any kind of wizard, and you certainly do not have to write any code whatsoever. Just ensure that the master/child link settings for the subform are set up correctly, and Access will then insert and maintain the product code column used to link back to the main product table. You can certainly use the combo box wizard to create the combo box in the continuous subform that you're going to use to select the part and set the part number column from the parts table.
The result will be a form that allows you to add new part assemblies or edit existing ones. While Access will maintain the product code column for you, if you delete a main record you'll need to have set up referential integrity and cascade deletes on the back-end database. So, as you correctly note, all the integrity features are set up in the database back end, not in the Access front end.
I've discovered that what I wanted to do isn't easily possible using linked tables. I was able to do what I wanted easily when I used native Access tables (since that let me properly define the relationships), but I couldn't do it with linked tables.

Db design for data update approval

I'm working on a project where we need to have data entered or updated by some users go through a pending status before being added into 'live data'.
Whilst preparing the data, the user can save incomplete records. Whilst the data is in the pending status we don't want it to affect the rules imposed on users editing the live data, e.g. a user working on the live data should not run up against a unique constraint when entering the same data that is already in the pending status.
I envisage that sets of data updates will be grouped into a 'data submission', and the data will be re-validated and corrected/rejected/approved when someone quality-controls the submission.
I've thought about two scenarios with regard to storing the data:
1) Keeping the pending-status data in the same table as the live data, but adding a flag to indicate its status. I can see issues here with having to remove constraints or make required fields nullable to support the 'incomplete' data. Then there is the issue of how to handle updating existing data: you would have to add a new row for the update and link it back to the existing 'live' row. This seems a bit messy to me.
2) Add new tables that mirror the live tables and store the data there until it has been approved. This would allow me to keep full control over the existing live tables, while the 'pending' tables can be abused with whatever the user feels like putting in there. The downside is that I will end up with a lot of extra tables/SPs in the db. Another issue I was thinking about is how a user might link between two records, where the record being linked to might be in the live table or in the pending table; but I suppose in this situation you could always take a copy of the linked record and treat it as an update?
Neither solutions seem perfect, but the second one seems like the better option to me - is there a third solution?
Your option 2 very much sounds like the best idea. If you want to use referential integrity and all the nice things you get with a DBMS, you can't have the pending data in the same table. But there is no need for the pending data to be unstructured; it is still structured, and presumably you want the db to play its part in enforcing rules even on this data. Even if you didn't, pending data fits well into a standard table structure.
A separate set of tables sounds like the right answer. You can bring the primary key of the row being changed into the pending table so you know which item is being edited, or which item is being linked to.
I don't know your situation exactly, so this might not be appropriate, but an idea would be to have a separate table for storing the batch of edits being made, because then you can quality-control a batch, or submit a batch to live. Each pending table could have a batch key so you know which batch it is part of. You'll have to find a way to control multiple pending edits to the same rows (if you want to), but that doesn't seem too tricky a problem to solve.
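A rough T-SQL sketch of the separate pending tables with a batch key; DataSubmission, Customer_Pending and the columns shown are illustrative assumptions, not from the question:

    CREATE TABLE dbo.DataSubmission (          -- one row per batch / 'data submission'
        SubmissionID int IDENTITY(1,1) PRIMARY KEY,
        SubmittedBy  nvarchar(128) NOT NULL,
        SubmittedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        Status       varchar(20) NOT NULL DEFAULT 'Pending'  -- Pending / Approved / Rejected
    );

    CREATE TABLE dbo.Customer_Pending (        -- mirrors the live dbo.Customer table
        PendingID    int IDENTITY(1,1) PRIMARY KEY,
        SubmissionID int NOT NULL REFERENCES dbo.DataSubmission (SubmissionID),
        CustomerID   int NULL,                 -- PK of the live row being changed; NULL for new rows
        CustomerName nvarchar(200) NULL,       -- nullable so incomplete records can be saved
        Email        nvarchar(200) NULL
    );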
I'm not sure if this fits but it might be worth looking into 'Master Data Management' tools such as SQL Server's Master Data Services.
'Unit of work' is a good name for 'data submission'.
You could serialize it to a different place, like a (non-relational) document-oriented database, and only save it to the relational DB on approval.
It depends on how many of the live data constraints still need to apply to the unapproved data.
I think the second option is better. To manage this, you can use a view that contains both tables, and work with this structure through the view.
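A minimal sketch of such a view in T-SQL (the table and column names are illustrative):

    CREATE VIEW dbo.CustomerCombined
    AS
    SELECT CustomerID, CustomerName, 'Live'    AS RecordStatus FROM dbo.Customer
    UNION ALL
    SELECT CustomerID, CustomerName, 'Pending' AS RecordStatus FROM dbo.Customer_Pending;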
Another good approach is to use an XML column in a separate table to store the necessary data (because of the unknown quantity/names of columns). You can create just one table with an XML column and a "Type" column to determine which table the document relates to.
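A sketch of that single-table XML idea, again with made-up names:

    CREATE TABLE dbo.PendingDocument (
        PendingID int IDENTITY(1,1) PRIMARY KEY,
        [Type]    varchar(50) NOT NULL,   -- which live table this document relates to
        Payload   xml NOT NULL
    );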
The first scenario seems good.
Add a Status column to the table. There is no need to remove the nullability constraints; just add a function that checks the required fields based on the flag, e.g. if the flag is 1 (incomplete) NULL is allowed, otherwise it is not.
Regarding the second doubt: do you want to append the data or update the whole data?

SQL - Table Design - DateCreated and DateUpdated columns

For my application there are several entity classes: User, Customer, Post, and so on.
I'm about to design the database and I want to store the date when the entities were created and updated. This is where it gets tricky. Sure, one option is to add created_timestamp and update_timestamp columns to each of the entity tables, but isn't that redundant?
Another possibility could be to create a log table that stores this information, and it could be made to keep track of updates for any entity.
Any thoughts? I'm leaning towards implementing the latter.
The single-log-table-for-all-tables approach has two main problems that I can think of:
The design of the log table will (probably) constrain the design of all the other tables. Most likely the log table would have one column named TableName and then another column named PKValue (which would store the primary key value for the record you're logging). If some of your tables have compound primary keys (i.e. more than one column), then the design of your log table would have to account for this (probably by having columns like PKValue1, PKValue2 etc.).
If this is a web application of some sort, then the user identity that would be available from a trigger would be the application's account, instead of the ID of the web app user (which is most likely what you really want to store in your CreatedBy field). This would only help you distinguish between records created by your web app code and records created otherwise.
CreatedDate and ModifiedDate columns aren't redundant just because they're defined in each table. I would stick with that approach and put insert and update triggers on each table to populate those columns. If I also needed to record the end-user who made the change, I would skip the triggers and populate the timestamp and user fields from my application code.
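A minimal T-SQL sketch of that per-table trigger approach (dbo.Customer and its columns are illustrative):

    CREATE TABLE dbo.Customer (
        CustomerID   int IDENTITY(1,1) PRIMARY KEY,
        CustomerName nvarchar(200) NOT NULL,
        CreatedDate  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        ModifiedDate datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    -- Keep ModifiedDate current on every update.
    CREATE TRIGGER dbo.trg_Customer_Modified
    ON dbo.Customer
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE c
        SET    ModifiedDate = SYSUTCDATETIME()
        FROM   dbo.Customer AS c
        JOIN   inserted AS i ON i.CustomerID = c.CustomerID;
    END;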
I do the latter, with a "log" or "events" table. In my experience, the "updated" timestamp becomes frustrating pretty quickly, because a lot of the time you find yourself in a fix where you want more than just the very latest update time.
How often will you need to include the created/updated timestamps in your presentation layer? If the answer is anything more than "once in a great great while", I think you would be better served by having those columns in each table.
On a project I worked on a couple of years ago, we implemented triggers which updated what we called an audit table (it stored basic information about the changes being made, one audit table per table). This included modified date (and last modified).
They were only applied to key tables (not joins or reference data tables).
This removed a lot of the normal frustration of having to account for LastCreated & LastModified fields, but introduced the annoyance of keeping the triggers up to date.
In the end the trigger/audit table design worked well and all we had to remember was to remove and reapply the triggers before ETL(!).
It's for a web-based CMS I work on. The creation and last-updated dates will be displayed on most pages, and there will be lists of the last created (and updated) pages. The admin interface will also use this information.