I am experimenting with BDC.
I am inserting data for TCODE: MM01
I want to find out which tables my BDC inserts data into for TCODE MM01.
In other words, if I have TCODE MM01, how do I know all the tables this TCODE inserts data into?
I know MM01 enters data into MARA table.
Here is how I am checking for this:
I check table TSTC and find that TCODE MM01 uses program SAPMMG01.
However, SAPMMG01 enters data into table RMMG1, not MARA as I expected.
Here are the screenshots.
If you want to know which tables are involved in this process, I recommend using transaction ST05 to trace it: start the trace, run MM01 and save your material, then stop the trace and check what happened. With that information you can be sure which tables are involved in this process.
Check this link about transaction ST05 for more information.
BDC is a high-level abstraction used in SAP to record the steps needed to automate transactions, so at this level you are more likely to see the GUI elements and structures that are used to achieve the task.
By the way, 'RMMG1' is a structure, not a table.
Hope it helps.
I'm having some issues due to inexperience with SQL Server.
The issue/problem I'm facing right now is that I have data that needs to be inserted into a master table, and my database schema is based on the snowflake schema.
I have issues understanding what to do when adding data to the master table. My slave tables (I have more than 6 of them), which mostly contain foreign keys, should be updated, and if the data doesn't exist yet, the IDs (which are incremented in the master table) should be inserted into the slave tables too.
I thought maybe one solution could be using triggers. Can someone give me some hints?
Yeah. RTFM. And grab some books about data warehouse design. Particular focus on data load scenarios.
In my experience, the general consensus for non-trivial scenarios is NOT to do this automatically but through a multi-step load process.
You generally load the data into temporary tables (staging tables), then a processing step runs and fills in the missing information in the related tables FIRST, and only then does it insert into the fact table (the central table in a snowflake) last.
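A minimal T-SQL sketch of that order of operations, with one staging table, one dimension ("slave") table and one fact table. All names and columns here are invented for illustration, not taken from your schema:

-- Hypothetical dimension ("slave") table with an auto-generated key.
CREATE TABLE dim_country (
    country_id   INT IDENTITY(1,1) PRIMARY KEY,
    country_name VARCHAR(100) NOT NULL UNIQUE
);

-- Hypothetical fact table (the central table) referencing the dimension.
CREATE TABLE fact_sales (
    sale_id    INT IDENTITY(1,1) PRIMARY KEY,
    country_id INT NOT NULL REFERENCES dim_country (country_id),
    amount     DECIMAL(10,2) NOT NULL
);

-- 1. The raw import lands in a staging table first.
CREATE TABLE stg_sales (
    country_name VARCHAR(100),
    amount       DECIMAL(10,2)
);

-- 2. Fill in the missing dimension rows FIRST, so every incoming value gets an ID.
INSERT INTO dim_country (country_name)
SELECT DISTINCT s.country_name
FROM stg_sales AS s
WHERE NOT EXISTS (SELECT 1 FROM dim_country AS d
                  WHERE d.country_name = s.country_name);

-- 3. Only then insert into the fact table, looking the IDs up with a join.
INSERT INTO fact_sales (country_id, amount)
SELECT d.country_id, s.amount
FROM stg_sales AS s
JOIN dim_country AS d ON d.country_name = s.country_name;

Note that no trigger or cascade on the fact table could do step 2 for you: the dimension rows and their columns have to exist before the fact insert can reference them.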
The fundamental problem you have is that it is not possible to insert magic data into a related slave table (as you call it), because that data is not there in the insert statement. I.e. you must have the proper ID in the insert AND you must actually know what other fields to fill out in the slave table. Automatic cascades will not allow the latter at all, making them a fundamental dead end.
And this is not a SQL Server-specific issue; we are talking about fundamental concepts of database work here.
So, I have read that using internal tables increases the performance of the program and that we should perform as few operations on DB tables as possible. But I have started working on a project that does not use internal tables at all.
Some details:
It is a scanner that adds or removes products in/from a store. First the primary key is checked (to see if that type of product exists) and then the product is added or removed. We use ‘Insert Into’ and ‘Delete From’ to add/remove the products directly from the DB table.
I have not asked why they do not use internal tables because I do not have a better solution so far.
Here’s what I have so far: insert all products into an internal table and place the deleted products in another internal table.
FORM update.
  " Add all new products to the database table
  MODIFY zop_db_table FROM TABLE gt_table.

  " Delete the removed products from the database table
  LOOP AT gt_deleted INTO gs_deleted.
    DELETE FROM zop_db_table WHERE index_nr = gs_deleted-index_nr.
  ENDLOOP.
ENDFORM.
But when can I perform this update?
I could add a ‘Save’ button to perform the update, but then there is the risk that the user forgets to save large amounts of data, or drops the scanner and shuts it down, or similar situations. So this is clearly not a good solution.
My final question is: Is there a (good) way to implement internal tables in a project like this?
Internal tables should be used for data processing, like lists or arrays in other languages (C#, Java, ...). From a performance and system-load perspective it is preferable to first load all the data you need into an internal table and then process that internal table, instead of reading individual records from the database.
But that is mostly true for reporting, which is probably the most common type of custom ABAP program. You often see developers use SELECT...ENDSELECT statements, which in effect loop over a database table, transferring row after row to the report, one at a time. That is extremely slow compared to reading all records at once into an itab and then looping over the itab. More than once I've cut the execution time of a report down to a fraction just by eliminating round trips to the database.
If you have a good reason to read from the database or update records immediately, you should do so. If you can safely delay updates and deletes to a point in time where you can process all of them together, without risking inconsistencies, I'd consider that an improvement. But if there is a good reason (like consistency or data loss) to update immediately, do it.
Update: as @vwegert mentioned regarding the SELECT...ENDSELECT statement, the statement doesn't actually create individual database queries for each row. The database interface of the application server optimizes the query, transferring rows in bulk to the application server. From there the records are transported to the ABAP report one by one (because the report only has a work area to store a single row), which has a significant performance impact, especially for queries with large result sets. A SELECT into an internal table can transport all rows directly to the ABAP report (as long as there is enough memory to hold them), since the report now has an internal table to hold those records.
I have a few questions for the SQL gurus in here...
Briefly, this is an ads management system where a user can define campaigns for different countries, categories and languages. I have a few questions in mind, so help me with whatever you can.
Generally I'm using ASP.NET, and I want to cache the whole result set of a certain user once he asks for statistics for the first time; this way I will avoid large round trips to the server.
Any help is welcome.
Click here for a diagram with all the details you need for my questions.
1. The main issue of this application is showing the user how many clicks/impressions there were and how much money he spent on a campaign. What is the easiest way to get this information for him? I will also include filtering by date, date ranges and a few other parameters in this statistics table.
2. Another issue is what happens when the user tries to edit a campaign. The old campaign will die; this means that if the user set $0.01 as the campaignPPU (pay-per-unit) and the next day updates it to $0.05, everything will be reset to $0.05.
3. If you could redesign some parts of the table design so it would be more flexible and easier to modify, how would you do it?
Thanks... sorry it's such a large job, but it may interest some SQL guys in here.
For #1, you might want to use a series of views to show interesting statistics. If you want to cache results, you could store the results to a reporting table that only gets refreshed every n hours (maybe up to 3 or 4 times a day? I don't know what would be suitable for this scenario).
Once all the data is in a report table, you can better index it for filtering, and since it will be purged and re-populated on a schedule, it should be faster to access.
Note that this will only work if populating the stats table does not take too long (you will have to be the judge of how long is "too long").
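A rough T-SQL sketch of that kind of scheduled refresh; the table and column names are only guesses, since I don't know your actual schema:

-- Hypothetical report table, created once, then purged and re-populated on a schedule.
CREATE TABLE rpt_campaign_stats (
    campaign_id  INT           NOT NULL,
    stat_date    DATE          NOT NULL,
    clicks       INT           NOT NULL,
    impressions  INT           NOT NULL,
    amount_spent DECIMAL(12,2) NOT NULL
);

-- Index for the date/campaign filtering the user will do.
CREATE INDEX IX_rpt_campaign_stats ON rpt_campaign_stats (stat_date, campaign_id);

-- The part a SQL Agent job (or similar scheduler) would run every few hours:
TRUNCATE TABLE rpt_campaign_stats;

INSERT INTO rpt_campaign_stats (campaign_id, stat_date, clicks, impressions, amount_spent)
SELECT e.campaign_id,
       CAST(e.event_date AS DATE),
       SUM(CASE WHEN e.event_type = 'click'      THEN 1 ELSE 0 END),
       SUM(CASE WHEN e.event_type = 'impression' THEN 1 ELSE 0 END),
       SUM(e.cost)
FROM dbo.campaign_events AS e   -- hypothetical raw clicks/impressions table
GROUP BY e.campaign_id, CAST(e.event_date AS DATE);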
For #2, it sounds like an underlying design issue. What do you mean by "edit"? Does this edit operation destroy the old campaign and create a new "clone" campaign (that is obviously not a perfect clone, or there wouldn't be a problem)? Is there historical data that is important but getting orphaned or deleted by the edit? You might want to analyze this "edit" process and see if you need to add history tracking to some of these tables. Maybe a simple datetime added to old records, or a separate "history" table (or tables) that mirrors the structure of the tables being modified by the "edit" operation.
For #3, It looks alright, but I'm only seeing a sliver of the system. I don't know how the rest of the app is designed...
Eugene,
If you plan to keep the edited campaigns around, consider not removing them. Instead, make the campaigns date-sensitive. For example, UserA started campaign1 on 1/1/2010 and ended it on 2/1/2010, then started campaign2 on 2/2/2010.
Or, if you don't like the notion of end-dating your campaigns, you could consider a history table for your campaigns: basically the same table structure, but with an added UniqueIdentifier to make rows unique.
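A small sketch of the history-table idea; the Campaign columns are invented, since I only see a sliver of your schema:

-- Live table: one current row per campaign.
CREATE TABLE Campaign (
    CampaignId  INT           PRIMARY KEY,
    Name        NVARCHAR(100) NOT NULL,
    CampaignPPU DECIMAL(10,2) NOT NULL
);

-- History: the same structure plus a UniqueIdentifier and a change timestamp.
CREATE TABLE CampaignHistory (
    HistoryId   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    CampaignId  INT              NOT NULL,
    Name        NVARCHAR(100)    NOT NULL,
    CampaignPPU DECIMAL(10,2)    NOT NULL,
    ChangedAt   DATETIME         NOT NULL DEFAULT GETDATE()
);

-- Before an edit, copy the current version into the history table...
INSERT INTO CampaignHistory (CampaignId, Name, CampaignPPU)
SELECT CampaignId, Name, CampaignPPU
FROM Campaign
WHERE CampaignId = 42;          -- placeholder id of the campaign being edited

-- ...then apply the edit to the live row.
UPDATE Campaign
SET CampaignPPU = 0.05
WHERE CampaignId = 42;

That way, clicks recorded before the edit can still be reported against the PPU that was in effect at the time.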
I should also note that the estimated size of this campaign table and its related tables is important to your design. If you expect to have only thousands of rows, keeping old and current records in one table isn't a problem. However, if you plan to have millions or more, then you may want to separate the old from the new for faster queries, or properly plan statistics and indices on the fields you know will need to be filtered on. Also remember that indices are useful for reads, but they slow down writes.
My employer has developed a utility that will run a stored procedure line by line against a DataTable, passing the fields of each row as parameters into the Stored Procedure. This is particularly useful for automated imports.
However, I now need to extend this to provide a transactionalized version, so that we can see the potential results of running the utility as a summary of what changes it would make to the database. This could be something like '3 rows inserted into Customer table' or '5 rows amended in Orders table'. The user could then decide whether to go ahead with the real import.
I know triggers could be set up on tables, however I'm not sure this would be possible in this case as all the tables referenced by the stored procedure would not be known.
Is there any other way of viewing changes made during a transaction, or does anyone have any other suggestions on how I could achieve this?
Many thanks.
Edited based on feedback and re-reading the question:
I agree with Remus in that no serious importer of data would want to visually inspect the data as it gets imported into the system.
As an ETL Writer, I would expect to do this in my staging area, and run queries that validate my data before it gets imported into the actual production place.
You could also get into issues with resources, deadlocks and blocks by implementing functionality that "holds" transactions until visually OK'ed by someone.
You snapshot the current LSN, run your 'line by line' procedure in a transaction, then use fn_dblog to read back the log after the LSN you snapshotted. The changes made are the records in the log that are stamped with the current transaction id. The wrapper transaction can be rolled back. Of course this will only work with an import of 3 rows into Customer and 5 rows into Orders; no serious employer would consider doing something like this on a real-sized import job. Imagine importing 1 million Orders just to count them, then rolling back...
This will not work with any arbitrary procedure, though, as procedures often do their own transaction management and don't work as expected when invoked under a wrapping transaction.
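A rough sketch of that LSN/fn_dblog preview. fn_dblog is undocumented, so column names and behaviour can change between SQL Server versions, and the procedure name and parameters here are placeholders:

BEGIN TRANSACTION;

-- Remember where the log currently ends.
DECLARE @StartLsn NVARCHAR(46) =
    (SELECT MAX([Current LSN]) FROM fn_dblog(NULL, NULL));

-- Run the line-by-line import here (hypothetical procedure and parameters).
EXEC dbo.usp_ImportRow @CustomerName = N'Test', @OrderTotal = 10.00;

-- Summarize what the wrapped work wrote to the log, per table and operation.
SELECT AllocUnitName, Operation, COUNT(*) AS RowsAffected
FROM fn_dblog(NULL, NULL)
WHERE [Current LSN] > @StartLsn
  AND Operation IN ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS')
GROUP BY AllocUnitName, Operation;

-- Nothing is kept: this was only a preview.
ROLLBACK TRANSACTION;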
I'm trying to figure out what would be the best way to keep a history on a database, to track any insert/delete/update that is done. The history data will need to be coded into the front end, since it will be used by the users. Creating "history tables" (a copy of each table, used to store history) is not a good way to do this, since the data is spread across multiple tables.
At this point in time, my best idea is to create a few history tables which reflect the output I want to show to the users. Whenever a change is made to the specific tables, I would update these history tables with the data as well.
I'm trying to figure out what the best way to go about this would be. Any suggestions will be appreciated.
I am using Oracle + VB.NET
I have very successfully used a model where every table has an audit copy: the same table with a few additional fields (timestamp, user ID, operation type), and 3 triggers on the original table for insert/update/delete.
I think this is a very good way of handling this, because tables and triggers can be generated from a model and there is little overhead from a management perspective.
The application can use the tables to show an audit history to the user (read-only).
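A minimal Oracle sketch of that model, using a made-up CUSTOMER table and, for brevity, a single trigger covering all three operations (the model described above uses three separate triggers and would normally be generated from the data model rather than written by hand):

-- Hypothetical base table, for illustration only.
CREATE TABLE customer (
    customer_id NUMBER        PRIMARY KEY,
    name        VARCHAR2(100) NOT NULL
);

-- Audit copy: the same columns plus timestamp, user and operation type.
CREATE TABLE customer_audit (
    customer_id NUMBER,
    name        VARCHAR2(100),
    audit_ts    TIMESTAMP,
    audit_user  VARCHAR2(30),
    audit_op    VARCHAR2(1)    -- 'I', 'U' or 'D'
);

CREATE OR REPLACE TRIGGER customer_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON customer
FOR EACH ROW
BEGIN
    IF DELETING THEN
        INSERT INTO customer_audit VALUES (:OLD.customer_id, :OLD.name, SYSTIMESTAMP, USER, 'D');
    ELSIF UPDATING THEN
        INSERT INTO customer_audit VALUES (:NEW.customer_id, :NEW.name, SYSTIMESTAMP, USER, 'U');
    ELSE
        INSERT INTO customer_audit VALUES (:NEW.customer_id, :NEW.name, SYSTIMESTAMP, USER, 'I');
    END IF;
END;
/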
We've got that requirement in our systems. We added two tables, one header and one detail, called AuditRow and AuditField. AuditRow contains one row per row changed in any other table, and AuditField contains one row per changed column, with the old value and the new value.
We have a trigger on every table that writes a header row (AuditRow) and the needed detail rows (one per changed column) on each insert/update/delete. This system does rely on the fact that we have a GUID on every table that can uniquely represent the row. It doesn't have to be the "business" or "primary" key, but it's a unique identifier for that row, so we can identify it in the audit tables. Works like a champ. Overkill? Perhaps, but we've never had a problem with auditors. :-)
And yes, the Audit tables are by far the largest tables in the system.
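A rough Oracle-flavoured DDL sketch of that header/detail layout (the column names are illustrative, not our exact schema):

-- Header: one row per modified row in any audited table.
CREATE TABLE audit_row (
    audit_row_id   NUMBER        PRIMARY KEY,
    table_name     VARCHAR2(128) NOT NULL,
    row_guid       RAW(16)       NOT NULL,   -- the GUID that identifies the changed row
    operation_type VARCHAR2(1)   NOT NULL,   -- 'I', 'U' or 'D'
    changed_by     VARCHAR2(30)  NOT NULL,
    changed_at     TIMESTAMP     NOT NULL
);

-- Detail: one row per changed column, with old and new value.
CREATE TABLE audit_field (
    audit_row_id NUMBER         NOT NULL REFERENCES audit_row (audit_row_id),
    column_name  VARCHAR2(128)  NOT NULL,
    old_value    VARCHAR2(4000),
    new_value    VARCHAR2(4000)
);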
If you are lucky enough to be on Oracle 11g, you could also use the Flashback Data Archive.
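For example (the archive name, tablespace, retention and the customer table are placeholders):

-- Create an archive; Oracle then keeps row history for the retention period automatically.
CREATE FLASHBACK ARCHIVE audit_fda
    TABLESPACE users
    RETENTION 5 YEAR;

-- Attach the tables you want to track.
ALTER TABLE customer FLASHBACK ARCHIVE audit_fda;

-- Query what the table looked like in the past.
SELECT *
FROM customer AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);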
Personally, I would stay away from triggers. They can be a nightmare when it comes to debugging and not necessarily the best if you are looking to scale out.
If you are using a PL/SQL API to do the INSERTs/UPDATEs/DELETEs, you could manage this with a simple shift in design, without the need (up front) for history tables.
All you need are 2 extra columns, DATE_FROM and DATE_THRU. When a record is INSERTed, DATE_THRU is left NULL. If that record is UPDATEd or DELETEd, just "end date" the record by setting DATE_THRU to the current date/time (SYSDATE). Showing the history is as simple as selecting from the table; the one record where DATE_THRU is NULL will be your current or active record.
Now, if you expect a high volume of changes, writing off the old record to a history table would be preferable, but I still wouldn't manage it with triggers; I'd do it with the API.
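A minimal PL/SQL sketch of that end-dating approach, with an invented PRODUCT table and a single update procedure standing in for the API:

-- Hypothetical versioned table: the row where DATE_THRU IS NULL is the active one.
CREATE TABLE product (
    product_id NUMBER        NOT NULL,
    name       VARCHAR2(100) NOT NULL,
    price      NUMBER(10,2)  NOT NULL,
    date_from  DATE DEFAULT SYSDATE NOT NULL,
    date_thru  DATE
);

CREATE OR REPLACE PROCEDURE update_product (
    p_product_id IN NUMBER,
    p_name       IN VARCHAR2,
    p_price      IN NUMBER
) AS
BEGIN
    -- "End date" the currently active version instead of overwriting it.
    UPDATE product
       SET date_thru = SYSDATE
     WHERE product_id = p_product_id
       AND date_thru IS NULL;

    -- Insert the new active version.
    INSERT INTO product (product_id, name, price, date_from, date_thru)
    VALUES (p_product_id, p_name, p_price, SYSDATE, NULL);
END update_product;
/

A delete in this model is just the end-dating UPDATE without the following INSERT.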
Hope that helps.