Delete from c_t_data for BI extractor [duplicate] - abap

This question already exists:
Delete from c_t_data [closed]
I am trying to delete unwanted records in the extractor BADI logic. For performance reasons, I want to run the extractor for only one company code.
Can I use the statement below?
DELETE c_t_data WHERE bukrs NE t_cc.
t_cc is a variable that stores the company code.
c_t_data is typed with the extract structure of the 0FI_GL_14 DataSource.
Does this sound alright?

Unable to copy a table into a new table [closed]

I'm trying to clone a table's schema and data into a new table. This is what I'm doing:
SELECT * INTO 'ECO7053__settings' FROM base__settings
I keep getting an "Undeclared variable" error.
EDIT: To complete the question, is this even the correct approach? I need to store data for different users. Is it better to add a uID field to my tables and filter by that, or to do what I'm attempting and keep a separate, prefixed table for each user? In the example the uID would be 7053. What would be the correct way to handle this situation?
MySQL's SELECT ... INTO only targets variables (or an OUTFILE), so the target name is parsed as an undeclared variable. Instead, use
CREATE TABLE ECO7053__settings LIKE base__settings;
to copy the table schema and indices. Then
INSERT INTO ECO7053__settings SELECT * FROM base__settings;
to copy the data.
Use:
SHOW CREATE TABLE base__settings
Copy the CREATE statement and change base__settings to ECO7053__settings (do a search and replace).
Then run
INSERT INTO ECO7053__settings
SELECT * FROM base__settings
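As for the design question in the edit: a single table with a uID column is the more common approach; one table per user multiplies schema maintenance and forces dynamic table names into every query. A minimal sketch, with hypothetical table and column names:

-- One shared settings table; uID identifies the owning user.
CREATE TABLE user_settings (
    uID     INT          NOT NULL,
    setting VARCHAR(64)  NOT NULL,
    value   VARCHAR(255) NULL,
    PRIMARY KEY (uID, setting)
);

-- Fetch one user's settings by filtering, instead of querying a per-user table.
SELECT setting, value FROM user_settings WHERE uID = 7053;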

SQL Queries converted for the Firebase Database [duplicate]

This question already has answers here:
read only certain fields in firebase
(3 answers)
My question pertains to the Firebase video below; I have not received a response on YouTube yet, and was wondering if anyone here knows of a possible solution.
https://www.youtube.com/watch?v=sKFLI5FOOHs
In general, I want to know how to query selected items rather than all the data in a set.
Specific question: instead of pulling all entries in a child ref, how can we select specific entries (key and value) from the data in Firebase? I'm having bad luck nesting with orderByChild(), and I believe I would need the value event listener he began to mention.
For example (#2): SELECT Name FROM Users WHERE email = "alice@email.com" instead of SELECT * FROM Users WHERE email = "alice@email.com"
This is not possible with the client SDKs. When you receive a node, you will get all of its children every time.
If you need to limit the amount of data from a query, you will have to split the parent node into separate nodes that contain only the data for a desired query. Sometimes people duplicate data into different nodes just to create special queryable nodes for a specific purpose. This is fairly common to do with NoSQL type databases, and it's called data denormalization.
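For example, a hypothetical denormalized layout (node names are illustrative): alongside the full users node, a duplicate user_names node holds only the field you want to read, so fetching it returns just the names:

{
  "users": {
    "uid1": { "name": "Alice", "email": "alice@email.com" }
  },
  "user_names": {
    "uid1": "Alice"
  }
}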

Cross Checking a SQL server report [closed]

I have a report that runs daily, and I want to send its output to a CSV file. Due to the nature of the report, data can occasionally be lost: new data is generated while the job is executing, and because it is a lengthy job, some of that data is missed.
Is there a way to cross-check on a daily basis that no data from the previous day has been lost? Perhaps with a tick or cross at the end of each row to show whether the data has been exported to the CSV?
I am working with sensitive information so can't share any of the report details.
This is a fairly common question. Without specifics, it's very hard to give you a concrete answer - but here are a few solutions I've used in the past.
Typically, such reports have "grand total" lines - your widget report might be broken down by month, region, sales person, product type, etc., but you usually have a "total widgets sold" line. If that's a quick query (you may need to remove joins and other refinements), then running it after you've generated the report data lets you compare its result with the grand total in the report. If they differ, you know the data changed while the report was running.
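A minimal sketch of that reconciliation, with hypothetical table, column, and total values:

-- Grand total taken from the already-generated report.
DECLARE @reportTotal INT = 12345;

-- Recompute the total directly and compare.
SELECT CASE WHEN SUM(quantity) = @reportTotal
            THEN 'report consistent'
            ELSE 'data changed during the run'
       END AS check_result
FROM widget_sales;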
Another option - SQL Server specific - is to use a checksum over the data you're reporting on. If the checksum changes between the start and the end of the reporting run, you know you've had data changes.
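A sketch of the checksum approach (table name again hypothetical):

-- Run once before and once after the report; a different value
-- means the underlying rows changed mid-report.
SELECT CHECKSUM_AGG(CHECKSUM(*)) AS data_checksum
FROM widget_sales;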
Finally - and most dramatically - if the report's accuracy is critical, you can store the fact that a particular row was included in a reporting run. This makes your report much more complex, but it allows you to be clear that you've included all the data you need. For instance:
insert into reporting_history
select @reportID, widget_sales_id
from widget_sales
-- reporting logic here
select widgets.cost,
widget_sales.date,
widget_sales.price,
widget_sales......
from widgets inner join widget_sales on ...
inner join reporting_history on reporting_history.widget_sales_id = widget_sales.widget_sales_id
-- all your other logic

How to fetch the changed data in a database? [closed]

Using SQL, I need to fetch the local names from table A that have been changed in the past 30 days.
Do I need to create a backup of the table, or is there another method?
And if creating a backup is the only method, how do we compare the two and find the locally overridden names?
Table Details:
TREE_ID (NUMBER)
TREE_NM (VARCHAR2)
TREE_LEVEL (VARCHAR2)
UPLEVEL_ID (NUMBER)
HRCHY_TYPE (VARCHAR2)
CATG_ID (NUMBER)
SUBCATG_ID (NUMBER)
STATUS (VARCHAR2)
USER_ID (NUMBER)
CREATE_DATE (DATE)
EFFCT_START_DATE (DATE)
EFFCT_END_DATE (DATE)
UPDATED_DATE (DATE)
TOP_LEVEL_ID (NUMBER)
I need to generate a feed at the end of every month to fetch the changed TREE_NM.
I think there is no default operation in Oracle to do that. A possible workaround could be to add a new column to your table A in which you store the modification date, then define a BEFORE INSERT OR UPDATE trigger that simply writes the current date to every row that is inserted or updated.
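A minimal sketch of that workaround (the column name modification_date is illustrative):

ALTER TABLE table_a ADD (modification_date DATE);

CREATE OR REPLACE TRIGGER trg_table_a_mod_date
  BEFORE INSERT OR UPDATE ON table_a
  FOR EACH ROW
BEGIN
  -- stamp every inserted or updated row with the current date
  :NEW.modification_date := SYSDATE;
END;
/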
Hope this helps.
If you can't modify the table, this can't be done unless you can modify the apps that modify the table. If you can do the latter, create a second table with:
TreeID NUMBER (foreign key)
LastModifiedDate datetime
And write to this table every time the first table is modified. Then you can join the two tables together:
SELECT a.*
FROM TableA a
INNER JOIN Table2 t2 ON a.TreeID = t2.TreeID
WHERE t2.LastModifiedDate >= DATEADD(d, -30, GETDATE())
And that will return all records that were modified in the last 30 days.
If you can't modify the database OR the apps, then this is impossible with your current structure, so hopefully you have the ability to make some changes.
EDIT:
If historical changes are something that you will need to track for other purposes in the future, you should look into implementing a data warehouse (specifically, look into slowly changing dimensions).
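For reference, a minimal sketch of a type-2 slowly changing dimension for the name column (table and column names are hypothetical, Oracle syntax):

CREATE TABLE tree_nm_history (
    tree_id    NUMBER        NOT NULL,
    tree_nm    VARCHAR2(100) NOT NULL,
    valid_from DATE          NOT NULL,
    valid_to   DATE,          -- NULL means this is the current version
    PRIMARY KEY (tree_id, valid_from)
);

-- Names that changed in the last 30 days:
SELECT tree_id, tree_nm
FROM tree_nm_history
WHERE valid_from >= SYSDATE - 30;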
Second Edit:
I would seriously question why you're not allowed to add a field to this table. In SQL Server, you can add fields to tables without impacting the data or applications that access it. If I were you, I would push pretty hard to add the field to the table instead of creating a more complex and obfuscated database/application structure for no apparent reason.

SQL Server/Table Design, table for data snapshots where hundreds of columns possible [closed]

We have a business process that requires taking a "snapshot" of portions of a client's data at a point in time, and being able to regurgitate it later. The data set has some oddities though that make the problem interesting:
The data is pulled from several databases, some of which are not ours.
The list of fields that could possibly be pulled numbers somewhere between 150 and 200.
The list of fields that are typically pulled numbers somewhere between 10 and 20.
Each client can pull a custom set of fields for storage; this set is predetermined ahead of time.
For example (and I have vastly oversimplified these):
Client A decides on Fridays to take a snapshot of customer addresses (1 record per customer address).
Client B decides on alternate Tuesdays to take a snapshot of summary invoice information (1 record per type of invoice).
Client C monthly summarizes hours worked by each department (1 record per department).
When each of these periods happen, a process goes out and fetches the appropriate information for each of these clients... and does something with them.
Sounds like an historical reporting system, right? It kind of is. The data is later parsed up and regurgitated in a variety of formats (xml, csv, excel, text files, etc.) depending on the client's needs.
I get to rewrite this.
Since we don't own all of the databases, I can't just keep references to the data around. Some of that data is overwritten periodically anyway. I actually need to find the appropriate data and set it aside.
I'm hoping someone has a clever way of approaching the table design for such a beast. The methods that come to mind, all with their own drawbacks:
A dataset table (data set id, date captured, etc...);
A data table (data set id, row number, "data as a blob of crap")
A dataset table (data set id, date captured, etc....);
A data table (data set id, row number, possible field 1, possible field 2, possible field 3, ..., possible field x), where x > 150
A dataset table (data set id, date captured, etc...);
A field table (1 row per possible field type);
A selected field table (1 row for each field the client has selected);
One table for each primitive data type possible (varchar, decimal, integer), keyed on selected field, data set id, row, and position, where the data is the single field value.
The first is the easiest to implement, but the "blob of crap" would have to be engineered to be parseable so it can be broken down into reportable fields. It's not very database friendly either, not reportable, etc. It doesn't feel right.
The second is a horror show of columns. shudder
The third sounds right, but kind of doesn't. It's 3NF (yes, I'm old), so it feels right that way. However, reporting on the table screams of "rows that should have been columns" problems: it's fairly useless to try to select from outside of a program. A rough sketch of what I mean is below.
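A rough DDL sketch of that third design (SQL Server syntax; all names are illustrative):

-- One row per snapshot run.
CREATE TABLE dataset (
    dataset_id    INT IDENTITY PRIMARY KEY,
    date_captured DATETIME NOT NULL
);

-- One row per possible field, with its primitive type.
CREATE TABLE field (
    field_id   INT IDENTITY PRIMARY KEY,
    field_name VARCHAR(100) NOT NULL,
    field_type VARCHAR(10)  NOT NULL  -- 'varchar', 'decimal' or 'integer'
);

-- The fields each client has chosen to capture.
CREATE TABLE selected_field (
    selected_field_id INT IDENTITY PRIMARY KEY,
    client_id         INT NOT NULL,
    field_id          INT NOT NULL REFERENCES field(field_id)
);

-- One value table per primitive type; the varchar one is shown here.
CREATE TABLE data_varchar (
    selected_field_id INT NOT NULL REFERENCES selected_field(selected_field_id),
    dataset_id        INT NOT NULL REFERENCES dataset(dataset_id),
    row_num           INT NOT NULL,
    value             VARCHAR(MAX) NULL,
    PRIMARY KEY (dataset_id, row_num, selected_field_id)
);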
What are your thoughts?
RE: "where hundreds of columns possible"
The limit is 1024 columns per non-wide table:
http://msdn.microsoft.com/en-us/library/ms143432.aspx