ABAP CURSOR FETCH with multiple internal tables - abap

I have a cursor, shown below:
OPEN CURSOR WITH HOLD s_cursor FOR
  SELECT * FROM (QUERY_TABLE) WHERE (OPTIONS).
DO.
  FETCH NEXT CURSOR s_cursor APPENDING TABLE <lt_itab> PACKAGE SIZE 100000.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
ENDDO.
Since I have huge data in the table, I need to split the data into multiple internal tables.
What I want here is to pass a different internal table to each FETCH.
I also need to create multiple internal tables with the same structure.
The naming of the internal tables will be like below:
CONCATENATE 'Lt_ITAB' count INTO intname.
I should be able to create the internal tables from the variable intname.
Kindly provide some sample code or logic.
Thanks in advance
S Sukumar

DATA tables TYPE STANDARD TABLE OF table_type WITH EMPTY KEY.

OPEN CURSOR WITH HOLD cursor FOR
  SELECT *
    FROM (db_table)
    WHERE (conditions).

WHILE sy-subrc = 0.
  " Append a fresh, empty package to fill with the next FETCH.
  INSERT INITIAL LINE INTO TABLE tables ASSIGNING FIELD-SYMBOL(<target_table>).
  FETCH NEXT CURSOR cursor APPENDING TABLE <target_table> PACKAGE SIZE 100000.
ENDWHILE.
" The final, failed FETCH leaves an empty package at the end of TABLES;
" remove it if that matters.
Note that this doesn't make sense. As Sandra Rossi points out, the idea of FETCH NEXT CURSOR is that the database holds much more data than ABAP's main memory can. You want to retrieve all of that data into main memory regardless, so if your data set is really huge, you will run out of memory no matter how it is packaged.
If, on the other hand, your data set is small enough to fit into ABAP's main memory, you should instead load it in one go and apply suitable packaging afterwards, as in:
SELECT * FROM (db_table) WHERE (conditions) INTO TABLE @all_data.
DATA(tables) = split_data_in_packages( data = all_data package_size = 100000 ).
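Note that split_data_in_packages is not a standard API but a hypothetical helper. One possible shape for it, assuming RESULT is a table of tables with the same row type as DATA, might be:
METHOD split_data_in_packages.
  " Slices DATA into packages of PACKAGE_SIZE rows each.
  DATA(total)  = lines( data ).
  DATA(offset) = 0.
  WHILE offset < total.
    " Append an empty package, then copy the next slice of rows into it.
    INSERT INITIAL LINE INTO TABLE result ASSIGNING FIELD-SYMBOL(<package>).
    DATA(from_idx) = offset + 1.
    DATA(to_idx)   = offset + package_size.
    LOOP AT data ASSIGNING FIELD-SYMBOL(<row>) FROM from_idx TO to_idx.
      INSERT <row> INTO TABLE <package>.
    ENDLOOP.
    offset = to_idx.
  ENDWHILE.
ENDMETHOD.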
Also note that there is no way to create a set of lt_itab<count> variables dynamically. My solution creates a table of tables instead, which can then be addressed conveniently with the [] index accessor as in lt_itab[ <count> ].
This answer focuses on the packaging aspect. There is another aspect in your question, with (db_table) being dynamic and you probably not being aware of the actual table type until runtime. For these cases, you can refer to ABAP's Run-Time Type Interfaces (RTTI) and the CREATE DATA statement to determine and instantiate suitable data types.
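For instance, when the question's variable query_table holds the name of a DDIC table at runtime, a minimal sketch could look like this:
DATA dref TYPE REF TO data.
FIELD-SYMBOLS <lt_data> TYPE STANDARD TABLE.

" Create an internal table whose row type is resolved at runtime;
" RTTI (cl_abap_tabledescr and friends) allows finer control if needed.
CREATE DATA dref TYPE STANDARD TABLE OF (query_table).
ASSIGN dref->* TO <lt_data>.
" <lt_data> can now serve as a generic FETCH target.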

Related

Creating a Table Using Previous Values (Iterative Process)

I'm completely new to Visual FoxPro (9.0) and I was having trouble with creating a table which uses previous values to generate new values. What I mean by this is: I have a given table with two columns, age and probability of death. Using this, I need to create a survival table with the columns Age, l(x), d(x), q(x), m(x), L(x), T(x), and e(x), where:
l(x): Survivorship Function; defined as l(x+1) = l(x) * EXP(-m(x))
d(x): Number of Deaths; defined as l(x) - l(x+1)
q(x): Probability of Death; this is given to me already
m(x): Mortality Rate; defined as -LN(1 - q(x))
L(x): Total Person-Years of Cohorts in the Interval (x, x+1); defined as l(x+1) + (0.5 * d(x))
T(x): Total Person-Years of All Cohorts in the Interval (x, N); defined as SUM(L(x)) [from x to N]
e(x): Remaining Life Expectancy; defined as T(x) / l(x)
Now I'm not asking how to get all of these values, I just need help getting started and maybe being pointed in the right direction. As far as I can tell, in VFP there is no way to point to a specific row in a data table, so I can't do what I normally do in R and just make a loop, i.e. I can't do something like:
for (i in 1:length(given_table$Age)) {
  new_table$mort_rate[i] <- -log(1 - given_table$death_prop[i])
}
It's been a little while, so I'm not sure that's 100% correct anyway, but my point is that I'm used to being able to create a table and alter its values individually by pointing to a specific row and/or column with a loop and a simple counter variable. However, from what I've read, there doesn't seem to be a way to do this in VFP, and I'm completely lost.
I've tried to make a Cursor, populating it with dummy values and trying to update each value sequentially using a SCATTER NAME and SCAN/REPLACE approach, but I don't really understand what's happening or how to fine-tune this for each calculation/entry that I need. (This is the post I was referencing when I tried this: Multiply and subtract values in previous row for new row in FoxPro.)
So, how do I go about making a table that relies on an iterative process to calculate subsequent values in Visual FoxPro? Are there any good resources that explain Cursors and the Scatter/Scan approach I was trying? (I couldn't find any resources that explained it in terms I could understand.)
Sorry if I've worded things poorly, I'm fairly new to programming in general. Thank you.
You absolutely can write a loop through an existing table in VFP. Use the SCAN command. However, if you're planning to add records to the same table as you go, you're going to run into some issues. Is that what you meant here? If so, my suggestion is to put the new records into a cursor as you create them and then APPEND them to the original table after you've processed all the records that were there when you started.
If you're putting records into a different table as you loop through the original, this is straightforward:
* Assumes you've already created the table or cursor to hold the result
SELECT YourOriginalTable && substitute in the alias/name for the original table
SCAN
    * Do your calculations
    * Substitute appropriately for YourNewTable and the two lists
    INSERT INTO YourNewTable (<list of fields>) VALUES (<list of values>)
ENDSCAN
In the INSERT command, if you refer to any fields of the original table, you need to alias them, like this: YourOriginalTable.YourField, again substituting appropriately.
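For instance, with placeholder names (mort_rate and death_prop are taken from the R example above and are only assumptions):
INSERT INTO YourNewTable (mort_rate) ;
    VALUES (-LOG(1 - YourOriginalTable.death_prop))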
A bit too late but maybe still helps.
The steps to achieve what you want are:
0. Close the tables - just in case (see CLOSE DATABASE).
1. Open the Age table (see USE in the VFP help).
2. Create the Survival table structure (see CREATE TABLE). For this you need to know the field type for each of your l(x), d(x), etc. values. Let's say you named the fields like your functions (i.e. lx, dx, etc.).
3. Select the Age table (see SELECT).
4. Loop through the Age table (see SCAN).
5. Pass each record into variables (see SCATTER).
6. Make your calculations, starting from the Age table data (the variables), using the l(x), d(x), etc. formulas, and store the results in variables named after your Survival table fields, e.g. m.mx = -LOG(1 - m.qx) && see LOG; here qx is the probability-of-death field, since m(x) = -LN(1 - q(x)). Note: in these calculations you can use any mix of Age table variables and the newly created variables.
7. After you have calculated all the fields for Survival, write them into the table (see APPEND and GATHER commands).
8. Close the tables (see CLOSE DATABASE).
A minimal sketch of these steps follows below.
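The sketch assumes the given table holds fields Age and qx; all names are placeholders. L(x) is stored as LLx because VFP field names are case-insensitive and would otherwise collide with lx:
CLOSE DATABASES
USE AgeTable IN 0 ALIAS ages        && the given table: Age, qx
CREATE TABLE Survival (Age I, lx N(14,6), dx N(14,6), qx N(8,6), ;
    mx N(12,8), LLx N(14,6), Tx N(14,6), ex N(10,4))

lnLx = 100000                       && l(0), the starting cohort (radix)
SELECT ages
SCAN
    SCATTER MEMVAR                  && creates m.Age, m.qx from the current row
    lnMx  = -LOG(1 - m.qx)          && m(x) = -LN(1 - q(x))
    lnLx1 = lnLx * EXP(-lnMx)       && l(x+1) = l(x) * EXP(-m(x))
    lnDx  = lnLx - lnLx1            && d(x) = l(x) - l(x+1)
    lnLLx = lnLx1 + 0.5 * lnDx      && L(x) = l(x+1) + 0.5 * d(x)
    INSERT INTO Survival (Age, lx, dx, qx, mx, LLx) ;
        VALUES (m.Age, lnLx, lnDx, m.qx, lnMx, lnLLx)
    lnLx = lnLx1                    && carry l(x+1) into the next iteration
ENDSCAN
* T(x) and e(x) need a second, reverse pass over Survival to accumulate
* SUM(L(x)) from the oldest age down, then e(x) = T(x) / l(x).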

How to list all tables in ABAP programmatically?

All tables may be listed with t-code SE16 and table DD02L. But how can that list be accessed programmatically?
It is rarely a good idea to access the database tables directly since you will have to deal with all kinds of technicalities you probably don't even know about - active / inactive versions, for example. You will also bypass all security and authorization checks, which might be irrelevant to you personally, but is undesirable in general. To get a list of tables, you can use the function module RPY_TABLE_SELECT. This function module will take care of the version handling and provide the description in the language of your choice as well.
I improved Alex's code a bit and put it here as an option:
SELECT tabname
  FROM DD02L
  WHERE TABCLASS = 'TRANSP'
  INTO TABLE @DATA(itab).

LOOP AT itab ASSIGNING FIELD-SYMBOL(<FS>).
  WRITE: / <FS>.
ENDLOOP.
Several things were refined: inline declarations were utilized, field symbols were added, SELECT * and WHERE ... IN were omitted, and so on. Also, tables in SAP have only the TRANSP class; the INTTAB class belongs to structures.
Note: the sample is functional since ABAP 7.40, SP08.
An ongoing search resulted in the following snippet:
DATA ITAB TYPE TABLE OF DD02L.

SELECT * FROM DD02L INTO TABLE ITAB
  WHERE TABCLASS IN ('TRANSP', 'INTTAB').
WRITE: SY-SUBRC.

DATA FS TYPE DD02L.
LOOP AT ITAB INTO FS.
  WRITE: / FS-TABNAME.
ENDLOOP.
Table description is given in table DD02T.
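For example, a sketch joining the two (descriptions in the logon language; 7.40+ syntax assumed):
SELECT t~tabname, d~ddtext
  FROM dd02l AS t
  LEFT OUTER JOIN dd02t AS d
    ON  d~tabname    = t~tabname
    AND d~ddlanguage = @sy-langu
    AND d~as4local   = 'A'
  WHERE t~tabclass = 'TRANSP'
  INTO TABLE @DATA(tables_with_texts).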

Get value from a table and save it inside of a structure

I am new to ABAP and I have to modify these lines of code:
LOOP AT t_abc ASSIGNING <fs_abc> WHERE lgart = xyz.
  g_abc-lkj = g_abc-lkj + <fs_abc>-abc.
ENDLOOP.
A coworker told me that I have to use a structure and not a field symbol.
What would the syntax be, and why use a structure in this case?
I have no idea why the coworker wants you to use a structure in this case, because using a field symbol while looping is usually more performant. The reason could be that you are doing some kind of novice training and he wants you to learn different syntax variants.
Using a structure while looping would look like this:
LOOP AT t_abc INTO DATA(ls_abc) WHERE lgart = xyz.
  g_abc-lkj = g_abc-lkj + ls_abc-abc.
ENDLOOP.
Your code is correct, because a field symbol functions almost the same as a structure.
For a field symbol:
A field symbol is a pointer, so there is no data copy action, and the performance is better.
If you change a value via the field symbol, the internal table gets changed as well.
For a structure:
A structure is a copy of the data, so there is a data copy action, and the performance is bad if the data row is bigger than 200 bytes (based on the SAP ABAP programming guide for performance).
If you change the data in the structure, the original internal table remains the same, because there are two copies of the data in memory.

TSQL: Is there a way to limit the rows returned and count the total that would have been returned without the limit (without adding it to every row)?

I'm working to update a stored procedure that currently selects up to n rows and, if the rows returned equal n, does a select count without the limit, and then returns the original select and the total impacted rows.
Kinda like:
SELECT TOP (@rowsToReturn)
    A.data1,
    A.data2
FROM mytable A;

SET @maxRows = @@ROWCOUNT;

IF @rowsToReturn = @maxRows
BEGIN
    SET @maxRows = (SELECT COUNT(1) FROM mytable A);
END
I want to reduce this to a single select statement. Based on this question, COUNT(*) OVER() allows this, but it is put on every single row instead of in an output parameter. Maybe there is something like FOUND_ROWS() in MySQL, such as an @@TOTALROWCOUNT or some such.
As a side note, since the actual select has an ORDER BY, the database will already need to traverse the entire set (to make sure that it gets the correct first n ordered records), so it should already have this count somewhere.
As @MartinSmith mentioned in a comment on this question, there is no direct (i.e. pure T-SQL) way of getting the total number of rows that would be returned while at the same time limiting it. In the past I have used the method of:
dump the query to a temp table to grab @@ROWCOUNT (the total set)
use ROW_NUMBER() AS [ResultID] on the ordered results of the main query
SELECT TOP (n) FROM #Temp ORDER BY [ResultID] or something similar
Of course, the downside here is that you have the disk I/O cost of getting those records into the temp table. Put [tempdb] on SSD? :)
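A sketch of that approach (mytable and the ORDER BY column are placeholders for the real query and its ordering):
DECLARE @rowsToReturn INT = 100,
        @maxRows      INT;

-- Dump the full, numbered result set to a temp table.
SELECT A.data1,
       A.data2,
       ROW_NUMBER() OVER (ORDER BY A.data1) AS [ResultID]
INTO #Temp
FROM mytable A;

SET @maxRows = @@ROWCOUNT;   -- total row count of the full result set

-- Return only the requested subset, in the intended order.
SELECT TOP (@rowsToReturn) data1, data2
FROM #Temp
ORDER BY [ResultID];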
I have also experienced the "run COUNT(*) with the same rest of the query first, then run the regular SELECT" method (as advocated by @Blam), and it is not a "free" re-run of the query:
It is a full re-run in many cases. The issue is that when doing COUNT(*) (hence not returning any fields), the optimizer only needs to worry about indexes in terms of the JOIN, WHERE, GROUP BY, ORDER BY clauses. But when you want some actual data back, that could change the execution plan quite a bit, especially if the indexes used to get the COUNT(*) are not "covering" for the fields in the SELECT list.
The other issue is that even if the indexes are all the same and hence all of the data pages are still in cache, that just saves you from the physical reads. But you still have the logical reads.
I'm not saying this method doesn't work, but I think the method in the Question that only does the COUNT(*) conditionally is far less stressful on the system.
The method advocated by @Gordon is actually functionally very similar to the temp table method I described above: it dumps the full result set to [tempdb] (the INSERTED table is in [tempdb]) to get the full @@ROWCOUNT and then it gets a subset. On the downside, the INSTEAD OF TRIGGER method is:
a lot more work to set up (as in 10x - 20x more): you need a real table to represent each distinct result set, you need a trigger, the trigger needs to either be built dynamically, or get the number of rows to return from some config table, or I suppose it could get it from CONTEXT_INFO() or a temp table. Still, the whole process is quite a few steps and convoluted.
very inefficient: first it does the same amount of work dumping the full result set to a table (i.e. into the INSERTED table--which lives in [tempdb]) but then it does an additional step of selecting the desired subset of records (not really a problem as this should still be in the buffer pool) to go back into the real table. What's worse is that second step is actually double I/O as the operation is also represented in the transaction log for the database where that real table exists. But wait, there's more: what about the next run of the query? You need to clear out this real table. Whether via DELETE or TRUNCATE TABLE, it is another operation that shows up (the amount of representation based on which of those two operations is used) in the transaction log, plus is additional time spent on the additional operation. AND, let's not forget about the step that selects the subset out of INSERTED into the real table: it doesn't have the opportunity to use an index since you can't index the INSERTED and DELETED tables. Not that you always would want to add an index to the temp table, but sometimes it helps (depending on the situation) and you at least have that choice.
overly complicated: what happens when two processes need to run the query at the same time? If they are sharing the same real table to dump into and then select out of for the final output, then there needs to be another column added to distinguish between the SPIDs. It could be @@SPID. Or it could be a GUID created before the initial INSERT into the real table is called (so that it can be passed to the INSTEAD OF trigger via CONTEXT_INFO() or a temp table). Whatever the value is, it would then be used to do the DELETE operation once the final output has been selected. And if not obvious, this part influences a performance issue brought up in the prior bullet: TRUNCATE TABLE cannot be used as it clears the entire table, leaving DELETE FROM dbo.RealTable WHERE ProcessID = @WhateverID; as the only option.
Now, to be fair, it is possible to do the final SELECT from within the trigger itself. This would reduce some of the inefficiency as the data never makes it into the real table and then also never needs to be deleted. It also reduces the over-complication as there should be no need to separate the data by SPID. However, this is a very time-limited solution as the ability to return results from within a trigger is going bye-bye in the next release of SQL Server, so sayeth the MSDN page for the disallow results from triggers Server Configuration Option:
This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible. We recommend that you set this value to 1.
The only actual way to do:
the query one time
get a subset of rows
and still get the total row count of the full result set
is to use .Net. If the procs are being called from app code, please see "EDIT 2" at the bottom. If you want to be able to randomly run various stored procedures via ad hoc queries, then it would have to be a SQLCLR stored procedure so that it could be generic and work for any query as stored procedures can return dynamic result sets and functions cannot. The proc would need at least 3 parameters:
@QueryToExec NVARCHAR(MAX)
@RowsToReturn INT
@TotalRows INT OUTPUT
The idea is to use "Context Connection = true;" to make use of the internal / in-process connection. You then do these basic steps:
call ExecuteDataReader()
before you read any rows, do a GetSchemaTable()
from the SchemaTable you get the result set field names and datatypes
from the result set structure you construct a SqlDataRecord
with that SqlDataRecord you call SqlContext.Pipe.SendResultsStart(_DataRecord)
now you start calling Reader.Read()
for each row you call:
Reader.GetValues()
DataRecord.SetValues()
SqlContext.Pipe.SendResultsRow(_DataRecord)
RowCounter++
Rather than doing the typical "while (Reader.Read())", you instead include the @RowsToReturn param: while (Reader.Read() && RowCounter < RowsToReturn.Value)
After that while loop, call SqlContext.Pipe.SendResultsEnd() to close the result set (the one that you are sending, not the one you are reading)
then do a second while loop that cycles through the rest of the result, but never gets any of the fields:
while (Reader.Read())
{
RowCounter++;
}
then just set TotalRows = RowCounter; which will pass back the number of rows for the full result set, even though you only returned the top n rows of it :)
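Pulled together, a condensed sketch of those steps (simplifying assumptions: every column is streamed back as NVARCHAR for brevity, a real version would map the schema metadata to proper SqlMetaData types, and error handling is omitted):
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class LimitedResultProcs
{
    [SqlProcedure]
    public static void ExecWithTotal(SqlString QueryToExec,
                                     SqlInt32 RowsToReturn,
                                     out SqlInt32 TotalRows)
    {
        int rowCounter = 0;
        using (SqlConnection conn =
                   new SqlConnection("Context Connection = true;"))
        using (SqlCommand cmd = new SqlCommand(QueryToExec.Value, conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Build the result-set shape from the reader's metadata.
                SqlMetaData[] columns = new SqlMetaData[reader.FieldCount];
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    columns[i] = new SqlMetaData(reader.GetName(i),
                                                 SqlDbType.NVarChar, 4000);
                }
                SqlDataRecord record = new SqlDataRecord(columns);

                // Stream back at most RowsToReturn rows.
                SqlContext.Pipe.SendResultsStart(record);
                while (reader.Read() && rowCounter < RowsToReturn.Value)
                {
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        if (reader.IsDBNull(i))
                            record.SetDBNull(i);
                        else
                            record.SetString(i, reader.GetValue(i).ToString());
                    }
                    SqlContext.Pipe.SendResultsRow(record);
                    rowCounter++;
                }
                SqlContext.Pipe.SendResultsEnd();

                // Drain the rest of the result set just to count it.
                while (reader.Read())
                {
                    rowCounter++;
                }
            }
        }
        TotalRows = rowCounter;
    }
}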
Not sure how this performs against the temp table method, the dual call method, or even @M.Ali's method (which I have also tried and kinda like, but the question was specific to not sending the value as a column), but it should be fine and does accomplish the task as requested.
EDIT:
Even better! Another option (a variation on the above C# suggestion) is to use the @@ROWCOUNT from the T-SQL stored procedure, sent as an OUTPUT parameter, rather than cycling through the rest of the rows in the SqlDataReader. So the stored procedure would be similar to:
CREATE PROCEDURE SchemaName.ProcName
(
    @Param1 INT,
    @Param2 VARCHAR(05),
    @RowCount INT = -1 OUTPUT -- default so it doesn't have to be passed in
)
AS
SET NOCOUNT ON;

{any ol' query}

SET @RowCount = @@ROWCOUNT;
Then, in the app code, create a new SqlParameter, Direction = Output, for "@RowCount". The numbered steps above stay the same, except the last two (10 and 11), which change to:
Instead of the 2nd while loop, just call Reader.Close()
Instead of using the RowCounter variable, set TotalRows = (int)RowCountOutputParam.Value;
I have tried this and it does work. But so far I have not had time to test the performance against the other methods.
EDIT 2:
If the T-SQL stored procs are being called from the app layer (i.e. no need for ad hoc execution) then this is actually a much simpler variation of the above C# methods. In this case you don't need to worry about the SqlDataRecord or the SqlContext.Pipe methods. Assuming you already have a SqlDataReader set up to pull back the results, you just need to:
Make sure the T-SQL stored proc has a @RowCount INT = -1 OUTPUT parameter
Make sure to SET @RowCount = @@ROWCOUNT; immediately after the query
Register the OUTPUT param as a SqlParameter having Direction = Output
Use a loop similar to: while(Reader.Read() && RowCounter < RowsToReturn) so that you can stop retrieving results once you have pulled back the desired amount.
Remember to not limit the result in the stored proc (i.e. no TOP (n))
At that point, just like what was mentioned in the first "EDIT" above, just close the SqlDataReader and grab the .Value of the OUTPUT param :).
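A sketch of the app-layer side (the connection string, proc name, and parameter values are all assumptions):
using System.Data;
using System.Data.SqlClient;

string connectionString = "<your connection string>";
int rowsToReturn = 100;
int totalRows;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SchemaName.ProcName", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@Param1", 123);
    cmd.Parameters.AddWithValue("@Param2", "abc");

    var rowCountParam = new SqlParameter("@RowCount", SqlDbType.Int)
    {
        Direction = ParameterDirection.Output
    };
    cmd.Parameters.Add(rowCountParam);

    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        int rowCounter = 0;
        while (reader.Read() && rowCounter < rowsToReturn)
        {
            // consume the fields of the current row here
            rowCounter++;
        }
    } // the reader must be closed before OUTPUT parameters are populated

    totalRows = (int)rowCountParam.Value;
}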
How about this....
DECLARE @N INT = 10;

;WITH CTE AS
(
    SELECT A.data1,
           A.data2
    FROM mytable A
)
SELECT TOP (@N) *,
       (SELECT COUNT(*) FROM CTE) AS Total_Rows
FROM CTE;
The last column will be populated with the total number of rows it would have returned without the TOP clause.
The issue with your requirement is that you are expecting a SINGLE select statement to return a table and also a scalar value, which is not possible.
A single select statement will return a table or a scalar value. Or you can have two separate selects, one returning a scalar value and the other returning a table. The choice is yours :)
Just because you think TSQL should have a row count because of a sort does not mean it does. And if it does, it is not currently sharing it with the outside world.
What you are missing is that this is very efficient:
select count(*)
from ...
where ...

select top x
from ...
where ...
order by ...
With the count(*), unless the query is just plain ugly, those indexes should be in memory.
It has to perform a count to sort... based on what?
Did you actually evaluate any query plans?
If TSQL has to perform a sort, then explain the following:
Why is the count(*) 100% of the cost when the second query had to do a count anyway?
Just where in that second query plan is there a free opportunity to count?
Why are those query plans so different if they both need to count?
I think there is an arcane way to do what you want. It involves triggers and non-temporary tables. And, I should mention, although I have implemented each piece (for different purposes), I have never put them together for this purpose.
The idea starts with this Stack Overflow question. According to this source, @@ROWCOUNT counts the number of attempted inserts, even when they don't really happen. Now, I must admit that a perusal of available documentation doesn't seem to touch on this topic, so this may or may not be "correct" behavior. This method is relying on this "problem".
So, you could do what you want by:
Creating a new table for the output -- but not a table variable or a temporary table.
Creating an "instead of" trigger that prevents more than #maxRows from going into the table.
Select the query results into the table.
Read @@ROWCOUNT after the select.
Note that you can create the table and trigger using dynamic SQL. You could also create it once, and have the trigger read the @maxRows value from some sort of parameter table. As mentioned before, this needs to be a real table that supports triggers.
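A sketch of that arrangement (names are placeholders, the cap is hard-coded rather than read from a parameter table, and the @@ROWCOUNT behavior is the "attempted inserts" behavior described above, which may or may not hold on your version):
CREATE TABLE dbo.LimitedResults (data1 INT, data2 VARCHAR(50));
GO

CREATE TRIGGER dbo.trg_LimitedResults_Cap
ON dbo.LimitedResults
INSTEAD OF INSERT
AS
BEGIN
    -- Let at most 100 rows through (stand-in for @maxRows).
    INSERT INTO dbo.LimitedResults (data1, data2)
    SELECT TOP (100) data1, data2
    FROM INSERTED;
END;
GO

INSERT INTO dbo.LimitedResults (data1, data2)
SELECT A.data1, A.data2
FROM mytable A;

-- Per the claim above, this reports the attempted inserts, i.e. the
-- full result-set count, not just the capped rows.
SELECT @@ROWCOUNT AS [TotalRows];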

Move FMOIX/FMCOX structures into Internal Table

I am a newbie to ABAP (3 days' experience) and I am currently on a task to write reports using ABAP code. It is like moving some data from a specific SAP database to a Business Intelligence staging area.
The core difficulty is that some data on the SAP server is in the format of dictionary structures (FMOIX, FMCOX, etc.), and I need to move this data into internal tables at program runtime. I was told that Open SQL would not work in this case.
If you still do not get what I mean, I can suggest several ways, actually given by my supervisor. The first is to use the GET event, say:
GET FMOIX.
  IF FMOIX-zhdlt > from_dat AND FMOIX-zhdlt < to_dat.
    APPEND FMOIX TO itab.
  ENDIF.
The thing is that I am still not very clear about this GET event. Is it just an event handler thing, or can it loop through data records?
What I have googled for more than two days gives me something like:
LOOP at FMOIX.
MOVE FMOIX to itab.
ENDLOOP.
So what are the ways to move a transactional structure like FMOIX into an internal table, say one named ITAB?
Your answer would be greatly appreciated. Though I have time, I am totally new.
Thanks a lot.
If your supervisor is suggesting that you use the GET event, it means that your program is (or should be) using a logical database - in this case probably FMF or FMF_BCS.
Doing GET FMOIX reads a set of fields defined in the logical database (as a node). Underneath your GET statement, you can use FMOIX as a structure, e.g. WRITE FMOIX-field1. The program will implicitly loop through all the rows returned according to your selection criteria (it's not explicitly defined in the code like a LOOP...ENDLOOP is). You should be able to use MOVE-CORRESPONDING to move the contents of each row into a proper structure, and then APPEND that structure to your itab.
Quick link on GET in ABAPDocu
Note: this answer is a bit of a guess, since I've only used a logical database once, and the documentation is a little thin on the ground compared to the volumes out there about standard SELECTs and internal tables.
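Putting those pieces together, a minimal sketch (assuming the report's attributes bind it to a logical database that provides the FMOIX node, and that from_dat and to_dat are selection-screen parameters):
NODES fmoix.

DATA: itab TYPE STANDARD TABLE OF fmoix,
      wa   TYPE fmoix.

GET fmoix.
  " Runs once for each row the logical database delivers.
  IF fmoix-zhdlt > from_dat AND fmoix-zhdlt < to_dat.
    MOVE-CORRESPONDING fmoix TO wa.
    APPEND wa TO itab.
  ENDIF.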
You can create your internal table with the type of that structure, such as:
DATA: itab LIKE TABLE OF fmoix WITH HEADER LINE.
And you can use this internal table wherever you are using your select statements.
Such as:
SELECT * FROM ____
  INTO CORRESPONDING FIELDS OF TABLE itab
  WHERE zhdlt GT from_dat
    AND zhdlt LT to_dat.
I'm not sure this is what you are looking for, but an itab created with the type of that structure can be filled with all the corresponding data coming from your select. You can't loop over FMOIX because it's not a table, it's a structure. So is there any specific reason to hold your data in structures?
Hope it was helpful.
Talha