Standard deeply nested data type? (ABAP)

I took the nice example clientPrintDescription.py and created an HTML form from the description that matches the input data types of a particular RFC function.
In SAP, data types can contain data types, which in turn can contain further data types, and I want to test my HTML form generator with a deeply nested data type.
Of course I could create my own custom data type, but it would be more reusable if I used an existing (RFC-capable) data type.
Which data type in SAP contains many nested data types? And ideally many different data types?

I cannot tell which structure is best for your case, but you could filter the view DD03VV (now that is a meaningful name) using transaction SE16H. If you GROUP BY the column TABNAME and filter WHERE TABCLASS = 'INTTAB', the number of entries is an indicator of the size of each structure.
You could also aggregate and, in a next step, filter on the maximum DEPTH value (like a SQL HAVING, which as far as I know does not exist in SAP R/3). On my system the maximum depth is 12.
Edit: If you cannot access SE16H, here's a workaround: call SE37 and execute SE16N_START with I_HANA = 'X'. If you cannot access SE37, use SA38 and run RSFUNCTIONBUILDER (the report behind SE37).
PS: The queries on DD03VV are awfully slow, probably due to missing optimization for complex queries on ABAP dictionary views.
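If you prefer to do the counting from the Python side, where the form generator already lives, here is a minimal sketch using pyrfc and the standard module RFC_READ_TABLE. The connection parameters are placeholders, and it assumes RFC_READ_TABLE can read the view on your release:
from collections import Counter
from pyrfc import Connection

conn = Connection(ashost='my.sap.host', sysnr='00', client='100',
                  user='ME', passwd='secret')

# Fetch only TABNAME for all DDIC structures, then count fields per
# structure client-side; RFC_READ_TABLE's 512-byte row limit is not
# an issue here since we read a single short column.
result = conn.call('RFC_READ_TABLE',
                   QUERY_TABLE='DD03VV',
                   DELIMITER='|',
                   FIELDS=[{'FIELDNAME': 'TABNAME'}],
                   OPTIONS=[{'TEXT': "TABCLASS = 'INTTAB'"}])

counts = Counter(row['WA'].strip() for row in result['DATA'])
for tabname, n in counts.most_common(20):
    print(tabname, n)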

If I had to give only one DDIC structure, I would give this one:
FDT_TEST_DDIC_BIND_DEEP_S
It contains many elements of miscellaneous types, including nested ones, and it exists in any ABAP-based system (it belongs to the BASIS layer).
As it contains some data references and object references in sub-levels, which are invalid in RFC, you'll have to copy it and remove those reference fields.
There are also these structures (column "TABNAME") with fields of some interest:
TABNAME              FIELDNAME     Description
-------------------  ------------  ------------------------------------------------
SFW_BF               FROM_RELEASE  elementary built-in type
SAUNIT_S_ALERT       WHEN          data element
SAUNIT_S_ALERT       HEADER        structure
SAUNIT_S_ALERT       TEXT_INFOS    table type
SAUNIT_PROG_INFO     .INCLUDE      include structure SAUNIT_S_TADIR_KEY
SKWF_IOFLD           .INCLU-FLD    include structure SKWF_IO
SWFEXPSTRU2          .INCLU--AP    append structure SWFEXPSTRU3
APPEND_BAPI0002_2_2  .APPEND_DU    recursive append structure (append of BAPI0002_2,
                                   the unique component of APPEND_BAPI0002_2_2)
SOADDRESS                          structure with nested structures on 2 levels
Some of these structures may not exist in every ABAP release; they were present in ABAP BASIS 7.02 and 7.52.

Try the function module RFC_METADATA_TEST...
It has some deeply nested parameters.
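To check how deep the nesting actually goes, you can walk the function's metadata the same way clientPrintDescription.py does. A rough sketch with pyrfc; the parameters, type_description and fields attributes follow that example and may differ between pyrfc versions:
from pyrfc import Connection

def max_depth(type_desc, level=1):
    # Recurse into nested type descriptions to find the deepest level.
    deepest = level
    for field in type_desc.fields:
        nested = field.get('type_description')
        if nested is not None:
            deepest = max(deepest, max_depth(nested, level + 1))
    return deepest

conn = Connection(ashost='my.sap.host', sysnr='00', client='100',
                  user='ME', passwd='secret')
func = conn.get_function_description('RFC_METADATA_TEST')
for param in func.parameters:
    type_desc = param['type_description']
    depth = max_depth(type_desc) if type_desc is not None else 0
    print(param['name'], param['parameter_type'], 'depth:', depth)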

In SE80, under the Enterprise Services Browser, you will find examples of proxy structures that are complex DDIC structures with many different types.
Example: edo_tw_a0401request
Just browse around, you will find something you like.

I found STFC_STRUCTURE in the test_datatypes docs of PyRFC.
It works fine for testing, since it is already available in my SAP system, so I don't need a dummy RFC for testing. Nice.
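For reference, a minimal pyrfc call; the RFCTEST field names and values below are taken from what the test_datatypes docs show and may need adjusting on your system:
from pyrfc import Connection

conn = Connection(ashost='my.sap.host', sysnr='00', client='100',
                  user='ME', passwd='secret')

# IMPORTSTRUCT and the RFCTABLE rows are typed as the structure RFCTEST.
imp = {'RFCFLOAT': 1.23456789,
       'RFCCHAR1': 'a', 'RFCCHAR2': 'ij', 'RFCCHAR4': 'bcde',
       'RFCINT1': 1, 'RFCINT2': 2, 'RFCINT4': 345,
       'RFCHEX3': b'fgh',
       'RFCTIME': '121120', 'RFCDATE': '20140101',
       'RFCDATA1': 'k' * 50, 'RFCDATA2': 'l' * 50}

result = conn.call('STFC_STRUCTURE', IMPORTSTRUCT=imp, RFCTABLE=[imp])
print(result['ECHOSTRUCT'])
print(result['RESPTEXT'])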


Smartform error "Flat types may only be referenced using LIKE for table parameters"

I created a table ZPDETAIL01 in SE11 and activated it.
In the Smart Forms Form Interface, I created a table parameter ZDETAIL on the Tables tab, with type assignment TYPE and associated type ZPDETAIL01.
When I check it, an error occurs: "ZPDETAIL01: Flat types may only be referenced using LIKE for table parameters".
Is this an error in my table itself or in my parameter settings? Thanks.
I changed the type assignment to LIKE and the problem was solved. But I wonder why the value help offers only TYPE and TYPE REF, never LIKE.
Simply a flaw in the UI: if LIKE is allowed, then it should be listed as a possible value.
But I guess SAP just didn't care to correct these little things in this obsolete technology (i.e. Smart Forms; nowadays prefer Adobe Forms or third-party solutions). Note that the list of values comes from the table RSFBTYPEIN. LIKE was probably defined in this table at some point, but as LIKE became obsolete for typing import and export parameters in function modules, SAP presumably removed it: majority wins over minority. Just a guess.
If you wish, you may open a ticket at SAP support to have it corrected.
Behaviors in ABAP 7.52 SP01 (tests done with DDIC objects: flat table SCARR, non-flat table SOTR_TEXTU, table type BAPIRETTAB):
Typing  Associated type    Button          Error message
------  -----------------  --------------  ------------------------------------------------
TYPE    Flat struc/table   Check           SCARR: Flat types may only be referenced
                                           using LIKE for table parameters
TYPE    Flat struc/table   Activate        Only table types may be used as the
                                           reference type for a table parameter
TYPE    Non-flat str/tab.  Check/Activate  Only table types may be used as the
                                           reference type for a table parameter
TYPE    Table type         Check/Activate  None
LIKE    Flat struc/table   Check/Activate  None
LIKE    Non-flat str/tab.  Check/Activate  None, but short dump at runtime (because of a
                                           syntax error in the FM: "&1" must be a flat
                                           structure. Internal tables, strings, references,
                                           and structures cannot be used as components.)
LIKE    Table type         Check           Type BAPIRETTAB is not allowed in this context
LIKE    Table type         Activate        Tables using LIKE may only reference flat structures
As you can see, there is a bigger problem than LIKE just not being displayed: there is a short dump in one case!
Note that I didn't test TYPE REF TO, but I doubt a TABLES parameter can use it.

Move FMOIX/FMCOX structures into Internal Table

I am a newbie to ABAP (3 days of experience), and my current task is to write reports in ABAP code, i.e. to move some data from a specific SAP database to a Business Intelligence staging area.
The core difficulty is that some data on the SAP server exists in the form of dictionary structures (FMOIX, FMCOX, etc.), and I need to move these data into internal tables at program runtime. I was told that Open SQL would not work in this case.
If you still do not get what I mean: there are several approaches, actually suggested by my supervisor. The first is to use the GET event, say:
GET fmoix.
IF fmoix-zhdlt > from_dat AND fmoix-zhdlt < to_dat.
  APPEND fmoix TO itab.
ENDIF.
The thing is that I am still not very clear about this GET event. Is it just an event-handler thing, or can it loop through data records?
What I googled for more than two days gives me something like:
LOOP at FMOIX.
MOVE FMOIX to itab.
ENDLOOP.
So what are the ways to move a transactional structure like FMOIX into an internal table, say one named ITAB?
Your answer would be greatly appreciated. Though I have time, I am totally new.
Thanks a lot.
If your supervisor is suggesting that you use the GET event, it means that your program is (or should be) using a logical database, in this case probably FMF or FMF_BCS.
GET FMOIX reads a set of fields defined in the logical database (as a node). Underneath your GET statement, you can use FMOIX as a structure, e.g. WRITE FMOIX-field1. The program implicitly loops through all the rows returned according to your selection criteria; unlike a LOOP...ENDLOOP, the loop is not explicitly defined in the code. You should be able to use MOVE-CORRESPONDING to move the contents of each row into a proper structure, and then APPEND that structure to your itab.
Quick link on GET in ABAPDocu
Note: this answer is a bit of a guess, since I've only used a logical database once, and the documentation is a little thin on the ground compared to the volumes out there about standard SELECTs and internal tables.
You can declare your internal table with the type of that structure, such as:
data: itab like table of fmoix with header line.
You can then fill this internal table wherever you run your SELECT, such as:
select * from ____
  into corresponding fields of table itab
  where zhdlt gt from_dat
    and zhdlt lt to_dat.
I'm not sure this is what you are looking for, but an itab declared with the type of that structure can be filled with all the corresponding data coming from your SELECT. You cannot LOOP at FMOIX because it is not a table, it is a structure. So is there any specific reason to hold your data in structures?
Hope it was helpful.
Talha

SAP passing data from application to screen. How does 'TABLES' work?

I don't quite understand how the TABLES statement works in ABAP. From a few code examples I've seen, the table name after the statement is an already existing dictionary structure. Is this the only way it can be used? Because I'm never sure which structure it is that I need.
And once it is declared, how do I pass it to the actual screen? I wish it were as straightforward as the HIDE statement; I can't get my head around this.
The TABLES statement just provides you with a single-line work area of the dictionary structure that you specify. It allows you to use fields of the structure as select-options and makes the structure available as a variable in your program.
If you are trying to write the structure to an ABAP list, you could use it as follows:
tables: aufk.
select single * from aufk into aufk
where aufnr = some_order_number.
"I'm pretty sure the into clause is optional
"because of the tables statement, but including it to be explicit.
write / aufk.
If you are trying to display the field using ABAP dynpro, you should make sure that you read the field in the PBO and add the field(s) to the screen from the dictionary.

Database: best way to model a spreadsheet

I am trying to figure out the best way to model a spreadsheet (from the database point of view), taking into account:
The spreadsheet can contain a variable number of rows.
The spreadsheet can contain a variable number of columns.
Each cell contains a single value, but its type is unknown in advance (integer, date, string).
It has to be easy (and performant) to generate a CSV file containing the data.
I am thinking about something like:
class Spreadsheet(models.Model):
    name = models.CharField(max_length=100)
    creation_date = models.DateField()

class Column(models.Model):
    spreadsheet = models.ForeignKey(Spreadsheet)
    name = models.CharField(max_length=100)
    type = models.CharField(max_length=100)

class Cell(models.Model):
    column = models.ForeignKey(Column)
    row_number = models.IntegerField()
    value = models.CharField(max_length=100)
Can you think of a better way to model a spreadsheet? My approach stores every value as a string, and I am worried that generating the CSV file will be too slow.
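For what it's worth, this is roughly the CSV export I have in mind for this model (a sketch only; export_csv is a hypothetical helper using Django's ORM and Python's csv module):
import csv

def export_csv(spreadsheet, out_file):
    # Column names become the header row; keep a stable column order.
    columns = list(spreadsheet.column_set.order_by('id'))
    writer = csv.writer(out_file)
    writer.writerow([c.name for c in columns])

    # Pivot the cells into rows: one dict per row_number, keyed by column id.
    rows = {}
    for cell in Cell.objects.filter(column__in=columns):
        rows.setdefault(cell.row_number, {})[cell.column_id] = cell.value
    for row_number in sorted(rows):
        writer.writerow([rows[row_number].get(c.id, '') for c in columns])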
from a relational viewpoint:
Spreadsheet <-->> Cell : RowId, ColumnId, ValueType, Contents
There is no requirement for row and column to be entities, but you can model them as entities if you like.
Databases aren't designed for this. But you can try a couple of different ways.
The naive way to do it is a version of One Table To Rule Them All: create a giant generic table, all columns typed as (n)varchar, with enough columns to cover any foreseeable spreadsheet. Then you'll need a second table to store metadata about the first, such as what Column1's spreadsheet column name is, what type it stores (so you can cast in and out), etc. Then you'll need triggers to run against inserts that check the incoming data against the metadata to make sure it isn't corrupt, and so on. As you can see, this way is a complete and utter cluster. I'd run screaming from it.
The second option is to store your data as XML. Most modern databases have XML data types and some support for XPath within queries. You can also use XSDs to provide some kind of data validation, and XSLTs to transform that data into CSVs. I'm currently doing something similar with configuration files, and it's working out okay so far. No word on performance issues yet, but I'm trusting Knuth on that one.
The first option is probably much easier to search and faster to retrieve data from, but the second is probably more stable and definitely easier to program against.
It's times like this I wish Celko had a SO account.
You may want to study EAV (entity-attribute-value) data models, as they try to solve a similar problem.
Entity-Attribute-Value - Wikipedia
The best solution depends greatly on how the database will be used. Try to find a couple of the top use cases you expect, and only then decide on the design. For example, if there is no use case for reading the value of a single cell from the database (the data is always loaded at row level, or even in groups of rows), then there is no need to store a 'cell' as such.
That is a good question that calls for many answers, depending on how you approach it; I'd love to share an opinion with you.
This topic is one of the various ones we researched at Zenkit; we even wrote an article about it, and we'd love your opinion on it: https://zenkit.com/en/blog/spreadsheets-vs-databases/

Converting SQL Result Sets to XML

I am looking for a tool that can serialize and/or transform SQL result sets into XML. Getting dumbed-down XML generation from SQL result sets is simple and trivial, but that's not what I need.
The solution has to be database neutral and must accept only regular SQL query results (no DB-side XML support used). A particular challenge for this tool is to produce nested XML matching an arbitrary schema from row-based results. Intermediate steps are too slow and wasteful: this needs to happen in one single step; no RS->object->XML, and preferably no RS->XML->XSLT->XML. It must support streaming, due to large result sets and big XML documents.
Anything out there for this?
With SQL Server you really should consider using the FOR XML construct in the query.
If you're using .NET, just use a DataAdapter to fill a DataSet. Once it's in a DataSet, just use its WriteXml() method. That breaks your DB->object->XML rule, but it's really how things are done. You might be able to work something out with a DataReader, but I doubt it.
Not that I know of. I would just roll my own. It's not that hard to do, maybe something like this:
#!/usr/bin/env jruby
java_import java.sql.DriverManager
# TODO some magic to load the driver
conn = DriverManager.get_connection(ARGV[0], ARGV[1], ARGV[2])
res = conn.create_statement.execute_query(ARGV[3])
puts "<result>"
meta = res.meta_data
while res.next
  puts "<row>"
  for n in 1..meta.column_count
    column = meta.get_column_name(n)
    puts "<#{column}>#{res.get_string(n)}</#{column}>"
  end
  puts "</row>"
end
puts "</result>"
Disclaimer: I just made all of that up, I'm not even bothering to pretend that it works. :-)
In .NET you can fill a DataSet from any source, and then it can write that out to disk for you as XML, with or without the schema. I can't say what performance for large sets would be like. Simple :)
Another option, depending on how many schemas you need to output, and/or how dynamic this solution is supposed to be, would be to actually write the XML directly from the SQL statement, as in the following simple example...
SELECT
'<Record>' ||
'<name>' || name || '</name>' ||
'<address>' || address || '</address>' ||
'</Record>'
FROM
contacts
You would have to prepend and append the document element, and escape any XML special characters in the column values, but I think this example is easy enough to understand.
DbUnit (www.dbunit.org) goes from SQL to XML and vice versa; you might be able to modify it further for your needs.
Technically, converting a result set to an XML file is straightforward and doesn't need any tool, unless you have a requirement to convert the data structure to fit a specific export schema. In general, the result set becomes the top-level element of the XML file, and you produce a number of record elements whose contents are effectively the fields of a record.
When it comes to Java, for example, you just need an appropriate JDBC driver for interfacing with the DBMS of your choice, which addresses the database-independence requirement (drivers are usually provided by the DBMS vendor), and a few lines of code to read the result set and print an XML string per record, per field. Not a difficult task for an average Java developer, in my opinion.
Anyway, the more concrete the purpose you state, the more concrete the answer you get.
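To make that concrete, here is a sketch in Python rather than Java (same idea; it assumes any DB-API driver, with sqlite3 used here only to keep it self-contained, and a hypothetical contacts table). Rows are written as they are fetched, so large result sets stream:
import sqlite3
import sys
from xml.sax.saxutils import escape

def write_xml(cursor, out):
    # One <row> element per record; column names become element names.
    out.write('<result>\n')
    names = [d[0] for d in cursor.description]
    for row in cursor:
        out.write('<row>')
        for name, value in zip(names, row):
            text = escape('' if value is None else str(value))
            out.write('<%s>%s</%s>' % (name, text, name))
        out.write('</row>\n')
    out.write('</result>\n')

conn = sqlite3.connect('example.db')
cursor = conn.cursor()
cursor.execute('SELECT * FROM contacts')
write_xml(cursor, sys.stdout)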
In Java, you could fill an object with the XML data (like an entity bean) and then use XMLEncoder to turn it into XML. From there you may use XSLT for further conversion, or XMLDecoder to bring it back to an object.
Greetz, GHad
PS: See http://ghads.wordpress.com/2008/09/16/java-to-xml-to-java/ for an example of the object-to-XML part. From DB to object, several more ways are possible: JDBC, Groovy DataSets or GORM. Apache Commons BeanUtils may help to fill JavaBeans via reflection-like methods.
I solved this problem with the equivalent of a mail merge, using the result set as the source and a template through which it was merged to produce the desired XML.
The template was standard XML, with a Header element, a Footer element and a Body element. Using a CDATA block in the Body element allowed me to include a complete XML structure that acted as the template for each row. To include fields from the result set in the template, I used markers that looked like this: <[FieldName]>. The template was then pre-parsed to isolate the markers, such that in operation the template requests each of the fields from the result set as the Body is being produced.
The Header and Footer elements are output only once, at the beginning and end of the output set. The Body could be any XML or text structure desired. In your case, it sounds like you might have several templates, one for each of your desired schemas.
All of the above was encapsulated in a Template class, such that after loading the Template, I merely called merge() on the template passing the resultset in as a parameter.
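A rough Python sketch of that idea; the Template class, the <[FieldName]> marker syntax and the merge() signature are reconstructions of the description above, not the original code:
import re
import sys

MARKER = re.compile(r'<\[(\w+)\]>')  # matches markers like <[FieldName]>

class Template:
    def __init__(self, header, body, footer):
        self.header = header
        self.body = body
        self.footer = footer

    def merge(self, rows, out):
        # rows: an iterable of dicts, one per result set record.
        out.write(self.header)
        for row in rows:
            out.write(MARKER.sub(lambda m: str(row[m.group(1)]), self.body))
        out.write(self.footer)

tpl = Template('<contacts>\n',
               '  <contact><name><[Name]></name></contact>\n',
               '</contacts>\n')
tpl.merge([{'Name': 'Alice'}, {'Name': 'Bob'}], sys.stdout)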