String and nested structures in import/export parameters of RFC (ABAP)

Can import and export parameters of RFC function modules be strings and nested structures?

Yes, you can have nested types in RFC.
You need to create the corresponding table type in SE11 as you wish.
Look up "Deep Structure" for an example.

Support for nested types in RFC was introduced with R/3 kernel release 6.10, if I remember correctly (it could already have been 4.6D). So the SCN entries you found, which say it is not supported, perhaps still refer to those old releases?

How to get the base type of an array type in portable JDBC

If you have a table with a column whose type is SQL ARRAY, how do you find the base type of the array type, aka the type of the individual elements of the array type?
How do you do this in vendor-agnostic pure JDBC?
How do you do this without fetching and inspecting actual row data? Equivalently: what if the table is empty?
Similar questions were asked here:
How to get array base type in postgres via jdbc
JDBC : get the type of an array from the metadata
However, I am asking for a vendor-agnostic way through the JDBC API itself: how is one supposed to solve this problem with pure JDBC? This seems like a core use case of JDBC, and I'm really surprised that I can't find a solution in it.
I've spent hours reading and re-reading the JDBC API javadocs, and several more scouring the internet, and there doesn't seem to be a proper way of doing this via the JDBC API. It should be right there via DatabaseMetaData or ResultSetMetaData, but it's apparently not.
Here are the insufficient workarounds and alternatives I've found:
- Fetch rows until you get one with an actual value for that column, get the column value, cast it to java.sql.Array, and call getBaseType (see the sketch below).
- For PostgreSQL, assume that SQL ARRAY type names are encoded as ("_" + baseTypeName).
- For Oracle, use Oracle-specific extensions that allow getting the answer.
- Some databases have a special "element_types" view with one row for each SQL ARRAY type used by current tables, including the base type and base type name.
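As a concrete illustration of the first workaround, here is a minimal sketch in plain JDBC; the connection URL, table name (articles), and array column (tags) are made up for the example:

```java
import java.sql.*;

public class ArrayBaseTypeProbe {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/demo");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT tags FROM articles WHERE tags IS NOT NULL LIMIT 1")) {
            if (rs.next()) {
                Array tags = rs.getArray(1);                 // java.sql.Array value
                System.out.println(tags.getBaseType());      // java.sql.Types constant of the elements
                System.out.println(tags.getBaseTypeName());  // driver-specific element type name
                tags.free();
            }
            // If the table is empty, or the column is all NULL, this tells you
            // nothing; that is exactly the limitation described above.
        }
    }
}
```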
My context is that I would like to use vendor-supplied JDBC connectors in Spark, in the cloud, in my company's product, where metadata discovery becomes important. I'm also investigating the feasibility of writing JDBC connectors myself for other data sources that don't have a JDBC driver or Spark connector yet. Metadata discovery matters because it lets one define the Spark InternalRow and the Spark-JDBC data getters correctly. Currently, Spark-JDBC has very limited support for SQL ARRAY and SQL STRUCT; I managed to provide the missing bits with a day or two of coding, but during that process I hit this problem, which is blocking me. If I have control over the JDBC driver implementation, I can use a kludge (i.e. encode the type information in the type name, and in the Spark JdbcDialect, take the type name and decode it to create the Catalyst type). However, I want to do it the proper JDBC way, ideally in a way that other vendor-supplied JDBC drivers will support too.
PS: It took me a surprising amount of time to locate DatabaseMetaData.getAttributes(). If I'm reading this right, this can give me the names and types of the fields/attributes of a SQL STRUCT. Again, I'm very surprised that I can get the names and types of the fields/attributes of a SQL STRUCT in vendor-agnostic pure JDBC but not get the base-type of a SQL ARRAY in vendor-agnostic pure JDBC.
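For what it's worth, a minimal sketch of that getAttributes call; the UDT name address_type is hypothetical, while the result-set column names are the ones documented for DatabaseMetaData.getAttributes:

```java
import java.sql.*;

public class StructAttributeProbe {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://localhost/demo");
             ResultSet attrs = conn.getMetaData()
                     .getAttributes(null, null, "address_type", "%")) {
            while (attrs.next()) {
                System.out.printf("%s: %d (%s)%n",
                        attrs.getString("ATTR_NAME"),        // attribute name
                        attrs.getInt("DATA_TYPE"),           // java.sql.Types constant
                        attrs.getString("ATTR_TYPE_NAME"));  // source-dependent type name
            }
        }
    }
}
```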

Is there a way to reuse a regex pattern as both a patternProperties key and string type pattern in a json schema?

For example, in the GBFS project, top level keys in the gbfs.json['properties']['data'] object are described as:
language: The language that will be used throughout the rest of the files. It MUST match the value in the system_information.json file.
This is enforced by a patternProperties definition in the gbfs.json schema. But as described by the explanation of the field, this property should match a string property with the same regex pattern in system_information.json.
Would there be a way to define this regex pattern once and use it both as a patternProperties key and string type pattern for the language field?
Not in JSON Schema syntax itself, but you could go up one level and generate your schema programmatically, say with a template that uses a placeholder variable for that regex. You could then use the template to regenerate the schema whenever the regex changes. For example, if your schema is normally kept in git, you could use a git commit hook to update the regex in all the places it is used; or if you deploy your schema with Ansible, you can generate the file from a template there too.
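A minimal sketch of that template approach in Java (11+, for Files.readString); the template file names, the {{LANGUAGE_PATTERN}} placeholder, and the regex itself are illustrative, not taken from the GBFS project:

```java
import java.nio.file.*;

public class SchemaTemplating {
    public static void main(String[] args) throws Exception {
        // The shared regex is defined exactly once...
        String languagePattern = "^[a-z]{2,3}(-[A-Z]{2})?$";
        // ...and substituted into every schema template that references it.
        for (String name : new String[] {"gbfs.json.tmpl", "system_information.json.tmpl"}) {
            String template = Files.readString(Path.of(name));
            Files.writeString(Path.of(name.replace(".tmpl", "")),
                    template.replace("{{LANGUAGE_PATTERN}}", languagePattern));
        }
    }
}
```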
As mentioned, not with JSON Schema itself.
The common approach to solving this problem is to use Jsonnet, which is a templating language for JSON.
https://jsonnet.org
Having not used it myself, I have no opinions on it, beyond that I've seen it used effectively in large-scale projects in the course of researching JSON Schema use cases.

Standard deeply nested data type?

I took the nice example clientPrintDescription.py and created an HTML form from the description, matching the input data types of the particular RFC function.
In SAP, data types can contain data types which can contain data types, and I want to test my HTML form generator with a deeply nested data type.
Of course I could create my own custom data type, but it would be more reusable if I used an existing (RFC-capable) data type.
Which data type in SAP contains a lot of nested data types? And maybe a lot of different data types?
I cannot tell which structure is best for your case, but you could filter the view DD03VV (now that is a meaningful name) using transaction SE16H. If you GROUP BY the column TABNAME and filter with WHERE TABCLASS = 'INTTAB', the number of entries is an indicator of the size of the structure.
You could also aggregate and, in a next step, filter on the maximum DEPTH value (like a SQL HAVING, which AFAIK does not exist in SAP R/3). On my system the maximum depth is 12.
Edit: If you cannot access SE16H, here's a workaround: call SE37 and execute SE16N_START with I_HANA = 'X'. If you cannot access SE37, use SA38 and run RSFUNCTIONBUILDER (the report behind SE37).
PS: Queries on DD03VV are awfully slow, probably due to missing optimization for complex queries on ABAP dictionary views.
If I had to give only one DDIC structure, I would give this one:
FDT_TEST_DDIC_BIND_DEEP_S
It contains many elements of miscellaneous types, including nested ones, and it exists in any ABAP-based system (it belongs to the "BASIS" layer).
As it contains some data and object references in sub-levels which are invalid in RFC, you'll have to copy it and remove those reference fields.
There are also these structures (column "TABNAME") with fields of some interest:
TABNAME               FIELDNAME      Description
--------------------  -------------  ------------------------------------------------
SFW_BF                FROM_RELEASE   elementary built-in type
SAUNIT_S_ALERT        WHEN           data element
SAUNIT_S_ALERT        HEADER         structure
SAUNIT_S_ALERT        TEXT_INFOS     table type
SAUNIT_PROG_INFO      .INCLUDE       include structure SAUNIT_S_TADIR_KEY
SKWF_IOFLD            .INCLU-FLD     include structure SKWF_IO
SWFEXPSTRU2           .INCLU--AP     append structure SWFEXPSTRU3
APPEND_BAPI0002_2_2   .APPEND_DU     recursive append structure (append of BAPI0002_2; unique component of APPEND_BAPI0002_2_2)
SOADDRESS                            structure with nested structures on 2 levels
Some structures may not be valid in all ABAP releases; they existed in ABAP BASIS 7.02 and 7.52.
Try the function module RFC_METADATA_TEST...
It has some deeply nested parameters.
In SE80, under the Enterprise Services Browser, you will find examples of proxy structures that are complex DDIC structures, with many different types.
Example: edo_tw_a0401request
Just browse around; you will find something you like.
I found STFC_STRUCTURE in the docs of test_datatypes of PyRFC.
It works fine for testing, since it is already available in my SAP system, so I don't need a dummy RFC for testing. Nice.

What is the difference between __assertedDate and assertedDate in hl7 FHIR json schema?

In most of the JSON schemas of HL7 FHIR resources, I found fields prefixed with _, but they are not listed in the examples. So while creating classes for the resources, shall I go only with fields not prefixed with _?
Like _assertedDate and assertedDate: are both fields needed or not?
Because, for the same resources, I don't find _assertedDate in the XML schema definition.
You need to read the details about the JSON format at http://build.fhir.org/json.html (particularly http://build.fhir.org/json.html#primitive), and see http://build.fhir.org/observation-example-10minute-apgar-score.json for an example. In short: for a primitive field such as assertedDate, the underscore-prefixed _assertedDate carries the id and extensions of that same primitive, so both properties may appear side by side in the JSON; in XML they are combined on a single element, which is why the XML schema has no _assertedDate.
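To make the pairing concrete, here is a hedged sketch of how both properties could be modeled in a class, assuming Jackson for the JSON binding; Element and Extension are simplified stand-ins for the real FHIR types:

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import java.util.List;

// Simplified stand-ins for the real FHIR Element/Extension types.
class Extension { public String url; public String valueString; }
class Element { public String id; public List<Extension> extension; }

public class AllergyIntolerance {
    // In FHIR JSON a primitive is split in two: "assertedDate" holds the value,
    // "_assertedDate" holds its id and extensions. Either may appear without the other.
    public String assertedDate;

    @JsonProperty("_assertedDate")
    public Element assertedDateElement;
}
```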

Apache Solr: Conditional block

I am reading columns from HBase and indexing them in Solr using a Morphlines file. Some field values will be in either English or German. Is there a way to specify the type of the field as "text_english_german" and, inside the definition of "text_english_german", do a conditional check to see whether the value is English or German, and use the language-specific stemmer filter factory for indexing and querying the data?
Thanks,
Kishore
With a slightly different approach, you could define two fields:
text_en
text_de
Each of them would have language-specific text analysis configured. Then you can use the language autodetection UpdateRequestProcessor [1]. There are a lot of parameters with which you can tune the behaviour of this component [2].
[1] https://wiki.apache.org/solr/LanguageDetection
[2] https://cwiki.apache.org/confluence/display/solr/Detecting+Languages+During+Indexing
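For illustration, a hedged sketch of such a chain in solrconfig.xml, assuming the Tika-based detector that ships with Solr; the source field name and parameter values are made up, and [2] documents the full parameter list:

```xml
<!-- Detect the language at index time and map the incoming "text" field
     to "text_en" or "text_de". -->
<updateRequestProcessorChain name="langid">
  <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
    <str name="langid.fl">text</str>               <!-- field(s) to run detection on -->
    <str name="langid.langField">language_s</str>  <!-- stores the detected language code -->
    <str name="langid.whitelist">en,de</str>       <!-- only English or German are expected -->
    <bool name="langid.map">true</bool>            <!-- rename "text" to "text_en"/"text_de" -->
    <str name="langid.fallback">en</str>           <!-- used when detection is inconclusive -->
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```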