Is there an expression language to query FlatBuffers (using Java)?

I have a generated FlatBuffers object in Java. I need to query for a field at runtime based on user input. Is there a way to do that?

There is currently no way to dynamically query fields in Java; you have to know a field's name statically to retrieve its value. So the best you can do is something like if (userInput.equals("field")) return obj.field(); for each field you want to expose.
There is a way to find out what fields are in a schema dynamically: inspect a binary schema file (which can be generated with flatc --schema myschema.fbs). Sadly, there is no reflection helper code in Java yet that would let you use that information to read an actual field (that exists for C++ only so far).
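Since the dispatch has to be spelled out by hand, a lookup helper is about as far as you can get. A minimal sketch, using the Monster type from the FlatBuffers tutorial schema as a stand-in for your own generated class:

static Object fieldByName(Monster m, String fieldName) {
    switch (fieldName) {
        case "hp":   return m.hp();    // accessors generated by flatc
        case "name": return m.name();
        default:
            throw new IllegalArgumentException("unknown field: " + fieldName);
    }
}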

Related

Is there a way to reuse a regex pattern as both a patternProperties key and string type pattern in a json schema?

For example, in the GBFS project, top-level keys in the gbfs.json['properties']['data'] object are described as:
language: The language that will be used throughout the rest of the files. It MUST match the value in the system_information.json file.
This is enforced by a patternProperties definition in the gbfs.json schema. But as described by the explanation of the field, this property should match a string property with the same regex pattern in system_information.json.
Would there be a way to define this regex pattern once and use it both as a patternProperties key and string type pattern for the language field?
Not in JSON Schema syntax itself, but you could go up one level and generate your schema programmatically, say with a template that uses a placeholder variable for that regex. You could then use the template to regenerate the schema whenever it changes. For example, if your schema is normally kept in git, you could use a commit hook to update the regex in all the places it is used; or if you deploy your schema with Ansible, you can generate the file from a template there too.
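For illustration, a minimal generator along those lines in Java. The template file names and the ${LANGUAGE_PATTERN} placeholder are assumptions, and the regex is only an example value:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class SchemaGen {
    // Single source of truth for the shared regex (example value only)
    private static final String LANGUAGE_PATTERN = "^[a-z]{2,3}$";

    public static void main(String[] args) throws Exception {
        for (String name : List.of("gbfs.json.template", "system_information.json.template")) {
            String schema = Files.readString(Path.of(name))
                    .replace("${LANGUAGE_PATTERN}", LANGUAGE_PATTERN);
            // Writes gbfs.json / system_information.json next to each template
            Files.writeString(Path.of(name.replace(".template", "")), schema);
        }
    }
}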
As mentioned, not with JSON Schema itself.
The common approach to solving this problem is to use Jsonnet, which is a templating language for JSON.
https://jsonnet.org
Having not used it myself, I have no opinion on it beyond that I've seen it used effectively in large-scale projects in the course of researching JSON Schema use cases.

Standard deep nested data type?

I took the nice example clientPrintDescription.py and created an HTML form from the description, matching the input data types for the particular RFC function.
In SAP data types can contain data types which can contain data types, and I want to test my HTML form generator with a very nested data type.
Of course I could create my own custom data type, but it would be more reusable if I used an existing (RFC-capable) data type.
Which data type in SAP contains a lot of nested data types? And maybe a lot of different data types?
I cannot tell which structure is the best for your case, but you could filter the view DD03VV (now that is a meaningful name) using transaction se16h. If you GROUP BY the column TABNAME and filter with WHERE TABCLASS = 'INTTAB', the number of entries is an indicator of the size of the structure.
You could also aggregate and, in a next step, filter on the maximum DEPTH value (like SQL's HAVING, which AFAIK does not exist in SAP R/3). On my system the maximum depth is 12.
Edit: If you cannot access se16h, here's a workaround: call se37 and execute SE16N_START with I_HANA = 'X'. If you cannot access se37, use sa38 and call RSFUNCTIONBUILDER (the report behind se37).
PS: The queries on DD03VV are awfully slow, probably due to missing optimization for complex queries on ABAP dictionary views.
If I had to give only one DDIC structure, I would give this one:
FDT_TEST_DDIC_BIND_DEEP_S
It contains many elements of miscellaneous types, including nested ones, and it exists in any ABAP-based system (it belongs to the "BASIS" layer).
As it contains some data and object references in sub-levels which are invalid in RFC, you'll have to copy it and remove those reference fields.
There are also these structures (column "TABNAME") with fields of some interest:
TABNAME              FIELDNAME    Description
-------------------- ------------ ------------------------------------------------
SFW_BF               FROM_RELEASE elementary built-in type
SAUNIT_S_ALERT       WHEN         data element
SAUNIT_S_ALERT       HEADER       structure
SAUNIT_S_ALERT       TEXT_INFOS   table type
SAUNIT_PROG_INFO     .INCLUDE     include structure SAUNIT_S_TADIR_KEY
SKWF_IOFLD           .INCLU-FLD   include structure SKWF_IO
SWFEXPSTRU2          .INCLU--AP   append structure SWFEXPSTRU3
APPEND_BAPI0002_2_2  .APPEND_DU   recursive append structure (append of BAPI0002_2, the unique component of APPEND_BAPI0002_2_2)
SOADDRESS                         structure with nested structures on 2 levels
Some structures may not be valid in every ABAP release; the ones above exist in ABAP basis 7.02 and 7.52.
Try the function module RFC_METADATA_TEST...
It has some deeply nested parameters.
In SE80, under the Enterprise Service Browser, you will find examples of proxy structures that are complex DDIC structures with many different types.
Example: edo_tw_a0401request
Just browse around, you will find something you like.
I found STFC_STRUCTURE in the docs of test_datatypes of PyRFC.
It works fine for testing, since it is already available in my SAP system, so I don't need a dummy RFC for testing. Nice.
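For reference, the same smoke test from Java via SAP JCo might look roughly like this; the destination name is a placeholder and the RFCTEST field names should be verified on your system:

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoStructure;

public class StfcStructureSmokeTest {
    public static void main(String[] args) throws Exception {
        // "ABAP_AS" must exist as a configured JCo destination (ABAP_AS.jcoDestination)
        JCoDestination dest = JCoDestinationManager.getDestination("ABAP_AS");
        JCoFunction fn = dest.getRepository().getFunction("STFC_STRUCTURE");
        JCoStructure in = fn.getImportParameterList().getStructure("IMPORTSTRUCT");
        in.setValue("RFCINT1", 1); // RFCTEST field, check the name on your system
        fn.execute(dest);
        JCoStructure echo = fn.getExportParameterList().getStructure("ECHOSTRUCT");
        System.out.println(echo.getInt("RFCINT1")); // should echo the input back
    }
}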

Include Sub-Entities from relations when loading an Entity in Groovy Service

What I'm trying to achieve here is to load some fields from sub-entities.
For instance, let's suppose I want to load some features for the product list. In XML it's pretty clear:
<row-actions>
    <entity-find-one entity-name="mantle.product.feature.ProductFeature" value-field="brandList">
        <field-map field-name="productFeatureId" from="featureList.productFeatureId"/>
        <field-map field-name="productFeatureTypeEnumId" value="PftBrand"/>
    </entity-find-one>
</row-actions>
Is there a way to do something similar in Groovy, without iterating through the whole product list and adding the desired fields manually?
Also, can somebody give me a short example of the concrete use of sqlFind (http://www.moqui.org/javadoc/org/moqui/entity/EntityFacade.html)?
I tried to solve the issue I'm asking about with a join query, but I couldn't figure out what the SQL query is supposed to look like.
a. The 'entity-find-one' element queries on the primary key and returns a single map. You need to use the 'entity-find' element.
b. Yes, you can always drop down to Groovy using the script tag, e.g. just use ec.entity.find("mantle.product.feature.ProductFeature") or whatever you need in your Groovy script.
c. In Moqui, joined tables are handled by the 'view-entity' element, and you can predefine your own (place it in your 'entities' folder) or use the many existing ones provided by the framework. You don't need SQL.
EDIT - Sorry, you can also do it on the fly by using the EntityFind.makeEntityDynamicView() method.
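To make (b) and the EDIT concrete, a minimal sketch against the EntityFacade/EntityFind API from the javadoc linked in the question, written so the same lines work inside a Moqui Groovy script. The dynamic-view member entity and key map below are illustrative assumptions, so check them against your data model and the javadoc:

import java.util.Map;
import org.moqui.entity.EntityDynamicView;
import org.moqui.entity.EntityFind;
import org.moqui.entity.EntityList;

// "ec" is the ExecutionContext available in Moqui scripts.
// Simple find, equivalent to the entity-find XML element:
EntityList brandFeatures = ec.getEntity()
        .find("mantle.product.feature.ProductFeature")
        .condition("productFeatureTypeEnumId", "PftBrand")
        .list();

// On-the-fly join via EntityFind.makeEntityDynamicView():
EntityFind find = ec.getEntity().find("mantle.product.Product");
EntityDynamicView view = find.makeEntityDynamicView();
view.addMemberEntity("PROD", "mantle.product.Product", null, null, null);
view.addMemberEntity("PFA", "mantle.product.feature.ProductFeatureAppl",
        "PROD", true, Map.of("productId", "productId"));
view.addAliasAll("PROD", null);
view.addAlias("PFA", "productFeatureId");
EntityList productsWithFeatures = find.list();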
Hope that helps.

How do I use native Lucene Query Syntax?

I read that Lucene has an internal query language where one specifies field:value pairs and combines them using boolean operators.
I read all about it on their website and it works just fine in LUKE, I can do things like
field1:value1 AND field2:value2
and it will return seemingly correct results.
My problem is: how do I pass this whole Lucene query into the API? I've seen QueryParser, but I have to specify a field. Does this mean I still have to manually parse my input string (fields, values, parentheses, etc.), or is there a way to feed the whole thing in and let Lucene do its thing?
I'm using Lucene.NET, but since it's a method-by-method port of the original Java, any advice is appreciated.
Are you asking whether you need to force your user to enter the field? If so, the query parser has a default field. Here's a little more info. As long as you have a default field that will do the job, they don't need to specify fields.
If you're asking how to get a Query object from the String, you need the parse method. It understands fields, the default field, etc., as mentioned earlier. You just need to make sure that the query parser and the index builder use the same analysis.
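Putting that together, a minimal Java sketch with the classic QueryParser. The package names are from recent Lucene releases (in the 2.x era of this question the package was org.apache.lucene.queryParser, and Lucene.NET mirrors it class for class):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class ParseDemo {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer(); // must match the analyzer used at index time
        QueryParser parser = new QueryParser("field1", analyzer); // "field1" is the default field
        Query q = parser.parse("field1:value1 AND field2:value2"); // explicit fields override the default
        System.out.println(q); // prints the parsed query structure
    }
}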

Converting SQL Result Sets to XML

I am looking for a tool that can serialize and/or transform SQL result sets into XML. Dumbed-down XML generation from SQL result sets is simple and trivial, but that's not what I need.
The solution has to be database neutral and accept only regular SQL query results (no DB XML support used). A particular challenge for this tool is producing nested XML matching any schema from row-based results. Intermediate steps are too slow and wasteful; this needs to happen in one single step: no RS->object->XML, and preferably no RS->XML->XSLT->XML. It must support streaming, due to large result sets and big XML.
Anything out there for this?
With SQL Server you really should consider using the FOR XML construct in the query.
If you're using .NET, just use a DataAdapter to fill a DataSet. Once it's in a DataSet, just use its WriteXml() method. That breaks your DB->object->XML rule, but it's really how things are done. You might be able to work something out with a DataReader, but I doubt it.
Not that I know of. I would just roll my own. It's not that hard to do; maybe something like this:
#!/usr/bin/env jruby
require 'java'

java_import java.sql.DriverManager

# NOTE: the JDBC driver jar must be on the classpath, e.g. jruby -J-cp driver.jar ...
conn = DriverManager.get_connection(ARGV[0], ARGV[1], ARGV[2])
stmt = conn.create_statement
res = stmt.execute_query(ARGV[3])

puts "<result>"
meta = res.meta_data
while res.next
  puts "<row>"
  (1..meta.column_count).each do |n|
    column = meta.get_column_name(n)
    # a real version should XML-escape the value here
    puts "<#{column}>#{res.get_string(n)}</#{column}>"
  end
  puts "</row>"
end
puts "</result>"
Disclaimer: I just made all of that up, I'm not even bothering to pretend that it works. :-)
In .NET you can fill a DataSet from any source, and then it can write that out to disk for you as XML, with or without the schema. I can't say what performance for large sets would be like. Simple :)
Another option, depending on how many schemas you need to output, and/or how dynamic this solution is supposed to be, would be to actually write the XML directly from the SQL statement, as in the following simple example...
SELECT
    '<Record>' ||
    '<name>' || name || '</name>' ||
    '<address>' || address || '</address>' ||
    '</Record>'
FROM
    contacts
You would have to prepend and append the document element, but I think this example is easy enough to understand.
DbUnit (www.dbunit.org) does go from SQL to XML and vice versa; you might be able to modify it further for your needs.
Technically, converting a result set to an XML file is straightforward and doesn't need any tool, unless you have a requirement to convert the data structure to fit a specific export schema. In general the result set becomes the top-level element of the XML file, and you produce a number of record elements containing attributes, which effectively are the fields of a record.
When it comes to Java, for example, you just need an appropriate JDBC driver for interfacing with the DBMS of your choice, addressing the database-independence requirement (drivers are usually provided by the DBMS vendor), and a few lines of code to read a result set and print out an XML string per record, per field. Not a difficult task for an average Java developer, in my opinion.
Anyway, the more concrete a purpose you state, the more concrete an answer you get.
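For the Java route, a minimal one-step, streaming sketch with plain JDBC and StAX; the connection arguments are placeholders, and it assumes column names are valid XML element names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class ResultSetToXml {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2]);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(args[3])) {
            XMLStreamWriter w = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(System.out, "UTF-8");
            ResultSetMetaData meta = rs.getMetaData();
            w.writeStartDocument("UTF-8", "1.0");
            w.writeStartElement("result");
            while (rs.next()) { // rows stream through; never all in memory at once
                w.writeStartElement("row");
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    w.writeStartElement(meta.getColumnName(i));
                    String value = rs.getString(i);
                    if (value != null) w.writeCharacters(value); // StAX escapes &, <, >
                    w.writeEndElement();
                }
                w.writeEndElement();
            }
            w.writeEndElement();
            w.writeEndDocument();
            w.close();
        }
    }
}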
In Java, you may just fill an object (like an entity bean) with the data and then use XMLEncoder to get it to XML. From there you may use XSLT for further conversion, or XMLDecoder to bring it back to an object.
Greetz, GHad
PS: See http://ghads.wordpress.com/2008/09/16/java-to-xml-to-java/ for an example of the object-to-XML part... From DB to object, several more ways are possible: JDBC, Groovy DataSets, or GORM. Apache Commons BeanUtils may help to fill up JavaBeans via reflection-like methods.
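A minimal sketch of the XMLEncoder step; Contact is a hypothetical bean that you would first fill from the result set:

import java.beans.XMLEncoder;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;

public class BeanToXml {
    public static class Contact { // hypothetical bean, filled from the ResultSet in real code
        private String name = "Ada";
        public String getName() { return name; }
        public void setName(String n) { this.name = n; }
    }

    public static void main(String[] args) throws Exception {
        try (XMLEncoder enc = new XMLEncoder(
                new BufferedOutputStream(new FileOutputStream("contact.xml")))) {
            enc.writeObject(new Contact()); // writes the bean graph as XML
        }
    }
}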
I created a solution to this problem by using the equivalent of a mail merge using the resultset as the source, and a template through which it was merged to produce the desired XML.
The template was standard XML, with a Header element, a Footer element and a Body element. Using a CDATA block in the Body element allowed me to include a complete XML structure that acted as the template for each row. In order to include fields from the resultset in the template, I used markers that look like this: <[FieldName]>. The template was then pre-parsed to isolate the markers, such that in operation the template requests each of the fields from the resultset as the Body is being produced.
The Header and Footer elements are output only once at the beginning and end of the output set. The body could be any XML or text structure desired. In your case, it sounds like you might have several templates, one for each of your desired schemas.
All of the above was encapsulated in a Template class, such that after loading the Template, I merely called merge() on the template passing the resultset in as a parameter.
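The merge() internals aren't shown in the answer; here is one guess at the per-row marker substitution, assuming the <[FieldName]> marker format described above (Template and merge() are that answer's names, not a real library):

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RowMerger {
    private static final Pattern MARKER = Pattern.compile("<\\[(\\w+)\\]>");

    // Replaces each <[FieldName]> marker in the Body template with the
    // corresponding column of the result set's current row.
    static String mergeRow(String bodyTemplate, ResultSet rs) throws SQLException {
        Matcher m = MARKER.matcher(bodyTemplate);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(out, Matcher.quoteReplacement(rs.getString(m.group(1))));
        }
        m.appendTail(out);
        return out.toString();
    }
}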