AX2012 SharedTypes schema missing from imported schema

I've received a schema from a customer, which contains a reference to "http://schemas.microsoft.com/dynamics/2008/01/sharedtypes" with a schema named "SharedTypes.xsd".
I am new to AX, so I'm not sure how to go about finding this schema. I looked around online and found this: https://technet.microsoft.com/en-us/library/hh769362.aspx, but I am not able to view the schema in BizTalk. Is it possible to find this schema online?

You should also receive this schema from your customer, since it contains the definitions of the complex data types. You may find one online, but it will miss every customization made in AX for your customer's implementation, as well as all custom data types. Knowing AX, there is always customization work done; I have yet to encounter an AX installation that uses the plain standard.
Ask your customer for the file; without it, your integration will not work.
Be aware, however, that this file is subject to a lot of changes while AX development is ongoing. If the AX implementation is just starting out, try to have a version number in there somewhere. You'll thank me later.

Related

cTAKES indication that category > 0 sources are about to be used?

In appendix 1 of the UMLS license agreement, there is a listing of all sources within the current version of the UMLS Metathesaurus with an indication of any additional restrictions and notices that apply. Loosely speaking, it seems like you can generally have your way with the Metathesaurus data sources that fall within category-0 of the license, but things get more restrictive at categories above that.
For example (likely a bad example, as I am not a lawyer), looking at section 12.2 of the main license:
LICENSEE is prohibited from using the vocabulary source in operational
applications that create records or information containing data from
the vocabulary source. Use for data creation research or product
development is allowed.
My question then is: (since cTAKES already has my UMLS credentials) is there any way to tell when a certain action in cTAKES is going to instruct it to use/access data from the Metathesaurus that have a category > 0 (e.g. some popup warning or header comment in the binary files)? Thanks
** The reason I'm interested: suppose that a certain cTAKES process uses a category-2 data source to do something on some input that populates data into some XMI output (I don't know much about cTAKES' full implementation, but for the sake of argument let's assume this is true), and that output gets post-processed and stored as some report for an organization. It would seem that the organization has violated the category-2 restriction inadvertently (since they were never warned about the underlying data being used to generate the outputs). I may be grossly misunderstanding something here, so please let me know if this is the case.

Liquibase load data in a format other than CSV

With the load data option that Liquibase provides, one can specify seed data in CSV format. Is there a way I can provide, say, a JSON or XML file with data that Liquibase would understand?
The use case is that we are trying to put in some sample data which is hierarchical, e.g. a Category - Subcategory relation, which would require putting in the parent id for all related categories. Ideally there would be a way to avoid including the ids in the seed data via, say, JSON:
{
"MainCat1": ["SubCat11", "SubCat12"],
"MainCat2": ["SubCat21", "SubCat22"]
}
Very likely this is not supported (I couldn't make Google help me), but is there a way to write a plugin or something that does this? A pointer to a guide (if any) would help.
NOTE: This is not about specifying the change log in that format.
This is not currently supported, and supporting it robustly would be pretty difficult. The main difficulty lies in the fact that Liquibase is designed to be database-platform agnostic, combined with the design goal of being able to generate the SQL required to do an operation without actually doing the operation live.
Inserting data like you want without knowing the keys and just generating SQL that could be run later is going to be very difficult, perhaps even impossible. I would suggest approaching Nathan, who is the main developer for Liquibase, more directly. The best way to do that might be through the JIRA bug database for Liquibase.
If you want to have a crack at implementing it, you could start by looking at the code for the LoadDataChange class (source on GitHub), which is where the CSV support currently lives.
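For what it's worth, here is a very rough sketch (not a tested implementation) of what such an extension could look like as a CustomSqlChange, which is a simpler extension point than subclassing LoadDataChange. The table and column names (category, id, name, parent_id) and the use of Jackson for JSON parsing are assumptions made purely to illustrate generating the parent ids instead of shipping them in the seed file:

// Hedged sketch only: a hypothetical CustomSqlChange that reads the hierarchical JSON
// shown in the question and emits plain INSERTs, assigning the parent ids itself.
// Table/column names (category, id, name, parent_id) and Jackson as the JSON parser
// are assumptions, not anything Liquibase provides out of the box.
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import liquibase.change.custom.CustomSqlChange;
import liquibase.database.Database;
import liquibase.exception.CustomChangeException;
import liquibase.exception.SetupException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;
import liquibase.statement.SqlStatement;
import liquibase.statement.core.InsertStatement;

public class JsonCategorySeedChange implements CustomSqlChange {

    private String fileName;               // injected from a <param name="fileName" .../>
    private ResourceAccessor resourceAccessor;

    public void setFileName(String fileName) { this.fileName = fileName; }

    @Override
    public SqlStatement[] generateStatements(Database database) throws CustomChangeException {
        try {
            // Reads straight from the file system to keep the sketch short; a real
            // extension would resolve the path through the ResourceAccessor instead.
            Map<String, List<String>> seed = new ObjectMapper().readValue(
                    new File(fileName), new TypeReference<Map<String, List<String>>>() {});

            List<SqlStatement> statements = new ArrayList<>();
            long nextId = 1;  // naive id generation, invented at changeset time
            for (Map.Entry<String, List<String>> entry : seed.entrySet()) {
                long parentId = nextId++;
                InsertStatement parent = new InsertStatement(null, null, "category");
                parent.addColumnValue("id", parentId);
                parent.addColumnValue("name", entry.getKey());
                statements.add(parent);
                for (String sub : entry.getValue()) {
                    InsertStatement child = new InsertStatement(null, null, "category");
                    child.addColumnValue("id", nextId++);
                    child.addColumnValue("name", sub);
                    child.addColumnValue("parent_id", parentId);
                    statements.add(child);
                }
            }
            return statements.toArray(new SqlStatement[0]);
        } catch (Exception e) {
            throw new CustomChangeException("Could not load seed data from " + fileName, e);
        }
    }

    @Override
    public String getConfirmationMessage() { return "Loaded category seed data from " + fileName; }

    @Override
    public void setUp() throws SetupException { }

    @Override
    public void setFileOpener(ResourceAccessor resourceAccessor) { this.resourceAccessor = resourceAccessor; }

    @Override
    public ValidationErrors validate(Database database) { return new ValidationErrors(); }
}

You would then wire it up in the changelog with a customChange element pointing at that class, plus a fileName param. This sidesteps the key-discovery problem mentioned above only because the ids are invented when the statements are generated; anything that needs to look up keys in the live database runs straight into the offline-SQL design constraint.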

Migrating RMS to RDB

We're approaching the migration of legacy OpenVMS RMS files into a relational database (both MS SQL 2012 and Oracle 10g are available).
I wonder if there are:
Tools to retrieve schema of indexed files
Tools to parse indexed files
Tools to deal with custom RMS data formats (zoned decimals etc)
as a bundle/API/Library
Perhaps I should change the approach?
There are several tools available, notably through ODBC vendors (I work for one: Attunity).
1 >> Tools to retrieve schema of indexed files
Please clarify: are you looking for just the record/column layout and indexes within the files, or also for relationships between files?
1a) How are the files currently being used? Cobol, Basic, Fortran programs? Datatrieve?
They will be using some data definition method, so you want a tool which can exploit that.
Connx and Attunity Connect can 'import' CDD definitions, BASIC MAP files, and COBOL copybooks. Variants are typically covered as well. I have written many a (Perl/awk) script to convert special definitions to XML; a rough sketch of that kind of converter follows after item 1.
1b ) Analyze/RMS, or a program calling RMS XABs, can get the available index information. Attunity Connect will know how to map those onto the fields from 1a).
1c ) There is no formal, stored relationship between (indexed) files on OpenVMS. That's all in the program logic. However, some modestly smart Perl/awk/DCL script can often generate a table of likely foreign/primary keys by looking at field-name and datatype matches.
How many files / layouts / gigabytes are we talking about?
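To make 1a) a bit more concrete, here is the flavour of throwaway converter I mean, reduced to the bare minimum (and written in Java rather than Perl/awk, purely for illustration; all names here are hypothetical). It only understands elementary PIC X(n)/9(n) items; real copybooks with OCCURS, REDEFINES, COMP-3 and nested levels need considerably more work:

// Hypothetical sketch: turn simple COBOL copybook PIC clauses into a rough column layout.
// Usage: java CopybookToLayout customer.cpy
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CopybookToLayout {

    // Matches lines like "05  CUST-NAME   PIC X(30)."  or  "05  CUST-ID  PIC 9(6)."
    private static final Pattern FIELD = Pattern.compile(
            "^\\s*\\d{2}\\s+([A-Z0-9-]+)\\s+PIC\\s+([X9])\\((\\d+)\\)\\s*\\.\\s*$",
            Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher m = FIELD.matcher(line);
            if (!m.matches()) {
                continue; // skip group items, comments, and anything fancier
            }
            String column = m.group(1).replace('-', '_');
            String type = m.group(2).equalsIgnoreCase("X")
                    ? "VARCHAR(" + m.group(3) + ")"
                    : "NUMERIC(" + m.group(3) + ")";
            System.out.println(column + " " + type);
        }
    }
}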
2 >> Tools to parse indexed files
Please clarify. Once the structure is known (question 1), the parsing is done by reading with that structure, right? You never, ever want to understand the indexed file internals. Just tell RMS to fetch records.
3 >> Tools to deal with custom RMS data formats (zoned decimals etc) as a bundle/API/Library
Again, please clarify. Once the structure is known, just use the 'right' tool to read using that structure, and surely it will honor the detailed data definitions.
(I know it is quite simple to write one yourself, just thought there would be something in the industry)
Famous last words... 'quite simple'. Entire companies have been built, and thrive, doing just that for general cases. I admit that for specific cases it can be relatively straightforward, but 'the devil is in the details'.
In the Attunity Connect case we have a UDT (User Defined data Type) to handle the 'odd' cases, often involving DATES. Dates in integers, in strings, or as units since xxx are all available out of the box, but, for example, some use -1 to mean 'some high date', which needs some help to be stored in a DB.
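Since the question specifically mentions zoned decimals, here is a minimal, hedged sketch of why 'just write one yourself' hides details. It assumes the DEC/VMS-style ASCII zoned (trailing numeric) convention, where a negative value carries its sign in the high nibble of the last digit byte; EBCDIC overpunch and other variants behave differently, which is exactly where the off-the-shelf tools earn their keep:

// Minimal sketch of decoding a DEC/VMS-style zoned ("trailing numeric") decimal
// field into a BigDecimal. Assumes ASCII digits with the sign folded into the
// high nibble of the last byte (0x3n positive, 0x7n negative) and an implied
// decimal scale supplied by the caller; other conventions differ.
import java.math.BigDecimal;
import java.math.BigInteger;

public class ZonedDecimal {

    public static BigDecimal decode(byte[] field, int scale) {
        StringBuilder digits = new StringBuilder(field.length);
        boolean negative = false;
        for (int i = 0; i < field.length; i++) {
            int b = field[i] & 0xFF;
            int digit = b & 0x0F;                 // low nibble holds the digit
            if (i == field.length - 1) {
                negative = (b & 0xF0) == 0x70;    // zone of the last byte carries the sign
            }
            if (digit > 9) {
                throw new IllegalArgumentException("not a zoned digit: 0x" + Integer.toHexString(b));
            }
            digits.append((char) ('0' + digit));
        }
        BigInteger unscaled = new BigInteger(digits.toString());
        if (negative) {
            unscaled = unscaled.negate();
        }
        return new BigDecimal(unscaled, scale);
    }

    public static void main(String[] args) {
        // bytes '1','2','3','4',0x79 -> digits 12349, negative zone -> -123.49 at scale 2
        byte[] raw = {'1', '2', '3', '4', 0x79};
        System.out.println(decode(raw, 2)); // prints -123.49
    }
}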
All the databases have some bulk load tool (BCP, SQL*Loader).
As long as you can deliver data conforming to what those expect (tabular, comma-separated, quoted-or-not, escapes-or-not) you should be in good shape.
The EGH tool Vselect may be a handy, and high-performance, way to bulk-read indexed files, filter and format some, and spit out sequential files for the DB loaders. It can read RMS indexed files faster than RMS can! (It has its own metadata language, though!)
Attunity offers full access and replication services.
They include CDC (change data capture) to not only load the data, but also keep it up to date in near real time. That's useful for 'evolution' versus 'revolution'.
Check out Attunity 'Replicate'. Once you have a data dictionary, just point to the tables desired (include/exclude filters), point to a target DB, and click to replicate. Of course there are options for (global or per-table) transformations (like turning AREA-CODE+EXCHANGE+NUMBER into a single phone number, or adding a modified-date column).
Will this be a single big switch conversion, or is there desire to migrate the data and keep the old systems alive for days, months, years perhaps, all along keeping the data in close sync?
Hope this helps some,
Hein van den Heuvel.
OP: Perhaps I should change the approach? Probably.
You might consider finding data migration vendors, some of which likely have off-the-shelf solutions, if not as a COTS tool then more likely packaged as a service (I don't think this is a big market).
What this won't help you with is what I think of as a much bigger problem with the application code: who is going to change all the code that is making RMS calls into the corresponding code that makes relational DB calls? How will the entity ("Joe Programmer", or some tool) know where the data migrated to, so that he can write the correct call? What are you going to do about the fact that the data representation is likely to change?
Ideally you'd like an automated migration tool that will move the data itself (and therefore knows the data layouts and representation changes) and will make the corresponding code changes. You can look for these kinds of vendors, too.

Dynamic Methods/Rules

I need to create a product configurator, but according to the requirements, literally every product has a set of rules to validate it. The rules refer to the quantity of underlying components the configuration is made of.
At the moment the way this is being handled is just storing the "formula" string in the DB; since the UI is in Excel, when you call up a configuration it comes with the rules as well, and you just append an "=" in front of them. Thus, the final product keeps working when quantities or components change.
So I've seen a few similar types of questions being asked, and the answer always seemed to be UJS; however, that is stored in the app itself, correct? The challenge for me is to create a way to replicate these rules depending on the product, and different products are being added, changed, etc. all the time, so keeping the rules in the app and redeploying each time you want to change something seems a bit extreme!
Can anyone think of a good solution? Help!
These are business rules so they'll need to be stored somewhere server-side (they could be mirrored in client-side code but storing them there exclusively is highly inadvisable ;). They could be represented as code, or configuration files, or in a database.
As you suggested, modeling frequently changing rules in source code makes your app brittle. I think the best storage option is your database.
If you need to implement a client-side behavior on a per-product basis, you could use AJAX to send a product ID out to a service which returns a configuration "package" to your (dumb) client.
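As a hedged sketch (not a prescription) of that last idea, here is roughly what such a service could look like, using only the JDK's built-in HTTP server. The endpoint name, the JSON shape, and the in-memory map standing in for a product_rules table are all made up for illustration; the client would fetch this package via AJAX and apply/evaluate the formulas on its side:

// Hypothetical sketch: serve the rule "package" for a product id over HTTP.
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

public class ProductRulesService {

    // Stand-in for: SELECT formula FROM product_rules WHERE product_id = ?
    private static final Map<String, List<String>> RULES_BY_PRODUCT = Map.of(
            "PROD-1", List.of("qty_a + qty_b <= 10", "qty_c >= 1"),
            "PROD-2", List.of("qty_a * 2 = qty_b"));

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/rules", ProductRulesService::handle);
        server.start(); // e.g. GET /rules?product=PROD-1
    }

    private static void handle(HttpExchange exchange) throws IOException {
        String query = exchange.getRequestURI().getQuery();      // e.g. "product=PROD-1"
        String productId = query == null ? "" : query.replaceFirst("^product=", "");
        List<String> rules = RULES_BY_PRODUCT.getOrDefault(productId, List.of());

        // Build a tiny JSON array of rule strings by hand to stay dependency-free.
        StringBuilder json = new StringBuilder("[");
        for (int i = 0; i < rules.size(); i++) {
            if (i > 0) json.append(",");
            json.append('"').append(rules.get(i).replace("\"", "\\\"")).append('"');
        }
        json.append("]");

        byte[] body = json.toString().getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "application/json");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(body);
        }
    }
}

The point is simply that the rules stay in one authoritative, server-side store, and the client only ever sees the package for the product it asked about.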
Would that work? Sounds good to me, anyway. ;)

How to decode SAP text from STXL.CLUSTD?

I know! The "proper" way to read STXL.CLUSTD is through an SAP ABAP function. But I'm sorry, we are suffering badly from performance problems. We have already made our decision to go directly to the database (Oracle), and we don't have any plan to revert that decision yet, since everything has gone so much better so far.
However, we've come across this issue: the text in the STXL.CLUSTD field is stored in an incomprehensible format. We cannot find any information about its encoding format via Google. Can anybody hint at how to decode text from STXL.CLUSTD?
Thanks
Short version: You don't. Use the function module READ_TEXT.
Long version: You're looking at a so-called cluster table. See http://help.sap.com/saphelp_47x200/helpdata/en/fc/eb3bf8358411d1829f0000e829fbfe/frameset.htm for the details. The data you see is an internal representation of the text, somehow related to the way the ABAP kernel handles the data internally. This data does not make any sense without the metadata. If you change the original structure in an incompatible way, the data can no longer be read. Oh, and did I mention that the data does not contain a reference to the metadata? When reading the contents of these tables, even in ABAP, you need to know the original source data structure, otherwise you're doomed. Without the metadata and the knowledge of how the kernel handles these data types at runtime, you'll have a hard time deciphering the contents.
Personal opinion: Direct access to the database below the SAP R/3 system is a really bad idea since this not only bypasses all safety measures, but it also makes you very vulnerable to all structural changes of the database. The only real reason for accessing the database directly is not lack of performance, but lack of (ABAP) knowledge, and that should be curable :-)
You can definitely read clusters and pools without running any ABAP code, or invoking RFCs or BAPIs, etc. It is a very good approach, highly performant, and easy to use.
I don't like people flogging their products on Stack Overflow, but the information that you must use ABAP to access SAP data has been outdated for over 7 years now.
Thanks,
Bill MacLean
I just noticed this thread and I work for Simplement. Snow_FFFF is correct (BTW, that user is not me, and AFAIK is not anyone in our company). The Data Liberator product has been de-clustering and de-pooling tables (and many other things) for our customers since 2009.