I am struggling to understand how IDocs are used in SAP. I understand what an IDoc is, but I have a few questions about how IDocs are used by SAP (I can't find such details on the SAP blog):
Are all the tables in the SAP database stored as IDocs? For example, if I create a table employees with 4 records, will the entire table be stored in the body of an IDoc?
Is it possible to make a query in the database and receive the information as an IDoc?
In connection with the above question: in SAP, are there predefined types of IDocs? Or can we create various types of IDocs (for example, I create an IDoc, put some text in its body, and send it to an SAP system)?
An IDoc has many segments; how should I decide which of them I should use? (like the guy from the video above)
PS: I am new to SAP and all these things are new to me.
Thank you for your patience,
If you are still looking for clarification regarding IDocs, I recommend the SAP Learning Hub ebook on ALE (Application Link Enabling). It offers a technical and conceptual summary of IDoc technology and its use in SAP.
To answer a few of your questions, yes, there are predefined IDoc types. Together, the basic type and message type characterise the structure and data to be sent in the IDoc.
The IDoc itself consists of:
Control Record: this is the metadata, where information concerning the IDoc type and details of the sending and receiving systems is stored.
Data Records: this is where the information (stored in segments) that you are sending can be found.
Status Records: contains information concerning whether the IDoc transfer was successful. If the IDoc fails, you can investigate the potential reason by looking at the status codes here.
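To make that three-part structure concrete, here is a minimal Python sketch. The field names are illustrative simplifications, not the actual SAP field catalogue:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ControlRecord:
        # Metadata: IDoc type plus sender/receiver details
        basic_type: str      # e.g. "MATMAS05"
        message_type: str    # e.g. "MATMAS"
        sender_port: str
        receiver_port: str

    @dataclass
    class DataRecord:
        segment_name: str    # which segment definition this record follows
        segment_data: str    # the application data, as a fixed-width string

    @dataclass
    class StatusRecord:
        status_code: str     # e.g. "53" = application document posted
        description: str

    @dataclass
    class IDoc:
        control: ControlRecord
        data: List[DataRecord] = field(default_factory=list)
        status: List[StatusRecord] = field(default_factory=list)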
The segments that you use will be defined by the configuration and business requirements of the SAP system you are working in. You can append segments and create customer-defined types, but since there are so many predefined IDocs, something you can use will likely already exist.
IDocs are intended to be used in integration scenarios. They can be used to send data out of an SAP system or to receive data into an SAP system. IDocs are designed for data exchange between, rather than within, systems. They can be sent to other SAP systems or to non-SAP systems, depending on business requirements. This is their main purpose in SAP.
If you are interested in the process of configuring an ALE scenario for a specific business process, this blog post provides a thorough example:
https://blogs.sap.com/2012/10/08/ale-distribution-of-cost-center-and-gl-account-master-data-across-sap-systems-through-idocs/
Are all the tables in the SAP database stored as IDocs? For example, if I create a table employees with 4 records, will the entire table be stored in the body of an IDoc?
No, tables are tables. IDocs are stored in their own tables. IDocs represent data that is to be posted or that was sent, depending on the direction.
Is it possible to make a query in the database and receive the information as an IDoc?
Yes, you can read the IDoc header and segments.
If you don't know how IDocs are stored (headers, segments, status records), then doing so is not a great idea. Research IDoc storage first.
Using a function module might be a better idea ;)
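If you do decide to read them directly, here is a hedged sketch using the pyrfc library and the standard RFC_READ_TABLE function module. EDIDC is the control record table (EDID4 holds data records, EDIDS status records); the connection parameters are placeholders for your own system:

    from pyrfc import Connection

    conn = Connection(ashost="my.sap.host", sysnr="00", client="100",
                      user="MYUSER", passwd="secret")  # placeholder credentials

    result = conn.call(
        "RFC_READ_TABLE",
        QUERY_TABLE="EDIDC",   # IDoc control records
        DELIMITER="|",
        FIELDS=[{"FIELDNAME": f} for f in ("DOCNUM", "MESTYP", "IDOCTP", "STATUS")],
        OPTIONS=[{"TEXT": "CREDAT GE '20170101'"}],  # only recent IDocs
    )

    for row in result["DATA"]:
        docnum, mestyp, idoctp, status = (v.strip() for v in row["WA"].split("|"))
        print(docnum, mestyp, idoctp, status)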
In connection with the above question: in SAP, are there predefined types of IDocs?
Yes, SAP supplies many standard IDocs.
Or can we create various types of IDocs (for example, I create an IDoc, put some text in its body, and send it to an SAP system)?
You can create your own IDocs, and you can add segments to existing IDocs. There is documentation on how to do this.
An IDoc has many segments; how should I decide which of them I should use? (like the guy from the video above)
Use the segments that have the data you need. Ask the person planning to use the IDocs for help, and do some research.
After reading your questions, I am afraid that you might not have understood what an IDoc is. Maybe you will find this little crash course useful for getting an overview:
https://www.guru99.com/all-about-idocdefinition-architecture-implementation.html
Related
The other day I was looking at the SOGo SQL tables and saw that the records are stored as vCard data instead of in a proper table with different columns like surname, phone number, etc.
Though there is a table called sogo_quick_contacts with the schema I was expecting, not all the columns are there, only some basic ones.
I'm wondering why it is that way. Is it better to query a record with the whole vCard data and extract the information I require? Wouldn't it be better (faster) to apply a SELECT query indicating the columns I'm looking for, if they were available?
CardDAV seems to provide this vCard data; is it more suitable for contact lookups, and why?
What if I want to just list the names and birthdays? Wouldn't extracting all the vCards be much slower than using an SQL query where I have everything split up into different columns?
There are a lot of things which played a role in the way the ScalableOGo database schema is designed. Which BTW was designed by me ;-)
I think the core thing here is that it is designed specifically for two types of clients: a) native CardDAV clients (macOS/iOS contacts, Thunderbird) and b) the ScalableOGo web interface.
Native clients essentially never do the type of query you are asking about. They always sync a full vCard to their local cache. So there has to be a fast way to store and retrieve a full vCard; it is the most common operation against the server.
Web clients in 2003 (I suppose that was around the time I wrote the original web client) didn't yet have the capacity to store full objects locally and had to do what you are asking for: query just the fields the web client needs to display on a respective page.
This is what the 'quick' tables are for. They contain the columns the web client needs to display overviews and such. It is essentially an app-server-provided index over the vCard content.
This should be the main answer to your question.
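To illustrate the two access paths, here is a small sketch using sqlite3 and the third-party vobject vCard parser as stand-ins; the quick-table columns shown are illustrative and the real SOGo schema may differ:

    import sqlite3
    import vobject  # pip install vobject

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sogo_quick_contacts (c_name TEXT, c_sn TEXT, c_mail TEXT)")
    db.execute("CREATE TABLE sogo_contacts (c_name TEXT, c_content TEXT)")  # full vCard blob

    VCARD = ("BEGIN:VCARD\nVERSION:3.0\nFN:Ada Lovelace\n"
             "N:Lovelace;Ada;;;\nEMAIL:ada@example.org\nEND:VCARD\n")
    db.execute("INSERT INTO sogo_contacts VALUES ('ada.vcf', ?)", (VCARD,))
    db.execute("INSERT INTO sogo_quick_contacts VALUES ('ada.vcf', 'Lovelace', 'ada@example.org')")

    # Fast path: the app-server-maintained index over the vCard content.
    print(db.execute("SELECT c_sn, c_mail FROM sogo_quick_contacts").fetchall())

    # Slow path: fetch every blob and parse it just to read two fields.
    for name, content in db.execute("SELECT c_name, c_content FROM sogo_contacts"):
        card = vobject.readOne(content)
        print(card.n.value.family, card.email.value)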
There are other reasons too, in no particular order:
a vCard is quite complex; converting it to a proper SQL schema / normalising it is quite compute-intensive (it was at the time, but this is still relevant, since the scale of systems has grown 100-fold over the last 15 years; hence OpenGroupware.org vs ScalableOGo). A BLOB just needs to be streamed to disk.
a CardDAV server is supposed to store a full vCard as-is, byte-by-byte, so that clients can do ETag-protected requests, and to store custom fields (all clients use their own X- tags for client-specific fields)
the quick tables are also set up so that they can be built asynchronously, though I think that feature never made it into SOGo. If a client quickly loads 10000 vCards into the server (e.g. just dragging the vCards into the server using Finder), the server can batch-update the quick table in the background; the vCard-to-DB conversion doesn't have to happen in real time (see the sketch after this list).
(notably native clients often have a similar 'quick' table setup locally.)
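Here is the sketch referred to above: a minimal illustration of deferring the vCard-to-columns conversion to a background worker, with in-memory dictionaries and the vobject parser standing in for the real server machinery:

    import queue
    import threading
    import vobject  # pip install vobject

    blob_store = {}    # full vCards, stored byte-for-byte
    quick_table = {}   # the derived 'quick' index
    pending = queue.Queue()

    def accept_vcard(name, content):
        # Real-time path: just stream the blob to storage, no parsing.
        blob_store[name] = content
        pending.put(name)  # defer the expensive vCard -> columns conversion

    def quick_table_worker():
        # Background path: batch-build the quick index from stored blobs.
        while True:
            name = pending.get()
            card = vobject.readOne(blob_store[name])
            email = getattr(card, "email", None)
            quick_table[name] = (card.fn.value, email.value if email else None)
            pending.task_done()

    threading.Thread(target=quick_table_worker, daemon=True).start()
    accept_vcard("ada.vcf", "BEGIN:VCARD\nVERSION:3.0\nFN:Ada Lovelace\n"
                            "N:Lovelace;Ada;;;\nEMAIL:ada@example.org\nEND:VCARD\n")
    pending.join()
    print(quick_table)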
Hope this helps. Maybe one would design the thing a little differently in 2017, though I think the basic ideas are still sound ;-)
I am quite new in the HL7 field and not a developer, so sorry if my question might seem to be too obvious.
We want to develop an app for a hospital which visualises performance and patient-flow data by aggregating data from other hospital applications. Our app will visualise both realtime data and historic data. During talks with the head of IT I got confused; he explained I need to:
Develop an HL7 listener like Mirth which can receive messages from other applications that communicate via HL7 2.x standards, to catch realtime data, and after this arrange to migrate historic data from the other applications via SQL queries. Sounds pretty logical, though I'm not sure he's an expert, since he had no idea what an API was and knew nothing about FHIR.
My questions are:
1 What triggers an application to send an HL7 2.x message to other applications when, for instance, someone changes the status of a patient? Is it programmed to automatically broadcast a message with each change to a record? So, assuming all applications do this as standard, do I just need a listener like Mirth to catch those messages and migrate them into my own database?
2 Can't I use the HL7 2.x standard to pull info out of a database via a query? Meaning, can it be used for two-way communication: I send a query, and the application sends me the data in an HL7 message? Meaning I could also use it to pull historic data from another database?
3 What difference would use of the FHIR standard make in this situation? I believe it can definitely be used to pull information from another database. But would it in fact make a difference compared with the tactic the tech guy is advising, which is migrating historic data to my own database and then just catching new changes by receiving HL7 2.x messages?
4 Would it be advisable to use a FHIR RESTful API to pull/receive info from applications which still use the HL7 2.x standard, for both historic and realtime changes? Would this be a faster way of integration, or is it better to use the old-fashioned way the tech guy advises?
Very keen to know more about this, since I want to organise a strategy which is future proof and won't cost months of integration time every time we migrate to a new hospital.
Thanks for your help guys!
1 Depends on the application. Most only send data, and it's configurable when and why.
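When they do send, a listener like Mirth just catches the framed messages off TCP. Here is a minimal sketch of an MLLP receiver (the framing HL7 v2 engines handle for you); the port and the third-party python-hl7 package are assumptions for illustration:

    import socket
    import hl7  # pip install hl7

    START, END = b"\x0b", b"\x1c\x0d"  # MLLP frame delimiters

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 2575))  # 2575 is the conventional HL7 port
    server.listen(1)

    while True:
        conn, _ = server.accept()
        buffer = b""
        while chunk := conn.recv(4096):
            buffer += chunk
            while END in buffer:
                frame, buffer = buffer.split(END, 1)
                message = hl7.parse(frame.lstrip(START).decode("utf-8"))
                print("received:", message.segment("MSH")[9])  # e.g. ADT^A01
                # ...store into your own database here, then acknowledge:
                conn.sendall(START + str(message.create_ack()).encode("utf-8") + END)
        conn.close()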
2 No, you use HL7 v2 to pull data out of an application, not a database - if, that is, the application supports it. Many (most?) don't. And you can only do what the application allows.
3 FHIR would be a lot easier to use, but it's still settling, and you'll have trouble finding applications that offer a FHIR interface this year. You'll have to talk to potential customers to find out whether it's possible. BTW, FHIR can do what v2 can in this regard - both pull and push.
4 It's always advisable to use FHIR - if you can. Mostly, though, you'll have to use v2, because that's what's on offer.
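For contrast with v2, pulling data over FHIR's RESTful API is an ordinary HTTP search. A sketch against the public HAPI test server; a real hospital endpoint would differ and would require authentication:

    import requests

    BASE = "https://hapi.fhir.org/baseR4"  # public test server

    # Search for patients born since 1990, ten per page:
    resp = requests.get(f"{BASE}/Patient",
                        params={"birthdate": "ge1990-01-01", "_count": 10})
    bundle = resp.json()

    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        name = patient.get("name", [{}])[0]
        print(patient["id"], name.get("family"), patient.get("birthDate"))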
We are working on a custom project management application on top of the Moqui framework. Our requirement is that we need to inform the developers associated with a project, through email, of any changes to a ticket.
Currently we are using the WorkEffortParty entity to store all parties associated with the project and the PartyContactMech entity to store their email addresses. Here we need to iterate through WorkEffortParty and PartyContactMech every time to fetch all the email addresses to which we need to send emails for changes in tickets.
To avoid these iterations, we are now thinking of providing a feature to add comma-separated email addresses at the project level. The project admin can add the email addresses of associated parties, or a mailing list address, to which ticket-change notifications should be sent.
For this requirement, we studied the data model but didn't find the right place to store this information. Do we need to extend some entity for this, or is there a best practice? This requirement is very useful in any project management application. We appreciate any help on this data modeling problem.
The best practice is to use existing data model elements as they are available. Having a normalized data model involves more work in querying data, but also more flexibility in addressing a wide variety of requirements without changes to the data structures.
In this case, with a joined query you can get a list of email addresses in a single query based on the project's workEffortId. If you are dealing with massive data and message volumes there are better solutions than denormalizing source data, but I doubt that's the case... unless you're dealing with more than thousands of projects and millions of messages per day, the basic query-and-iterate approach will work just fine.
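As a sketch of what that single joined query amounts to in plain SQL: the table and column names below are derived from the Mantle entity names in the question plus the infoString field on ContactMech, so verify them against your actual schema; in Moqui itself you would normally express this as a view-entity rather than raw SQL:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE WORK_EFFORT_PARTY (WORK_EFFORT_ID TEXT, PARTY_ID TEXT);
        CREATE TABLE PARTY_CONTACT_MECH (PARTY_ID TEXT, CONTACT_MECH_ID TEXT,
                                         CONTACT_MECH_PURPOSE_ID TEXT);
        CREATE TABLE CONTACT_MECH (CONTACT_MECH_ID TEXT, INFO_STRING TEXT);
        INSERT INTO WORK_EFFORT_PARTY VALUES ('PROJ1', 'DEV1');
        INSERT INTO PARTY_CONTACT_MECH VALUES ('DEV1', 'CM1', 'EmailPrimary');
        INSERT INTO CONTACT_MECH VALUES ('CM1', 'dev1@example.com');
    """)

    JOINED_QUERY = """
        SELECT cm.INFO_STRING
        FROM WORK_EFFORT_PARTY wep
        JOIN PARTY_CONTACT_MECH pcm ON pcm.PARTY_ID = wep.PARTY_ID
        JOIN CONTACT_MECH cm ON cm.CONTACT_MECH_ID = pcm.CONTACT_MECH_ID
        WHERE wep.WORK_EFFORT_ID = ?
          AND pcm.CONTACT_MECH_PURPOSE_ID = 'EmailPrimary'
    """

    print([row[0] for row in db.execute(JOINED_QUERY, ("PROJ1",))])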
If you need to go beyond that, the easiest approach with Moqui is to use a DataDocument and DataFeed to send updates on the fly to ElasticSearch, and then use it for your high-volume queries and filtering (with arbitrarily complex filtering requirements, etc.).
Your question is way too open to answer directly, data modeling is a complex topic and without good understanding of context and intended usage there are no good answers. In general it's best to start with a data model based on decades of experience and used in a large number of production systems. The Mantle UDM is one such model.
I wanted to know how one can retrieve data from the various query tools available in SAP EWM.
I found the queries in the following link: Extended Warehouse Management - SAP Library
The description of the query 0WM_MP17_Q0001 says:
0WM_MP17_Q0001
You can use this query to see the number and duration of confirmed warehouse orders by day, week, or month. This allows you to see when typical warehouse trends are changing, and thus take actions such as:
Adjusting work schedules to meet demands
Hiring new workers, or letting existing workers go
Requesting budget for expenses such as extra equipment
And I need to retrieve the data for the reasons above.
However, is there a transaction code that I can run to get this report? How can I retrieve this data?
I think you already asked this question on SDN and got a response; see your message and the response there.
This is BI content.
I have a database question. I am developing an application where users send some request and get an answer from a vendor. I have a server receiving the request (through a REST call or a running web service; I haven't decided which yet).
Whenever a new request comes in it should be logged in a database and when the vendor responds the record should be updated indicating whether it was accepted or not and stuff like that. The only reason for this storage of transactions is for reporting and logging purposes. So now that I have stated my requirement I need help from someone with more expertise in this.
What I've come up with so far is that it would be best to use a structured database since all records will have one type and the same information, so there's no need to waste space using a semi-structured database with each record containing both structure and information.
But I don't know if there are any databases that are particularly good for this kind of "create/update operations only" workload? As I said, I only need to read the data perhaps once a month or so.
Any inputs are appreciated!
You can use any open-source database like PostgreSQL, as you are mostly going to do inserts and don't need many other features. My suggestion would be to put the logging process in a separate thread rather than the one you are using for request processing, to get better performance for your API calls.
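A minimal sketch of that pattern: API threads enqueue log records and a single background thread does the writes, so request handling never waits on the database. sqlite3 stands in for PostgreSQL here to keep the example self-contained:

    import queue
    import sqlite3
    import threading

    log_queue = queue.Queue()

    def log_writer():
        db = sqlite3.connect("requests.db")
        db.execute("""CREATE TABLE IF NOT EXISTS request_log (
                          request_id TEXT PRIMARY KEY,
                          payload    TEXT,
                          status     TEXT DEFAULT 'pending',
                          created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
        while True:
            op, args = log_queue.get()
            if op == "insert":
                db.execute("INSERT INTO request_log (request_id, payload) VALUES (?, ?)", args)
            elif op == "update":  # vendor responded: mark accepted/rejected
                db.execute("UPDATE request_log SET status = ? WHERE request_id = ?", args)
            db.commit()
            log_queue.task_done()

    threading.Thread(target=log_writer, daemon=True).start()

    # From the request-handling code path:
    log_queue.put(("insert", ("req-001", '{"item": 42}')))
    log_queue.put(("update", ("accepted", "req-001")))
    log_queue.join()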
I'm developing an application with a lot of create/update queries and am currently using Neo4j.
It's fast and really good with J2EE and PHP. NoSQL is really fast to learn with it, and the web interface is really user-friendly :)
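For comparison, a minimal create/update round trip with the official neo4j Python driver; the URI and credentials are placeholders:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    with driver.session() as session:
        # MERGE creates the node if it is missing; SET updates it either way.
        session.run("MERGE (r:Request {id: $id}) SET r.status = $status",
                    id="req-001", status="accepted")

    driver.close()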