I have this integration scenario from ECC to SAP PI 7.0: when a Purchase Requisition is created by the MRP process, the PR data should be sent automatically to the other party through a web service or IDoc.
1) What would be the ideal scenario for this integration, I mean IDoc to SOAP, RFC to SOAP, etc.?
2) When the PR is created in ECC, how can it be pushed automatically to SAP XI/PI?
1) What would be the ideal scenario for this integration, I mean IDoc to SOAP, RFC to SOAP, etc.?
Between ECC and PI:
You can use IDocs:
There is a standard ability to resend the data.
IDoc sending is asynchronous.
A simpler way (less customization) would be to use an RFC call:
There isn't a standard way to resend the data.
The call can be synchronous or asynchronous.
In response to the comment, here are general instructions for the RFC alternative:
Create a remote-enabled function module in SE37.
Import it once into PI.
Use it in the mapping.
The call from ECC is performed with the syntax: CALL FUNCTION 'your_function_name' DESTINATION 'your_defined_destination'.
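If you want to smoke-test the remote-enabled function module from outside ABAP before wiring it into PI, a minimal sketch with the open-source pyrfc library could look like this (all connection parameters and the names Z_SEND_PR_DATA / IV_BANFN are placeholders, not from the original answer):

```python
# Sketch: calling a remote-enabled function module from Python via pyrfc.
# Connection details and the function/parameter names are placeholders.
from pyrfc import Connection

conn = Connection(
    ashost="ecc-host",  # ECC application server (placeholder)
    sysnr="00",
    client="100",
    user="RFC_USER",
    passwd="secret",
)

# Invoke the hypothetical FM created in SE37; pyrfc returns a dict of
# the exporting/tables parameters.
result = conn.call("Z_SEND_PR_DATA", IV_BANFN="0010000123")
print(result)

conn.close()
```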
Between PI and the external system:
In the PI mapping, after the data arrives via RFC/IDoc, call the web service (SOAP).
2) When the PR is created in ECC, how can it be pushed automatically to SAP XI/PI?
In ECC, identify the creation of a PR from MRP through a BAdI such as MD_PURREQ_POST. Send the IDoc / call the RFC inside the BAdI you chose.
I'm absolutely not an expert in the domain, but googling the web (answers come almost exclusively from SCN) leads me to think that no IDoc can be generated automatically at creation time. Consequently, the workaround is to:
Implement a user exit triggered when the purchase requisition is created. Maybe the MD_* (MD_PURREQ_POST?) BAdIs mentioned by Dorad are sufficient, or the exit EXIT_SAPLMEREQ_008 of enhancement MEREQ001 (via a project in transaction CMOD), or the BAdI ME_PROCESS_REQ_CUST.
In this user exit, call the function module ALE_PR_CREATE to create the IDoc (message type PREQCR1).
Create an IDoc partner profile in transaction WE20 so that the IDoc is sent as soon as it's created, or postponed for later sending (a job at regular intervals). The port can be tRFC, XML HTTP, etc.
You can find more details for each step by searching the web.
PS: your other question ("what is the ideal scenario?") cannot be answered without knowing your exact context: the quantity of PRs created during each MRP run, your company's preferences for technical solutions, near-zero custom development, etc.
Related
This might be somewhat of a weird, long and convoluted question, but hear me out.
I am running licensed 3rd-party closed-source proprietary software on my on-premise server that stores and manipulates data; the specifics of what it does are not important. One of the features of this software is an API that accepts requests to insert/manipulate/retrieve data. Because the software is poorly designed, there is no mechanism to write internal scripts for it (at least not anymore; the feature has been deprecated in the newest versions), nor any events to attach to for writing code that further enhances its functionality (further manipulation of the data according to preset rules, timestamping of incoming packages through a TSA, etc.).
How can I bypass the need for internal scripting functionality in a way that still lets me, e.g., timestamp an incoming package and return an appropriate response via the API to the sender in case of an error?
I have thought about using the built-in database trigger mechanisms (specifically the MongoDB Change Streams API) to intercept the incoming data and add the required hash and other timestamping-related information directly into the database. This is a neat solution, except that in case of an error (there have been instances where our timestamping authority's API was down or not responding to requests) there is no way to inform the sender that the timestamping process has not gone through as expected and that the new data will not be accepted into the server (all data on the server must be timestamped by law).
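For illustration, a minimal sketch of that Change Streams approach with pymongo (the database, collection, and TSA call are placeholders); it also shows exactly where the "can't inform the sender" limitation bites:

```python
# Sketch of the Change Streams idea: watch for inserts, timestamp them,
# and write the token back. Names and the TSA call are placeholders.
from pymongo import MongoClient


def request_tsa_timestamp(doc: dict) -> str:
    # Placeholder for the real call to the timestamping authority.
    raise NotImplementedError


client = MongoClient("mongodb://localhost:27017")  # change streams need a replica set
coll = client["appdb"]["packages"]                 # hypothetical database/collection

with coll.watch([{"$match": {"operationType": "insert"}}]) as stream:
    for change in stream:
        doc = change["fullDocument"]
        try:
            token = request_tsa_timestamp(doc)
        except Exception:
            # The limitation described above: at this point there is no
            # channel back to the original API caller to report failure.
            continue
        coll.update_one({"_id": doc["_id"]}, {"$set": {"tsa_token": token}})
```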
Another way this could be done is by intercepting the API request somehow before it reaches its endpoint, doing whatever needs to be done to the data, and then forwarding the request on to the server's API endpoint and letting it do its thing. If I am not mistaken, the concept is somewhat similar to what a reverse proxy does at the network layer: it routes incoming requests according to rules set in the configuration, removes/adds headers, encrypts the connection to the server, etc.
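A minimal sketch of that interception idea, assuming the proprietary API is plain HTTP (Flask/requests, the upstream URL, and the TSA helper are all my assumptions, not part of the actual setup):

```python
# Sketch: a tiny forwarding proxy that timestamps the payload before
# relaying it to the real API. Unlike the database-trigger approach,
# a failure here CAN be reported back to the sender.
import requests
from flask import Flask, Response, request

UPSTREAM = "http://localhost:9000"  # the proprietary software's API (placeholder)

app = Flask(__name__)


def request_tsa_timestamp(payload: bytes) -> str:
    # Placeholder for the real TSA call; raises on failure.
    raise NotImplementedError


@app.route("/api/<path:path>", methods=["POST"])
def intercept(path: str) -> Response:
    body = request.get_data()
    try:
        token = request_tsa_timestamp(body)
    except Exception:
        # Reject up front and tell the sender why.
        return Response("timestamping failed, data rejected", status=502)

    upstream = requests.post(
        f"{UPSTREAM}/api/{path}",
        data=body,
        headers={
            "Content-Type": request.content_type or "application/octet-stream",
            "X-TSA-Token": token,  # hypothetical header carrying the token
        },
    )
    return Response(upstream.content, status=upstream.status_code)
```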
Finally, my short question for this convoluted setup: what is the best way of tackling this problem? Are there any software solutions or concepts I should be researching?
In my landscape, the ERP system creates deliveries in EWM via qRFC. The setup is standard and works via a distribution model in the ERP BD64 transaction. A BAPI is called that creates a delivery replica in EWM.
Sometimes the ERP deliveries are not properly validated and don't fulfill the requirements to be distributed to EWM, but they are still sent. In that case the error stays in the EWM SMQ2 queue.
I want to prevent this from happening, because if the issue needs to be solved on the ERP side, the error shouldn't sit in EWM. The obvious option is to implement a BAdI in the ERP, before distributing the delivery, that calls an EWM validation API. If the API rejects the delivery, the distribution should be prevented.
However, if for whatever reason the validation API is not called, the ERP could still send erroneous deliveries to EWM that would get stuck in the SMQ2 queue.
Is there a way to prevent this from happening? In case of an error (when validating the ERP delivery) in EWM qRFC processing, I would like to remove the faulty record from the queue, return the error message to the ERP and mark the ERP delivery as not distributed or distribution error.
Can this be done in a more or less standard way?
This asynchronous failure pattern is well known in SAP ABAP stacks.
Sometimes a synchronous check call is available to validate the data before the posting call.
When this isn't present, you might consider a synchronous call instead.
Without knowing why an asynchronous inbound queue is used in the first place, it is hard to advise on workarounds or process improvements.
Asynchronous inbound queues have key weaknesses: the monitoring options are very weak.
There are no APIs to read the to-be-posted data, and no options to "fix" the data and retry.
Using a wrapper function on the EWM side to react to the error and send feedback is an option.
I'm trying to manage a FHIR workflow based on a REST API for resource CRUD (Patient, Practitioner, and so on).
For workflow handling among different systems I want to use the Task resource, but I don't want to manage the Subscription resource and its architecture.
So I'm unsure how notifications should be handled.
Which is the correct way: must the different systems poll the server to find out whether there is a Task resource to consume, or should the server notify the different systems?
The FHIR server I want to use is R4.
EDIT
We want to create an interoperability platform for the exchange of data among three systems. Every system is already in production, developed by a different software house, and we can't work on them.
None of the systems currently has a FHIR server (as in Option B of the Workflow architecture).
Every system can communicate via HL7 v3 / FHIR.
So we want to add a layer with a FHIR server, as in the image below.
In this case:
if System A sends the FHIR server a resource (e.g. an Appointment), then System B takes this Appointment to process in its own environment. How does the communication scheme work?
The FHIR workflow communication patterns page defines a number of architecture alternatives. One possibility is to create the Task on the fulfiller's system. In that case, no need for polling or subscription. If the Task is created on the placer's system or an intermediary system and you're sticking with pure REST, then the fulfilling system will need to either have a subscription that will result in them receiving a notification about the Task or they'll have to poll. Other non-RESTful options include POSTing to a "process task" operation on the fulfilling system or sending a FHIR message to the fulfilling system.
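For the polling branch, a rough sketch of what a fulfilling system might do against an R4 server (the base URL is a placeholder; status and _lastUpdated are standard R4 search parameters):

```python
# Sketch: poll the FHIR server for Tasks created/updated since the last
# poll, keeping the watermark client-side. Base URL is a placeholder.
import time
from datetime import datetime, timezone

import requests

FHIR_BASE = "http://fhir.example.org/r4"  # hypothetical FHIR R4 server
watermark = "2024-01-01T00:00:00Z"

while True:
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    resp = requests.get(
        f"{FHIR_BASE}/Task",
        params={"status": "requested", "_lastUpdated": f"gt{watermark}"},
        headers={"Accept": "application/fhir+json"},
    )
    for entry in resp.json().get("entry", []):
        task = entry["resource"]
        # task["focus"] would point at the resource to act on,
        # e.g. the Appointment created by System A.
        print("Task to fulfill:", task["id"])
    watermark = now
    time.sleep(30)
```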
Is it possible to subscribe to mail events on an IBM Domino server?
I need a service similar to the one provided by Microsoft Exchange Event Notification, where you can subscribe to events and get notified when there are changes, e.g. the arrival of a new e-mail. I need the solution to be server-side, since I can't rely on users having their client running.
Unfortunately, as per my comment above, there is no pre-packaged equivalent to the push, pull and streaming subscription services that EWS supports. A Notes client can get notifications via the Notes RPC protocol, and there's also obviously some technology in IBM's Notes Traveler mobile product, but nothing that I'm aware of as a pre-packaged web service or even as a notifications API. You would have to build it. There are a variety of ways you could go about it.
For push or streaming subscriptions, one way would be with a Notes C API plugin using the Extension Manager, running on the server and monitoring the mailboxes. You might be able to use a DSAPI plugin into Domino's HTTP stack to manage the incoming connections and feed the data out to subscribers, but honestly I have no idea whether Domino's HTTP stack can handle the persistent connections that are implied in the subscription model. Alternatively, the Extension Manager plugin could quickly send the data over to code written in any other language that you want, running on any web stack that you want. Of course, you'll have to deal with security through all the linked-together parts.
For pull subscriptions, I guess it's really more of a polling architecture, with state saved somewhere so that only changes since the last call will be delivered. You have any number of options for that. You could use Domino's built-in HTTP server, obviously, so you could write your own Domino-hosted web service for this. You could also use the Domino Data Service, which is a REST API, to do this, with all necessary state information being stored on the client side. (On a quick look, I don't see a good option for getting all new docs since a specified date-time via the Domino Data Service, but it might be possible.)
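To make the pull/polling option concrete, a sketch against the Domino Data Service (the database URL is a placeholder, and since there's no obvious server-side "since" filter, the @modified comparison is done client-side with the state kept by the caller; verify the @-fields against your Domino version):

```python
# Sketch: poll the Domino Data Service documents collection and filter
# client-side on @modified. URL and field names are assumptions.
import requests

BASE = "http://domino.example.com/mail/user.nsf"  # hypothetical mail database
last_modified = ""  # persisted between runs by the real client

docs = requests.get(f"{BASE}/api/data/documents").json()
new_docs = [d for d in docs if d.get("@modified", "") > last_modified]

for d in new_docs:
    detail = requests.get(d["@href"]).json()  # fetch the full document
    print("new/changed doc:", d.get("@unid"))

if docs:
    last_modified = max(d.get("@modified", "") for d in docs)
```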
I do worry a bit about scalability of any custom solution for this. My understanding is that Microsoft has quite a bit of caching and optimization in their services in order to address scale. Obviously, you can build whatever you need for that into your own web service, but it will likely add a lot of effort.
Can any developers/architects with experience with NServiceBus offer guidance and help on the following?
We have a requirement in the business (and not a lot of money) to create a robust interface between an externally hosted application and our internal ERP's (yup, more than one).
When certain activities take place in the third-party application, they will send us a message, i.e. call a web service, passing various fields of information in the message, etc. We are not in control of this third-party application, nor can we change it.
My responsibility is creating this web service and processing the messages into each ERP. The third party dictates how the web service will look, but not what it's responsible for. We have to accept that if they get back a response of 'success', then at that point we have taken responsibility for that message! I.e. we need to ensure that, as close to perfectly as possible, no data loss takes place.
This is where I'm interested in the use of NServiceBus: use it to store/accept a message at first. At this point I get lost; I can't tell what should happen, i.e. what design follows. Does another machine (process) subscribe and grab the message to process it into an ERP? If so, since each ERP's integration logic differs, do I make a subscriber per ERP? However, a message may have two destination ERP targets, so is it best that the message is sent rather than subscribed to?
Obviously, in the whole design I need some business rules that help determine the destination ERPs, and then business rules that determine what actually takes place within each ERP. So I also have a question on BREs, but that can wait, although it may still be a driver for what the message has to do.
so:
Third party > web service call > store message (& return success) > determine which ERP is target > process each into ERP > mark message complete
If anything fails along the line, I need to make sure the message does not get lost. P.S. How does MSMQ prevent loss, since the whole machine may die? Is this just disk resilience, etc.?
Many thanks if you've read and even more for any advice.
This sounds like a perfect application for NServiceBus.
Your web service should ONLY parse the request from the third party and translate it into an NServiceBus message, which it should Bus.Send(). You don't respond with a 200 status code until that message is on the bus, at which point you are responsible for it, and NServiceBus's built-in error/retry and error queue facilities become your best friend.
This message should be received by another endpoint, but it needs to be able to account for duplicate messages or use idempotence so that duplicates aren't a problem. If the third party hits your web service, and the message is successfully placed on the bus, but then some error prevents them from receiving the 200 response code, you will get duplicates from them.
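NServiceBus handlers are C#, but the idempotence idea is language-agnostic; here is a minimal sketch of the "remember processed message IDs" pattern (the sqlite store and the ID field are my assumptions, not NServiceBus API):

```python
# Sketch: idempotent receiver. A redelivered duplicate becomes a no-op
# because the message ID can only be recorded once.
import sqlite3

db = sqlite3.connect("processed.db")
db.execute("CREATE TABLE IF NOT EXISTS processed (message_id TEXT PRIMARY KEY)")


def process_business_logic(payload: dict) -> None:
    print("processing", payload)  # placeholder for the real ERP-specific work


def handle(message_id: str, payload: dict) -> None:
    try:
        # Succeeds only the first time, thanks to the PRIMARY KEY.
        db.execute("INSERT INTO processed VALUES (?)", (message_id,))
    except sqlite3.IntegrityError:
        return  # duplicate delivery: already handled
    process_business_logic(payload)
    db.commit()  # record the ID only after the work succeeded
```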
At this point, the endpoint receiving the MessageFromWebServiceCommand message could Bus.Publish() a SomeBusinessEventHappenedEvent that contains the command data.
For each ERP, create an additional endpoint that subscribes to the SomeBusinessEventHappenedEvent and uses your business logic to decide what to do respective to that ERP. In some cases, that "something" may be "nothing". Keep idempotence in mind here too, because if the message fails it will be retried.
All the other things you're worried about (preventing loss of messages, what happened if machines die) will be taken care of thanks to NServiceBus and MSMQ being naturally resilient to such problems.
Here is a blog post, including a sample project, that shows how to receive messages from an external partner via a web service and handle them with NServiceBus, and a link straight to the sample project on GitHub:
Robust 3rd Party Integrations with NServiceBus
Project Source Code on GitHub