HL7 SIU Schedule Information - schedule

We're working on an HL7 interface right now and have successfully set up Mirth Connect; we are receiving and parsing sample HL7 feeds (SIU messages, specifically S12, I think, for appointment schedule information). We are new to working with HL7, and since you mentioned Mirth I thought I would ask a couple of questions:
One thing we were unsure of is where the patient email address (if one exists in the EMR) would be in the HL7 message. Any advice on where this piece of information should be located if present?
Often we need to determine the department or clinic a particular appointment is in (e.g. is it in "Orthopedics" or "General Surgery"?). In some sample messages I've seen things like an "X-ray" label on an appointment alongside the provider name; is that where department or clinic names live, or is that information elsewhere?
Thanks for your help!

First, Mirth Connect is just the interface engine; it does not define the content of your HL7 messages. You need the original HL7 v2.x specification, which can be downloaded from the hl7.org site, to locate all the required fields. In an SIU^S12, the PID and PD1 segments convey the personal information, email included.
The same SIU^S12 contains the PV1 segment, where location may be provided in fields with the PL or IS data types. If that is not enough, the provider fields with the XCN data type may convey location information as well. How this information is coded is defined by user-defined tables, which may be extended or overridden by your local terminology sets.

PID.13 250 XTN O * Phone Number - Home
PID.14 250 XTN O * Phone Number - Business
Both allow emails in XTN.4. Look at the definition of the XTN (Extended Telecommunication Number) data type.
The AIL (Appointment Information - Location Resource) segment should be the appropriate place for the appointment location.
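As a purely made-up illustration (component usage varies by vendor, so always confirm with the sending system), an SIU^S12 might carry those values like this:
PID|1||12345^^^MRN||Doe^Jane||19800101|F|||123 Main St^^Anytown^ST||(555)555-1234^PRN^PH~^NET^Internet^jane.doe@example.com
PV1|1|O|ORTHO^^^MAIN HOSPITAL|||||||ORT
AIL|1||ORTHO^Orthopedics Clinic^^MAIN HOSPITAL
Here the email sits in XTN.4 of the PID-13 repetition whose equipment type (PID-13.3) is Internet, PV1-10 carries a hospital service code, and AIL-3 identifies the location resource for the appointment.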

Based on your other comments, I wanted to see if I could answer your question and follow-up points:
For the patient's email address, look at PID-13.4 or PID-14.4. PID-13 and PID-14 use the XTN data type, which has several sub-components, but exactly what gets populated can vary depending on what vendor is sending you the data. Always check with the sending system to be sure they are sending it and that you are looking in the appropriate field.
Typically you see things like department sent in the PV1-10 field. If it is an SIU message, then you should check AIL-3 as well, since it will contain additional resource scheduling information.
In an ADT message I would expect a sending system to populate the PV1-10 with that value, but again you would have to confirm with the sending system that the data is being sent and which field specifically it is mapped to. Some SIU feeds populate the PV1-10 in lieu of the AIL-3, so confirmation with the other system is a must.
On your comment, you mentioned you were setting up a connection to receive an inbound SIU feed. In a traditional SIU feed, you will sit and listen while the sending system broadcasts all updates it receives. In a lot of cases, it makes sense to request a full data feed of all scheduling messages and then refine/filter down to only the subset that you need. There are query/response interfaces that could be used as well but they are rare for SIU feeds and need to be fully supported by both you and the other application. Typically, listening to an SIU feed is your best case scenario, and a query interface probably won't work or make sense to develop in an SIU integration scenario.
If you have any other questions feel free to let me know.

In a Mirth Connect JavaScript transformer you can pull both values out of the parsed message. For the department name (as populated in this particular SIU^S12 feed):
msg['AIL']['AIL.3']['AIL.3.2'].toString()   // AIL-3, component 2 of the location resource
And for the email address:
msg['PID']['PID.13'][0]['PID.13.4'].toString()   // PID-13.4 of the first PID-13 repetition
Note that PID-13 can repeat; the email may be in a later repetition (typically the one whose PID-13.3 equipment type is "Internet"), so confirm with the sending system.

Related

How to avoid sending repeated messages

Due to company policy, we (my small team) couldn't use a queue manager in the past (the only one allowed was WebSphere MQ, but there was no budget for it). We've implemented queues using a database. Our applications are database-centric and implemented in .NET.
Recently this has changed: we can use ActiveMQ or RabbitMQ. We've started thinking about migrating our queues but haven't decided yet which one to use.
It turned out not to be as straightforward as it initially seemed.
We have a few scenarios where we check, using a business key, whether a message is already in the queue, in order to avoid duplicates. For example: an impatient user submits a credit card application twice (clicks Send twice) because they don't see the status change yet. We are responsible for the backend and have no control over the client application.
The current implementation: take the user name and search the queue for a credit card request from that user within the last hour.
That is quite easy to do in a database. If a match is found, an exception is raised instead of placing the message in the queue.
I still don't know how to do this with a queue manager, and I couldn't find any information about it. I've only found some information about using the message id, but in our case a repeated message would have a different one.
Is it possible to check whether a message is already in the queue using some business data?
A colleague of mine found a solution.
Currently we have the queue implemented in a single database table with the following columns: msg_type, msg_key (this is the business key we need to be unique), msg_body, status, request_time, processed_time, retry_count and error_code.
The idea is as follows: this table will still be used, but only for deduplication, and the queue manager will be used only for queueing (it provides no deduplication functionality).
So we will remove the columns that are no longer needed; the remaining fields are msg_type and msg_key.
Algorithm:
Our service receives a request.
Check the table for the message's business key. If it is already there, raise an exception.
Otherwise, put the message in the queue and insert the new key into the table.
This is just an idea; it is not yet implemented, and quite possibly this simple model will need some fixes.
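A minimal sketch of that check-then-enqueue idea in C# (the msg_dedup table, its columns and the IQueueClient interface are assumptions for illustration; the queue client stands in for whichever ActiveMQ/RabbitMQ library we pick):

using System;
using System.Data.SqlClient;

public interface IQueueClient
{
    // thin wrapper over the broker's publish call (ActiveMQ, RabbitMQ, ...)
    void Publish(string msgType, string msgBody);
}

public class DeduplicatingSender
{
    private readonly string connectionString;
    private readonly IQueueClient queue;

    public DeduplicatingSender(string connectionString, IQueueClient queue)
    {
        this.connectionString = connectionString;
        this.queue = queue;
    }

    public void Send(string msgType, string msgKey, string msgBody)
    {
        using (var con = new SqlConnection(connectionString))
        {
            con.Open();
            // 1. Reject the request if the same business key was seen within the last hour.
            using (var check = new SqlCommand(
                "SELECT COUNT(*) FROM msg_dedup " +
                "WHERE msg_type = @type AND msg_key = @key " +
                "AND request_time > DATEADD(hour, -1, GETDATE())", con))
            {
                check.Parameters.AddWithValue("@type", msgType);
                check.Parameters.AddWithValue("@key", msgKey);
                if ((int)check.ExecuteScalar() > 0)
                    throw new InvalidOperationException("Duplicate request for key " + msgKey);
            }
            // 2. Remember the business key in the dedup table...
            using (var insert = new SqlCommand(
                "INSERT INTO msg_dedup (msg_type, msg_key, request_time) " +
                "VALUES (@type, @key, GETDATE())", con))
            {
                insert.Parameters.AddWithValue("@type", msgType);
                insert.Parameters.AddWithValue("@key", msgKey);
                insert.ExecuteNonQuery();
            }
        }
        // 3. ...and only then hand the message body to the queue manager.
        queue.Publish(msgType, msgBody);
    }
}

A unique constraint on (msg_type, msg_key), with the insert attempted first and a constraint violation treated as "duplicate", would also close the race window between the check and the insert when two identical requests arrive at the same time.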

Message types : how much information should messages contain?

We are starting to broadcast events from one central application to other, possibly interested, consumer applications, and members of our team have differing opinions about how much we should put in our published messages.
The general idea/architecture is the following:
In the producer application :
the user interacts with some entities (Aggregate Roots in the DDD sense) that can be created/modified/deleted
Based on what is happening, Domain Events are raised (e.g. EntityXCreated, EntityYDeleted, EntityZTransferred, etc., i.e. not only CRUD, but mostly)
Raised events are translated/converted into messages that we send to a RabbitMQ Exchange
in RabbitMQ (we are using RabbitMQ but I believe the question is actually technology-independent):
we define a queue for each consuming application
bindings connect the exchange to the consumer queues (possibly with message filtering)
In the consuming application(s)
each application consumes and processes messages from its queue
Based on Enterprise Integration Patterns, we are trying to define the canonical format for our published messages, and are hesitating between 2 approaches (an illustrative sketch of both message shapes follows the pros/cons lists below):
Minimalist messages / event-store-ish : for each event published by the Domain Model, generate a message that contains only the parts of the Aggregate Root that are relevant (for instance, when an update is done, only publish information about the updated section of the aggregate root, more or less matching the process the end-user goes through when using our application)
Pros
small message size
very specialized message types
close to the "Domain Events"
Cons
problematic if delivery order is not guaranteed (i.e. what if an Update message is received before the Create message?)
consumers need to know which message types to subscribe to (possibly a big list / domain knowledge is needed)
what if consumer state and producer state get out of sync?
how to handle a new consumer that registers in the future but does not have knowledge of all the past events
Fully-contained idempotent-ish messages: for each event published by the Domain Model, generate a message that contains a full snapshot of the Aggregate Root at that point in time, hence handling in reality only 2 kinds of messages, "Create or Update" and "Delete" (+ metadata with more specific info if necessary)
Pros
idempotent (declarative messages stating "this is what the truth is like, synchronize yourself however you can")
lower number of message formats to maintain/handle
allows synchronization errors on the consumer side to be progressively corrected
consumers automagically handle new Domain Events as long as the resulting message follows the canonical data model
Cons
bigger message payload
less pure
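To make the contrast concrete, an invented example of the two shapes might look like this (class and property names are ours, purely illustrative):
using System;
// Approach 1: minimalist / event-store-ish message, carrying only what changed
public class EntityZTransferredEvent
{
    public Guid EntityZId { get; set; }
    public string NewOwnerId { get; set; }        // just the updated part of the aggregate
    public DateTime OccurredAtUtc { get; set; }
}
// Approach 2: fully-contained "Create or Update" message, carrying a full snapshot
public class EntityZUpserted
{
    public Guid EntityZId { get; set; }
    public string OwnerId { get; set; }
    public string Name { get; set; }
    public string Status { get; set; }            // ...and every other field of the aggregate root
    public DateTime SnapshotAtUtc { get; set; }
}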
Would you recommend one approach over the other?
Is there another approach we should consider?
Is there another approach we should consider?
You might also consider not leaking information out of the service acting as the technical authority for that part of the business
Which roughly means that your events carry identifiers, so that interested parties can know that an entity of interest has changed, and can query the authority for updates to the state.
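For instance, such an event could be as thin as this (the names are invented for illustration):
using System;
public class EntityXChanged
{
    public Guid EntityXId { get; set; }   // which aggregate changed
    public long Version { get; set; }     // lets consumers spot stale or out-of-order notifications
}
Interested consumers then query the owning service for the current state (or history) of that entity, so the message contract stays stable even when the aggregate's internal representation changes.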
for each event published by the Domain Model, generate a message that contains a full snapshot of the Aggregate Root at that point in time
This also has the additional Con that any change to the representation of the aggregate also implies a change to the message schema, which is part of the API. So internal changes to aggregates start rippling out across your service boundaries. If the aggregates you are implementing represent a competitive advantage to your business, you are likely to want to be able to adapt quickly; the ripples add friction that will slow your ability to change.
what if consumer state and producer state get out of sync?
As best I can tell, this problem indicates a design error. If a consumer needs state, which is to say a view built from the history of an aggregate, then it should be fetching that view from the producer, rather than trying to assemble it from a collection of observed messages.
That is to say, if you need state, you need history (complete, ordered). All a single event really tells you is that the history has changed, and you can evict your previously cached history.
Again, responsiveness to change: if you change the implementation of the producer, and consumers are also trying to cobble together their own copy of the history, then your changes are rippling across the service boundaries.

Prevent subscribers from reading certain samples temporarily

We have a situation with two modules, one containing a publisher and the other a subscriber. The publisher is going to publish some samples using key attributes. Is it possible for the publisher to prevent the subscriber from reading certain samples? This case would arise when the module with the publisher is currently updating a sample that it does not want anybody else to read until it is done. Something like a mutex.
We are planning on using OpenSplice DDS, but please give your input even if it is not specific to OpenSplice.
Thanks.
RTI Connext DDS supplies an option to coordinate writes (described in the documentation as a "coherent write"; see Section 6.3.10 and the PRESENTATION QoS):
myPublisher->begin_coherent_changes();
// DataWriters belonging to this Publisher perform their writes; the samples form one coherent set
myPublisher->end_coherent_changes(); // the completed set is made available to subscribers together
Regards,
rip
If I understand your question properly, then there is no native DDS mechanism to achieve what you are looking for. You wrote:
This case would arise when the module with the publisher is currently updating the sample, which it does not want anybody else to read till it is done. Something like a mutex.
There is no such thing as a "global mutex" in DDS.
However, I suspect you can achieve your goal by adding some information to the data model and adjusting your application logic. For example, you could add an enumeration field to your data; let's say you add a field called status that can take one of the values CALCULATING or READY.
On the publisher side, instead of "taking the mutex", your application could publish a sample with the status value set to CALCULATING. When the calculation is finished, the new sample can be written with the value of status set to READY.
On the subscriber side, you could use a QueryCondition with status=READY as its expression. Read or take actions should only be done through the QueryCondition, using read_w_condition() or take_w_condition(). Whenever the status is not equal to READY, the subscribing side will not see any samples. This approach takes advantage of the mechanism that newer samples overwrite older ones, assuming that your history depth is set to the default value of 1.
If this results in the behaviour that you are looking for, then there are two remaining disadvantages to this approach. First, the application logic gets somewhat polluted by the use of the status field and the QueryCondition. This could easily be hidden by an abstraction layer, though; it would even be possible to hide it behind some lock/unlock-like interface. The second disadvantage is the extra sample going over the wire when setting the status field to CALCULATING, but extra communication cannot be avoided anyway if you want to implement distributed mutex-like functionality. This is only an issue if your samples are pretty big and/or high-frequency; in that case, you might have to resort to a dedicated, small Topic for the single purpose of simulating the locking mechanism.
The PRESENTATION QoS is not specific to RTI Connext DDS; it is part of the OMG DDS specification. That said, the ability to write coherent changes across multiple DataWriters/Topics (as opposed to using a single DataWriter) is part of one of the optional profiles (the Object Model profile), so not all DDS implementations necessarily support it.
Gerardo

Raise an event or send a command?

We've created a web application that is an e-book reader, so one thing to keep in mind is that the domain is not exactly that of reading a physical book. We are now trying to gather users' reading behavior by storing information about the e-book pages accessed by our users. Since this information goes to a data warehouse, we thought raising an event from the book controller was the right way to do it:
bus.Publish()
But we are not sure if it should be a publish or a send, since there is really only one consumer of this event, and that is our business intelligence team. We've also read that it is not advisable to publish from the web app (http://www.make-awesome.com/2010/10/why-not-publish-nservicebus-messages-from-a-web-application/). So now the alternative is to use bus.Send(RecordPageAccessedCommand).
But the above command does not change our application state in any way, so is it truly a command? I have a feeling that the mistake we are making is taking NServiceBus's features (Publish, Send) and trying to equate them with what a command or event is.
Please let me know what the solution to this is.
Based on the information you provided, I would recommend "sending" to your endpoint.
Sending a command implies that the endpoint handling the message should do something. In your case, recording that the page was accessed is the thing the endpoint should do.
Publishing an event implies that you are notifying 0..n subscribers that something occurred. You could publish an event from your command handler if some other service in your system was interested in the fact that a page was accessed. The key point here is that it's not a "fact" until you've recorded it.
I've found that consumers tend to grow once data is available. Having the ability to publish an event from your command handler will make it trivial to notify new consumers without changing/risking your existing code base.
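A rough sketch of that shape (the handler and the PageAccessedEvent are invented names; Send and Publish are the standard NServiceBus calls already mentioned in the question):
using System;
using NServiceBus;
public class RecordPageAccessedCommand : IMessage
{
    public Guid UserId { get; set; }
    public string BookId { get; set; }
    public int PageNumber { get; set; }
}
// In the web application: bus.Send(new RecordPageAccessedCommand { ... });
// In the endpoint that owns the data:
public class RecordPageAccessedHandler : IHandleMessages<RecordPageAccessedCommand>
{
    public IBus Bus { get; set; }   // injected by NServiceBus
    public void Handle(RecordPageAccessedCommand message)
    {
        // 1. Persist the page access here; only now is it a "fact".
        // 2. Optionally notify any other interested services.
        Bus.Publish(new PageAccessedEvent
        {
            UserId = message.UserId,
            BookId = message.BookId,
            PageNumber = message.PageNumber
        });
    }
}
public class PageAccessedEvent : IMessage
{
    public Guid UserId { get; set; }
    public string BookId { get; set; }
    public int PageNumber { get; set; }
}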
The RecordPageAccessedCommand is a command as it is commanding the system to do something, in this case, record that a page has been accessed.
If I've understood your scenario correctly, a message should be sent from your controller to the "Business Intelligence Team Service", telling the system to record that a page has been accessed. This service would store the information and would be its owner/technical authority.
No other services should store or require this information in its pure form; they can, however, subscribe to events from this service. In a highly contrived scenario, for example, when a user reads 1000 pages, the "Business Intelligence Team Service" can publish an event that 1000 pages have been read, i.e. Bus.Publish(), which might be handled by a billing service that gives the user a discount on their next purchase.
The data warehouse can have access to this information stored in your "Business intelligence Team Service" as it would fall under IT/OPS.

NServiceBus Specify BinarySerializer for certain message types but not for all

Does NServiceBus 2.0 allow defining the serializer for a given message type?
I want all but one of my messages to be serialized using the XmlSerializer. The remaining one should be serialized using the BinarySerializer.
Is it possible with NServiceBus 2.0?
I believe the serializer is specified on an endpoint basis, so all messages using that endpoint would use the same serializer.
However, if you follow the rote NServiceBus recommendation of one message type per endpoint/queue then you could effectively isolate one message type and use a different serializer for it.
I'm curious, however, what is special about the one message type that requires binary serialization?
Edit in response to comment
The Distributor info indirectly mentions this under Routing with the Distributor. Udi Dahan also frequently advises this in the NServiceBus Yahoo Group although it's difficult to provide links because the search there is poor.
Basically, the idea is that you wouldn't want high priority messages to get stuck behind lower-priority ones, and also that this provides you with the greatest flexibility to scale out certain message processing if necessary.
Because the MsmqTransportConfig only allows for one InputQueue to be specified, having one message type per queue also means that you only have one message handler per endpoint.
To address the image, you may still be able to encapsulate it in an XML-formatted message if you encode the byte array as a Base64-encoded string. It's not ideal, but if your images aren't too large, it may be easier to do this than to go to the trouble of using a different serializer on only one message type.
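A sketch of that workaround (the message type and property names are invented):
using System;
using System.IO;
using NServiceBus;
public class UploadImageMessage : IMessage
{
    public string FileName { get; set; }
    // Base64-encoded image bytes, carried as a plain string so the XML serializer can handle them
    public string ImageData { get; set; }
}
// Sending side:
// bus.Send(new UploadImageMessage
// {
//     FileName = "scan.png",
//     ImageData = Convert.ToBase64String(File.ReadAllBytes(@"C:\images\scan.png"))
// });
// Receiving side:
// byte[] imageBytes = Convert.FromBase64String(message.ImageData);
Keep in mind that Base64 inflates the payload by roughly a third, which is one more reason the out-of-band approach below may be preferable for larger images.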
Another option is to store the image data out-of-band in a database or filesystem and then refer to it by an ID or path (respectively).
Not possible in version 2, but it can be done using the pipeline in versions 5 and above: http://docs.particular.net/samples/pipeline/multi-serializer/