How can I begin to develop EDI 837 Professional? [closed] - vb.net

I want to develop an EDI 837 Professional project and I don't understand where to start. If anybody knows about this and has worked on it, please help me and advise me on where to start.

I guess it depends on what you are trying to do. I am new to this as well, and I am trying to generate EDI X12 837 files from a data source in our system. I have found this open source project to be very helpful. It contains an executable that can transform EDIs to XML, and there are also some APIs you can use if you are developing in C#/VB.
Also, you can take a look at this article. It provides information about EDI formatting; I found it very helpful in getting started.
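For anyone who wants to see the raw format before committing to a library, here is a minimal VB.NET sketch of hand-building a few 837P segments. Every ID, date, and amount below is a made-up placeholder, and a real file also needs the ISA/GS envelopes and the full loop structure from the implementation guide.

    ' Minimal sketch: hand-building a few 837P segments.
    ' All identifiers and values are placeholders.
    Imports System
    Imports System.Text

    Module Edi837Sketch
        Sub Main()
            Const elementSep As String = "*"
            Const segmentTerm As String = "~" & vbCrLf

            Dim sb As New StringBuilder()
            ' ST: transaction set header (837, version 005010X222A1 = 837P)
            sb.Append(String.Join(elementSep, "ST", "837", "0001", "005010X222A1")).Append(segmentTerm)
            ' BHT: beginning of hierarchical transaction
            sb.Append(String.Join(elementSep, "BHT", "0019", "00", "REF47517", "20230601", "1200", "CH")).Append(segmentTerm)
            ' NM1: billing provider name (loop 2010AA), qualified by its NPI
            sb.Append(String.Join(elementSep, "NM1", "85", "2", "ACME CLINIC", "", "", "", "", "XX", "1234567890")).Append(segmentTerm)
            ' CLM: claim information (loop 2300)
            sb.Append(String.Join(elementSep, "CLM", "PATIENT123", "125.00", "", "", "11:B:1", "Y", "A", "Y", "Y")).Append(segmentTerm)

            Console.Write(sb.ToString())
        End Sub
    End Module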

If you are interested in a Web API that takes care of your 837P (and other) claims and can easily be integrated into your web-based platform, check out https://837-edi.clearehr.com - it is free and eliminates a large amount of development (and costs, and headaches) for those trying to implement EDI insurance claims in their solutions. Over 1,000 payers are supported at this point, and more are added all the time.
Disclaimer: I am a developer at ClearEHR

The first thing you need is to get the 837 implementation guide. It tells you what data goes into the file and how it should be structured. These guides are published by the Washington Publishing Company (wpc-edi.com).
The second thing you need is a good EDI library component that helps you create these files without having to worry about the format, so you can concentrate on the data. One good one is RDPCrystal.com.

One way of doing it is using Microsoft BizTalk Server. BizTalk offers many added features for maintaining and monitoring the solution.
Visit https://learn.microsoft.com/en-us/biztalk/core/edi-support-in-biztalk-server1 for details

Whether it is EDI, IDoc, or X12, best practice in developing complex integration solutions is to use canonical schemas.
This is a design pattern to decouple systems, especially when you want to expose the data as contracts on your web services. For your EDI 837 (patient info), the canonical schema might contain some additional info that you can reuse later.
EDI Schema -> Canonical Schema (Patient Info) -> Target Schema
There's a good article on how to create canonical schemas. You can find it here.
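As a rough illustration of that decoupling in VB.NET terms (every type and property name below is invented for the example, not taken from any EDI library):

    ' Sketch of the canonical-schema pattern; all names are invented.
    ' Shape of the patient data as parsed from the 837 file.
    Public Class EdiPatient
        Public Property LastNameFirstName As String  ' e.g. "DOE*JOHN"
        Public Property MemberId As String
    End Class

    ' Canonical schema: the neutral shape the rest of the system reuses.
    Public Class CanonicalPatient
        Public Property FirstName As String
        Public Property LastName As String
        Public Property MemberId As String
        Public Property SourceSystem As String  ' extra reusable info
    End Class

    Public Module PatientMapper
        ' EDI schema -> canonical schema. A second mapper would go
        ' canonical -> target, so neither end knows about the other.
        Public Function ToCanonical(edi As EdiPatient) As CanonicalPatient
            Dim parts = edi.LastNameFirstName.Split("*"c)
            Return New CanonicalPatient With {
                .LastName = parts(0),
                .FirstName = If(parts.Length > 1, parts(1), String.Empty),
                .MemberId = edi.MemberId,
                .SourceSystem = "EDI837"
            }
        End Function
    End Module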

Related

Multi Language Software Documentation / Manual [closed]

We have a rather extensive set of documentation for our software, currently written in German. Now we want to translate this documentation into English for our foreign customers. For this we will use an external translation service.
But we want to keep the English and German versions in sync as closely as possible, as the documentation will be updated in line with future updates of our software. In that case we want to give only the changed pages of the documentation to the translation service.
Currently we use Atlassian Confluence to manage our documentation, but it has no support for internationalization.
The next approach that came to my mind was using some external tool to write/manage the documentation and then export it to Confluence.
Things I found:
How to best manage multi-lingual presentations? - use LaTeX and export it somehow to PDF/Confluence/whatever
Some approach based on DocBook or DITA (paper in German)
So what is the best way to manage our software documentation in German and English simultaneously?
At the moment, localization support in wikis actually seems to be very poor. See for example http://www.kilkku.com/blog/2012/09/the-final-obstacle-to-wiki-tech-comm-localization/.
You would need an efficient way to prepare the source-language files for translation. This seems to be a major problem with wikis.
In addition, with an extensive set of documentation, to even have a chance of keeping multiple languages in sync, you or your service provider should use a translation memory system that can handle your file format. Translation memory systems divide the source text into segments. Normally a sentence corresponds to a segment; as an option, segmentation can also be done at the paragraph level. The translator works on these segments. In case of an update, the translation memory system detects new and modified texts automatically; everything else can be pre-translated from the memory.
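As a toy illustration of that detection step in VB.NET (the sentence splitting here is deliberately naive; real translation memory systems segment and match far more intelligently):

    ' Toy sketch of translation-memory pre-translation.
    Imports System
    Imports System.Collections.Generic
    Imports System.Text.RegularExpressions

    Module TmSketch
        Sub Main()
            ' Memory from the previous round: source segment -> translation.
            Dim memory As New Dictionary(Of String, String) From {
                {"Klicken Sie auf Speichern.", "Click Save."},
                {"Das Dokument wird geschlossen.", "The document is closed."}
            }

            Dim updatedSource = "Klicken Sie auf Speichern. Das Dokument wird gedruckt."

            ' Naive segmentation: split after sentence-ending punctuation.
            For Each segment In Regex.Split(updatedSource, "(?<=[.!?])\s+")
                Dim translation As String = Nothing
                If memory.TryGetValue(segment, translation) Then
                    Console.WriteLine("pre-translated: " & translation)
                Else
                    Console.WriteLine("NEW, send to translator: " & segment)
                End If
            Next
        End Sub
    End Module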
Now, I've been managing localization projects for more than 15 years, but I've never heard of a translation tool that handles LaTeX files. On the other hand, DocBook and DITA are supported by quite a few of these tools. For example, Maxprograms Swordfish is affordable and handles DITA as well as DocBook. In addition, with both formats there seems to be the option to output to a wiki again (for example: http://sourceforge.net/projects/dita2wiki/) - though I don't know how well established these methods are.

Best Practice for retrieving data from SAP by .net [closed]

I'm looking for a 'best practice' in the industry for integrating SAP with a .Net application. So far I only need to read data from SAP, there's no need to update.
The most straightforward way I've found is to use the SAP Connector and call a BAPI. I am using SAP Connector 3.0. But I'm wondering whether there's a better design out there for retrieving the data. The requirement is to touch SAP as little as possible while being able to transfer data in bulk.
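For reference, a bare-bones read with the 3.0 connector looks roughly like the sketch below. The destination name, BAPI, and field names are placeholders, and the destination must be registered beforehand, e.g. through an IDestinationConfiguration implementation.

    ' Bare-bones sketch of reading data via a BAPI with SAP .NET Connector 3.0.
    ' "PRD", the BAPI name, and the field names are placeholders.
    Imports System
    Imports SAP.Middleware.Connector

    Module SapReadSketch
        Sub Main()
            ' Assumes a destination named "PRD" was registered beforehand.
            Dim destination As RfcDestination = RfcDestinationManager.GetDestination("PRD")

            Dim bapi As IRfcFunction = destination.Repository.CreateFunction("BAPI_CUSTOMER_GETLIST")
            bapi.SetValue("MAXROWS", 100)
            bapi.Invoke(destination)

            ' Iterate over the result table the BAPI returns.
            Dim addresses As IRfcTable = bapi.GetTable("ADDRESSDATA")
            For Each row As IRfcStructure In addresses
                Console.WriteLine(row.GetString("CUSTOMER"))
            Next
        End Sub
    End Module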
Also, if using this design, other than the SAP login info which I can safeguard via standard encryption etc, is there any other security concern?
Thanks.
I've written many SAP RFC applications. I believe that the .NET connector sits on top of their RFC protocol, as does the Java connector. In my experience, the best practice depends on whom you ask at SAP. They do have a web application server (WebAS, I think it is called these days; it has been renamed a few times) that can probably host a web service, but it depends on what you have installed. I think many people still opt for the .NET or Java connector. (I prefer the C++ library personally since it is quite fast, but that is only for the extremely foolish ;) )
My information may be dated, but if they have been consistent, then the RFC communication layer is not encrypted out of the box. There is a third-party plugin used with SAP GUI and all RFC-type connectors (.NET/JCo) to encrypt the data stream. You have to set it up in the RFC .ini file.
Then there are IDocs, which I don't think you want to play with. It is a flat-file format, much like EDI but dumber.
About the security part: if you're using the .NET equivalent of JCo, you have a user on the SAP back end to connect with.
This user should be of type "Connection" (so that no one can use it with the SAP GUI), and it should have authorizations limited to what is needed (so that no program can use it to perform other operations that you did not anticipate). While the chances that someone manages to get this user/password are low, you don't take chances with production data. Also, the password should not be a simple one.
This may sound like basic security, but since I just found the exact opposite on a production system, I prefer to state it.
Regards

Any tips for creating a key value store abstraction layer? [closed]

With all the key-value data stores out there, I have started to create an abstraction layer so that a developer does not have to be tied to a particular store. I propose to make libraries for:
Erlang
Ruby
Java
.NET
Does anyone have any tips on how I should go about designing this API?
Thanks
First off, as a general rule for any time you build a "pluggable" abstraction layer, build it to support at least two real implementations from the start. Don't build it for just one data store and then try to abstract it, because you'll overlook details that won't plug into another implementation very well. By forcing it to use two separate implementations, you'll get closer to something that is actually flexible, though you'll still have to make further changes to support a third and fourth data store.
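A sketch of what "two real implementations from the start" might look like in .NET terms; all of the names below are invented:

    ' Minimal store abstraction deliberately backed by two implementations.
    Imports System
    Imports System.Collections.Generic
    Imports System.IO

    Public Interface IKeyValueStore
        Function GetValue(key As String) As String
        Sub PutValue(key As String, value As String)
        Sub Delete(key As String)
    End Interface

    ' Implementation 1: in-memory, handy for tests.
    Public Class MemoryStore
        Implements IKeyValueStore

        Private ReadOnly _data As New Dictionary(Of String, String)

        Public Function GetValue(key As String) As String Implements IKeyValueStore.GetValue
            Dim value As String = Nothing
            _data.TryGetValue(key, value)
            Return value
        End Function

        Public Sub PutValue(key As String, value As String) Implements IKeyValueStore.PutValue
            _data(key) = value
        End Sub

        Public Sub Delete(key As String) Implements IKeyValueStore.Delete
            _data.Remove(key)
        End Sub
    End Class

    ' Implementation 2: one file per key. Writing this second backend
    ' immediately raises questions (key escaping, atomicity) that a
    ' single in-memory version would have hidden.
    Public Class FileStore
        Implements IKeyValueStore

        Private ReadOnly _dir As String

        Public Sub New(dir As String)
            _dir = dir
            Directory.CreateDirectory(dir)
        End Sub

        Private Function PathFor(key As String) As String
            Return Path.Combine(_dir, Uri.EscapeDataString(key))
        End Function

        Public Function GetValue(key As String) As String Implements IKeyValueStore.GetValue
            Dim filePath = PathFor(key)
            Return If(File.Exists(filePath), File.ReadAllText(filePath), Nothing)
        End Function

        Public Sub PutValue(key As String, value As String) Implements IKeyValueStore.PutValue
            File.WriteAllText(PathFor(key), value)
        End Sub

        Public Sub Delete(key As String) Implements IKeyValueStore.Delete
            File.Delete(PathFor(key))
        End Sub
    End Class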
Second, don't bother; these things already exist. Microsoft has provided a ton of these for their technologies (ODBC, ADO, ADO.NET, etc.), and I'm sure Ruby/Java/etc. have several as well. I understand the desire to encapsulate the already existing technology, but the more data stores you need to support, the more complexity you need to build in, and the closer you'll get to ADO.NET (or similar technologies). Companies like MS have spent a ton of money and research on solving this exact problem, and that is what they came up with.
I would strongly recommend checking out Twitter's Storehaus project - a key-value store abstraction layer for the JVM, written in Scala, supporting (to date) Memcache, Redis, DynamoDB, MySQL, HBase, Elasticsearch and Kafka.
Storehaus's core module defines three traits:
A read-only ReadableStore with get, getAll and close
A write-only WritableStore with put, putAll and close
A read-write Store combining both
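Loosely rendered in .NET terms (this is only the shape of the idea; the actual Storehaus traits are Scala and asynchronous, returning futures):

    ' Loose .NET rendering of the Storehaus read/write split.
    Imports System.Collections.Generic

    Public Interface ICloseableStore
        Sub Close()
    End Interface

    Public Interface IReadableStore(Of K, V)
        Inherits ICloseableStore
        Function GetValue(key As K) As V
        Function GetAll(keys As IEnumerable(Of K)) As IDictionary(Of K, V)
    End Interface

    Public Interface IWritableStore(Of K, V)
        Inherits ICloseableStore
        Sub Put(key As K, value As V)
        Sub PutAll(pairs As IDictionary(Of K, V))
    End Interface

    ' The read-write store simply combines both capabilities.
    Public Interface IStore(Of K, V)
        Inherits IReadableStore(Of K, V)
        Inherits IWritableStore(Of K, V)
    End Interface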
In the Ruby ecosystem, you should check out moneta, which again provides a unified interface to key/value stores. It has a lot more features than Storehaus.

Why is the Zend Framework API documentation that poor? [closed]

Is it my browser that doesn't work with their API online documentation?
The structure of it seems to be very bad compared to the Java API online documentation and the Yii API online documentation.
I am new to Zend Framework, so I wonder, should it be like that?
I think the API is nice; the only problem is that sometimes they don't give the real meaning of the arguments.
The API may be a bit behind some Java examples, but I find the reference guide quite impressive and complete. You've got 900 pages describing every piece of the framework with short code snippets; that's just wonderful.
Personally, I use the reference guide more often than the API documentation.
I've been digging into Zend Framework for about a month now. I'm starting to catch on, but I have to agree with the initial comment: the API documentation, at least what is available, is atrocious. What is this Dojo stuff anyway? I would expect a proper, standardized API reference for something as extensive and powerful as the Zend Framework. For an experienced software engineer the reference manual is really introductory material; once it is digested, all that is really needed is a good API reference that clearly shows properties, methods, and the inheritance tree, with brief descriptions where necessary. Like Java, AS3, etc. I could have saved myself about two weeks had I had full access to the API. I don't get it, but I intend to persevere with ZF.
For me the problem is that the reference guide simply runs through all the components, with one massive page describing long-winded uses of each component and no sense of where said code should appear in your workflow.
I believe it should be refactored to be like CakePHP's documentation, where each page is targeted at getting a specific task done, like "Saving Your Data", "Deleting Data", "Validating Data" and so on.
Real-life examples with context are a lot more useful than the Zend docs, where I tend to have to guess where certain variables (commonly $db) come from - and which, in full MVC cases, don't even apply.

How to document applications and how they integrate with other applications? [closed]

As the years go by we get more and more applications. Figuring out if one application is using a feature from another application can be hard. If we change something in application A, will something in application B break?
We have been using MediaWiki for documentation, but it's hard to keep the data up-to-date.
I think what we need is some kind of visual map of everything, plus the ability to enforce some sort of referential integrity. Any ideas?
I'm in the same boat and still trying to sell my peers on Enterprise Architect, a CASE tool. It's a round-trip tool - code to diagrams to code is possible. It's a UML-centric tool, although it also supports other methods of notation that I'm unfamiliar with...
Here are some things to consider when selecting a tool for documenting designs (be they inter-system communication, or just designing the internals of a single app):
Usability of the tool. That is, how easy is it to not only create, but also maintain the data you're interested in.
Familiarity with the notation.
A. The notation, such as UML, must be one your staff understands. If you try using a UML tool when only a few people understand how to use it properly, you will get a big ball of confusion, as some people document things incorrectly and someone who understands what the UML says to implement either spots the error or goes ahead and implements the erroneously documented item. Conversely, more sophisticated notations used by the adept will confound the uninitiated.
B. Documentation isn't (and shouldn't be) created only for the documenter's exclusive use, so those who will be reading the documentation must understand what they're reading. Getting a tool with flexible output options is always a good choice.
Cost. There are far more advanced tools than Enterprise Architect. My reasoning for using this one tool is that a lack of UML familiarity and high-pressure schedules leave little room to educate myself or my peers beyond using basic structure diagrams. This tool easily facilitates such use and is more stable than, say, StarUML. (I tried both; StarUML died on reverse engineering masses of code -- millions of lines.) For small projects I found StarUML adequate for home use, up until I got Vista installed. Being open source, it's also free.
With all that said, you will always have to document what uses what, and that means maintaining the documentation! That task is one few companies see the value in, despite its obvious value to those who get to do it...