General EDI XML processing from different parties - sql

We're starting with EDI with one of our suppliers. We agreed on a fairly simple XML structure that contains all necessary info to place an order, but no more than that.
I will write a tool to generate these XML purchase order files based on the ERP data in our SQL Server database (the ERP can't generate the XML files itself), and to import the order confirmation and shipment messages that will come back from the supplier.
I would like to make this as general as possible, so that XML file formats from other suppliers can also be processed just by adding some configuration, with no hardcoded parsing per supplier.
I'm not an expert on these matters, but I was thinking along the lines of a mapping table in SQL Server that contains the data fields we need (order number, requested date, ordered quantity, ...) and, for each supplier's file format, the path in the XML structure where that element can be found.
So for example we're looking for the order number. For supplier A the order number can be found here:
<ediroot><message><head><ordernumber>
For supplier B the order number can be found here:
<ediroot><message><header><body><order_number>
This way I only have to add rows to the mapping table to support new XML file types.
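Something like this minimal sketch is what I have in mind (table and column names are just placeholders); SQL Server can then pull a value out of an incoming file with .value(), although the stored path has to be spliced into the query dynamically because XQuery paths can't be parameterized directly:

CREATE TABLE dbo.EdiFieldMapping (
    SupplierId  int           NOT NULL,
    MessageType varchar(50)   NOT NULL,  -- e.g. 'PurchaseOrder', 'OrderConfirmation'
    FieldName   varchar(50)   NOT NULL,  -- e.g. 'OrderNumber'
    XmlPath     nvarchar(400) NOT NULL   -- e.g. '/ediroot/message/head/ordernumber'
);

-- Example: extract the order number from an incoming file using the stored path
DECLARE @xml  xml = N'<ediroot><message><head><ordernumber>PO-1001</ordernumber></head></message></ediroot>';
DECLARE @path nvarchar(400) = N'/ediroot/message/head/ordernumber';

DECLARE @sql nvarchar(max) =
    N'SELECT @xml.value(''(' + @path + N')[1]'', ''nvarchar(100)'') AS OrderNumber';
EXEC sp_executesql @sql, N'@xml xml', @xml = @xml;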
Or does this all seem too far-fetched, and are there much easier/better solutions for this? We're an SME; we're not talking about millions of EDI messages to be processed.
Thanks

Related

Adding new product with category tree - specific price problems

I installed an add-on for bulk actions (called ba_importer v1.1.24); I upload an Excel file with my data and create a group of products.
I can set the category tree or manually add the IDs of the main category and the associated ones. I tried, with no luck, to use the tree feature (like Home/Products/etc.), so I use all the IDs of the main and associated categories. The result is a product with the correct categories set, but without the specific price from the customer group linked to a category.
I tried to edit a single product, remove all categories and set them one by one (set one, save, set one, save, etc.), and then the specific price from the group linked to a category appears on the product.
Is there a better solution? I'm thinking about making a custom PHP page that reads an Excel file and sets all the product information, but I'm afraid I'd face the same problem with the specific price.
There is no such thing as a "category-related specific price".
If you have specific prices tied to customer groups, they are created as a result of the add/update product action, with ps_specific_price DB entries having id_group set to your restricted ID.
It is likely that the bulk module acts directly on the DB with queries to speed things up and bypasses this operation; I've seen this behaviour with these kinds of modules in the past.
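As a quick sanity check (the table is the standard PrestaShop ps_specific_price; the product ID below is just an example), you can compare a product imported by the module with one you saved by hand:

-- Rows should exist with id_group set to the restricted customer group
SELECT id_specific_price, id_product, id_group, reduction, reduction_type
FROM ps_specific_price
WHERE id_product = 123;  -- replace 123 with an imported product's ID

If the imported products have no such rows while the manually saved one does, the module is skipping the specific-price step.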
Since you are talking about a paid add-on, I would definitely seek help from the developer.

SQL Server: big tables or store data in an XML field

I have a .NET solution with a big form with a lot of data the customer needs to fill in, like a multi-step form collecting all the data we need.
So I was wondering whether it is better (from a performance and design standpoint) to use a traditional big table with many fields, or to store the data in a single field of XML type.
Example of one "TraditionalTable":
RecordId | CustomerId | Data 1     | Data 2 | ... | Data N
---------|------------|------------|--------|-----|-------
1        | 120        | 01/01/1980 | abcd   | ... | 123
2        | 20         | 04/02/2004 | fgh    | ... | 230
3        | 10         | 05/01/1995 | xyz    | ... | 135
Example of one "DataWithXMLField":
RecordId | CustomerId | FormData
---------|------------|---------
1        | 120        | <data><customerdetails><borndate>01/01/1980</borndate></customerdetails><financialinfo>....
I've done many systems like this and prefer to keep the data as XML (often it's a serialized object). I find this to be efficient at runtime and at design time. (See the item below about binary attachments.)
The following are some suggestions based on what I've done in the past. Obviously it's not a one-size-fits-all approach...
Often data is "collected" by a user and "approved" by an administrator. While collecting the data, it's stored as XML. When approved, the XML is shred and placed into "normal" relational tables/fields.
Often this data is collected across multiple pages. Storing it as XML allows collecting data in a way that is logical to the user but doesn't fit the final data structure very well.
If a form is abandoned (not completed or canceled) it's easy to delete a single row.
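A minimal sketch of that shredding step against the XML-field table from the question (the target table and element names are only illustrative):

DECLARE @RecordId int = 1;  -- the form that was just approved

-- Pull individual values out of the stored XML into a normal relational table
INSERT INTO dbo.CustomerDetails (RecordId, CustomerId, BornDate)
SELECT
    d.RecordId,
    d.CustomerId,
    d.FormData.value('(/data/customerdetails/borndate)[1]', 'varchar(10)')  -- convert to a date type per your locale
FROM dbo.DataWithXMLField AS d
WHERE d.RecordId = @RecordId;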
Things to keep in mind:
Some data is related to workflow and is separate from the data being collected. For example, a field for "Form Status" may go from "In Progress" to "Submitted" to "Approved". This type of data should be kept in regular columns.
Store binary data separately. If your form includes submitting binary data (like uploading a PDF), I like to generate a GUID on the front end, store that GUID in the XML, and then save the binary data separately using the GUID, possibly on disk or in a separate "attachments" table.
Define a column for a "version number" of the XML. This way you can programmatically identify what is in the XML. This will help in the future when you need to make changes to the XML.
Define a column for a "Summary" that is short human-friendly version of the XML. For example, if your XML contains information for registering for summer camps, your "XML Summary" might contain the text: "SMITH,JOHN, Camp White Pine 2021". This text us calculated on the front end. It can then be used for displaying rows of data without having to poke into the XML. For example, an administrative page may exist that lists applications that require approval.
Define a column to indicate whether the XML meets all your requirements. You don't want to validate the XML in the database (it's often hard, and likely duplicates the UI validation). Your business layer can apply business rules (validation) to the XML (or classes) and store in the database an indicator that all business rules are met.
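Putting those suggestions together, the storage table might look something like this (column names are only illustrative):

CREATE TABLE dbo.DataWithXMLField (
    RecordId   int IDENTITY(1,1) PRIMARY KEY,
    CustomerId int             NOT NULL,
    FormStatus varchar(20)     NOT NULL,  -- workflow: 'In Progress', 'Submitted', 'Approved'
    FormData   xml             NULL,      -- the collected form data
    XmlVersion int             NOT NULL,  -- version of the XML layout
    Summary    nvarchar(200)   NULL,      -- human-friendly text calculated on the front end
    IsValid    bit             NOT NULL DEFAULT 0  -- set when the business layer's validation passes
);

-- Binary attachments are referenced from the XML by GUID and stored separately
CREATE TABLE dbo.FormAttachment (
    AttachmentId uniqueidentifier PRIMARY KEY,  -- GUID generated on the front end
    RecordId     int              NOT NULL REFERENCES dbo.DataWithXMLField (RecordId),
    Content      varbinary(max)   NOT NULL
);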

What kind of dynamic content is available in Eloqua?

In Eloqua, can you send out an email to a contact list but version the "hero" image headline for each segment using dynamic content blocks?
And then can you do the reverse, have the main image remain the same, and dynamically populate products below that they've purchased in the past?
For scenario 1, yes that is possible out of the box.
Scenario 2, however, is a bit more complicated and would generally require a third-party tool to provide this type of dynamic code generation based on a lookup table (in this case line-item inventory or purchases). Because a contact could have zero or more products (commonly held as individual records in a CDO), you would generally need to aggregate or count the related records, then generate your HTML table and formatting around those record values, and be contextually aware of whether it is the first or last record (to open and close the table).
Dynamic content does not have mathematical functions and would not be able to count those related records; that is something usually provided by a B2C system like SFMC using AMPscript, or dynamically generated through custom code and sent through a transactional SMTP service. You could stack multiple dynamic content blocks on top of each other, but your biggest limitation becomes the field merge, which only lets you select a record based on earliest/latest creation date or last modified date; that is not suitable if you have more than 2 records.
A third-party service that provides a cloud content module for your email is your best bet.

Linking quotation and order data in a way that lets the order data be flexible

I have developed a data model as per the screenshot. The purpose of this is to have all the relevant quotation data "mapped" out for further analysis.
The data between orders/quotations/invoices etc. is linked by generated link tables, which come directly from the SAP document flow (the document flow table only holds the main document number, like the quotation number, and a line number for the material).
The links serve their purpose and would be correct if the quotations weren't so "flexible".
Orders are linked to invoices at the material level, so there cannot be any difference between them.
But quotations are not hard-linked to orders; I'll explain below.
For example, if a quotation is created with 2 lines and later converted into an order, the user is not forced to keep the order in the same structure as the quotation: he/she can add more items to the order, change quantities, etc. So the two can really only be linked at the header level, i.e. document number to document number.
So this is where I need some help.
I have tried to link the quotations to orders in two ways, but both have their issues.
Doc to Doc: the lines are duplicated when trying to create a report at item level.
Doc + line number to Doc + line number: if the order has extra lines added compared to the quotation, those lines are not captured in the flow.
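For illustration, with simplified, hypothetical table and column names, a left join from the order items keeps the extra order lines while still matching quotation lines where they exist:

SELECT
    o.OrderNumber,
    o.OrderLine,
    o.Material,
    o.OrderQty,
    q.QuotationNumber,
    q.QuotationLine,
    q.QuotedQty
FROM OrderItems AS o
LEFT JOIN QuotationOrderLink AS l
       ON l.OrderNumber = o.OrderNumber
      AND l.OrderLine   = o.OrderLine
LEFT JOIN QuotationItems AS q
       ON q.QuotationNumber = l.QuotationNumber
      AND q.QuotationLine   = l.QuotationLine;

-- Order lines added after the quotation simply show NULL quotation columns
-- instead of disappearing from the item-level report.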
I am hoping someone had a similar task/issue in the past and would be kind enough to share their experience/approach.
Regards

TSQL Query for analyzing Text

I have a table that has order number, cancelled date and reason.
The Reason field is varchar(255) and was filled in by many different sales reps, so it is really hard to group by reason category. I need to generate a report that categorizes cancellation reasons. What is the best way to analyze the reasons with T-SQL?
Sample of reasons entered by sales rep
cust already has this order going out
cust can not hold for item Called to cancel order
cust doesn't want to pay for shipping
wife ordered same item from different vendor, sent email
cst made a duplicate order, sent email
cst can't hold
Cust doesn't want to go through verification process so is cancelling order
doesn't ant to hold for Bo
doesn't want
Cust called to cancel the order He can no longer get the product he wants
cnt hld
will not comply with export req
cant' hold
Custs request
Cust will not hold for BO
per. cust. request.
BTW I have SQL Server 2005.
Part of your problem is that these aren't truly reason codes; it sounds like an issue with your schema to me. If there aren't predefined reason codes to reference and you're allowing free-text entry for each reason, then there's really no way to do this directly, outside of pulling back distinct reasons, which is probably not going to be very useful.
Just an idea: can you add another column to the table, even if it's in a temp or test environment, and then give the business users the ability to assign a code (e.g. 1 for mis-ships, 2 for duplicate orders, 3 for wrong item, etc.) to each order cancellation? Then perform the analysis on that column.
I assume that's what they're expecting from you, but I don't know that I see a better way. You could always perform the analysis yourself if you have the authority/knowledge, but this might be painful if you have a ton of cancellations.
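Something along these lines for the extra column (the table and code values are just an example; the syntax works on SQL Server 2005):

-- Codes the business users can assign
CREATE TABLE dbo.CancelReason (
    ReasonCode  int PRIMARY KEY,
    Description varchar(50) NOT NULL
);
INSERT INTO dbo.CancelReason VALUES (1, 'Mis-ship');
INSERT INTO dbo.CancelReason VALUES (2, 'Duplicate order');
INSERT INTO dbo.CancelReason VALUES (3, 'Wrong item');

-- Column for the users to fill in; dbo.CancelledOrders is a stand-in for your table
ALTER TABLE dbo.CancelledOrders
    ADD ReasonCode int NULL REFERENCES dbo.CancelReason (ReasonCode);

-- The report then becomes a simple GROUP BY
SELECT r.Description, COUNT(*) AS Cancellations
FROM dbo.CancelledOrders AS c
JOIN dbo.CancelReason AS r ON r.ReasonCode = c.ReasonCode
GROUP BY r.Description;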
Edit: I see now that you've tagged this with regex... it would be possible to set up specified keywords to pull out the entries, but there'd have to be some tolerance built in, and still some manual analysis afterwards for items which don't fall into any specified category due to misspellings etc.
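A rough sketch of the keyword idea (the patterns and buckets below are made up from the sample reasons, and the table name is a stand-in); plain LIKE checks work fine on SQL Server 2005:

SELECT
    OrderNumber,
    CancelledDate,
    Reason,
    CASE
        WHEN Reason LIKE '%hold%' OR Reason LIKE '%hld%'          THEN 'Will not hold / backorder'
        WHEN Reason LIKE '%duplicate%' OR Reason LIKE '%already%'
          OR Reason LIKE '%same item%'                            THEN 'Duplicate order'
        WHEN Reason LIKE '%shipping%'                             THEN 'Shipping cost'
        WHEN Reason LIKE '%request%'                              THEN 'Customer request'
        ELSE 'Uncategorized'
    END AS ReasonCategory
FROM dbo.CancelledOrders;  -- stand-in for your cancellations table

Anything that lands in 'Uncategorized' still needs the manual pass mentioned above.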
+1 to #jmatthews, you really need to have reason codes that are selected and then possibly allow free-form entry for the full reason.
If this isn't an option you can look into text clustering. Don't expect that to be fast or easy though, it's still an open research topic and is related to both AI and machine learning.
Look at Term Lookup in SSIS; there are articles available that walk through how to use it.