Linking quotation and order data in a way that lets order data be flexible - QlikView

I have developed a data model as per the screenshot. The purpose of this is to have all the relevant quotation data "mapped" out for further analysis.
The data between orders/quotations/invoices etc. is linked via generated link tables, which come directly from the SAP document flow (the document flow table only holds the main document number, such as the quotation number, and the line number of the material).
The links would serve their purpose and be correct if the quotations weren't so "flexible".
Orders are linked to invoices at the material level, so there cannot be any difference between them.
But quotations are not hard-linked to orders; I'll explain below.
For example, if a quotation is created with 2 lines and is later converted into an order, the user is not forced to keep the order in the same structure as the quotation: he/she can add more items to the order, change quantities, etc. So the two can really only be linked at the header level, i.e. document number to document number.
So this is where I need some help.
I have tried to link the quotations to orders in two ways, but both have their issues.
Doc to Doc - the lines are duplicated when trying to create a report at item level.
Doc + line number to Doc + line number - if the order has extra lines added on compared to the quotation, that data is not captured in the flow.
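To make the two behaviours concrete, here is a minimal sketch (hypothetical document numbers, pandas used purely for illustration) of what each join does:

import pandas as pd

quotes = pd.DataFrame({
    "QuoteDoc": ["Q1", "Q1"],
    "QuoteLine": ["10", "20"],
    "Material": ["A", "B"],
})
orders = pd.DataFrame({
    "OrderDoc": ["O1", "O1", "O1"],
    "OrderLine": ["10", "20", "30"],
    "QuoteDoc": ["Q1", "Q1", "Q1"],
    "QuoteLine": ["10", "20", None],   # line 30 was added after conversion
    "Material": ["A", "B", "C"],
})

# Doc to Doc: every quotation line pairs with every order line of the same
# document, so items are duplicated (2 quote lines x 3 order lines = 6 rows).
doc_level = quotes.merge(orders, on="QuoteDoc")
print(len(doc_level))    # 6

# Doc + line to Doc + line: the extra order line (30) has no quotation line,
# so it disappears from the flow entirely.
line_level = quotes.merge(orders, on=["QuoteDoc", "QuoteLine"])
print(len(line_level))   # 2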
I am hoping someone had a similar task/issue in the past and would be kind enough to share their experience/approach.
Regards

Related

What kind of dynamic content is available in Eloqua?

In Eloqua, can you send out an email to a contact list but version the "hero" image headline for each segment using dynamic content blocks?
And then can you do the reverse, have the main image remain the same, and dynamically populate products below that they've purchased in the past?
For scenario 1, yes that is possible out of the box.
Scenario 2, however, is a bit more complicated and would generally require a 3rd-party tool to provide this type of dynamic code generation based upon a lookup table (in this case a line-item inventory or purchases). Because a contact could have zero or more products (commonly as individual records in a CDO), you would generally need to aggregate or count the number of related records, then generate your HTML table and formatting around those record values, and be contextually aware of whether it is the first or last record (to open and close the table). Dynamic content does not have mathematical functions and would not be able to count those related records; this is something usually provided by a B2C system like SFMC using AMPscript, or dynamically generated through custom code and sent through a transactional SMTP service.
You could stack multiple dynamic content blocks on top of each other, but your biggest limitation becomes the field merge, which only lets you select a record based upon earliest/latest creation date or last modified date. This is not suitable if you have more than 2 records. A third-party service that provides a cloud content module for your email is your best bet.
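As a purely illustrative sketch of the kind of custom code referred to above (Eloqua itself does not run this; the field names and data are made up), the aggregation and table generation might look like:

def render_product_table(records):
    # Zero related records: omit the table entirely.
    if not records:
        return ""
    rows = ["<table>"]                    # open the table before the first record
    for rec in records:
        rows.append(
            "<tr><td>{}</td><td>{:.2f}</td></tr>".format(rec["product"], rec["price"])
        )
    rows.append("</table>")               # close it after the last record
    return "\n".join(rows)

# Example: two purchase records for one contact, rendered into the HTML block
# that would be injected into the email and sent via a transactional SMTP service.
purchases = [
    {"product": "Widget", "price": 19.99},
    {"product": "Gadget", "price": 4.50},
]
print(render_product_table(purchases))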

Merge two CSV and collate data

I have two CSV files, the first like so:
Book1:
ID,TITLE,SUBJECT
0001,BLAH,OIL
0002,BLAH,HAMSTER
0003,BLAH,HAMSTER
0004,BLAH,PLANETS
0005,BLAH,JELLO
0006,BLAH,OIL
0007,BLAH,HAMSTER
0008,BLAH,JELLO
0009,BLAH,JELLO
0010,BLAH,HAMSTER
0011,BLAH,OIL
0012,BLAH,OIL
0013,BLAH,OIL
0014,BLAH,JELLO
0015,BLAH,JELLO
0016,BLAH,HAMSTER
0017,BLAH,PLANETS
0018,BLAH,PLANETS
0019,BLAH,HAMSTER
0020,BLAH,HAMSTER
And then a second CSV with items associated with the first list, with ID being the common attribute between the two.
Book2:
ID,ITEM
0001,PURSE
0001,STEAM
0001,SEASHELL
0002,TRUMPET
0002,TRAMPOLINE
0003,PURSE
0003,DOLPHIN
0003,ENVELOPE
0004,SEASHELL
0004,SERPENT
0004,TRUMPET
0005,CAR
0005,NOODLE
0006,CANNONBALL
0006,NOODLE
0006,ORANGE
0006,SEASHELL
0007,CREAM
0007,CANNONBALL
0007,GUM
0008,SERPENT
0008,NOODLE
0008,CAR
0009,CANNONBALL
0009,SERPENT
0009,GRAPE
0010,SERPENT
0010,CAR
0010,TAPE
0011,CANNONBALL
0011,GRAPE
0012,ORANGE
0012,GUM
0012,SEASHELL
0013,NOODLE
0013,CAR
0014,STICK
0014,ORANGE
0015,GUN
0015,GRAPE
0015,STICK
0016,BASEBALL
0016,SEASHELL
0017,CANNONBALL
0017,ORANGE
0017,TRUMPET
0018,GUM
0018,STICK
0018,GRAPE
0018,CAR
0019,CANNONBALL
0019,TRUMPET
0019,ORANGE
0020,TRUMPET
0020,CHERRY
0020,ORANGE
0020,GUM
The real datasets are millions of records, so I'm sorry in advance for my simple example.
The problem I need to solve is getting the data merged and collated in a way where I can see which item groupings most commonly appear together on the same ID. (e.g. GRAPE,GUM,SEASHELL appear together 340 times, ORANGE and STICK 89 times, etc...)
Then I need to see if there is any change/deviation to the general results in common appearance when grouped by SUBJECT.
Tools I'm familiar with are Excel and SQL, but I also have PowerBI and Alteryx at my disposal.
Full disclosure: Not homework, or work, but a volunteer project, thus my unfamiliarity with this kind of data manipulation.
Thanks in advance.
An Alteryx solution:
Drag the two .csv files onto your canvas (seen as book1.csv and book2.csv in my picture); Alteryx will create "Input" tools for you.
Drag a "Join" tool on and connect the two .csv files to its inputs; select "ID" as the join field; unselect the "Right_ID" as output since it's merely a duplicate of "ID"
Drag a "Summary" tool on and connect the Join tool's output to the Summary tool's input; select all three of the outputs and add as a "group by"... then add the ID column with a "count"
Drag a browse tool on and connect the summary's output to the browse tool's input.
Run the workflow.
After all that, click on the Browse tool and you should see what is shown in my screenshot (which shows just the first ten rows of output).
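For comparison, a rough pandas equivalent of those steps (reading the join-then-summarize flow as: group by TITLE, SUBJECT and ITEM, then count IDs) might be:

import pandas as pd

book1 = pd.read_csv("book1.csv", dtype={"ID": str})
book2 = pd.read_csv("book2.csv", dtype={"ID": str})

# Join tool: inner join on ID.
joined = book1.merge(book2, on="ID")

# Summarize tool: group by the descriptive columns and count IDs.
summary = (
    joined.groupby(["TITLE", "SUBJECT", "ITEM"])["ID"]
    .count()
    .reset_index(name="Count")
)
print(summary.head(10))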
+1 for taking on a volunteer project - I think anyone who knows data can have a big impact in support of their favourite group or cause.
I would just pull the 2 files into Power BI as 2 separate tables (Get Data / From File). Create a relationship between the 2 tables based on ID (it might get auto-generated). It should be one to many.
Then I would add a Calculated Column to the Book1 table to concatenate the related ITEM values, e.g.:
Items =
CALCULATE (
    CONCATENATEX (
        DISTINCT ( 'Book2'[ITEM] ),
        'Book2'[ITEM],
        ", ",
        'Book2'[ITEM], ASC
    )
)
Now you can use that Items field in visuals (e.g. a Table), along with Count of ID to get the frequency.
Adding Subject to a copy of the table (e.g. to the Columns well of a Matrix) will produce your grouped scenario, or you could add a Subject Slicer.
As you will be comparing subsets of varying size, I would change Count of ID to Show value as - % of grand total.
A slightly different solution using Alteryx.
With this dataset there are very few repeating 3- or 4-item groups. You can do the two-item affinity analysis and get a probability for 3- or 4-item groups, or you can count the 3- and 4-item groups individually. I believe what you want is the latter, as your probability of getting grapes with oranges may be altered by whether you have bananas in the cart or not.
Anyway, I did not join in the subject until after finding all of my combinations. I found all the combinations by taking the Cartesian join of two, then three, then four of the original set. I then removed all duplicates by ensuring items were always in alphabetical order in each row, and counted occurrences of each combination. More joins can be added in the same pattern to count groups of 5, 6, 7, ...
Once you have the counts of occurrences, then I would join back with the subjects and perform this analysis on each group and compare to the overall results.
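If you wanted to reproduce the combination counting outside Alteryx, a small Python sketch (using the book2.csv file from the question, counting 2- and 3-item groups; sorting the items plays the same role as forcing alphabetical order before de-duplicating) could be:

from collections import Counter
from itertools import combinations
import csv

# Collect the set of items seen on each ID.
items_by_id = {}
with open("book2.csv", newline="") as f:
    for row in csv.DictReader(f):
        items_by_id.setdefault(row["ID"], set()).add(row["ITEM"])

# Count every 2- and 3-item group per ID; extend the tuple (2, 3, 4, ...) to
# count larger groups, just as extra joins are added to the workflow.
group_counts = Counter()
for items in items_by_id.values():
    for size in (2, 3):
        for combo in combinations(sorted(items), size):
            group_counts[combo] += 1

print(group_counts.most_common(10))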
I'm supposed to disclose that I work for Alteryx.
First of all, if you are using Windows, just navigate to the directory that contains the CSV files and run the following command:
copy pattern newFileName.csv
rem example
copy *.csv merged.csv
Now you have created one CSV file. If the file is too large to process in one go, you can use an appropriate approach depending on your programming language: in Python you can use generators to process it line by line, or with pandas you can read it chunk by chunk, which makes this easy.
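A minimal sketch of the chunked approach with pandas (the per-chunk work below is just a placeholder; "merged.csv" is the file produced by the copy command above):

import pandas as pd

# Read the merged file 100,000 rows at a time instead of loading it all at once.
row_count = 0
for chunk in pd.read_csv("merged.csv", chunksize=100_000):
    row_count += len(chunk)   # replace with whatever per-chunk processing you need

print(row_count)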
I hope this helps you.

General EDI XML processing from different parties

We're starting with EDI with one of our suppliers. We agreed on a fairly simple XML structure that contains all necessary info to place an order, but no more than that.
I will write a tool to generate these XML purchase order files based on the ERP data in our SQL Server database (the ERP can't generate the XMLs itself), and to import the order confirmation and shipment messages that will come from the supplier.
I would like to make things as general as possible, so that XML file formats from other suppliers can be processed too by just adding some configuration: no hardcoded parsing per supplier.
I'm not an expert on these matters, but I was thinking along the lines of a mapping table in SQL Server that contains the data fields we need (order number, requested date, ordered quantity, ...), and then, per supplier file format, the path in the XML structure where each element can be found.
So, for example, we're looking for the order number. For supplier A the order number can be found here:
<ediroot><message><head><ordernumber>
For supplier B the order number can be found here:
<ediroot><message><header><body><order_number>
This way I only have to add lines to this mapping table to be able to support new XML file types.
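A minimal sketch of that idea (Python and ElementTree used only for illustration; the mapping is shown as a dict here, but in practice each row would come from the SQL Server mapping table, and the requested_date paths are hypothetical):

import xml.etree.ElementTree as ET

# Logical field name -> path inside the supplier's XML structure.
FIELD_PATHS = {
    "supplier_a": {
        "order_number": "message/head/ordernumber",
        "requested_date": "message/head/requesteddate",      # made-up path
    },
    "supplier_b": {
        "order_number": "message/header/body/order_number",
        "requested_date": "message/header/body/requested_date",  # made-up path
    },
}

def extract_fields(xml_text, supplier):
    root = ET.fromstring(xml_text)        # root element is <ediroot>
    values = {}
    for field, path in FIELD_PATHS[supplier].items():
        node = root.find(path)            # paths are relative to <ediroot>
        values[field] = node.text if node is not None else None
    return values

# Usage: order = extract_fields(open("order_confirmation.xml").read(), "supplier_a")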
Or does this all seem too far-fetched, and are there far easier/better solutions for this? We're an SME; we're not talking about millions of EDI messages to be processed.
Thanks

Making an RDLC datasource filter from another datasource

So, to summarise the problem, I have a report which has two datasources and is really two reports stuck together. I want the second part of the report to display data based on what the first part of the report is showing.
To go into more detail, the situation is as follows. I have two database tables: let's call one Customers and the other Orders.
Customers contains data about the customers.
Orders contains a link to Customers and holds each customer's orders.
The report itself is supposed to display some sort of letter in part 1:
"Hello [CustomerName], you have an ongoing balance of [TotalBalance] bla bla bla..."
and a list of all the orders he has made in part 2
"Order 1: Item 1: 1 euro
Order 2: Item 2: 2 euro ..."
Originally these were two separate reports which we were generating one record at a time, outputting as PDF files and merging with third-party software such that the letter and the list of orders were next to each other. The problem is that this system will need to generate hundreds of them at a time, and it was taking ages. So now I want to pass a pair of large data sources and generate them in batches (say 600 at a time), which works faster.
So how can I force the second tablix which uses a different datasource, to filter based on what is in the first tablix with its own datasource?
I've looked at subreports, but they only work using reporting server and these are local reports.
Anything I can do? I'm worried that it's not possible.
There's no reason subreports won't work with local reports.
I recommend you download the samples from this site ReportViewer Samples. The project named "SupplyingData" shows how to load data into a subreport.

Database Design: Line Items & Additional Items

I am looking for a solution or to be told it simply is not possible/good practice.
I currently have a database whereby I can create new orders and select from a lookup table of products that I offer. This works great for the most part, but I would also like to be able to add random miscellaneous items to an order. For instance, one invoice may read "End of Tenancy Clean" (the listed product) but then also have an entry for "2x Lightbulb" or something to that effect.
I have tried creating another lookup table for these items, but the problem is I don't want to have to pre-define every conceivable item before I can make orders. I would much prefer to be able to simply type in the item and price when it is needed.
Is there any database design or workaround that can achieve this? Any help is greatly appreciated. FYI, I am using LightSwitch 2012 if that helps.
One option I've seen in the past is a record in your normal items table labeled something like "Additional Service", and the application code will recognize this item and also require you to enter or edit a description to print with the invoice.
In the ERP system we have at work, there is a flag in the parts table which allows one to change the description of the part in orders; in other words, one lists the part number in the order and then changes the description. This one-off description is stored in a special table (called NONSTANDARD) which basically has two fields: an id field and the description. There is a field in the 'orderlines' table which stores the id of the record in the special table. Normally the value of this field will be 0, which means that the normal description of the part will be displayed, but if it's greater than 0, then the description is taken from the appropriate row in the NONSTANDARD table.
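Purely as an illustration of that lookup (hypothetical part numbers and table contents, not the actual ERP code):

# If the order line's nonstandard id is 0, show the part's normal description;
# otherwise take the one-off description from the NONSTANDARD table.
parts = {"P-100": "End of Tenancy Clean"}    # normal parts table
nonstandard = {1: "2x Lightbulb"}            # NONSTANDARD: id -> description

order_lines = [
    {"part_no": "P-100", "nonstandard_id": 0},
    {"part_no": "P-100", "nonstandard_id": 1},
]

def line_description(line):
    if line["nonstandard_id"] > 0:
        return nonstandard[line["nonstandard_id"]]
    return parts[line["part_no"]]

print([line_description(line) for line in order_lines])
# ['End of Tenancy Clean', '2x Lightbulb']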
You mean something like this?
(only key attributes included, for brevity)