How to convert XML to Excel using Mule?

Input: we will receive XML files from email, FTP, etc.
Required output: an Excel file.

The lovely new Mule DataMapper (Mule 3.4) will process the XML to Excel; you can select and transform the required attributes.
The older approach is to process the XML and convert it to CSV files that can be read by most Excel tools. The XML can easily be parsed to maps and the required parts output using the Maps to CSV transformer with a Flatpack formatting file; a rough sketch of the same idea in DataWeave follows.
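For comparison, in current Mule versions (Mule 4) the same XML-to-CSV idea can be written directly in DataWeave instead of DataMapper or Flatpack. This is only a sketch; the element names (orders, order, id, customer, amount) are invented placeholders for whatever your XML actually contains:
%dw 2.0
output application/csv header=true
---
// Turn each repeated <order> element into one CSV row
payload.orders.*order map (item) -> {
    id: item.@id,
    customer: item.customer,
    amount: item.amount
}
The resulting CSV file opens directly in Excel.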

Related

How to parse a flat file into a Mule object using a MuleSoft Flat File Schema

I have a requirement where I need a MuleSoft Flat File Schema to parse incoming file content, row by row, and convert it to a Mule object. The file contains multiple rows with 5-7 attributes per row. I have seen many examples, but none of them explains how to create the flat file schema needed to process the flat file in Anypoint Studio.
Could you please help me with this?
Input file -
1220612WEBL23555606CA01
200000162608361 FFVV220606D915552982635 4TKTT0140MAZUR/ISWAR APRIL C YXYYXY /C9F6R1 MTHO DTD 0000
G002389100000000000CAD2070231 0 996AC 001 RESLE BALANCE
700CAD 0.00 NO ADC 00 0 00142152020558 Y262990535
889486594HGMRNL8785 00000000000082204CAD2 CC5 0423 0423 000000000020512 00000000000 CAD2 EX 000000
8002389 00000000000 CAD2 CA 00000000000 00000000000 00000000000
9002389 AGT6490/00 CASH
Z00625
The basic steps are simple:
Define the structure of the file. You need to understand the records and fields of the file completely.
Convert that definition into a flat file schema. Read the documentation; if the file has a simple record structure it should be very direct.
Use the schema in a DataWeave transformation.
Flat file formats are usually custom defined. It is very unlikely that there is a tool to automatically translate such a definition into a flat file schema, since flat files lack delimiters and explicit structure. You will have to read the definition of the structure and create the flat file schema manually. For that you need to understand how flat file schemas are structured, by reading the documentation. They are not complex, but you may want to start with simpler examples until you get the hang of it; a small sketch follows.
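For illustration only, a minimal fixed-width schema for a file in which every row follows a single layout might look roughly like this. The field names, types, and lengths are invented placeholders, and the exact keys should be checked against the MuleSoft Flat File Schema Language reference:
form: FIXEDWIDTH
id: 'DetailRecord'
name: DetailRecord
values:
- { name: 'recordType', type: String, length: 3 }
- { name: 'agentCode', type: String, length: 7 }
- { name: 'currency', type: String, length: 3 }
- { name: 'amount', type: Decimal, length: 11 }
In Anypoint Studio you would typically save this as a schema file (.ffd), point the input metadata of a Transform Message (DataWeave) component at it (MIME type application/flatfile with a schemaPath reader property), and each row is then read as an object with those fields.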

Store XML result as a .xml file

By using the query:
SELECT CAST([RESULT] as xml)
FROM [dbo].[EVENT_RESULTS]
WHERE RESULT_TYPE=21
AND ANNUALIZATION_ID = 1
I produced an XML output that, of course, spans multiple lines. Now I simply want to store this output as a .xml file in the C: folder.
The query output is shown as a single XML value; if I click on it, it opens as a .xml file. But how can I store this result automatically when I run the SQL script? I don't want to do the storage by hand, since the SQL code will be part of a batch program.
Thanks very much!
This works in MS SQL Server:
SELECT * FROM TableName FOR XML RAW('Rec'), ELEMENTS XSINIL, ROOT('Data')
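To write the result to a file automatically from a batch program, one option is to run the query through bcp in queryout mode. This is only a sketch; the server name, database name, and output path are assumptions:
REM Export the XML result to a file using bcp (Windows authentication, Unicode output)
bcp "SELECT CAST([RESULT] AS XML) FROM [MyDatabase].[dbo].[EVENT_RESULTS] WHERE RESULT_TYPE = 21 AND ANNUALIZATION_ID = 1" queryout "C:\export\result.xml" -S MyServer -T -w
The -T switch uses Windows authentication and -w writes Unicode text; each row of the result set is appended to the file.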

ADLA AUs assigned for JSON files

I have a custom extractor with AtomicFileProcessing set to false. It extracts a large number of JSON files (each line in a file is a JSON document) and outputs two files with the successful and failed requests; both of them contain the JSON rows (more than 1 AU was allocated to extract the files). The problem is that when I use the same extractor, with more than one AU, to extract the files produced in the first step, it fails with the error: Unexpected character encountered while parsing value: e. Path '', line 0, position 0.
If I assign 1 AU on Azure, or run this locally with the AU count set to more than 1, it successfully processes the data. Is this behavior caused by more than one AU being assigned to process a single JSON file, which cannot be parallelized because the format is not splittable?
You can solve this problem by converting your JSON file to JSON Lines:
http://jsonlines.org/examples/
Then read the file using the text extractor and use the JsonFunctions available in Microsoft.Analytics.Samples.Formats to parse the JSON.
That transformation makes your file splittable, so it can be parallelized; a sketch follows.
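As a rough sketch only (the paths, column names, and JSON property names are assumptions, and the two assemblies must already be registered in your ADLA database), the U-SQL could look like this once the file is in JSON Lines form:
REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];

USING Microsoft.Analytics.Samples.Formats.Json;

// Read each line as one string column; '\b' is a column delimiter that never occurs,
// so every physical line arrives as a single value and the input stays splittable.
@lines =
    EXTRACT jsonLine string
    FROM "/input/requests.jsonl"
    USING Extractors.Text(delimiter : '\b', quoting : false);

// Parse each line into a map of JSON properties.
@parsed =
    SELECT JsonFunctions.JsonTuple(jsonLine) AS record
    FROM @lines;

@result =
    SELECT record["id"] AS id,
           record["status"] AS status
    FROM @parsed;

OUTPUT @result
TO "/output/requests.csv"
USING Outputters.Csv();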

How can I create output as a CSV file in HP Exstream?

Hi, I am new to HP Exstream. I am trying to create output in CSV instead of PDF.
I could not find any indication in the documentation. Please help.
HP Exstream is used to generate output in AFP, PDF, HTML, PCL, etc. It doesn't support generating CSV as an output/output queue.
However, under the "Data file" section of an "Application", a report file can be generated. This report file can be a delimited file, and you can add any delimiter you want.

CSV to CSV DataMapper in Mule

I am trying to transform one CSV file into another using Mule.
For example, I have 4 headers in the source CSV file:
header1, header2, header3, header4
The client may pass only the first 3 headers and their values in the CSV file. I get an error if Mule DataMapper does not find all the headers in the source CSV.
Parsing error: Unexpected end of file in record 1, field 2 ("test2"),
metadata "headertest"; value: '<Raw record data is not available,
please turn on verbose mode.>'
How can I set up DataMapper to work when the source file does not contain all the headers/values?
I couldn't find a clean way to do that yet, but you could add a pre-processing step that adds a field separator at the end of each line in the input CSV (i.e. add a comma at the end of each line).
This way the last field will be assumed to be empty. A rough sketch of such a pre-processing step is below.
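As a standalone illustration only (the file names are assumptions, and in a real flow this could just as well be a small script or a custom transformer run before DataMapper), the pre-processing step could look like this in Java:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class CsvPadder {
    public static void main(String[] args) throws IOException {
        Path in = Paths.get("input.csv");          // hypothetical source file
        Path out = Paths.get("input-padded.csv");  // file handed to DataMapper
        // Append a trailing field separator to every line so that a missing
        // last column is read as an empty value instead of causing a parse error.
        List<String> padded = Files.readAllLines(in).stream()
                .map(line -> line + ",")
                .collect(Collectors.toList());
        Files.write(out, padded);
    }
}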
HTH,
Marcos