CSV to CSV DataMapper in Mule

I am trying to transform one CSV file to another using Mule.
But here is my situation: for example, I have 4 headers in the source CSV file,
header1, header2, header3, header4
and the client may pass only the first 3 headers and their values in the CSV file. I am getting an error if the Mule DataMapper does not find all the headers in the source CSV:
Parsing error: Unexpected end of file in record 1, field 2 ("test2"),
metadata "headertest"; value: '<Raw record data is not available,
please turn on verbose mode.>'
How can I configure the DataMapper to work if the source file does not contain all of the headers/values?

I couldn't find a clean way to do that yet, but you could add a pre-processing step that appends a field separator to the end of each line in the input CSV (i.e. add a comma at the end of each line).
This way the last field will be assumed empty.
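For illustration, here is a minimal Java sketch of such a pre-processing step (an assumption on my part, not DataMapper configuration; the class name is made up, and the naive comma split does not handle quoted fields that contain commas):

import java.util.ArrayList;
import java.util.List;

public class CsvPadder {

    // Pads each CSV line with trailing field separators so that every
    // record has the expected number of fields; missing trailing fields
    // are then parsed as empty instead of causing an end-of-file error.
    public static List<String> padLines(List<String> lines, int expectedFields) {
        List<String> padded = new ArrayList<>();
        for (String line : lines) {
            int fields = line.split(",", -1).length; // -1 keeps trailing empties
            StringBuilder sb = new StringBuilder(line);
            for (int i = fields; i < expectedFields; i++) {
                sb.append(',');
            }
            padded.add(sb.toString());
        }
        return padded;
    }
}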
HTH,
Marcos

Related

How to add titles in a CSV file for saved variables in a BeanShell script?

I'm using JMeter to extract data and save it in a CSV file for a mail notification. I'm getting the data, but I need to add headers for the saved variables.
I need to get the titles for the saved variables in the CSV file, e.g.:
Info RefNum (title)
Success xxxxxxxx (variables)
You need to print the comma-separated headers before the data:
print("Info,RefNum (Title),Transaction Id,Created, Success,Error");
print(update+","+refno+","+txnid+","+created+","+statusu+","+error);
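Note that if those print statements run once per sample, the header line will repeat. A minimal sketch of one way around that (an assumption, not from the question: it uses a plain FileWriter from BeanShell, the file path is hypothetical, and update, refno, txnid, created, statusu and error are the variables from the question):

import java.io.File;
import java.io.FileWriter;

File csv = new File("C:/results/output.csv"); // hypothetical path
boolean writeHeader = !csv.exists() || csv.length() == 0;
FileWriter writer = new FileWriter(csv, true); // append mode
if (writeHeader) {
    // write the header row only when the file is new or empty
    writer.write("Info,RefNum (Title),Transaction Id,Created,Success,Error\n");
}
writer.write(update + "," + refno + "," + txnid + "," + created + "," + statusu + "," + error + "\n");
writer.close();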

Load compressed data from Amazon S3 to PostgreSQL using DataStage

I am trying to load data stored in .gz format in S3 into a PostgreSQL server using DataStage. I am using the ODBC connector on the target (database) side. I am able to load uncompressed data from S3 into PostgreSQL, but no luck with compressed data so far. I have tried the Expand stage, but either it's not helping or I am not doing the right thing. Without the Expand stage the data arrives, but the connector tries to read the compressed bytes as plain text, fails, and throws an error (presumably because the gzip byte stream contains no plain-text row delimiter):
Amazon_S3_0,1: com.ascential.e2.common.CC_Exception: Failed to initialize the parser: The row delimiter was not found within the first 132 bytes of the file. Ensure that the Row delimiter property matches the row delimiter of the file.
at com.ibm.iis.cc.cloud.CloudLogger.createCCException(CloudLogger.java:196)
at com.ibm.iis.cc.cloud.CloudStage.processReadAndParse(CloudStage.java:1591)
at com.ibm.iis.cc.cloud.CloudStage.process(CloudStage.java:680)
at com.ibm.is.cc.javastage.connector.CC_JavaAdapter.run(CC_JavaAdapter.java:443)
Amazon_S3_0,1: Failed to initialize the parser: The row delimiter was not found within the first 132 bytes of the file. Ensure that the Row delimiter property matches the row delimiter of the file. (com.ibm.iis.cc.cloud.CloudLogger::createCCException, file CloudLogger.java, line 196)
If someone has come across this, please share your valuable inputs.
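In the meantime, a quick sanity check outside DataStage can confirm both symptoms: that the object really is gzip-compressed (gzip files start with the magic bytes 0x1f 0x8b) and that the decompressed stream contains a row delimiter within the first 132 bytes the parser inspects. A minimal Java sketch, with a hypothetical local copy of the file:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

public class GzCheck {
    public static void main(String[] args) throws Exception {
        String path = "data.csv.gz"; // hypothetical local copy of the S3 object

        // 1. Is the file actually gzip? Check the two magic bytes.
        try (InputStream in = new FileInputStream(path)) {
            int b1 = in.read(), b2 = in.read();
            System.out.println("gzip magic present: " + (b1 == 0x1f && b2 == 0x8b));
        }

        // 2. Does the decompressed stream contain a row delimiter early on?
        try (GZIPInputStream gz = new GZIPInputStream(new FileInputStream(path))) {
            byte[] buf = new byte[132];
            int n = gz.read(buf);
            String head = new String(buf, 0, Math.max(n, 0), "UTF-8");
            System.out.println("row delimiter in first 132 bytes: " + head.contains("\n"));
        }
    }
}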

ADLA AUs assigned for JSON files

I have a custom extractor with AtomicFileProcessing set to false. It extracts a large number of JSON files (each line in a file is a JSON document) and outputs two files, one with successful and one with failed requests, both of which contain the JSON rows (more than one AU is allocated to extract the files). The problem is that when I use the same extractor to extract the files output in the first step with more than one AU, it fails with the error: Unexpected character encountered while parsing value: e. Path '', line 0, position 0.
If I assign 1 AU on Azure, or run this locally with the AU count set to more than 1, it successfully processes the data. Is this behavior caused by giving more AUs to a single JSON file? Since the file is in a non-splittable format, can it not be parallelized?
You can solve this problem by converting your JSON file to JSON Lines:
http://jsonlines.org/examples/
Then you need to read the file using the text extractor and use the JsonFunctions available in Microsoft.Analytics.Samples.Formats to read the JSON.
That transformation will make your file splittable, and you can parallelize it!
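For illustration, a minimal Java sketch of that conversion (an assumption rather than the U-SQL job itself: it uses the Jackson library, assumes the input is a top-level JSON array, and the file names are hypothetical):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ToJsonLines {
    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        // Parse the whole input document (assumed to be a JSON array).
        JsonNode root = mapper.readTree(Files.newInputStream(Paths.get("input.json")));
        try (BufferedWriter out = Files.newBufferedWriter(Paths.get("output.jsonl"))) {
            // Emit each array element as one compact JSON document per line,
            // which makes the output line-splittable for parallel extraction.
            for (JsonNode doc : root) {
                out.write(mapper.writeValueAsString(doc));
                out.newLine();
            }
        }
    }
}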

Upload CSV files with commas inside them

As per my requirement, I need to upload a .csv file into the application. I am trying to simulate this using LoadRunner. The issue I am encountering is that my CSV file is in the below format:
Header - AA,BB,CC
Data - xyz,"yyx,zzy",xxz
On using the below statement to upload the file, I am getting the error "line 2 contains 4 columns instead of 3":
web_submit_data("upload",
"Action=xxx/upload",
"Method=POST",
"EncType=multipart/form-data",
"RecContentType=text/html",
"Referer=xxx",
"Snapshot=t86.inf",
"Mode=HTML",
ITEMDATA,
"Name=utf8", "Value=✓", ENDITEM,
"Name=token", "Value={token_1}", ENDITEM,
"Name=upload_file", "Value={NewParam_5}", "File=yes", "ContentType=text/csv", ENDITEM,
"Name=Button1", "Value=Upload", ENDITEM,
LAST);
As per the information provided in How to deal with a string with comma in it from a csv, when we have to read the data by using loadrunner?, I tried updating the .prm file to use a new delimiter, pipe (|), but I still get the error:
[parameter:NewParam_5]
Delimiter="|"
ParamName="NewParam_5"
TableLocation="C:\temp"
ColumnName="Col 1"
I also notice that even though I set the delimiter to pipe, if I right-click on the web_submit_data() and go to Parameter Properties, I see a column delimiter option there as well. It is set to comma, not pipe, which indicates that this setting takes precedence over the setting in the .prm file.
Can someone please guide me on the right way to set a new delimiter so that VuGen recognizes and parses the CSV file the way I want it to?
I am using LoadRunner 12.5.
Thanks for your help.
Do you need to upload a file, or a line of comma-separated variables? Right now you appear to be reading a line of CSV variables, not a file: for a file upload, your parameter file would contain a list of filenames, each referencing a file either placed in the directory of the virtual user (extra files, transferred with the user) or created by the virtual user and then uploaded.
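For example, a sketch of the file-upload approach (the parameter name, file names and location are assumptions): the .prm entry keeps the default comma delimiter, and the data table lists one CSV file name per row rather than the CSV contents themselves:

[parameter:FileName]
Delimiter=","
ParamName="FileName"
TableLocation="C:\temp"
ColumnName="Col 1"

with C:\temp\FileName.dat containing the column header followed by one file name per row:

Col 1
upload1.csv
upload2.csv

The web_submit_data() item would then use "Value={FileName}", "File=yes", so each iteration uploads a whole file and the commas inside it never pass through the parameter engine.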

Pentaho Spoon Text File Output Additional Information Header

I am using the Text File Output step to create a CSV file; however, I need to insert some additional rows of information at the top of the file. I have been able to have another transform output this data in a previous job step, however doing so prevents me from outputting column headers in the appended CSV output.
The end result I am looking for would look something like this:
EXTRACT TYPE: XYZ
DATE: 20110520
FIRST NAME,LAST NAME,AMOUNT
charlie, chaplain, 2345
someone, else, 1234
Any help would be greatly appreciated. Thanks.
You can output the text file with the header option turned off. Check the KTR file; I attach the links below.
Here's the KTR file: http://pentaho.phi-integration.com/kettle/kettle-files/csv_header_solution.ktr and the sample source CSV file: http://pentaho.phi-integration.com/kettle/kettle-files/source.csv.
Hope this helps.
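If a scripted post-processing step is acceptable instead, here is a minimal Java sketch of the same idea (the file names are hypothetical): write the additional information rows first, then append the headered CSV produced by Text File Output:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class PrependInfoHeader {
    public static void main(String[] args) throws IOException {
        List<String> out = new ArrayList<>();
        // Additional information rows go first...
        out.add("EXTRACT TYPE: XYZ");
        out.add("DATE: 20110520");
        // ...followed by the column header and data rows from Text File Output.
        out.addAll(Files.readAllLines(Paths.get("extract.csv"))); // hypothetical path
        Files.write(Paths.get("extract_with_info.csv"), out);
    }
}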