Import class path location in DataWeave (dwl) in Mule

In MUnit I'm using an assert on the response to do validation, and I'm not sure how to import a DWL module from a multi-level folder path. With a path like sample_data/response/address.dwl the import does not work: import sample_data/response::address is rejected (DataWeave accepts neither the / nor the .). With a single folder followed by the filename it works: import sample_data::address.
What is the import syntax for a multi-level folder in DWL?
I tried this in both runtime 4.3 and 4.2.2. Any thoughts, please? Thanks.
<munit:validation>
    <munit-tools:assert doc:name="Assert payload" doc:id="9df23ba1-befd-4da9-b8aa-a95b5b59efff" message="The payload does not match">
        <munit-tools:that><![CDATA[#[%dw 2.0
import sample_data::address
// how to import multiple folder path sample_data/response::address???
---
address::main({payload: payload})]]]></munit-tools:that>
    </munit-tools:assert>
</munit:validation>
Thanks in advance. Let me know if the question requires more details.

Try the code below. DataWeave uses :: as the folder separator, so a module at sample_data/response/address.dwl is imported as sample_data::response::address:
<munit:validation>
    <munit-tools:assert doc:name="Assert payload" doc:id="9df23ba1-befd-4da9-b8aa-a95b5b59efff" message="The payload does not match">
        <munit-tools:that><![CDATA[#[%dw 2.0
import sample_data::response::address
---
address::main({payload: payload})]]]></munit-tools:that>
    </munit-tools:assert>
</munit:validation>
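For completeness, the assertion expression on its own. A minimal sketch, assuming address.dwl lives under src/main/resources/sample_data/response and declares a function named main (as the question's usage suggests); importing the function by name lets you call it without the module prefix:

%dw 2.0
import main from sample_data::response::address
---
main({payload: payload})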

Related

Move specific file from SFTP to S3 using Airflow

I have a requirement where I need to move a specific file from an SFTP server to an S3 bucket. I am currently using the code below, which moves the required file if I provide its complete path (including the filename). There are multiple files in the SFTP directory, but I only want to move files in .xlsx format (or with .xlsx in the filename). Please suggest how I can do that. I am using SFTPToS3Operator to get the file.
import pysftp
from airflow.models import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.sftp.operators.sftp import SFTPOperator
from airflow.providers.sftp.sensors.sftp import SFTPSensor
from airflow.utils.dates import days_ago
from airflow.models import Variable
from airflow import models
from airflow.providers.amazon.aws.transfers.sftp_to_s3 import SFTPToS3Operator

with DAG("sftp_operators_workflow",
         schedule_interval=None,
         start_date=days_ago(1)) as dag:
    wait_for_input_file = SFTPSensor(task_id="check-for-file",
                                     sftp_conn_id="ssh_conn_id",
                                     path="<full path with filename>",
                                     poke_interval=10)
    sftp_to_s3 = SFTPToS3Operator(
        task_id="sftp_to_s3",
        sftp_conn_id="ssh_conn_id",
        sftp_path="<full path with filename>",
        s3_conn_id="s3_conn_id",
        s3_bucket="<bucket name>",
        s3_key="<full bucket path with filename>")

    wait_for_input_file >> sftp_to_s3
The file that needs to be moved has a filename like M.DD.YYYY.xlsx.
I really appreciate all the help & support on this one.
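No answer was posted in this thread, but one possible approach (a sketch, not from the original thread) is to list the remote directory with the SFTP provider's SFTPHook and create one SFTPToS3Operator per matching file. The connection IDs and bracketed paths below are the question's placeholders; list_directory is the hook method from the Airflow SFTP provider:

from airflow.providers.sftp.hooks.sftp import SFTPHook
from airflow.providers.amazon.aws.transfers.sftp_to_s3 import SFTPToS3Operator

REMOTE_DIR = "<full path to the SFTP directory>"  # placeholder, as above

def xlsx_files():
    # List the remote directory and keep only .xlsx files.
    hook = SFTPHook(ssh_conn_id="ssh_conn_id")
    return [f for f in hook.list_directory(REMOTE_DIR) if f.endswith(".xlsx")]

# Inside the `with DAG(...)` block: one transfer task per matching file.
# Note this runs at DAG parse time, so the SFTP server must be reachable
# from the scheduler; dynamic task mapping is an alternative on Airflow 2.3+.
for name in xlsx_files():
    SFTPToS3Operator(
        task_id=f"sftp_to_s3_{name}",
        sftp_conn_id="ssh_conn_id",
        sftp_path=f"{REMOTE_DIR}/{name}",
        s3_conn_id="s3_conn_id",
        s3_bucket="<bucket name>",
        s3_key=f"<bucket prefix>/{name}",
    )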

Error message on google bigquery when I try to import a file

I want to import my CSV files into BigQuery but it doesn't work.
I get this message: c2580321527929797.
I don't know why. The file is clean and works in DBeaver.

How to write a large CSV file to SFTP in Mule 4

I am trying to write a large CSV file to SFTP.
I used a For Each to split the records and wrote them using the SFTP connector, but the file never reaches the SFTP server.
What am I doing wrong here?
Below is the flow:
<flow name="sftp-Flow" doc:id="294e7265-0bb3-466b-add4-5819088bd33c">
<file:listener doc:name="File picked from path" directory="${processing.folder}" config-ref="File-Inbound" autoDelete="true" matcher="filename-regex-filter" doc:id="bbfb12df-96a4-443f-a137-ef90c74e7de1" outputMimeType="application/csv" primaryNodeOnly="true" timeBetweenSizeCheck="1" timeBetweenSizeCheckUnit="SECONDS">
<repeatable-in-memory-stream initialBufferSize="1" bufferSizeIncrement="1" maxBufferSize="500" bufferUnit="MB"/>
<scheduling-strategy>
<fixed-frequency frequency="${file.connector.polling.frequency}"/>
</scheduling-strategy>
</file:listener>
<set-variable value="#[attributes.fileName]" doc:name="fileName - Variable" doc:id="5f064507-be62-4484-86ea-62d6cfb547fc" variableName="fileName"/>
<foreach doc:name="For Each" doc:id="87b79f6d-1321-4231-bc6d-cffbb859d94b" batchSize="500" collection="#[payload]">
<sftp:write doc:name="Push file to SFTP" doc:id="d1562478-5276-4a6f-a7fa-4a912bb44b8c" config-ref="SFTP-Connector" path='#["${sftp.remote.folder}" ++ "/" ++ vars.fileName]' mode="APPEND">
<reconnect frequency="${destination.sftp.connection.retries.delay}" count="${destination.sftp.connection.retries}"/>
</sftp:write>
</foreach>
<error-handler ref="catch-exception-strategy"/>
I have found the solution. The foreach directive only supports collections in Java, JSON, or XML formats. I just placed a transformer to convert the CSV to JSON before the foreach, and now the file is properly saved in batches.
Instead of splitting the payload into records, try setting the CSV reader to streaming mode:
outputMimeType="application/csv; streaming=true"
Update: the best solution might be to simply remove both the foreach and the outputMimeType attribute from the File listener. The file will then be read and written as binary, streaming straight through to the SFTP write operation. Removing outputMimeType prevents Mule from trying to parse the big file as CSV, which is not really needed: the only CSV processing in the flow was the foreach, and that is no longer required. This method will be faster and consume fewer resources.
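A minimal sketch of that simplified flow, reusing the config names from the question; note there is no outputMimeType on the listener, no foreach, and no set-variable (attributes.fileName is used directly):

<flow name="sftp-Flow" doc:id="294e7265-0bb3-466b-add4-5819088bd33c">
    <!-- No outputMimeType: the payload stays a binary stream -->
    <file:listener doc:name="File picked from path" directory="${processing.folder}" config-ref="File-Inbound" autoDelete="true" matcher="filename-regex-filter" primaryNodeOnly="true">
        <scheduling-strategy>
            <fixed-frequency frequency="${file.connector.polling.frequency}"/>
        </scheduling-strategy>
    </file:listener>
    <!-- Stream the whole file straight to SFTP; no splitting required -->
    <sftp:write doc:name="Push file to SFTP" config-ref="SFTP-Connector" path='#["${sftp.remote.folder}" ++ "/" ++ attributes.fileName]'/>
</flow>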

How do I use sftp:content to write an sftp message from memory

I am trying to create an SFTP file from memory using sftp:write and sftp:content. My DataWeave code is:
<sftp:write doc:name="sftp from memory" doc:id="01bee2a1-69ad-4194-8ec8-c12852521e87" config-ref="SFTP_Config" path="#[vars.sftpFileName]" createParentDirectories="false">
    <sftp:content><![CDATA[%dw 2.0
output application/csv
---
payload.toplevel.secondlevel.bottomlevel[0],
payload.more.andmore.andstillmore[0]
]]></sftp:content>
</sftp:write>
It does create a file in the correct directory, but the contents are not the payload values. Instead, the file contains the actual DataWeave code:
%dw 2.0
output application/csv
---
payload.toplevel.secondlevel.bottomlevel[0]
payload.more.andmore.andstillmore[0]
I am using version 4.2.2 of the Mule Server and 1.3.2 of the SFTP component.
You aren't actually passing DataWeave; you're passing a string. Press the fx button on the field when you want to use DataWeave; the XML will then look like the version below. Notice the extra #[? That indicates this is DataWeave. Your DataWeave is also invalid: you must output an object or an array of objects. To make your output an object, wrap it in { .. } just like JSON and use key-value pairs. When writing CSV, the keys will be used as a header row unless you include header=false in the output line: https://docs.mulesoft.com/mule-runtime/4.3/dataweave-formats-csv#writer_properties
<sftp:write doc:name="sftp from memory" doc:id="01bee2a1-69ad-4194-8ec8-c12852521e87" config-ref="SFTP_Config" path="#[vars.sftpFileName]" createParentDirectories="false">
    <sftp:content><![CDATA[#[%dw 2.0
output application/csv
---
{
    someKeyName: payload.toplevel.secondlevel.bottomlevel[0],
    someOtherKeyName: payload.more.andmore.andstillmore[0]
}]]]></sftp:content>
</sftp:write>
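If you don't want the header row, only the output directive changes. A sketch using the writer property from the linked docs, with the placeholder key names from the answer above:

%dw 2.0
output application/csv header=false
---
{
    someKeyName: payload.toplevel.secondlevel.bottomlevel[0],
    someOtherKeyName: payload.more.andmore.andstillmore[0]
}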

What is the wildcard for the File connector file path field in Anypoint Studio and Mule

I am using Anypoint Studio 7 and Mule 4.1.
A product file in CSV format, with a filename that includes the current timestamp, will be added to a directory on a daily basis and needs to be processed. To do this we are creating a Mule flow using the File connector, and we want to configure the file path field to read only CSV files, regardless of name.
At the moment, the only way I can get it to work is by specifying the exact filename in the file path field, which looks like this:
C:/Workspace/product-files-v1/src/main/resources/input/products-2018112011001111.csv
when I would like to specify some kind of wildcard in the file path, similar to this:
C:/Workspace/product-files-v1/src/main/resources/input/products-*.csv
but the above does not work.
What is the correct wildcard syntax? Also, is there a way to specify a relative file path instead of an absolute one? When I try a relative file path I get an error too.
Error message in logs:
********************************************************************************
Message : Illegal char <*> at index 108: C:/Workspace/product-files-v1/src/main/resources/input/products-*.csv.
Element : product-files-v1/processors/1 # product-files-v1:product-files-v1.xml:16 (Read File)
Element XML : <file:read doc:name="Read File" doc:id="fdbbf477-e831-4e7c-827c-71efd1d2e538" config-ref="File_Config" path="C:/Workspace/product-files-v1/src/main/resources/input/products-*.csv" outputMimeType="application/csv" outputEncoding="UTF-8"></file:read>
Error type : MULE:UNKNOWN
--------------------------------------------------------------------------------
Root Exception stack trace:
java.nio.file.InvalidPathException: Illegal char <*> at index 108: C:/Workspace/product-files-v1/src/main/resources/input/products-*.csv
Thanks for any help
I am assuming you need to use a <file:matcher> when you want to filter or read certain types of files from a directory.
An example would be:
<file:matcher
    filename-pattern="a?*.{htm,html,pdf}"
    path-pattern="a?*.{htm,html,pdf}"
/>
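Applied to the CSV case in the question, a minimal sketch: the matcher is declared as a named global element and referenced from a listener. The directory, flow name, and polling frequency here are assumptions; File_Config is the config name from the question's log:

<file:matcher name="csv-only-matcher" filename-pattern="products-*.csv"/>

<flow name="read-products-flow">
    <file:listener doc:name="On New CSV" config-ref="File_Config" directory="input" matcher="csv-only-matcher" autoDelete="true">
        <scheduling-strategy>
            <fixed-frequency frequency="10000"/>
        </scheduling-strategy>
    </file:listener>
    <!-- payload here is the content of each matched CSV file -->
</flow>

As for the relative-path part of the question: a relative directory like input is resolved against the File connector config's workingDir parameter, so setting workingDir in the <file:config> avoids hard-coding the absolute path.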