Creating a CSV file from SQL data in Mule 4

I need to query data from a SQL database and load it into a CSV file. After the CSV file is generated successfully, I need to transfer it to an SFTP location.

I'm assuming that you are already familiar with Mule 4 applications and flows. In a flow you can query the database using the Database connector, use the Transform Message component to write a DataWeave transformation to CSV (output application/csv), and then use the SFTP connector to write the file to the SFTP server.
You should get familiar with each of these components from the documentation to be able to write the database query, the transformation, and the SFTP transfer.
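This isn't Mule configuration, but purely to illustrate the three steps that flow performs (query, CSV serialization, SFTP upload), here is a rough Python sketch. The driver choices (pymysql, paramiko), connection details, query, and paths are placeholder assumptions, not anything from the question.

```python
# Illustration only: the same three steps a Mule flow would perform,
# written as a plain Python script. Connection details, the query and
# the paths below are placeholders.
import csv
import pymysql    # hypothetical choice of database driver
import paramiko   # SSH/SFTP client library

# 1. Query the SQL database (what the Database connector does in Mule).
conn = pymysql.connect(host="db-host", user="db-user",
                       password="db-pass", database="sales")
with conn.cursor() as cursor:
    cursor.execute("SELECT id, name, amount FROM orders")
    headers = [col[0] for col in cursor.description]
    rows = cursor.fetchall()
conn.close()

# 2. Serialize the result set to CSV (what the DataWeave
#    "output application/csv" transformation does).
local_file = "orders.csv"
with open(local_file, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(rows)

# 3. Upload the file to the SFTP server (what the SFTP connector does).
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("sftp-host", port=22, username="sftp-user", password="sftp-pass")
sftp = ssh.open_sftp()
sftp.put(local_file, "/outbound/orders.csv")
sftp.close()
ssh.close()
```

In the Mule flow, the Database connector, the DataWeave transformation, and the SFTP connector each take the place of one of these steps, with the connection details moved into the connector configurations.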

Related

How to Read Synapse Analytics SQL Script (JSON Format) as a SQL Script in an IDE?

I have a Synapse Git project with SQL scripts created in the Azure Portal (see the Microsoft Docs on SQL scripts). The challenge is that in Git they appear as rather bulky JSON files, and I would love to read them as SQL files in DBeaver or IntelliJ …
Is there any way to do this without having to manually select the query section of the file and clean it up?
First Some Background
Synapse stores all artifacts in JSON format. In the Git repo, each artifact type has its own folder, and inside each folder are the JSON files that define the artifacts. The sqlscript folder contains the JSON for SQL Scripts, with the actual query text stored as a property inside each file.
NOTE: the Synapse folder of the script is just a property - this is why all SQL Script names have to be unique across the entire workspace.
Extracting the script
The workspace does allow you to export a SQL script to a .sql file, but there are drawbacks: you have to do it manually, one file at a time, and you cannot control the output location or the SQL file name.
To pull the SQL back out of the JSON, you have to access the properties.content.query property value and save it as a .sql file. As far as I know, there is no built-in feature to automatically save a script as SQL. A simple copy/paste doesn't really work because of the \n escape sequences.
I think you could automate at least part of this with an Azure DevOps Pipeline (or a GitHub Action). You might need to copy the JSON file out to another location, and then have a process (Data Factory, Azure Function, Logic App, etc.) read the file and extract the query.
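As a concrete starting point for that automation, here is a minimal sketch of the extraction step in Python, assuming a local checkout of the repo, the sqlscript folder described above, and the properties.content.query path; the repo path and output folder name are assumptions.

```python
# Minimal sketch: walk the repo's sqlscript folder, pull the query text
# out of each artifact's properties.content.query property, and save it
# as a .sql file. Paths are assumptions, not part of the original answer.
import json
from pathlib import Path

repo_root = Path("path/to/synapse-repo")   # hypothetical checkout location
out_dir = Path("extracted_sql")
out_dir.mkdir(exist_ok=True)

for json_file in (repo_root / "sqlscript").glob("*.json"):
    artifact = json.loads(json_file.read_text(encoding="utf-8"))
    # The script body lives at properties.content.query; the \n escape
    # sequences become real newlines once the JSON is parsed.
    query = artifact["properties"]["content"]["query"]
    sql_path = out_dir / (json_file.stem + ".sql")
    sql_path.write_text(query, encoding="utf-8")
    print(f"Wrote {sql_path}")
```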

Change database connection in pentaho without using GUI

I want to run Pentaho on Linux CentOS 7.
The server has no GUI, so I can't open the Spoon GUI where we usually drag and drop the components.
In Spoon, we can change the database connection by clicking Database Connection and re-typing the host.
But how do I do that if I can't open Spoon? Is there a file or something where I can change those values?
All transformation (.ktr) and job (.kjb) files are just XML.
You can edit a transformation on your laptop with the correct parameters, save it, find the relevant XML snippet, copy it, open the .ktr on the server in a text editor, delete the old database connection, and paste in the new one.
It may be a bit tricky if you mess something up, but with a few tries you should have it done.
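If hand-editing the XML on the server feels risky, the same change can be scripted. Here is a minimal Python sketch using the standard library's xml.etree module; the element names (connection, name, server, database, port) are the usual ones found in a .ktr file, but verify them against your own transformation before relying on this.

```python
# Minimal sketch: update the host and database of a named connection
# inside a .ktr file without opening Spoon. The element names below
# are the usual Kettle ones; check them against your own file first.
import xml.etree.ElementTree as ET

ktr_path = "my_transformation.ktr"   # hypothetical file name
target_connection = "my_db"          # the connection's <name>

tree = ET.parse(ktr_path)
for conn in tree.getroot().iter("connection"):
    if conn.findtext("name") == target_connection:
        conn.find("server").text = "new-db-host"
        conn.find("database").text = "new_database"
        conn.find("port").text = "3306"
        break

tree.write(ktr_path, encoding="utf-8", xml_declaration=True)
```

Keep a backup of the .ktr first, since rewriting the XML this way can drop comments and reformat the file.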
You can use a JSON file to change the database connection in Pentaho Data Integration without using the GUI.
Put the connection values in that JSON file and load them into variables, so that next time you only need to replace the JSON file on the server where you cannot open the Spoon GUI.
Let me explain how we do it.
First, create a transformation that takes the JSON file as input and sets its values into variables, so they can be used anywhere inside the job as ${variable_name}.
The JSON file contains the connection values. In the JSON Input step, browse to and add your JSON file, then go to the Fields tab and select the fields. In the Set Variables step, use Get Fields to pick them up.
Now suppose we have created these variables from the JSON file; we then use them to create the database connection:
${mysql_host}
${mysql_port}
${mysql_username}
${mysql_password}
${mysql_database_name}
Use these variables in the corresponding fields of the database connection (host name, port, user name, password, database name).
In this way you can build your ETL with a dynamic database connection in Pentaho Data Integration: just replace the JSON file on the server and the database connection changes for the whole ETL package.
This example ETL package can be downloaded from this link:
Download
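For completeness, here is a minimal sketch of how such a JSON configuration file could be produced; the flat structure and key names are assumptions based on the variables listed above, and the field paths configured in the JSON Input step must match whatever structure you actually use.

```python
# Minimal sketch: write the JSON configuration file that the JSON Input /
# Set Variables transformation reads. The flat key layout is an assumption;
# the JSON Input step's field paths (e.g. $.mysql_host) must match it.
import json

connection_config = {
    "mysql_host": "new-db-host",
    "mysql_port": "3306",
    "mysql_username": "etl_user",
    "mysql_password": "etl_password",
    "mysql_database_name": "sales_dw",
}

with open("db_connection.json", "w", encoding="utf-8") as f:
    json.dump(connection_config, f, indent=2)
```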

How to read and write data on CSV file using DB connector in Mule 3.8

How can I read and write data in a CSV file using the DB connector in MuleSoft?
The DB connector returns Java objects, so after the connector you would add a Transform Message component that converts that Java payload to CSV.

WSO2 EI (ESB) - CSV File Data Validation

I am able to read the CSV file using WSO2 EI and insert the CSV data into a MySQL DB. I also picked the CSV file up from SFTP and inserted its data into the MySQL DB. I have followed the WSO2 documentation from the official web site.
But I am not able to validate the data. Please suggest how to write a Java class that validates the CSV file data while reading it.

How can I import a CSV file into the Google Cloud SQL service (as opposed to importing a SQL dump)

How can I import CSV files into the Google Cloud SQL service (as opposed to importing a SQL dump)?
Can I somehow use the SQuirreL SQL client?
Thanks
I just started using Google Cloud SQL. I connected to my instance with MySQL Workbench and created my first table, but then I wanted to import a CSV file into that table, and it was not clear how to do that from MySQL Workbench. So, here are the steps (thanks Aimon Bustardo):
Convert your CSV file to a SQL file using this site (there are other options to convert to SQL, but this is an easy service): http://csv2sql.com/
From MySQL Workbench Navigator for the MySQL instance, click "Data Import/Restore"
Click "Import from Self-Contained File" and then select the SQL file created in Step 1.
Click "Start Import" and your data is uploaded to the Cloud!
Converting it to SQL is your best bet until they have a CSV importer. There are command-line utilities and online services; one that is free is http://csv2sql.com/
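If you would rather not upload your data to an online converter, the conversion is easy to script yourself. Here is a minimal Python sketch that turns a CSV with a header row into a .sql file of INSERT statements; the table name and file names are placeholders, and every value is written as a quoted string, so adjust the escaping and types for real data.

```python
# Minimal sketch: convert a CSV file (with a header row) into a .sql file
# of INSERT statements that can be imported via "Data Import/Restore".
# Table name and file names are placeholders; all values are quoted strings.
import csv

csv_path = "data.csv"     # hypothetical input file
sql_path = "data.sql"
table = "my_table"        # hypothetical target table

with open(csv_path, newline="") as src, open(sql_path, "w") as dst:
    reader = csv.reader(src)
    columns = next(reader)                      # header row
    col_list = ", ".join(f"`{c}`" for c in columns)
    for row in reader:
        values = ", ".join("'" + v.replace("'", "''") + "'" for v in row)
        dst.write(f"INSERT INTO `{table}` ({col_list}) VALUES ({values});\n")
```

The resulting .sql file can then be loaded through the same "Data Import/Restore" steps described above.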