I am trying to upload a .csv file in phpMyAdmin without success.
The first way: selecting the current database, importing the zipped file directly, choosing CSV under "Format of imported file", and setting "," as the delimiter.
I get no error, but when the upload completes nothing happens and the page stays blank.
Second way: I created a table with the same number of columns (43) as the CSV file, then imported the CSV file using the same configuration as before.
I get an error saying, essentially, that the number of fields in the imported CSV is not valid, even though it is.
What am I doing wrong?
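As a cross-check outside the import screen, one option is to load the unzipped file from phpMyAdmin's SQL tab with LOAD DATA, assuming the server permits LOCAL infile and that a table (here the hypothetical name mytable) already exists with the matching 43 columns:

    -- Assumes the CSV has been unzipped and is readable by the client,
    -- and that `mytable` already exists with the matching 43 columns.
    LOAD DATA LOCAL INFILE 'data.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;  -- drop this line if the file has no header row

If this also reports a field-count error, the culprit is usually the delimiter or the line endings rather than phpMyAdmin itself.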
During a CSV file import from an external URL I need to execute REPLACE.
(I can't edit the CSV file manually/locally because it's located on the supplier's FTP server and will be used in the future for automated, recurring add/delete/update tasks on the products in the file.)
I have this expression for replacing the value of a column in the CSV file:
REPLACE([CSV_COL(6)],"TEXSON","EXTERNAL")
It works for column 6 in the CSV file because every row value in that column is the same (TEXSON).
What I need help with:
In column 5 of the CSV file I have various values, and there is no connection between them.
How can I run an expression that replaces all values in column 5 of the CSV with "EXTERNAL"?
Maybe there is some "wildcard" that just replaces everything in that column, no matter what the value is...
Additional information: I'm working with PrestaShop Store Manager to import products to the shop from our supplier...
Thanks!
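I can't confirm Store Manager's exact expression dialect, but if its REPLACE follows MySQL semantics, you can turn it into the "wildcard" you describe by searching for the column's own value, so the whole string always matches and gets substituted:

    REPLACE([CSV_COL(5)],[CSV_COL(5)],"EXTERNAL")

One caveat under MySQL semantics: an empty value stays empty, because there is nothing for the search pattern to match.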
I am new to Pentaho. I am trying to build a transformation that can convert a bunch of .xlsx files to .csv (UTF-8).
I tried Get File Names and Text File Output, but that saves a single CSV whose content is the file properties rather than the data.
I also tried Microsoft Excel Input and Microsoft Excel Output, but that did not work either.
Any help will be appreciated. TIA!
I have prepared a solution for you and made it fully dynamic. Because of that, it is a combination of six pieces (transformations and jobs). You only need to define the following two things:
Source folder location
Destination folder location
Everything else works dynamically.
I also learned a lot building this solution.
Would you like to generate a separate CSV for each Excel file?
It is better to do it like this:
Using the Get File Names component, read the list of Excel files from the folder.
Then call Execute Transformation, and pass the name of the file.
A separate transformation will then run for each file, generating a separate CSV for each Excel file.
I have a database in phpMyAdmin. I am trying to export my tables in SQL format, but the export file is generated with nothing in it. When I export and save the file in CSV or JSON format, the data does get saved; however, when I import that CSV or JSON file back, the import fails without giving any error. I can't understand why.
I want to duplicate my complete database from one system to another. How can I do so?
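The usual route is a full SQL dump on the source and an import on the target (phpMyAdmin's Export tab with "SQL" format, then the Import tab on the other system). If both databases happen to live on the same MySQL server, plain SQL can clone them table by table. A minimal sketch with hypothetical database and table names:

    -- Hypothetical names: clones the structure (including indexes) and
    -- data of one table from `olddb` into a fresh `newdb` on the same server.
    CREATE DATABASE newdb;
    CREATE TABLE newdb.customers LIKE olddb.customers;
    INSERT INTO newdb.customers SELECT * FROM olddb.customers;

Repeat the last two statements for each table, and note that views, triggers, and routines need to be recreated separately.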
I am using Cloudera's Hue. In the file browser, I upload a .csv file with about 3,000 rows (the file is small, under 400 KB).
After uploading the file I go to the Data Browser, create a table and import the data into it.
When I go to Hive and run a simple query (say, SELECT * FROM table), I only see results for 99 rows, even though the original .csv has many more.
When I do other queries I notice that several rows of data are missing although they show in the preview in the Hue File Browser.
I have tried other files and they also get truncated, sometimes at 65 or 165 rows.
I have also removed all the "," characters from the .csv data before uploading the file.
I finally solved this. Several issues combined to cause the truncation.
The main one was that the column types were inferred automatically from the first lines of the file during import. When a later value no longer fit the inferred type (TINYINT instead of INT, for example), it was truncated or turned into NULL. To solve this, explore the data first and set the data types explicitly before creating the table.
Other issues were that the memory assigned to the virtual machine slowed the preview process, and that the CSV values contained commas. You can give the VM more memory and convert the CSV to tab-separated values.
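A minimal HiveQL sketch of that fix, using hypothetical table and column names: declare the wide types yourself instead of accepting the inferred ones, and use tabs as the field delimiter:

    -- Hypothetical schema: the point is declaring INT explicitly rather
    -- than letting the importer infer TINYINT from the first rows.
    CREATE TABLE readings (
        id INT,
        sensor_value INT,
        label STRING
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'   -- tab-separated sidesteps embedded commas
    STORED AS TEXTFILE;

    LOAD DATA INPATH '/user/hue/readings.tsv' INTO TABLE readings;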
Hello, I have a CSV file with 1 million rows, and I am trying to import it from the file into a database. Why exactly do I get an error?
The error is pretty clear: it's telling you that there is a mismatch in column lengths. Your source (the CSV file) contains columns that may get truncated when they are imported into the destination (the database).
The CSV column lengths are all 255, while your database columns have length 50.
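One way out, sketched in MySQL syntax with hypothetical table and column names, is to widen the destination columns to match the source (alternatively, shrink the source column metadata if you know the real values never exceed 50 characters):

    -- Hypothetical names: widen the destination so 255-character source
    -- values can no longer be truncated on import.
    ALTER TABLE products
        MODIFY product_name VARCHAR(255),
        MODIFY description  VARCHAR(255);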