OpenCSV writing to multiple files every Nth record - iteration

How can I write to a new file in a loop based on every Nth record? If I have 1002 records and I want to start a new file every 500 records, I should end up with 3 files. Currently, all of the records are written to the first file; the other two files are created, but none of the records end up in them.
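A minimal sketch of one way to do the chunking with OpenCSV (the record list, the output file names, and the chunk size of 500 are assumptions for illustration): close the current writer and open a new file whenever the record index hits a multiple of N.

    import com.opencsv.CSVWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.List;

    public class ChunkedCsvWriter {
        // Writes the records in chunks of chunkSize, one output file per chunk.
        public static void writeInChunks(List<String[]> records, int chunkSize) throws IOException {
            CSVWriter writer = null;
            int fileIndex = 0;
            for (int i = 0; i < records.size(); i++) {
                if (i % chunkSize == 0) {            // every Nth record: close the old file, open a new one
                    if (writer != null) {
                        writer.close();
                    }
                    fileIndex++;
                    writer = new CSVWriter(new FileWriter("output" + fileIndex + ".csv"));
                }
                writer.writeNext(records.get(i));    // write to whichever file is currently open
            }
            if (writer != null) {
                writer.close();
            }
        }
    }

With 1002 records and a chunk size of 500, this produces output1.csv and output2.csv with 500 records each and output3.csv with the remaining 2.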

Related

Load a CSV file from a particular point onwards

I have the code below to read a particular CSV file. My problem is that the CSV file contains two sets of data, one underneath the other, with different headers.
The starting point of the second data set can vary daily, so I need something that finds the row in the CSV where the second data set begins (it will always start with the number '302') and loads the CSV from there. The problem I have is that the code below starts from where I need it to start, but it always includes the headers from the first part of the data, which is wrong.
USE csvImpersonation
FILE 'c:\\myfile.TXT'
SKIP_AT_START 3
RETURN #myfile
#loaddata = SELECT * FROM #myfile
where c1 = '302'
Below is a sample of the text file (after the first 3 rows are skipped, which are just full of file settings, dates, etc.).
Any help is much appreciated
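Not an answer in the csvImpersonation syntax above, but the same idea as a plain-Java sketch with OpenCSV (the file name and the '302' marker come from the question; everything else is an assumption): skip rows, including the first data set's headers, until the first column equals "302", then keep everything from that row onwards.

    import com.opencsv.CSVReader;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.List;

    public class SecondDatasetLoader {
        // Skips everything until the row whose first column is "302",
        // then keeps that row and all rows after it.
        public static List<String[]> loadFrom302(String path) throws Exception {
            List<String[]> rows = new ArrayList<>();
            try (CSVReader reader = new CSVReader(new FileReader(path))) {
                String[] row;
                boolean found = false;
                while ((row = reader.readNext()) != null) {
                    if (!found && row.length > 0 && "302".equals(row[0])) {
                        found = true;              // first row of the second data set
                    }
                    if (found) {
                        rows.add(row);
                    }
                }
            }
            return rows;
        }
    }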

How to upload multiple .csv files to PostgreSQL?

I'm working in pgAdmin with PostgreSQL.
There are 10 .csv files on my computer, corresponding to 10 different tables. All 10 tables hold the same type of data; it's just divided into 10 different months. Basically I need one table with all the months. At first I was thinking about creating 10 tables, importing the 10 .csv files, and then combining them into 1 single table.
For each table I go through the same steps: create the table, create the column names, set the data types for each column, and import the .csv file into the table. Then I repeat the same operation 9 more times for the rest of the .csv files. Is there any way to upload all 10 files in one step? Thank you
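One way to avoid repeating the manual import is a loop of COPY commands. A sketch using the PostgreSQL JDBC driver's CopyManager (the connection details, the table name all_months, and the file names month_01.csv .. month_10.csv are assumptions; the target table must already exist with matching columns):

    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CsvBulkLoader {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:postgresql://localhost:5432/mydb";   // assumed connection details
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
                // Load month_01.csv .. month_10.csv into one table that already exists.
                for (int i = 1; i <= 10; i++) {
                    String file = String.format("month_%02d.csv", i);
                    try (FileReader reader = new FileReader(file)) {
                        copy.copyIn("COPY all_months FROM STDIN WITH (FORMAT csv, HEADER)", reader);
                    }
                }
            }
        }
    }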

How to loop through a table and print each row to a different file?

I'm working with a table where one of the columns holds a Cisco router configuration in each row. I want to print each of those configurations to a different file.
For example, the configuration in row 1 will be printed to a file named conf1.conf, row 2 to conf2.conf, etc.
I'm trying to do this with a bash script. How should I do it? I can't really do it manually; the script has to run every single day.
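The question asks for bash, but as a sketch of the loop itself, here is the same idea in Java/JDBC (the connection URL and the table/column names router_configs and config are made up for illustration): select the configuration column and write each row to its own confN.conf file.

    import java.io.FileWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ConfigExporter {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://localhost:3306/network";   // assumed database and driver
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT config FROM router_configs")) {
                int row = 1;
                while (rs.next()) {
                    // one file per row: conf1.conf, conf2.conf, ...
                    try (FileWriter out = new FileWriter("conf" + row + ".conf")) {
                        out.write(rs.getString("config"));
                    }
                    row++;
                }
            }
        }
    }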

Handle 27 million records in a Pentaho transformation

I have a list of 45,000 ids. For every id I need to generate the data for every calendar day, which in the end gives me 27 million records. I can do it manually by passing the list of ids into the transformation and running it, but I wonder what an automated way to do it would be. Save the ids in an .xls/.txt file in batches of 1,000 records and get Pentaho to read one file, run the transformation, save the output, open another file, run the transformation, save the output, and so on?
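Leaving the Pentaho job loop aside, the batching step described above is easy to automate. A sketch that splits an id list into text files of 1,000 ids each for a job to iterate over (the id source and the batch_N.txt naming are assumptions):

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.List;

    public class IdBatcher {
        // Writes the ids into batch_1.txt, batch_2.txt, ... with batchSize ids per file.
        public static void writeBatches(List<String> ids, int batchSize) throws IOException {
            int fileIndex = 0;
            PrintWriter out = null;
            for (int i = 0; i < ids.size(); i++) {
                if (i % batchSize == 0) {          // start a new batch file every batchSize ids
                    if (out != null) {
                        out.close();
                    }
                    fileIndex++;
                    out = new PrintWriter("batch_" + fileIndex + ".txt");
                }
                out.println(ids.get(i));
            }
            if (out != null) {
                out.close();
            }
        }
    }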

AWK command used for putting text from files into different files

I have 2 text files. I'm supposed to use an awk command to put data from these two files into a third group of files that I have already created. I need the first 10 rows of the first file and the first 20 rows of the second file to go into fileNo1, then the second 10 rows of the first file and the second 20 rows of the second file into fileNo2, and so on up to fileNo500. (I have 500 files where I should store the data from the first and second files, and each of these final files will have 30 rows of data.) How can I do this?
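The question asks for awk, but as a sketch of the interleaving logic itself, here it is in plain Java (the input file names first.txt and second.txt are assumptions; output files inherit the fileNoN naming from the question, and copying simply stops early if either input runs out of lines):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class FileInterleaver {
        public static void main(String[] args) throws IOException {
            try (BufferedReader first = new BufferedReader(new FileReader("first.txt"));
                 BufferedReader second = new BufferedReader(new FileReader("second.txt"))) {
                for (int n = 1; n <= 500; n++) {
                    try (PrintWriter out = new PrintWriter("fileNo" + n)) {
                        copyLines(first, out, 10);   // 10 rows from the first file
                        copyLines(second, out, 20);  // 20 rows from the second file
                    }
                }
            }
        }

        // Copies up to count lines from the reader to the writer.
        private static void copyLines(BufferedReader in, PrintWriter out, int count) throws IOException {
            String line;
            for (int i = 0; i < count && (line = in.readLine()) != null; i++) {
                out.println(line);
            }
        }
    }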