I have two text files. I'm supposed to use the awk command to put data from these two files into a third group of files that I have already created. I need to put the first 10 rows of the first file and the first 20 rows of the second file into fileNo1, then the second 10 rows of the first file and the second 20 rows of the second file into fileNo2, and so on up to fileNo500. (I have 500 files where I should store data from the first and second file, and each of these final files will have 30 rows of data.) How can I do this?
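A minimal sketch with GNU awk, assuming the inputs are named file1.txt and file2.txt (placeholder names) and that the first file has at least 5000 rows and the second at least 10000; it writes fileNo1 through fileNo500 in the current directory:

awk '
    # first pass (file1.txt): rows 1-10 go to fileNo1, rows 11-20 to fileNo2, ...
    FNR == NR { if (FNR <= 5000)  print > ("fileNo" (int((FNR - 1) / 10) + 1)); next }
    # second pass (file2.txt): rows 1-20 go to fileNo1, rows 21-40 to fileNo2, ...
              { if (FNR <= 10000) print > ("fileNo" (int((FNR - 1) / 20) + 1)) }
' file1.txt file2.txt

Within a single awk run, > truncates each output file only the first time it is written and appends after that, so each fileNoN ends up with the 10 rows from the first file followed by the 20 rows from the second.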
Related
How can I write to a new file in a loop based on every Nth record? If I have 1002 records and I want to create a new file every 500 records, I should end up with 3 files. Currently, all the records are written to file one; the other two files are created, but none of the records end up in them.
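If a shell approach is acceptable, a minimal awk sketch (the input name records.txt and the output prefix part are placeholders) sends every 500 input lines to a new file, so 1002 records produce part1, part2, and part3:

awk '{ print > ("part" (int((NR - 1) / 500) + 1)) }' records.txt

The standard split utility does the same job: split -l 500 records.txt part creates partaa, partab, and partac. In a hand-written loop, the usual cause of the symptom you describe is that the output file name is computed once before the loop instead of being recomputed (and the old file closed, the new one opened) every 500 records.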
I have the below code to read a particular CSV file. My problem is that the CSV file contains two sets of data, one underneath the other, with different headers.
The starting point of the second data set can vary daily, so I need something that lets me find the row in the CSV where the second dataset begins (it will always start with the number '302') and load the CSV from there. The problem I have is that the code below starts from where I need it to start, but it always includes the headers from the first part of the data, which is wrong.
USE csvImpersonation
FILE 'c:\\myfile.TXT'
SKIP_AT_START 3
RETURN #myfile
#loaddata = SELECT * FROM #myfile
where c1 = '302'
Below is a sample of the text file (after the first 3 rows, which are just full of file settings, dates, etc., are skipped).
Any help is much appreciated
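If preprocessing the file outside the tool is an option, a rough awk sketch (assuming comma-separated fields and that '302' appears in the first field of the first row of the second dataset; the file names are placeholders) keeps everything from that row onward, so the first dataset and its headers never reach the loader:

awk -F',' '$1 == "302" { found = 1 } found' myfile.TXT > second_dataset.TXT

The filtered copy could then be loaded as-is.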
I'm working on a table that has, in one of its columns, the configuration of a Cisco router in each of its rows. I want to print each of those configurations to a different file.
e.g. the config in row 1 will be printed to a file named conf1.conf, row 2 will go to conf2.conf, etc.
I'm trying to do this with a bash script. How should I do it? I can't really do it manually; it's a script that must be repeated every single day.
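A minimal bash sketch, assuming the table lives in a MySQL database (the database mydb, table routers, and column config below are placeholder names to adapt) and that each configuration comes back as a single line of the client's batch output:

#!/bin/bash
# write each row's configuration to its own file: conf1.conf, conf2.conf, ...
n=0
mysql -N -B -e "SELECT config FROM routers" mydb |
while IFS= read -r config; do
    n=$((n + 1))
    printf '%s\n' "$config" > "conf${n}.conf"
done

If the configurations contain embedded newlines, the one-line-per-row assumption breaks down and you would need to export them one row at a time instead. Since the job has to run every day, the script can be scheduled with cron.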
Sorry for the extremely awkward wording in that question. I'll explain.
I have a table with 14 columns into which I'm trying to import data via BCP. My data comes from a text file. This text file is TAB delimited, so logically there should be 13 delimiters for the 14 cells of data in a row. My data is inconsistent and doesn't have delimiters if the values at the end are null, which means that some rows of data only have 10 delimiters. This causes my data to "wrap around" when it is imported: the first cell of data in my text file is being put in the 10th column of the row prior to it, when it should be the first cell in its own new row.
The thing is, every single row in the text file ends with "CRLF", which BCP uses by default.
Is there a way to tell BCP to fill in all 14 columns before moving on to the next row? Or will I have to reformat my data file every time I import (not ideal)?
Here is my BCP command:
bcp testdb.dbo.MACARP in C:\Users\sysbrady\Desktop\MyData.txt /c /T /t "\t" /E -S WSTVDISTD023\SQLEXPRESS
"Is there a way to tell BCP to fill in all 14 columns before moving on to the next row?"
When you say "fill in", do you mean you want BCP to keep the null values present in your text file? The -k qualifier tells BCP to keep the nulls (make sure the column in your table allows nulls). See link below:
http://msdn.microsoft.com/en-us/library/ms187887.aspx
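If keeping the NULLs is what you're after, the same command with -k added would look like this (assuming the corresponding table columns are nullable):

bcp testdb.dbo.MACARP in C:\Users\sysbrady\Desktop\MyData.txt /c /T /t "\t" /E -k -S WSTVDISTD023\SQLEXPRESS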
"The thing is every single row in the text file ends in with "CRLF" which is used by default in BCP."
This is unclear - could you post an image? I'm unsure whether you have phrased this as a problem or as a feature you want to retain.
I was using phpMyAdmin for ease of use and am using the LOAD DATA INFILE syntax, which gives the following error: invalid field count in CSV on line 1. I know there is an invalid field count; that is on purpose.
Basically, the table has 8 columns and the files have 7. I could go into each file and change it manually to 8 by entering data in the 8th column, but this is just too time-consuming; in fact, I would have to start again by the time I finished, so I have to rule that out.
The eighth column will be a number that is exactly the same for every row in a given file, and therefore unique to each file.
For example, the first file has 1000 rows, each with data that goes in the first seven columns, and the 8th column is used to identify what the file's data refers to. So for those 1000 rows in the SQL table the first 7 columns are data, while the last column will just be 1000 1's; the next file's 1000 rows will have an 8th column containing 1000 2's, and so on. (Note that I'm actually going to be entering 100001 rather than 1 or 000001, for obvious reasons.)
Anyway, I can't delete the column and add it back after loading the file either, for good reasons which I won't explain, but I am aware of that method and it's useless in this scenario.
What I would like is a method where, as I load a file that fills the first 7 columns, a specified int is placed in the 8th column for every row there is in the CSV. Like auto-increment, except that rather than incrementing on each new row, it stays the same. Then for the second file all I need to do is change the specified int.
Note: the solution can't be to change the CSV file, as this is too time-consuming and actually counter-intuitive.
I'm hoping someone knows whether there is a way to do this, possibly with SQL code that mixes LOAD DATA INFILE and INSERT so that it processes correctly without error.
The solution is to list only the seven columns that actually exist in the file and use the SET clause of LOAD DATA INFILE to fill the eighth column from a user variable, something like this:
SET @file_id = 100001; /* <- the per-file value that goes into the eighth column */
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, column2, ..., column7)
SET column8 = @file_id;
For the next file, just change @file_id before running the LOAD DATA statement again (column8 stands for whatever your eighth column is actually called).