How to loop through a table and print each row to a different file? - sql

I'm working with a table where one of its columns holds the configuration of a Cisco router in each of its rows. I want to print each of those configurations to a different file.
ex: the conf in row 1 will be printed to a file named conf1.conf, row 2 to conf2.conf, etc.
I'm trying to do this with a bash script. How should I do it? I can't really do it manually; it's a job that must be repeated every single day.
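One possible approach, sketched below: have the database client emit one row per line and write each line to its own numbered file. This is a rough sketch only, assuming a MySQL backend; the database, table, and column names (netdb, routers, config, id) are placeholders to adapt to your schema.
#!/usr/bin/env bash
# Sketch, not a drop-in solution: "netdb", "routers", "config" and "id"
# are assumed names. mysql's batch output (-B) escapes embedded newlines
# as \n, and printf '%b' expands them back when writing each file.
i=1
mysql -N -B -u "$DB_USER" -p"$DB_PASS" netdb \
      -e "SELECT config FROM routers ORDER BY id" |
while IFS= read -r row; do
    printf '%b' "$row" > "conf${i}.conf"
    i=$((i + 1))
done
Dropped into a cron job, a script along these lines would cover the daily repetition.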

Related

OpenCSV writing to multiple files every Nth record

How can I write to a new file in a loop based on every Nth record? If I have 1002 records and I want to create a new file every 500 records, I should end up with 3 files. Currently, all of the records are written to file one; the other two files are created, but none of the records end up in them.
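The usual cause of that symptom is that a single writer is opened once and reused for every record. A minimal sketch of the rollover logic using OpenCSV's CSVWriter (the class name, file names, and the records list here are placeholders):
import com.opencsv.CSVWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class SplitWriter {
    // Writes records into out_1.csv, out_2.csv, ... rolling over every n records.
    public static void write(List<String[]> records, int n) throws IOException {
        CSVWriter writer = null;
        int fileIndex = 0;
        for (int i = 0; i < records.size(); i++) {
            if (i % n == 0) {                 // every Nth record: start a new file
                if (writer != null) writer.close();
                fileIndex++;
                writer = new CSVWriter(new FileWriter("out_" + fileIndex + ".csv"));
            }
            writer.writeNext(records.get(i));
        }
        if (writer != null) writer.close();
    }
}
With 1002 records and n = 500, this produces out_1.csv and out_2.csv with 500 records each and out_3.csv with the remaining 2.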

create a batch which runs a series of sqlcmds which first selects a list of unique names, then selects data related to each of those and outputs to csv

I wrote a batch file to execute a bunch of repeated lines of the following structure:
sqlcmd "select somedata from table where table.attribute is like 'name'" > name_date.csv
The where is filtering on the attribute which determines what data is related to a specific named piece of hardware. I then output to file and then repeat the line for the rest of the names. _date is set from a variable in the batch file. The DBMS is SQL server.
That works well and writes out all the files fine, which is already a time saver, but if I were to use that batch file on another customer's database to extract the same data for their hardware, I'd have to manually update each repeat of the above command with their list of unique names. The hardware names could number anywhere from a couple to hundreds in total. What I'd like is to build on it so it is useful for any list of those names without manual intervention.
I can isolate the unique list of names using select distinct, but building some kind of for-each loop that substitutes each of those unique values into the sqlcmd in turn and outputs each result to a separate file, all within the same batch file so the user just has to run it without intervention, is beyond me.
Summary: I want to create a batch file which runs a series of sqlcmds which first select a list of unique names, then for each of those names select certain data relating to it, print it out to a CSV file bearing that name, and repeat for the next unique name, and so on.
Thank you for your time, I appreciate any tips and advice for further learning.
There are several ways to handle this, depending on the details of your situation and how automated you want to get. Since the data originates in a DB, it is straightforward to write a SQL query that generates the lines of your batch file.
In the solution below, the inner select statement selects [attribute] from [table], concatenating the rest of your command around it. This yields one line per row returned, i.e. one line per attribute name. The STRING_AGG in the outer select statement then combines the multiple rows (lines), separated by a linefeed. (Note that STRING_AGG requires SQL Server 2017 or later, and its separator argument must be a literal or a variable, hence the @lf variable below.)
declare @bat nvarchar(max);
declare @lf nchar(1) = nchar(10);  -- STRING_AGG's separator must be a literal or variable

select @bat = STRING_AGG(lines.oneLine, @lf)
from (
    select 'sqlcmd "select somedata from table where attribute like ''' +
           [attribute] + '''" > ' +
           [attribute] + '_date.csv' as oneLine
    from [table]
    where [attribute] is not null
    group by [attribute]
) as lines;

print @bat;  -- the generated batch lines; copy these into your .bat file
From here, just copy & paste the results into your batch file. Or get creative and use xp_cmdshell to write @bat out to the file system. If you bundle all this into one script, you could create a BAT file that runs the script, which generates the BAT file.
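Alternatively, if you'd rather keep everything inside one self-contained batch file, a for /f loop can feed each distinct name straight back into sqlcmd at run time. A rough sketch, with placeholder server, database, table, and column names, reusing the _date variable the question already sets:
@echo off
rem Sketch only: substitute your own server, database, table and column names.
rem The outer sqlcmd returns the distinct names (-h -1 suppresses headers,
rem "set nocount on" suppresses the "(N rows affected)" trailer); the inner
rem sqlcmd then exports the data for each name to its own CSV file.
for /f "usebackq delims=" %%N in (`sqlcmd -S myserver -d mydb -h -1 -W -Q "set nocount on; select distinct attribute from [table] where attribute is not null"`) do (
    sqlcmd -S myserver -d mydb -W -s "," -Q "select somedata from [table] where attribute = '%%N'" > "%%N_%_date%.csv"
)
The -s "," switch makes the inner sqlcmd's output comma-separated; depending on how clean the CSVs need to be, you may want -h -1 and "set nocount on" on the inner command as well.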

load a csv file from a particular point onwards

I have the below code to read a particular csv file. My problem is that the csv file contains two sets of data, one underneath the other, with different headers.
The starting point of the second data set can vary daily, so I need something that lets me find the row in the CSV where the second dataset begins (it will always start with the number '302') and load the csv from there. The problem I have is that the code below starts from where I need it to start, but it always includes the headers from the first part of the data, which is wrong.
USE csvImpersonation
FILE 'c:\\myfile.TXT'
SKIP_AT_START 3
RETURN #myfile
#loaddata = SELECT * FROM #myfile
where c1 = '302'
The below is a sample of the text file (after the first 3 rows are skipped, which are just full of file settings, dates, etc).
Any help is much appreciated
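One generic workaround, if the import tool can't skip a variable number of leading rows: pre-trim the file before loading it. A sketch with awk, assuming the second dataset begins at the first line whose first comma-separated field is 302 (adjust if the second dataset carries its own header line):
# Copy everything from the first "302" row onward into a new file, then load that.
awk -F',' 'flag || $1 == "302" { flag = 1; print }' myfile.TXT > second_part.csv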

How to combine two rows into one row with respective column values

I have a csv file which contains new lines within a single row, i.e. one row's data comes in two lines, and I want to insert the second line's data into the respective columns. I've loaded the data into sql, but now I want to merge the second row's data into the 1st row under the respective column values.
output details:
I wouldn't recommend fixing this in SQL, because this is an issue with the CSV file: it contains embedded new lines, which causes rows to split.
I strongly encourage fixing the CSV files at the source if possible; it is going to be difficult to repair this in SQL, given there will be more cases like it.
If you're doing the import with SSIS (or have the option of doing so), the package can be configured to manage embedded carriage returns:
1. Define your flat file import connection manager with the columns you're expecting.
2. In the connection manager's Properties window, set the AlwaysCheckForRowDelimiters property to False (the default value is True).
With the property set to False, SSIS will ignore mid-row carriage return/line feeds and parse your data into the required number of columns.
Credit to Martin Smith for helping me out when I had a very similar problem some time ago.

Load Data Infile Syntax - Invalid field count in CSV on line 1

I was using phpMyAdmin for ease of use, and the LOAD DATA INFILE syntax gives the following error: invalid field count in CSV on line 1. I know there is an invalid field count; that is on purpose.
Basically the table has 8 columns and the files have 7. I can go into each file and change it manually to 8 by entering data in the 8th column, but this is just too time consuming; in fact I would have to start again by the time I finished, so I have to rule that out.
The eighth column will be a number which is exactly the same for each row per file, so unique to each file.
For example, the first file has 1000 rows, each with data that goes in the first seven columns, and the 8th column is used to identify what this file's data is in reference to. So for those 1000 rows in the sql table, the first 7 columns are data, while the last column will just be 1000 1's; the next file's 1000 rows will have an 8th column containing 1000 2's, and so on. (Note I'm actually going to be entering 100001, rather than 1 or 000001, for obvious reasons.)
Anyway, I can't delete the column either and add it back after loading the file, for good reasons which I'll not explain; just be aware that that method is useless for this scenario.
What I would like is a method which, as I load a file that fills the first 7 columns, places a specified int in each row of the 8th column, for every row there is in the csv. Like auto increment, except rather than incrementing on each new row, it stays the same. Then for the second file, all I need to do is change the specified int.
Notes: the solution can't be to change the csv files, as this is too time consuming and actually counterintuitive.
I'm hoping someone knows if there is a way to do this, possibly by having sql code which is a mixture of LOAD DATA INFILE and INSERT, so that it processes correctly without error.
The solution is to list only the seven CSV columns in the LOAD DATA INFILE column list, then fill the 8th column from a user variable via the SET clause:
SET @file_id = 100001; /* the per-file constant */
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, column2, ..., column7)
SET column8 = @file_id;
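For the next file, only the variable assignment changes, e.g.:
SET @file_id = 100002;
followed by the same LOAD DATA INFILE statement pointed at the new file.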