import a txt file with 2 columns into different columns in SQL Server Management Studio

I have a txt file containing numerous items in the following format
DBSERVER: HKSER
DBREPLICAID: 51376694590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 09.03.2015 09:44:21 AM
READS: 1
Adds: 0
Updates: 0
Deletes: 0
DBSERVER: HKSER
DBREPLICAID: 21425584590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 08.03.2015 09:50:20 PM
READS: 2
Adds: 0
Updates: 0
Deletes: 0
...
I would like to import the txt file into the following format in SQL
1st column   2nd column    3rd column    4th column   5th column
DBSERVER     DBREPLICAID   DBPATH        DBTITLE      DATETIME               .....
HKSER        51376694590   redirect.nsf  Redirect AP  09.03.2015 09:44:21 AM
HKSER        21425584590   redirect.nsf  Redirect AP  08.03.2015 01:08:07 AM
Thanks a lot!

You can dump that file into a temporary table with just a single text column. Once it is imported, loop through that table with a cursor, storing the field values into variables, and each time you reach the last field of a record (every nine lines in your sample) insert a new row into the real target table.
Not the most elegant solution, but it's simple and it will do the job.
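A minimal T-SQL sketch of that cursor approach (the target table dbo.DbActivity, its column names, and the file path are illustrative assumptions):

CREATE TABLE #raw (line VARCHAR(4000));

-- load the file one line per row; the path is an assumption and must be visible to the server
BULK INSERT #raw FROM 'C:\data\activity.txt' WITH (ROWTERMINATOR = '\n');

DECLARE @line VARCHAR(4000),
        @dbserver VARCHAR(100), @dbreplicaid VARCHAR(100), @dbpath VARCHAR(255),
        @dbtitle VARCHAR(255), @datetime VARCHAR(50),
        @reads INT, @adds INT, @updates INT, @deletes INT;

DECLARE c CURSOR LOCAL FAST_FORWARD FOR SELECT line FROM #raw;
OPEN c;
FETCH NEXT FROM c INTO @line;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- split each "Name: value" line on its first colon
    DECLARE @name VARCHAR(50)    = RTRIM(LEFT(@line, CHARINDEX(':', @line + ':') - 1));
    DECLARE @value VARCHAR(4000) = LTRIM(SUBSTRING(@line, CHARINDEX(':', @line + ':') + 1, 4000));

    IF      @name = 'DBSERVER'    SET @dbserver    = @value;
    ELSE IF @name = 'DBREPLICAID' SET @dbreplicaid = @value;
    ELSE IF @name = 'DBPATH'      SET @dbpath      = @value;
    ELSE IF @name = 'DBTITLE'     SET @dbtitle     = @value;
    ELSE IF @name = 'DATETIME'    SET @datetime    = @value;
    ELSE IF @name = 'READS'       SET @reads       = @value;
    ELSE IF @name = 'Adds'        SET @adds        = @value;
    ELSE IF @name = 'Updates'     SET @updates     = @value;
    ELSE IF @name = 'Deletes'
    BEGIN
        -- 'Deletes' is the last field of each record, so flush the assembled row
        SET @deletes = @value;
        INSERT INTO dbo.DbActivity
            (DBSERVER, DBREPLICAID, DBPATH, DBTITLE, [DATETIME], READS, Adds, Updates, Deletes)
        VALUES
            (@dbserver, @dbreplicaid, @dbpath, @dbtitle, @datetime, @reads, @adds, @updates, @deletes);
    END

    FETCH NEXT FROM c INTO @line;
END
CLOSE c;
DEALLOCATE c;
DROP TABLE #raw;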

Using BULK INSERT you can load the header names and the data into two separate columns, and then, using a dynamic SQL query, you can create a table and insert the data as required.
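A hedged T-SQL sketch of that idea (the file path and table names are assumptions). The timestamp value itself contains colons, so each line is split on its first colon rather than bulk-loaded with ':' as a field terminator; the pivot is written out for the known headers here, where a dynamic SQL query would build the same SELECT from the distinct header values:

CREATE TABLE #raw (line VARCHAR(4000));
BULK INSERT #raw FROM 'C:\data\activity.txt' WITH (ROWTERMINATOR = '\n');

-- headers and data in two different columns, plus a record number
SELECT RTRIM(LEFT(line, CHARINDEX(':', line) - 1))            AS header,
       LTRIM(SUBSTRING(line, CHARINDEX(':', line) + 1, 4000)) AS [value],
       -- assumes scan order matches file order; nine "Name: value" lines per record
       (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1) / 9   AS rec
INTO #kv
FROM #raw
WHERE CHARINDEX(':', line) > 0;

SELECT MAX(CASE WHEN header = 'DBSERVER'    THEN [value] END) AS DBSERVER,
       MAX(CASE WHEN header = 'DBREPLICAID' THEN [value] END) AS DBREPLICAID,
       MAX(CASE WHEN header = 'DBPATH'      THEN [value] END) AS DBPATH,
       MAX(CASE WHEN header = 'DBTITLE'     THEN [value] END) AS DBTITLE,
       MAX(CASE WHEN header = 'DATETIME'    THEN [value] END) AS [DATETIME],
       MAX(CASE WHEN header = 'READS'       THEN [value] END) AS READS
FROM #kv
GROUP BY rec;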

For something like this I'd probably use SSIS.
The idea is to create a Script Component (as a Transformation).
You'll need to manually define your output columns (e.g. DBSERVER String (100)).
The source is your file (read normally).
The idea is that you build your rows line by line, then add the full row to the output buffer, e.g.
Output0Buffer.AddRow();
Then write the rows to your destination.
If all the files have a common format then you can wrap the whole thing in a Foreach Loop.
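A hedged C# sketch of the row-building logic inside the generated ScriptMain class (the single input column Line, the output column names, and treating 'Deletes' as the last field of each record are assumptions based on the sample above):

// accumulates the current record's fields between input lines
private string dbServer, dbReplicaId, dbPath, dbTitle, dbDateTime;

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // each incoming row is one "Name: value" line from the flat file
    string line = Row.Line;
    int colon = line.IndexOf(':');
    if (colon < 0) return;

    string name = line.Substring(0, colon).Trim();
    string value = line.Substring(colon + 1).Trim();

    switch (name)
    {
        case "DBSERVER":    dbServer = value; break;
        case "DBREPLICAID": dbReplicaId = value; break;
        case "DBPATH":      dbPath = value; break;
        case "DBTITLE":     dbTitle = value; break;
        case "DATETIME":    dbDateTime = value; break;
        // READS, Adds and Updates would be handled the same way
        case "Deletes":     // last field of a record: emit the full output row
            Output0Buffer.AddRow();
            Output0Buffer.DBSERVER = dbServer;
            Output0Buffer.DBREPLICAID = dbReplicaId;
            Output0Buffer.DBPATH = dbPath;
            Output0Buffer.DBTITLE = dbTitle;
            Output0Buffer.DATETIME = dbDateTime;
            break;
    }
}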

Related

trying to import csv file to table in sql

I have 4 csv files, each having 500,000 rows. I am trying to import the csv data into my Exasol database, but there is an error with the date column, and I have a problem with the first, unwanted column in the files.
Here is an example CSV file:
unnamed:0 , time, lat, lon, nobs_cloud_day
0, 2006-03-30, 24.125, -119.375, 22.0
1, 2006-03-30, 24.125, -119.125, 25.0
The table I created to import the csv into is
CREATE TABLE cloud_coverage_CONUS (
index_cloud DECIMAL(10,0)
,"time" DATE -- PRIMARY KEY
,lat DECIMAL(10,6)
,lon DECIMAL(10,6)
,nobs_cloud_day DECIMAL (3,1)
)
The command to import is
IMPORT INTO cloud_coverage_CONUS FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv';
But I get this error:
SQL Error [42636]: java.sql.SQLException: ETL-3050: [Column=0 Row=0] [Transformation of value='Unnamed: 0' failed - invalid character value for cast; Value: 'Unnamed: 0'] (Session: 1750854753345597339) while executing '/* add path to the 4 csv files, that are in the cloud database folder*/ IMPORT INTO cloud_coverage_CONUS FROM CSV AT 'https://27.1.0.10:59205' FILE 'e12a96a6-a98f-4c0a-963a-e5dad7319fd5' ;'; 04509 java.sql.SQLException: java.net.SocketException: Connection reset by peer: socket write error
Alternatively I use this table (without the first column):
CREATE TABLE cloud_coverage_CONUS (
"time" DATE -- PRIMARY KEY
,lat DECIMAL(10,6)
,lon DECIMAL(10,6)
,nobs_cloud_day DECIMAL (3,1)
)
And use this import code:
IMPORT INTO cloud_coverage_CONUS FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv'(2 FORMAT='YYYY-MM-DD', 3 .. 5);
But I still get this error:
SQL Error [42636]: java.sql.SQLException: ETL-3052: [Column=0 Row=0] [Transformation of value='time' failed - invalid value for YYYY format token; Value: 'time' Format: 'YYYY-MM-DD'] (Session: 1750854753345597339) while executing '/* add path to the 4 csv files, that are in the cloud database folder*/ IMPORT INTO cloud_coverage_CONUS FROM CSV AT 'https://27.1.0.10:60350' FILE '22c64219-cd10-4c35-9e81-018d20146222' (2 FORMAT='YYYY-MM-DD', 3 .. 5);'; 04509 java.sql.SQLException: java.net.SocketException: Connection reset by peer: socket write error
(I actually do want to ignore the first column in the files.)
How can I solve this issue?
Solution:
IMPORT INTO cloud_coverage_CONUS FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv' (2 .. 5) ROW SEPARATOR = 'CRLF' COLUMN SEPARATOR = ',' SKIP = 1;
I did not realise that MySQL is different from Exasol.
Looking at the first error message, a few things stand out. First we see this:
[Column=0 Row=0]
This tells us the problem is with the very first value in the file. This brings us to the next thing, where the message even tells us what value was read:
Transformation of value='Unnamed: 0' failed
So it's failing to convert Unnamed: 0. You also provided the table definition, where we see the first column in the table is a decimal type.
This makes sense. Unnamed: 0 is not a decimal. For this to work, the CSV data MUST align with the data types for the columns in the table.
But we also see this looks like a header row. Assuming everything else matches we can fix it by telling the database to skip this first row. I'm not familiar with Exasol, but according to the documentation I believe the correct code will look like this:
IMPORT INTO cloud_coverage_CONUS
FROM LOCAL CSV FILE 'D:\uni\BI\project 1\AOL_DB_ANALYSIS_TASK1\datasets\cloud\cfc_us_part0.csv'
(2 FORMAT='YYYY-MM-DD', 3 .. 5)
ROW SEPARATOR = 'CRLF'
COLUMN SEPARATOR = ','
SKIP = 1;

apache hive loads null values instead of integers

I am new to Apache Hive and was running queries on sample data, which is saved in a csv file as below:
0195153448;"Classical Mythology";"Mark P. O. Morford";"2002";"Oxford University Press";"//images.amazon.com/images/P/0195153448.01.THUMBZZZ.jpg";"http://images.amazon.com/images/P/0195153448.01.MZZZZZZZ.jpg";"images.amazon.com/images/P/0195153448.01.LZZZZZZZ.jpg"
and the table which I created is of the form
hive> describe book;
OK
isbn bigint
title string
author string
year string
publ string
img1 string
img2 string
img3 string
Time taken: 0.085 seconds, Fetched: 8 row(s)
and the script which I used to create the table is:
create table book(isbn int,title string,author string, year string,publ string,img1 string,img2 string,img3 string) row format delimited fields terminated by '\;' lines terminated by '\n' location 'path';
When I try to retrieve the data from the table by using the following query:
select * from book limit 1;
I get the following result:
NULL "Classical Mythology" "Mark P. O. Morford" "2002" "Oxford University Press" "http://images.amazon.com/images/P/0195153448.01.THUMBZZZ.jpg" "images.amazon.com/images/P/0195153448.01.MZZZZZZZ.jpg" "images.amazon.com/images/P/0195153448.01.LZZZZZZZ.jpg"
Even though I specify the first column type as int or bigint, the data in the table is getting loaded as NULL.
I tried searching on the internet and could figure out that I have to specify the row delimiter. I used that too, but there was no change in the data.
Is there a mistake I am making? Please help.

SSIS - Using flat file as a Parameter/Variable

I would like to know how to use a flat file (with only one value, say a datetime) as a parameter/variable. Instead of feeding a value from an Execute SQL Task query into a variable, I want to save it as a flat file and then load it again as a parameter/variable.
This can be done using a Script Task:
1. Set ReadOnlyVariables to the variable holding the file name.
2. Set ReadWriteVariables to the name of the variable you have to populate.
3. In the script, write the logic to find the value (read the file and get the value), then set the variable:
Dts.Variables["sFileContent"].Value = streamText;
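A hedged C# sketch of step 3 inside the Script Task's Main method (the variable names User::sFilePath and User::sFileContent are assumptions; list them under ReadOnlyVariables and ReadWriteVariables respectively in the task editor):

public void Main()
{
    // read the single value (e.g. a datetime) that the flat file holds
    string path = Dts.Variables["User::sFilePath"].Value.ToString();
    string streamText = System.IO.File.ReadAllText(path).Trim();

    // populate the read/write variable for use elsewhere in the package
    Dts.Variables["User::sFileContent"].Value = streamText;

    Dts.TaskResult = (int)ScriptResults.Success;  // ScriptResults enum comes from the Script Task template
}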

AS400 SQL Dynamic Delete issue

Some background...
I have 20+ files.
I read these file names from a prebuilt table, building a subfile screen.
I select 1 file, then build another screen with the contents of the selected file.
I then select the record I want to delete. So far so good...
eval MySQL = stat3 + %trimr(scrwcrd) + STAT3B
This is my SQL statement, which in debug reads:
MySQL = DELETE FROM FILESEL WHERE K00001 = ? with NC
PREPARE STAT3 from :MYSQL
EXECUTE STAT3 using :PROD
where :PROD is the variable supplied from the screen selection.
My SQLCOD ends up at 100 with SQLSTT = '02000' after the EXECUTE, indicating row not found for the delete.
Now I know for a fact that this is not the case: I can see the record in the selected file, and I can see the value of PROD in debug. Any ideas?
What datatypes and lengths are the K00001 field and the :PROD host variable?
Equality could be an issue. If they are character fields you may need to TRIM/%TRIM the values in order to match.
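For example, if K00001 is a fixed-length CHAR column and :PROD is padded to the screen field's length, trailing blanks can defeat the equality test. A hedged variant of the prepared statement that trims both sides (the VARCHAR(50) length is an assumption; the CAST gives the untyped parameter marker a type so it can be trimmed):

MySQL = DELETE FROM FILESEL WHERE RTRIM(K00001) = RTRIM(CAST(? AS VARCHAR(50))) with NC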

QlikView - How to unqualify/concatenate many rows from many different tables?

I have multiple tables, and in each of them some fields are identical and several others are different. When I try to load them all at the same time, the program "hangs" and I have to restart the application. It seems to me that the solution would be to use QUALIFY and UNQUALIFY or another script. I want all fields that are equal to be concatenated; however, there are tables that contain up to 229 columns.
I need to be able to concatenate the information on the key fields without losing the value of each field.
How should I proceed to make all the columns that are equal act as a "KEY", without needing to list all of them?
This is how I am using the script:
LOAD Nome as Comarca,
Vara,
Entrancia,
Juiz,
Escrivao,
NomeMapa,
IdComarca,
Mes,
Ano,
MatJuiz,
IdVara,
IdEscrivao,
IdMapa,
DataFechamentoJuiz,
DataFechamentoEscrivao,
TitularRespondendo,
AndCausOrdiMesAnt,
AndCausOrdiAutu,
AndCausOrdiArqui,
AndCausOrdiAnda,
AndCausSumMesAnt,
AndCausSumAutu,
AndCausSumArqui,
AndCausSumAnda,
AndProcCautMesAnt,
AndProcCautAutu,
AndProcCautArqui,
AndProcCautAnda,
AndEmbarMesAnt,
AndEmbarAutu,
AndEmbarArqui,
AndEmbarAnda,
AndDemaisMesAnt,
AndDemaisAutu,
AndDemaisArqui,
AndDemaisAnda,
AndExecTotMesAnt,
AndExecTotAutu,
AndExecTotArqui,
AndExecTotAnda,
AndTituloExMesAnt,
AndTituloExAutu,
AndTituloExArqui,
AndTituloExAnda,
AndTituloJudMesAnt,
AndTituloJudAutu,
AndTituloJudArqui,
AndTituloJudAnda,
AndExecFiscMesAnt,
AndExecFiscAutu,
AndExecFiscArqui,
AndExecFiscAnda,
AndFedMesAnt,
AndFedAutu,
AndFedArqui,
AndFedAnda,
AndEstMesAnt,
AndEstAutu,
AndEstArqui,
AndEstAnda,
AndMuniMesAnt,
AndMuniAutu,
AndMuniArqui,
AndMuniAnda,
AndFalenMesAnt,
AndFalenAutu,
AndFalenArqui,
AndFalenAnda,
AndProcJuriMesAnt,
AndProcJuriAutu,
AndProcJuriArqui,
AndProcJuriAnda,
AndAcoPrevMesAnt,
AndAcoPrevAutu,
AndAcoPrevArqui,
AndAcoPrevAnda,
AndInciMesAnt,
AndInciAutu,
AndInciArqui,
AndInciAnda,
AndAcoIndeMesAnt,
AndAcoIndeAutu,
AndAcoIndeArqui,
AndAcoIndeAnda,
AndMandaMesAnt,
AndMandaAutu,
AndMandaArqui,
AndMandaAnda,
AndAcaCivMesAnt,
AndAcaCivAutu,
AndAcaCivArqui,
AndAcaCivAnda,
AndAcoTrabMesAnt,
AndAcoTrabAutu,
AndAcoTrabArqui,
AndAcoTrabAnda,
AndOutMesAnt,
AndOutAutu,
AndOutArqui,
AndOutAnda,
AndTotalMesAnt,
AndTotalAutu,
AndTotalArqui,
AndTotalAnda,
AndPrecMesAnt,
AndPrecAutu,
AndPrecArqui,
AndPrecAnda,
AndExecMesAnt,
AndExecAutu,
AndExecArqui,
AndExecAnda,
AndExecPenMesAnt,
AndExecPenAutu,
AndExecPenArqui,
AndExecPenAnda,
AndExecSuspMesAnt,
AndExecSuspAutu,
AndExecSuspArqui,
AndExecSuspAnda,
AndExecFisMesAnt,
AndExecFisAutu,
AndExecFisArqui,
AndExecFisAnda,
AndIncidProcJulg,
AndIncidProcExecJulg,
ProcConDist2005,
EmbExecDist2005,
ProcConDist2006MesAnt,
ProcConDist2006Julga,
ProcConDist2006Anda,
EmbaExec2006MesAnt,
EmbaExec2006Julga,
EmbaExec2006Anda,
MovProcConcPer,
MovProcConcl,
MovProcVistaMP,
MovProcCargaMP,
MovProcVistaPart,
MovProcOutTotal,
MovProcAudi,
MovProcCumpri,
MovProcDev,
MovProcPericia,
MovProcPubEdit,
MovProcProvEscriv,
MovProcSusp,
MovProcOutSitu,
MovProcArquiBaixa,
MovRecurInter,
MovRecurJulgAgravo,
MovRecurJulgapelacao,
MovRecurJulgtotal,
MovRecurProvAgravo,
MovRecurProvApelacao,
MovRecurProvTotal,
MovRecurInterFase,
MovRecurInterPend,
MovPrecNum,
MovPrecDataDist,
MovPrecDataUlt,
MovPrecDevTot,
MovPrecDevCit,
MovPrecDevOut,
RemTJMesAnt,
RemTJMesAtual,
RemTJDevolvTJ,
RemTJTotal,
RemOutTJMesAnt,
RemOutTJMesAtual,
RemOutTJDevolvTJ,
RemOutTJTotal,
RemOutComMesAnt,
RemOutComMesAtual,
RemOutComDevolvTJ,
RemOutComTotal,
RemRediOutMesAnt,
RemRediOutMesAtual,
RemRediOutDevolvTJ,
RemRediOutTotal,
RemOutrasInfo,
CustasProc,
CustasTaxaJudi,
CustasOutras,
AtosSentResMeritoTotal,
AtosSentResMeritoConhe,
AtosSentResMeritoCautelar,
AtosSentHomoTotal,
AtosSentHomoConhe,
AtosSentHomoCautelar,
AtosSentSemResolMeritoTotal,
AtosSentSemResolMeritoConhe,
AtosSentSemResolMeritoCautelar,
AtosMSentExecTotal,
AtosSentExecFiscal,
AtosMSentExecTitJud,
AtosMSentExecTitExt,
AtosDecisaoTotal,
AtosDecisaoLiminar,
AtosDecisaoOutras,
AtosDespProf,
AtosDespProfPlantao,
AtosAudRealizTotal,
AtosAudIntru,
AtosAudJulg,
AtosAudConcil,
AtosAudOutros,
AtosAudNRealiz,
AtosAudDesig,
AtosAcordoAudi,
AtosSentProfAudi,
AtosPesOuvAudi,
AtosDataAudiAfast,
AtosAutosConcSent,
AtosAutosConcPratica,
AtosAutosConcTotal,
AtosAutosConcSent100,
AtosAutosConcDiv100,
AtosDataConcAntiga,
AtosDecSusp,
AtosMandPriCivil,
AtosPresosCiveis,
AtosProcAntTramitNum,
AtosProcAntTramitData,
AtosProcAntTramitDUM,
AtosPrecAntTramitNum,
AtosPrecAntTramiData,
AtosPrecAntTramiDUM,
AtosPrecDevTotal,
AtosPrecDevCitacao,
AtosPrecDevOutras,
AtosInfTJ,
AtosOutrasAtividades,
Ferias,
MatSubstituicao,
MatAssinatura,
DataIniFerias,
DataFimFerias,
RemetOutraVara
FROM
[Z:\QLIKVIEW\Todos os Mapas\Area Cível.xlsx]
(ooxml, embedded labels, table is AreaCivil);
This is the full list of fields for 1 of the 16 tables. Some fields are equal in each table and some are different.
There are probably two different issues here. First, it hangs on you very likely because of memory issues - enable the detailed error log in the document settings so you can get some details during the document reload.
Next, if I understand you correctly, you want to concatenate all 16 files into one table, and these files have some common columns and some different ones?
You have a few options here, but I would recommend manually renaming the common fields in your load script, and also adding the columns you need from some files to all the other files, left blank where a specific file does not have them.
For example,
file1 has columns key1, key2, c1, c2
and file2 has columns key1, key2, c1, c3.
You can load them separately in the load script, but then for file1 you add a blank column c3, and for file2 you add a blank column c2 - not in the actual files, but in your load script statement, as sketched below.
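A hedged QlikView load-script sketch of that padding idea, using the field names from the example (file paths and sheet names are assumptions; because both LOADs end up with identical field lists, QlikView auto-concatenates the second into the Data table):

Data:
LOAD key1,
     key2,
     c1,
     c2,
     Null() as c3             // file1 has no c3, so pad it with a blank column
FROM [file1.xlsx] (ooxml, embedded labels, table is Sheet1);

LOAD key1,
     key2,
     c1,
     Null() as c2,            // file2 has no c2, pad it the same way
     c3
FROM [file2.xlsx] (ooxml, embedded labels, table is Sheet1);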
You can also use forced concatenation by putting the CONCATENATE keyword before your load statement, but I personally like to keep control of the QV load script myself.