How can a CSV file exported from DB2 include the content of an XML column? - sql

I am trying to export a table with an XMLType field from DB2 to a CSV file.
I found that in the CSV file the relational fields of the table output their values correctly, but the value of the XMLType field is just a pointer to an exported XML file.
The CSV file content:
1349714,,2,<XDS FIL='result.csv.001.xml' OFF='0' LEN='7013' />,2014-01-22-16.38.58.314000
You can see that the 4th field value is a pointer to an XML file.
What command can I use to include the XML content when exporting to a CSV file in DB2?
For now, I'm using this command to do the export:
EXPORT TO result.csv OF DEL MODIFIED BY NOCHARDEL SELECT col1, col2, coln FROM dbtable;
Thanks, buddy.

You need to convert XML to a character data type, e.g. using XMLSERIALIZE(yourxmlcolumn AS VARCHAR(32672)). Keep in mind that both the VARCHAR data type and the delimited export format have limitations on the value length (32672 and 32700 bytes respectively), so if your serialized XML fragment is longer than that it will be truncated.
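For example, a minimal sketch, assuming your XML column is named xmlcol (substitute your actual column and table names):

EXPORT TO result.csv OF DEL MODIFIED BY NOCHARDEL
  SELECT col1, col2, XMLSERIALIZE(xmlcol AS VARCHAR(32672))
  FROM dbtable;

With the column serialized to VARCHAR, the exported field holds the XML text itself instead of an <XDS ...> pointer to a side file.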

Related

How to download a CSV from a query while keeping the original encoding in pgAdmin

I am Brazilian and I am working with files that are encoded in Windows-1252. When I execute the queries the names are fine, but when I try to export the data to Excel using the CSV download I am facing an encoding problem: all the letters with accents come out wrong.
I want to know how to change the encoding (or the collation) in the download-as-CSV for queries so that it has the same encoding as the data I imported.
The code I used to import the data is
COPY base_ans_02 FROM 'C:\Users\ben201907_SP.csv' DELIMITER ','
CSV HEADER encoding 'windows-1252';
and one example of the error is
AMIL ASSISTÊNCIA MÉDICA INTERNACIONAL S.A.
If you inserted the data into your table using the WIN1252 encoding and it is not the default of your client, you may also want to make sure the client knows which encoding it is going to deal with.
Just set the client encoding right before your COPY command and you should be fine:
SET CLIENT_ENCODING=WIN1252;
COPY base_ans_02 TO 'path_to_file' DELIMITER ',' CSV HEADER;
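If you are not sure which encoding your session is currently using, you can check it first (SHOW is standard PostgreSQL, nothing pgAdmin-specific):

SHOW client_encoding;

The SET only affects the current session, so new connections fall back to the database default.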

How to extract a file having a varbinary column in a U-SQL script using the default extractor?

I have to extract a varbinary column from a file. When I tried to extract it as byte[], it showed the error "Conversion Error. Column having invalid characters".
U-SQL Script:
EXTRACT Id int?, createddate DateTime?, Photo byte[]
FROM #input
USING Extractors.Csv(quoting: true, nullEscape: "\N");
The built-in Csv/Tsv/Text extractors assume that they operate on textual data, where binary content is hex-encoded. This is necessary, since the binary could otherwise contain any of the delimiter characters. See https://msdn.microsoft.com/en-us/library/azure/mt621366.aspx under byte[].
So if your photo is not hex-encoded, you will have to write your own custom extractor.
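For contrast, a minimal sketch of input the built-in extractor would accept (the path and the sample row are hypothetical, and the Photo value is a hex string rather than raw binary):

// Sample row in /input/photos.csv (hypothetical); the byte[] column is hex-encoded text:
// 1,2014-01-22T16:38:58,FFD8FFE000104A46
@rows =
    EXTRACT Id int?,
            createddate DateTime?,
            Photo byte[]
    FROM "/input/photos.csv"
    USING Extractors.Csv(quoting: true, nullEscape: @"\N");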

Loading huge csv file using COPY

I am loading a CSV file using COPY:
COPY cts FROM 'C:\...\cts.csv' using DELIMITERS',';
However, an error comes out:
ERROR: invalid input syntax for type double precision: ""
CONTEXT: COPY testdata, line 7, column latitude: ""
How can I fix this, please?
Looks like your CSV isn't quite formatted correctly. "" isn't a number, and numbers don't need to be quoted in CSV.
I find it's usually easier in PostgreSQL to create a staging import table with all text columns, and import CSVs into that first. Then run a cleanup query to put the CSV data into the real table.
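A minimal sketch of that approach, with hypothetical column names and path (mirror your real table, but make every staging column text):

-- 1. Staging table: all text, so COPY never chokes on values like "".
CREATE TABLE cts_staging (latitude text, longitude text);

-- 2. Load the raw file; everything arrives as text.
COPY cts_staging FROM 'C:\data\cts.csv' DELIMITER ',' CSV;

-- 3. Cleanup: turn empty strings into NULLs before the cast.
INSERT INTO cts (latitude, longitude)
SELECT NULLIF(latitude, '')::double precision,
       NULLIF(longitude, '')::double precision
FROM cts_staging;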

Load substring in Hive data input

I am trying to load an input data file using Hive.
Suppose I have the following input in a text file:
"10"
Is it possible to load the input without the quotation marks, i.e. as an integer?
You can use a third-party CSV SerDe, as follows:
add jar path/to/csv-serde.jar;
create table table_name (a string, b string, ...)
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
stored as textfile
;
Here is the link: https://github.com/ogrodnek/csv-serde.git
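Note that the SerDe yields string columns; it strips the surrounding quotes, so if you need the value as an integer (the table and column names below are hypothetical), you can cast afterwards:

add jar path/to/csv-serde.jar;

create table quoted_input (a string)
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
stored as textfile;

-- the SerDe has removed the quotes, so the cast sees 10, not "10"
select cast(a as int) from quoted_input;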

How to make Postgres COPY ignore the first line of a large txt file

I have a fairly large .txt file, ~9 GB, and I would like to load it into Postgres. The first row is the header, followed by all the data. If I COPY the data directly, the header causes an error because its data types do not match my Postgres table, so I need to remove it somehow.
Sample data:
ProjectId,MailId,MailCodeId,prospectid,listid,datemailed,amount,donated,zip,zip4,VectorMajor,VectorMinor,packageid,phase,databaseid,amount2
15,53568419,89734,219906,15,2011-05-11 00:00:00,0,0,90720,2915,NonProfit,POLICY,230,3,1,0
16,84141863,87936,164657,243,2011-03-10 00:00:00,0,0,48362,2523,NonProfit,POLICY,1507,5,1,0
16,81442028,86632,15181625,243,2011-01-19 00:00:00,0,0,11501,2115,NonProfit,POLICY,1508,2,1,0
While Postgres COPY has a "header" setting that can ignore the first row, it only works for CSV files:
copy training from 'C:/testCSV.csv' DELIMITER ',' csv header;
When I try to run the command above on my txt file, I get an error:
copy training from 'C:/testTXTFile.txt' DELIMITER ',' csv header
ERROR: unquoted newline found in data
HINT: Use quoted CSV field to represent newline.
I have tried adding "quote" and "escape" attributes, but the command just won't work for the txt file:
copy training from 'C:/testTXTFile.txt' DELIMITER ',' csv header quote as E'"' escape as E'\\N';
ERROR: COPY escape must be a single one-byte character
Alternatively, I thought about running Java or creating a separate staging table to remove the first row... but these solutions are expensive and time consuming. I would need to load 9 GB of data just to remove the first row of headers... Are there other solutions to remove the first row of a txt file easily so that I can load the data into my Postgres database?
Use the HEADER option together with the CSV option:
\copy <table_name> from '/source_file.csv' delimiter ',' CSV HEADER ;
HEADER
Specifies that the file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table, and on input, the first line is ignored. This option is allowed only when using CSV format.
I've looked at the docs at https://www.postgresql.org/docs/10/sql-copy.html: what is written there about HEADER works not only for CSV files but for TSV files too!
My solution was this in psql
\COPY mytable FROM 'mydata.tsv' DELIMITER E'\t' CSV HEADER;
(in addition, mydata.tsv contained a header row, which was excluded from the copy into the database table)