unable to specify db2 import parameters on bluemix? - sql

I subscribed to the free SQLDB service on Bluemix and tried to import data from a CSV file into this database instance.
Certain columns contain pure "space" values as data, and some columns should be filled with their default values. I can import this data with the following command on my local DB2:
db2 'import from MY_DATA.csv of del modified by usedefaults keepblanks timestampformat="MM/DD/YYYY HH:MM:SS" skipcount 1 insert into MY_TABLE'
On Bluemix, I can only assign the date / time / timestamp format and skip the first row. How can I add the "modified by usedefaults keepblanks" part on Bluemix to complete the import?
Also, when the import fails, I only receive the following message:
BaseException message: [Routine "SYSPROC.ADMIN_CMD" execution has completed, but at least one error, "SQL0911", was encountered during the execution. More information is available.. SQLCODE=20397, SQLSTATE=01H52, DRIVER=3.66.46]
Where can I get the detail error log that I can see on my local DB such as:
SQL3125W The character data in row "2" and column "32" was truncated because
the data is longer than the target database column.
SQL3148W A row from the input file was not inserted into the table. SQLCODE
"-181" was returned.
SQL0181N The string representation of a datetime value is out of range.
SQLSTATE=22007
SQL3185W The previous error occurred while processing data from row "2" of
the input file.
SQL3110N The utility has completed processing. "2" rows were read from the
input file.
SQL3221W ...Begin COMMIT WORK. Input Record Count = "2".
SQL3222W ...COMMIT of any database changes was successful.
SQL3149N "2" rows were processed from the input file. "0" rows were
successfully inserted into the table. "1" rows were rejected.
Number of rows read = 2
Number of rows skipped = 1
Number of rows inserted = 0
Number of rows updated = 0
Number of rows rejected = 1
Number of rows committed = 2

In the same quick load page (load complete in step 4), there should be a link to view the logs for this load. Hopefully it'll reveal more details about the error message.
Also note that keepblanks is applicable to DEL (Delimited ASCII) file formats only; it is not applicable to ASC (Non-delimited ASCII) file formats.
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0023577.html?cp=SSEPGG_10.5.0%2F3-6-1-3-0-0-12&lang=en
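If the quick load wizard never exposes those modifiers, one workaround is to load the file from the client side over a normal connection instead of through the UI. Below is a minimal Python sketch using the ibm_db_dbi driver; the connection string, table name, and column list are placeholders, and the empty-field handling only approximates what "modified by usedefaults keepblanks" does in a local IMPORT.

import csv
import ibm_db_dbi

# Placeholder credentials (normally taken from the SQLDB service's VCAP_SERVICES entry).
DSN = ("DATABASE=SQLDB;HOSTNAME=<host>.bluemix.net;PORT=50000;"
       "PROTOCOL=TCPIP;UID=<user>;PWD=<password>")

conn = ibm_db_dbi.connect(DSN, "", "")
cur = conn.cursor()

columns = ["COL_A", "COL_B", "COL_C"]   # hypothetical column names, in CSV order

with open("MY_DATA.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)                        # skipcount 1: skip the header row
    for row in reader:
        # keepblanks: space-only values are passed through unchanged.
        # usedefaults: empty fields are omitted so the column default applies.
        keep = [(c, v) for c, v in zip(columns, row) if v != ""]
        col_list = ", ".join(c for c, _ in keep)
        marks = ", ".join("?" for _ in keep)
        cur.execute("INSERT INTO MY_TABLE (%s) VALUES (%s)" % (col_list, marks),
                    [v for _, v in keep])

conn.commit()
cur.close()
conn.close()

Timestamp columns fed in the "MM/DD/YYYY HH:MM:SS" format would additionally need an explicit conversion in the INSERT, e.g. TIMESTAMP_FORMAT(?, 'MM/DD/YYYY HH24:MI:SS').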

Related

To grep contents from a CSV/Text File using Autohotkey(AHK) Script

Can anyone please help me write a script in AHK based on the requirement below?
Requirement:
I have a CSV/TXT file in my Windows environment which contains 20,000+ records in the format below.
So, when I run the script it should prompt an InputBox to enter an instance name.
Example: if I enter Instance4, it should display the result ServerName4 in a MsgBox.
Sample Format:
ServerName1,ServerIP,Instance1,Type
ServerName2,ServerIP,Instance2,Type
ServerName3,ServerIP,Instance3,Type
ServerName4,ServerIP,Instance4,Type
ServerName5,ServerIP,Instance5,Type
.
.
.
Also, as the CSV/TXT file contains a large number of records, please consider the best way to avoid delays in fetching the results.
Please post your code, or at least show what you've already done.
You can use a Parsing Loop with CSV as the delimiter, and make a variable for each 'Instance' whose value is that of the current row's 'ServerName'.
The steps are to first FileRead the data from the file, then Loop, Parse like so:
Loop, Parse, data, `n, `r           ; parse row by row
{
    row := A_LoopField
    Loop, Parse, row, CSV           ; then column by column in each row
    {
        if (A_Index = 1)
            server := A_LoopField   ; column 1: ServerName
        else if (A_Index = 3)
            instance := A_LoopField ; column 3: Instance
    }
    %instance% := server            ; variable named after column 3, holding column 1's value
}
After that, you can make a Goto loop that repeatedly shows an InputBox, followed by a command that prints out the needed variable using MsgBox, like so:
InputBox, input, Instance lookup, Enter an instance name:
MsgBox % %input%  ; double-deref: shows the ServerName stored for that instance

"Error while reading data" error received when uploading CSV file into BigQuery via console UI

I need to upload a CSV file to BigQuery via the UI. After I select the file from my local drive, I specify that BigQuery should automatically detect the schema and run the job. It fails with the following message:
"Error while reading data, error message: CSV table encountered too
many errors, giving up. Rows: 2; errors: 1. Please look into the
errors[] collection for more details."
I have tried removing the comma in the last column, and tried changing options in the advanced section but it always results in the same error.
The error log is not helping me understand where the problem is; this is an example of the error log entry:
2019-04-03 23:03:50.261 CLST Bigquery jobcompleted
bquxjob_6b9eae1_169e6166db0 frank@xxxxxxxxx.nn INVALID_ARGUMENT
and:
"Error while reading data, error message: CSV table encountered too
many errors, giving up. Rows: 2; errors: 1. Please look into the
errors[] collection for more details."
and:
"Error while reading data, error message: Error detected while parsing
row starting at position: 46. Error: Data between close double quote
(") and field separator."
The strange thing is that the sample CSV data contains NO double quotes at all!?
2019-01-02 00:00:00,326,1,,292,0,,294,0,,-28,0,,262,0,,109,0,,372,0,,453,0,,536,0,,136,0,,2609,0,,1450,0,,352,0,,-123,0,,17852,0,,8528,0
2019-01-02 00:02:29,289,1,,402,0,,165,0,,-218,0,,150,0,,90,0,,263,0,,327,0,,275,0,,67,0,,4863,0,,2808,0,,124,0,,454,0,,21880,0,,6410,0
2019-01-02 00:07:29,622,1,,135,0,,228,0,,-147,0,,130,0,,51,0,,381,0,,428,0,,276,0,,67,0,,2672,0,,1623,0,,346,0,,-140,0,,23962,0,,10759,0
2019-01-02 00:12:29,206,1,,118,0,,431,0,,106,0,,133,0,,50,0,,380,0,,426,0,,272,0,,63,0,,1224,0,,740,0,,371,0,,-127,0,,27758,0,,12187,0
2019-01-02 00:17:29,174,1,,119,0,,363,0,,59,0,,157,0,,67,0,,381,0,,426,0,,344,0,,161,0,,923,0,,595,0,,372,0,,-128,0,,22249,0,,9278,0
2019-01-02 00:22:29,175,1,,119,0,,301,0,,7,0,,124,0,,46,0,,382,0,,425,0,,431,0,,339,0,,1622,0,,1344,0,,379,0,,-126,0,,23888,0,,8963,0
I shared an example of a few lines of CSV data. I expect BigQuery to be able to detect the schema and load the data into a new table.
Using the new BigQuery web UI and your input data, I did the following:
Selected a dataset
Clicked on Create table
Filled in the create table form (CSV upload with schema auto-detection)
The table was created and I was able to SELECT 6 rows as expected
SELECT * FROM projectId.datasetId.SO LIMIT 1000
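For reference, the same load can be reproduced outside the console with the BigQuery Python client, which makes the CSV options explicit and prints the row-level errors. The project, dataset, and table names below are placeholders; setting quote_character to an empty string is one way to stop the parser from treating stray double quotes as field enclosures, if that turns out to be the culprit.

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,          # let BigQuery infer the schema
    quote_character="",       # treat double quotes as ordinary characters
)
with open("data.csv", "rb") as f:
    job = client.load_table_from_file(f, "projectId.datasetId.SO", job_config=job_config)

try:
    job.result()              # wait for the load job to finish
except Exception:
    for err in job.errors or []:
        print(err)            # the errors[] collection the message refers to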

unable to load csv file from GCS into bigquery

I am unable to load a 500 MB CSV file from Google Cloud Storage into BigQuery; I get this error:
Errors:
Too many errors encountered. (error code: invalid)
Job ID xxxx-xxxx-xxxx:bquijob_59e9ec3a_155fe16096e
Start Time Jul 18, 2016, 6:28:27 PM
End Time Jul 18, 2016, 6:28:28 PM
Destination Table xxxx-xxxx-xxxx:DEV.VIS24_2014_TO_2017
Write Preference Write if empty
Source Format CSV
Delimiter ,
Skip Leading Rows 1
Source URI gs://xxxx-xxxx-xxxx-dev/VIS24 2014 to 2017.csv.gz
I gzipped the 500 MB CSV file to .csv.gz to upload it to GCS. Please help me solve this issue.
The internal details for your job show that there was an error reading the row #1 of your CSV file. You'll need to investigate further, but it could be that you have a header row that doesn't conform to the schema of the rest of the file, so we're trying to parse a string in the header as an integer or boolean or something like that. You can set the skipLeadingRows property to skip such a row.
Other than that, I'd check that the first row of your data matches the schema you're attempting to import with.
Also, the error message you received is unfortunately very unhelpful, so I've filed a bug internally to make the error you received in this case more helpful.
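If the web console keeps hiding the details, running the same load through the BigQuery Python client exposes the per-row errors directly. This is a sketch using the job settings shown above; the schema auto-detection and max_bad_records values are assumptions, and skip_leading_rows=1 corresponds to the skipLeadingRows property mentioned earlier.

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,      # skipLeadingRows: ignore the header row
    autodetect=True,          # or supply an explicit schema instead
    max_bad_records=10,       # tolerate a few bad rows so the real errors surface
)
job = client.load_table_from_uri(
    "gs://xxxx-xxxx-xxxx-dev/VIS24 2014 to 2017.csv.gz",
    "xxxx-xxxx-xxxx.DEV.VIS24_2014_TO_2017",
    job_config=job_config,
)
try:
    job.result()
except Exception:
    for err in job.errors or []:
        print(err)            # per-row reasons behind "Too many errors encountered"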

UniVerse - SQL LIST: View List of All Database Tables

I am trying to obtain a list of all the DB Tables that will give me visibility on what tables I may need to JOIN for running SQL scripts.
For example, in TCL when I run "LIST.DICT" it returns "Name of File:" for input. I then enter "PRODUCT" and it returns a list of all available fields.
However, where can I get a list of all my available tables, or a list of the options that I can enter after "Name of File:"?
Here is what I am trying to achieve. In the screen shot below, I would like to run a SQL script that gives me the latest Log File Activity, Date - Time - Description. I would like the script to return '8/13/14 08:40am BR: 3;BuyPkg'
Thank you in advance for your help.
From TCL within the database account containing your database files, type: LISTF
Sample output:
FILES in your vocabulary 03:21:38pm 29 Jun 2015 Page 1
Filename........................... Pathname...................... Type Modulo
File - Contains all logical device names
DICT &DEVICE& /u1/uv/D_&DEVICE& 2 1
DATA &DEVICE& /u1/uv/&DEVICE& 2 3
File - Used by MAKE.MAP.FILE
DICT &MAP& /u1/uv/D_&MAP& 2 1
DATA &MAP& /u1/uv/&MAP& 6 7
File - Contains all parts of Distributed Files
DICT &PARTFILES& /u1/uv/D_&PARTFILES& 2 1
DATA &PARTFILES& /u1/uv/&PARTFILES& 18 7
DICT &PH& D_&PH& 3 1
DATA &PH& &PH& 1
DICT &SAVEDLISTS& D_&SAVEDLISTS& 3 1
DATA &SAVEDLISTS& &SAVEDLISTS& 1
File - Used by uniVerse to access the current directory.
DICT &UFD& /u1/uv/D_UFD 2 1
DATA &UFD& . 19 1
DICT &XML& D_&XML& 18 3
DATA &XML& &XML& 19 1
Firstly, UniVerse has no Log File Activity Date and Time.
However, you can still obtain the table's modified/accessed date from the file system.
To do this:
You need a subroutine that accepts the path of a table and returns a date or a time.
e.g. SUBROUTINE GET.FILE.MOD.DATE(DAT.MOD, S.FILE.PATH)
Inside the subroutine, you can use EXECUTE to run a shell command such as istat to get this information on a Unix system.
Be aware that a dynamic file, for example, has Data and Overflow parts under a directory; you should compare the dates obtained and return only the latest one.
Globally catalog the subroutine
Create an I-Desc in VOC, e.g. I.FILE.MOD.DATE, with SUBR("*GET.FILE.MOD.DATE",F2) in the field definition and a conversion code of "D/MDY2"
Create another I-Desc, e.g. I.FILE.MOD.TIME, for the time
Finally, you can
LIST VOC I.FILE.MOD.DATE I.FILE.MOD.TIME DESC WITH TYPE LIKE "F..."
alternatively in SQL,
SELECT I.FILE.MOD.DATE, I.FILE.MOD.TIME, VOC.DESC FROM VOC WHERE TYPE LIKE "F%";

import csv error using SQL Loader and perl

Hello all, I have a question I hope you guys can help me with; I tried to include all the relevant info. I'm building a Perl script that will eventually loop through different SQL*Loader control files and import their respective CSV data into Oracle database tables. I'm testing multiple control loads before looping them.
The problem is that I get an error even though the script connects to the db and uploads all the csv data without any problems that I can see. all the rows are accounted for and the log doesn't really help:
================================================================================
[root@sanasr06 scripts]# perl db_upload.pl
connection made! Starting database upload...
Error: Can't open import control_general to SQL DB : at db_upload.pl line 44
================================================================================
Line 44 is the system() call:
system ("sqlldr $userid\#$sid/$passwd control=#control_pools log=$log silent=all ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
I'm including the control file log output, the Perl script, and the control file (the skipped record mentioned is the CSV header row):
SQL*Loader: Release 11.2.0.1.0 - Production on Tue Aug 14 12:32:36 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Control File: /despliegue/san/project/sql_ctrl/general.ctl
Character Set UTF8 specified for all input.
Data File: /despliegue/san/project/csv/Pools.csv
Bad File: /despliegue/san/project/logs/sql_error.bad
Discard File: /despliegue/san/project/logs/sql_discard.dsc
(Allow all discards)
Number to load: ALL
Number to skip: 1
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Silent options: FEEDBACK, ERRORS and DISCARDS
Table I_GENERAL, loaded from every logical record.
Insert option in effect for this table: TRUNCATE
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
OBJECTID (FILLER FIELD) FIRST * , O(") CHARACTER
DESCRIPTION (FILLER FIELD) NEXT * , O(") CHARACTER
SERIALNUMBER NEXT * , O(") CHARACTER
PRODUCT_NAME NEXT * , O(") CHARACTER
CONTROLLER_VERSION NEXT * , O(") CHARACTER
NUMBER_OF_CONTROLLERS NEXT * , O(") CHARACTER
CAPACITY_GB NEXT * , O(") CHARACTER
PRODUCT_CODE NEXT * , O(") CHARACTER
value used for ROWS parameter changed from 64 to 15
Table I_GENERAL:
2512 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 247680 bytes(15 rows)
Read buffer bytes: 1048576
Total logical records skipped: 1
Total logical records read: 2512
Total logical records rejected: 0
Total logical records discarded: 0
Run began on Tue Aug 14 12:32:36 2012
Run ended on Tue Aug 14 12:32:38 2012
==================================================================================
the above file is of course shortened but includes all the relevant information.
here's the perl script.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Log::Log4perl;
#this script loads multiple saved csv files into the database using the control files
################ Initialization #############################################
my $homepath = "/despliegue/san/project";
my $log_conf = "$homepath/logs/log.conf";
Log::Log4perl->init($log_conf) or die("Error: Can't open log.config. Does it exist? $!");
my $logger = Log::Log4perl->get_logger();
################ database connection variables####
my ($serial, $model);
my $host="me.notyou33.safety";
my $port="1426";
my $userid="user";
my $passwd="pass";
my $sid="sid";
my $log="$homepath/logs/sql_import.log";
#Control file location
my @control_pools= "$homepath/sql_ctrl/pools.ctl";
my @control_general = "$homepath/sql_ctrl/general.ctl";
my @control_ports= "$homepath/sql_ctrl/ports.ctl";
my @control_replication = "$homepath/sql_ctrl/replication.ctl";
#######################Database connection and data upload #################
my $dbh = DBI->connect( "dbi:Oracle:host=$host;sid=$sid;port=$port", "$userid", "$passwd",
{ RaiseError => 1}) or $logger->logdie ("Database connection not made: $DBI::errstr");
print " connection made! Starting database upload...\n";
system ("sqlldr $userid\#$sid/$passwd control=#control_general log=$log silent=all") or $logger->logdie("Error: Can't open import control_general to SQL DB :$!");
print "one done moving to next one\n";
system ("sqlldr $userid\#$sid/$passwd control=#control_pools log=$log silent=all ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
system ("sqlldr $userid\#$sid/$passwd control=#control_ports log=$log ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
print "three done moving to last one\n";
system ("sqlldr $userid\#$sid/$passwd control=#control_replication log=$log silent=feedback ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
print "................Done\n";
############################################################################
$dbh->disconnect;
==================================================================================
the control file:
OPTIONS (SKIP=1)
LOAD DATA
CHARACTERSET UTF8
INFILE '/despliegue/san/project/csv/Pools.csv'
BADFILE '/despliegue/san/project/logs/sql_error.bad'
DISCARDFILE '/despliegue/san/project/logs/sql_discard.dsc'
TRUNCATE INTO TABLE I_GENERAL
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY "\""
TRAILING NULLCOLS
(
OBJECTID FILLER,
DESCRIPTION FILLER,
SERIALNUMBER,
PRODUCT_NAME,
CONTROLLER_VERSION,
NUMBER_OF_CONTROLLERS,
CAPACITY_GB,
PRODUCT_CODE,
)
system() returns the return value of the wait call, which includes the return value of the program you executed. If everything goes right, this will be 0. This is different from almost all other functions in Perl, where you expect them to return some value which evaluates to true in boolean context. Therefore, the commonly used error handling with the or operator does not work properly. You might want to try something like this instead:
system ("sqlldr $userid\#$sid/$passwd control=#control_pools log=$log silent=all") == 0
or $logger->logdie("Error: Can't open import control data to SQL DB :$?");
You can read more about handling the return value of system() in the documentation under perldoc -f system
There is Logdie which should be logdie, AFAIU
The problem is the system call expects a return value of 0 to be "successful". Your sqlldr job, if it skips or discards a record, will not return 0 (I've seen it return 2, check docs to be sure). So, unless you load all records successfully, your perl script (as written) will exit out.
perl system
sqlldr return codes
In my case I execute sqlldr with backticks (similar to system()); that lets me capture any feedback in a variable.
my $sqlldr = "sqlldr userid=usr/pss\#TNS control=\'$controlfile\' log=\'$logfile\' silent=header,feedback";
$execution = `$sqlldr 2>&1`;
The trick is that the value Perl returns in $? is not the exit code itself; you have to shift it right by 8 bits to get the real exit status before comparing it against 0. In my case I do as follows:
# Get the returned code from the last execution
my $ret = $? >> 8;
if ($ret == 0) {
    $logger->info("Class DLA_Upload: All rows were successfully loaded");
}
elsif ($ret == 1) {
    die("Class DLA_Upload: Executing sqlldr returned the following error:\n$execution");
}
elsif ($ret == 2) {
    $logger->info("Class DLA_Upload: SQL*Loader was executed but some or all rows were rejected or discarded, please check $logfile for further information");
}
else {
    die("Class DLA_Upload: FATAL ERROR: sqlldr corrupted or not found");
}
Why? Here is a link from PerlMonks that explains it properly.