Can't export sqlite to json because the json option disappeared?

I'm trying to export specific columns of my sqlite database into json.
I found a relevant question here: Impossible to export SQLite query results to JSON?, where they used the sqlite3 -json option to export their database to JSON.
However, when I try to do the same terminal command I get this in return:
sqlite3: Error: unknown option: -json
and sqlite3 -help returns the following list of options:
-append append the database to the end of the file
-ascii set output mode to 'ascii'
-bail stop after hitting an error
-batch force batch I/O
-column set output mode to 'column'
-cmd COMMAND run "COMMAND" before reading stdin
-csv set output mode to 'csv'
-deserialize open the database using sqlite3_deserialize()
-echo print commands before execution
-init FILENAME read/process named file
-[no]header turn headers on or off
-help show this message
-html set output mode to HTML
-interactive force interactive I/O
-line set output mode to 'line'
-list set output mode to 'list'
-lookaside SIZE N use N entries of SZ bytes for lookaside memory
-maxsize N maximum size for a --deserialize database
-memtrace trace all memory allocations and deallocations
-newline SEP set output row separator. Default: '\n'
-nullvalue TEXT set text string for NULL values. Default ''
-pagecache SIZE N use N slots of SZ bytes each for page cache memory
-quote set output mode to 'quote'
-readonly open the database read-only
-separator SEP set output column separator. Default: '|'
-stats print memory stats before each finalize
-version show SQLite version
-vfs NAME use NAME as the default VFS
-json is nowhere to be found. Very strange, because the documentation shows the following:
$ sqlite3 --help
Usage: ./sqlite3 [OPTIONS] FILENAME [SQL]
FILENAME is the name of an SQLite database. A new database is created
if the file does not previously exist.
OPTIONS include:
-A ARGS... run ".archive ARGS" and exit
-append append the database to the end of the file
-ascii set output mode to 'ascii'
-bail stop after hitting an error
-batch force batch I/O
-box set output mode to 'box'
-column set output mode to 'column'
-cmd COMMAND run "COMMAND" before reading stdin
-csv set output mode to 'csv'
-deserialize open the database using sqlite3_deserialize()
-echo print commands before execution
-init FILENAME read/process named file
-[no]header turn headers on or off
-help show this message
-html set output mode to HTML
-interactive force interactive I/O
-json set output mode to 'json'
-line set output mode to 'line'
-list set output mode to 'list'
-lookaside SIZE N use N entries of SZ bytes for lookaside memory
-markdown set output mode to 'markdown'
-maxsize N maximum size for a --deserialize database
-memtrace trace all memory allocations and deallocations
-mmap N default mmap size set to N
-newline SEP set output row separator. Default: '\n'
-nofollow refuse to open symbolic links to database files
-nonce STRING set the safe-mode escape nonce
-nullvalue TEXT set text string for NULL values. Default ''
-pagecache SIZE N use N slots of SZ bytes each for page cache memory
-quote set output mode to 'quote'
-readonly open the database read-only
-safe enable safe-mode
-separator SEP set output column separator. Default: '|'
-stats print memory stats before each finalize
-table set output mode to 'table'
-tabs set output mode to 'tabs'
-version show SQLite version
-vfs NAME use NAME as the default VFS
-zip open the file as a ZIP Archive
As you can see, -json is clearly listed as an option (as well as many others that don't show up for me).
What's going on?
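A possible explanation: the -json switch (and the matching .mode json) was only added in SQLite 3.33.0 (2020), so an older sqlite3 binary simply doesn't have it. A hedged sketch of what to check, plus a fallback that works on older builds that include the json1 extension (the mydb.sqlite database and the users table with its columns are made-up examples):

```shell
# -json (and .mode json) first appeared in SQLite 3.33.0; check your build
sqlite3 --version

# on older builds, the json1 functions can assemble the JSON in SQL itself
# (hypothetical database and table: mydb.sqlite, users(id, name))
sqlite3 mydb.sqlite \
  "SELECT json_group_array(json_object('id', id, 'name', name)) FROM users;"
```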

Related

How to Rollback DB2 Ingest statement for malformed data

I have a Bash shell script that runs a DB2 sql file. The job of this sql file is to completely replace the contents of a database table with the data it loads.
However, I also need that database table to have its contents preserved if errors are discovered in the ingested file. For example, supposing my table currently looks like this:
MY_TABLE
      C1  C2
row0  15  27
row1  19  20
And supposing I have an input file that looks like this:
15,28
34,90
"a string that's obviously not supposed to be here"
54,23
If I run the script with this input file, the table should stay exactly the same as it was before, not using the contents of the file at all.
However, when I run my script, this isn't the behavior I observe: instead, the contents of MY_TABLE do get replaced with all of the valid rows of the input file so the new contents of the table become:
MY_TABLE
      C1  C2
row0  15  28
row1  34  90
row2  54  23
In my script logic, I explicitly disable autocommit for the part of the script that ingests the file, and I only call commit after I've checked that the sql execution returned no errors; if it did return errors, I call rollback instead. Nonetheless, the contents of the table get replaced when errors occur, as though the rollback command was never called and a commit was issued instead.
Where is the problem in my script?
script.ksh
SQL_FILE=/app/scripts/script.db2
LOG=/app/logs/script.log
# ...
# Boilerplate to setup the connection to the database server
# ...
# +c: autocommit off
# -v: echo commands
# -s: stop if errors occur
# -p: show prompt for interactivity (for debugging)
# -td@: use '@' as the statement delimiter in the file
db2 +c -s -v -td@ -p < $SQL_FILE >> $LOG
if [ $? -gt 2 ]; then
    echo "An Error occurred; rolling back the data" >> $LOG
    db2 "ROLLBACK" >> $LOG
    exit 1
fi
# No errors, commit the changes
db2 "COMMIT" >> $LOG
script.db2
ingest from file '/app/temp/values.csv'
    format delimited by ','
    (
        $C1 INTEGER EXTERNAL,
        $C2 INTEGER EXTERNAL
    )
    restart new 'SCRIPT_JOB'
    replace into DATA.MY_TABLE
    (
        C1,
        C2
    )
    values
    (
        $C1,
        $C2
    )@
Adding as an answer per the OP's suggestion: per the db2 documentation for the INGEST command, it appears that +c (autocommit off) will not take effect:
Updates from the INGEST command are committed at the end of an ingest
operation. The INGEST command issues commits based on the commit_period
and commit_count configuration parameters. As a result of this, the
following do not affect the INGEST command: the CLP -c or +c options,
which normally affect whether the CLP automatically commits; the
NOT LOGGED INITIALLY option on the CREATE TABLE statement.
You probably want to set the warningcount 1 option, which will cause the command to terminate after the first error or warning. The default behaviour is to continue processing while ignoring all errors (warningcount 0).
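For illustration, here is a sketch of how the ingest statement from the question might look with that option added (the exact clause placement should be double-checked against the INGEST syntax diagram; '@' is used as the statement delimiter):

```sql
ingest from file '/app/temp/values.csv'
    format delimited by ','
    (
        $C1 INTEGER EXTERNAL,
        $C2 INTEGER EXTERNAL
    )
    restart new 'SCRIPT_JOB'
    warningcount 1
    replace into DATA.MY_TABLE
    (C1, C2)
    values ($C1, $C2)@
```

With warningcount 1, the first malformed record terminates the whole ingest operation instead of being silently skipped, which lets the surrounding script roll back before any commit.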

How export CSV with SQLCMD in UTF-16?

I am trying to export the data from different queries with SQLCMD. My files must be exported in UTF-16; currently they come out as UTF-8 with BOM (and I have managed plain UTF-8 as well), but I cannot get them to export in UTF-16.
Reviewing the encoding of the file in Notepad++, this is the format in which it should be exported (UTF-16 LE BOM); in the encoding list it appears under the name UCS-2 Little Endian.
SET @cmd = 'sqlcmd -s"-" -f i:1252 -Q "SELECT * FROM Machines" -f o:65001 -o "C:\CSV\report_machines.csv"'
EXEC master..xp_cmdshell @cmd
I checked the page (https://www.example-code.com/sql/load_text_file_using_code_page.asp) and tried code page 1200; however, sqlcmd returns the following message:
Sqlcmd: The code page <1200> specified in option -f is invalid or not installed on this system.
I want to avoid using another program, if anyone knows how to solve this problem I would greatly appreciate it.
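One thing worth trying: sqlcmd has a -u switch that writes the output file in Unicode (little-endian UTF-16) regardless of the input code page. A sketch based on the command from the question (table name and paths unchanged):

```sql
-- -u: write the output file in Unicode (UTF-16 LE) instead of a code page
SET @cmd = 'sqlcmd -s"-" -f i:1252 -Q "SELECT * FROM Machines" -u -o "C:\CSV\report_machines.csv"'
EXEC master..xp_cmdshell @cmd
```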

Where to run command for pgloader

I have installed the pgloader using Window Subsystem Linux.
I couldn't figure out where to run the pgloader commands, for example, loading CSV data: https://pgloader.readthedocs.io/en/latest/ref/csv.html
LOAD CSV
FROM 'GeoLiteCity-Blocks.csv' WITH ENCODING iso-646-us
HAVING FIELDS
(
startIpNum, endIpNum, locId
)
INTO postgresql://user@localhost:54393/dbname
TARGET TABLE geolite.blocks
TARGET COLUMNS
(
iprange ip4r using (ip-range startIpNum endIpNum),
locId
)
WITH truncate,
skip header = 2,
fields optionally enclosed by '"',
fields escaped by backslash-quote,
fields terminated by '\t'
SET work_mem to '32 MB', maintenance_work_mem to '64 MB';
Whenever I run the commands in the shell, the syntax isn't recognized:
-bash: LOAD: command not found
You are supposed to put your commands in a .lisp file and then execute the following command:
pgloader yourfile.lisp
Of course ensure pgloader is installed or that you use the binary you compiled.

How to Edit a text from the output in DCL -- OpenVMS scripting

I wrote the code below, which extracts the directory name along with the file name, and I then use the purge command on that extracted text.
$ sear VAXMANAGERS_ROOT:[PROC]TEMP.LIS LOG/out=VAXMANAGERS_ROOT:[DEV]FVLIM.TXT
$ OPEN IN VAXMANAGERS_ROOT:[DEV]FVLIM.TXT
$ LOOP:
$ READ/END_OF_FILE=ENDIT IN ABCD
$ GOTO LOOP
$ ENDIT:
$ close in
$ ERROR=F$EXTRACT(0,59,ABCD)
$ sh sym ERROR
$ purge/keep=1 'ERROR'
The output is as follows:
ERROR = "$1$DKC102:[PROD_LIVE.LOG]DP2017_TMP2.LIS;27392 "
The problem here is that the directory length varies every time (it may be 59 or 40 or some other value, but the directory and file name together will not exceed 59 characters on my system). So in the above output, the system is also fetching the version number of the file, and I am not able to purge the file while the version number is included.
%PURGE-E-PURGEVER, version numbers not permitted
Any suggestion on how to eliminate the version number from the output? I cannot use the exact length of the directory, as the directory length varies every time... :(
The answer with F$ELEMENT( 0, ";", ABCD ) should work, as confirmed. I might script something like this:
$ ERROR = F$PARSE(";",ERROR) ! will return $1$DKC102:[PROD_LIVE.LOG]DP2017_TMP2.LIS;
$ ERROR = ERROR - ";"
$ PURGE/KEEP=1 'ERROR'
Not sure why you have the read loop; what you will get is the last line in the file, but I'm assuming that's what you want.
While HABO explained it, here are some more explanations.
Suppose I use f$search to check if a file exists
a = f$search("sys$manager:net$server.log")
then I find it exists
wr sys$output a
shows
SYS$SYSROOT:[SYSMGR]NET$SERVER.LOG;9
From the help of f$parse I get
help lex f$parse arg
shows, among other things
Specifies a character string containing the name of a field
in a file specification. Specifying the field argument causes
the F$PARSE function to return a specific portion of a file
specification.
Specify one of the following field names (do not abbreviate):
NODE Node name
DEVICE Device name
DIRECTORY Directory name
NAME File name
TYPE File type
VERSION File version number
So I can do
wr sys$output f$parse(a,,,"DEVICE")
which shows
SYS$SYSROOT:
and also
wr sys$output f$parse(a,,,"DIRECTORY")
so I get
[SYSMGR]
and
wr sys$output f$parse(a,,,"NAME")
shows
NET$SERVER
and
wr sys$output f$parse(a,,,"TYPE")
shows
.LOG
the version is
wr sys$output f$parse(a,,,"VERSION")
shown as
;9
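Putting those fields back together gives a version-free specification that purge will accept; a sketch along the same lines (the noversion symbol name is just for illustration):

```
$ a = f$search("sys$manager:net$server.log")
$ noversion = f$parse(a,,,"DEVICE") + f$parse(a,,,"DIRECTORY") + -
  f$parse(a,,,"NAME") + f$parse(a,,,"TYPE")
$ purge/keep=1 'noversion'
```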
The lexical functions can be handy; check them using
help lexical
it shows
F$CONTEXT F$CSID F$CUNITS F$CVSI F$CVTIME F$CVUI F$DELTA_TIME F$DEVICE F$DIRECTORY F$EDIT
F$ELEMENT F$ENVIRONMENT F$EXTRACT F$FAO F$FID_TO_NAME F$FILE_ATTRIBUTES F$GETDVI F$GETENV
F$GETJPI F$GETQUI F$GETSYI F$IDENTIFIER F$INTEGER F$LENGTH F$LICENSE F$LOCATE F$MATCH_WILD
F$MESSAGE F$MODE F$MULTIPATH F$PARSE F$PID F$PRIVILEGE F$PROCESS F$READLINK F$SEARCH
F$SETPRV F$STRING F$SYMLINK_ATTRIBUTES F$TIME F$TRNLNM F$TYPE F$UNIQUE F$USER

import csv error using SQL Loader and perl

Hello all, I have a question I hope you guys can help me with; I tried to include all the relevant info. I'm building a perl script that will eventually loop through different sqlloader control files and import their respective csv data into oracle sql database tables. I'm testing multiple control loads before looping them.
The problem is that I get an error even though the script connects to the db and uploads all the csv data without any problems that I can see. All the rows are accounted for, and the log doesn't really help:
================================================================================
[root@sanasr06 scripts]# perl db_upload.pl
connection made! Starting database upload...
Error: Can't open import control_general to SQL DB : at db_upload.pl line 44
================================================================================
line 44 is the system connection:
system ("sqlldr $userid\@$sid/$passwd control=@control_pools log=$log silent=all ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
I'm including the control file output, the perl script and the control file (the skipped record mentioned is the csv header):
SQL*Loader: Release 11.2.0.1.0 - Production on Tue Aug 14 12:32:36 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Control File: /despliegue/san/project/sql_ctrl/general.ctl
Character Set UTF8 specified for all input.
Data File: /despliegue/san/project/csv/Pools.csv
Bad File: /despliegue/san/project/logs/sql_error.bad
Discard File: /despliegue/san/project/logs/sql_discard.dsc
(Allow all discards)
Number to load: ALL
Number to skip: 1
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Silent options: FEEDBACK, ERRORS and DISCARDS
Table I_GENERAL, loaded from every logical record.
Insert option in effect for this table: TRUNCATE
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
OBJECTID (FILLER FIELD) FIRST * , O(") CHARACTER
DESCRIPTION (FILLER FIELD) NEXT * , O(") CHARACTER
SERIALNUMBER NEXT * , O(") CHARACTER
PRODUCT_NAME NEXT * , O(") CHARACTER
CONTROLLER_VERSION NEXT * , O(") CHARACTER
NUMBER_OF_CONTROLLERS NEXT * , O(") CHARACTER
CAPACITY_GB NEXT * , O(") CHARACTER
PRODUCT_CODE NEXT * , O(") CHARACTER
value used for ROWS parameter changed from 64 to 15
Table I_GENERAL:
2512 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 247680 bytes(15 rows)
Read buffer bytes: 1048576
Total logical records skipped: 1
Total logical records read: 2512
Total logical records rejected: 0
Total logical records discarded: 0
Run began on Tue Aug 14 12:32:36 2012
Run ended on Tue Aug 14 12:32:38 2012
==================================================================================
the above file is of course shortened but includes all the relevant information.
here's the perl script.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Log::Log4perl;
#this script loads multiple saved csv files into the database using the control files
################ Initialization #############################################
my $homepath = "/despliegue/san/project";
my $log_conf = "$homepath/logs/log.conf";
Log::Log4perl->init($log_conf)or die("Error: Can't open log.config Does it exist? $!");
my $logger = Log::Log4perl->get_logger();
################ database connection variables####
my ($serial, $model);
my $host="me.notyou33.safety";
my $port="1426";
my $userid="user";
my $passwd="pass";
my $sid="sid";
my $log="$homepath/logs/sql_import.log";
#Control file location
my @control_pools = "$homepath/sql_ctrl/pools.ctl";
my @control_general = "$homepath/sql_ctrl/general.ctl";
my @control_ports = "$homepath/sql_ctrl/ports.ctl";
my @control_replication = "$homepath/sql_ctrl/replication.ctl";
#######################Database connection and data upload #################
my $dbh = DBI->connect( "dbi:Oracle:host=$host;sid=$sid;port=$port", "$userid", "$passwd",
{ RaiseError => 1}) or $logger->logdie ("Database connection not made: $DBI::errstr");
print " connection made! Starting database upload...\n";
system ("sqlldr $userid\@$sid/$passwd control=@control_general log=$log silent=all") or $logger->logdie("Error: Can't open import control_general to SQL DB :$!");
print "one done moving to next one\n";
system ("sqlldr $userid\@$sid/$passwd control=@control_pools log=$log silent=all ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
system ("sqlldr $userid\@$sid/$passwd control=@control_ports log=$log ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
print "three done moving to last one\n";
system ("sqlldr $userid\@$sid/$passwd control=@control_replication log=$log silent=feedback ") or $logger->logdie("Error: Can't open import control data to SQL DB :$!");
print "................Done\n";
############################################################################
$dbh->disconnect;
==================================================================================
the control file:
OPTIONS (SKIP=1)
LOAD DATA
CHARACTERSET UTF8
INFILE '/despliegue/san/project/csv/Pools.csv'
BADFILE '/despliegue/san/project/logs/sql_error.bad'
DISCARDFILE '/despliegue/san/project/logs/sql_discard.dsc'
TRUNCATE INTO TABLE I_GENERAL
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY "\""
TRAILING NULLCOLS
(
OBJECTID FILLER,
DESCRIPTION FILLER,
SERIALNUMBER,
PRODUCT_NAME,
CONTROLLER_VERSION,
NUMBER_OF_CONTROLLERS,
CAPACITY_GB,
PRODUCT_CODE
)
system() returns the return value of the wait call, which includes the return value of the program you executed. If everything goes right, this will be 0. This is different from almost all other functions in Perl, where you expect them to return a value which evaluates to true in boolean context. Therefore, the commonly used error handling with the or operator does not work properly. You might want to try something like this instead:
system ("sqlldr $userid\@$sid/$passwd control=@control_pools log=$log silent=all") == 0
or $logger->logdie("Error: Can't open import control data to SQL DB :$?");
You can read more about handling the return value of system() in the documentation under perldoc -f system
There is also a Logdie which should be logdie, AFAIU.
The problem is the system call expects a return value of 0 to be "successful". Your sqlldr job, if it skips or discards a record, will not return 0 (I've seen it return 2, check docs to be sure). So, unless you load all records successfully, your perl script (as written) will exit out.
perl system
sqlldr return codes
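The wait-status packing is easy to see from a shell; a toy sketch where exit code 2 stands in for an sqlldr warning:

```shell
# run a child that exits with status 2 (the code sqlldr uses when rows
# are rejected or discarded) and read its exit code back
sh -c 'exit 2'
echo "exit code: $?"
# Perl's system() returns the raw wait status (exit code << 8) instead,
# which is why Perl code shifts it right by 8 bits to recover the 2
```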
In my case I execute sqlldr with backticks (similar to system); that lets me capture any feedback in a variable.
my $sqlldr = "sqlldr userid=usr/pss\@TNS control=\'$controlfile\' log=\'$logfile\' silent=header,feedback";
$execution = `$sqlldr 2>&1`;
The trick is that the value Perl returns is not the exit code itself: you have to shift the value right by 8 bits to get the real exit code. In my case I do it as follows:
# Get the returned code from the last execution
my $ret = $? >> 8;
if ($ret == 0) {
$logger->info("Class DLA_Upload: All rows were successfully loaded");
}
elsif ($ret == 1) {
die("Class DLA_Upload: Executing sqlldr returned the following error:\n$execution");
}
elsif ($ret == 2) {
$logger->info("Class DLA_Upload: SQL*Loader was executed but some or all rows were rejected or discarded, please check $logfile for further information");
}
else {
die("Class DLA_Upload: FATAL ERROR: sqlldr corrupted or not found");
}
Why? Here you have a link from Perl Monks that explains it properly.