How to use sqlite3 in C without the dev library installed?

I'm trying to load a local DB with SQLite on a Red Hat Linux server. I have a C program that loads the database from a very large file, splitting the columns. The bad news is that sqlite3 is not installed on the machine (fatal error: sqlite3.h: No such file or directory) and I won't be able to get permission to install libsqlite3-dev (according to this), so I can only use it through bash or Python:
[dhernandez#zl1:~]$ locate sqlite3
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so.0
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so.0.8.6
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3
/opt/xiv/host_attach/xpyv/lib/python2.7/lib-dynload/_sqlite3.so
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/__init__.py
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/dbapi2.py
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/dump.py
/usr/bin/sqlite3
/usr/lib64/libsqlite3.so.0
/usr/lib64/libsqlite3.so.0.8.6
/usr/lib64/python2.6/sqlite3
/usr/lib64/python2.6/lib-dynload/_sqlite3.so
/usr/lib64/python2.6/sqlite3/__init__.py
/usr/lib64/python2.6/sqlite3/__init__.pyc
/usr/lib64/python2.6/sqlite3/__init__.pyo
/usr/lib64/python2.6/sqlite3/dbapi2.py
/usr/lib64/python2.6/sqlite3/dbapi2.pyc
/usr/lib64/python2.6/sqlite3/dbapi2.pyo
/usr/lib64/python2.6/sqlite3/dump.py
/usr/lib64/python2.6/sqlite3/dump.pyc
/usr/lib64/python2.6/sqlite3/dump.pyo
/usr/lib64/xulrunner/libmozsqlite3.so
/usr/share/man/man1/sqlite3.1.gz
/usr/share/mime/application/x-kexiproject-sqlite3.xml
/usr/share/mime/application/x-sqlite3.xml
Which of the following options would be faster?
Split the columns in my C program, and then execute the insert like
this:
system("echo 'insert into t values(1,2);'" | sqlite3 mydb.db);
Split the columns in my C program, save them to a temp file and, when
I've got 500,000 rows, execute the script like this (and then empty
the temp file to continue loading rows):
system("sqlite3 mydb.db < temp.sql");
Split the columns in my C program adding a delimiter between them, save it to a temp file and import it like this:
.delimiter '#'
.import temp.txt t

You can use the amalgamation version. It is a single .c file you can include in your project, and all of SQLite is available. No need for dynamic linking.
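A minimal sketch of what that looks like, assuming sqlite3.c and sqlite3.h from the amalgamation download sit in your project directory (the table t(a, b) is hypothetical, taken from the insert in the question):

/* build together with the amalgamation, e.g.:
 *   gcc loader.c sqlite3.c -lpthread -ldl -o loader
 */
#include <stdio.h>
#include "sqlite3.h"   /* the local amalgamation header, not a system one */

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("mydb.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }
    /* hypothetical table matching the insert from the question */
    if (sqlite3_exec(db,
                     "CREATE TABLE IF NOT EXISTS t(a, b);"
                     "INSERT INTO t VALUES(1, 2);",
                     NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}

Once the library is compiled in, you can also wrap the whole bulk load in a single BEGIN/COMMIT transaction and use prepared statements, which is normally far faster than invoking the sqlite3 shell through system() for every row.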

You could probably try to dynamically load the sqlite3 library at runtime.
There is quite a bit to learn about it, but it's a powerful facility and I am quite sure it would solve your problem.
Here is a link describing how you can do it: http://tldp.org/HOWTO/Program-Library-HOWTO/dl-libraries.html
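A rough sketch of that approach, assuming one of the library paths from the locate output above; since sqlite3.h is not installed, the few declarations needed are written out by hand here:

/* no SQLite headers or -lsqlite3 needed at build time:
 *   gcc loader.c -ldl -o loader
 */
#include <stdio.h>
#include <dlfcn.h>

/* hand-written minimal declarations normally provided by sqlite3.h */
typedef struct sqlite3 sqlite3;
typedef int (*sqlite3_open_fn)(const char *, sqlite3 **);
typedef int (*sqlite3_exec_fn)(sqlite3 *, const char *,
                               int (*)(void *, int, char **, char **),
                               void *, char **);
typedef int (*sqlite3_close_fn)(sqlite3 *);

int main(void)
{
    void *lib = dlopen("/usr/lib64/libsqlite3.so.0", RTLD_LAZY);
    if (!lib) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    sqlite3_open_fn  p_open  = (sqlite3_open_fn) dlsym(lib, "sqlite3_open");
    sqlite3_exec_fn  p_exec  = (sqlite3_exec_fn) dlsym(lib, "sqlite3_exec");
    sqlite3_close_fn p_close = (sqlite3_close_fn)dlsym(lib, "sqlite3_close");
    if (!p_open || !p_exec || !p_close) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        return 1;
    }

    sqlite3 *db = NULL;
    if (p_open("mydb.db", &db) == 0) {   /* 0 is SQLITE_OK */
        /* a real program would also look up sqlite3_free to release the
           error string that can be passed back through the last argument */
        p_exec(db, "INSERT INTO t VALUES(1, 2);", NULL, NULL, NULL);
        p_close(db);
    }
    dlclose(lib);
    return 0;
}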

Download the devel package and use it from your project directory. You only need it for the compilation.
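For example (package name and paths are illustrative, not taken from your system), you could unpack the devel RPM inside your project instead of installing it, and link directly against the library that locate already found:

rpm2cpio sqlite-devel-*.rpm | cpio -idmv      # unpacks ./usr/include/sqlite3.h locally
gcc loader.c -I./usr/include /usr/lib64/libsqlite3.so.0 -o loader

Passing the .so.0 file by full path avoids needing the unversioned libsqlite3.so symlink that the devel package would normally provide.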

Related

Replacing a word in a db2 sql file causes "DSNC105I : End of file reached while reading the command" error

I have a dynamic SQL file in which the value of TBCREATOR changes according to a parameter.
I use a simple Python script to change TBCREATOR=<variable here> and write the result to an output SQL file.
Calling this file using db2 -td# -vf <generated sql file> gives
DSNC105I : End of file reached while reading the command
Here is the file in which I need the TBCREATOR value replaced:
CONNECT to 204.90.115.200:5040/DALLASC user *** using ****#
select REMARKS from sysibm.SYSCOLUMNS WHERE TBCREATOR='table' AND NAME='LCODE'
#
Here is the python script:
#!/usr/bin/python3
# ------ replace the 'table' placeholder with the schema name
# print(list_of_lines)
fin = open("decrypt.sql", "rt")
# output file to write the result to
fout = open("decryptout.sql", "wt")
for line in fin:
    fout.write(line.replace('table', 'ZXP214'))
fin.close()
fout.close()
After decryptout.sql is generated I call it using db2 -td# -vf decryptout.sql
and get the error given above.
What's irritating is that I have another SQL file that contains exactly the same data as decryptout.sql and runs smoothly with the db2 -td# -vf ... command. I tried the unix command cmp to compare the generated file with the one I wrote by hand (with ZXP214 already substituted for the variable), and there are no differences. What is causing this error?
here is the file (that executes without error) I compare generated output with:
CONNECT to 204.90.115.200:5040/DALLASC user *** using ****#
select REMARKS from sysibm.SYSCOLUMNS WHERE TBCREATOR='ZXP214' AND NAME='LCODE'
#
I found that specifically on the https://ibmzxplore.influitive.com/ challenge, if you are using the java db2 command and working in the Zowe USS system (Unix System Services of zOS), there is a conflict of character sets. I believe the system will generally create files in EBCDIC format, whereas if you do
echo "CONNECT ..." > syscat.clp
the resulting file will be tagged as ISO8859-1 and will not be processed properly by db2. Instead, go to the USS interface and choose "create file", give it a folder and a name, and it will create the file untagged. You can use
ls -T
to see the tags. Then edit the file to give it the commands you need, and db2 will interoperate with it properly. Because you are creating the file with python, you may be running into similar issues. When you open the new file, use something like
open(input_file_name, mode="w", encoding="cp1047")
This makes sure the file is written as an EBCDIC file.
If you are using the Db2-LUW CLP (command line processor) that is written in C/C++ and runs on Windows/Linux/Unix, then your syntax for CONNECT is not valid.
Unfortunately your question is ambiguously tagged, so we cannot tell which Db2-server platform you actually use.
For Db2-LUW with the classic C/C++ db2 command, the syntax for a type-1 CONNECT statement does not allow a connection string (or partial connection string) as shown in your question. For the Db2-LUW db2 CLP, the target database must be defined externally (i.e. not inside the script), either via the legacy pair of catalog tcpip node ... and catalog database ... actions, or in the db2dsdriver.cfg configuration file as plain XML.
If you want to use connection-strings then you can use the clpplus tool which is available for some Db2-LUW client packages, and is present on currently supported Db2-LUW servers. This lets you use Oracle style scripting with Db2. Refer to the online documentation for details.
If you are not using the classic C/C++ db2 command, and are instead using the emulated CLP written in Java that is only available with z/OS USS, then you must open a ticket with IBM support for that component, as that is not a matter for Stack Overflow.

Efficient way to find if a given string is _not_ listed in (sqlite3) table

I have a Db table listing media files which have been archived to LTO (4.3 million of them). The ongoing archiving process is manual, carried out by different people as and when downtime arises. We need an efficient way of determining which files in a folder are not archived so we can complete the job if needed, or confidently delete the folder if it's all archived.
(For the sake of argument let's assume all filenames are unique, we do need to handle duplicates but that's not this question.)
I should probably just fire up Perl/Python/Ruby and talk to the Db thru them. But it would take me quite a while to get back up to speed in those and I have a nagging feeling that it would be overkill.
I can think of two simpler approaches, but each has drawbacks and I wonder if there's a yet better way?
Method 1: simply bash-recurse down each directory structure, invoking sqlite3 per file and outputting the filename if the query returns an empty result.
This is probably less efficient than
Method 2: recurse through the directory structure and produce an sql file which will:
create a table with all our on-disk files in it (let's call it the "working table")
compare that with the archive table - select all files in the working table but not in the archive table
destroy the working table, or quit without saving
While method 2 seems likely to be more efficient than method 1, building the comparison table in the first place might incur some overhead, and I had rather imagined the backup table as a monolithic read-only thing that people refer to and don't write into.
Is there any way in pure SQL to just output a list of not-founds (without them existing in another table)?
Finding values not in some other table is easy:
SELECT *
FROM SomeTable
WHERE File NOT IN (SELECT File
                   FROM OtherTable);
To create the other table, you can write a series of INSERT statements, or just use the .import command of the shell from a plain text file.
A temporary table will not be saved.
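For example, something along these lines in the sqlite3 shell (files.txt, disk_files and the column name are placeholders; Backup_Table and filename follow the naming used in the answer below):

CREATE TEMP TABLE disk_files(filename TEXT);
.import files.txt disk_files
SELECT filename FROM disk_files
WHERE filename NOT IN (SELECT filename FROM Backup_Table);

The TEMP table disappears when the session ends, so nothing is ever written into the archive table itself.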
Sooo, I think I have to answer my own question.
tl;dr - use a scripting language (the thing I was hoping to avoid)
Trying that and the other two approaches (details below) on my system yields the following numbers when checking a 33-file directory structure against the 4.3 million record Db:
A Ruby script: 0.27s
Bash running sqlite3 once per file ("Method 1"): 0.73s
SQL making a temp table and using "NOT IN" (Method 2): 8s
The surprising thing for me is that the all-SQL approach is an order of magnitude slower than Bash. This was true using both the macOS (10.12) command-line sqlite3 and the GUI "DB Browser for SQLite".
The details
Script method
This is the crux of my Ruby script. Ruby of course is not the fastest language out there and you could probably do better than this (but if you really need speed, it might be time for C)
require "sqlite3"
db = SQLite3::Database.open 'path/to/mydb.db'
# This will skip Posix hidden files, which is fine by me
Dir.glob("search_path/**/*") do |f|
file = File.stat(f)
next unless file.file?
short_name = File.basename(f)
qouted_short_name = short_name.gsub("'", "''")
size = File.size(f)
sql_cmd = "select * from 'Backup_Table' where filename='#{qouted_short_name}' and sizeinbytesincrsrc=#{size}"
count = db.execute(sql_cmd).length
if count == 0
puts "UNARCHIVED: #{f}"
end
end
(Note the next two are Not The Answer, but I'll include them if anyone wants to check my methodology)
Bash
This is a crude Bash recurse-through-files which will print a list of files that are backed up (not what I want, but gives me an idea of speed):
#! /bin/bash
recurse() {
  for file in *; do
    if [ -d "${file}" ]; then
      thiswd=`pwd`
      (cd "${file}" && recurse)
      cd "${thiswd}"
    elif [ -f "${file}" ]; then
      fullpath=`pwd`/${file}
      filesize=`stat -f%z "${file}"`
      sqlite3 /path/to/mydb.db "select filename from 'Backup_Table' where filename='$file'"
    fi
  done
}
cd "$1" && recurse
SQL
CL has detailed method 2 nicely in his/her answer

Unload statement

I am new to programming and I am running into an issue. I am querying a table and need to put my results into a CSV file at a certain path.
This is what I am doing and the error I get.
dbuser#cbos1:/var/lib/dbspace/bosarc/testing/Abe_Lincoln> cd dbaccess labor32<<?
> UNLOAD TO '/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7'
> select * from informix.position;
> ?
-bash: cd: dbaccess: No such file or directory
dbuser#cbos1:/var/lib/dbspace/bosarc/testing/Abe_Lincoln>
The file path exists, but I keep getting this message.
With $ representing the command-line prompt, you should be using just:
$ dbaccess labor32 <<?
> UNLOAD TO '/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7'
> select * from informix.position;
> ?
…message(s) from dbaccess
$
This will run the dbaccess program (usually from $INFORMIXDIR/bin) against the database labor32, and generate an UNLOAD format file in the given file name.
The cd command is for changing directory; you don't have a directory called dbaccess (and probably shouldn't), and even if you did have such a directory, you shouldn't provide more options to the cd command, or a here document as standard input — it will ignore them.
Note that the file generated (Position7 will be the base name of the file) will be in Informix's UNLOAD format (pipe delimited fields by default), not CSV. It's certainly possible to convert between the two; I have Perl scripts that can do the conversions — last modified about a decade ago, but not much has changed in the interim. You could also consider using SQLCMD (available as open source from the IIUG Software Repository) which does have support for CSV load and unload formats. (This is the original SQLCMD — or at least an original SQLCMD — and is not Microsoft's Johnny-come-lately program of the same name.)
Create a file unload-table.sh containing:
#!/bin/sh
dbaccess labor32 <<EOF
UNLOAD TO '/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7'
SELECT * FROM informix.position;
EOF
You can then run this as bash unload-table.sh, or make it executable and install it in your $HOME/bin directory (which is on your PATH, isn't it?) so that you can simply run unload-table.sh. Or you can arrange to 'compile' (copy) the file to unload-table (no .sh suffix) so you don't have to type the suffix to execute it: unload-table. You can enhance the script to allow the program (dbaccess), database (labor32), table (informix.position) and file (/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7) to be set as command-line arguments or via environment variables. That requires a bit of fiddling in the script, but nothing outrageous. I'd probably allow the file name to be specified separately from the directory where the file is to be stored, so that it is easier to configure on the command line.

How to split SQL in Mac OS X?

Is there any app for the Mac to split SQL files, or even a script?
I have a large file which I have to upload to hosting that doesn't support files over 8 MB.
*I don't have SSH access
You can use this: http://www.ozerov.de/bigdump/
Or
Use this command to split the sql file
split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart-
The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the next part is the path you want to save the output to. Files will be saved as whatever filename you specify (e.g. “dbpart-”) with an alphabetical letter combination appended.
Now you should be able to import your files one at a time through phpMyAdmin without issue.
More info http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/
This tool should do the trick: MySQLDumpSplitter
It's free and open source.
Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax.
Full disclosure: I am a shareholder of the company that hosts this program.
The UploadDir feature in phpMyAdmin could help you, if you have FTP access and can modify your phpMyAdmin's configuration (or are allowed to install your own instance of phpMyAdmin).
http://docs.phpmyadmin.net/en/latest/config.html?highlight=uploaddir#cfg_UploadDir
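For reference, enabling it is a one-line change in phpMyAdmin's config.inc.php (the directory name here is just an example); files placed in that directory, e.g. via FTP, then appear in a drop-down on the Import page:

$cfg['UploadDir'] = 'upload';   // a directory readable by the web server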
You can split into working SQL statements with:
csplit -s -f db-part db.sql "/^# Dump of table/" "{99}"
Which makes up to 99 files named 'db-part[n]' from db.sql
You can use "CREATE TABLE" or "INSERT INTO" instead of "# Dump of ..."
Also: Avoid installing any programs or uploading your data into any online service. You don't know what will be done with your information!

configure_file works wrong for sqlite3 database

Here is what I am doing:
#configure_file(${CMAKE_CURRENT_SOURCE_DIR}/xdb.db3 ${complex_BINARY_DIR}/) <-- works wrong
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/armd.conf ${complex_BINARY_DIR}/)
Both files are copied there properly, but when I try to use the copied xdb.db3, my program and the SQLite editor say "xdb.db3 is not sqlite database or encrypted".
How should I copy the SQLite database, and why can't I do it with configure_file?
Try adding the COPYONLY flag to configure_file. By default, configure_file substitutes @VAR@ and ${VAR} references in the input file, which mangles binary content such as an SQLite database; COPYONLY copies the file without any substitution.
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/xdb.db3 ${complex_BINARY_DIR} COPYONLY)
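If you prefer to make the intent explicit, a byte-for-byte copy can also be done with file(COPY ...); a small sketch using the same paths as above:

file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/xdb.db3 DESTINATION ${complex_BINARY_DIR})

file(COPY) never performs substitution and preserves the file's contents and timestamps, so it is safe for binary files as well.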